Notes on semi-Thue Systems in a Context of Morphogrammatics

Further explanations of the formal notions behind morphic cellular automata

Rudolf Kaehr, Dr. phil.

Copyright ThinkArt Lab ISSN 2041-4358



The main differences between symbolic and morphic formal systems: instead of the Kleene star for the symbolic universes, morphic universes are generated by the Stirling cross. Symbolic substitution and concatenation preserve production concatenation. Morphic substitution/concatenation opens up a system of interactive complexions of derivations. A comparison of substitution-based production systems (Thue, Post, Markov) with Hausser’s systems of “possible continuations” of Left-Associative languages is sketched.

1.  Semi-Thue Systems

1.1.  Production systems

1.1.1.  Deconstruction remarks

“It is a gross simplification to view languages as sets of strings.
The idea that they can be defined by means of formal processes did not become apparent until the 1930s. The idea of formalizing rules for transforming strings was first formulated by
Axel Thue (1914). The observation that languages (in his case formal languages) could be seen as generated from semi Thue systems, is due to Emil Post. Also, he has invented independently what is now known as the Turing machine and has shown that this machine does nothing but string transformations. [...] The idea was picked up by Noam Chomsky and he defined the hierarchy which is now named after him (see for example (Chomsky, 1959), but the ideas have been circulating earlier)." Marcus Kracht  2003, The Mathematics of Language, Rewriting Systems, p. 53

"In formal language theory, languages are sets of strings over some alphabet. We assume throughout that an alphabet is a finite, nonempty set, usually called A. It has no further structure, it only defines the material of primitive letters.” (ibid., p. 16)

A deconstruction of “sign”, “string” and “set” is necessary to understand morphogrammatics and morphogrammatic semi-Thue systems, morphic finite state machines and morphic cellular automata as introduced in recent papers. A further deconstruction has to go into the topics of finiteness and infiniteness of alphabets and strings. It also has to be seen that the term “sign” is understood as a purely syntactical mark, letter or character and is not involved in any serious semiotical distinctions.

The kernel of formal language considerations is the monoid, M = (M, o, 1) and the Kleene (star) production A*.

A monoid is a triple M = (M, o, 1) where “o” is a binary operation on M and 1 an element such that for all x, y, z ∈ M
the following holds:
x o 1 = x                    (right unit)
1 o x = x                     (left unit)
(x o y) o z = x o (y o z)   (associativity).
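These three laws can be checked mechanically for the paradigm case of formal language theory, the free monoid of strings under concatenation with the empty string as unit. A minimal Python sketch (the helper name is mine):

```python
# A minimal sketch: strings under concatenation form a monoid,
# with the empty string "" as the unit element 1.
def check_monoid_laws(xs):
    """Check the unit and associativity laws on sample elements xs."""
    unit = ""
    for x in xs:
        assert x + unit == x                         # x o 1 = x (right unit)
        assert unit + x == x                         # 1 o x = x (left unit)
    for x in xs:
        for y in xs:
            for z in xs:
                assert (x + y) + z == x + (y + z)    # associativity
    return True

print(check_monoid_laws(["a", "ab", "abc"]))  # True
```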

Hence, a deconstruction of a monoid M has, firstly, to deconstruct the binary operation (composition) “o” and then, secondly, more or less as a consequence of that, to deconstruct the elements of M.

A deconstruction of the concept of set-theoretical elements has led to the introduction of a new ‘data-type’, the kenogrammatic and morphogrammatic data patterns used in kenomic and morphic cellular automata constructions.

Criticism: Just an abstraction more?
At a first glance it seems that such a deconstruction which leads from the Kleene product to a Stirling distribution might simply be an abstraction over the set of values producing an equivalence class as it is well known. Hence, Stirling K* = Σ*/eq. There are some academic publications insisting on such profound insight. Furthermore it is trivial to conclude that the same abstraction holds for the introduction of kenomic cellular automata: kenoCA = ECA/eq. In such a view the kenomic rules are just an abstraction of the CA rules.  If we consider the situation for ECAtypeset structure with the complete rule set 23= 8 and a complete rule range of 2typeset structure and the corresponding kenoCAtypeset structurewith an incomplete rule set of StirlingSn2(3, 2) = 4 and an incomplete rule range of StirlingSn2(23, 2) = 128, then results look quite trivially as an abstraction from 23 to 2typeset structure= 4 and 2typeset structure to 2typeset structure/2 = 128. Unfortunately, the complete rule set for the elementary kenoCAtypeset structure is StirlingSn2(4, 4) = 15 and not 8. As a consequence of this asymmetry between complete rule sets, different kinds of rules, methods and features are surpassing the classical definitions of CAs without the Stirling approach those new constellations wouldn’t be accessible. •

There is not much chance to achieve such a transformation of the concept and functioning of an elementary operation like the composition "o” in a monoid.

Nevertheless, there are some still recent but well elaborated and tested approaches to be recognized. The diamondization of composition has been demonstrated in my papers towards a Diamond Category Theory.

With “ x o 1 = x”  and “1 o x = x”  it follows that the equation “x o 1 = x = 1 o x”  holds. This is not surprising and has its rock solid foundations in first-order logic and category theory and their epistemologies.

Does it hold for morphogrammatics? Obviously not! The equation might be interpreted as the equality of right- and left-oriented self-identity of the object “x” of a morphism.

(X o 1) | x = X | x            (right unit)
(1 o X) | x = X | x            (left unit)

f_X o f_id => ((X ∈ Iter)/(X ∈ Accr))

Hence, even the simplest presumption, namely that X = X, has to be deconstructed.
As a consequence, the obvious symmetry of A = B iff B = A is not obvious anymore.

A deconstruction of associativity of composition follows, at first, quite automatically:
The context-independent associativity "(x o y) o z = x o (y o z)" becomes the contextualized associativity
"(X o Y)|(x; y) o Z | z = X | x o (Y o Z)|(y; z)".

Diamondization of associativity of composition:
(∀ x, y, z : (x o y) o z = x o (y o z)) becomes
((X o Y) | (x ; y) o Z | z) | u  =_DIAM  (X | x o (Y o Z) | (y ; z)) | u.

This has consequences for any introductory rule like R0: --> X.
There is no simple beginning in a diamond world. Setting a beginning is always multiple, at least double: a beginning as an iterative or an accretive beginning. The act of beginning happens in a context of a beginning and has its own notion in a calculus of beginnings.

Hence, R0: --> X becomes diamond R0: --> X | x.

Therefore, the statement of a beginning kenogram [kg] of kenogrammatic sequences in a trito-universe TU, as in TU = ([1], Tsucc) of a recursive formula, is just a beginning of the process of deconstruction of the notions and terms of keno- and morphogrammatics and not an end at all.

Nevertheless, diamond-theoretic thematizations had been, more or less, omitted in the proposals on kenomic and morphic cellular automata, finite-state machines and semi-Thue systems.

Here, just the Stirling effect is in focus, which is affecting the rules of the morphogrammatic game of semi-Thue system and cellular automaton deconstruction.

Hence, the universe of trito-structural kenogram sequences, kgs, TU, remains defined without its diamond environment as
TU = [[1], Tsucc], with x + 0 = 0 + x = x.

Further deconstructions of the concept of ‘beginnings’ in formal systems at:
“Quadralectic Diamonds: Four-foldness of beginnings”.

1.1.2.  Langton’s rules for simple linear growth

A classical example of a production system is introduced by Langton‘s L-system.

"Here is an example of the simplest kind of L-system. The rules are context free, meaning that the context in which a particular part is situated is not considered when altering it. There must be only one rule per part if the system is to be deterministic.

The rules: (the “recursive description” of a GTYPE)
1) A --> CB
2) B --> A
3) C --> DA
4) D --> C

When applied to the initial seed structure “A,” the following structural history develops (each successive line is a successive time step):
          time   structure    rules applied (L to R)

            1.       A            : start
            2.      CB           : rule1 on 1.
            3.    DA   A        : rule3 on 2. C, rule2 on 2. B
                 ↙ ↓    ↘        
            4. C CB    CB     : rule4 on 3. D, rule1 on 3. A, rule1 on 3. A”

Christopher Langton, Artificial Life, 1989, p. 26
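Langton’s GTYPE rules quoted above are directly executable. A minimal Python sketch of the parallel, context-free rewriting:

```python
# Langton's context-free L-system: every symbol is rewritten in
# parallel at each time step, one deterministic rule per symbol.
rules = {"A": "CB", "B": "A", "C": "DA", "D": "C"}

def step(s):
    """Apply all rules simultaneously to the structure s."""
    return "".join(rules[ch] for ch in s)

history = ["A"]          # initial seed structure
for _ in range(3):
    history.append(step(history[-1]))
print(history)  # ['A', 'CB', 'DAA', 'CCBCB']
```

The successive lines 'A', 'CB', 'DAA', 'CCBCB' reproduce the structural history shown in the quoted table.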


Atomic elements are substituted by unary and binary elements. Binary elements are seen as a concatenation of 2 identical unary elements. Because of this atomism or elementarism a kenomic abstraction is empty, i.e. all unary elements are kenomically equivalent.
Because rewriting systems are substitutional systems, the point of substitution in this case is atomistic.

kenoCA(3,2) rule set

Example for kenoCA rules of the form: [axb] --> y
rule1: [AAA] --> A   
rule7: [AAB] --> B    
rule8: [ABA] --> B    
rule4: [ABB] --> A    


1.1.3.  Introducing kenogrammatic rules

Technical alphabet, standard normal form of kenograms: {A, B, C}.
Rules: rule1, rule2, rule3.

One trito-equivalence of the calculus, not applicable to the technical, meta-linguistic alphabet:
A =KG C:

PKG = [mode=KG; {A, B, C}, rule1, rule2, rule3]
rule1:  --> A
rule2: A --> AB
rule3: AB --> C

A, AB, C, AB, C, AB, ...: chiastic interchange between A(2) and C(3).

The term A(1) is set as operator; C(3), as operand of the operator AB(3), becomes the operator A(2). Hence, A(2) is involved in the chiasm of operator and operand, playing both roles at once. Considering the roles of C, the same holds for a B in place of C.

Différance and memristivity
Strictly speaking, we encounter with this tiny PKG a situation where both the halt and the continuation of the production happen at once. The chiastic interplay of the situation “A” and the situation “C” is playing the différance of the difference of “A” and “C” and its deferral of the change from “C” to “A”. Jacques Derrida’s différance, which is neither a word nor a term, and is phonetically indistinguishable from “différence”, plays on the fact that the French word différer means both “to defer” and “to differ”.
"Différance as temporization, différance as spacing. How are they to be joined?” (J. Derrida)
Jacques Derrida, Différance.

This mechanism of a chiastic interchange between A and C invites us to interpret it as a memristive mechanism and probably as the smallest model of non-destructive self-referentiality in/of a formal system. The self-referentiality of the production scheme seems to be obvious. What isn’t obvious at first glance is its memristivity. Memristivity is involved with the chiasm between ‘operand’ C and ‘operator’ A. The property of re-entry and sameness has to be remembered during the substitution. In a classical setting, nothing of this kind has to be achieved, because it is presumed and installed from the ‘outside’ by an external designer/user of the rules that the re-entry ‘port’ is not missed and that the object has not changed in the process of substitution from one identity (A/C) to another identity (C/A).

For the kenomic calculus, the technical alphabet is built from distinctive letters, characters, elements; but inside the kenomic game and its rules, all occurrences of monadic elements are kenomically equal.

The substitution of C from rule3 into rule2 as A leaves the choice of a kenomic or a symbolic interpretation of the substitution. With a symbolic interpretation the calculus stops here because the application is refused. With a kenomic interpretation rule2 holds, and the game goes on. Hence, with A !=SEM C, the semiotic rule system terminates with C, and with A =KG C the production goes on with C =KG A.
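The choice between the symbolic and the kenomic interpretation can be sketched in Python with a trito-normal-form helper (`tnf`, my naming), which renames letters by order of first occurrence:

```python
# A sketch of the kenomic reading of rule application in PKG:
# single letters count as equal up to trito-equivalence (tnf),
# so the produced "C" may re-enter rule2 in place of "A".
def tnf(word):
    """Trito normal form: rename kenograms by order of first occurrence."""
    mapping, out = {}, []
    for ch in word:
        if ch not in mapping:
            mapping[ch] = "ABCDEFGH"[len(mapping)]
        out.append(mapping[ch])
    return "".join(out)

def keno_equal(u, v):
    return tnf(u) == tnf(v)

# symbolic interpretation: "C" is not "A", rule2 refuses, the calculus halts
assert "C" != "A"
# kenomic interpretation: [C] =KG [A], so rule2 (A --> AB) applies again
print(keno_equal("C", "A"))  # True
```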

Hence, the general decision problem gets confronted with an as yet unknown situation: a rewriting system properly having a state and at once not having that state in the calculus.

Consequences for the concepts and constructions of replication, cloning and self-production/production of a self (autopoiesis) demand, at first, a deconstruction of the underlying concepts of iterability in their concepts of recursion.

Decidability and non-decidability, therefore, is not focussed on the identification or non-identification of an object, i.e. a state, with decidable or non-decidable properties but on the interaction between applications inside and between formal systems.

Further examples
PID = [mode=ID; {A, B, C}, rule1, rule2, rule3]
rule1:  --> A
rule2: A --> AB
rule3: AB --> C

A, AB, C.

Production systems are based on substitution. Kenomic and morphic substitutions are context-dependent.

Chiastic rule applications are not to be confused with the identity systems with A(1) = A(2) in a (self-referential) circular rule system:

PID = [mode=ID; {A, B}, rule1, rule2]
Alphabet = {A, B}
Rules (Id):
rule1: A --> AB
rule2: AB --> A.

A, AB, A, AB, ... : non-terminating

Logic, Copies and DNA Replication
"In logic there is a level beyond the simple copying of symbols that contains a non-trivial description of self-replication. The schema is as follows: There is a universal building machine B that can accept a text or description x (the program) and build what the text describes. We let lowercase x denote the description and uppercase X denote that which is described. Thus B with x will build X. In fact, for bookkeeping purposes we also produce an extra copy of the text x. This is appended to the production X as X, x. Thus B, when supplied with a description x, produces that which x describes, with a copy of its description attached. Schematically we have the process shown below.
B, x --> B, x; X, x
Self-replication is an immediate consequence of this concept of a universal building machine. Let b denote the text or program for the universal building machine. Apply B to its own description.
B, b --> B, b; B, b
The universal building machine reproduces itself. Each copy is a universal building machine with its own description appended. Each copy will proceed to reproduce itself in an unending tree of duplications. In practice this duplication will continue until all available resources are used up, or until someone removes the programs or energy sources from the proliferating machines."

Louis H. Kauffman,  Biologic

Chiastic self-reference with 2 trito-equivalences: A =KG C and AB =KG BA

PKG = [mode=KG, {A, B, C}, rule1, rule2, rule3]
Alphabet = {A, B, C}
rule1: --> A
rule2: A --> AB
rule3 : BA --> C

(1): A, AB, A, AB, A, ...

(2): A, AB, C, AB, C, ...

Although they differ semiotically as resulting productions, both production chains are kenogrammatically equivalent: (1) =KG (2).

Polycontextural productions
Further interesting results are obtained by polycontextural production systems. In this context of polycontexturality it is obvious to state “The same is different”.

PPCL(3) = (P1 ∐ P2) ∐ P3 = [mode=PolyKG(3), {A, B, C}(3), (rule1, rule2, rule3)(3)], ∐: mediation
Alphabet = {A, B, C}(3):
Alph = {A, B, C}1     Alph = {A, B, C}2      Alph = {A, B, C}3
rule1.1 : --> A        rule1.2 : --> A         rule1.3 : --> A  
rule2.1 : A --> AB    rule2.2 : A --> AB    rule2.3 : A --> AB
rule3.1 : AB --> A    rule3.2 : AB --> A    rule3.3 : AB --> A

A1, AB1, A1= A2, (AB)2, A2 = A1, (AB)1, ...

The substitution process distributed between P1 and P2 might be reflected from the third position of P3, from which it is reasonable to state that there is no classical circular production with (A1 = A2 / A2 = A1) but a chiastic self-referentiality between the two mediated, contexturally different production systems P1 and P2, albeit both are using the “same” alphabets and the “same” rules.

The questions of termination of programs, calculations and productions, and the Halting problem, are one side of the classical constellations. The other side is that a non-terminating program has a different meaning in the new constellation. Computation as an interactive medium is not problem solving and is therefore ‘beyond’ the classical questions of termination and non-termination. Media of computation don’t have a start or an initial configuration, nor do they have a terminal goal. Non-termination in polycontextural and morphic systems is not the same as empty repetition, infinite loop or endless iteration in the classical framework.

This hint or metaphoric construction is not excluding the conservation of the classical situations and their results ‘inside’ the different contextures.

Hence, non-termination is not anymore a bad property of ‘algorithmic’ systems but the intrinsic character of inter-medial activity.

Some further entertainment from “Nick Haflinger”:
For a more philosophical intervention, go to “Nancy: Destruktion als Erinnerung der Struktion oder Techné”.
Or you might prefer: “Slickaphonics - Procrastination (Wow Bag - 1983)”.
The real thing? Peter Wegner, Interactive Computation.

1.2.  Semi-Thue Systems

1.2.1.  Definitions for semi-Thue systems

Following PlanetMath we get a helpful definition of a semi-Thue system.

"A semi-Thue system S is a pair (Σ, P) where Σ is an alphabet and P is a non-empty finite binary relation on Σ* , the Kleene star of Σ .

Elements of P are variously called defining relations, productions, or rewrite rules, and S itself is also known as a rewriting system. If (x, y)∈ P, we call x the antecedent, and y the consequent.
Instead of writing (x, y)∈ P or xPy , we usually write
                                                                      x --> y.

Let S = (Σ, P) be a semi-Thue system.
Given a word u over Σ, we say that a word v over Σ is immediately derivable from u if there is a defining relation  
x --> y such that
                    u = rxs and v = rys,
for some words r, s (which may be empty) over  Σ.

If v is immediately derivable from u, we write
                     u => v.
Let P’ be the set of all pairs (u, v) ∈ Σ* x Σ* such that u => v.
Then P ⊆ P’, and

If u => v, then wu => wv and uw => vw for any word w.”


"Let S be a semi-Thue system over the alphabet Σ = {a, b, c}, with the set of defining relations given by
P = {ab --> bc, bc --> cb}. Then the words ac^3b, a^2c^2b, as well as bc^4 are all derivable from a^2bc^2:

a^2bc^2 => a(bc)c^2 => ac(bc)c => ac^2(cb) = ac^3b,
a^2bc^2 => a^2(cb)c => a^2c(cb) = a^2c^2b, and
a^2bc^2 => a(bc)c^2 => (bc)cc^2 = bc^4.” (PlanetMath)

"Under S, we see that if v is derivable from u, then they have the same length: |u| = |v|. Furthermore, if we denote by |a|_u the number of occurrences of the letter a in a word u, then |a|_v <= |a|_u, |c|_v >= |c|_u, and |b|_v = |b|_u. Also, in order for a word u to have a non-trivial word v (non-trivial in the sense that u != v) derivable from it, u must have either ab or bc as a subword. Therefore, words like a^3 or c^3b^4a^2 have no non-trivial words derived from them.” (PlanetMath)
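The PlanetMath example can be checked mechanically. A Python sketch that computes immediate derivability (u = rxs, v = rys) and its reflexive-transitive closure (function names are mine):

```python
from collections import deque

# The semi-Thue system S over {a, b, c} with P = {ab -> bc, bc -> cb}.
P = [("ab", "bc"), ("bc", "cb")]

def successors(u):
    """All words v immediately derivable from u (u = rxs, v = rys)."""
    out = set()
    for x, y in P:
        i = u.find(x)
        while i != -1:
            out.add(u[:i] + y + u[i + len(x):])
            i = u.find(x, i + 1)
    return out

def derivable(u):
    """All words derivable from u in zero or more steps (BFS closure)."""
    seen, queue = {u}, deque([u])
    while queue:
        for v in successors(queue.popleft()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

D = derivable("aabcc")                   # a^2 b c^2
print({"acccb", "aaccb", "bcccc"} <= D)  # ac^3b, a^2c^2b, bc^4: True
```

Since both rules preserve length, the closure is finite and the BFS terminates; the length invariant |u| = |v| quoted above is visible in every element of `D`.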

1.2.2.  Discussion of the presuppositions

Each repetition of the rules rule1 = ab --> bc and rule2 = bc --> cb realizes an identification of the result (operand) of the substitution with the initial word of the applied rule in the mode of identity.

1. a^2bc^2 => a(bc)c^2     : by rule1 = ab --> bc
2. a(bc)c^2  => ac(bc)c    : by rule2 = bc --> cb

Rules: ab --> bc/bc --> cb

Rule1 recognizes, in the mode of identity, the left-most substring “ab” of the word and substitutes it at the place of its occurrence in that word with “bc”.
The second step recognizes “bc”, now as an antecedent for rule2, and replaces it at the place of its occurrence with the succedent of rule2, “cb”.

This procedure is highly obvious. We are used to it. And such an explicit description as I tried to give is quite superfluous, except if we want to explain the procedure to a robot or to an alien. At least I need it to understand my own aversion against the frozenness of the whole paradigm of mathematical formalization.

But this game is presuming several “intuitions” which are not obvious at all.

It is not necessarily obvious that the whole procedure is supported by the principle of identity. A reuse of a word or a string, like “bc” as the consequent of rule1 and as the antecedent of rule2, has to meet the matching conditions of iterability in the mode of identity. If the two occurrences of “bc” differ in the process of application, the substitution fails. Hence, the letters of the word are taken in a strictly atomistic and essentialistic sense to guarantee identity independently of their use and their role in the game. The use of the signs is not changing the signs.

Matching conditions
Matching conditions for the composition of the rules r1 and r2:

r1= ab --> bc, r2 = bc --> cb,
cod(r1) ≡ dom(r2) => r1 o r2

As shown in earlier papers, a reflection of the matching conditions is enabling the possibility of an antidromic formalization by saltatories. Saltatories are complementary constructions to categories.
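The matching condition can be made explicit by treating a rule as a pair (dom, cod). A minimal Python sketch (naming is mine):

```python
# A sketch of the matching condition for composing rewrite rules,
# reading a rule r = (dom, cod) as a morphism from antecedent to consequent.
r1 = ("ab", "bc")
r2 = ("bc", "cb")

def compose(r, s):
    """r o s is defined iff cod(r) coincides with dom(s)."""
    if r[1] != s[0]:
        raise ValueError("matching condition violated: cod(r) != dom(s)")
    return (r[0], s[1])

print(compose(r1, r2))  # ('ab', 'cb')
```

In the diamond reading sketched above, it is exactly this identification cod(r1) ≡ dom(r2) that gets its own, antidromic thematization by saltatories.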

Morphogrammatic turn
For the application of the rules there is no need to know the internal structure of the words u , v and w.

With the morphogrammatic turn things are getting slightly more dynamic. A new kind of interplay between identification and application opens up first chances to avoid the frozenness of operative formalisms.

The presumption of identity in the substitution process gets some interesting deliberation and generalization.
If the substitution rule holds for “any word” under identity, a first attempt to liberalize its application happens with the understanding of a word (w) as a kenomic word [w].
Hence, for w = (ab), any keno-word of the form (ab) is applicable: [w] = [ab] = [bc] = ... = [&, #]. Both (w) and [w] are of the same morphogrammatic structure and are of the same ‘length’.

A more radical generalization is achieved with the abstraction of bisimilarity: two words (morphograms) [w1], [w2] are equal iff they have the same behavior. Hence, the length of [w1] and [w2] no longer defines the sameness of morphograms.

In addition to identity and equality, some more kinds of thematizations enter the game of symbolic or ‘mathematical’ writing: equivalence, similarity, bisimilarity and different types of metamorphosis.

For definitions, theorems, methods and applications recalling the state-of-the-art approach to formal language theory, look at:
John M. Abela , ETS Learning of Kernel Languages, 2002's_Thesis.pdf

1.3.  Kenomic semi-Thue systems

1.3.1.  Semiotics

Σ* denotes the set of all finite strings of symbols from Σ. This statement of semiotics becomes in morphogrammatics:

Sn2(Σ*) denotes the universe of all finite tritograms of monomorphies from Sn2(Σ).

Sn2(Σ*) is the trito-universe of keno-sequences “ks”. (Morphogrammatik, p. 77)

Sets in the trito-universe of keno-sequences are not sets in a definitional sense; they might be called collections.
The objects (elements) of a collection are tritograms (ks-sequences) and are not defined by the set-theoretical rules of elements and ensembles (sets), which are based on the identity concepts of first-order logic.

Σ = {a, b, c}
Σ* = {a, b, c, aa, bb, cc, ab, ac, bc, aaa, bbb, ccc, ...}

StirlingSn2(Σ*) = {a, aa, ab, aaa, aab, aba, abb, abc, aaaa, ...}

nfirstq(n, seq):

- nfirstq (3, TU) = {a, aa, ab, aaa, aab, aba, abb, abc}

Kenosequence: length (states) and technical signs (a, b, c)

kseq (3):

{[aaa], [aab], [aba], [abb], [abc]}.


Sn2(3, 3) = 1 + 3 + 1 = 5
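The count Sn2(3, 3) = 5 can be confirmed by enumeration. A Python sketch that reduces all words of length 3 over {a, b, c} to their trito normal form (helper names are mine):

```python
from itertools import product

# Enumerate all 27 words of length 3 over {a, b, c}, reduce each to its
# trito normal form, and collect the distinct kenogram sequences.
def tnf(word):
    """Trito normal form: rename letters by order of first occurrence."""
    mapping, out = {}, []
    for ch in word:
        if ch not in mapping:
            mapping[ch] = "abcdefgh"[len(mapping)]
        out.append(mapping[ch])
    return "".join(out)

kseq3 = sorted({tnf("".join(w)) for w in product("abc", repeat=3)})
print(kseq3)       # ['aaa', 'aab', 'aba', 'abb', 'abc']
print(len(kseq3))  # 5 = Sn2(3, 3)
```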

An element of the alphabet is a sequence of length 1: [a].

[a] =KG [b]: all keno-sequences of length 1 are kenogrammatically equal.

Semiotically there are n distinct unary elements in an alphabet:

(a), (b) ∈ Alph: (a) !=SEM (b).

sum(Sn2(1, 1)) = 1: kenomically there is only one sequence of length 1.

Like semiotic sign sequences, which are defined by their length and their alphabet, kenogrammatic sequences kseq are defined by their positions and the realization of the positions, i.e. by the interaction of place and inscription.
Semiotic sequences are therefore counted by the Cartesian product |sign|^length, and keno-sequences are defined by their Stirling distribution (partition) of places and kenograms: Sn2(place, kenos).

A morphogrammatic turn which is focused on monomorphies instead of kenograms is changing the presupposition of equal length for the equivalence (sameness) of morphograms, too. Two morphograms are the same iff their behavior is not distinguishable. That is, two morphograms are bisimilar if they have equal behavior. The abstraction of bisimilarity takes the fact into account that there are different fundamental morphogrammatic operations and therefore an abstraction over the operators instead of the morphograms as ‘objects’ is applied.

Hence, for semiotics the star or Kleene closure is

Σ* = ⋃i Σ^i = Σ^1 ∪ Σ^2 ∪ Σ^3 ∪ ... ∪ Σ^n ∪ ...,

the kenogrammatic universe TU instead is defined by

TU = ([1], Tsucc).

Sn2(Σ, n)
nfirstq(n, seq)

nfirstq(3, seq):
[aaa], [aab], [aba], [abb], [abc].

monomorphic TU = ([1], monomorphy(Tsucc)).

morph(nfirstq(3, seq)):
{[aaa], [aa][b], [a][b][a], [a][bb], [a][b][c]}.

Contextual repetition of monomorphies.

A =MG B iff  EN(A) = EN(B)

Two words are kenogrammatically equivalent iff they have the identical EN-structure.
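The EN-abstraction can be sketched in Python as the pattern of equalities and non-equalities between all pairs of positions (helper names are mine):

```python
from itertools import combinations

# A sketch of the EN-abstraction: a word is reduced to its structure of
# equalities (E) and non-equalities (N) between all pairs of positions.
def EN(word):
    return [word[i] == word[j] for i, j in combinations(range(len(word)), 2)]

def mg_equal(a, b):
    # A =MG B iff EN(A) = EN(B); the concrete letters no longer matter
    return EN(a) == EN(b)

print(mg_equal("aab", "bba"))  # True: same equality pattern
print(mg_equal("aab", "aba"))  # False: equalities at different places
```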

1.3.2.  Tectonics

A calculus is defined by 2 alphabets and a set of rules (Paul Lorenzen, Haskell Curry):
1. the alphabet of signs
2. the alphabet of variables.

Two semiotic words are semiotically equal iff they are of the same length and all the occurrences in the words are equal (equiform) at the same places (positions) of the words (strings).
Concatenation of words and star product of words, the empty word.

An introduction of a kenogrammatic calculus is applying the EN-abstraction on the objects of the calculus, i.e. on the words of the sign-alphabet and not yet on the meta-objects, i.e. the alphabet of the variables.

The rules of the calculus are applied to the signs of the alphabet with the help of variables which are not elements of the set of signs.

Second-order rules
Rules for morphogrammatics are not fully defined by the concept of constant, i.e. elements from a pre-given alphabet, and variables over the alphabet.

In addition to the classic requisites of constant and variable, the rules have to be calculated, i.e. produced by rules on a different, possibly meta-, level. Hence, the notions of constant, variable and rule together determine the rules of a morphogrammatic calculus. Prolongation, continuation, concatenation etc. are governed by rules of rules; therefore the rules of morphogrammatic operations, like iteration and accretion, are second-order rules.

iteration:  MG --> MGx, x∈ AG(MG)
accretion: MG --> MGx, x∈ AG(MG)+1

AG(MG) is the operation (rule) to calculate the constants of the continuation operation (rule) applied to the encountered morphogram.

In fact, there are no first-order constants, like elements from an alphabet, in the game. Morphogrammatic ‘constants’ are calculated, i.e. produced, hence variables or second-order constants. In this sense, they are not constants but variables determined by the preceding kenograms of the morphogram. The first-order constants are the elements of a semiotic alphabet which is involved technically by supporting the notational systems of morphogrammatics.

Nor are there any variables as stable containers of previous productions in the game of kenomic calculi and algorithms.
An application of a rule depends 1. on its definition and 2. on the structure of the previous productions represented by the variables. Therefore, as explained before, a concatenation always depends on the ‘concatenat’ too. That is the crucial difference between atomistic and kenomic production rules.
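The second-order continuation rules above, iteration and accretion, can be sketched in Python with AG(MG) computed from the encountered morphogram itself (helper names are mine):

```python
# Hypothetical helpers sketching the second-order rules: the "alphabet"
# AG(MG) is not pre-given but computed from the encountered morphogram.
def AG(mg):
    """Kenograms already occurring in the morphogram MG."""
    return sorted(set(mg))

def iterate(mg):
    # iteration: MG --> MGx with x drawn from AG(MG)
    return [mg + x for x in AG(mg)]

def accrete(mg):
    # accretion: MG --> MGx with x exactly one new kenogram beyond AG(MG)
    new = "abcdefgh"[len(AG(mg))]
    return [mg + new]

print(iterate("ab"))  # ['aba', 'abb']
print(accrete("ab"))  # ['abc']
```

The retro-grade recursivity discussed below is visible here: what counts as a ‘constant’ for the continuation is itself the output of the rule AG applied to the preceding production.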

Stroke calculus (Lorenzen, 1950/60s)
Atom : {|}
Variable : {n}
Rules : {R0, R1}
R0: --> |
R1: n --> n|


Production: |, ||, |||, ||||, ....

Kenomic calculus (Kaehr 1970s)
Atom     : {[kg]}            :( = kenomic constant )
Variable : {[mg]}           :( = kenomic variable )
Rules     : {R0, R1}
R0:        --> [kg]
R1: [mg] --> [mg][kg]   :( = conckeno([mg], [kg]))


Atom     : {[a]}       
Variable : {[mg]}  
Rules     : {R0, R1}
R0:          --> [a]
R1.1:       mg = [a]  -->  conckeno(mg = [a], kg = [a])  = {[aa], [ab]}
R1.2:       mg = [aa] --> conckeno(mg = [aa], kg = [a]) = {[aaa], [aab]}
              mg = [ab] --> conckeno(mg = [ab], kg = [a]) = {[aba], [abb], [abc]}.

Atom     : {[a]}       
Variable : {[mg]}  
Rules     : {R0, R1}
R0 :         -->  [a]
R1:          [mg] --> [mg]^[a]:

R1.1:   [a]  -->  [a][a]      = {[a]^[a], [a]^[b]} = {[aa], [ab]}
R1.2:   [aa] --> [aa][a]     = {[aa]^[a], [aa]^[b]}  = {[aaa], [aab]}
          [ab] --> [ab][a]     = {[ab]^[a], [ab]^[b], [ab]^[c]} = {[aba], [abb], [abc]}.

Production (out of [a]): [a], [aa], [ab], [aaa], [aab], [aba], [abb], [abc]...

This example, again, clearly shows the dependence of the alphabet on the applications of the rules and, surely, the dependence of the rules on the generated alphabet. The classical definition is constructed over an alphabet Σ by a binary relation (x, y) ∈ Σ* x Σ*, while the kenomic case is constructed out of a ‘beginning’ [a], generating ‘words’ by binary rules of the Stirling universe
(x, y) ∈ (StirlingSn2(Σ, *) x StirlingSn2(Σ, *)) = (x, y) ∈ K* x K*.

For notational reasons we have to add to the start alphabet of the Stirling universe of a calculus a further alphabet, i.e. a technical sign repertoire of technical letters and characters like brackets, dots etc.
Because the definition of binary relations depends on a Cartesian product, and the kenomic ‘binary relation’ on a Stirling distribution, kenomic relations are in fact technically not binary relations at all.

Quite obviously, Lorenzen calculi are alphabet-stable: they are defined over a pre-given alphabet. Therefore, their tectonics is hierarchical. Kenomic calculi are alphabet-variable. The alphabet is part of the production, and the production depends solely on the preceding productions and not on an abstract application of rules over an alphabet. Hence, their tectonics is not hierarchical but heterarchical and defines a retro-grade recursivity. This little difference is in fundamental conflict with the main statement of computation (Gurevich): “The vocabulary does not change during that evolution.”

Stable base set vs. self-modifying media
"The choice of the vocabulary is dictated by the chosen abstraction level. In a proper formalization, the vocabulary reflects only truly invariant features of the algorithm rather than details of a particular state. In particular, the vocabulary does not change during the computation. One may think about a computation as an evolution of the initial state. The vocabulary does not change during that evolution.
Is it reasonable to insist that the vocabulary does not change during that evolution?

"There are also so-called self-modifying or “non-von-Neumann” algorithms which change their programs during the computation. For such an algorithm, the so-called program is just a part of the data. The real program changes that part of the data, and the real program does not change.

"While the base set can change from one initial state to another, it does not change during the computation. All states of a given run have the same base set. Is this plausible? There are, for example, graph algorithms which require new vertices to be added to the current graph. But where do the new vertices come from? We can formalize a piece of the outside world and stipulate that the initial state contains an infinite naked set, the reserve. The new vertices come from the reserve, and thus the base set does not change during the evolution.
Who does the job of getting elements from the reserve? [...]

"Formalizing this, we can use a special external function to fish out an element from the reserve. It is external in the sense that it is controlled by the environment.” (Gurevich)
Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms
ACM Transactions on Computational Logic, vol. 1, no. 1 (July 2000), pages 77-111.

Comparison between “Calculuses and Formal Systems” by Haskell B. Curry, Dialectica 47/48, pp. 249-271, 1958

A new challenge for polycontextural designs of formal languages, grammars, rewriting systems and calculi occurs with the chiastification of the object- and meta-system, i.e. the chiasm of objects (alphabets, signs, keno- and morphograms) and variables (schemes, frames, ...).

Same length morphograms
Encountered morphogram MG2 = [abbcdd]. How, and by which rules, can it be produced from the “axiom” MG1 = [aabaac]?
Both morphograms are correctly produced by the rules of the morphogrammatic system. Both are of the same length but differ in pattern, therefore they cannot be equivalent. Hence, there is no derivation in the morphogrammatic calculus from MG1 to MG2.

   [aa]  [ab]
         ↙  ↓
   [abba] [abbb] [abbc]
              ↙  ↓
   [abbcda] [abbcdd] [abbcdb] [abbcdc] [abbcde]

   [aa] [ab]
   [aaa] [aab]
       ↙     ↘
   [aaaa] [aaba] [aabb]

Both produced morphograms are of the same ‘length’; there is no evolving production rule that generates a ‘word’ of the same ‘length’ with a different pattern on the same path.
Therefore, rules of differentiation, called emanation, shall be introduced to transform morphograms of the same complexity and the same length into each other.

1.3.3.  Semi-Thue systems with morphograms

A keno-string rewriting system or keno-semi-Thue system is a tuple (Σ, R) where tnf(Σ) is an alphabet, usually assumed finite. The elements of the set Sn2(Σ *) (* is the Kleene star here, Sn2(Σ *) is the Stirling distribution) are finite (possibly empty) keno-strings on tnf(Σ), sometimes called keno-sequences or morphograms in formal writing systems; we will simply call them keno-strings here.

Two keno-strings A and B are equivalent iff EN(A) = EN(B).

A behavioral or actional approach is contemplating on the behavior of kenograms and not on the semio-ontological question of what is a kenogram.
Therefore a mix of different definitions of sign-use is possible: EN, TNF, SEMiotic, MONomorphy, etc.

Semiotic alphabet: ΣSEM = {a, b}
Kenomic words over the semiotic alphabet ΣSEM of length ≤ 3:
Sn2(Σ*, 3) = {a, aa, ab, aaa, aab, aba, abb}

Sn2(Σ*)≡ Κ*
Κ* is the trito-universe TU.
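The Stirling cross can be made concrete. The following Python fragment is a sketch added for illustration (the function name trito_words is hypothetical, not Kaehr's notation): it enumerates trito-normal-form words over a bounded kenogram supply and reproduces the seven kenomic words over ΣSEM listed above.

```python
# A sketch, not Kaehr's own notation: enumerate kenogram sequences in
# trito-normal form (tnf), i.e. kenograms named a, b, c, ... in order of
# first occurrence.
def trito_words(max_len, max_kenograms=None):
    """Enumerate tnf words of length 1..max_len over a bounded kenogram supply."""
    words = []

    def extend(w, used):
        if w:
            words.append(w)
        if len(w) == max_len:
            return
        for g in sorted(used):                 # iteration: reuse a kenogram
            extend(w + g, used)
        if max_kenograms is None or len(used) < max_kenograms:
            g = chr(ord("a") + len(used))      # accretion: one fresh kenogram
            extend(w + g, used | {g})

    extend("", set())
    return words

# Kenomic words over the two-letter alphabet, length <= 3:
print(sorted(trito_words(3, max_kenograms=2), key=len))
# -> ['a', 'aa', 'ab', 'aaa', 'aab', 'aba', 'abb']
```

With max_kenograms=None the same procedure enumerates the unrestricted trito-universe TU.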

Monomorphies of Κ*
Monomorphies in kenomic systems are a kind of analogue of a “sub-string” in a word or string production system.

For ΣSEM = {a, b}, length(Κ*) = 3:
Monomorphies of Κ*(3) are μ(Κ*, 3) = {[a], [aa], [aaa],[aa][b], [a][b][a], [a][bb]}.

Hence, μ(Κ*) = {[a], [aa], [aaa]}.
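A monomorphy, read as a maximal run of equal kenograms, can be computed directly. The sketch below follows the run-based segmentation of [abbcaa] into a|bb|c|aa used further down (the function name monomorphies is hypothetical).

```python
# A sketch: a monomorphy taken as a maximal run of equal kenograms,
# as in [abbcaa] = [a][bb][c][aa]. The name `monomorphies` is hypothetical.
def monomorphies(mg):
    """Split a morphogram into its monomorphic segments (maximal runs)."""
    runs = []
    for k in mg:
        if runs and runs[-1][0] == k:
            runs[-1] += k      # extend the current run
        else:
            runs.append(k)     # open a new run at the next locus
    return runs

print(monomorphies("abbcaa"))  # -> ['a', 'bb', 'c', 'aa']
```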

1.3.4.  Monomorphies

Monomorphic notation
Monomorphies in morphograms play a similar role as atomic signs in sign sequences.
The monomorphies of the morphogram MG = [abbcaa] are written in a table with the distinctions locus, monomorphy and kenogram as follows. Monomorphies are produced by the monomorphic decomposition Dec of the morphogram MG: Dec(MG) = (mg1, mg2, mg3, mg1) with the kenograms {a, b, c} for mg1 = [a], mg2 = [bb], mg3 = [c].

locus:        1     2     3     1
monomorphy:   [a]   [bb]  [c]   [aa]
kenogram:     a     b     c     a

Systematics of morphograms
[diagram not reproducible]

Decomposition of morphograms into monomorphies (Gunther)
Given a morphogram [aabc] how to decompose it into its monomorphies?

1.                          [a]
2.                     [aa]  [ab]
3.        [aaa] [aab] [aba] [abb] [abc]
4.     [aaaa] [aaab] .... [aabc] ...[abcd]

Dec([aabc]) = ([aab]; [aa], [ab]; [a])

[aabc] -->  [aab]|[a] --> [aa]|[ab] --> [a].

Production of [aabc]: [a] -->iter [aa] -->accr [aab] -->accr [aabc].
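The two production steps, iteration and accretion, admit a compact sketch: a word is continued either by one of its already-used kenograms or by exactly one fresh kenogram (the function name successors is hypothetical).

```python
# A sketch of one production step in trito-normal form: iteration repeats a
# kenogram already in use, accretion adds exactly one fresh kenogram.
# The name `successors` is hypothetical.
def successors(w):
    used = sorted(set(w))
    fresh = chr(ord("a") + len(used))      # next unused kenogram in tnf order
    return [w + g for g in used] + [w + fresh]

print(successors("a"))    # -> ['aa', 'ab']
print(successors("aab"))  # -> ['aaba', 'aabb', 'aabc']
```

Chaining these steps reproduces the derivation [a] -->iter [aa] -->accr [aab] -->accr [aabc].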

In contrast to the semiotic case, composition and decomposition of morphograms are not symmetric.

The decomposition Dec([aabc]) is “over-complete” according to Gunther’s classification of complexity into incomplete (I), complete (C) and over-complete (O), because it decomposes into two monomorphies of the same length, [aa] and [ab]. Nevertheless, the morphogram [ab] decomposes finally into the monomorphy [a]. Hence, the remaining basic monomorphies of [aabc] are [a] and [aa]. The monomorphy [aa] is not decomposable into smaller monomorphies, say [a], because it is a morphogram without differentiation, which would be necessary for a decomposition of the morphogram.

Decomposition for all morphograms MG of length 4
Dec([aaaa]) = [aaaa]
Dec([aaab]) = [aaa], [a]
Dec([aaba]) = [aa]|[ab], [a]
Dec([aabb]) = [aa]
Dec([aabc]) = [aab], [aa]|[ab], [a]
Dec([abaa]) = [ab]|[aa], [a]
Dec([abab]) = [aba], [ab], [a]
Dec([abac]) = [aba], [ab], [a]
Dec([abba]) = [abb], [aa], [ab], [a]
Dec([abbb]) = [a], [aaa]
Dec([abbc]) = [a], [aa], [ab], [a]
Dec([abca]) = [ab], [aa]|[ab], [a]
Dec([abcb]) = [aab], [aa]|[ab], [a]
Dec([abcd]) = [abc], [ab], [a].

Decomposition of [abbcaa]
Dec([abbcaa]) = ([abbc], [aab]|[abb], [aa]|[ab], [a]).

Totals Σk Sn2(n, k) (the Bell numbers), n = 1 to 6
1.                     [a]                            :   1
2.                [aa]   [ab]                         :   2
3.       [aaa] [aab] [aba] [abb] [abc]                :   5
4.     [aaaa] [aaab] ... [aabc] ... [abcd]            :  15
5.   [aaaaa] [aaaab] ............. [abcde]            :  52
6.  [aaaaaa] ... [abbcaa] ... [abcdef]                : 203
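The column of totals can be checked mechanically: the totals 1, 2, 5, 15, 52, 203 are the sums over k of the Stirling numbers Sn2(n, k), i.e. the Bell numbers. A sketch (count_trito is a hypothetical name):

```python
# A sketch (count_trito is a hypothetical name): count trito-normal-form
# words of length n. Each further position either iterates one of the `used`
# kenograms or accretes a single fresh one.
def count_trito(n):
    def grow(length, used):
        if length == n:
            return 1
        return used * grow(length + 1, used) + grow(length + 1, used + 1)
    return grow(1, 1)   # the length-1 word is always [a]

print([count_trito(n) for n in range(1, 7)])  # -> [1, 2, 5, 15, 52, 203]
```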

1.3.5.  Morphogrammatic rewriting rules

P ⊆ Sn2(Κ*, Κ*);
(u, v) ∈ P is written u => v.
P is closed under contexts to P’ ⊇ P:
if u => v, then uw => vw and wu => wv for any monomorphy w.

x => y iff there exists a context C and (u, v) ∈ P such that x = C(u) and y = C(v).

"Definition 1.34 A context is a pair C = <y, z>  of strings.
The substitution of x into C, in symbols C(x), is defined to be the string y^x^z.
We say that x occurs in v in the context C if v = C(x).
Every occurrence of x in a string v is uniquely defined by its context.
We call C a substring occurrence of x in v.” (Kracht)

Contexts and contextures
Cid(x), Cequi(x), Csim(x), Cbisim(x), Cmorph(x).

(aa) => (aaa), then (aa)w1 => (aaa)w2 and w3(aa) => w4(aaa); Cid(wi)
with wi =SEM wj, i, j = 1, 2, 3, 4.

1.3.6.  The Fibonacci word exercise

”Fibonacci words are easily defined by iterating a morphism. In fact, the Fibonacci morphism is among the absolute simplest (more precisely shortest) conceivable morphism: discard the one letter alphabet, and try to define a non trivial short morphism on two letters. It suffices, for this, that the image of one letter has length two, and you already get Fibonacci’s morphism.” (Jean Berstel, Fibonacci Words - A survey, in: G. Rozenberg, A. Salomaa, The Book of L, 1985, pp. 13 - 27)

Production of Fibonacci words
A = {a, b}
e: A* --> A*, where A* is the Kleene closure of A (the set of finite words over A) and e is a morphism from A* to A*.
e(a) = ab
e(b) = a

Iteration of this morphism defines the Fibonacci words.
f0 = a, f1 = ab
fn+2 = fn+1 fn

f0 = a
f1 = ab        
f2 = aba                     1.0
f3 = abaab                   2.1
f4 = abaababa                3.2
f5 = abaababaabaab           4.3
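Berstel's morphism e(a) = ab, e(b) = a can be iterated directly; the sketch below (fib_word is a hypothetical name) reproduces the words f0 to f4.

```python
# A sketch: iterate the Fibonacci morphism e(a) = ab, e(b) = a.
# The name `fib_word` is hypothetical.
def fib_word(n):
    w = "a"
    for _ in range(n):
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w

print([fib_word(n) for n in range(5)])
# -> ['a', 'ab', 'aba', 'abaab', 'abaababa']
```

Iterating the morphism agrees with the concatenation recursion fn+2 = fn+1 fn.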

Kenogrammatic analogon to Fibonacci words

A = {a, b}
{a, b} ⊆ A: (a) !=SEM (b)  
eKG: K*  --> K*,  K* is the Stirling distribution of kenogram sequences.
{a, b} ⊆ K* : [a] =KG [b]
eKG([a]) =KG [ab]
eKG([b]) =KG [a]

f0 = [a], f1 = [ab]
fn+2 = (fn+1 fn)KENO

Kenogrammatic composition with iteration and accretion.

f  _ (n + 2) = f  _ (n + 1)  ((f _ n^iter <br />    ∐)/f _ n^acc)

A = {a, b} is the alphabet of the kenomic Fibonacci example. Also atomic “signs” [...] the calculus, i.e. as kenograms of kenomically defined operations (recursion) fn.

Fibonacci derivations FIBKG(a, ab):
f0 = a
f1 = ab        
f2 = aba; abb; abc                                                          1.0
f3 =
aba’ab, aba’ba, aba’ac, aba’bc, aba’ca, aba’cb, aba’cd;     2.1
abb’ab, abb’ba, abb’ac, abb’bc, abb’ca, abb’cb, abb’cd;
abc’ab, abc’ba, abc’ac, abc’bc, abc’ca, abc’cb, abc’cd.              

1, 1, 3, 21, 85, ...
f2 = f1(f0) = (aba; abb; abc), i.e. the mediated parallelism of the three derivations.

Fibonacci derivations FIB(a, ab), restricted to two kenograms:
f0 = a
f1 = ab        
f2 = aba; abb               1.0
f3 = aba’ab, aba’ba,     2.1
      abb’ab, abb’ba,

1, 1, 2, 4, ...
f2 = f1(f0) = (aba; abb), i.e. the mediated parallelism of the two derivations.

1.3.7.  Inversion and equivalence

In a general morphogrammatic word algebra, the equality (equivalence, similarity, bisimilarity) of words has to be defined formally. A simple operator, the inversion of the order of a word, called the reflector (refl), offers some distinctions between words.

Inverse words produced by the Fibonacci word system might be compared.

For f1 = (ab) we get refl(f1) = (ba). The word (ba) is not a word of the semiotic Fibonacci word system.
A kenomic consideration shows that both words (ab) and (ba) are equivalent: (ab) =KG refl(ab).

This holds generally for all symmetrical kenomic words:

∀ H ∈ SYM: H =KG refl(H).

This nice property of kenomic word systems helps to cut the work in half, like duality in category theory is offering “two for one” (Herrlich).
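Kenomic equivalence under reflection can be tested via the trito-normal form: two words are kenogrammatically the same iff renaming their kenograms in order of first occurrence yields the same normal form. A Python sketch (tnf, keno_equal, refl are hypothetical names):

```python
# A sketch (tnf, keno_equal, refl are hypothetical names): kenomic sameness
# via the trito-normal form -- rename kenograms in order of first occurrence.
def tnf(w):
    names, out = {}, []
    for k in w:
        if k not in names:
            names[k] = chr(ord("a") + len(names))
        out.append(names[k])
    return "".join(out)

def keno_equal(u, v):
    return tnf(u) == tnf(v)

def refl(w):
    return w[::-1]

print(keno_equal("ab", refl("ab")))    # -> True:  [ba] =KG [ab]
print(keno_equal("aab", refl("aab")))  # -> False: [baa] =KG [abb], not [aab]
```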

2.  Finite State Automata

2.1.  Classical FSA

2.1.1.  Automaton and language

Finite State Machine
"We consider non-deterministic finite state machines with no accepting states, defined as follows.
A finite state machine (FSM) is a quadruple M = (Σ, Q, q0, δ), where Σ is the alphabet of input symbols, Q is the set of states, q0 is the initial state, and δ is the transition function, which maps Q × Σ to subsets of Q. If every δ(q, a) contains exactly one state, then M is deterministic.
In this case we may write δ(q, a) = q’ instead of δ(q, a) = {q' }."

Binary relation
A binary relation, denoted by ->, is any subset of the Cartesian product P × P .
For any binary relation -> ⊂ P × P :
domain( ->) =def { a | ∃b, (a, b) ∈ ->}, and
range( ->)   =def { b | ∃a, (a, b) ∈ ->}.

δ: Q × Σ --> Q,
rewriting rules: qi ak --> qj, where qi, qj are states and ak is an input symbol

The transition function δ : Q × Σ -> Q of a DFA can be extended to Q × Σ* as follows:
δ(q, ε) = q
δ(q, wa) = δ(δ(q, w), a).

L = { ω ∈ Σ* | q0ω =>* qfε, with qf ∈ F}
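The extended transition function and the accepted language can be sketched as follows (a minimal deterministic example, assuming a hypothetical two-state DFA over {a, b} that accepts words with an even number of a's):

```python
# A minimal sketch, assuming a hypothetical two-state DFA over {a, b} that
# accepts words with an even number of a's.
delta = {
    ("even", "a"): "odd",  ("even", "b"): "even",
    ("odd",  "a"): "even", ("odd",  "b"): "odd",
}

def delta_star(q, w):
    """Extended transition: d*(q, e) = q, d*(q, wa) = d(d*(q, w), a)."""
    for a in w:
        q = delta[(q, a)]
    return q

def accepts(w, q0="even", final=("even",)):
    return delta_star(q0, w) in final

print(accepts("abab"))  # -> True  (two a's)
print(accepts("a"))     # -> False (one a)
```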

2.2.  Kenomic FSA

2.2.1.  Explanations and motivations

This exercise focuses on the transition rule (function). The consequences for the concepts of the alphabet, the states and the initial state will be reflected later and will then be conceived as the pre-conditions of the new understanding of the transition function and of the concept of the kenogrammatic finite state machine (kenoFSM) as such.

Elementary cellular automata are collections of simple finite state machines.
In a similar sense, morphogrammatic cellular automata are interacting collections of elementary kenomic ‘finite state automata’. Each term, ‘finite’, 'state’, 'automata’, deserves a proper deconstruction.

In earlier approaches, the strategic order was inverse. The focus was on the intriguing situation of the ‘non'-alphabet character of kenogrammatics and its paradoxical consequences. The new approach plays with the fact of the Stirling character of the kenogram sequences and morphograms and with a standard representation of the ‘non'-representable alphabet and kenogrammatic sequences, i.e. the trito-normal form (tnf).

In other words, only the kind of usage of marks defines their role as semiotic, kenogrammatic or morphogrammatic in the graphematic game. Hence, marks in an alphabet play their semiotic role in the context of the alphabet. In a kenomic context, the marks of the alphabet play the roles of kenograms. Then, as a collection of kenograms, all elements of an alphabet are kenomically the same.

One of the most explicit analyses of an abstract theory of computation is given by Gurevich’s Abstract State Machines (ASM). This way of thinking was reflected in my “Skizze-0.9.5” from 2003. As with Konrad Zuse, computation is defined by Gurevich as a step-wise transition in time, guided by rules, from an initial to a terminal object, the result of the computation.
Obviously, the limits of this paradigm are clear: no interactivity. Computation is conceived as problem-solving and not as a medium of interacting processes, without beginning or end.

2.2.2.  Keno-Languages and -Automata

A deconstruction of FSMs and rewriting systems has to start with a deconstruction of the underlying basic concepts. One important basic concept is the binary relation.

A first deconstructive step would have to contextualize the relationality of the concept of binary relation, which is based on relational logic and the Wiener–Kuratowski definition of an ordered pair of elements.
A second step has to deconstruct the concept of binarity, P × P, of the binary relation.

P × P |--> P × P; Q : contextualization (context-logical decomposition)
P × P |--> StirlingSn2(P, 2) : from sets to distributions.

Binary relation
A kenomic binary relation, denoted by ->, is any subset of the Stirling
distribution StirlingSn(P, 2) . For any kenomic binary relation -> ⊂ StirlingSn(P, 2) :
domain( ->) =def { a | ∃b, (a, b) ∈ ->}, and
range( ->)   =def { b | ∃a, (a, b) ∈ ->}.

A n-ary kenomic relation, denoted ->n, is any subset of the Stirling distribution StirlingSn(P, n).


Retro-grade recursion
δΚ([q], ε) = [q]
δΚ([q], [w]^[a]) = δΚ(δΚ([q], [w]), [a]).

rewriting rules: [qi][ak] => [qj], with [qi], [qj] as states and [ak] as input symbol.

L = {ω ∈ Κ* | [q0][ω] =>* [qf]ε, with [qf] ∈ F}, Κ* = Sum(StirlingSn2(Σ, *))

2.2.3.  Formal aspects of kenomic cellular automata

Alphabet, language and classical CA
"An alphabet Σ is a finite nonempty set of symbols. Σ* denotes the set of all finite strings of symbols from Σ. The empty string is denoted λ. A language is any subset of Σ*. Σk denotes those elements of Σ* of length k. The symbols in a string s of length n are indexed from 1 to n  and s[i] denotes the ith symbol of s.

"Kari [6] notes that cellular automata have several fundamental properties of the physical world: they are massively parallel, homogeneous, and reversible, have only local interactions, and facilitate formulation of conservation laws based on local update rules. We consider one-dimensional asynchronous reversible cellular automata with insertions and deletions because they support universal computation.

"A cellular automaton C = (Σ, δ) is composed of an alphabet of symbols Σ and a set δ of transition rules of the form axb <-> ayb for substitutions or ab <-> axb for insertions and deletions, where a, b, x, y ∈ Σ.
The idea is that the value of a given cell of the automaton may change only when both its neighbors have specific values.

"For s1, s2 ∈ Σ*, s1 can reach s2 in one step of C , denoted s1 ->C s2, if applying one transition rule to s1 yields s2. And s1 can reach s2 in C if s1 ->*C s2. Given an input string s ∈ Σ*, a snapshot of C on input s is any string s’ such that s can reach s’ in C."

2.2.4.  Operations on kenoCAs

Union of machines

A. Classical CAs
1. Q = states
Q = {(r1, r2) | r1∈ Q1 and  r2∈ Q2}

The set is the Cartesian product of the sets Q1 and Q2 and is written Q1 × Q2.
|head(R9)| × |head(R4)| = 3 × 3 = 9

2. Alphabet: Σ1 = Σ2 = {■, □}

3. rules
(r1, r2)∈ Q, a ∈ Σ :
δ ((r1, r2), a) = (δ1(r1, a), δ2(r2, a))

4. initial
q0 = (q1, q2)

B. Kenogrammatic CAs
kenogrammatics of union
1. States
Q = {(r1; r2)|r1∈ Q1; r2∈ Q2}

The set is the Stirling union of the sets Q1 and Q2 and is written StirlingSn2(Q1, Q2).
StirlingSn2(Q1) × StirlingSn2(Q2) ≠ StirlingSn2(Q1 × Q2)

StirlingSn2(add(head(R9), head(R4))) = 4

2.  Alphabet:
Σ = Σ1 = Σ2:
add( Σ1, Σ2) > Σ
additer(Σ1, Σ2) = Σ
addaccr(Σ1, Σ2) = add(Σ1, Tsucc(Σ2)) > Σ

add(Σ1 = {■, □}, Σ2 = {■, □}) =
Σ1 = {■, □},
Σ2 = Tsucc({■, □}) = {■, □, ▨}.

3. rules
(r1, r2)∈ Q, a ∈ Σ :
δ ((r1, r2), a) = (δ1 (r1, a), δ2 (Tsucc(r2, a)))


4. initial
q0 = (q1, q2) => q0 = add(q1, q2) = (q1, q2, q3, q4)

Concatenation of kenomic languages
A =SEM {a, b}, B =SEM {b, c}
AB =SEM {ab, ac, bb, bc}

A =KENO {1, 2}, B =KENO {1, 2}
kconcat ([1, 2], [1, 2]):
[AB] =KENO {[1212],[1221],[1213],[1231],[1223],[1232],[1234]}.  
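The kenomic concatenation can be sketched as an enumeration: the second word is appended under every identification of its kenograms with old or fresh kenograms that preserves its internal equality pattern (kconcat and tnf are hypothetical names; kenograms are written as single digits, so the sketch is only adequate for small examples):

```python
# A sketch (kconcat, tnf are hypothetical names; kenograms as single digits,
# adequate only for small examples): append b to a under every identification
# of b's kenograms with old or fresh kenograms preserving b's equality pattern.
def tnf(w):
    names, out = {}, []
    for k in w:
        if k not in names:
            names[k] = len(names) + 1
        out.append(names[k])
    return tuple(out)

def kconcat(a, b):
    a, b = tnf(a), tnf(b)
    results = set()

    def assign(i, word, mapping):
        if i == len(b):
            results.add("".join(str(k) for k in word))
            return
        k = b[i]
        if k in mapping:                       # kenogram already identified
            assign(i + 1, word + (mapping[k],), mapping)
        else:                                  # an old value or one fresh value
            for v in range(1, max(word) + 2):
                if v not in mapping.values():
                    assign(i + 1, word + (v,), {**mapping, k: v})

    assign(0, a, {})
    return sorted(results)

print(kconcat([1, 2], [1, 2]))
# -> ['1212', '1213', '1221', '1223', '1231', '1232', '1234']
```

The enumeration reproduces the seven kenomic continuations listed above, in contrast to the four semiotic concatenations of AB.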

2.2.5.  Symmetric cellular automata and reduction

"Exploiting the symmetry with respect to renaming of q states of cellular automata allows us to reduce the number of rules to consider. Namely, it suffices to consider only orbits (equivalence classes) of the rules under q! permutations forming the group Sq.” Vladimir V. Kornyak, Cellular Automata with Symmetric Local Rules, 2006

"Definition 3.1. A cellular automaton A = (S, N, δ) is said to be symmetric if
δ(s1, s2, ..., s|N|) = δ(sσ(1), sσ(2), ..., sσ(|N|)),
for every s1, s2, ..., s|N| ∈ S and σ ∈ S|N| (the permutation group of degree |N|)."

Permutations are not in conflict with the concept of identity of their elements. Identity of the elements is a precondition for their permutation. Permutational equivalence is not kenogrammatic sameness. On a kenogrammatic level, permutational equivalence leads from the trito- to the deutero-level of structuration. Therefore, the permutational reduction of CAs is different from a kenomic reduction of CAs.
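The permutational reduction mentioned by Kornyak can be sketched by computing orbits under the group Sq (rule_orbits is a hypothetical name; for q = 2 states and neighborhoods of width 3, the 8 configurations fall into 4 orbits under S2):

```python
# A sketch (rule_orbits is a hypothetical name): group local configurations
# over q states into orbits under the q! state permutations (the group S_q).
from itertools import permutations, product

def rule_orbits(q, width):
    seen, orbits = set(), []
    for conf in product(range(q), repeat=width):
        if conf in seen:
            continue
        # apply every renaming of the q states to the configuration
        orbit = {tuple(p[s] for s in conf) for p in permutations(range(q))}
        seen |= orbit
        orbits.append(sorted(orbit))
    return orbits

# q = 2 states, neighborhood width 3: 8 configurations, 4 orbits under S_2.
print(len(rule_orbits(2, 3)))  # -> 4
```

A kenomic reduction would go further, since it also identifies deutero-equivalent patterns; the orbit computation only captures the trito-to-deutero step described above.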

3.  Conditions for concatenation and substitution

3.1.  Types of compositions


3.1.1.  Equality: Concatenation

  u   =>  _ id   v : <br /> u =>  _ MG v, w  _ 1 u =>  ...   _ MG v, u =>  _ SEM v and w  _ (1, 2, 3, 4) ∈ C  _ ID .


Example
u = v = (aab),
w1 = w2 = w3 = w4 = (cc),
(aab)(cc) =>ID (aab)(cc),
(cc)(aab) =>ID (cc)(aab).

Lambda-Example I

Cellular automata scheme: CA transition

3.1.2.  Equivalence: Juxtaposition

u =>eq v:
w1u =>MG w2v,  w3u =>SEM w4v,
uw1 =>MG vw2,  uw3 =>SEM vw4,

wi ∈ CEQ, i = 1, ..., 4:
w1 ≠SEM w2, w1 =MG w2,
w3 ≠SEM w4, w3 =MG w4,
w1 =SEM w3, w1 =MG w3,
w2 =SEM w4, w2 =MG w4.


Example
u = [aab], v = [bba]
w1,2 = [cc], w3,4 = [dd]
[aab][cc] =>EQ [bba][dd]
[cc][aab] =>EQ [dd][bba]

Lambda-Example II

Lambda-Example III

kenoCA transition scheme: ci(t + 1) : Rule-Result

3.1.3.  Similarity: Cooperation

(u =>SIM v) => ((w1u =>SIM w2v), (uw3 =>SIM vw4)),
[...] w2 ≠SEM w4, w2 =MG w4

Similarity
wi ∈ CSIM: w1u =>SIM w2v, uw3 =>SIM vw4

Table: u =>SIM v

Example: u =>SIM v

u = [aab], v = [bba]
u =>MG v, u ¬=>SEM v
w1 = [cc], w2 = [dd]
w3 = [ee], w4 = [ff]

wi ∈ CSIM, i = 1, 2, 3, 4

length(w1) = length(w2)
sem(wi) ∩ sem(u) = ∅, i = 1, 2

length(w3) = length(w4)
sem(wi) ∩ sem(v) = ∅, i = 3, 4

length(w1) = length(w3)
sem(wi) ∩ sem(v) = ∅, i = 1, 3

length(w2) = length(w4)
sem(wi) ∩ sem(v) = ∅, i = 2, 4

[aab][cc] =>SIM [bba][dd],
[ee][aab] =>SIM [ff][bba].

Example I - SIM: t1 =SIM t2.

3.1.4.  Bisimilarity: Fusion

Bisimilarity w  _ 1 Underscript[=>, BIS]    w  _ 2 iff <br />  ... #xF3A0; _ 2) ∈ (Dec, Vk, Vs, EVk, EVs) length (w  _ 1) != length (w  _ 2) :

∃ (x  _ 1, x  _ 2) : EVk ([w  _ 1]) = (x  _ 1, x  ... 8707; (y  _ 1, y  _ 2) : EVs ([w  _ 2]) = (y  _ 1, y  _ 2)

<br /> IF ((Vs (x  _ 1, x  _ 2) = [w  _ 2])/(Vk (y  _ 1, y &#x ...                 <br />

Table: u =>BIS v

Conditions: u =>BIS v
wu ¬=>SEM wv, [...]
w2 ≠SEM w4, w2 =MG w4


Example: u =>BIS v

[Graphics:HTMLFiles/Notes on semi-Thue systems_175.gif]


Morphic transition rule scheme
[diagram not reproducible]

3.1.5.  Metamorphosis of rewriting

Recall, “Elements of P are variously called defining relations, productions, or rewrite rules, and S itself is also known as a rewriting system. If (x, y)∈ P, we call x the antecedent, and y the consequent.
Instead of writing (x, y)∈ P or xPy , we usually write
                                                                      x --> y."

Up to now, the transformation rules of rewriting systems had been defined in a quite straightforward sense of succession of antecedent and consequent by substitution.

Funny candidates joined this game, which was opened up by Thoralf Skolem with his identity-conserving productions. Kenogrammatic rewriting systems introduced an abstraction on the ‘data’, transforming sign systems into kenogrammatic systems. With the idea of overlapping and fusion, a mechanism for dealing with overdetermined rewriting systems was opened up: morphic systems.

Nevertheless, the brave succession and hierarchical order of antecedents and precedents had been left untouched and accepted by this change of the modi of interaction, and guided the introduction of the semiotic, kenomic and morphic concepts of rewriting systems.

A much more intriguing situation is possible with the idea of metamorphic transformations.

A full fledged involvement of the concept of diamond categorical interchangeability of distributed functors allows to introduce the paradox of a simultaneity of sameness and differentness in the game of interchanging roles.
Therefore, morphisms are not just changing objects in the mode of equality, equivalence, similarity and bisimilarity but in the mode of metamorphosis too. Metamorphosis is understood in the strict sense of an interplay of change and pertinence.
Hence, an antecedent is not just producing its precedent by a transition rule but is at once also keeping itself in the game of change as the antecedent of a precedent.

3.1.6.  Types of change

1. transition_succession:

Antecedent --> Precedent, succession-modi = {id, eq, sim, bisim}

2. transition_chiasm:

(Antecedent1 --> Precedent1) ↕ (Precedent2 <-- Antecedent2)

The wording here is: Antecedents become Precedents and Precedents become Antecedents.

3. transition_polycontextural:

(Antecedent1.3 --> Precedent1) ↕ (Precedent2.3 <-- Antecedent2)

The wording here is, "Antecedents becomes Precedents and Precedents becomes Antecedents. The result is reflected in system3".

4. transition_diamond:

((Antecedent1.3 --> Precedent1) ↕ (Precedent2.3 <-- Antecedent2)) | (system4)


"The matching conditions of the chiastic and polycontextural construction are reflected in the ‘antidromic’ system4.”


5. transition_metamorphosis:

(system1) ∐ (system2) [...] 1.2
(metaphor: Gregor Samsa (Franz Kafka) as Gregor in the process of metamorphosis)

6. transition_diamond-metamorphosis:

(system [...] 4 : (1.1 - 2.1; 2.2 - 1.2))

3.1.7.  Type/term model of metamorphosis


"The wording here is not only “types become terms and terms become types” but “a type as a term becomes a term” and, at the same time, a type as type remains a type. Thus, a type as a term becomes a term and as a type it remains a type. And the same round for terms.

Full wording for a chiasm between terms and types over two loci. Explicitly, first the g[...] σ2.2 it remains a type σ2.2 for a term M2.2".

And simultaneously, mediated, the second round in red, the same for terms: "A te[...] coincidence is realized. While between terms and types a morphism (order relation) exists.

3.1.8.  Interchangeability of metamorphosis

Metamorphic interactivity
[diagram not reproducible]

3.1.9.  Some summary

Interdependence of operators (o, ∐, ◊, ~~): Metamorphism

Interdependence of operators (o, ⊗, ≡): Equality

Interdependence of the operators (o, ∐, ~=): Similarity

3.2.  Transitive closures

3.2.1.  Linearity

"Next, take the reflexive transitive closure P″ of P′. Write a => b for (a, b) ∈ P″. So a =>* b means that either a = b, or there is a finite chain a = a1, ..., an = b such that ai => ai+1 for i = 1, ..., n−1. When a =>* b, we say that b is derivable from a.“

"Concatenation preserves derivability:
a => * b and c  => * d imply ac  => * bd .” (PlanetMath)

Morphogrammatics of concatenation

a => * b and c => *d imply a^c  => * b^d .

Depending on the definition of the concatenation operation "^", different realizations have to be distinguished.

Candidates are:

Identity (equality)

a =>id* b and c =>id* d imply a^c =>id* b^d


(a =>  _ eq * b) and    (c =>  _ eq * d) imply ((a^^i  ... (((a) <br /> ∐ )/( (c) )) =>  _ eq    (((b) <br /> ∐ )/( (d) )))

3.2.2.  Bifunctoriality

For ambiguous semi-Thue systems, like the morphogrammatic semi-Thue systems, the interplay of bifunctorial interchangeability gets some relevance in the definition of the rewriting system as such.

Up to isomorphism and down to kenomic sameness
Two kenomic semi-Thue systems are equal iff they are equivalent, i.e. isomorphic.

3.3.  Category theory and rewriting systems

3.3.1.  Graph transformation

"We have introduced a new notion of abstract rewriting system based on categories. These systems are designed for dealing with abstract rewriting frameworks where rewrite steps are defined by means of matches. We have defined the properties of (horizontal) composition as well as functoriality of rewriting in our abstract setting and we have illustrated these properties throughout several algebraic graph rewriting systems.” (F. Prost et al)

Although horizontal and vertical aspects of categorical compositions are considered, the main point is still to develop a well-glued approach in the sense of the Berlin school of graph transformation (Ehrig, König). The ultimate glue is offered by the category-theoretic span concept. Pushouts and spans are “used for describing graph transformation systems as categorical rewriting systems”.

"In a categorical rewriting system, the matches introduce a “vertical dimension”, in addition to the “horizontal dimension” provided by the rules.
This composition gives rise to the bicategory of categorical rewriting systems (as for spans, we get a bicategory rather than a category, because the unicity of pushouts is only up to isomorphism)."

In contrast, a more or less glue-free construction is introduced by the concept of categorical bifunctoriality and its generalization to a diamond category-theoretic interchangeability of morphisms and contextures. In this approach “horizontal” and “vertical” structures of graph transformation systems are not glued together but are interacting in the framework of interchangeability.

3.3.2.  Pushouts and diamonds

"After having developed some insights and experiences with the diamond approach and its complementary structures, a design of diamond category theory might be introduced which is not as close to the introductory analogy to classic category theory.”

Excerpts from: Kaehr, Category of Glue III, (2009), unpublished.

Hetero-morphisms are reflecting the matching conditions of the composition of morphisms in a category.
There is an analogy between the concatenation of production rules in re-writing systems and the composition of morphisms. Graph transformation systems and graph grammars are surpassing the limitations of linear concatenation of sign sequences. Graph transformation is formalized by Schneider, Ehrig et al. by categorical pushouts.

"Therefore, graph transformations become attractive as a modeling and programming paradigm for complex-structured software and graphical interfaces. In particular, graph rewriting is promising as a comprehensive framework in which the transformation of all these very different structures can be modeled and studied in a uniform way.” (Ehrig, Padberg, p. 3)

Hetero-morphisms of pushouts are reflecting the complexity of graph composition.
Because of its complexity a more complex interplay between hetero-morphisms and graph composition is opened up.  
Focused on graph derivations, the saltatorical hetero-morphisms are complementarily defined. But the inverse complementary situation holds too. During a graph derivation, the saltatorical system might be changed, thereby re-defining the structural conditions of the categorical graph derivation.

This kind of mutual interplay had been defined for categories and saltatories concerning the matching conditions of the composition of morphisms. Therefore, the interplay in diamondized graph systems is a generalization of the compositional approach.

3.3.3.  Concatenation and pushouts

Production systems are based on concatenation. They have an initial and a terminal object.
A generalization of concatenation production systems is introduced by a transition from strings to graphs. Strings consist of atomic signs. Graphs are composed of elementary graphs, consisting of nodes and edges. Hence, graph grammars are a generalization of sign production systems. Sign production systems are mapped as trees, graph transformations as graphs.

Graph transformation and graph grammars based on pushout constructions are well embedded in category theory. Pushouts and their dual, pullbacks, are safe categorical constructions based on the composition rules for morphisms in categories.
Categories in general are well complemented by saltatories.

Because pushouts are defined in categories, a diamondization of pushouts follows naturally.
Hence, pushouts as models for graph grammars get diamondized into pushouts for diamond graph grammars.

As a consequence of the dependence of graph grammars on category theory, it seems obvious that graph transformations do not surpass the limitations of computability of sign production systems.
Graph transformation sequences are computationally equivalent to sign production sequences.
This correspondence between the computability of sign production systems and graph derivation might be disturbed by diamondized graph grammars.