Rudolf Kaehr, Dr. phil.

Copyright ThinkArt Lab ISSN 2041-4358

Abstract

Morphogrammatics is not presuming a multitude of contextures but is creating a polyverse of contextures in each situation of a realization of an operation. This continues studies proposed with “Notes on semi-Thue Systems in a Context of Morphogrammatics".

*If term X can be reduced to both Y and Z, then there must be a further term V (possibly equal to either Y or Z) to which both Y and Z can be reduced.*

**Kenomic inversion**

"If there are two morphogrammatically equivalent but semiotically different terms t_{1} and t_{2}, i.e. t_{1} =_{MG} t_{2} and t_{1} !=_{sem} t_{2}, then two separated terms t_{1} and t_{2} can be produced from both t_{1} and t_{2}, which then produce a prior term t which can be produced from both t_{1} and t_{2}."

*Two representations, one solution*

λ X, Y, Z:

CONS((XY), Z):

CONS((XY), Z) = ((ab), a).

**Deconstruction, step one**

*Church-Rosser*: two representations, one solution.

*Morphogrammatics*: two solutions, one representation.

**β-equivalence**

"Finally, β-equivalence is obtained by allowing reduction steps as well as *inverse* reduction steps, i.e., by making --> symmetric:

*Definition*. We write M = M’ if M can be transformed into M’ by zero or more reduction steps and/or *inverse* reduction steps.

Formally, = is defined to be the reflexive symmetric transitive closure of -->, i.e., the smallest *equivalence* relation containing -->.” (Selinger, p.17)
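As a sketch, the closure operation in this definition can be made concrete: the β-equivalence classes are exactly the connected components of the reduction graph once --> is made symmetric and transitive. The following Python fragment computes them with a union-find; the term names and the reduction pairs are hypothetical placeholders, not taken from the text.

```python
def beta_equivalence(terms, reductions):
    """Equivalence classes of the reflexive symmetric transitive
    closure of a one-step reduction relation, via union-find."""
    parent = {t: t for t in terms}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    # Making --> symmetric and transitive: merge the classes of M and M'.
    for m, m2 in reductions:
        parent[find(m)] = find(m2)

    classes = {}
    for t in terms:
        classes.setdefault(find(t), set()).add(t)
    return list(classes.values())

# Hypothetical reduction graph: t0 --> t1, t0 --> t2, t1 --> t3, t2 --> t3.
terms = ["t0", "t1", "t2", "t3", "u"]
reductions = [("t0", "t1"), ("t0", "t2"), ("t1", "t3"), ("t2", "t3")]
print(beta_equivalence(terms, reductions))
```

Here t0, t1, t2, t3 fall into one β-equivalence class and the isolated term u into another.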

Nonetheless, the β-equivalence is based on the principle *“One representation, one solution”* and is therefore not to be confused with the given examples of the concept *“Many solutions, one representation”*.

What is the difference to the polycontextural approach as it was presented under the motto *“Lambda calculi in polycontextural situations"*? As the title suggests, polycontexturality of situations is presupposed to place, i.e. distribute and mediate, lambda calculi. The study of such distributed lambda calculi was partly elaborated and has delivered some interesting results. Nevertheless, polycontexturality wasn’t generated by such dissemination of lambda calculi but presupposed.

Polycontexturality has its own calculus of generating distributed contextures in a polyverse.

On the other hand, a justification for the choice of polycontexturality isn’t necessary. The reason is simple: there is no generally acceptable reason to opt for the classical monocontextural approach. The fact that monocontexturality is world-wide accepted and technically in many respects highly successful doesn’t mean that this monocontextural approach has found a secure foundation. Its legitimacy is pragmatic; its limits are more and more obvious.

Therefore, an option for polycontexturality isn’t less arbitrary than an option for the established monocontexturality.

In contrast to the contextural decision, keno- and morphogrammatic constructions as sketched in the paper *“Notes on semi-Thue Systems in a Context of Morphogrammatics”* are not presupposing but *generating* their polycontexturality on the basis of their own operations. Each repetition and iteration has *per se* its retrograde recursive double role as iteration and as accretion.

Morphogrammatics is not presuming a multitude of contextures but is creating a polyverse of contextures in each situation of a realization of an operation.

**Definition**. The (capture-avoiding) *substitution* of N for free occurrences of x in M, in symbols M[N/x], is defined as follows:

This opens up the possibility to introduce different kinds of abstractions involved in the process of substitution. The general table of different kinds of substitutions as they are introduced for morphogrammatic semi-Thue systems shall be applied.
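Before varying the modes of substitution, the standard capture-avoiding clauses for M[N/x] can be sketched; lambda terms are encoded here as nested tuples, and the encoding and helper names are our own, not the paper's or Selinger's notation.

```python
import itertools

fresh = (f"v{i}" for i in itertools.count())  # supply of fresh variable names

def free_vars(m):
    tag = m[0]
    if tag == "var":
        return {m[1]}
    if tag == "app":
        return free_vars(m[1]) | free_vars(m[2])
    return free_vars(m[2]) - {m[1]}           # ("lam", x, body)

def subst(m, n, x):
    """M[N/x]: capture-avoiding substitution of N for free x in M."""
    tag = m[0]
    if tag == "var":
        return n if m[1] == x else m
    if tag == "app":
        return ("app", subst(m[1], n, x), subst(m[2], n, x))
    y, body = m[1], m[2]
    if y == x:                      # x is bound here: no free occurrences
        return m
    if y in free_vars(n):           # rename the binder to avoid capture
        z = next(fresh)
        body = subst(body, ("var", z), y)
        y = z
    return ("lam", y, subst(body, n, x))

# (λy.x)[y/x] must rename the binder so the free y is not captured.
m = ("lam", "y", ("var", "x"))
print(subst(m, ("var", "y"), "x"))
```

The morphogrammatic variants below can then be read as relaxations of the identity requirement built into this classical clause set.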

http://memristors.memristics.com/semi-Thue/Notes%20on%20semi-Thue%20systems.pdf

**Example**

**Identity (equality)**

ID = (x_{1} ≡ x_{2} ≡ x_{3})


**Equivalence**

(x_{1} =_{keno} x_{2} =_{keno} x_{3})


**Similarity**

(x_{1} =_{sim} x_{2} =_{sim} x_{3})


**Bisimilarity**

(x_{1} =_{bis} x_{2} =_{bis} x_{3})


**Different version**

*Reversing beta-reduction produces the beta-abstraction rule.*

Kenneth Slonneger, Formal syntax and semantics for programming languages, §5, p.149, 1995

"In fact, when a form contains more than one lambda that can be reduced, it does not matter which one is reduced first, the result will be the same. This is known as the Church-Rosser property, or, informally, as the diamond property.” (Barker)

Obviously this says that both branches are terminal and that both terminal results of the branches are semantically equal; therefore the branches meet in a common end. This corresponds to the graphematical equality or identity case.

Thus, we have:

branch_{1}: t_{0} --> t_{1} --> t_{3.1}

branch_{2}: t_{0} --> t_{2} --> t_{3.2}

with t_{3.1} =_{SEM} t_{3.2}, hence the final result is t_{3} = t_{3.1} = t_{3.2}.

Hence, for *equality* of terms we get: t_{3.1} =_{SYNT} t_{3.2}.

Again, this result presupposes a common arithmetic for the example, given by the term t_{0}. But what happens if we allow a game where arithmetics might be colored, so that the results are still syntactically equivalent but semantically different, because, say, blue is not green? Or if the number systems are running differently? This situation could result in a *kenogrammatical* calculus if we accept that the difference lies not just in the color but in the syntactical terms, with results t_{3.1} = (1 + 1) + (1 + 1) and t_{3.2} = (2 + 2) + (2 + 2).

Hence, for *equivalence* of terms we get: t_{3.1} !=_{SEM} t_{3.2} but t_{3.1} =_{KENO} t_{3.2}.
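The kenogrammatic reading can be made operational: keno-equivalence ignores *which* signs occur and keeps only their pattern of repetition. A minimal sketch, working on flat token sequences only (a simplifying assumption for illustration):

```python
def kenogram(tokens):
    """Normal form under keno-equivalence: replace each distinct token
    by the index of its first occurrence (1, 2, 3, ...)."""
    seen = {}
    return [seen.setdefault(t, len(seen) + 1) for t in tokens]

t31 = list("(1+1)+(1+1)")
t32 = list("(2+2)+(2+2)")
print(kenogram(t31) == kenogram(t32))  # same repetition pattern
```

The same normalization also covers the word example below: [aab] and [bba] share the kenogram [1, 1, 2].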

More interesting deviations happen in the case where the terms differ in the number of their sub-terms too.

Say, if we get a result with the different operators “+” and “x”:

t_{3.1} = (1 + 1) + (1 + 1) and t_{3.2} = (2 x 2) + (2 x 2).

In this case, there is no equality (identity) or kenogrammatical equivalence but *similarity*. Both terms are similar in their abstract structure which conserves the complexity of the terms, i.e. the length of the formula.

Hence, for *similarity* of terms we get: t_{3.1} !=_{SEM} t_{3.2}, t_{3.1} !=_{KENO} t_{3.2}, but t_{3.1} =_{SIM} t_{3.2}.

While similar terms are still of the same length, the number of their distinct elements might differ. Only the *metamorphic* abstraction over terms introduces a *bisimilarity* between terms of different complexity and therefore different length of the involved terms.

Bisimilarity happens if we get structurally different answers from the ‘same’ constellation t_{0}.

Say, t_{0} = (λx.x + x)((λy.y + y)1) delivers the answers t_{3.1} = (1 + 1) + (1 + 1) and t_{3.2} = (2 x 2) + (2 x 2) x (1 + 1).

Hence, for *bisimilarity* of terms we get: t_{3.1} !=_{SEM} t_{3.2}, t_{3.1} !=_{KENO} t_{3.2}, t_{3.1} !=_{SIM} t_{3.2} but t_{3.1} =_{BIS} t_{3.2}.
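The cascade of the four relations can be sketched as a classifier over token sequences. The tests below are simplified stand-ins, not the full morphogrammatic definitions: identity as syntactic equality, keno-equivalence as equal repetition pattern, similarity as equal length, and bisimilarity as the remaining catch-all for terms of different length.

```python
def kenogram(tokens):
    seen = {}
    return [seen.setdefault(t, len(seen) + 1) for t in tokens]

def graphematic_level(u, v):
    """Strongest of the four relations holding between token sequences."""
    if u == v:
        return "EQ"            # identity
    if kenogram(u) == kenogram(v):
        return "KENO"          # same repetition pattern
    if len(u) == len(v):
        return "SIM"           # same length, different pattern
    return "BIS"               # lengths differ: only bisimilarity remains

t0 = list("(1+1)+(1+1)")
print(graphematic_level(t0, list("(1+1)+(1+1)")))        # EQ
print(graphematic_level(t0, list("(2+2)+(2+2)")))        # KENO
print(graphematic_level(t0, list("(2x2)+(2x2)")))        # SIM
print(graphematic_level(t0, list("(2x2)+(2x2)x(1+1)")))  # BIS
```

The four test cases reproduce the four examples of this section in order.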

All cases, *equality*, *equivalence*, *similarity* and *bisimilarity*, are still accepting the main scheme of the term development, here the branching of t_{0} into t_{1} and t_{2}, terminating in t_{3}. Therefore, there is not yet any *retrograde* redefinition of the scheme involved during the term development.

This retrograde redefinition is attempted with interactional and interventional *metamorphosis* of general polycontextural formal systems. •

*“In these computations, the final values are syntactically equal, so we know that they are semantically equal. This is interesting in light of the fact that in the first computation the top-level function was applied to the value of its arguments, while the second computation, it was applied to the syntactic expressions defining its arguments!” *(Starck, p. 203)

Substitutions had been quite harmonious for the *identity *(equality) and the *equivalence* case.

Substitution in the mode of identity, which is the case for classical lambda calculi, is a prototype of a transformation or rewriting system without any deviation from identity.

Kenomic substitutions happen in the mode of *equivalence*, in contrast to the equality of identity, enabled by the kenomic abstraction or the Stirling turn.

Nevertheless, both abstractions are *balanced* in respect of the number of occurrences of their terms.

This presumption is abandoned with the abstraction of *similarity*. While similar terms are still of the same length, the number of their distinct elements might differ.

Only the *metamorphic* abstraction over terms is introducing a *bisimilarity* between terms of different numbers of elements and different length of the involved terms.


**Example: u, v**

u = [aab], v = [bba]

u =>_{MG} v, u ¬=>_{SEM} v

w_{1} = [cc], w_{2} = [dd] :

w_{3} = [ee], w_{4} = [ff] :

w_{i} =_{SIM} w_{j}, i, j = 1, 2, 3, 4

length(w_{1}) = length(w_{2}), sem(w_{i}) ∩ sem(u) = ∅, i = 1, 2

length(w_{3}) = length(w_{4}), sem(w_{i}) ∩ sem(v) = ∅, i = 3, 4

length(w_{1}) = length(w_{3}), sem(w_{i}) ∩ sem(v) = ∅, i = 1, 3

length(w_{2}) = length(w_{4}), sem(w_{i}) ∩ sem(v) = ∅, i = 2, 4

http://memristors.memristics.com/semi-Thue/Notes%20on%20semi-Thue%20systems.pdf

**A simple application: “Fruit flies like a banana.”**

What’s the meaning of an ambiguous sentence like this?

As it is well known, the sentence has, at least, two meanings:

1. “The insects called fruit flies are positively disposed towards bananas.”

2. “Something called fruit is capable of the same type of trajectory as a banana.”

Ambiguity in languages is reflected in the existence of more than one parse tree for one and the same sentence.

*(Diagram: the two parse trees of “Fruit flies like a banana.” — one reading “fruit flies” as subject with the verb “like”, the other reading “fruit” as subject with the verb “flies” and the phrase “like a banana” as modifier.)*

**Compositionality**

The sentence X = "Fruit flies like a banana.” is de/composable into two different term-trees.

X' = (x_{1}, x_{2}, x_{3}) and X” = (x_{1}, x_{2}, x_{3}, x_{4}) with

X’ = like(banana, fruit-flies), length(X’) = 3

X” = flies((like, banana), fruit), length(X") = 4.

Because the meaning of the two sentences is not decomposable into the same number of sub-terms, the relation between the two interpretations of the full sentence is neither equality, equivalence nor similarity; therefore, the concept of polysemy is not properly applicable. The two interpretations are also not in an asymmetric meaning relation of one-many mappings.

The ‘ambiguous’ meaning of the sentence X ‘as such’ is therefore understandable as a *bisimilar* interplay between its two different realizations X’ and X”. The morphogram of X, i.e. its deep-structural meaning prior to any phonological interpretation, is thus the morphogram of the whole situation MG = (X, X’, X").
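The contrast in decomposition can be checked mechanically. Below, the two readings are encoded as nested tuples (a hypothetical representation, ours rather than the text's); counting atomic sub-terms reproduces length(X’) = 3 and length(X”) = 4.

```python
# Two readings of X = "Fruit flies like a banana." as term trees.
X1 = ("like", "banana", "fruit-flies")       # X'  = like(banana, fruit-flies)
X2 = ("flies", ("like", "banana"), "fruit")  # X'' = flies((like, banana), fruit)

def leaves(t):
    """Count the atomic sub-terms of a nested-tuple term tree."""
    if not isinstance(t, tuple):
        return 1
    return sum(leaves(s) for s in t)

print(leaves(X1), leaves(X2))  # 3 4
```

Since the leaf counts differ, no length-preserving relation (equality, keno-equivalence, similarity) can hold between the two readings.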

It is a reasonable option to interpret this linguistic example, which is an overlapping of two sentences of different structural ‘length’, with the techniques of *morphogrammatic bisimilarity*. With such an interpretation, the former interpretation with the help of morphogrammatic bifunctoriality based on *fusion* gets a further and technically more unifying treatment.

**Morphogrammatic interpretation** based on “*fusion*” and concatenation.

http://memristors.memristics.com/Memristics%20LISPs/Memristics%20LISPs.html

*"For instance, if f = λx.x is the identity function, then we have f(x) = x for any x.*

In particular, we can take x = f , and we get

f(f) = (λx.x)(f) = f .

Note that the equation f(f) = f never makes sense in ordinary mathematics, since it is not possible (for set-theoretic reasons) for a function to be included in its own domain.” (Selinger, p. 7)

http://www.mscs.dal.ca/~selinger/papers/lambdanotes.pdf
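Selinger's point can be observed directly in any language with first-class functions; a one-line sketch in Python:

```python
# The identity function applied to itself returns itself:
# f(f) = (λx.x)(f) = f.  Untyped self-application is unproblematic here.
f = lambda x: x
print(f(f) is f)  # the result is the very same function object
```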

The trick to produce circularity is simple, well accepted and has lost any strangeness:

"In *particular*, we can take x = f, and we get f(f) = (λx.x)(f) = f.”

But it is guided by *logical* decisions and not by the means of the lambda calculus *per se*.

What holds in general, holds in particular too. But all that belongs to innocent hierarchical systems of total governance.

From a quadralectic point of view such naivety is not worth following. Identity, especially in the disguise of beginnings, archai, is not unified by a simple umbrella of identity as it appears with the formula “f = λx.x” and its consequences.

"For example, the lambda term (λx.y)((λz.zz)(λw.w)) can be reduced as follows.

Here, we underline each redex just before reducing it:

(λx.y)((λz.zz)(λw.w)) ->_{β} (λx.y)((λw.w)(λw.w))

->_{β} (λx.y)(λw.w)

->_{β} y.

The last term, y, has no redexes and is thus in normal form. We could reduce the same term differently, by choosing the redexes in a different order:

(λx.y)((λz.zz)(λw.w)) ->_{β} y.

"This example also shows that the size of a lambda term need not decrease during reduction; it can increase, or remain the same.” (Selinger)

Not every term evaluates to something; some terms can be reduced forever without reaching a normal form. The following is an example: (λx.xx)(λx.xx) ->_{β} (λx.xx)(λx.xx) ->_{β} ...
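The loop of the term Ω = (λx.xx)(λx.xx), the standard example of a term without normal form, can be made visible with a minimal one-step β-reducer over tuple-encoded terms. The encoding and the reducer are our own toy sketch; the naive substitution suffices here because Ω contains no free variables that could be captured.

```python
def step(t):
    """One leftmost-outermost β-step on terms encoded as
    ("var", x) | ("lam", x, body) | ("app", f, a)."""
    if t[0] == "app" and t[1][0] == "lam":
        _, (_, x, body), a = t[0], t[1], t[2]
        return subst(body, a, x)
    return t  # no top-level redex

def subst(m, n, x):  # naive substitution; safe here (no capture in Ω)
    if m[0] == "var":
        return n if m[1] == x else m
    if m[0] == "app":
        return ("app", subst(m[1], n, x), subst(m[2], n, x))
    return m if m[1] == x else ("lam", m[1], subst(m[2], n, x))

w = ("lam", "x", ("app", ("var", "x"), ("var", "x")))  # λx.xx
omega = ("app", w, w)                                  # Ω = (λx.xx)(λx.xx)
print(step(omega) == omega)  # Ω --> Ω: reduction never terminates
```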

Two infinite developments with a finite inter-relation between the two loops are called chiastic terms. Chiastic terms are a new kind of terminal, i.e. finite, interaction between non-terminal terms. Chiasms themselves might have non-terminal developments too.

Chiastic terms belong to a class of new term configurations and are motivated by polycontextural considerations.

**Identity**: F =_{ID} F'

Y ≡ λf.(λx.f(xx))(λx.f(xx))

YF ≡ (λf.(λx.f(xx))(λx.f(xx)))F

=> (λx.F'(xx))(λx.F(xx))

=> F((λx.F'(xx))(λx.F(xx)))

=> F'(YF) = F(YF).

**Equivalence**: F !=_{ID} F’, F =_{keno} F'

Y ≡ λf.(λx.f(xx))(λx.f(xx))

YF ≡ (λf.(λx.f(xx))(λx.f(xx)))F

=> (λx.F'(xx))(λx.F(xx))

=> F((λx.F'(xx))(λx.F(xx)))

=> F'(YF).

**Some motivation**

If the meaning of a term is defined by its *use* (Wittgenstein), then we should analyze the use of all crucial terms in the construction of the Fixed Point Theorem.

Because this theorem is of crucial importance for formal systems in general and for questions of computability in particular, it isn’t an exaggeration to test all the cases of the use of the main terms involved.

With Barendregt’s formula for the construction of the Fixed Point Theorem, the term “W” is used 6 times, and therefore we have to check all its possible ways of use. The comfortable excuse of using the distinction between a syntactic and a semantic use, or an object- and a meta-language use of the terms, to avoid further analysis isn’t of leading importance in this case.

There are certainly better arguments to motivate the decision for a morphogrammatic turn towards a new understanding of the use of signs. For the purpose of a simplified introduction to the techniques of morphogrammatics, the desire for justification should suffice.

**From Y functor to WHY interaction**

A similar constellation appeared in *Lambda Calculi in...* with the modeling of a 3-contextural Y and WHY function.

**Chiasm of operator and operand**

"In YF, the term Y is an operator and F is an operand of the application YF. Because of the highly abstract definition of the Lambda Calculus it is possible to change the operand, step by step, to an operator.

Now, F is an *operator* to (YF) and also an *operand* to Y in (YF). This double-functionality of F is saved in the mind of the reader; the difference is nullified by notional abstraction.

Polycontextural strategy tries, in contrast and additionally, to inscribe such a notational abstraction into a graphematic play. Because there is no trust in mental representations we have to write it down.” (polyLC, p. 63)

The ingenious construction of the Fixed Point Theorem is given by Barendregt’s formula. It is a simplified definition of the Turing operator “by recognizing that we don’t need to delay the evaluation of the first expression”.

The following analysis of the Fixed Point Theorem is not considering the computational aspects of “eager” and “lazy” programming.

Proof of the theorem by Barendregt:

To analyze this “lazy” construction of the Y combinator we have to consider all occurrences of the main term “W”. Hence, we count 6 different uses of the term W. Therefore we assign each use of the term W a number: W^{i}, with i ∈ {1, ..., 6}.

Thus, the full scheme of the Fixed Point Theorem is now marked by the occurrence numbers of the term “W” in the application.

There is in fact also a difference of use in “*define* X ≡ WW”, W^{2}W^{3}, and the logical application “*Then* X ≡ W”, W^{2}W^{3}. This difference is not specially marked in this discussion.

**Fixed Point scheme for W**

W^{1} ≡ λx.F(xx), X ≡ W^{2}W^{3}, λx.F(xx)W^{4}, F(W^{5}W^{6}).

**Eager Fixed Point Combinator (Selinger)**

*"The lambda calculus contrasts with arithmetic in that every lambda term has a fixpoint. This is perhaps the first surprising fact about the lambda calculus we learn in this course.”*

"Theorem 3.1.

Proof. Let

N = Θ F

= AAF

= (λxy.y(xxy))AF

--> F(AAF)    by (β): (λx.M)N --> M[N/x]

= F(ΘF)

= FN.

The term Θ used in the proof is called Turing’s fixed point combinator.

**Analysis**

*A = λxy.y(xxy)*, *Θ = AA*,

N = ΘF = AAF = (λxy.y(xxy))AF --> F(AAF) = F(ΘF) = FN.

*A*^{1} *= λxy.y(xxy)*, *Θ = A*^{2}*A*^{3},

N = ΘF = A^{4}A^{5}F = (λxy.y(xxy))A^{6}F --> F(A^{7}A^{8}F) = F(ΘF) = FN.
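Under eager evaluation the inner self-application xxy has to be delayed, otherwise computing Θ = AA itself diverges. A Python sketch of an η-expanded Turing combinator; the delaying lambda is our addition for applicative order, not part of the pure definition above.

```python
# A = λx.λy. y (x x y), with (x x y) delayed behind a lambda
# so that Python's eager evaluation terminates.
A = lambda x: lambda y: y(lambda v: x(x)(y)(v))
Theta = A(A)  # Θ = AA, satisfying ΘF = F(ΘF)

# Illustrative use: recursion through the fixed point, no self-reference in F.
fact = Theta(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
print(fact(5))  # 120
```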

**Eager Fixed Point combinator in Scheme**

(define Y
  (lambda (f)
    ((lambda (x) (f (lambda (y) ((x x) y))))
     (lambda (x) (f (lambda (y) ((x x) y)))))))

*Recursive FACT with Y:*

(define fact
  (Y (lambda (f)
       (lambda (n)
         (if (zero? n) 1 (* n (f (- n 1))))))))
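The Scheme definition transliterates directly to Python; the η-expansion (lambda (y) ((x x) y)) is exactly what keeps applicative-order evaluation from looping.

```python
# Eager (applicative-order) fixed point combinator, mirroring the Scheme code:
# Y = λf.(λx.f(λy.(x x) y))(λx.f(λy.(x x) y))
Y = (lambda f:
     (lambda x: f(lambda y: x(x)(y)))
     (lambda x: f(lambda y: x(x)(y))))

fact = Y(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
print(fact(5))  # 120
```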

**Y-KENO**

(define Y
  (lambda (f)
    ((lambda (x) (f (lambda (y) ((x x) y))))
     (lambda (x) (f (lambda (y) ((x x) y))))
     (lambda (x) (f (lambda (y) ((x x) y)))))))

**Identity: [1,1,1,1,1,1]**

The most obvious use of “W” is the use in the mode of *identity*:

For all i, j: W^{i} ≡ W^{j}, with i, j ∈ {1, ..., 6}

=>

X ≡ W^{1}W^{1} ≡ λx.F(xx)W^{1} ≡ F(W^{1}W^{1}) ≡ FX.

That is the classical case of identity:

X ≡ WW ≡ λx.F(xx)W ≡ F(WW) ≡ FX.

=> X ≡ FX.

**Logical interpretation**

X ≡ WW ≡ λx.non(xx)W ≡ non(WW) ≡ nonX.

=> X ≡ nonX.

{X, nonX} ∈ Contradiction.

The kenomic equivalence takes the *Stirling turn*. What counts in this case is no longer the identity of the signs in consideration but the structure of their distribution. Although the complexity of distribution is minimal in this application of the fixed point construction, there are nevertheless enough distinctions available to differentiate between an identity-based and a kenogrammatic use of signs.

**Example 1: [1,1,1; 4,4,4]**

W^{1}≡ W^{2} ≡ W^{3}, W^{4} ≡ W^{5} ≡ W^{6}

W^{1} ≡ λx.F(xx), X ≡ W^{1}W^{1}, λx.F(xx)W^{4}, F(W^{4}W^{4}):

W^{1} ≡ λx.F(xx)

X^{1} ≡ W^{1}W^{1} ≅_{keno} λx.F(xx)W^{4} ≡ F(W^{4}W^{4}) ≅_{keno} FX^{4}

=> X^{1} ≅_{keno} FX^{4}.

**Example 2: [1,1,1,4,1,1]**

W^{1}≡ W^{2} ≡ W^{3} ≡ W^{5} ≡ W^{6}, W^{4}!= W^{1}

W^{1} ≡ λx.F(xx)

X^{1} ≡ W^{1}W^{1} ≅_{keno} λx.F(xx)W^{4} ≅_{keno} F(W^{1}W^{1}) ≅_{keno} FX^{1}

=> X^{1} ≅_{keno} FX^{1}.

**Example 3: [1,1,1; 4; 5,5]**

W^{1}≡ W^{2} ≡ W^{3}, W^{4}, W^{5} ≡ W^{6}

W^{1} ≡ λx.F(xx)

X^{1} ≡ W^{1}W^{1} ≅_{keno} λx.F(xx)W^{4} ≅_{keno} F(W^{5}W^{5}) ≅_{keno} FX^{5}

=> X^{1} ≅_{keno} FX^{5}.
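Examples 1-3 differ only in the occurrence pattern chosen for “W”. The keno-abstraction of such patterns can be sketched by renaming the marks by order of first occurrence — a simplified reading of kenogrammatic normal form, used here only to compare the distributions.

```python
def knf(pattern):
    """Keno normal form of an occurrence pattern: rename marks
    by order of first occurrence."""
    seen = {}
    return [seen.setdefault(m, len(seen) + 1) for m in pattern]

ex1 = [1, 1, 1, 4, 4, 4]  # Example 1
ex2 = [1, 1, 1, 4, 1, 1]  # Example 2
ex3 = [1, 1, 1, 4, 5, 5]  # Example 3
print(knf(ex1))                             # [1, 1, 1, 2, 2, 2]
print(knf(ex1) == knf([2, 2, 2, 7, 7, 7]))  # True: same distribution
print(knf(ex1) == knf(ex2))                 # False: different distribution
```

Under this abstraction the concrete marks are irrelevant; only the structure of repetition decides which example a pattern belongs to.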

**Logical interpretation**

While the kenogrammatic use in the case of equivalence still accepts the symmetry of substitutions, the case of morphic similarity abandons this restriction too. Hence, the necessity to replace the same term in a function like “f(xx)" is liberated to the possibility of replacing the repetition of “x” in “f(xx)" with different occurrences of the term “W”.

**Example 4: [1,1,1; 4; 5; 6]**

W^{1}≡ W^{2}≡ W^{3}, W^{4}, W^{5}, W^{6}

W^{1} ≡ λx.F(xx)

X^{1} ≡ W^{1}W^{1} ≅_{keno} λx.F(xx)W^{4} ≅_{sim} F(W^{5}W^{6}) ≅_{sim} FX^{5.6}

=> X^{1} ≅_{sim} FX^{5.6}.

**Example 5: [1,2,3; 4; 2; 3]**

W^{1}, W^{2}, W^{3}, W^{4}, W^{2}, W^{3}

W^{1} ≡ λx.F(xx)

X^{1} ≡ W^{2}W^{3} ≅_{sim} λx.F(xx)W^{4} ≅_{sim} F(W^{2}W^{3}) ≅_{sim} FX^{2.3}

=> X^{1} ≅_{sim} FX^{2.3}.

**Logical interpretation**

Morphic bisimilarity abandons the security of the process of substitution that the identity, equivalence and similarity options establish by presuming the same “length” of the substituted terms.

Hence, the situation “W^{1} ≡ λx.F(xx)" might be replaced by the *bisimilar* term “λx.F(xxx)", delivering not WW but WWW, with the abstraction W^{1}W^{1} ≅_{bis} W^{5}W^{6}W^{7}.

**Example 6: [1,1,1; 4; 5; 6; 7]**

W^{1}≡ W^{2} ≡ W^{3}, W^{4}, W^{5}, W^{6}, W^{7}

W^{1} ≡ λx.F(xx)

X^{1} ≡ W^{1}W^{1} ≅_{bis} λx.F(xxx)W^{4} ≅_{sim} F(W^{5}W^{6}W^{7}) ≅_{sim} FX^{5.6.7}

=> X^{1} ≅_{bis} FX^{5.6.7}.

**Example 7: [1,2,3,3’; 4; 4; 5]**

W^{1}, W^{2}, W^{3}, W^{3’}, W^{4}, W^{4}, W^{5}

W^{1} ≡ λx.F(xxx)

X^{1} ≡ W^{2}W^{3}W^{3’} ≅_{bis} λx.F(xx)W^{5} ≅_{sim} F(W^{6}W^{7}) ≅_{sim} FX^{6.7}

=> X^{1} ≅_{bis} FX^{6.7}.