Rudolf Kaehr, Dr. phil.

Copyright ThinkArt Lab ISSN 2041-4358

Abstract

A first sketch of the basic definitions of a memristive polyLISP. The memristivity of polyLISP refers to the fundamental retrogradness of its recursivity, which is not just an iterative repetition of the “function itself” but a self-definition of the range and character of the operation involved. The prefix “poly” points to polycontexturality: iterability for retrograde functions is “polycontextural”, i.e. it opens up different contextural domains instead of remaining immanently inside a domain closed by its closure conditions, as holds for classical recursivity.

ULTRA DRAFT

*Question*: How to program memristive systems?

**Four fundamental memristive strategies**

The new approach of kenomic formal systems, calculi, machines, and programming paradigms takes two very simple constructions, principles, strategies into account:

**First: Retrograde recursivity**

Whenever something is repeated, it has to be decided as what it shall be repeated. The scope or range of repetition of an element depends fully on the history of former repetitions.

**Second: Enaction**

Whenever something is annihilated, eliminated, erased, or cancelled, its annihilation has to be registered, remembered, memorized at a location of registration. The registration of an annihilation might be involved as a proper object in further iter/alterations of repeatability and enaction.

The *first* principle is realized formally by the strategy of *retrograde recursivity* as it was introduced for keno- and morphogrammatics in the early 70s as a polycontextural answer to the topics of self-referentiality in Second-Order Cybernetics.
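As a minimal sketch (with kenoms encoded as integers numbered by first occurrence, an assumption of this illustration only), retrograde recursivity can be modelled by a successor operation whose set of admissible continuations is read off the history of the sequence:

```python
def successors(kg):
    """All retrograde continuations of a kenogram sequence kg.

    Repeating an already-used kenom is 'iterative'; introducing one
    representative fresh kenom is 'accretive'.  What can be repeated
    is thus decided retrogradely, by the history of the sequence."""
    used = sorted(set(kg))            # kenoms already introduced
    fresh = max(kg, default=0) + 1    # one stand-in for "a new kenom"
    return [kg + [k] for k in used + [fresh]]

successors([1, 2, 1])  # [[1, 2, 1, 1], [1, 2, 1, 2], [1, 2, 1, 3]]
```

For [1, 2, 1] the continuations are two iterative repetitions and one accretive step; the range of the operation grows with the history, unlike a classical recursion with a fixed alphabet.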

The *second* principle is realized by the strategy of *enaction*. Enaction seems to be a recent invention/insight, well motivated by a polycontextural interpretation of the concepts of memristive systems (Chua, Williams) and by a subversive move in the understanding of George Spencer Brown’s *Law of Crossing* beyond semiotics and logic, sketched first around 2010.

Therefore, iterability in general is not only happening as retrograde iter/alteration but as an inscription of memristive enaction too.

Although the operation of enaction is new, it nevertheless seems to have enough cultural background to be accepted as “natural”.
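A toy model of the registration principle (the names and the tuple representation are assumptions of this sketch, not a fixed polyLISP API): erasing an element returns both the reduced sequence and a registry in which the annihilated element survives as a proper object:

```python
def enact_erase(seq, i, registry=()):
    """Erase the element at position i, but register the annihilation.

    The erased element is not lost: it is memorized in `registry`,
    where it remains available as an object for further
    iter/alterations of repeatability and enaction."""
    erased = seq[i]
    return seq[:i] + seq[i + 1:], registry + (erased,)

reduced, registry = enact_erase(('a', 'b', 'c'), 1)
# reduced == ('a', 'c'), registry == ('b',)
```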

There are other important new principles which are omitted at this place, like the principle of *localization* of systems and operations and the principle of *diamond* environments.

**Third: Bifunctoriality**

LISP operations are concatenative. This holds for all LISP dialects. Therefore, all combinations of Lisp operators are faithful to the Lisp operational *closure*.

The combination (composition) of two Lisp operations is a Lisp operation.

∀ op_{i} ∈ Lisp: op(op(...(op)...)) ∈ Lisp

Hence, op_{1}, op_{2} ∈ Lisp => op_{2}(op_{1}) ∈ Lisp.
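The closure condition can be illustrated directly (a sketch in Python standing in for Lisp; `car` and `cdr` here are ordinary list operations):

```python
from functools import reduce

# elementary list operations, closed over lists
car = lambda x: x[0]
cdr = lambda x: x[1:]

def compose(*ops):
    """Composition of operations: the composite is again an operation
    of the same kind, illustrating op2(op1) ∈ Lisp."""
    return reduce(lambda f, g: lambda x: f(g(x)), ops)

cadr = compose(car, cdr)   # car ∘ cdr is again a list operation
cadr(['A', 'B', 'C'])      # 'B'
```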

The new feature of kenoLisp is *bifunctoriality* between two operations and two different Lisp systems.

**Fourth: Dissemination of Lisp over the kenomic matrix**

**Dissemination of bifunctoriality**

The kenomic operator “CONS” as an operator for construction is in general not predefined, as in classical Lisp, to act on atomic terms ruled by the principles of identity and equality. The new approach to a morpho- and kenogrammatic or thematic Lisp enables “CONS” to choose, in the process of application, the *mode* of the further steps of the application by *electing* the data paradigm to be involved. The graphematic possibilities for the data paradigm at hand for now are:

1. the mode of *semiotic* identity with recursivity,

2. the mode of *contextural* complexity with proemial recursivity,

3. the mode of *kenogrammatic* similarity with retrograde recursivity,

4. the *indicational* mode of “topology-free constellations of signs” with recursive enaction, and

5. the mode of *monomorphic* bisimilarity of morphogrammatics with bisimulation and metamorphosis.

Other modes are possible as further realizations of graphematic styles of inscription. Known examples are the deutero- and proto-structures of kenogrammatics. On the other hand, there are fuzzy-logical concepts for the semiotic mode of thematization; and others.

**Domains of application**

The semiotic or *symbolic* mode of thematization is ideal for atomistic binary physical systems as they occur in or as digital computers.

The *contextural* or interactional mode of thematization is ideal for ambiguous complex physical systems as they occur in distributed and interacting digital computer systems and organic systems.

The *kenogrammatical* mode of thematization is ideal for pre-semiotic complex behavioural systems as they occur in memristive physical and cognitive/volitive systems.

The *indicational* mode of thematization is ideal for singular decision systems as they occur in simple actional systems where identity of the agents is relevant but not the order of their appearance.

The *monomorphic* mode of thematization is ideal for metamorphic systems as they occur in complex memristive actional systems.

**Semiotics** a=a, a!=b, with a(bc) = (ab)c

(A)"If the two given tokens of strings have different lengths, then they are different. If they have equal lengths, then go to (B)."

(B) "For each position i from 1 to the common length, check whether the atom at the i-th position of x equals the atom at the i-th position of y. If this is true for all positions i, then the given tokens are equal, otherwise they are different."

IF length(X) = length(Y) and ∀i x_{i}∈X, y_{i}∈Y: x_{i} ≡ y_{i} THEN X = Y.
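Procedures (A) and (B) translate directly into a comparison function (a sketch; string-tokens are modelled as Python strings or lists):

```python
def sem_equal(x, y):
    """Semiotic (symbolic) equality: same length and the same atom
    at every position -- procedures (A) and (B) combined."""
    if len(x) != len(y):       # (A): different lengths => different tokens
        return False
    return all(a == b for a, b in zip(x, y))   # (B): positionwise check
```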

**Indicational** a=a, a!=b, ab = ba, aa = a

(B') "Check whether each atom appears equally often in both string-tokens. If this is the case, then they are equal, otherwise they are different."

IF ∀a: card{i : x_{i} = a} = card{j : y_{j} = a} THEN X = Y
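Check (B') counts occurrences and ignores order; a sketch using a multiset comparison (the further indicational law aa = a, if applied, would reduce this to a comparison of the underlying sets):

```python
from collections import Counter

def ind_equal(x, y):
    """Indicational equality (B'): each atom occurs equally often in
    both string-tokens; the order of appearance is irrelevant (ab = ba)."""
    return Counter(x) == Counter(y)
```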

**Kenogrammatics** a=b, (aa) != (ab), (aa) != (aaa)

(B'') "For each pair i,k, i<k, of positions, check whether within x there is equality between position i and k, and check whether within y there is equality between position i and k. If within both x and y there is equality, or if within both x and y there is inequality, then state equality for this pair of positions, otherwise state inequality for this pair of positions. If for each pair of positions there is equality, then x and y are equal. Otherwise they are not."
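Check (B'') compares only the pattern of coincidences between positions, so 'aba' and 'bab' come out equal; a direct transcription:

```python
def keno_equal(x, y):
    """Kenogrammatic equality (B''): for every pair of positions i < k,
    x shows equality at (i, k) exactly when y does."""
    if len(x) != len(y):
        return False
    n = len(x)
    return all((x[i] == x[k]) == (y[i] == y[k])
               for i in range(n) for k in range(i + 1, n))
```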

**Deutero-Structure**

(B''') "Take an atom a from x, find out the number k of atoms in x equal to a, and check whether in y there is an atom which occurs exactly k times. If not, then x and y are unequal. If yes, then remove the atoms just considered from x and y. If nothing is left, x and y are equal. Otherwise apply (B''') to the remaining string-tokens."
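Check (B''') compares only how often each multiplicity occurs, which can be expressed without the explicit removal loop (a sketch equivalent to the recursive procedure above):

```python
from collections import Counter

def deutero_equal(x, y):
    """Deutero-equality (B'''): x and y agree iff for every
    multiplicity k, both contain the same number of atoms that
    occur exactly k times."""
    return Counter(Counter(x).values()) == Counter(Counter(y).values())
```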

**Comparison**

"The former are invariant w.r.t. permutations of the index set {1,...,n}, while the latter are invariant w.r.t. permutations of the alphabet A.”

Rudolf Matzka, Semiotic Abstractions in the Theories of Gotthard Günther and George Spencer Brown, Munich, May 1993

**Monomorphics** a=b, (aa) != (aaa), (aba) = (abba)

The next feature of the operator “CONS” (also “append”) is defined by the retrograde recursivity of iterability, i.e. the modes of ‘concatenation’ and ‘succession’.

Depending on the hermeneutical process of thematization, the operation “CONS” might choose its mode of realization. It might switch between different styles, or it might stay stable for a chosen possibility of thematization.

In a polycontextural situation it might be preferable to use simultaneously different modes of thematization.

Hence, before any decision for a certain programming paradigm and then for the main topics, a decision, i.e. an *election* of the mode of ‘production’ has to be installed.

An explication of the difference of *selection* (for Lambda Calculus) and *election* (for Contextural Programming) might be found at:

http://works.bepress.com/thinkartlab/20/

Following the clear exposition of the definition of recursive LISP as McCarthy outlined it in his inaugurating paper *“Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I”, April 1960*, I will try to sketch a deconstructive approach to the idea and definitions of a recursive kenomic LISP along these historical guidelines.

**McCarthy:**

**a.** A Class of **Symbolic Expressions**. We shall now define the S-expressions

(stands for symbolic). They are formed by using the special characters

·

)

(

and an infinite set of distinguishable atomic symbols.

**c.** The Elementary **S-functions** and **Predicates**. We introduce the following

functions and predicates:

1. **atom**. atom[x] has the value of T or F according to whether x is an

atomic symbol. Thus

atom [X] = T

atom [(X · A)] = F

2. **eq**. eq [x;y] is defined if and only if both x and y are atomic.

eq [x; y]= T if x and y are the same symbol, and eq [x; y] = F otherwise.

Thus eq [X; X] = T

eq [X; A] = F

eq [X; (X · A)] is undefined.

3. **car**. car[x] is defined if and only if x is not atomic. car [(e1 · e2)] = e1.

Thus car [X] is undefined.

car [(X · A)] = X

car [((X · A) · Y )] = (X · A)

4. **cdr**. cdr [x] is also defined when x is not atomic. We have cdr

[(e1 · e2)] = e2. Thus cdr [X] is undefined.

cdr [(X · A)] = A

cdr [((X · A) · Y )] = Y

5. **cons**. cons [x; y] is defined for any x and y. We have

cons [e1; e2] = (e1 · e2). Thus

cons [X; A] = (X · A)

cons [(X · A); Y ] = ((X · A) · Y )

car, cdr, and cons are easily seen to satisfy the relations

car [cons [x; y]] = x

cdr [cons [x; y]] = y

cons [car [x]; cdr [x]] = x, provided that x is not atomic.

2. **subst** [x; y; z]. This function gives the result of substituting the S-

expression x for all occurrences of the atomic symbol y in the S-expression z.

It is defined by

subst [x; y; z] = [atom [z] --> [eq [z; y]--> x; T --> z];

T --> cons [subst [x; y; car [z]]; subst [x; y; cdr [z]]]]

As an example, we have

subst[(X · A);B; ((A · B) · C)] = ((A · (X · A)) · C)
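McCarthy's subst transcribes directly when dotted pairs (e1 · e2) are modelled as Python 2-tuples (a sketch of this single definition, not of full LISP):

```python
def subst(x, y, z):
    """subst[x; y; z]: substitute the s-expression x for every
    occurrence of the atomic symbol y in the s-expression z."""
    if not isinstance(z, tuple):                  # atom[z]
        return x if z == y else z                 # eq[z; y] -> x ; T -> z
    return (subst(x, y, z[0]), subst(x, y, z[1]))

subst(('X', 'A'), 'B', (('A', 'B'), 'C'))
# (('A', ('X', 'A')), 'C') -- McCarthy's example above
```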

3. **equal** [x; y]. This is a predicate that has the value T if x and y are the

same S-expression, and has the value F otherwise. We have

equal [x; y] = [atom [x] ∧ atom [y] ∧ eq [x; y]] ∨

[¬ atom [x] ∧ ¬ atom [y] ∧ equal [car [x]; car [y]] ∧ equal [cdr [x]; cdr [y]]].

The following functions are useful when S-expressions are regarded as lists.

1. **append** [x;y].

append [x; y] = [null[x] --> y; T --> cons [car [x]; append [cdr [x]; y]]]

An example is

append [(A, B); (C, D, E)] = (A, B, C, D, E)

2. **among** [x;y]. This predicate is true if the S-expression x occurs among the elements of the list y.

We have

among[x; y] = ¬ null[y] ∧ [equal[x; car[y]] ∨ among[x; cdr[y]]].

3. **pair** [x;y]. This function gives the list of pairs of corresponding elements of the lists x and y.

We have

pair[x; y] = [null[x] ∧ null[y] --> NIL; ¬ atom[x] ∧ ¬ atom[y] --> cons[list[car[x]; car[y]]; pair[cdr[x]; cdr[y]]]].

An example is

pair[(A,B,C); (X, (Y, Z), U)] = ((A,X), (B, (Y, Z)), (C, U)).

John McCarthy, Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I, MIT, April 1960

http://www-formal.stanford.edu/jmc/recursive.pdf

Based on McCarthy’s presentation of LISP some first steps of its deconstruction shall follow.

Atoms become *kenoms*, and patterns of kenoms, i.e. *monomorphies*, build *morphograms*.

1. **kenom**. kenom[x] has the value of T_{i} or F_{i} according to whether x is a kenomic symbol.

Thus

kenom [X] = T_{i}

kenom [(X · A)] = F_{i}, i, j ∈ s(m)

**Monomorphy**

For eq [X, A] = T --> monomorph[X . A] = T

2. **eq-kenom**. eq-kenom [x;y] is defined if and only if both x and y are kenomic.

eq-kenom [x; y] = T_{i} if x and y are of the same pattern, and eq-kenom [x; y] = F_{i} otherwise.

Thus eq-kenom [X; X] = T_{i}

X = pattern(A) --> eq-kenom [X; A] = T_{i}

eq-kenom [X; (X · A)] is undefined.

3. **equal** [x; y] = [atom [x] ∧ atom [y] ∧ eq [x; y]] ∨

[¬ atom [x] ∧ ¬ atom [y] ∧ equal [car [x]; car [y]] ∧ equal [cdr [x]; cdr [y]]].

**Bisimulation**

eq-kenom [X; (X · A)] = U --> ∃ bisimul [X; (X · A)] = T

**Examples**

eq-kenom [a, a] = eq-kenom [a, b] = eq-kenom [b, z] = T, i.e. eq-kenom[kenom_{1}, kenom_{2}] = T

eq-kenom [ab, ba] =T

non-eq-kenom [a, ab]

eq-kenom[aa, bb]=T --> monomorph[aa . aa] = T

eq-kenom [aba, abba] = U --> ∃ eq-bisimul [aba; abba] = T

**Substitution**

3. subst [x; y; z] = [atom [z] --> [eq [z; y]--> x; T --> z];

T --> cons [subst [x; y; car [z]]; subst [x; y; cdr [z]]]]

As an example, we have

subst[(X · A);B; ((A · B) · C)] = ((A · (X · A)) · C).


**Kenomic substitution**

H_{1} =>

subst[m_{1}, h_{1}, H_{1}] = subst[m_{2}, h_{1}, H_{2}]

3. **car**. car[x] is defined if and only if x is not atomic. car [(e1 · e2)] = e1.

Thus car [X] is undefined.

car [(X · A)] = X

car [((X · A) · Y )] = (X · A)

=>

CAR [(X . A)] = X

CAR [((X . A) · Y)] = (X . A).

**Enactional CAR**

CAR_{EN} [(X . A)] = (X . ⊥) | A

CAR_{EN} [((X . A) · Y)] = ((X . A) . ⊥) | Y

4. **cdr**. cdr [x] is also defined when x is not atomic. We have

cdr[(e1 · e2)] = e2. Thus cdr [X] is undefined.

cdr [(X · A)] = A

cdr [((X · A) · Y )] = Y

=>

CDR [(X . A)] = A

CDR [((X . A) · Y)] = Y

**Enactional CDR**

CDR_{EN} [(X . A)] = (⊥ . A) | X

CDR_{EN} [((X . A) · Y)] = (⊥ . Y) | (X . A)
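The draft's notation is only partly recoverable, so the following sketch fixes one reading: the eliminated part is marked by ⊥ at its old place and registered after the bar. All names here are assumptions of this illustration, not a settled polyLISP definition:

```python
BOTTOM = '⊥'   # marks the place of an annihilated term

def car_en(pair):
    """Enactional CAR: reduce to the car (RED), but replicate the
    erased cdr into a registration (REPL) instead of discarding it.
    Returns (result with ⊥-marked gap, registration)."""
    head, tail = pair
    return (head, BOTTOM), tail

def cdr_en(pair):
    """Enactional CDR: reduce to the cdr, register the erased car."""
    head, tail = pair
    return (BOTTOM, tail), head

car_en(('X', 'A'))   # (('X', '⊥'), 'A')
cdr_en(('X', 'A'))   # (('⊥', 'A'), 'X')
```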

5. **cons**. cons [x; y] is defined for any x and y. We have

cons [e1; e2] = (e1 · e2). Thus

cons [X; A] = (X · A)

cons [(X · A); Y ] = ((X · A) · Y )

=>

**Kenomic CONS**

X = (A) --> CONS [X; A] = (X A)|(X B)

X = (X . A) --> CONS [(X · A); Y ] = ((X · A) Y)|((X · A) Z)

Enaction was previously defined as a combination of replication and elimination. This fits together with an understanding of enactional operations as composed of replication and elimination in the sense of CDR and CAR in LISP. Replication, like transposition, is an operator belonging to the so-called super-operators ID, PERM, REPL, RED and BIF of polycontextural logic. Super-operators are applicable to all internal LISP terms and operators, hence not only to CAR and CDR but to CONS, APPEND, etc. too.

In this context, the LISP operators CAR and CDR are modeled by the polycontextural operation of reduction (RED), while the new LISP operation of enaction is modeled by replication (REPL). Both together define the enactional CDR and CAR, albeit based now not on lists but on morphograms.

Classical operations like CAR, CDR and CONS are defined by the identity mapping ID. Hence

ID(CAR) = CAR, ID(CDR) = CDR and ID(CONS) = CONS. Thus, ID : X → X with X = {CAR, CDR, CONS}.

Both CAR_{EN} and CDR_{EN} are defined by CAR and CDR and the replicative operation of reflectional enaction.

The idea of a kenomic LISP is based on two decisions. One is for a transition from lists to *morphograms*, the second is a dissemination of symbolic LISP over the* kenomic matrix* which is enabling new 'trans-contextural' operators, like mediation, replication and transposition.

Nevertheless, the new morphogram-based programming paradigm, polyLISP, kenoLISP or morphLISP, gets its first introduction as a mimickry of the methods of the list-based LISP.

LISP= [LISP; ops, sops; n, m∈N]

**Operators**

ops:

CONS, CDR, CAR.

Types of action on a kenomic object are either iterative, accretive, transposive, enactive or metamorphic.

Despite the name “kenomic Lisp” the proposed kenomic Lisp is not dealing with lists but with kenomic patterns, i.e. morphograms. This has consequences for the concept of recursivity, crucial to Lisp, and applies to different aspects of the newly discovered features of retrograde recursivity.

Parallelism, concurrency and simultaneity of processes, actions and interactions are primordial in kenoLISP and morphoLISP. The feature of parallelism is obvious for polycontextural systems. Each contexture in a polycontextural compound has structural space for its own formalism and therefore for its own programming languages.

For kenomic and morphogrammatic systems the case is slightly less obvious. It becomes ‘natural’ with the understanding of kenomic operations. Kenomic operations are from the very beginning ‘dis-contextural’, delivering simultaneously different results.

A kind of self-referentiality is crucial for classical LISP especially for the definition of recursivity.

But this kind of self-referentiality is not based on a chiastic interplay of terms and operations but on the concept of a “self”-application of functions to their previous values.

The operation of enaction was introduced as a positive interpretation of the elimination of terms like with the double crossing in Spencer Brown’s calculus of indication: {{ }} = . Formal enaction accepts the elimination of the term but recalls it on another level of the formalism. Formal enaction is therefore understood as a memristive cancellation of terms.

How are productions and their reductions related?

Obviously, there is an asymmetry between formula and solution involved.

"S-expressions are the fundamental data objects of LISP. They consist of (1) atoms and (2) CONS-cells.

A list is either (1) the atom NIL or (2) the result of CONSing an s-expression onto the beginning of an existing list."

”...the set of s-expressions is closed under CONS, ...lists are s-expressions."

It follows that s-expressions are non-ambiguous.

Kenomic expressions are not covered by a unique tree but by several “parallel” trees. Therefore, kenomic expressions have a set of trees, or a forest of trees, as their representation.

There is an asymmetry between operators (CONS, CAR, CDR) and data objects.

cons(cons(ab), a) --> (aba), (abb), (abc).

On the other hand we have:

reverse CONS: (aba), (abb), (abc) --> cons(cons(ab), a).

This offers a *reduction method* from objects to operators.

The syntactic structure of the example *“cons(cons(ab), a)"* is simply a singular tree while the kenomic structure is - in this case - a triple of trees.

Hence, morphograms [aba], [abb], [abc] are operationally reducible to the form “cons(cons(ab), a)". That is, the three different morphograms have a common singular operational representation in “cons(cons(ab), a)". Or more generally, the operational CONS-representation is “CONS(CONS(X Y) X)".
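The reduction from objects to operators can be sketched via kenogrammatic normalization (the function names here are assumptions of this sketch): a family of morphograms shares one CONS-representation exactly when the segments before the last CONS-step have the same pattern.

```python
def normalize(mg):
    """Kenogrammatic normal form: number kenoms by first occurrence,
    so 'aba', 'bab', 'xyx' all map to [1, 2, 1]."""
    seen, out = {}, []
    for k in mg:
        seen.setdefault(k, len(seen) + 1)
        out.append(seen[k])
    return out

def reverse_cons(morphograms):
    """Reduce a family of morphograms to one operator representation:
    return the common pattern of their initial segments, or None if
    they do not share one."""
    prefixes = {tuple(normalize(mg[:-1])) for mg in morphograms}
    return prefixes.pop() if len(prefixes) == 1 else None

reverse_cons(['aba', 'abb', 'abc'])  # (1, 2): common form CONS(CONS(X Y) Z)
```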

1. **append** [x;y].

append [x; y] = [null[x] --> y; T --> cons [car [x]; append [cdr [x]; y]]]

Bifunctoriality of polyLISP is a new feature of memristive interchangeability, i.e. parallelism of compositions. Hence, interchangeability of operations is crucial for kenomic systems too.

With this connection to bifunctoriality established, the whole elaborated apparatus of polycontextural interchangeability of operations might be applied to develop a complex memristive polyLISP.
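Interchangeability here is the familiar interchange law of parallel and sequential composition, (f1 ∘ g1) ‖ (f2 ∘ g2) = (f1 ‖ f2) ∘ (g1 ‖ g2). A minimal sketch over two independent "systems" (the pairing stands in for the bifunctorial tensor; this is an illustration, not polycontextural formalism itself):

```python
def compose(f, g):
    """Sequential composition within one system."""
    return lambda x: f(g(x))

def par(f, g):
    """Parallel composition across two systems (a tensor-like pairing)."""
    return lambda p: (f(p[0]), g(p[1]))

f1, g1 = (lambda n: n + 1), (lambda n: 2 * n)        # system 1
f2, g2 = (lambda s: s + '!'), (lambda s: s.upper())  # system 2

lhs = par(compose(f1, g1), compose(f2, g2))((3, 'ok'))
rhs = compose(par(f1, f2), par(g1, g2))((3, 'ok'))
assert lhs == rhs == (7, 'OK!')   # the interchange law holds
```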

*"Quote is a one-argument operation that stops the evaluation process before it reaches its argument.” *(Stark)

QUOTE data-expression --> data-expression.

QUOTE, again, is involved in iterability, and therefore the possibility of distinguishing between iterative and accretive quotation becomes accessible.

**Polycontextural QUOTE**

"In a polycontextural setting we are free to choose a more flexible interplay between quotation and interpretation. To quote means to put the quoted sentence on a higher level of a reflectional order or to another heterarchical actional level. We are not forced to limit ourselves to any kind of the well-known intra-contextural meta-language hierarchies. Those local procedures are nevertheless not excluded at all but localized to their internal place.

To start the argumentation I simply map an index *i* to the sentences. A quotation is augmenting and an interpretation (evaluation) is reducing its value, say by 1.“

http://www.thinkartlab.com/pkl/lola/Godel_Games-short.pdf

http://sds.podval.org/self-ref.html
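The indexing scheme of the quoted passage can be sketched as a pair (expression, reflectional index): QUOTE augments the index, interpretation reduces it. This is a toy model of the quoted idea, not Stark's or Kaehr's formalism:

```python
def pc_quote(expr_i):
    """Polycontextural QUOTE: lift the expression one reflectional level."""
    expr, i = expr_i
    return (expr, i + 1)

def pc_eval(expr_i):
    """Interpretation (evaluation): reduce the reflectional index by 1,
    never below the base level 0."""
    expr, i = expr_i
    return (expr, max(i - 1, 0))

pc_eval(pc_quote(('fruit-flies', 0)))  # ('fruit-flies', 0)
```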

One of the most striking properties of LISP is its ability for self-referential definition crucial for the definition of recursion and a whole trend of AI programming.

Under the title “*Self-processing*”, Stark writes:

"LISP’s ability to process itself is a direct consequence of (1) the representation of the language in its data structure, (2) the simple algebraic syntax, and (3) the presence of functions such as QUOTE, DEFUN, FUNCTION, EVAL, and APPLY.” (W. Richard Stark, LISP, Lore, and Logic, 1990, p. 92)

Those features are naturally implemented in the paradigm of kenomic LISP. The functions “QUOTE” and “EVAL” might lead to the accessibility of new self-referential constructions. (cf. Gödel Games)

But with the development of kenomic definitions of the basic operations of LISP, CAR, CDR and CONS, even more direct constructions of self-referentiality are in sight.

In fact, the kenomic retrograde recursivity of the basic operators is uncovered as a fundamentally self-referential notion and construction.

As a consequence, the neat reflectional construction of self-referentiality proposed in my small paper *“Gödel’s Games”* gets a more direct realization on an even more profound level of the very understanding of iterability itself.

An important advantage of this kenomic concept of self-referentiality is the fact that it is structurally determined in a way that avoids all those annoying misreadings and misunderstandings of the term “self” in all those self-referential configurations and speculations.

On the basis of such fundamental properties of self-referentiality, constructions like *enaction* support further aspects of operational self-application.

Fundamental theorem for pure LISP.

*"Every algorithmically computable (in the informal sense) function can be computed by a program in pure LISP."*

The new question is, can every memristive function of polyLISP be computed by a program in pure LISP?

In other words, is there a

There is always a possibility of

**List of translations and simulations**

| Translation | Simulation |
| --- | --- |
| Kenom to atomic symbol | Equivalence class of different symbols |
| Morphogram to list | Equivalence class of lists |
| Retrogradness to iterativity | Double recursion with conditions |
| Enaction to elimination | Elimination plus appending |
| Poly- to monocontexturality | Multi-sets of lists |
| Polyverse to universe | Multi-sorted domains |
| Super-additivity to composition | Augmented composition |
| Polycontextural logic to logic | Multi-valued or modal logics |

and so on.

(aba) = (abba)

a=b, (aa) != (aaa)

a!=b, (ab) = (ba), (aa) = (a)

comp: [(aa) (b) (cc)] --> [aabcc]

decomp: [aabcc] --> [(aa) (b) (cc)]

(aba) = (abba)

According to the quadralectics of diamond strategies, the simply oriented approach to a kenomic LISP might be distributed and located into the quadralectics of distinctions.

From the circularity of a list to a chiastic resolution of self-referentiality.

*"Ambiguity is the wild child of language interpretation. Whether from the point of view of the philosopher, linguist, psychologist, lexicographer, or computer scientist, ambiguity problems have relentlessly resisted taming. Lexical ambiguity, or polysemy, arises when a word, or a phrase, is associated in the language system with more than one meaning."*

http://www.aaai.org/Papers/Symposia/Spring/1995/SS-95-01/SS95-01-001.pdf

http://193.6.132.75/honlap/whatispolysemy.pdf

**A simple application: “Fruit flies like a banana.”**

What’s the meaning of an ambiguous sentence like “Fruit flies like a banana.”?

As it is well known, the sentence has, at least, two meanings:

1. “the insects called fruit flies are positively disposed towards bananas.”

2. “Something called fruit is capable of the same type of trajectory as a banana.”


(Alan P. Parkes, Introduction to Languages, Machines and Logic, 2002, p. 42)

Ambiguity in languages is reflected in the existence of more than one parse tree for the same sentence.

There are two strategies to deal with ambiguity:

1. Disambiguation and

2. Mediation and Bisimulation.

The decision necessary for the purpose of formal languages and computation is *disambiguation*. Disambiguation is eliminating polysemy and ambiguity of sentences.

Hence, both meanings of the sentence might be chosen and used but separately or just one meaning might be involved in further steps of reasoning, depending on a further context.

On the other hand, the strategy of *mediation* is supporting the polysemy of ambiguous sentences. As the example above is set, there is no context which could help to separate the meanings and to select one meaning only as prior to the other.

Hence, the sentence as such has both meanings at once. Therefore its logical status demands three, and not only two, logical places to realize its polysemic ambiguity. Two loci are necessary for the separated meanings, and one more for the sentence as such, which has both meanings at once. This third position is not reducible to a syntactical choice contrasting the semantics of the sentence, because the sentence also has syntactically two disjunct parse trees. More importantly, the focus of the understanding of the ambiguous sentence is on polysemy, i.e. on the semantic ambiguity of its meaning. A further step will show that the purely semantic approach offers no possibility for a ‘re-solution’ of the ambiguity problem of polysemic sentences. What is needed additionally is a pragmatic approach, here formalized with the application of a morphogrammatic approach.

Thus, the third position, which places the double meaning of the sentence, shall be inscribed as the morphogram of the double sentence. A morphogram might be understood as an inscription of pre-logical ‘meaning’. It therefore has to be able to deal consistently with semantic ambiguity without running into logical contradictions.

**Binary tree analysis**

The two binary-tree parses of “Fruit flies like a banana.” (the original tree diagrams survive only in fragments; bracket form):

Parse 1: [[Fruit flies] [like [a banana]]]

Parse 2: [Fruit [flies [like [a banana]]]]

The operation of de/melting (or fusing together) is not anymore a formal operation in the sense of the lambda calculus or LISP. For both, the lambda calculus and LISP, the construct with its ‘composed’ meaning “Fruit flies” is a singular term, i.e. a name for an ontological entity covered by the taxonomy of animals. Nevertheless, this singular name is synthesized, fused together, by two domains, the domain of fruits and the domain of flies. This enables a specific decomposition which is not part of the sentence as itself and its double meaning.

Hence, a decomposition of such a term needs a new abstraction, paired with a new operator “de/fus”.

The result of the exercise shows a mediation of both meanings of the sentence and a kind of a morphogrammatic deep-structure of the double-meaning of the sentence as its ‘meaning’ as such. This deep-structure is ‘unifying’ the two observer-dependent readings of the sentence into an observer-independent inscription of its double-meaning, as its morphogram.

**Left-associated tree analysis**

The two left-associated analyses of “Fruit flies like a banana.” (the original tree diagrams survive only in fragments; bracket form):

Reading 1 (with “Fruit-flies” fused): [[[Fruit-flies] like] a-banana]

Reading 2 (with “Fruit” alone): [[[[Fruit] flies] like] a-banana]

Both sentences are analyzed by a time-linear principle of possible continuations (Hausser).

In contrast, the third analysis breaks in some respect the time-linearity and introduces a **planar** extension of the ambiguous terms. Such planar constructions, which hold conflicting and antinomic terms together, are covered by morphograms. Lists are not prepared to cover ambiguous terms because they are defined by atomic terms and linear constructions.