Memristics' LISPs

How to program memristive systems?

Rudolf Kaehr, Dr. phil.

Copyright ThinkArt Lab ISSN 2041-4358



A first sketch of the basic definitions of a memristive polyLISP. The memristivity of the polyLISP refers to the fundamental retrogradeness of its recursivity, which is not just an iterative repetition of the “function itself” but a self-definition of the range and character of the operation involved. The prefix “poly” points to polycontexturality, i.e. to the fact that iterability for retrograde functions is “polycontextural”: it opens up different contextural domains instead of remaining immanently inside a domain closed by its closure conditions, as holds for classical recursivity.


1.  Styles of thematizations

1.1.  Four formal principles of memristics

Question: How to program memristive systems?

Four fundamental memristic strategies
The new approach to kenomic formal systems, calculi, machines and programming paradigms takes four very simple constructions, principles, strategies into account:

First: Retrograde recursivity
Whenever something is repeated, it has to be decided as what it shall be repeated. The scope or range of a repetition of an element depends fully on the history of former repetitions.
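The retrograde principle can be put in a few lines of Python. This is a hypothetical sketch only: kenos are coded as integers, and the function name `kenomic_successors` is my own.

```python
def kenomic_successors(mg):
    """All admissible continuations of a morphogram mg.

    Retrograde recursivity: what can be repeated next is decided by
    the history -- any keno already used, or exactly one new keno.
    """
    if not mg:
        return [[1]]                       # the first keno opens the game
    options = sorted(set(mg)) + [max(mg) + 1]
    return [mg + [k] for k in options]

# The range of repetition grows with the history:
print(kenomic_successors([1]))        # [[1, 1], [1, 2]]
print(kenomic_successors([1, 2]))     # [[1, 2, 1], [1, 2, 2], [1, 2, 3]]
```

In a classical recursion the successor operation is fixed once and for all; here its range is re-decided at every step of the repetition.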

Second: Enaction
Whenever something is annihilated, eliminated, erased or cancelled, its annihilation has to be registered, remembered, memorized at a location of registration. The registration of an annihilation might itself be involved as a proper object in further iter/alterations of repeatability and enaction.

The first principle is realized formally by the strategy of retrograde recursivity as it was introduced for keno- and morphogrammatics in the early 70s as a polycontextural answer to the topics of self-referentiality in Second-Order Cybernetics.

The second principle is realized by the strategy of enaction. Enaction seems to be a recent invention/insight, well motivated by a polycontextural interpretation of the concepts of memristive systems (Chua, Williams) and by a subversive move in the understanding of George Spencer Brown’s Law of Crossing beyond semiotics and logic, sketched first around 2010.

Therefore, iterability in general is not only happening as retrograde iter/alteration but as an inscription of memristive enaction too.

Although the operation of enaction is new, it nevertheless seems to have enough cultural background to be accepted as “natural”.

There are other important new principles which are omitted at this place, like the principle of localization of systems and operations and the principle of diamond environments.

Third: Bifunctoriality
LISP operations are concatenative. This holds for all LISP dialects. Therefore, all combinations of Lisp operators are faithful to the Lisp operational closure.

The combination (composition) of two Lisp operations is a Lisp operation.

∀ op_i ∈ Lisp: op_n(op_n-1(...(op_1)...)) ∈ Lisp

Hence, op1, op2 ∈ Lisp  => op2(op1) ∈ Lisp.

Structural law of associativity
op_1 ∘ (op_2 ∘ op_3) = (op_1 ∘ op_2) ∘ op_3

The new feature of kenoLisp is bifunctoriality between two operations and two different Lisp systems.

 op  _ (1, 2) ∈ Lisp  _ 1, op  _ (3, 4) ∈ Lisp  _  ...     4                1                      3               2                                    4

Fourth: Dissemination of Lisp over the kenomic matrix


Dissemination of bifunctoriality

Transpositional composition

1.2.  Styles of thematizations


The kenomic operator “CONS” as an operator for construction is in general not predefined, as in classical Lisp, to act on atomic terms which are ruled by the principles of identity and equality. The new approach to a morpho- and kenogrammatic or thematic Lisp enables “CONS” to choose, in the process of application, the mode of the further steps of the application by electing the data paradigm to be involved. The graphematic possibilities for the data paradigm at hand are, for now:
1. the mode of semiotic identity with recursivity,
2. the mode of contextural complexity with proemial recursivity,
3. the mode of kenogrammatic similarity with retrograde recursivity,
4. the indicational mode of “topology-free constellations of signs” with recursive enaction, and
5. the mode of monomorphic bisimilarity of morphogrammatics with bisimulation and metamorphosis.

Other modes are possible as further realizations of graphematic styles of inscription. Known examples are the deutero- and proto-structure of kenogrammatics. On the other side, there are fuzzy-logical concepts for the semiotics mode of thematization; and others.

THEMATIZATION : a, a, b, c --> (SET (a, a, b, c)), ...

Domains of application
The semiotic or symbolic mode of thematization is ideal for atomistic binary physical systems as they occur in or as digital computers.

The contextural or interactional mode of thematization is ideal for ambiguous complex physical systems as they occur in distributed and interacting digital computers and organic systems.

The kenogrammatical mode of thematization is ideal for pre-semiotic complex behavioural systems as they occur in memristive physical and cognitive/volitive systems.

The indicational mode of thematization is ideal for singular decision systems as they occur in simple actional systems where identity of the agents is relevant but not the order of their appearance.

The monomorphic mode of thematization is ideal for metamorphic systems as they occur in complex memristive actional systems.

1.2.1.  Graphematic systems

Semiotics               a=a, a!=b, with a(bc) = (ab)c
(A)"If the two given tokens of strings have different lengths, then they are different. If they have equal lengths, then go to (B)."
(B) "For each position i from 1 to the common length, check whether the atom at the i-th position of x equals the atom at the i-th position of y. If this is true for all positions i, then the given tokens are equal, otherwise they are different."

IF length(X) = length(Y) AND ∀i: x_i ∈ X, y_i ∈ Y: x_i ≡ y_i THEN X = Y.

Indicational    a=a, a!=b, ab = ba, aa = a
(B') "Check whether each atom appears equally often in both string-tokens. If this is the case, then they are equal, otherwise they are different."

∀i, j: x_i ∈ X, y_j ∈ Y: IF {x_i} = {y_j} THEN X = Y

Kenogrammatics     a=b, (aa) != (ab), (aa) != (aaa)
(B'') "For each pair i,k, i<k, of positions, check whether within x there is equality between position i and k, and check whether within y there is equality between position i and k. If within both x and y there is equality, or if within both x and y there is inequality, then state equality for this pair of positions, otherwise state inequality for this pair of positions. If for each pair of positions there is equality, then x and y are equal. Otherwise they are not."

(B''') "Take an atom a from x, find out the number k of atoms in x equal to a, and check whether in y there is an atom which occurs exactly k times. If not, then x and y are unequal. If yes, then remove the atoms just considered from x and y. If nothing is left, x and y are equal. Otherwise apply B''' to the remaining string-tokens."

"The former are invariant w.r.t. permutations of the index set {1,...,n}, while the latter are invariant w.r.t. permutations of the alphabet A.”

Monomorphics   a=b, (aa) != (aaa), (aba) = (abba)
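The first three identity tests, (A)/(B), (B') and (B''), can be made concrete in Python. This is a sketch only, with hypothetical function names: the semiotic test compares positions, the indicational test frequencies, and the kenogrammatic test the pattern of repetitions.

```python
from collections import Counter

def sem_eq(x, y):
    """(A)/(B): same length and the same atom at every position."""
    return len(x) == len(y) and all(a == b for a, b in zip(x, y))

def ind_eq(x, y):
    """(B'): every atom appears equally often in both string-tokens."""
    return Counter(x) == Counter(y)

def keno_eq(x, y):
    """(B''): positions i < k agree within x iff they agree within y.
    Equivalent shortcut: rename kenos by order of first occurrence."""
    def pattern(s):
        seen = {}
        return [seen.setdefault(c, len(seen)) for c in s]
    return pattern(x) == pattern(y)

print(sem_eq("ab", "ab"), sem_eq("ab", "ba"))     # True False
print(ind_eq("ab", "ba"))                         # True
print(keno_eq("aa", "bb"), keno_eq("aa", "ab"))   # True False
```

As the quotation below states, the indicational tests are invariant under permutations of the index set, while `keno_eq` is invariant under permutations of the alphabet: two tokens are keno-equal iff they induce the same partition of positions.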

The next feature of the operator “CONS” (also “append”) is defined by the retrograde recursivity of iterability, i.e. the modes of ‘concatenation’ and ‘succession’.

Depending on the hermeneutical process of thematization, the operation “CONS” might choose its mode of realization. It might switch between different styles, or it might stay stable within a chosen possibility of thematization.

In a polycontextural situation it might be preferable to use different modes of thematization simultaneously.

Hence, before any decision for a certain programming paradigm and then for the main topics, a decision, i.e. an election of the mode of ‘production’, has to be installed.
An explication of the difference of selection (for Lambda Calculus) and election (for Contextural Programming) might be found at:

2.  Recursive symbolic Lisp

2.1.  McCarthy’s recursive LISP

Following the clear exposition of the definition of recursive LISP that McCarthy gave in his inaugurating paper “Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I” (April 1960), I will try to sketch a deconstructive approach to the idea and definitions of a recursive kenomic LISP along these historical guidelines.

a. A Class of Symbolic Expressions. We shall now define the S-expressions
   (stands for symbolic). They are formed by using the special characters
and an infinite set of distinguishable atomic symbols.

c. The Elementary S-functions and Predicates. We introduce the following
functions and predicates:

1. atom. atom[x] has the value of T or F according to whether x is an
atomic symbol. Thus
atom [X] = T            
atom [(X · A)] = F    

2. eq. eq [x;y] is defined if and only if both x and y are atomic.
eq [x; y]= T if x and y are the same symbol, and eq [x; y] = F otherwise.

Thus eq [X; X] = T
eq [X; A] = F
eq [X; (X · A)] is undefined.

3. car. car[x] is defined if and only if x is not atomic. car [(e1 · e2)] = e1.
Thus car [X] is undefined.
car [(X · A)] = X
car [((X · A) · Y )] = (X · A)

4. cdr. cdr [x] is also defined when x is not atomic. We have cdr
[(e1 · e2)] = e2. Thus cdr [X] is undefined.
cdr [(X · A)] = A
cdr [((X · A) · Y )] = Y

5. cons. cons [x; y] is defined for any x and y. We have
cons [e1; e2] = (e1 · e2). Thus
cons [X; A] = (X · A)
cons [(X · A); Y ] = ((X · A) · Y )

car, cdr, and cons are easily seen to satisfy the relations
car [cons [x; y]] = x
cdr [cons [x; y]] = y
cons [car [x]; cdr [x]] = x, provided that x is not atomic.
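These primitives and their relations can be modelled directly, e.g. in Python with tuples as dotted pairs. A sketch only; the names mirror McCarthy's M-expressions.

```python
def atom(x):
    """An S-expression is atomic iff it is not a dotted pair."""
    return not isinstance(x, tuple)

def cons(x, y): return (x, y)
def car(x): return x[0]      # defined only for non-atomic x
def cdr(x): return x[1]      # defined only for non-atomic x
def eq(x, y): return atom(x) and atom(y) and x == y

# The three relations:
x, y = 'X', ('X', 'A')
print(car(cons(x, y)) == x)            # True
print(cdr(cons(x, y)) == y)            # True
print(cons(car(y), cdr(y)) == y)       # True, since y is not atomic
```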

2. subst [x; y; z]. This function gives the result of substituting the S-
expression x for all occurrences of the atomic symbol y in the S-expression z.
It is defined by

subst [x; y; z] = [atom [z] --> [eq [z; y]--> x; T --> z];
T --> cons [subst [x; y; car [z]]; subst [x; y; cdr [z]]]]

As an example, we have
subst[(X · A);B; ((A · B) · C)] = ((A · (X · A)) · C)
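The definition of subst translates almost verbatim into Python (a sketch; Python conditionals play the role of McCarthy's conditional expression [p1 --> e1; ...]):

```python
def atom(z):
    return not isinstance(z, tuple)

def subst(x, y, z):
    """Substitute S-expression x for every occurrence of atom y in z."""
    if atom(z):
        return x if z == y else z
    return (subst(x, y, z[0]), subst(x, y, z[1]))

# McCarthy's example: subst[(X . A); B; ((A . B) . C)] = ((A . (X . A)) . C)
print(subst(('X', 'A'), 'B', (('A', 'B'), 'C')))
# (('A', ('X', 'A')), 'C')
```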

3. equal [x; y]. This is a predicate that has the value T if x and y are the
same S-expression, and has the value F otherwise. We have

equal [x; y] = [atom [x] ∧ atom [y] ∧ eq [x; y]]
∨ [¬ atom [x] ∧ ¬ atom [y] ∧ equal [car [x]; car [y]] ∧ equal [cdr [x]; cdr [y]]].

The following functions are useful when S-expressions are regarded as lists.

1. append [x;y].
append [x; y] = [null[x] --> y; T --> cons [car [x]; append [cdr [x]; y]]]

An example is
append [(A, B); (C, D, E)] = (A, B, C, D, E)

2. among [x;y]. This predicate is true if the S-expression x occurs among the elements of the list y.
We have

among[x; y] = ¬ null[y] ∧ [equal[x; car[y]] ∨ among[x; cdr[y]]].

3. pair [x;y]. This function gives the list of pairs of corresponding elements of the lists x and y.
We have

pair[x; y] = [null[x] ∧ null[y] --> NIL; ¬atom[x] ∧ ¬atom[y] --> cons[list[car[x]; car[y]]; pair[cdr[x]; cdr[y]]]].

An example is
pair[(A,B,C); (X, (Y, Z), U)] = ((A,X), (B, (Y, Z)), (C, U)).
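The three list functions can likewise be sketched over Python lists (NIL as the empty list; names mirror the M-expressions):

```python
def append(x, y):
    """append[x; y] = [null[x] -> y; T -> cons[car[x]; append[cdr[x]; y]]]"""
    return y if not x else [x[0]] + append(x[1:], y)

def among(x, y):
    """True iff S-expression x occurs among the elements of list y."""
    return bool(y) and (x == y[0] or among(x, y[1:]))

def pair(x, y):
    """List of pairs of corresponding elements of the lists x and y."""
    return [] if not x else [[x[0], y[0]]] + pair(x[1:], y[1:])

print(append(['A', 'B'], ['C', 'D', 'E']))   # ['A', 'B', 'C', 'D', 'E']
print(among('B', ['A', 'B', 'C']))           # True
print(pair(['A', 'B', 'C'], ['X', ['Y', 'Z'], 'U']))
# [['A', 'X'], ['B', ['Y', 'Z']], ['C', 'U']]
```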

John McCarthy, Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I, MIT, April 1960

2.1.1.  F. L. Bauer’s formal characterization of LISP

type LISP ≡ ...

F. L. Bauer, H. Wössner, Algorithmische Sprache und Programmentwicklung, 1984

2.1.2.  First steps to an algebraic characterization of kenomic LISP

mode lisp^(m, n) χ | matrix (m, n) ≡
χ^(m, n) (lisp^(m, n) χ cons, lisp^(m, n) χ car, lisp^(m, n) χ cdr, lisp^(m, n) χ repl, lisp^(m, n) χ transp)

2.2.  Recursive kenomic LISP

2.2.1.  Basic terms: atom, eq, equal, subst

Based on McCarthy’s presentation of LISP some first steps of its deconstruction shall follow.

Atoms become kenoms, and patterns of kenoms, i.e. monomorphies, build morphograms.
1. kenom. kenom[x] has the value T_i or F_j according to whether x is a kenomic symbol or not.
kenom [X] = T_i
kenom [(X · A)] = F_j,  i, j ∈ s(m)

For eq [X, A] = T --> monomorph[X . A] = T

2. eq-kenom. eq-kenom [x; y] is defined if and only if both x and y are kenomic.
eq-kenom [x; y] = T_i if x and y are of the same pattern, and eq-kenom [x; y] = F_i otherwise.

Thus eq-kenom [X; X] = T_i
X = pattern(A) --> eq-kenom [X; A] = T_i
eq-kenom [X; (X · A)] is undefined.

3. equal [x; y] = [atom [x] ∧ atom [y] ∧ eq [x; y]]
∨ [¬ atom [x] ∧ ¬ atom [y] ∧ equal [car [x]; car [y]] ∧ equal [cdr [x]; cdr [y]]].

equal => (equal-symbol : equal [x ; y] | ... | equal-monomorph : equal [x ; y])

eq-kenom [X; (X · A)] = U -->  ∃ bisimul [X; (X · A)] = T

eq-kenom [a, a] = eq-kenom [a, b] = eq-kenom [b, z] = T, i.e. eq-kenom [kenom_1, kenom_2] = T
eq-kenom [ab, ba] = T
non-eq-kenom [a, ab]
eq-kenom [aa, bb] = T --> monomorph [aa . aa] = T
eq-kenom [aba, abba] = U --> ∃ eq-bisimul [aba; abba] = T

3. subst [x; y; z] = [atom [z] --> [eq [z; y]--> x; T --> z];
T --> cons [subst [x; y; car [z]]; subst [x; y; cdr [z]]]]

As an example, we have
subst[(X · A); B; ((A · B) · C)] = ((A · (X · A)) · C)
(m = (X · A), h = B, H = ((A · B) · C))

Kenomic substitution
H_1 =_MG H_2 =>
subst[m1, h1, H1] = subst[m2, h1, H2]

Context rules for substitution CRS :
∀ h, m_1 ∈ H_1, m_2 ∈ H_2 : subst_(h/m_1)(H_1) =_MG subst_(h/m_2)(H_2), modulo CRS.

Example H  _ 1 = [aabbacc], H  _ 2 = [aaccabb], <br /> H  _ 1 = ༺ ...   _ MG [aaccabb]    ==>    [dddbbacc] =  _ MG [eeeccabb] .

2.2.2.  Kenomic CDR, CAR, CONS

3. car. car[x] is defined if and only if x is not atomic. car [(e1 · e2)] = e1.
Thus car [X] is undefined.
car [(X · A)] = X
car [((X · A) · Y )] = (X · A)
CAR [(X_i.j · A_i.j)] = X_i.j
CAR [((X_i.j · A_i.j) · Y_i.j)] = (X_i.j · A_i.j).

Enactional car
CAR_EN [(X_i.j · A_i.j)] = X_i.j . (⊥_i.j | A_i.j+1)
CAR_EN [((X_i.j · A_i.j) · Y_i.j)] = ((X_i.j · A_i.j) . (⊥_i.j) | Y_i.j+1)

4. cdr. cdr [x] is also defined when x is not atomic. We have
cdr[(e1 · e2)] = e2. Thus cdr [X] is undefined.
cdr [(X · A)] = A
cdr [((X · A) · Y )] = Y
CDR [(X_i.j · A_i.j)] = A_i.j
CDR [((X_i.j · A_i.j) · Y_i.j)] = Y_i.j

Enactional CDR
CDR_EN [(X_i.j · A_i.j)] = ((⊥_i.j . A_i.j) | X_i.j+1)
CDR_EN [((X_i.j · A_i.j) · Y_i.j)] = ((⊥_i.j . Y_i.j) | (X_i.j+1 . A_i.j+1))

5. cons. cons [x; y] is defined for any x and y. We have
cons [e1; e2] = (e1 · e2). Thus
cons [X; A] = (X A)
cons [(X · A); Y ] = ((X · A) Y)
Kenomic CONS
X = (A) --> CONS [X; A] = (X A)|(X B)
X = (X . A) --> CONS [(X · A); Y ] = ((X · A) Y)|((X · A) Z)


2.2.3.  Enactional CAR and CDR as composed operators

Enaction was previously defined as a combination of replication and elimination. This fits together with an understanding of enactional operations as composed of replication and elimination in the sense of CDR and CAR in LISP. Replication is, like transposition, an operator belonging to the so-called super-operators ID, PERM, REPL, RED and BIF of polycontextural logic. Super-operators are applicable to all internal LISP terms and operators, hence not only to CAR and CDR but also to CONS, APPEND, etc.
In this context, the LISP operators CAR and CDR are modeled by the polycontextural operation of reduction (RED), while the new LISP operation of enaction is modeled by replication (REPL). Both together are defining the enactional CDR and CAR albeit based now not on lists but on morphograms.
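How reduction and replication might compose into an enactional cdr can be sketched as follows. This is an entirely hypothetical Python coding: places in the kenomic matrix are written as index pairs (i, j), and the function names are my own.

```python
def red_cdr(cell):
    """Reduction (RED): the classical cdr, forgetting the head."""
    head, tail = cell
    return tail

def repl(term, place):
    """Replication (REPL): re-inscribe a term at a neighbouring place."""
    i, j = place
    return {(i, j + 1): term}

def cdr_en(cell, place):
    """Enactional cdr: reduce at (i, j), but memorize the erased head
    by replicating it at (i, j + 1) -- elimination plus registration."""
    i, j = place
    head, tail = cell
    result = {(i, j): red_cdr(cell)}
    result.update(repl(head, place))
    return result

print(cdr_en(('X', 'A'), (1, 1)))   # {(1, 1): 'A', (1, 2): 'X'}
```

Nothing is lost: the cdr is delivered at the original place, while the eliminated car survives as a registered object at the neighbouring place.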

Classical operations like CAR, CDR and CONS are defined by the identity mapping ID. Hence
ID(CAR) = CAR, ID(CDR) = CDR and ID(CONS) = CONS. Thus, ID : X → X with X = {CAR, CDR, CONS}.

Both CAREN and CDREN are defined by CAR and CDR and the replicative operation of reflectional enaction.

Replicational enaction for CAR and CDR
CAR_EN ≡ CAR (REPL) :
CAR_REPL [((X_i.j . A_i.j) . Y_i.j)] = ((X_i.j . A_i.j) . Y_i.j) | (X_i.j+1 . A_i.j+1)

Replicational enaction matrix CAR_REPL

Transpositional enaction for CAR and CDR
CAR (TRANSP) :
CAR_TRANSP [((X_i.j . A_i.j) . Y_i.j)] = ((X_i.j . A_i.j) . Y_i.j) | (X_i+1.j . A_i+1.j)

Transpositional enaction matrix CAR_TRANSP

Both together :
F_repl (F_transp) = F_transp (F_repl) : F_i.j --> F_i+1.j+1

Enaction defined with DEFUN :
CAR_TRANSP : (DEFUN CAR_TRANSP (X) (CAR (TRANSP X)))
CDR_REPL : (DEFUN CDR_REPL (X) (CDR (REPL X)))

2.2.4.  Rules for CAR, CDR and CONS

car, cdr, and cons are easily seen to satisfy the relations:
1. car [cons [x ; y]] = x
=> CAR [CONS [X ; A]] = (X . X)_i.j | (A_i.j+1 B_i.j+1)

2. cdr [cons [x ; y]] = y
=> CDR [CONS [X ; A]] = (X_i.j B_i.j) | (X X)_i.j+1

3. cons [car [x] ; cdr [x]] = x, provided that x is not atomic.
=> CONS [CAR [X ; A]] ... | (X_1.2 A_1.2)

2.2.5.  Formal approach

The idea of a kenomic LISP is based on two decisions. One is for a transition from lists to morphograms, the second is a dissemination of symbolic LISP over the kenomic matrix which is enabling new 'trans-contextural' operators, like mediation, replication and transposition.

Nevertheless, the new morphogram-based programming paradigm, polyLISP, kenoLISP or morphLISP, gets its first introduction as a mimicry of the methods of the list-based LISP.

LISP^(m, n) = [LISP; ops, sops; n, m ∈ N]

      CONS, CDR, CAR.

Super-Operators sops :
ID : identity,
PERM : permutation,
RED : reduction,
REPL : replication,
BIF : bifurcation,
with mediation : (X_1.i ∐ X_2.i) ∐ X_3.i

2.2.6.  Types of action

Types of action on a kenomic object are either iterative, accretive, transposive, enactive or metamorphic.

Iteration
Op_iter (Ob_i.j) = (Ob_i.j+1)

Accretion
Op_accr (Ob_i.j) = (Ob_i+1.j)

Enaction
OP_EN [(Ob_i.j . Ob_i.j)] = (Ob_i.j . (⊥_i.j)) | Ob_i.j+1

Transposition
OP_TRANSP [(Ob_i.j . Ob_i.j)] = (Ob_i.j . Ob_i.j) | (Ob_i+1.j . Ob_i+1.j)

Replication
OP_REPL [(Ob_i.j . Ob_i.j)] = (Ob_i.j . Ob_i.j) | (Ob_i.j+1 . Ob_i.j+1)

2.2.7.  Logical topics

atom [X] = T, atom [(X · A)] = F ∈ LISP.


2.3.  Special features of kenomic LISP

2.3.1.  Retrograde recursivity

Despite the name “kenomic Lisp”, the proposed kenomic Lisp deals not with lists but with kenomic patterns, i.e. morphograms. This has consequences for the concept of recursivity, crucial to Lisp, and it applies to different aspects of the newly discovered features of retrograde recursivity.

2.3.2.  Parallelism

Parallelism, concurrence and simultaneity of processes, actions and interactions are primordial in kenoLISP and morphoLISP. The feature of parallelism is obvious for polycontextural systems. Each contexture in a polycontextural compound has structural space for its own formalism and therefore programming languages.
For kenomic and morphogrammatic systems the case is slightly less obvious. It becomes ‘natural’ with the understanding of kenomic operations. Kenomic operations are from the very beginning ‘dis-contextural’, delivering simultaneously different results.

2.3.3.  Self-referentiality

A kind of self-referentiality is crucial for classical LISP especially for the definition of recursivity.
But this kind of self-referentiality is not based on a chiastic interplay of terms and operations but on the concept of a “self"-application of functions on its previous values.

2.3.4.  Enaction

The operation of enaction was introduced as a positive interpretation of the elimination of terms like with the double crossing in Spencer Brown’s calculus of indication: {{ }} = . Formal enaction accepts the elimination of the term but recalls it on another level of the formalism. Formal enaction is therefore understood as a memristive cancellation of terms.
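A minimal Python model of memristive cancellation (my own coding; the trace list stands in for the "other level" of the formalism):

```python
def double_cross(term, trace):
    """Spencer Brown's Law of Crossing: {{ }} = void -- re-crossing cancels.
    Enaction: accept the elimination, but register it memristively."""
    trace.append(('erased', term))
    return None                     # the term is gone at this level ...

trace = []
print(double_cross('{{ }}', trace))   # None: eliminated
print(trace)                          # [('erased', '{{ }}')]: but remembered
```

The classical calculus keeps only the `None`; the memristive reading keeps the trace too, as a proper object for further operations.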

2.4.  Examples of kenomic LISP

2.4.1.  Kenomic CONS

X = (ab) --> cons (cons (ab) a) -->
(cons (ab) a) = (aba) | (abb) | (abc)

Kenomic recursion of “cons (ab, ab)” :
cons (ab, ab) = {(abab), (abba), (abac), (abbc), (abca), (abcb), (abcd)}.
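This production set can be recomputed mechanically. A Python sketch, under the assumption that kenomic cons appends the second pattern while mapping its distinct kenos injectively onto kenos of the history or onto fresh ones, normalizing afterwards; the helper names `tnf` and `keno_cons` are hypothetical.

```python
from itertools import permutations

def tnf(s):
    """Trito-normal form: rename kenos in order of first occurrence."""
    seen = {}
    return ''.join(chr(ord('a') + seen.setdefault(c, len(seen))) for c in s)

def keno_cons(x, y):
    """All kenogrammatically distinct continuations of x by the pattern y."""
    ky = sorted(set(y), key=y.index)        # distinct kenos of y, in order
    pool = sorted(set(x), key=x.index)      # kenos given by the history x
    fresh = [c for c in 'abcdefghijklmn' if c not in pool][:len(ky)]
    out = set()
    for targets in permutations(pool + fresh, len(ky)):
        m = dict(zip(ky, targets))
        out.add(tnf(x + ''.join(m[c] for c in y)))
    return sorted(out)

print(keno_cons('ab', 'ab'))
# ['abab', 'abac', 'abba', 'abbc', 'abca', 'abcb', 'abcd']
print(keno_cons('ab', 'a'))    # ['aba', 'abb', 'abc']
```

The seven continuations are exactly the morphograms listed for cons (ab, ab): the range of the operation is decided retrogradely by the history of the first argument.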

Production scheme
cons (ab, ab) --> abab | abba | abac | abbc | abca | abcb | abcd

LA-grammars for cons 1, 2, 3 (productions up to (abcd))

2.4.2.  Kenomic CONS - and reduction


How are productions and their reductions related?
Obviously, there is an asymmetry between formula and solution involved.

"S-expressions are the fundamental data objects of LISP. They consist of (1) atoms and (2) CONS-cells.

A list is either (1) the atom NIL or (2) the result of CONSing an s-expression onto the beginning of an existing list."

”...the set of s-expressions is closed under CONS, ... lists are s-expressions."

It follows that s-expressions are non-ambiguous.

Kenomic expressions are not covered by a unique tree but by several “parallel” trees. Therefore, kenomic expressions have a set of trees, or a forest of trees, as their representation.

There is an asymmetry between operators (CONS, CAR, CDR) and data objects.

cons(cons(ab), a) --> (aba), (abb), (abc).
On the other hand we have:
reverse CONS: (aba), (abb), (abc) --> cons(cons(ab), a).

This offers a reduction method from objects to operators.

The syntactic structure of the example “cons(cons(ab), a)" is simply a singular tree while the kenomic structure is - in this case - a triple of trees.

cons (cons (ab) a)
↙  ↓  ↘
(aba)  (abb)  (abc)

Hence, morphograms [aba], [abb], [abc] are operationally reducible to the form “cons(cons(ab), a)”. That is, the 3 different morphograms have a common singular operational representation in “cons(cons(ab), a)”. Or more generally, the operational CONS-representation is “CONS(CONS(X Y) X)”.

CONS (CONS (X Y) X) | CONS (CONS (X Y) Y) | CONS (CONS (X Y) Z)

2.4.3.  Kenomic append as “possible continuations"

1. append [x;y].
append [x; y] = [null[x] --> y; T --> cons [car [x]; append [cdr [x]; y]]]

mg_1, mg_2 : append

keno-append (ab, aab) = abaab, baaab, bcaab = (abaab), (abbba), (abcca)

2.4.4.  Kenomic CDR , CAR and CONS together

CDR_EN [(X_i.j . A_i.j)] = ((⊥_i.j . A_i.j) | X_i.j+1)
CAR_EN [(aa_i.j . ba_i.j)] = aa_i.j . (⊥_i.j) | ba_i.j+1

X = (A) --> cons [X ; A] = (X A) | (X B)
cons (cdr (aa . ba), car (aa . ba)) = cons ((ba), (aa)) = (ba . aa)

2.5.  Bifunctoriality of CONS, CAR and CDR

2.5.1.  Bifunctoriality for CONS

Bifunctoriality of polyLISP is a new feature of memristive interchangeability, i.e. of the parallelism of compositions. Hence, interchangeability of its operations is crucial for kenomic systems too.

(list1 ∐ list3) * (list2 ∐ list4) = (list1 * list2) ∐ (list3 * list4)

CONS ^(m) is retrograde recursive , bifunctorial and super - additive, CDR and CAR are ...  | (X B) X = (X . A) --> cons [(X · A) ; Y] = ((X · A) Y) | ((X · A) Z) .

(1) cons [x ; y] =>
CONS^(2) [x^1 ; y^1 | x^2 ; y^2] : ((e1 * e2) ∐ (e3 * e4))

CONS^(3) [x^1 ; y^1 | x^2 ; y^2 | x^3 ; y^3]

(2) cons [(X · A) ; Y] = ((X · A) Y) =>
CONS^(3) [(X^1 . A^1) ; Y^1 | (X^2 . A^2) ; Y^2 | (X^3 . A^3) ; Y^3]

2.5.2.  Bifunctoriality for enactional CAR and CDR

Enactional CAR
CAR_EN [(X_i.j . A_i.j)] = (X_i.j . (⊥_i.j)) | A_i.j+1
CAR_EN [((X_i.j . A_i.j) . Y_i.j)] = ((X_i.j . A_i.j) . (⊥_i.j)) | Y_i.j+1

Enactional CDR
CDR_EN [(X_i.j . A_i.j)] = ((⊥_i.j . A_i.j)) | X_i.j+1
CDR_EN [((X_i.j . A_i.j) . Y_i.j)] = ((⊥_i.j . Y_i.j)) | (X_i.j+1 . A_i.j+1)

With this connection to bifunctoriality established, the whole elaborated apparatus of polycontextural interchangeability of operations might be applied to develop a complex memristive polyLISP.

2.5.3.  Typical cases of interchangeability of disseminated operations

Transpositional composition

Matrix model

Reflectional interchangeability

2.5.4.  QUOTE and EVAL

"Quote is a one-argument operation that stops the evaluation process before it reaches its argument.” (Stark)

QUOTE data-expression --> data-expression.

QUOTE, again, is involved in iterability, and therefore the possibility to distinguish between iterative and accretive quotation becomes accessible.


Polycontextural QUOTE
"In a polycontextural setting we are free to choose a more flexible interplay between quotation and interpretation. To quote means to put the quoted sentence on a higher level of a reflectional order or to another heterarchical actional level. We are not forced to limit ourselves to any kind of the well known intra-contextural meta-language hierarchies. Those local procedures are nevertheless not excluded at all but localized to their internal place.
To start the argumentation I simply map an index i to the sentences. A quotation is augmenting and an interpretation (evaluation) is reducing its value, say by 1.“
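The index mechanics can be sketched as a toy model in Python; the pairing of sentence and reflectional index is my own coding, not part of any existing implementation.

```python
def quote(s):
    """Quotation augments the reflectional index of a sentence by 1."""
    sentence, i = s
    return (sentence, i + 1)

def interpret(s):
    """Interpretation (evaluation) reduces the index by 1."""
    sentence, i = s
    return (sentence, i - 1)

s = ("Fruit flies like a banana.", 0)
print(interpret(quote(s)) == s)    # True: EVAL undoes QUOTE
print(quote(quote(s)))             # ('Fruit flies like a banana.', 2)
```

Iterated quotation is thus accretive on the index rather than a nesting of meta-languages.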

2.6.  Memristive self-referentiality

2.6.1.  Self in LISP

One of the most striking properties of LISP is its ability for self-referential definition crucial for the definition of recursion and a whole trend of AI programming.

Under the title “Self-processing”, Stark writes:  

"LISP’s ability to process itself is a direct consequence of (1) the representation of the language in its data structure, (2) the simple algebraic syntax, and (3) the presence of functions such as QUOTE, DEFUN, FUNCTION, EVAL, and APPLY.” (W. Richard Stark, LISP, Lore, and Logic, 1990, p. 92)

Those features are naturally implemented in the paradigm of kenomic LISP. The functions “QUOTE” and “EVAL” might lead to the accessibility of new self-referential constructions. (cf. Gödel Games)

But with the development of kenomic definitions of the basic operations of LISP, CAR, CDR and CONS, even more direct constructions of self-referentiality are in sight.

In fact, the kenomic retrograde recursivity of the basic operators is uncovered as a fundamentally self-referential notion and construction.

As a consequence, the neat reflectional construction of self-referentiality proposed in my small paper “Gödel’s Games” gets a more direct realization on an even more profound level of the very understanding of iterability itself.

An important advantage of this kenomic concept of self-referentiality is that it is structurally determined in a way that avoids all those annoying misreadings and misunderstandings of the term “self” in self-referential configurations and speculations.

On the basis of such fundamental properties of self-referentiality, constructions like enaction support further aspects of operational self-application.

2.6.2.  Fundamental theorem for pure LISP

Fundamental theorem for pure LISP.
"Every algorithmically computable (in the informal sense) function can be computed by a program in pure LISP."

The new question is, can every memristive function of polyLISP be computed by a program in pure LISP?
In other words, is there a reduction mechanism which is able to reduce polyLISP to pure LISP? What exactly would it mean if kenomic LISP wouldn’t be reducible to pure LISP?

There is always a possibility of simulating a new formalism in terms of a traditional formalism. But again, simulations don’t amount to realizations.

List of translations                      Simulations
Kenom to atomic symbol                    Equivalence class of different symbols
Morphogram to list                        Equivalence class of lists
Retrogradness to iterativity              Double recursion with conditions
Enaction to elimination                   Elimination plus appending
Poly- to monocontexturality               Multi-sets of lists
Polyverse to universe                     Multi-sorted domains
Super-additivity to composition           Augmented composition
Polycontextural logic to logic            Multi-valued or modal logics

and so on.
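The first two rows of this list can be illustrated with a small simulation in a conventional language: kenoms and morphograms appear as equivalence classes of symbol sequences under renaming by order of first occurrence (the trito-normal form of morphogrammatics). This is a minimal sketch of the simulation, not a realization; the function names `tnf` and `same_morphogram` are my own:

```python
def tnf(seq):
    """Trito-normal form: rename symbols by order of first occurrence.
    Two sequences denote the same morphogram iff their normal forms match."""
    seen = {}
    out = []
    for s in seq:
        if s not in seen:
            seen[s] = len(seen) + 1
        out.append(seen[s])
    return tuple(out)

def same_morphogram(a, b):
    """Simulate morphogram identity as equality of normal forms."""
    return tnf(a) == tnf(b)

# "aba" and "cdc" inscribe one and the same pattern; "aba" and "abb" do not.
assert same_morphogram("aba", "cdc")
assert not same_morphogram("aba", "abb")
```

The simulation makes the reduction explicit: the monocontextural language only sees equivalence classes of symbol lists, not kenoms themselves.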

3.  Monomorphic Lisp

3.1.  General

(aba) = (abba)
a=b, (aa) != (aaa)

3.2.  Concatenational

a!=b, (ab) = (ba), (aa) = (a)

comp: [(aa) (b) (cc)] --> [aabcc]

decomp: [aabcc] --> [(aa) (b) (cc)]
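As a hedged reading of the two mappings above, `comp` and `decomp` can be taken as run-composition and run-decomposition: `comp` flattens the grouped runs into one list, and `decomp` splits a flat list back into its maximal runs of equal symbols. The Python names and the run-based interpretation are my assumptions:

```python
from itertools import groupby

def comp(groups):
    # [(a,a), (b,), (c,c)] --> [a, a, b, c, c]
    return [x for g in groups for x in g]

def decomp(flat):
    # [a, a, b, c, c] --> [(a,a), (b,), (c,c)]: maximal runs of equal symbols
    return [tuple(g) for _, g in groupby(flat)]

assert comp([('a', 'a'), ('b',), ('c', 'c')]) == list('aabcc')
assert decomp(list('aabcc')) == [('a', 'a'), ('b',), ('c', 'c')]
```

Note that `decomp(comp(gs)) == gs` holds only when adjacent groups carry different symbols; otherwise neighbouring runs melt together.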

3.3.  Metamorphical

(aba) = (abba)

4.  Diamond LISP

According to the quadralectics of diamond strategies, the simple oriented approach to a kenomic LISP might be distributed and located into the quadralectics of distinctions.

From the circularity of a list to a chiastic resolution of self-referentiality.

5.  Applications

5.1.  Ambiguity, double-meaning and morphograms

"Ambiguity is the wild child of language interpretation. Whether from the point of view of the philosopher, linguist, psychologist, lexicographer, or computer scientist, ambiguity problems have relentlessly resisted taming.  
Lexical ambiguity, or polysemy, arises when a word, or a phrase, is associated in the language system with more than one meaning.
Generativity refers to the notion that words seem to be able to be used in new and creative ways, reflecting the generative power of language, but at the lexical level.” (Judith Klavans)

A simple application: "Fruit flies like a banana."
What is the meaning of an ambiguous sentence like "Fruit flies like a banana."?

As is well known, the sentence has at least two meanings:
1. "the insects called fruit flies are positively disposed towards bananas."
2. "Something called fruit is capable of the same type of trajectory as a banana."
"These two potential meanings are partly based on the (at least) two ways in which the phrase can be parsed."
(Alan P. Parkes, Introduction to Languages, Machines and Logic, 2002, p. 42)

Ambiguity in languages is reflected in the existence of more than one parse tree for one sentence of that language.
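The correspondence between ambiguity and multiple parse trees can be made concrete by enumerating all binary bracketings of the sentence; both readings occur among the fourteen possible tree shapes. A sketch under my own encoding assumptions (the `bracketings` function and the tuple encodings of the two readings are not from the text):

```python
def bracketings(tokens):
    """All binary bracketings (parse-tree shapes) of a token sequence."""
    if len(tokens) == 1:
        return [tokens[0]]
    trees = []
    for i in range(1, len(tokens)):          # every split point
        for left in bracketings(tokens[:i]):
            for right in bracketings(tokens[i:]):
                trees.append((left, right))
    return trees

trees = bracketings(["Fruit", "flies", "like", "a", "banana"])

# reading 1: the insects "Fruit flies" like "a banana"
r1 = (("Fruit", "flies"), ("like", ("a", "banana")))
# reading 2: "Fruit" flies in the manner of "a banana"
r2 = ("Fruit", ("flies", ("like", ("a", "banana"))))

assert r1 in trees and r2 in trees
```

The fact that both `r1` and `r2` occur in `trees` is exactly the syntactic face of the ambiguity: one sentence, more than one parse tree.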

There are two strategies to deal with ambiguity:
1. Disambiguation and
2. Mediation and Bisimulation.

The decision necessary for the purposes of formal languages and computation is disambiguation: eliminating the polysemy and ambiguity of sentences.
Hence, both meanings of the sentence might be chosen and used, but separately; or just one meaning might be involved in further steps of reasoning, depending on the wider context.

On the other hand, the strategy of mediation supports the polysemy of ambiguous sentences. As the example above is set, there is no context which could help to separate the meanings and to select one meaning as prior to the other.

Hence, the sentence as such has both meanings at once. Its logical status therefore demands three, and not only two, logical places to realize its polysemic ambiguity: two loci for the separated meanings, and one more for the sentence as such, which has both meanings at once. This third position is not reducible to a syntactical choice contrasting the semantics of the sentence, because the sentence also has two disjunct parse trees syntactically. More importantly, the focus of the understanding of the ambiguous sentence is on polysemy, i.e. on the semantic ambiguity of its meaning. A further step will show that a purely semantic approach offers no possibility for a 're-solution' of the ambiguity problem of polysemic sentences. What is needed additionally is a pragmatic approach, here formalized by applying a morphogrammatic approach.

Thus, the third position, which places the double meaning of the sentence, shall be inscribed as the morphogram of the double sentence. A morphogram might be understood as an inscription of pre-logical ‘meaning’. It therefore has to be able to deal consistently with semantic ambiguity without running into logical contradictions.

Binary tree analysis

  Fruit flies like a banana.              Fruit flies like a banana.
          ↙         ↘                              ↙          ↘
  Fruit flies   like a banana            Fruit flies     like a banana
                   ↙      ↘                 ↙     ↘          ↙      ↘
                like    a banana         Fruit   flies    like    a banana

[1] : cons((ab), cons(c, d)) = cons((ab), (cd)) = ((ab) cd)
[2] : cons(cons(a, b), cons(c, d)) = cons((ab), (cd)) = (abcd)

MED([1], [2]) = (cons((ab), cons(c, d)) ∐ cons(cons(a, b), cons(c, d))) = (((ab) cd) ∐ (abcd))

[3] : (<ab> cd) ≡_BISM (abcd) --> (a, b, c, d)

Fusion
fus(a, b) = <ab>  =>  de-fus(<ab>) = (ab)
cons(a, b) = (ab)  =>  de-cons((ab)) = (a, b) = (car((ab)), cdr((ab)))

[(ab), (c, d)] ∐ [(a, b), (c, d)]

The operation of de/melting (or fusing together) is no longer a formal operation in the sense of the lambda calculus or LISP. For both the lambda calculus and LISP, the construct <ab> with its 'composed' meaning "Fruit flies" is a singular term, i.e. a name for an ontological entity covered by the taxonomy of animals. Nevertheless, this singular name is synthesized, fused together, from two domains, the domain of fruits and the domain of flies. This enables a specific decomposition which is not part of the sentence as such and its double meaning.

Hence, a decomposition of such a term needs a new abstraction, <ab>, paired with a new operator "de/fus".

The result of the exercise shows a mediation of both meanings of the sentence and a kind of morphogrammatic deep-structure of the double-meaning of the sentence as its 'meaning' as such. This deep-structure 'unifies' the two observer-dependent readings of the sentence into an observer-independent inscription of its double-meaning, its morphogram.
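The contrast between cons/de-cons and fus/de-fus can be hinted at with a tagged encoding: a fused term <ab> is a singular name that car/cdr-style decomposition does not see from outside, while de-fus recovers its two source domains. A rough sketch under my own encoding assumptions (the tag "<>" is an arbitrary marker, not from the text):

```python
def fus(a, b):
    """Fuse two terms into a singular term <ab>, marked by the tag "<>"."""
    return ("<>", a, b)

def de_fus(t):
    """Recover the two source domains (a, b) from a fused term <ab>."""
    tag, a, b = t
    assert tag == "<>", "de-fus applies only to fused terms"
    return (a, b)

ff = fus("Fruit", "flies")            # the singular name "Fruit-flies"
assert de_fus(ff) == ("Fruit", "flies")
```

Ordinary cons(a, b) = (ab) keeps its parts visible to car and cdr; the fused <ab> hides them behind one name, so only the new operator de-fus, not cdr, can reopen it.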

Left-associated tree analysis

    Fruit flies like a banana.           Fruit flies like a banana.
              ↗↖ a banana                          ↗↖ a-banana
            ↗↖ like                              ↗↖ like
    Fruit-flies                                ↗↖ flies
                                            Fruit

[(ab), c, d] ∐ [(a, b, c, d)] ∐ [<ab> c d]
<noun; verb> <verb; adv.>  object

cons(cons(<ab>, c), d) = [<ab> cd]
cons(cons(cons(a, b), c), d) = [(ab) cd]
(<a>, <b>) = ((ab)/<ab>),  <c> = ((c)/<c>)

Both sentences are analyzed by a time-linear principle of possible continuations (Hausser).
In contrast, the third analysis breaks, in some respects, the time-linearity and introduces a planar extension of the ambiguous terms. Such planar constructions, which hold conflicting and antinomic terms together, are covered by morphograms. Lists are not prepared to cover ambiguous terms because they are defined by atomic terms and linear constructions.
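Hausser's time-linear principle can be sketched as a left fold: each incoming word is combined with the analysis built so far, so the tree grows strictly leftwards, one continuation at a time. The planar, morphogrammatic reading has no such single fold. The nested-pair encoding below is my assumption:

```python
from functools import reduce

def left_associate(tokens):
    """Time-linear (left-associative) analysis: each new token is composed
    with the result built so far, as in a possible-continuations model."""
    return reduce(lambda acc, tok: (acc, tok), tokens)

t = left_associate(["Fruit", "flies", "like", "a", "banana"])
assert t == (((("Fruit", "flies"), "like"), "a"), "banana")
```

Because the fold consumes the sentence strictly left to right, there is exactly one resulting tree per tokenization; the ambiguity survives only in the choice of tokens ("Fruit-flies" as one word versus "Fruit" and "flies" as two).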