While listening to ae's universe-shattering 3rd episode of Special
Education and trying to just barely grok my way into deep learning,
I had the following series of visions.

Hopfield networks are a kind of neurophysicist's spiking toy model
that always converges to a stable state, because the network is
defined in terms of a Lagrangian of its neurons. (The Lagrangian is
an often-useful quantity: Lagrangian := (is defined as) kinetic
energy minus potential energy, plus whatever Lagrange
multipliers/constraints there are, which are probably important but
I haven't seen them mentioned yet.)

In the unevolved case, a finished-training, stable-state Hopfield
network takes a distorted or incomplete input with the dimensions
of what it learned from, and returns the best match. Traditional
Hopfield networks can store roughly 0.14 n patterns for n neurons,
but a modern modification (dense associative memory) pushes the
capacity up to exponential in n (crazy), associated with steeper
activation/interaction functions whose simplest member looks like

(lambda (x) (cond ((< x 0) 0) (t (expt x 2))))

but that's neither here nor there.
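
That best-match behaviour is simple enough to sketch directly. A
minimal classical version (function names are mine; patterns and
probes are assumed to be vectors of +1/-1): Hebbian training by
summing outer products, then recall by thresholding each unit
against its local field until nothing changes.

(defun train (patterns n)
  ;; sum the outer products of the stored patterns (Hebbian rule)
  (let ((w (make-array (list n n) :initial-element 0)))
    (dolist (p patterns w)
      (dotimes (i n)
        (dotimes (j n)
          (unless (= i j)
            (incf (aref w i j)
                  (* (aref p i) (aref p j)))))))))

(defun recall (w probe n)
  ;; threshold each unit against its local field until stable
  (let ((s (copy-seq probe)) (changed t))
    (loop while changed do
      (setf changed nil)
      (dotimes (i n)
        (let ((new (if (minusp (loop for j below n
                                     sum (* (aref w i j)
                                            (aref s j))))
                       -1
                       1)))
          (unless (= new (aref s i))
            (setf (aref s i) new changed t)))))
    s))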

With some still-unlettered sticking points, and just with the
simple notion of a Hopfield network, I was imagining having two
Hopfield networks: one storing the leaf structures of forms that
have been encountered before (truly arbitrary anaphora), and one
storing the corresponding argument lists. For example:

I : original-form

(progn (get-out pot) (put-on stove pot) (fill-with water pot))

Say we encode (in different Hopfield nets)

II : LEAVES.txt

(a (b c) (d e c) (f g c)) as the leaf structure

and separately

III : INPUTS.txt

(progn get-out pot put-on stove fill-with water)

Then we can recover the original form without deep learning like
this:

(multiple-value-bind (leaves inputs)
    (apply 'values
           (mapcar
            (lambda (x)
              (with-open-file (in (format nil "~a.txt" x)) (read in)))
            '(leaves inputs)))
  ;; substitute each placeholder leaf with its input to recover the form
  (sublis (pairlis '(a b c d e f g) inputs) leaves))
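
With the two files in place, evaluating that returns
(progn (get-out pot) (put-on stove pot) (fill-with water pot)),
i.e. the original form I.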

Now the applicability of a Hopfield net - a best-matcher - is that,
when faced with a fragment of a new problem, a best-match leaf
structure can be turned up; and a partially-defined leaf structure
or a partially-known argument list can turn up argument lists.
Metacircularly, there could be a class of memorised functions whose
function is to modify a common leaf structure for another purpose,
and similarly for argument lists.
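
As a hypothetical usage of the recall sketch above (the names
w-inputs and complete-arguments are mine), a partially-known
argument list is just a probe with the unknown positions left at
zero, and recall fills them in from the closest stored list:

(defun complete-arguments (w-inputs known-prefix n)
  ;; known-prefix: the +1/-1 entries we do have; rest left at 0
  (let ((probe (make-array n :initial-element 0)))
    (replace probe known-prefix)
    (recall w-inputs probe n)))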

A research extension would be to somehow have search references
into this Hopfield net, supporting larger equations built over this
Hopfield net.

For the encoding I was thinking of using grey level ~
graphic-char-p: the prin1 representations of II and III above,
normalised into the range 0..1 if that actually matters, and with a
simple fixed character length.
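
A minimal sketch of that encoding, with a hypothetical fixed length
of 64 characters and my own name encode-form (the 95 is the size of
the printable ASCII range, #\Space through #\~):

(defun encode-form (form &optional (max-len 64))
  ;; prin1 the form, pad/truncate to MAX-LEN, map printable chars
  ;; into 0..1 (ASCII assumed)
  (let* ((text (prin1-to-string form))
         (pad (make-string max-len :initial-element #\Space))
         (fixed (subseq (concatenate 'string text pad) 0 max-len)))
    (map 'vector
         (lambda (ch)
           (if (graphic-char-p ch)
               (/ (- (char-code ch) (char-code #\Space)) 95.0)
               0.0))
         fixed)))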

So far, my impression has been that whereas Bayesian statistics is
a wholly robotic tool to be operated by experts, Hopfield nets, for
example, are the purview of physicists (a physicist being anyone
who isn't afraid of calculus).

This is quite counter to the advertisements, everywhere, from the
companies selling Python modules that wrap C++, which imply that
deep learning is a flight dashboard ready for operation by pilots,
the way Bayesian statistics is. That's not my impression at all.
Bayesian statistics is about a hundred years old, whereas deep
learning is about 50-60 years old; maybe that's the difference.

I'm not sure how to make it dynamical. Since it converges by
definition (on the condition of symmetric edge weights), it is
normally trained to convergence. However, Hopfield found that the
chaos introduced by some asymmetries did not necessarily affect the
other useful bits of the network. That makes me hopeful that new
data could be added in a funky way, if somehow a tour through a
new-data-add cycle could be provoked in between uses, without
blandly retraining the entire network to a different completion.
But that kind of local limit-cycle modification is probably badly
behaved, in general.
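
At least in the classical Hebbian setting there is a cheap version
of that new-data-add cycle: a new pattern is just one more outer
product summed into the existing weights (a sketch reusing the
names of the earlier training sketch; whether the old memories
survive depends on staying under the capacity limit):

(defun add-pattern (w p n)
  ;; fold one new +1/-1 pattern into already-trained weights
  (dotimes (i n w)
    (dotimes (j n)
      (unless (= i j)
        (incf (aref w i j) (* (aref p i) (aref p j)))))))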

* I still haven't implemented this myself, so it's a WIP.