The mechanism of a Hopfield net is that if you keep updating, you
eventually converge to one of your memories (the best one, in some
sense). The usual / more biologically motivated update rule is the
stochastic one: the per-bit update formula is the same as in the
all-at-once (breadth-first) scheme, but which bit gets updated at each
step is chosen at random.
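
For concreteness, here's a minimal sketch of those two pieces in numpy,
assuming bipolar (+1/-1) states, Hebbian storage and a zero threshold
(one common set of conventions, not the only one):

    import numpy as np

    def store(memories):
        # Hebbian outer-product rule: build the weight matrix from a
        # stack of bipolar (+1/-1) memory row-vectors, no self-connections.
        n = memories.shape[1]
        W = np.zeros((n, n))
        for m in memories:
            W += np.outer(m, m)
        np.fill_diagonal(W, 0)
        return W / n

    def update_bit(W, state, i):
        # The per-bit update formula: bit i becomes the sign of its
        # weighted input from the rest of the net (ties go to +1).
        state[i] = 1 if W[i] @ state >= 0 else -1
        return state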

Alright, I don't know if I mentioned this about Hopfield nets: as deep
learning goes, they borrow a few things we know from research on animal
neuronal models. For example the Lyapunov condition: that it's possible
to use a decreasing energy as a metric for convergence to an answer
(which will be stable).
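
Concretely, for the zero-threshold sketch above that energy is
E = -1/2 * s^T W s, and a single asynchronous bit update can only leave
it the same or lower it - that's the Lyapunov part:

    def energy(W, state):
        # Non-increasing under update_bit(); reaching a local minimum is
        # what "converged to a stable answer" means here.
        return -0.5 * state @ W @ state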

Here I note two important things about the bit-by-bit stochastic
updates. (By stochastic update, I just mean you choose one bit (one
neuron) uniformly at random, then apply the bit-update formula to that
neuron; you're guaranteed by the conditions of Hopfield nets that this
will converge to a memory eventually.)
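
In code the stochastic schedule is nothing fancier than this
(continuing the sketch above, with a fixed step budget standing in for
"eventually"):

    def recall(W, state, steps=10_000, rng=None):
        # Pick one neuron uniformly at random, apply the bit-update
        # formula to it, repeat.
        rng = np.random.default_rng() if rng is None else rng
        n = len(state)
        for _ in range(steps):
            update_bit(W, state, rng.integers(n))
        return state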

Other than being laborious, what this does not do is maintain some
confidence metric for which memory will be chosen. It just flips a bit
in a way consistent with converging to a stable state. The same bit
might be flipped back the other way as the system evolves towards the
stable state, but the formula always works. Even though a bit might
flip a different way later, once other parts of the image have been
flipped, under some constraints at least the image becomes more similar
to one of the memories, and attains stability (the update is always to
do nothing) once it reaches one memory or another.
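
That stability can be checked directly with the helpers above: at a
memory - assuming it really is one of the net's stable states -
updating any single bit is a no-op.

    def is_fixed_point(W, state):
        # True when no single-bit update would change the state, i.e.
        # the state sits at one of the stable attractors.
        return all(update_bit(W, state.copy(), i)[i] == state[i]
                   for i in range(len(state)))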

So the answer being converged to fades into view over the updates, the
annoying thing being that you have to watch whether changes are still
happening to know whether you've converged yet. Hopfield (and others)
have, I believe, written about how Hopfield nets are useful without
having absolute knowledge of whether you've finished converging. Well,
that was long-winded, but they fade one memory into the foreground over
time, eventually. That's what they do.
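
One practical stopping rule (a heuristic cut-off, not absolute
knowledge that you're done) is just to stop once nothing has changed
for a while:

    def recall_until_quiet(W, state, patience=1_000, rng=None):
        # Stop after `patience` consecutive random updates that change
        # nothing - a heuristic, not a proof of convergence.
        rng = np.random.default_rng() if rng is None else rng
        n, quiet = len(state), 0
        while quiet < patience:
            i = rng.integers(n)
            before = state[i]
            update_bit(W, state, i)
            quiet = quiet + 1 if state[i] == before else 0
        return state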

Ah, I didn't mention the broad strategy I'm working on yet. Imagine a
tree. You have memories of leaves, memories of a flower, memories of
the trunk. Then we divide a picture of a tree into a grid, and
independently converge each tile of the grid locally, with the same set
of memories and update rule.

In that context, each tile will eventually converge to its own memory
independently. But we can mix other functions into the update rule
(applied maybe much more slowly than the proper bit updates). One of my
ideas - and this is painful to execute - is that when an unconverged
tile gets sufficiently close to a memory by some threshold, rules kick
in that flip some bits in neighbouring tiles, based on another network
of memories to do with the organisation of tiles making up a tree. A
flower is among leaves, leaves are above a trunk - have relatively
close matches start to stochastically mutate nearby tiles towards what
they think should be there, and then continue updating properly.
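
Here's a rough sketch of the shape of that idea, continuing with the
helpers above. The names and numbers (closest_memory,
expected_neighbour, the threshold, the nudge probability) are
placeholders made up for illustration, not a worked-out rule:

    def closest_memory(memories, state):
        # Index of the stored memory with the largest normalised overlap
        # with the current state, plus that overlap (in [-1, 1]).
        overlaps = memories @ state / len(state)
        best = int(np.argmax(overlaps))
        return best, overlaps[best]

    def grid_recall_step(W, memories, tiles, expected_neighbour,
                         threshold=0.8, nudge_prob=0.01, rng=None):
        # One sweep over a grid of tiles, each tile its own Hopfield
        # state sharing the same W and memories.  Every tile gets one
        # proper stochastic bit update; a tile already close to some
        # memory occasionally flips one bit of a neighbouring tile
        # towards what the (separate, here just a callback) network of
        # tile-arrangement memories expects to sit in that direction.
        rng = np.random.default_rng() if rng is None else rng
        directions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        rows, cols = len(tiles), len(tiles[0])
        for r in range(rows):
            for c in range(cols):
                state = tiles[r][c]
                update_bit(W, state, rng.integers(len(state)))
                best, overlap = closest_memory(memories, state)
                if overlap > threshold and rng.random() < nudge_prob:
                    dr, dc = directions[rng.integers(4)]
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        # expected_neighbour(best, (dr, dc)) -> index of
                        # the memory that "should" lie in that direction
                        target = memories[expected_neighbour(best, (dr, dc))]
                        j = rng.integers(len(target))
                        tiles[nr][nc][j] = target[j]
        return tiles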

I think this kind of method obviously ruins the guarantee of
convergence, though as researched by Hopfield, even if a little ongoing
chaos has been introduced, the chaos might be small and local enough
that the mechanism is still useful.

As well as chaos, I think a method like this will introduce hysteresis:
the random path traversed towards convergence will remember which tiles
it started to get right first.

Letting my mind wander a bit. I was also looking a little at Interlisp
Medley's implementation of TCP in Maiko/src/inet.c etc, and
interrupt-driven Ethernet mechanics - 10MBDRIVER ? MAIKOETHER ? Still
figuring it out. Using interrupt signals rather than polling.