Well, I'm back to writing the book live. The temptation was to show the ending of a finished project and then tell a story about it, but a unified retrospective narrative would cost some of the gritty naturalism of my exploratory programming experience. So we, you and I, are getting the thing as such.

Around the publication of the permacomputing open letter, I had become interested in deep neural networks. There is a cryptocurrency-scam electricity in the air around the letters AI, and I needed to ground myself in the truth of it.

One deep neural network tradition is associated with Hopfield's work in the early 80s. It continues (as of the current date) to be a plausible model of what episodic (single-event) memories are like in human brains.

The neurons of this network are binary, taking two values. It is mathematically convenient to identify them as +1 and -1, though in storage the role of -1 will be understudied by 0.
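
As a tiny concrete example, here is one way to translate between the two conventions in lisp; bit->spin and spin->bit are names I am making up for illustration, not code we are committed to.

    ;; 0 understudies -1 in storage; +1/-1 is the mathematical costume.
    (defun bit->spin (b)
      "Map a stored bit (0 or 1) to its spin value (-1 or +1)."
      (if (zerop b) -1 +1))

    (defun spin->bit (s)
      "Map a spin value (-1 or +1) back to a storable bit (0 or 1)."
      (if (minusp s) 0 1))

    ;; e.g. (map 'vector #'bit->spin #*10110) => #(1 -1 1 1 -1)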

A Hopfield net flips bits in an input, one at a time, toward one of its memories, eventually reaching a memory. We can design this to happen by using an energy function of the input and memories, in which each memory sits at the bottom of a potential well (imagine high school physics). If we keep decreasing the energy, it stops changing when it reaches the bottom of a well. The rule for choosing each flip, derived from the energy function, is the update rule.
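
To make that concrete, here is a minimal sketch of the classical formulation in lisp, using the standard Hebbian outer-product weights as one particular choice of energy. The names store-memories, energy, and recall are illustrative; they are not the code this book will end up building.

    ;; A sketch of the classical setup: Hebbian weights from the memories,
    ;; an energy with one well per memory, and a one-bit-at-a-time update.
    (defun store-memories (patterns n)
      "Outer-product (Hebbian) weights from a list of +1/-1 PATTERNS of length N."
      (let ((w (make-array (list n n) :initial-element 0)))
        (dolist (p patterns w)
          (dotimes (i n)
            (dotimes (j n)
              (unless (= i j)
                (incf (aref w i j) (* (aref p i) (aref p j)))))))))

    (defun energy (w s n)
      "E(s) = -1/2 * sum over i,j of w_ij * s_i * s_j."
      (* -1/2 (loop for i below n
                    sum (loop for j below n
                              sum (* (aref w i j) (aref s i) (aref s j))))))

    (defun recall (w s n &optional (sweeps 10))
      "Flip one neuron at a time toward lower energy; S settles in a well."
      (dotimes (sweep sweeps s)
        (dotimes (i n)
          (let ((field (loop for j below n sum (* (aref w i j) (aref s j)))))
            (setf (aref s i) (if (minusp field) -1 1))))))

With symmetric, zero-diagonal weights like these, each such flip can only keep the energy the same or lower it, which is why the process has to stop at the bottom of some well.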

Deep means that, while the energy is decreasing, a single neuron might flip more than once at different steps.

I do go on, but that's about it mathematically. By way of foreshadowing, I intend to staple deep neural nets to hash tables to imbue them with intelligence. After that, I will use those for the earthy, grounded purpose of making a gopher protocol multi-user dungeon game engine component. With that backdrop, we will explore a shocking cryptographic security property that one form of update rule has.

In general I will tack on an embeddable C library for the completed lisp functionality, compiled using Embeddable Common Lisp. This book is about lisp.
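
For the curious, the build step I have in mind looks roughly like the following. This is a sketch under the assumption of ECL's bundled compiler module; hopfield.lisp and init_hopfield are placeholder names, not files that exist yet, and the details vary by ECL version and platform.

    ;; build.lisp -- run with: ecl --shell build.lisp
    (require 'cmp)  ; ECL's compiler module, which emits C under the hood
    ;; Compile the lisp source to an object file, then link it into a
    ;; static C library that a host C program can embed and initialize.
    (let ((obj (compile-file "hopfield.lisp" :system-p t)))
      (c:build-static-library "hopfield"
                              :lisp-files (list obj)
                              :init-name "init_hopfield"))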

I first implemented a working net a few months ago, in a process very like the one we shall witness here, but worse. So this is take two. In this implementation, I weakened (made more general) my equations. Weaker in mathematics means having fewer rules.

Beyond that, we will explore these neurons learning under one additional strong rule: whatever unknown mystery underlies what is being learned, no first or second partial derivatives exist. This precludes learning by backpropagation.