Contemporary neuroscience is largely descriptive: it allocates functions to brain substructures and cortical areas and unravels neural circuitry, without really explaining how any of it works.
For example, the complete neural circuitry of C. elegans has been deciphered, but do we really understand it? For the human brain, the circuit diagram of the cerebellum has been known for several decades, yet I don't think we know exactly how the cerebellum works.
In these pages, I take a step back and, instead of neural networks, model the brain as an FSM, a finite state machine.
The FSM of the brain is of course not fixed: it can learn by a method that is very similar to Hebbian learning or STDP, but more tractable mathematically, called an 'edge-reinforced random walk' (which has nothing to do with Reinforcement Learning, despite the similar-sounding name).
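To make the idea concrete, here is a minimal sketch of a linearly edge-reinforced random walk on a small state graph. This is my own illustration under simple assumptions (every edge starts with weight 1, each traversal adds 1), not the exact formulation developed later; the state names and function are hypothetical.

```python
import random

def edge_reinforced_walk(states, edges, start, steps, seed=0):
    """Walk `steps` transitions over a directed graph; each traversed
    edge has its weight increased by 1, so habitual transitions
    become ever more likely (a crude analogue of Hebbian learning)."""
    rng = random.Random(seed)
    weight = {e: 1.0 for e in edges}            # all edges start equal
    neighbors = {s: [b for (a, b) in edges if a == s] for s in states}
    state, path = start, [start]
    for _ in range(steps):
        nbrs = neighbors[state]
        w = [weight[(state, b)] for b in nbrs]  # pick proportionally to weight
        nxt = rng.choices(nbrs, weights=w, k=1)[0]
        weight[(state, nxt)] += 1.0             # reinforce the used edge
        state = nxt
        path.append(state)
    return path, weight

# A fully connected 3-state machine; the walk gradually concentrates
# its probability mass on a few well-worn edges.
states = ["A", "B", "C"]
edges = [("A","B"), ("B","A"), ("B","C"), ("C","B"), ("A","C"), ("C","A")]
path, weight = edge_reinforced_walk(states, edges, "A", 1000)
print(sorted(weight.items(), key=lambda kv: -kv[1])[:3])
```

Note the self-reinforcing dynamic: early random choices bias later ones, which is exactly the property that will later model imprinting and habit formation.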
The FSM model can replicate many of the peculiar features of animal and human brains:
- imprinting (learning something once, and never forgetting or correcting it)
- predictive coding (internal representation, based on expected next input)
- mirror neurons (common representation for own actions and observed actions of others)
- a priori knowledge of space, and 'path integration' (understanding loops, where you arrive back where you started)
- hard-wired / instinctive actions and behaviour
- motor babbling
- fear / anxiety
- aggression
- mental rehearsal of actions
- distinguishing an 'instance' from a 'class', e.g. 'dog' is a class, while 'Snoopy' is an instance, a specific dog.