The World of the Neuron


By Adrian Zidaritz

Original: 02/09/20
Revised: no

We have seen in the article Main AI Concepts that neural networks, the basic algorithms used in deep learning, borrow the concept of the neuron and of synaptic activation from the biological neuron. Here are the two neurons, the original biological one and its synthetic AI counterpart, side by side.




The deep learning algorithms of AI weave artificial neurons into networks. One such network is sketched on the right. You may think that these network architectures are designed in a very precise way, but that is not yet the case. Every problem that AI has solved, whether recognizing your face on Facebook or translating natural language, involves a certain degree of craftsmanship and experimentation. Different types of neurons, layers of different sizes, the number of layers: these are all decisions which affect how well the network performs, and it is mostly experience that guides you in making them.
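To make the artificial side of the comparison concrete, here is a minimal sketch in Python of what such a neuron and a tiny stack of layers amount to; the layer sizes and random weights are made up for illustration and do not correspond to any particular architecture discussed here. Each artificial neuron is just a weighted sum of its inputs passed through a nonlinearity, and "designing an architecture" largely means choosing how many such neurons to use and how to arrange them into layers.

```python
# A minimal sketch (illustrative only): one artificial "neuron" is a weighted
# sum of its inputs plus a bias, squashed by a nonlinearity; a "layer" is a
# group of such neurons. Layer sizes here are arbitrary.
import numpy as np

def neuron(inputs, weights, bias):
    # weighted sum plus bias, passed through a ReLU nonlinearity
    return max(0.0, float(np.dot(weights, inputs) + bias))

def layer(inputs, weight_matrix, biases):
    # one neuron per row of the weight matrix
    return np.array([neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)])

rng = np.random.default_rng(0)
x = rng.normal(size=4)                                      # a 4-dimensional input
h = layer(x, rng.normal(size=(8, 4)), rng.normal(size=8))   # hidden layer of 8 neurons
y = layer(h, rng.normal(size=(2, 8)), rng.normal(size=2))   # output layer of 2 neurons
print(y)
```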

So let's say you decide to train a network, and after training, you remove the top 5 neurons of the 43rd layer. You then retrain the network, ideally with data that you held back from the first iteration. The network may still produce satisfactory results; it may even produce better ones! This is the closest we come in AI to something like neuroplasticity, the brain's capacity to reorganize itself after injury in order to accomplish the same tasks as before the injury. It is a small but encouraging sign.
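For readers who want to see the shape of that thought experiment in code, here is a hedged sketch in PyTorch. The model, the random data, and the choice of which neurons to zero out are all invented for illustration; zeroing weights is only a crude stand-in for true removal (a real pruning would shrink the layer or mask those units during retraining).

```python
# Sketch of the "injure and retrain" experiment: train a small network, zero
# out ("remove") 5 neurons in one hidden layer, then retrain on held-back data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                      nn.Linear(32, 32), nn.ReLU(),   # the layer we will "injure"
                      nn.Linear(32, 1))

def train(net, x, y, steps=200):
    opt = torch.optim.SGD(net.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        opt.step()
    return loss.item()

x1, y1 = torch.randn(256, 10), torch.randn(256, 1)   # first training set
x2, y2 = torch.randn(256, 10), torch.randn(256, 1)   # data held back for retraining

print("first training, loss:", train(model, x1, y1))

# "Remove" the first 5 neurons of the hidden layer by zeroing the weights and
# biases that produce their outputs, then retrain. Note that plain retraining
# lets these weights grow back, which is part of why this is only a stand-in.
with torch.no_grad():
    model[2].weight[:5] = 0.0
    model[2].bias[:5] = 0.0

print("after injury and retraining, loss:", train(model, x2, y2))
```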

We have seen that the learning process in these AI neural networks is not yet well understood. The best attempt at an explanation so far has been based on the Information Bottleneck theory. Even though that theory does not appear to offer a complete account, its picture of the learning process resonates: the beginning layers of the network split the information into details, the subsequent middle layers squeeze those details (forgetting some of them, and thus making forgetting an essential part of learning), and the information that survives this compression then re-emerges in the final layers.
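For readers who want the single formula behind that picture (my summary of the standard Information Bottleneck formulation, not a formula from this article): a layer's internal representation T of the input X is chosen to keep as little about X as possible while preserving what is relevant for the label Y, with a parameter β trading off the two goals,

\[
\min_{p(t \mid x)} \; \Big[\, I(X;T) \;-\; \beta\, I(T;Y) \,\Big]
\]

where I(·;·) denotes mutual information. The "squeezing" in the middle layers corresponds to driving I(X;T) down, which is exactly the sense in which forgetting becomes part of learning.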

The biological neuron is infinitely more complicated and powerful than its deep-learning counterpart, and the question is how these biological neurons combine in the brain to facilitate learning. Can our approach to artificial learning shed some light on biological learning in the brain? While an understanding of artificial learning at least seems within reach nowadays, the truth is that we are very far from understanding the process of biological learning in the brain. Conversely, if we discover additional learning mechanisms in the brain, we could then think about designing artificial counterparts in AI. To see why these are such difficult questions, let's look at the biological neuron and understand some of its structure and function.




We have looked many times at the brain as a Bayesian prediction machine, a conceptual simplification that is nevertheless very useful in many situations (the rule behind that phrase is recalled just after this paragraph). But the kind of predictions happening in the brain looks more complex than the classical computations our computers perform. Nature has designed two other forms of computation: biological computation, and the fundamental quantum computation present in all matter, organic or not. We have already seen that Penrose and Hameroff think that these two types of natural computation may be related, and that the physical substrate linking them may be the microtubule in the neuron. As you can see in the more detailed picture below, these microtubules are woven through all the components of the neuron.
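As a reminder of what "Bayesian prediction machine" refers to (a textbook identity, not anything specific to this article): the predictive view holds that a belief about a hidden cause H is updated from sensory evidence E according to Bayes' rule,

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
\]

with the posterior P(H | E) then serving as the prior for the next round of prediction.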




If we are to look at the brain as an information-processing system, and if we are daring (or foolish) enough to include consciousness within this processing, then it follows that we have to understand (biological) neuronal network architectures, and the variety of such networks in the brain is mind-boggling (no time for puns now ☺). It seems that every part of the brain (cerebrum, cerebellum, thalamus, claustrum, etc.) uses a different network architecture, and to do so its neurons take on almost all possible configurations in terms of dendrite-axon topology. Yes, graphs again.
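To spell out the "graphs again" remark: a wiring diagram of neurons is naturally a directed graph, with an edge from neuron A to neuron B whenever A's axon synapses onto one of B's dendrites. The tiny Python sketch below uses invented neurons and connections purely to illustrate the representation.

```python
# Adjacency list for a made-up micro-circuit: neuron -> neurons it synapses onto.
connections = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["D"],
    "D": ["A"],   # a feedback loop, common in real neural circuits
}

# Fan-out per neuron: how many downstream targets its axon reaches.
for neuron, targets in connections.items():
    print(neuron, "->", targets, "| fan-out:", len(targets))
```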