Recurrent neural networks are [[Neural Networks]] with *feedforward* and *feedback* pathways. These pathways are similar to the *ascending* and *descending* pathways in the [[Nervous System]], so to get a feel for what it means for a network to be recurrent, let's consider how these pathways work in the nervous system.
[[The brain processes information hierarchically]]. In the ascending or bottom-up pathway, information flows from the sensory receptors into the central nervous system and out to the muscles. The simplest example of this is found in [[Reflexes]], where a sensory neuron carries information to an interneuron in the spinal cord, which in turn activates a motor neuron that signals a muscle to move. In reflexes, information flows only through the ascending pathway, but when higher-order pathways are engaged, information travels not just from the senses through the nervous system to the muscles, but also in loops within the nervous system itself.
Descending or top-down pathways bring information from higher levels of the nervous system down to lower levels, allowing the nervous system to change the behavior of neurons lower in its hierarchy. For example, in the vase illusion below, you can choose to see either a flower vase or two faces looking at each other. To switch your perception, higher levels of your nervous system send information along the *descending*, *feedback* pathway to your lower-level perceptual neurons, changing what you experience yourself seeing. The sensory stimulus has not changed, but higher levels of your nervous system can change the behavior of the nervous system itself, thus altering what you perceive.
![[vase-face illusion.png|300]]
In an artificial neural network, we can add descending pathways to allow the network to change its own behavior. Two important consequences of this small addition are that recurrent networks become capable of processing information over time, and that they can continue their activity in the absence of sensory input. When information in a neural network can be sent not only from layer 2 to layer 3 but also from layer 3 back to layer 2, the network is essentially capable of telling itself that something has happened and that it should adjust its behavior in response. The recurrent network pictured below is an especially simple example of how this might look, but in a system as complex as the brain, recurrent networks can have billions of ascending and descending pathways. This is one of the fundamental aspects of the [[Memory-Prediction Framework]].
![[recurrent-neural-network.png]]
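The feedback loop is simple to express in code. Here is a minimal sketch, a toy Elman-style recurrent step in NumPy; the layer sizes, random weights, and names like `W_rec` are arbitrary choices for illustration, not taken from the figure above. The hidden layer receives the current input *and* its own previous activity, so the network's state at one moment shapes its behavior at the next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, chosen arbitrarily: 3 input units, 4 recurrent (hidden) units.
n_in, n_hidden = 3, 4

W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))       # ascending: input -> hidden
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # feedback: hidden -> hidden

def step(h, x):
    """One time step: the new hidden state depends on the current input
    *and* on the network's own previous activity (the feedback loop)."""
    return np.tanh(W_in @ x + W_rec @ h)

# Present the same stimulus three times; the response differs each time
# because the hidden state carries a trace of what came before.
h = np.zeros(n_hidden)
for _ in range(3):
    h = step(h, np.array([1.0, 0.0, 0.0]))
    print(h)
```

A purely feedforward network would respond identically to each repetition of the stimulus; here the response changes from step to step, which is what it means for the network to process time.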
The fact that recurrent networks can continue to process information in the absence of sensory input is key to understanding why recurrent networks behave like [[Chaos - James Gleick|dynamic systems]]. If the connections are wired just right, you can have networks whose activity, once triggered by an input, persists until they settle into something like thermodynamic equilibrium, or networks that shift from one attractor to another as they cross various [[Bifurcation Points]].
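To make the point about activity without input concrete, we can continue the toy sketch above: withdraw all sensory input and let the network run on its own feedback. This is only an illustration; whether the activity dies out, settles onto a fixed-point attractor, or wanders chaotically depends entirely on the recurrent weights.

```python
# Continuing the sketch above: remove all sensory input and let the
# network drive itself through its feedback weights alone.
x_silent = np.zeros(n_in)
for _ in range(50):
    h = step(h, x_silent)
    # h keeps evolving (or settling) even though no input arrives.

# With tanh units the state stays bounded; depending on W_rec it may
# decay toward zero, settle onto a fixed point, or keep shifting.
print(h)
```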
---
#AI #Psychology/Learning #Philosophy/Mind #Psychology/Cognition #2021/8
*August 10, 2021*