Spontaneous Neural Firing in Biological & Artificial Neural Systems

Christian R. Huyck and Richard Bowles
Middlesex University, London, UK
email: C.Huyck@mdx.ac.uk
web site: http://www.cwa.mdx.ac.uk/chris/chrisroot.html

September 21, 2001

Abstract

Neurons spontaneously fire. This neurophysiological phenomenon has computational ramifications and encourages useful short and long term dynamics in the brain. The noise of spontaneous neural activation may actually improve the system. Spontaneous neural firing can attenuate neural waves. This allows the waves to carry more information and to stay active for longer. Spontaneous neural activation also improves Cell Assembly (CA) dynamics. Neurons that receive little or no activation are unused. We argue, and describe an experiment showing, that spontaneous activation stabilises unused neurons. Spontaneous activation, along with a post-Hebbian learning rule, puts the synaptic weights of these neurons in a neutral position. This may encourage later recruitment into CAs. Spontaneous neural activation also encourages recruitment of neurons into nearby CAs. We describe an experiment showing that neurons are recruited into nearby CAs when there is spontaneous activation, and remain isolated when there is none. The neurophysiological phenomenon of spontaneous neural firing thus has at least three computational rationales. It allows more information to travel in neural waves. It allows unused neurons to remain stable until they are eventually recruited into CAs. And it encourages neural recruitment into CAs.

1 Background and Introduction

Neurons generally fire when they receive activation from other neurons. However, there is evidence to show that they can also fire spontaneously, i.e. without any external activation (Bevan and Wilson, 1999; Abeles et al., 1993). Zeki (1993) suggests that spontaneous firing of neurons in the visual cortex may be responsible for hallucinations. Neurons generally only fire spontaneously when they have been inactive for a long period (relative to their normal firing rate), typically several milliseconds. When a neuron has been active, it fatigues, which tends to discourage it from firing until it has recovered from that fatigue.

There is also some evidence that spontaneous activation is used to set neuronal function. Spontaneous retinal discharges are necessary for the segregation of the afferents from the two eyes into ocular dominance columns (Singer, 1995; Stryker and Harris, 1986). So spontaneous activation is necessary to set neurons in parts of early visual processing, and there is no reason to assume that it is not needed elsewhere in the brain.

Spontaneous neural firing may have some physiological cause, but neurons are computational devices. What are the computational ramifications of spontaneous neural firing? The overall physiological reason for spontaneous neural firing may simply be, like the appendix, an accident of evolution. Still, like flies' wings, it requires some expenditure of energy. When flies are bred in an environment that prevents them from using their wings, winged flies are bred out of the population: in the evolutionary race, the expense of wings is too great when there is no reward. Similarly, the expense of spontaneous neural firing might not be justified, from an evolutionary perspective, if it did not have a useful function. Computational research suggests two useful functions: attenuation of neural waves, and recruitment of neurons to Cell Assemblies (CAs).
2 Attenuation of Neural Waves

Beurle (1956) models neural functioning as a wave of active neurons. Imagine a pebble dropped in a pond with ripples spreading from the centre. In the brain an area is activated (the pebble is dropped), and activation spreads from that point (the ripples). The original area ceases to be active due to neural fatigue, leaving a wave of active neurons centred about the original area of activation. This argument assumes two dimensions, but generalises to three, where circular waves of activation are replaced by spheres of activation. The height of the ripple is analogous to the percentage of neurons firing in the wave. If the height is too low, the wave will die out because new (as yet unactivated) neurons will not get enough stimulus to fire.

When a wave is absent from an area of the cortex (i.e. none of the neurons in that particular area is firing), there is clearly no information content. Similarly, when all the neurons in that area are firing there is very little information present, since there is no differentiation of one subarea from any other. However, when only some of the neurons are firing, there is the possibility that different patterns of information are encoded, and that the level of activation may increase or decrease. Therefore, an almost essential requirement for information to be present in the patterns of neural firing is that some of the neurons do not fire.

Assuming that the wave represents a concept, all neurons firing corresponds to the perfect instance of the concept, the prototype (Rosch and Mervis, 1975). Some neurons not firing corresponds to a non-prototypical concept; these non-firing neurons carry information about properties that are not present in the currently active concept. For example, a fully firing bird wave might refer to a typical robin (considered to be the perfect example of a bird), while an incomplete wave might refer to a penguin (a bird lacking the archetypal quality of flight), or a dead robin.

Spontaneous neural firing makes it less likely that all neurons in the area of the wave will fire. This is due to the dynamics of firing and to the dynamics of learning. The short-term dynamics of neural firing are affected by spontaneous firing. When a neuron fires, spontaneously or not, it is less likely to fire again due to fatigue. More importantly, spontaneous firing may lead to other neurons firing; thus many neurons may be fatigued. These fatigued neurons will be less likely to fire when the wave passes through. The dynamics of learning are also influenced by spontaneous firing. Anti-Hebbian learning rules reduce the strength of a connection when one neuron connected to a synapse fires and the other does not. Under anti-Hebbian learning rules (Shouval and Perrone, 1995), spontaneously fired neurons may reduce the strength of already strong connections. Thus activated neurons will be less likely to activate other neurons, leading to a reduced height of the wave.
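The combinatorial core of this argument can be made concrete with a back-of-the-envelope count. The sketch below is our illustration, not the authors' analysis; the 100-neuron patch and the chosen firing counts are arbitrary. It counts the distinguishable firing patterns when exactly k of n neurons fire, a quantity that collapses to a single pattern (zero bits) when no neuron or every neuron fires and is maximised at intermediate firing levels.

```python
from math import comb, log2

# Distinguishable firing patterns in a patch of n neurons when exactly
# k of them fire: C(n, k).  Information capacity in bits is log2 of that.
n = 100
for k in (0, 10, 50, 90, 100):
    patterns = comb(n, k)
    bits = log2(patterns) if patterns > 1 else 0.0
    print(f"{k:3d}/{n} neurons firing: {bits:6.1f} bits")
```

Running this gives roughly 0 bits at k = 0 and k = 100 but about 96 bits at k = 50, matching the claim that some neurons must stay silent for the wave to carry information.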
3 Recruitment of Neurons to a CA

Beurle's wave model is related to Hebb's CA hypothesis. CAs are reverberating circuits of neurons. The idea comes from Donald Hebb (1949), who proposed the model as a mechanism for short and long-term memory, for perception, for separating figure and ground, and many other problems. A CA is a large number of neurons with many connections between them. The activation within the loop is self-sustaining. Once activated by external input, the CA as a whole remains active longer than any individual neuron within it, due to this self-reactivation. The wave model relates to reverberating cycles, but cycles may exist without waves. A CA is the neural correlate of a concept.

3.1 Recruiting Nearby Neurons

CAs form by a Hebbian learning mechanism. When two neurons fire simultaneously, the connection between them, if it exists, is strengthened. This local learning rule allows groups of neurons to activate each other. These learned groups are CAs.

If a neuron is not part of a CA, it will only rarely, if ever, be fired by activation from other neurons. This is because it has only weak connections from other neurons. (If it had strong connections, it would be part of a CA.) Since it has only weak connections, it never gets enough activation to surpass the firing threshold. If a neuron never fired it would be useless. If, on the other hand, neuron A fires spontaneously, it may fire at the same time as neuron B. Assume neuron B is connected to A and is in a CA. When they both fire, the connection between the two neurons is strengthened. If this happens often enough, neuron A becomes part of the CA. Thus spontaneous firing of a neuron allows it to be recruited into a CA.

3.2 Post-Hebbian Learning Rule

In the developing brain, some areas may receive little or no activation from other areas or from the environment. Later in development, these areas may become part of CAs. This may relate to Piagetian stages (Inhelder and Piaget, 1958). This lack of external activation may also occur with single neurons that are initially poorly connected. A mechanism that enables these weakly connected neurons to develop some connections without external activation would make computational sense. There is a large amount of neural cell death in infancy; while this may be related to lack of stimulus, it is very inefficient to kill off these neurons.

Initially it might appear that spontaneous activation would lead to zero-valued excitatory weights, and inhibitory weights tending to infinity, in a portion of the network that does not receive external stimulus. However, post-Hebbian learning rules keep the weights at a neutral value. This neutral value might allow the neurons to be easily recruited into CAs when external stimulus eventually arrives.

When no CAs exist in an area and there is spontaneous activation, naive learning rules can lead to all excitatory weights going to zero. Anti-Hebbian learning rules are a model of long-term depression (LTD). When one neuron fires and an adjacent neuron does not fire, the weights of the connecting synapses are reduced. LTD is used to prevent the synaptic weights from growing without bound. When neurons are far from CAs they will be stimulated mostly by spontaneous activation. Since only a small percentage of neurons are stimulated, it will rarely be the case that adjacent neurons are active together. Thus, LTD will be applied much more frequently than long-term potentiation (LTP).

There are two standard types of LTD: pre-not-post and post-not-pre. Both encourage weights going to zero. Pre-not-post LTD occurs when the pre-synaptic neuron is active and the post-synaptic neuron is inactive. The spontaneously activated neuron fires, and none of those it connects to fires; all of its synaptic weights will be reduced and will tend toward zero. Post-not-pre LTD occurs when the post-synaptic neuron is active and the pre-synaptic neuron is inactive. The spontaneously activated neuron fires and none of the neurons that feed it fires, so all the incoming synapses have their weights reduced, again tending toward zero. The situation is even worse where inhibitory neurons are concerned. Extending LTD to inhibitory neurons increases inhibition when one neuron fires and the adjoining neuron does not. This means that inhibition will increase without limit.

Post-Hebbian learning rules (Shouval and Perrone, 1995) allow the weights to avoid going to zero. In the standard Hebbian (LTP) and anti-Hebbian (LTD) learning rules, the change of weights is a constant. Hebb proposes the rule $\Delta w = j a_i a_j$, where $j$ is a constant and $a_i$ and $a_j$ are the activations of the neurons. In our learning rule, learning only occurs when neurons fire, so the activation levels can be taken to be 1. In post-Hebbian learning rules, the change may instead be based on the current synaptic strength. When the strength is high the increase from LTP is small, but when it is low the increase from LTP is large. Similarly, when the strength is low, the decrease from LTD is small, and when the strength is high, the decrease is large. In a network where the only stimulation comes from spontaneous activation, all weights will therefore tend toward a neutral value. This argument is tested with a neural simulation, fully described in section 4.1.
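To make the argument concrete, here is a minimal sketch of a weight-dependent update under purely spontaneous firing. The multiplicative form, the ceiling W_MAX, and all parameter values are our illustrative assumptions, not the exact rule used in the simulations below.

```python
import random

W_MAX = 1.0     # ceiling for an excitatory weight (assumed)
RATE = 0.1      # learning rate (assumed)
P_SPONT = 0.05  # per-step chance of spontaneous firing (assumed)

def update(w, pre_fired, post_fired):
    """Weight-dependent LTP/LTD: step sizes depend on the current weight."""
    if pre_fired and post_fired:     # LTP: large step when w is small
        w += RATE * (W_MAX - w)
    elif pre_fired or post_fired:    # LTD (pre-not-post or post-not-pre):
        w -= RATE * w                # large step when w is large
    return w

random.seed(0)
w = 0.9
for _ in range(20000):
    pre = random.random() < P_SPONT
    post = random.random() < P_SPONT
    w = update(w, pre, post)

# The expected change is zero where p*p*(W_MAX - w) = 2*p*(1-p)*w,
# i.e. w* = p/(2-p), about 0.026 for p = 0.05: a small but nonzero
# neutral value rather than zero or unbounded growth.
print(f"weight after 20000 steps: {w:.3f}")
```

Under a constant-step rule the same run would instead drive the weight to zero, since LTD events outnumber LTP events by roughly 2(1-p)/p to one.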
4 Experiments

We have implemented several simple CA models in software (for a further description see Huyck (2000; 2001)). These models have a small number of neurons (hundreds), but show how important spontaneous neural firing is for recruiting neurons into CAs.

Our models are based on leaky integrator neurons. Each neuron has activation, which it receives from other neurons or the environment. If the neuron has enough activation to exceed the firing threshold, it fires and sends activation down its synapses. If the neuron fires, it loses all of its activation. If the neuron does not fire, it loses some but not all of its activation; the activation leaks away. Each neuron has many synapses and each synapse connects to another neuron. Through learning, the strength of a synapse may change; this models LTP and LTD. Activation is passed to the post-synaptic neuron when the pre-synaptic neuron fires; the strength of the synapse is the amount of activation that is passed. The neurons are connected in a distance-biased fashion; that is, a neuron is more likely to be connected to a nearby neuron than a distant one.
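In outline, such a neuron might look like the following sketch. The class and parameter values are our own illustration; the paper does not give its exact parameter settings, and spontaneous firing is simplified here to a fixed per-cycle probability.

```python
import random

THRESHOLD = 1.0   # firing threshold (assumed value)
DECAY = 0.5       # fraction of activation kept when not firing (assumed)
P_SPONT = 0.01    # per-cycle chance of spontaneous firing (assumed)

class Neuron:
    def __init__(self):
        self.activation = 0.0
        self.synapses = []              # (target Neuron, weight) pairs

    def step(self, incoming):
        """Integrate one cycle's input and decide whether to fire."""
        self.activation += incoming
        fired = (self.activation >= THRESHOLD
                 or random.random() < P_SPONT)
        if fired:
            self.activation = 0.0       # firing drains all activation
        else:
            self.activation *= DECAY    # unused activation leaks away
        return fired

random.seed(1)
n = Neuron()
print(n.step(0.6))  # False: 0.6 is below threshold and leaks to 0.3
print(n.step(0.8))  # True: 0.3 + 0.8 exceeds the threshold of 1.0
```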
In this paper we describe two experiments. The first runs an untrained network for many cycles and measures the change in connection strengths; it compares post-Hebbian learning with standard Hebbian learning. As discussed in section 3.2, this experiment shows that connection weights settle at reasonable values when post-Hebbian learning is used and at unreasonable values when standard learning is used. In the second experiment, two patterns are learned. When spontaneous activation of neurons is added, neurons that are unaffiliated with a pattern/CA are recruited into the CAs.

4.1 Experiment 1: Spontaneous Activation and Synaptic Weight

In this experiment, the initially untrained network is allowed to run for 1200 cycles. No input pattern is presented to the network, but each neuron has a one percent chance of being spontaneously activated. The network is run in two different modes. In the first mode a post-Hebbian learning rule is used, and in the second mode a standard Hebbian learning rule is used. The post-Hebbian rule modifies the strength of the synapse depending on the existing strength of the synapse. The standard Hebbian rule modifies the strength by a constant.

The results of the test are shown in figure 1.

Figure 1: Average Synaptic Strengths

The vertical axis refers to total synaptic strength. The horizontal axis refers to the number of cycles run. For both modes there are two lines; the first is the sum of all the excitatory weights, and the second is the sum of all the inhibitory weights. For both modes, the initial weights are the same. These initial weights are somewhat arbitrary, and the same results should occur for most initial weight settings. In the post-Hebbian case the excitatory weights stay stable from the start and the inhibitory weights converge around cycle 1200. In the Hebbian case, the excitatory weights get near to zero within 500 cycles, and the inhibitory weights decrease without limit.

With post-Hebbian learning, excitatory weights decrease due to LTD. However, as the weights are already small, the decrease is small. Occasionally, the neurons will be spontaneously activated together and the strength will increase due to LTP. This increase will be large because the synaptic strength is small. This interplay between LTP and LTD allows the synapse to settle at a neutral weight. This experiment verifies the argument proposed in section 3.2: spontaneous activation is a problem for learning based on constant weight changes, but is not a problem for post-Hebbian learning rules.

4.2 Experiment 2: Spontaneous Activation and CA Spread

In this experiment, neural patterns are presented to the initial net. Varying amounts of spontaneous activation allow unaffiliated neurons to be recruited into CAs. There is a 200 by 200 grid of neurons and two different patterns: the first pattern is a band of neurons between columns 30 and 70; the second pattern is a band of neurons between columns 130 and 170. These patterns are modelled by directly activating neurons. In a given run, only some of the neurons in the pattern are activated externally. After repeated presentation, the net has learned the patterns. It will classify a pattern by activating the entire CA; almost all neurons are activated via internal reverberation once the CA has formed.

Figure 2: Neuron Column Activation Rate

Figure 2 shows the frequency of firing of the cells in a cross section through the rectangular grid of cells. The solid line represents average neural activation when there is no spontaneous activation. The key point is that there is no activation outside of the pattern. (For measurement purposes, spontaneous activation was turned off for testing.) The dotted line shows the behaviour when 5 percent of the neurons are spontaneously activated during training: many neurons outside the pattern are being activated, and thus have been recruited into the CA. The dashed line shows a further increase when 10 percent are spontaneously activated.

This experiment indicates that CAs do in fact recruit cells at their edges which receive no activation other than spontaneous activation. Clearly, increasing the probability of spontaneous activation, Pr(SA), encourages recruitment, as the cell assemblies start to spread into the territory of uncommitted cells. However, as Pr(SA) is increased beyond about 0.3, the cell assemblies extend so far that cells tend to be recruited into both at the same time. This results in a join between the cell assembly for pattern A and that for pattern B, so that they effectively fuse into one large CA. Increasing the spacing between the patterns reduces the likelihood of this happening.
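The training input for this experiment can be sketched as follows. The grid size and band columns follow the text; the helper name, the 50% presentation rate, and the way noise is mixed in are our own illustrative assumptions.

```python
import random

GRID = 200                   # 200 by 200 grid of neurons
PATTERN_A = range(30, 71)    # band of columns 30-70
PATTERN_B = range(130, 171)  # band of columns 130-170

def externally_driven(col, pattern, p_spont):
    """Is the neuron in this column activated on this training cycle?"""
    # Only some pattern neurons are activated on a given presentation.
    in_pattern = col in pattern and random.random() < 0.5
    # Spontaneous activation (0.0, 0.05 or 0.10 in the runs reported
    # above) applies everywhere during training but is off at test time.
    return in_pattern or random.random() < p_spont
```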
4.3 Experiment 3: CA Ignition from Spontaneous Activation

It is perfectly reasonable that spontaneous activation alone should be capable of igniting a CA. Often, when walking down the street, ideas `pop into one's head' totally unbidden, and it is quite possible that this occurs as the result of the spontaneous activation of a CA. A further experiment was therefore designed to discover whether a previously trained CA could be activated solely by the presence of spontaneous activity occurring in cells forming part of the CA.

A grid of 20-by-20 cells, each with a connectivity of 6 to others within its immediate neighbourhood, was trained as follows. Cells within the central 10-by-10 square of the grid each had a 20% chance of being activated at each time step. It was found that no more than 50 time steps were required for a stable CA to form. No spontaneous activation of the cells outside the central square was permitted during training, although these cells could, of course, be stimulated through their connections.

After training, the net was run taking spontaneous activation of cells as its only input; all the cells had a specified spontaneous activation probability at each time step. Experiments were repeated 100 times. The number of time steps before the CA ignited was noted, ignition being taken as the point when the activity level within the region where the CA had previously formed rose above 70% of the activity level in the CA directly after training. Figure 3 shows the average number of time steps for CA ignition.

Figure 3: Time steps required for CA ignition

The CA did not ignite in all the test runs, and the proportion of test runs where ignition did occur was noted for all spontaneous activation probabilities. Figure 4 shows the likelihood of CA ignition.

Figure 4: Frequency of spontaneous CA ignition

Repeated experiments showed that the length of time required before CA ignition, and the probability of that ignition in the first place, depended strongly on the spontaneous activation probability of the cells. Unsurprisingly, the greater the level of spontaneous activation, the greater the proportion of trial runs in which CA ignition occurred. When the probability rose above 0.01, ignition of the CA was virtually guaranteed.
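The steep dependence on the spontaneous activation probability is what a simple coincidence model would predict. The following toy calculation is ours, not the paper's analysis: assume the trained CA ignites whenever at least K of its N cells happen to fire spontaneously on the same time step, with N = 100 as in the central square and K = 5 an arbitrary illustrative choice.

```python
from math import comb

N, K = 100, 5  # CA size; K coincident spontaneous firings assumed to ignite it

def per_step_ignition_chance(p):
    """Binomial tail: chance that K or more of the N cells fire this step."""
    return sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(K, N + 1))

for p in (0.001, 0.005, 0.01, 0.02):
    q = per_step_ignition_chance(p)
    print(f"p = {p}: {q:.2e} per step, expected wait ~{1/q:,.0f} steps")
```

Even this crude model reproduces the threshold-like behaviour: the expected wait falls from millions of steps at p = 0.001 to a few hundred at p = 0.01 and a few dozen at p = 0.02.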
5 Discussion and Conclusion

Neurons spontaneously fire. This has positive ramifications for a theory of neural processing. Spontaneous firing should lead to more information content in neural waves. This is due to the dynamics of firing and of learning being influenced by spontaneous firing: both of these processes make it less likely that all of the neurons in a neural wave will fire, and a wave in which not every neuron fires carries more information.

Spontaneous firing also improves the dynamics of neural recruitment in CAs. Our experiments show that spontaneous firing can lead to more robust CAs. The first experiment shows that spontaneous activation stabilises the synaptic weights of areas of neurons that receive no external stimulation. This may support colonisation of these areas by new CAs. The second experiment shows that spontaneous activation is essential to recruiting neurons into nearby CAs: neurons near to CAs are easily recruited when there is spontaneous activation, and remain isolated without it. The third experiment showed that spontaneous activity could ignite CAs. This indicates that concepts may simply become active spontaneously. In a larger system this spontaneous ignition would often need to be suppressed, perhaps by inhibition from active CAs.

Long-term CA dynamics will include CA formation, neural recruitment, and CA fractionation. Concepts form, evolve, and break into sub-concepts. Though it is not the sole reason for these operations, spontaneous neural activation may facilitate all of them. CAs are by their very nature dynamic. The number of neurons in a CA will increase and decrease, and neurons may have only partial membership in a CA. One of the cornerstones of CA theory is that neurons may participate in multiple CAs (Sakurai, 1998). Future work will be needed to explore these dynamics, but they will almost certainly be influenced by spontaneous neural activation.

This work is by no means conclusive. It does give a computational rationale for the neurophysiological phenomenon of spontaneous neural activation, and it shows how spontaneous firing might lead to better performance in real neural systems. This work provides extra evidence for the CA hypothesis, and provides an example of how computational modelling with information theoretic goals can inform neurophysiology and neuropsychology. While we are a long way from a complete understanding of how neurons generate behaviour, this work is another contribution toward that understanding.

References

Abeles, M., H. Bergman, E. Margalit, and E. Vaadia (1993). Spatiotemporal Firing Patterns in the Frontal Cortex of Behaving Monkeys. Journal of Neurophysiology 70(4):1629-38.

Beurle, R. L. (1956). Properties of a Mass of Cells Capable of Regenerating Pulses. Reprinted in Brain Theory: Reprint Volume, G. Shaw and G. Palm (eds.), World Scientific Publishing Co. (1988). ISBN 9971504847.

Bevan, M. D. and C. J. Wilson (1999). Mechanisms Underlying Spontaneous Oscillation and Rhythmic Firing in Rat Subthalamic Neurons. Journal of Neuroscience 19:7617-7628.

Hebb, D. O. (1949). The Organization of Behavior. John Wiley and Sons, New York.

Huyck, C. (2000). Modelling Cell Assemblies. Proceedings of the International Conference on Artificial Intelligence, pp. 891-7. ISBN 1-892512-59-9.

Huyck, C. (2001). Cell Assemblies as an Intermediate Level Model of Cognition. In Emerging Neural Architectures Based on Neuroscience, W. Horn, J. Austin and D. Willshaw (eds.).

Inhelder, B. and J. Piaget (1958). The Growth of Logical Thinking from Childhood to Adolescence. Basic Books, New York.

Rosch, E. and C. Mervis (1975). Family Resemblances: Studies in the Internal Structure of Categories. Cognitive Psychology 7:573-605.

Sakurai, Y. (1998). The Search for Cell Assemblies in the Working Brain. Behavioural Brain Research 91:1-13.

Shouval, H. Z. and M. P. Perrone (1995). Post-Hebbian Learning Rules. In The Handbook of Brain Theory and Neural Networks, M. Arbib (ed.), MIT Press, pp. 745-8.

Singer, W. (1995). Development and Plasticity of Cortical Processing Architectures. Science 270:758-64.

Stryker, M. P. and W. A. Harris (1986). Binocular Impulse Blockade Prevents the Formation of Ocular Dominance Columns in Cat Visual Cortex. Journal of Neuroscience 6:2117-2133.

Zeki, S. (1993). A Vision of the Brain. Blackwell Sciences, Oxford.