\documentclass{elsart}
\usepackage{graphics}
\begin{document}
\title {Overlapping Cell Assemblies from Correlators}
\author{Christian R. Huyck \hspace {0.5 in}Middlesex University, UK}
\maketitle
\section {Introduction and Background}
The Cell Assembly (CA) \cite {Hebb} is a central concept in
computational neuropsychology and cognitive science in general.
The basic idea is that what we consider concepts are stored in the
brain by reverberating neural circuits called CAs. Neurons that are
connected by synapses with large strengths are the basis of CAs, and
the CA is activated by some of its neurons firing. These neurons
then cause other neurons in the CA to fire, leading to a cascade that
ignites the CA and establishes a reverberating circuit that can remain
active longer than any single neuron could.

Two cornerstones of CA theory are that Hebbian learning is the major
type of learning in the brain, and neurons participate in multiple CAs
\cite {Sakurai}. Hebb's simple rule leaves a wide range of possible
interpretations. There is biological evidence that Hebbian learning
occurs in the brain \cite {Markram}, but the current state of
neurophysiology does not provide the computational basis of this rule.
Also, because neurons participate in multiple CAs, more CAs can exist,
and a wider range of interrelations between CAs becomes possible.
These interrelations are needed to form associative memories,
cognitive maps, and more sophisticated structures.

This paper explores learning rules that allow overlapping CAs to form.
It starts with a learning rule based on correlation, and then
extends this with a compensatory modifier that allows CAs to form on a
wide range of patterns and to form overlapping CAs. Simulations are
described to support this claim.
\section {Hebbian Learning Makes Synapses Correlators}
The Hebbian learning rule states that if node A is connected to node
B, and both are activated, then the strength of the connecting synapse
is increased. The logical extension of the Hebbian learning rule is
that synaptic weights should reflect how often the postsynaptic neuron
fires when the presynaptic neuron fires. We have developed a learning
rule under which the synapse tends towards a linear correlation
value: the synaptic weight bears a linear relation to how often the
postsynaptic neuron fires when the presynaptic neuron does. The
formal proof of the learning rule can be found in \cite {Huyck} along
with a longer description of the effect of different neural properties
on synaptic weights.
$$\Delta^+ w_{ij} = (1 - w_{ij}) \cdot R \hspace {0.2 in} Eq. \hspace{0.1 in}1
\hspace {0.6 in}
\Delta^- w_{ij} = -w_{ij} \cdot R \hspace {0.2 in} Eq. \hspace{0.1 in}2 $$
Equation 1 represents the increase when both the pre- and postsynaptic
neurons fire, and Equation 2 represents the decrease when the presynaptic
neuron fires and the postsynaptic neuron does not. {\it R} is a learning
constant between 0 and 1, and $w_{ij}$ is the synaptic weight between neurons
{\it i} and {\it j}. The rule takes advantage of the current synaptic weight as an
approximation of the correlation to force the weight towards the real
correlation value.
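In code, the update could be sketched as follows (a minimal sketch; the function name and parameters are illustrative, not the CANT simulator's API):

```python
def update_weight(w, r, post_fired):
    """Correlational update, applied whenever the presynaptic
    neuron fires.  w is the current weight in [0, 1]; r is the
    learning constant R in (0, 1); post_fired says whether the
    postsynaptic neuron also fired."""
    if post_fired:
        return w + (1.0 - w) * r  # Equation 1: move weight toward 1
    return w - w * r              # Equation 2: move weight toward 0
```

If the postsynaptic neuron fires with probability $p$ whenever the presynaptic neuron fires, the expected change per update is $R(p - w)$, so the weight drifts toward the correlation value $p$ and fluctuates around it.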

Hebbian learning makes synapses mere correlators, but biological
neurons show that synapses are more. Neurons spread activation based
on synaptic weights; this tends to increase the weights, and
reverberation increases them markedly. Neurons also fatigue, which
tends to reduce synaptic weights. Fatigue stops CAs; since active CAs
are short-term memory items, fatigue can automatically remove items
from short-term memory. Synapses spread activation from the firing
presynaptic neuron to the postsynaptic neuron. This spread allows
neural firing without external environmental stimulus. Spreading
activation enables a reverberating circuit that maintains a concept in
working memory, has completion effects, and passes information to
other CAs.

The primary function of CAs is categorisation. A stimulus pattern is
presented, activation spreads from these externally stimulated neurons
to other neurons, and if there is enough activation the CA is ignited,
categorising the stimulus as an instance of that CA. A CA is formed
by Hebbian learning increasing the intra-CA synaptic strengths. This
enables the CA to be a reverberating circuit, but reverberation changes
the connections from correlation measures into the primitives of a
pattern-attracting system. The stronger the connection, the more
likely two neurons are to be in the same CA.
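The ignition process can be illustrated with a toy sketch (all names and parameter values here are hypothetical, not taken from the simulator described below): threshold units fire when the activation arriving from currently firing neurons reaches a threshold, and a partial stimulus either cascades into full ignition or dies out.

```python
def ignite(weights, stimulated, threshold=0.5, steps=10):
    """Spread activation until the set of firing neurons is stable.
    weights[i][j] is the synaptic weight from neuron i to neuron j;
    stimulated is the set of externally activated neurons, which
    keep firing throughout."""
    firing = set(stimulated)
    for _ in range(steps):
        nxt = set(stimulated)
        for j in range(len(weights)):
            # a neuron fires if input from firing neurons reaches threshold
            if sum(weights[i][j] for i in firing) >= threshold:
                nxt.add(j)
        if nxt == firing:  # stable state reached
            return firing
        firing = nxt
    return firing
```

With strong intra-CA weights, stimulating half the group ignites the whole of it; with weak weights the stimulus is not categorised and activity stays where it started.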
\section {CAs with Neurons Participating in Multiple CAs}
\label {sec-CA}
A fundamental tenet of CA theory is that neurons can participate in
more than one CA \cite {Sakurai} allowing more CAs per neuron and more
co-operation between CAs. If neurons can participate in multiple CAs,
it is possible to have more CAs than neurons \cite {Wickelgren}. CAs
can communicate by neurons from one CA sending activation through
their synapses to neurons in another CA, but when neurons are shared,
even more long- and short-term information is communicated.

There have been many simulations of CAs that have created
non-overlapping or orthogonal CAs, e.g. \cite {Hetherington,Lansner},
but it is more difficult to have neurons included in multiple CAs.
With overlapping CAs, when one CA ignites, the neurons that are
included in both CAs can easily ignite the other CA as well.

The compensatory learning rule solves this problem, and it also solves
the problem of learning CAs for sparse patterns. It is a variant of
the Hebbian learning rule that is still based only on properties of
the pre- and postsynaptic neurons, but it also considers the total
strength of all the synapses leaving the neuron. The compensatory
modifier, described by Equations 3 and 4, is multiplied by the
correlational gain from Equations 1 and 2.
$$\Delta^+_{mod} w_{ij} = 5^{(W_B-W_i)/C} \hspace {0.2 in}
Eq. \hspace{0.1 in}3 \hspace{0.7 in}\Delta^-_{mod} w_{ij} =
5^{(W_i-W_B)/C} \hspace {0.2 in} Eq. \hspace{0.1 in}4 $$
$W_B$ is a constant representing the desired total synaptic strength
of a neuron, $W_i$ is the current total synaptic strength of neuron
{\it i}, and {\it C} is a constant regulating the variance about
$W_B$. If the total strength is below $W_B$, increases are amplified
and decreases are damped; if it is above $W_B$, the reverse holds.
These rules force total synaptic strength toward $W_B$.
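Combining the two pairs of equations, a single weight update might be sketched as follows (again illustrative; the constant values used below are assumptions):

```python
def compensatory_update(w, r, post_fired, w_total, w_b, c):
    """Correlational update (Equations 1 and 2) multiplied by the
    compensatory modifier (Equations 3 and 4).  w_total is the
    neuron's current total synaptic strength W_i; w_b and c are
    the constants W_B and C."""
    if post_fired:
        # below-target neurons get amplified increases (Equation 3)
        return w + (1.0 - w) * r * 5.0 ** ((w_b - w_total) / c)
    # above-target neurons get amplified decreases (Equation 4)
    return w - w * r * 5.0 ** ((w_total - w_b) / c)
```

A neuron whose total strength is below $W_B$ sees its increases amplified and its decreases damped, so its weights can rise above pure correlation values; a neuron shared between CAs, whose correlational total would exceed $W_B$, sees the opposite.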

Sparse patterns cannot form CAs with just simple correlational
learning because there is not enough activation to cause CA ignition.
When neurons participate in only one CA, compensatory learning
increases weights above correlational values, enabling CAs for a
wider range of patterns.

When neurons participate in multiple CAs, their total synaptic weight
based on correlation is higher. The compensatory rule reduces the
weights, making CAs compete for synaptic strength. Though Wickelgren
\cite {Wickelgren} has simulated CAs in which neurons participate in
multiple CAs, there has been little other work.

Using the CANT simulator \cite {Huyck}, CAs have been learned with
biologically plausible neurons. The networks consist of a $20 \times
20$ grid of leaky integrate-and-fire neurons that fatigue. Neurons
are connected in a distance-biased fashion, with nearby neurons more
likely to have connections than distant ones. The top row is adjacent
to the bottom row, and the left column to the right column, giving a
toroidal topology that avoids boundary effects. All
learning is done by the compensatory learning rule. 20\% of the
neurons are inhibitory and the inhibitory variants of the learning
rules force synapses between co-active neurons toward 0 and
contradictory pairs toward {$-W_B$}.
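The distance-biased toroidal wiring could be sketched like this (the connection-probability function and its parameters are assumptions for illustration, not CANT's actual values):

```python
import random

GRID = 20  # 20x20 grid of neurons

def torus_distance(a, b, size=GRID):
    """Wrap-around distance between grid cells, so the top row is
    adjacent to the bottom and the left column to the right."""
    dr = abs(a[0] - b[0]); dr = min(dr, size - dr)
    dc = abs(a[1] - b[1]); dc = min(dc, size - dc)
    return dr + dc  # Manhattan distance on the torus

def connect(prob_scale=0.8, falloff=2.0):
    """Wire the grid with distance-biased random connections:
    nearby neurons are more likely to be connected than distant ones."""
    cells = [(r, c) for r in range(GRID) for c in range(GRID)]
    synapses = {cell: [] for cell in cells}
    for pre in cells:
        for post in cells:
            if pre == post:
                continue
            p = prob_scale / torus_distance(pre, post) ** falloff
            if random.random() < p:
                synapses[pre].append(post)
    return synapses
```

The inverse-power falloff is one plausible choice; any monotonically decreasing function of toroidal distance gives the same qualitative bias.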

Each net learned two types of patterns, with each training cycle
externally activating 20 neurons randomly selected from the pattern.
Training presented alternating types of patterns to the net, with no
spread of activation, for 3000 cycles. Simulations were run with
between 0 and 100\% overlap. In the orthogonal case (0\% overlap),
the patterns were either the top or bottom 10 rows, with overlap added
in the middle until at 100\% overlap there is one pattern.

The test presented one pattern for 10 cycles, then compared the
resulting network state with the state after presenting a different
pattern. The measurement is Pearson's product-moment correlation
coefficient computed over binary records of whether each neuron
fires. Figure 1
shows a series of comparisons of trained nets averaged over 10
simulations. The Intra-CA line is a comparison between different runs
with instances of the same pattern, and shows that at least one CA is
formed. The Intra-CA Pearson value stays positive but descends
roughly linearly as overlap increases due to a smaller percentage of
neurons in the CA becoming active. As overlap increases, the size of
the CA grows; the same number of neurons activate after ignition, but
it is more likely that they will be different neurons on different
runs.
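The measurement above is the ordinary Pearson coefficient applied to binary firing records; a minimal sketch (the function name is ours):

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length
    sequences, here binary per-neuron firing records (1 = fired)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Identical firing patterns give $+1$, complementary patterns give $-1$, and partial overlap falls in between, matching the Intra-CA and Cross-CA curves.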
\begin{figure}
\resizebox{\textwidth}{!}{\includegraphics{overlap.eps}}
\centerline{{\bf Figure 1:} Pearson Measurements.}
\end{figure}

The dotted Cross-CA line is a comparison between runs presenting the
top pattern and runs presenting the bottom pattern. In the orthogonal
case the correlations are negative, showing two independent CAs, but
as the overlap grows, the network starts to develop one CA instead of
two. It is sensing that both types of patterns are just variants of
the same pattern. In this experiment that happens at around 40\%
overlap.

Correlational learning with a compensatory modifier enables
overlapping CAs to be learned. Moreover, CAs have been learned with a
wide range of overlap.
\section {Discussion and Conclusion}
CAs may be the neural basis of human concepts and thus are crucial to
our understanding of human intelligence. Studying CAs from simulated
neurons enables us to gain an understanding of how they are learned
and how they behave. CAs are pseudo-stable states similar to those in
Hopfield nets \cite {Hopfield} and we are trying to understand how these
pseudo-stable states are reliably formed.

The particular Hebbian rule that is used is crucial to the development
of these pseudo-stable states. This paper began by basing the learning
rule on a linear correlator, then extended it with a compensatory
modifier that allows multiple overlapping stable states. The
compensatory modifier
forces neurons that participate in two CAs to have lower average
synaptic weights to both because they are competing for that weight.
Lower weights reduce the likelihood that both CAs will be
simultaneously activated. At some degree of overlap, the patterns
merge into one CA with the system attributing the extra variance to
noise.

CAs can also communicate with each other to form more complex
structures such as sequences and semantic nets \cite {Fransen}.
Theoretically, they can even be bound together to form rules and
short-term associations. Thus overlapping CAs can
form the basis of sophisticated cognitive systems.

{\bf Acknowledgements:} This work was supported by EPSRC grant GR/R13975/01.
\begin {thebibliography}{99}
\bibitem {Fransen} Fransen, E. {\it et al.} (1992) A
Model of Cortical Associative Memory Based on Hebbian Cell
Assemblies. In {\em Connectionism in a Broad Perspective}, Niklasson,
L. and M. Boden, eds. Ellis Horwood Springer-Verlag, pp. 165-171.
\bibitem {Hebb} Hebb, D.O. (1949) The Organization of Behavior. J. Wiley \& Sons,
New York.
\bibitem {Hetherington} Hetherington, P. A., and M. Shapiro. (1993)
Simulating Hebb cell assemblies: the necessity for partitioned dendritic trees
and a post-net-pre LTD rule. {\em Network: Computation in Neural Systems} 4:135-153.
\bibitem {Hopfield} Hopfield, J. (1982) Neural Networks and Physical
Systems with Emergent Collective Computational Abilities. {\em
Proc. of the Nat. Academy of Sciences USA} 79:2554-2558.
\bibitem {Huyck} Huyck, C. (2002) Cell Assemblies and Neural Network
Theory: From Correlators to Cell Assemblies.
Middlesex U. Tech. Rep. ISSN:1462-0871 CS-02-02
\bibitem {Lansner} Lansner, A. and E. Fransen. (1992)
Modelling Hebbian cell assemblies comprised of cortical neurons.
{\em Network: Computation in Neural Systems} 3:105-119.
\bibitem {Markram} Markram, H., J. Lubke, M. Frotscher, and B. Sakmann. (1997)
Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs.
{\em Science} 275:213-215.
\bibitem {Sakurai} Sakurai, Y. (1998) The search for cell assemblies
in the working brain. {\em Behavioural Brain Research} 91:1-13.
\bibitem {Wickelgren} Wickelgren, W. A. (1999) Webs, Cell Assemblies, and
Chunking in Neural Nets. {\em Canadian Journal of Experimental Psychology}
53(1):118-131.
\end {thebibliography}
\end {document}