The CABots and developing simulated neural agents
by Chris Huyck

Over the past several years, we have developed several agents
implemented entirely in simulated neurons.  These Cell Assembly
robots, or CABots, are situated in a simple virtual environment, and
can work with a user in that environment.  The most recent, CABot3,
processes text commands from the user and uses environmental feedback
to learn the meaning of those commands, both in psychologically
plausible ways; views the environment with several components that map
directly to brain areas; makes and executes simple plans; and learns
simple spatial cognitive maps.
These agents are quite simple, but each is a further step in the
development of neural agents.


The agents have three important constraints: a neural implementation,
psychological accuracy, and the environment.  All three constraints
are imperfectly met, but together form a solid basis for continued
development that we hope will avoid unproductive research dead ends.
A neural implementation requires an accurate model of neurons and
neural connectivity that can be simulated efficiently.  Our fatiguing
Leaky Integrate-and-Fire model, running with 10ms time slices, is a
reasonable trade-off.  We approach psychological accuracy through the use
of cell assemblies and the development of cognitive models.  The agents
are not complete models of cognition, but show steady progress.  Finally,
interacting with the environment forces the agent to sense and behave
rapidly and efficiently.  The virtual environments are simple, but
they have become more sophisticated with each agent.
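The discrete-time dynamics of a fatiguing Leaky Integrate-and-Fire
neuron can be sketched roughly as follows.  This is a minimal
illustration, not the CABot implementation: the class name, the update
rule, and all parameter values are assumptions chosen for clarity.

```python
# Minimal sketch of a fatiguing Leaky Integrate-and-Fire (fLIF) neuron,
# stepped in discrete time slices (the CABots use 10ms slices).
# All constants and the exact fatigue rule are illustrative assumptions.

class FLIFNeuron:
    def __init__(self, threshold=4.0, decay=2.0,
                 fatigue_up=1.0, fatigue_down=0.5):
        self.threshold = threshold        # base firing threshold
        self.decay = decay                # leak divisor applied each slice
        self.fatigue_up = fatigue_up      # fatigue gained on firing
        self.fatigue_down = fatigue_down  # fatigue recovered when silent
        self.activation = 0.0
        self.fatigue = 0.0

    def step(self, input_current):
        """Advance one time slice; return True if the neuron fires."""
        # Leak: retained activation shrinks each slice, then input is added.
        self.activation = self.activation / self.decay + input_current
        if self.activation >= self.threshold + self.fatigue:
            # Fire: reset activation and accumulate fatigue, making it
            # harder for the neuron to fire again in the near future.
            self.activation = 0.0
            self.fatigue += self.fatigue_up
            return True
        # Silent: fatigue recovers toward zero.
        self.fatigue = max(0.0, self.fatigue - self.fatigue_down)
        return False
```

Under constant input, a neuron like this fires for a few slices and
then falls silent as fatigue raises its effective threshold, which is
the behaviour that lets cell assemblies ignite and then quiesce.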

At the workshop we will describe the agents, the constraints, how the
constraints have been met, and future work.  This may lead to
discussions about good next steps. The agent and virtual environment
both run on a standard PC, so they may provide a basis for
collaboration.