Cell Assembly Robots: Vision and Cognitive Modelling
Chris Huyck, Middlesex University (www.cwa.mdx.ac.uk/CABot/CABot.html)
April 2009

We have dubbed our current EPSRC project the CABot project. Its aim is to develop an agent, built from simulated neurons, that behaves in a simulated environment. We have already built two prototype Cell Assembly roBots, CABot1 and CABot2; CABot3 is currently being developed. These agents receive dynamic visual input from the environment and are directed by natural language commands from a user who directly controls another agent in the environment. The CABots maintain their own plans. In CABot1, and hopefully CABot3, all of this is done solely with simulated neurons.

In the talk, I will describe the agents and how they were developed. Because the agents are large, the talk will focus on the vision subsystems. These include a simulated retina of on-off receptors, a primary visual cortex of line, angle and edge detectors, and texture detection via grating cells; their outputs are used as input to the object recognition subnet. The natural language parser is a reasonable psycholinguistic model. The use of the agents, and of the underlying neural technology, as the basis of a cognitive model will also be explored.
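
To make the retina stage concrete, the sketch below shows one common way on-off receptors are modelled: a difference-of-Gaussians centre-surround filter applied to a greyscale frame. This is only an illustration under my own assumptions (NumPy/SciPy, the function names, and the filter parameters are mine); it is not the CABot code, which realises the receptors with simulated neurons.

  import numpy as np
  from scipy.signal import convolve2d

  def gaussian_kernel(size, sigma):
      # Square 2-D Gaussian kernel, normalised to sum to one.
      ax = np.arange(size) - (size - 1) / 2.0
      xx, yy = np.meshgrid(ax, ax)
      k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
      return k / k.sum()

  def on_off_responses(frame, centre_sigma=1.0, surround_sigma=2.0, size=9):
      # Difference-of-Gaussians centre-surround responses (illustrative parameters).
      # 'on' cells respond where the centre is brighter than the surround,
      # 'off' cells where it is darker.
      centre = convolve2d(frame, gaussian_kernel(size, centre_sigma),
                          mode="same", boundary="symm")
      surround = convolve2d(frame, gaussian_kernel(size, surround_sigma),
                            mode="same", boundary="symm")
      dog = centre - surround
      return np.maximum(dog, 0.0), np.maximum(-dog, 0.0)

  if __name__ == "__main__":
      rng = np.random.default_rng(0)
      frame = rng.random((32, 32))      # stand-in for one frame of visual input
      on, off = on_off_responses(frame)
      print(on.shape, off.shape)        # (32, 32) (32, 32)

The positive part of the difference gives the on-centre map and the negative part the off-centre map; maps of this kind are the sort of input that downstream line, angle and edge detectors would work from.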