Learning
- Aside from easy and massive parallelism, the only reason you'd
use neural systems is learning.
- Learning means a lot of different things. Let's ignore
the issue of synaptic learning rules, and just talk about
cognitive learning.
- In the medium and long term, what is interesting about agents
learning in an environment is that they can learn about that
environment.
- There are lots of different things for such systems to learn.
- Many agents will benefit from learning a spatial map.
- Any reasonably sophisticated agent will have to learn semantics.
This means it has to learn about the types of entities and actions
in the environment, and to learn the relationships between them.
- It can learn to improve its performance.
- It can converse with a user or users, but it needs to know
about conversation in general and to learn about the current
conversation.
- It's not just learning one simple task like a particular categorisation
task.
- It needs to learn over a long time (days or even years).
- The good news, for neural agent developers, is that there is a neural
system that does this (animals), and currently really no other
kind of system does.
- What type of things is the architectural agent going to have to learn?
- Some things can be hard coded in, but it is going to have to learn
about particular conversations. It would be useful if it could
learn about new buildings. What if it could learn about the effect
of a change to a structure? How will it know what it knows and what
it doesn't know?
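Two of the questions above (learning about new buildings, and knowing what it knows and doesn't know) can be illustrated together with a spatial-map sketch. This is a minimal grid-world stand-in for a building, assuming the agent learns cells from direct observation; every name here is an illustrative assumption.

```python
# Illustrative sketch: a learned spatial map in which unexplored cells
# stay marked UNKNOWN, so the agent can report what it does not know.
# Grid cells are a toy stand-in for parts of a building.

UNKNOWN, FREE, BLOCKED = "?", ".", "#"

class SpatialMap:
    def __init__(self, width, height):
        # Everything starts unknown until the agent observes it.
        self.cells = {(x, y): UNKNOWN
                      for x in range(width) for y in range(height)}

    def observe(self, x, y, blocked):
        """Learn about one cell from direct experience."""
        self.cells[(x, y)] = BLOCKED if blocked else FREE

    def unknown_cells(self):
        """Knowing what it doesn't know: the unexplored cells."""
        return [c for c, v in self.cells.items() if v == UNKNOWN]


m = SpatialMap(2, 2)
m.observe(0, 0, blocked=False)
m.observe(1, 0, blocked=True)
print(sorted(m.unknown_cells()))  # [(0, 1), (1, 1)]
```

If the structure changes, the affected cells can simply be reset to UNKNOWN, which is one simple way an agent could represent the effect of a change: the map degrades back to ignorance where its knowledge is no longer trustworthy.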