Conclusion
- Neurons are leaky integrators: they sum their inputs while the result decays over time (see the first sketch after this list)
- Synaptic weights change, and this is how we learn
- Connectionist models are inspired by this: simple processors,
in large numbers, can do a great deal
- Learning is essential to the process
- There is a range of neural nets
- They do lots of things, such as associative memory, function learning,
and content-addressable memory (see the second sketch after this list).
- They are less brittle than symbolic systems.
- They may be a good route to real AI, though
we're not really close right now.
- Try Igor Aleksander's book An Introduction to Neural Computing.
It's a bit old, but really solid.
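
To pin down the first point, here is a minimal sketch of a leaky integrator in Python. The discrete-time model, parameter names (tau, v_rest), and values are illustrative assumptions, not anything from the text: the potential integrates its input current while leaking back toward a resting level.

```python
# A minimal leaky-integrator sketch. The model and parameter values are
# illustrative assumptions: v sums its input but also decays ("leaks")
# back toward a resting level v_rest on a timescale tau.

def leaky_integrate(v, input_current, dt=1.0, tau=10.0, v_rest=0.0):
    """One Euler step of dv/dt = (-(v - v_rest) + input_current) / tau."""
    return v + (-(v - v_rest) + input_current) * (dt / tau)

v, trace = 0.0, []
for t in range(60):
    i = 1.0 if t < 30 else 0.0      # drive the neuron, then switch input off
    v = leaky_integrate(v, i)
    trace.append(v)

# v rises toward the input level while driven, then leaks back toward rest
print(f"peak ~ {max(trace):.2f}, final ~ {trace[-1]:.2f}")
```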
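And the second sketch: a tiny Hopfield-style network showing associative, content-addressable memory, with weights set by a Hebbian rule (the "weights change" point above). The network size, stored pattern, and synchronous update schedule are illustrative choices, not a fixed recipe.

```python
import numpy as np

# A small Hopfield-style associative memory (an illustrative sketch).
# Weights are stored with a Hebbian rule: connections between co-active
# units are strengthened. Recall is content-addressable: start from a
# corrupted pattern and the net settles back to the stored one.

def store(patterns):
    """Hebbian storage: W = sum of outer(p, p) over patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # units don't connect to themselves
    return w

def recall(w, state, steps=10):
    """Repeatedly update every unit to the sign of its weighted input."""
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1       # break ties consistently
    return state

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
w = store(stored)

noisy = stored[0].copy()
noisy[:2] *= -1                     # corrupt two of the eight units
print(np.array_equal(recall(w, noisy), stored[0]))  # True: pattern recovered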