Back Propagation
- Back Propagation is a way to learn those feed-forward weights
- (It was a big discovery of the 80s and led to a renewed interest
in connectionist systems. This interest had been quashed in
1969 by Minsky and Papert's proof that single perceptrons can't
learn functions that aren't linearly separable, such as
exclusive-or. Multi-layered nets can learn these things.)
- The idea is that you put an input into the system, run it through, and
compare the net's output to the desired output.
- The error is then calculated and propagated back through the layers,
and you adjust each weight a bit so that the next time you try
you have less error.
- For one input-output pair this is pretty easy, but you want
the net to learn the function for all input-output pairs.
- If it's an interesting domain there will be many pairs, or even
an infinite number of them.
- You train using many input-output pairs, repeating the process until
the error is small (see the sketch after this list).
- Hopefully, the function will then have been learned.
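- To make this concrete, here is a minimal sketch in Python (using
NumPy) of backpropagation training a small multi-layer net on the
exclusive-or pairs mentioned above. The layer sizes, learning rate,
and number of passes are illustrative choices, not part of these notes.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # The four exclusive-or input/output pairs -- the function a single
    # perceptron cannot learn.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)

    # Feed-forward weights and biases: 2 inputs -> 4 hidden units -> 1 output,
    # initialized to random values (the sizes are an arbitrary choice).
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 4))
    b1 = np.zeros((1, 4))
    W2 = rng.normal(size=(4, 1))
    b2 = np.zeros((1, 1))

    lr = 1.0  # learning rate: how big a weight adjustment to make each pass

    for step in range(10000):
        # Forward pass: run the inputs through the net.
        h = sigmoid(X @ W1 + b1)       # hidden-layer activations
        out = sigmoid(h @ W2 + b2)     # net's output

        # Compare to the desired output.
        err = out - Y

        # Backward pass: push the error back through the layers (chain rule)
        # to see how much each weight contributed to it.
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Adjust each weight a bit in the direction that reduces the error.
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0, keepdims=True)

    # After training, the outputs should be close to the targets [0, 1, 1, 0].
    print(np.round(out, 2))

- The same loop carries over to larger nets and other data; only the
layer shapes and the input-output pairs change.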