Feed Forward Networks
- Perceptrons and linear separability
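- As a quick illustration of that point (a minimal sketch of my own, not
  from these notes; all names are made up): the classic perceptron
  learning rule converges on linearly separable data such as AND, but no
  single perceptron can represent XOR.

      def train_perceptron(samples, epochs=20, lr=0.1):
          # samples: list of ((x1, x2), target) pairs, targets 0 or 1
          w1 = w2 = b = 0.0
          for _ in range(epochs):
              for (x1, x2), target in samples:
                  out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
                  err = target - out
                  w1 += lr * err * x1   # perceptron learning rule
                  w2 += lr * err * x2
                  b += lr * err
          return w1, w2, b

      AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
      w1, w2, b = train_perceptron(AND)  # finds a separating line
      print([1 if w1 * x1 + w2 * x2 + b > 0 else 0
             for (x1, x2), _ in AND])    # [0, 0, 0, 1]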
- Perhaps the most popular connectionist network is the feed forward
network
- Each neuron/node takes its inputs, sums them together, and sends the
  result along its outgoing connections.
- The connections have weights; each connection multiplies its input by
  its weight and sends the product along to the next neuron.
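- In code, a single node's job is just a weighted sum (an illustrative
  sketch; the names are mine, not from these notes):

      def neuron(inputs, weights):
          # Multiply each input by its connection weight and add them up.
          return sum(i * w for i, w in zip(inputs, weights))

      print(neuron([1.0, 2.0, 3.0], [0.5, -1.0, 2.0]))  # 0.5 - 2.0 + 6.0 = 4.5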
- The neurons are arranged in layers (typically three). External
  input is fed to the first layer; those neurons and connections run
  and send their results on to the second (or hidden) layer, which in
  turn runs and sends its results on to the output layer.
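- That layered flow can be sketched as a loop (the weight layout here,
  one list of incoming weights per node, is my own assumption):

      def layer(inputs, weight_rows):
          # Each node in the layer takes a weighted sum of all its inputs.
          return [sum(i * w for i, w in zip(inputs, row))
                  for row in weight_rows]

      def feed_forward(external_input, weight_layers):
          signal = external_input
          for weight_rows in weight_layers:  # input -> hidden -> output
              signal = layer(signal, weight_rows)
          return signal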
- Typically feed forward networks use a supervised learning
algorithm known as back propagation to learn a function (connection
weights) that maps inputs to outputs.
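- A minimal back-propagation sketch, assuming sigmoid nodes, squared
  error, and a 3-2-1 network (the learning rate, initialization, and
  training data are illustrative assumptions, not from these notes):

      import math
      import random

      random.seed(0)

      def sig(x):
          return 1.0 / (1.0 + math.exp(-x))

      W_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
      W_o = [random.uniform(-1, 1) for _ in range(2)]
      lr = 0.5

      def train_step(x, target):
          global W_o
          # Forward pass: input -> hidden -> output.
          h = [sig(sum(xi * wi for xi, wi in zip(x, row))) for row in W_h]
          y = sig(sum(hj * wj for hj, wj in zip(h, W_o)))
          # Backward pass: chain rule through each sigmoid, then step
          # every weight down the error gradient.
          d_y = (y - target) * y * (1 - y)
          d_h = [d_y * W_o[j] * h[j] * (1 - h[j]) for j in range(2)]
          W_o = [W_o[j] - lr * d_y * h[j] for j in range(2)]
          for j in range(2):
              for k in range(3):
                  W_h[j][k] -= lr * d_h[j] * x[k]
          return y

      for _ in range(2000):
          train_step([0.0, 1.0, 0.5], 1.0)
      print(train_step([0.0, 1.0, 0.5], 1.0))  # output approaches the target 1.0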
- Thus a 3-layer network with 3, 2, and 1 nodes is the equation:
  ((i1*w11)+(i2*w12)+(i3*w13))*W1 + ((i1*w21)+(i2*w22)+(i3*w23))*W2,
  or f(i1,i2,i3).
- Once the weights are learned, evaluating this equation is very fast:
  it is just a handful of multiplications and additions.
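- Written out as code (linear nodes; the weight values are arbitrary,
  chosen only for illustration), the whole network is one expression:

      i1, i2, i3 = 1.0, 2.0, 3.0
      w11, w12, w13 = 0.5, -1.0, 2.0   # weights into hidden node 1
      w21, w22, w23 = 1.0, 0.5, -0.5   # weights into hidden node 2
      W1, W2 = 2.0, -1.0               # hidden -> output weights

      h1 = i1 * w11 + i2 * w12 + i3 * w13   # 4.5
      h2 = i1 * w21 + i2 * w22 + i3 * w23   # 0.5
      print(h1 * W1 + h2 * W2)              # f(i1, i2, i3) = 8.5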
- The nodes can apply other types of activation functions (binary
  threshold, sigmoidal, even polynomial), but this is the basic idea,
  and it is pretty close to what biological neurons do.
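- For example (a sketch; these are common choices of node function, not
  an exhaustive list):

      import math

      def binary(x):        # step/threshold node
          return 1.0 if x > 0 else 0.0

      def sigmoid(x):       # smooth squashing node
          return 1.0 / (1.0 + math.exp(-x))

      def poly(x):          # a simple polynomial node
          return x * x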