Back-Propagation
Basic idea: Supply a training input, let the
computation feed forward, compute the error
against the training output, then propagate the
error backward to update the weights.
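As a rough illustration, a forward pass for a fully connected
network with sigmoid units might look like the following sketch
(the function names, the use of NumPy, and the sigmoid choice are
assumptions for illustration, not part of the original):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, weights):
        """Feed input x forward, keeping each layer's
        activation for the backward pass."""
        activations = [x]
        for W in weights:
            activations.append(sigmoid(W @ activations[-1]))
        return activations

    # Error is computed against the training output:
    # error = target - activations[-1]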
Start with the final layer
Update the layer's output weights according to its
output error, as with the perceptron learning rule
Assign error to the units of the previous layer in
proportion to the connecting weights
Repeat this process backward through the layers
(a code sketch follows these steps)
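A minimal backward pass matching these steps, continuing the
forward sketch above. It assumes sigmoid units and a squared-error
loss; the function name backward and the learning rate lr are
illustrative assumptions, not a canonical implementation:

    def backward(activations, weights, target, lr=0.1):
        """Propagate error backward, updating each layer's
        weights in turn."""
        out = activations[-1]
        # Output-layer error, scaled by the sigmoid derivative
        delta = (target - out) * out * (1.0 - out)
        for i in reversed(range(len(weights))):
            # Assign error to the previous layer's units according
            # to the weights, before this layer's weights change
            if i > 0:
                a = activations[i]
                prev_delta = (weights[i].T @ delta) * a * (1.0 - a)
            # Perceptron-style update: weight change proportional
            # to this layer's error times its input activation
            weights[i] += lr * np.outer(delta, activations[i])
            if i > 0:
                delta = prev_delta
        return weights

    # One training step on a single example (shapes assumed):
    # x = np.array([0.5, -0.2]); target = np.array([1.0])
    # weights = [np.random.randn(3, 2), np.random.randn(1, 3)]
    # backward(forward(x, weights), weights, target)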