Backpropagation Learning in Feedforward Neural Nets
- We cannot use the Perceptron Learning Rule for learning in feedforward nets because the training data provide no teacher-supplied desired values for the hidden units.
- The solution is backpropagation: a gradient-descent algorithm that minimizes the error on the training data by propagating error terms backwards through the network, starting at the output units and working towards the input units.
- Algorithm (a code sketch follows the pseudocode below)
  - Initialize the weights in the network (often randomly)
  - repeat
    - foreach example e in the training set do
      - O = neural-net-output(network, e)  ; forward pass
      - T = teacher output for e
      - Calculate error (T - O) at the output units
      - Compute delta_wi for all weights from hidden layer to output layer  ; backward pass
      - Compute delta_wi for all weights from input layer to hidden layer  ; backward pass continued
      - Update the weights in the network
    - end
  - until all examples classified correctly or stopping criterion satisfied
  - return(network)
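
Below is a minimal Python sketch of the algorithm above. It assumes a single hidden layer, sigmoid activations at every unit, squared error, and per-example (online) weight updates; the function and parameter names (`train`, `n_hidden`, `lr`, etc.) and the learning-rate and initialization choices are illustrative, not part of the original notes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, T, n_hidden=4, lr=0.5, max_epochs=10000, tol=1e-3, seed=0):
    """Online backpropagation for a one-hidden-layer feedforward net."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]

    # Initialize the weights in the network (often randomly); last row of each
    # matrix holds the bias weights.
    W_hidden = rng.uniform(-0.5, 0.5, size=(n_in + 1, n_hidden))
    W_output = rng.uniform(-0.5, 0.5, size=(n_hidden + 1, n_out))

    for epoch in range(max_epochs):
        sse = 0.0
        for x, t in zip(X, T):
            # Forward pass: hidden activations h, then network output o.
            h = sigmoid(np.append(x, 1.0) @ W_hidden)
            h_bias = np.append(h, 1.0)
            o = sigmoid(h_bias @ W_output)

            # Error (T - O) at the output units.
            err = t - o
            sse += float(err @ err)

            # Backward pass: delta terms at the output units
            # (sigmoid derivative is o * (1 - o)).
            delta_out = err * o * (1.0 - o)
            # Backward pass continued: delta terms at the hidden units.
            delta_hid = h * (1.0 - h) * (W_output[:-1] @ delta_out)

            # Update the weights in the network (gradient-descent step).
            W_output += lr * np.outer(h_bias, delta_out)
            W_hidden += lr * np.outer(np.append(x, 1.0), delta_hid)

        if sse < tol:  # stopping criterion satisfied
            break
    return W_hidden, W_output

# Usage example: learn XOR, a function no single perceptron can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W_h, W_o = train(X, T)
```

The sketch mirrors the pseudocode step for step: a forward pass per example, the output-unit error, delta terms for the hidden-to-output and input-to-hidden weights, and a weight update, repeated until the error falls below a threshold or the epoch limit is reached.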