• Perceptron learning is a gradient descent search through the space of possible weights.
• Each training example defines an "error surface" over the weights; the learning rule moves the weights downhill on this surface, with the learning rate α as the step size (see the sketch after this list).
• For linearly separable functions there are no local minima, and learning is guaranteed to converge provided the learning rate α is not too high (otherwise the updates overshoot).
• Summary: Very effective for very simple representable functions.
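As a rough illustration of the bullets above (not code from the slide), the sketch below runs batch gradient descent on the squared error of a single linear unit for the linearly separable AND function. The learning rate alpha, the epoch count, and the 0.5 decision threshold are illustrative assumptions.

```python
# Minimal sketch: gradient-descent weight updates for a single linear unit,
# trained on the linearly separable AND function. alpha, the epoch count,
# and the 0.5 threshold are illustrative choices, not taken from the slide.
import numpy as np

# Toy data: each row has a bias feature of 1 plus two inputs; targets are logical AND.
X = np.array([[1, 0, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)

alpha = 0.1                # learning rate: the "step size" taken downhill
w = np.zeros(X.shape[1])   # one weight per input feature (including the bias)

for epoch in range(100):
    # Output of the (unthresholded) linear unit for every training example.
    o = X @ w
    # Gradient of the summed squared error E = 1/2 * sum((t - o)^2) w.r.t. the weights.
    grad = -(t - o) @ X
    # Step downhill on the error surface.
    w -= alpha * grad

# Threshold the learned linear output to get class predictions.
predictions = (X @ w > 0.5).astype(int)
print("weights:", w)
print("predictions:", predictions)   # expected: [0 0 0 1]
```

On this toy data, raising alpha well beyond roughly 0.3 makes the updates overshoot and diverge, which is the "learning rate not too high" caveat in the third bullet.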