Lecture-4 Multi-Layer Perceptrons
Introduction
[Figure: an MLP with an input layer, a first hidden layer, a second hidden layer, and an output layer.]
Characteristics of MLP
• The hidden units enable the MLP to learn complex tasks by extracting meaningful features from the input/output relationship.
• The output layer presents the network's response to the outside world.
Many neurons:
• Higher accuracy
• Slower training
• Risk of over-fitting: the network memorizes the training data rather than learning the underlying relationship, so it is useless on new problems.
Few neurons:
• Lower accuracy
• May be unable to learn at all
The goal is to find the optimal number of neurons.
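The trade-off above is driven by the number of free parameters, which grows with the hidden-layer size. A minimal sketch (the function name `mlp_param_count` is illustrative, not from the lecture) counts the weights and biases of a single-hidden-layer MLP:

```python
def mlp_param_count(n_in, n_hidden, n_out):
    """Count the free parameters (weights + biases) of a
    single-hidden-layer, fully connected MLP."""
    weights = n_in * n_hidden + n_hidden * n_out
    biases = n_hidden + n_out
    return weights + biases

# More hidden neurons -> more free parameters -> more capacity,
# but also slower training and a higher risk of over-fitting.
print(mlp_param_count(4, 5, 3))    # small hidden layer -> 43
print(mlp_param_count(4, 500, 3))  # large hidden layer -> 4003
```

With 500 hidden neurons the network has roughly a hundred times more parameters than with 5, which is why the larger network can memorize the training set instead of generalizing.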
A Multilayer Feed-Forward Neural Network
[Figure: a fully connected feed-forward network. An input record x_i feeds the input nodes; weights w_ij connect the input nodes to the hidden nodes, whose outputs are O_j; weights w_jk connect the hidden nodes to the output nodes, whose outputs O_k give the output class.]
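The forward pass through this network can be sketched with NumPy. This is a minimal illustration, assuming sigmoid units and randomly initialized weights; the names `W_ij`, `W_jk`, `O_j`, `O_k` follow the figure's notation, while the bias vectors `b_j`, `b_k` are an added assumption:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation, squashes values into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_ij, b_j, W_jk, b_k):
    """One forward pass: input record x -> hidden outputs O_j
    -> final outputs O_k."""
    O_j = sigmoid(W_ij @ x + b_j)    # hidden layer
    O_k = sigmoid(W_jk @ O_j + b_k)  # output layer
    return O_k

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # input record x_i (3 features)
W_ij = rng.normal(size=(4, 3))  # input -> hidden weights
b_j = np.zeros(4)
W_jk = rng.normal(size=(2, 4))  # hidden -> output weights
b_k = np.zeros(2)

O_k = forward(x, W_ij, b_j, W_jk, b_k)
print(O_k.shape)  # (2,)
```

Because every input node feeds every hidden node (and likewise hidden to output), the layer computations are plain matrix-vector products, which is exactly what "fully connected" means.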
Examples of Multi-layer NNs
• Backpropagation
• Neocognitron
• Probabilistic NN (radial basis function NN)
• Boltzmann machine
• Cauchy machine
The MLP Activation Function
– tangent-sigmoid (tansig), mathematically equivalent to the hyperbolic tangent tanh, with outputs in (-1, 1)
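A minimal sketch of the tansig activation, using the formula from MATLAB's Neural Network Toolbox, tansig(n) = 2/(1 + e^(-2n)) - 1, and checking that it coincides with tanh:

```python
import math

def tansig(n):
    """Tangent-sigmoid activation: 2/(1 + exp(-2n)) - 1."""
    return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0

# tansig is mathematically identical to tanh; the rearranged
# formula is just cheaper to evaluate in some implementations.
for n in (-2.0, 0.0, 0.5, 3.0):
    assert abs(tansig(n) - math.tanh(n)) < 1e-12

print(tansig(0.0))  # 0.0
```

Like tanh, tansig is zero-centered, which often helps gradient-based training compared with the (0, 1)-ranged logistic sigmoid.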