As the name suggests, a multi-layer perceptron (MLP) consists of multiple layers of neurons connected in a feed-forward fashion: data flows strictly from the input units to the output units, hence the term feed-forward (FF) network.
Feed-forward neural networks are the most widely used neural models in practical applications.
A typical topology of an FF network is shown in the previous figure: each neuron in one layer has directed connections to all the neurons of the subsequent layer.
Each MLP neuron computes a weighted sum of its inputs and passes it through an activation function to produce its output.
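The computation of a single neuron can be sketched as follows; this is a minimal illustration, and the function names (`sigmoid`, `neuron_output`) are hypothetical rather than taken from any particular library:

```python
import math

def sigmoid(x):
    # logistic activation: 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # weighted sum of the inputs (plus a bias term),
    # passed through the activation function
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)
```

For example, with zero weights and zero bias the weighted sum is 0, and the sigmoid of 0 is 0.5.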
The MLP neuron uses a model very similar to Rosenblatt's, with the exception of the activation function: instead of a hard threshold, it employs the sigmoid (logistic) function.
The logistic function has an easily computed derivative, which is important when calculating the weight updates in the network, and it makes the network easier to analyze mathematically. For these reasons, this function is commonly used in multi-layer perceptrons trained with the back-propagation algorithm.
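The convenient property is that the derivative of the logistic function can be expressed through its own output, σ'(x) = σ(x)(1 − σ(x)), so no extra exponentials are needed during back-propagation. A short sketch, checked against a finite-difference approximation (the function names here are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    # the derivative is expressed via the function's own output:
    # sigma'(x) = sigma(x) * (1 - sigma(x))
    s = sigmoid(x)
    return s * (1.0 - s)

# numerical check against a central finite difference
h = 1e-6
for x in (-2.0, 0.0, 1.5):
    approx = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
    assert abs(sigmoid_derivative(x) - approx) < 1e-8
```

During training, the network already knows each neuron's output, so the derivative comes almost for free from values computed in the forward pass.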