Starting from the biological neuron models of the 1950s, Frank Rosenblatt developed a mathematical model called the perceptron.
The perceptron uses a supervised learning algorithm; supervised learning is the task of inferring a function from training data, which consists of a set of training examples, each represented by a pair of an input and a desired output value.
The perceptron's supervised learning algorithm analyzes the training data and produces a binary classifier, which can decide whether a new input belongs to one class or the other.
A perceptron with n inputs is described by a weight vector w, an input vector x, and a bias b.
The output y is 1 (active) only if the activation function (a non-linear step function) applied to the weighted sum w·x + b returns 1.
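The forward pass described above can be sketched in a few lines of Python. The function name and the example weights for an AND gate are illustrative choices, not part of the original text:

```python
def perceptron_output(x, w, b):
    """Perceptron forward pass: weighted sum of inputs plus bias,
    then a step activation (1 if the sum is non-negative, else 0)."""
    activation = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if activation >= 0 else 0

# Hand-chosen weights realizing an AND gate (illustrative values):
# the output is 1 only when both inputs are 1.
print(perceptron_output([1, 1], [1.0, 1.0], -1.5))  # 1
print(perceptron_output([1, 0], [1.0, 1.0], -1.5))  # 0
```

With these weights the decision boundary is the line x1 + x2 = 1.5, which separates the input (1, 1) from the other three corners of the unit square.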
The perceptron can be trained to solve two-class classification problems: its learning rule guarantees that if the training set is linearly separable, the perceptron converges.
The famous book Perceptrons by Marvin Minsky and Seymour Papert showed that it is impossible for a single perceptron to learn the XOR function.
This did not mean that the perceptron model was unusable for such tasks, but it required multi-level networks, the so-called multi-layer perceptrons (MLPs). Minsky and Papert also showed that an MLP network is able to learn the XOR function.
Training an MLP is not simple, because the previous algorithm (the delta rule) works only for a single layer of perceptrons; MLPs therefore need a different learning algorithm and a different neuron model as well.
This is a graphical illustration of a perceptron, where the weighted inputs and the unit bias are first summed and then passed through a step function to yield the output.
Perceptron training algorithm
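A minimal sketch of the perceptron learning rule, under the usual formulation w ← w + η(t − y)x and b ← b + η(t − y), where t is the target and y the current output; the function name, learning rate, and AND-gate training data are illustrative assumptions:

```python
def train_perceptron(data, n_inputs, lr=0.1, epochs=20):
    """Perceptron learning rule: for each example, update the weights
    in proportion to the error (target - output)."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, t in data:
            # Forward pass with a step activation.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = t - y
            # Update rule: w <- w + lr * err * x, b <- b + lr * err.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Illustrative linearly separable problem: the AND gate.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data, n_inputs=2)
```

Because the AND training set is linearly separable, the convergence guarantee mentioned above applies and the learned weights classify all four examples correctly.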