Starting from the biological model of the neuron in the 1950s, Frank Rosenblatt developed a mathematical model called the perceptron. The perceptron uses an algorithm for supervised learning, which is the task of inferring a function from training data. The training data consists of a set of training examples, each represented by a pair of an input and a desired output value. The perceptron's supervised learning algorithm analyzes the training data and produces a binary classifier, which can decide whether a new input belongs to one class or the other.

A perceptron with n inputs is described by a weight vector w, an input vector x, and a bias b. The output y is 1 (active) only if the activation function, a nonlinear step function applied to w·x + b, returns 1. The perceptron can be trained to solve any two-class classification problem. Its learning rule establishes that if the training set is linearly separable, then the perceptron is guaranteed to converge.

The famous book Perceptrons, by Marvin Minsky and Seymour Papert, showed that it was impossible for a single perceptron to learn the XOR function. This did not mean that the perceptron was unusable for such tasks, but that it was necessary to use multilevel networks, the so-called multilayer perceptrons (MLPs). Minsky and Papert also showed that an MLP is able to learn the XOR function. MLP learning is not simple, because the previous algorithm (the delta rule) works only with a single perceptron layer, so it needs a different learning algorithm and a different neuron model as well.

Perceptron training algorithm:
1. Initialize the weights to small random numbers.
2. For each training sample x(i): compute the output y(i) = step(w·x(i) + b); if y(i) differs from the desired output d(i), update the weights with the learning rule w ← w + η(d(i) − y(i))x(i) and b ← b + η(d(i) − y(i)).
3. Repeat over the training set until no sample is misclassified (or a maximum number of passes is reached).
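The training procedure above can be sketched in Python. This is a minimal illustration, not Rosenblatt's original formulation: the function names, the learning rate of 0.1, and the AND-gate data used in the example are assumptions chosen for clarity.

```python
import random

def step(z):
    # Heaviside step activation: active (1) only when w.x + b > 0
    return 1 if z > 0 else 0

def train_perceptron(samples, lr=0.1, max_epochs=100):
    """Train on (x, d) pairs, where x is a tuple of inputs and d is 0 or 1."""
    n_inputs = len(samples[0][0])
    random.seed(0)
    # initialize weights and bias to small random numbers
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    b = random.uniform(-0.5, 0.5)
    for _ in range(max_epochs):
        errors = 0
        for x, d in samples:
            y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            if y != d:
                errors += 1
                # perceptron learning rule: w <- w + lr*(d - y)*x
                w = [wi + lr * (d - y) * xi for wi, xi in zip(w, x)]
                b += lr * (d - y)
        if errors == 0:
            # training set correctly separated: converged
            break
    return w, b

def predict(w, b, x):
    return step(sum(wi * xi for wi, xi in zip(w, x)) + b)

# AND is linearly separable, so convergence is guaranteed
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x) for x, _ in and_data])
```

On XOR data, by contrast, the loop would exhaust `max_epochs` without ever reaching zero errors, since no single linear boundary separates the two classes.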
