
rmlp_neural_net_t

class rmlp_neural_net_t : public xmlp_neural_net_t< rneuron_t<double> >

This class represents a Recurrent Multi-Layer Perceptron (RMLP) neural network.

This implementation consists of multiple layers of sigmoid neurons and learns
by example using the Back Propagation Through Time (BPTT) learning algorithm.

You can give it examples of what you want the network to do and the algorithm changes the network's weights. When training is finished, the net will give you the required output for a particular input.

The BPTT algorithm is an extension of the standard back-propagation used with MLPs: it performs gradient descent on a completely unfolded network.
The BPTT training sequence starts at time t0 and ends at time t1; the total cost function is simply the sum over time of the standard error function at each time step.
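Written out, with E(t) denoting the standard error function at time step t, the total cost described above is (a sketch in standard notation):

```latex
E_{\mathrm{total}}(t_0, t_1) = \sum_{t=t_0}^{t_1} E(t)
```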





Header


Namespace

  • nu


Constructors

  • rmlp_neural_net_t() = default;
    Default constructor. Creates an uninitialized neural network.
    You can initialize the net later by loading its status from a string stream.

  • rmlp_neural_net_t(
       const topology_t& topology, 
       double learning_rate = 0.1, 
       double momentum = 0.5, 
       err_cost_t ec = err_cost_t::MSE
    );

    Creates a fully connected neural network.
    • topology is a vector of positive integers: the first element is the input layer size, the last element is the output layer size, and all values in between are the hidden layer sizes, ordered from input to output.
      The topology vector must contain at least 3 elements, all of them non-zero positive integers.
      If topology.size() < 3, this method throws an exception_t::size_mismatch exception.
    • learning_rate must be a number in the range (0.0-1.0). The learning rate determines how aggressive each step of the training algorithm is.
    • momentum can be used to speed up training. Too high a momentum, however, will not benefit training. Setting momentum to 0 is the same as not using the momentum parameter. The recommended value is between 0.0 and 1.0.
    • ec (error cost function selector) selects the training error cost function. Valid values are:
      • err_cost_t::MSE: mean squared error
      • err_cost_t::CROSS_ENTROPY: cross-entropy
      • err_cost_t::USERDEF: user-defined error function (see set_error_cost_function)

Initialization example


    nu::rmlp_neural_net_t::topology_t topology = {
       2, // input layer takes a two-dimensional vector
       3, // first hidden layer contains 3 neurons
       5, // second hidden layer contains 5 neurons
       2  // two outputs
    };

    // Construct the network using the given topology,
    // learning rate and momentum
    nu::rmlp_neural_net_t nn {
       topology,
       0.4, // learning rate
       0.9  // momentum
    };
  • rmlp_neural_net_t(const rmlp_neural_net_t& nn);
    Copy Constructor
  • rmlp_neural_net_t(rmlp_neural_net_t&& nn);
    Move Constructor

Copy Operators
  • rmlp_neural_net_t& operator=(const rmlp_neural_net_t& nn) = default;
    Copy-assignment operator
  • rmlp_neural_net_t& operator=(rmlp_neural_net_t&& nn);
    Move-assignment operator

String Stream Operators

  • friend std::stringstream& operator>>(std::stringstream& ss, rmlp_neural_net_t& net);
    Loads and reinitializes the neural network using the data in the given string stream.
    In case of an invalid stream format this method throws an exception_t::invalid_sstream_format exception.
  • friend std::stringstream& operator<<(std::stringstream& ss, rmlp_neural_net_t& net);
    Saves the net status into the given string stream.

Output Stream Operators 

  • friend std::ostream& operator<<(std::ostream& os, rmlp_neural_net_t& net);
    Prints the net status to the given standard output stream.

Public methods

  • void reshuffle_weights() noexcept;
    Resets all net weights using new random values.

Protected methods

  • const char* _get_id_ann() const noexcept override;
    Called for serializing network status, returns the NN id string
  • const char* _get_id_neuron() const noexcept override;
    Called for serializing network status, returns the neuron id string
  • const char* _get_id_neuron_layer() const noexcept override;
    Called for serializing network status, returns the neuron-layer id string
  • const char* _get_id_topology() const noexcept override;
    Called for serializing network status, returns the topology id string
  • const char* _get_id_inputs() const noexcept override;
    Called for serializing network status, returns the inputs id string
  • void _update_neuron_weights(rneuron_t<double>& neuron, size_t layer_idx) override;
    rneuron_t<double>& neuron: [in/out] the neuron to update
    size_t layer_idx:          [in] index of the layer the neuron belongs to

    This method is implemented in order to update the network weights according to the specific implementation of the BPTT algorithm.


Inherited public methods

The following methods and properties are inherited from xmlp_neural_net_t class:
  • void select_error_cost_function(err_cost_t ec) noexcept;
    Selects the error cost function.
    err_cost_t ec: [in] error cost selector. Valid values are:
    • err_cost_t::MSE: mean squared error
    • err_cost_t::CROSS_ENTROPY: cross-entropy
    • err_cost_t::USERDEF: user-defined error function (see set_error_cost_function)
  • void set_error_cost_function(cost_func_t cf) noexcept;
    cost_func_t cf: [in] user-defined function

    Sets a user-defined cost function, where cost_func_t is an alias of std::function<cf::costfunc_t>.
    The selector is automatically set to err_cost_t::USERDEF.
  • err_cost_t get_err_cost() const noexcept;
    Get current error cost selector value
  • size_t get_inputs_count() const noexcept;
    Return number of inputs. This number is specified in the topology.
  • size_t get_outputs_count() const noexcept;
    Return number of outputs. This number is specified in the topology.
  • const topology_t& get_topology() const noexcept;
    Return a const reference to topology vector.
    Topology vector is of 3 or more elements and all of them are non-zero positive integer values.
    First element represents the input layer size while last element represents the output layer size. 
    All other values represent the hidden layers size ordered from input layer to output layer.
  • double get_learning_rate() const noexcept;
    Return current learning rate
  • void set_learning_rate(double new_rate) noexcept;
    double new_rate: [in] new learning rate
    Changes the learning rate.
  • double get_momentum() const noexcept;
    Returns current momentum
  • void set_momentum(double new_momentum) noexcept;
    double new_momentum: [in] new momentum

    Changes the learning momentum.
  • void set_inputs(const rvector_t& inputs);
    const rvector_t& inputs: [in] input vector
    Sets a new network input vector.
    If inputs.size() != get_inputs_count(), this method throws an exception_t::size_mismatch exception.
  • const rvector_t& get_inputs() const noexcept;
    Gets the net inputs.
  • void get_outputs(rvector_t& outputs) noexcept;
    rvector_t& outputs: [out] net output vector
    Gets a copy of the net outputs.
  • void feed_forward() noexcept;
    Fires all network neurons and calculates the corresponding outputs.
    Using the get_outputs() method you can get a copy of the output vector.
  • virtual void back_propagate(const rvector_t& target_v);
  • virtual void back_propagate(const rvector_t& target_v, rvector_t& output_v);
    const rvector_t& target_v: [in]  expected output vector
    rvector_t& output_v:       [out] net output vector calculated during the feed-forward step

    Fires all neurons of the net and calculates the outputs, then applies the back propagation algorithm to the net.
    If target_v.size() != get_outputs_count(), this method throws an exception_t::size_mismatch exception.
  • virtual std::stringstream& load(std::stringstream& ss);
    std::stringstream& ss: [in/out] string stream
    Builds the net using the data of a given string stream.
    In case of an invalid stream format this method throws an exception_t::invalid_sstream_format exception.
  • virtual std::stringstream& save(std::stringstream& ss) noexcept;
    std::stringstream& ss: [in/out] string stream
    Saves the net status into a given string stream.
  • virtual std::ostream& dump(std::ostream& os) noexcept;
    std::ostream& os: [in/out] output stream

    Prints the net state to a given output stream.
  • double mean_squared_error(const rvector_t& target);
    const rvector_t& target: [in] expected output vector
    Calculates the mean squared error between the net output vector and the target parameter, which represents the expected output.
    If target.size() != get_outputs_count(), this method throws an exception_t::size_mismatch exception.
  • double cross_entropy(const rvector_t& target);
    const rvector_t& target: [in] expected output vector
    Calculates the cross-entropy cost defined as (target*log(output)+(1-target)*log(1-output))/output.size(), where output is the net output vector and target represents the corresponding expected output.
    If target.size() != get_outputs_count(), this method throws an exception_t::size_mismatch exception.
  • virtual double calc_error_cost(const rvector_t& target);
    const rvector_t& target: [in] expected output vector
    Calculates the error cost, which depends on the error cost function selector.
    If target.size() != get_outputs_count(), this method throws an exception_t::size_mismatch exception.
  • virtual errv_func_t get_errv_func() noexcept;
    Returns the error vector function.
    The error vector function is used by the back-propagation algorithm and depends on the error cost selector.
    You may change the standard back-propagation algorithm implementation by overriding this method in a derived class.
    The predefined error vector functions are mean squared error (MSE), which calculates the error vector as (1 - output) * output * (target - output), and the cross-entropy error, given by target - output, where output is the net output vector and target represents the expected output.

Inherited protected methods

  • virtual void _back_propagate(const rvector_t& target_v, const rvector_t& output_v);
    const rvector_t& target_v: [in] expected output vector
    const rvector_t& output_v: [in] net output vector

    This method can be redefined in order to provide a specific implementation of the network learning algorithm.
    If target_v.size() != output_v.size() or output_v.size() != get_outputs_count(), this method throws an exception_t::size_mismatch exception.
  • double _get_input(size_t layer, size_t idx) noexcept;
    size_t layer: [in] layer index
    size_t idx:   [in] input index
    Returns the input value for a neuron belonging to a given layer.
    If the layer index is 0, idx refers to the corresponding input of the net.
    If the layer index is greater than 0, the input returned corresponds to the output of a neuron of the previous layer; in this case idx refers to the index of that neuron.
  • void _fire_neuron(neuron_layer_t& nlayer, size_t layer_idx, size_t out_idx) noexcept;
    Fire all neurons of a given layer

Inherited protected static functions

  • static void _build(const topology_t& topology, std::vector<neuron_layer_t>& neuron_layers, rvector_t& inputs);
    const topology_t& topology:                 [in] the topology
    std::vector<neuron_layer_t>& neuron_layers: [out] neuron layers
    rvector_t& inputs:                          [out] input vector
    Initializes the inputs and neuron layers of a net using a given topology.
    If topology.size() < 3, this method throws an exception_t::size_mismatch exception.
  • static void _calc_mse_err_v(const rvector_t& target_v, const rvector_t& outputs_v, rvector_t& res_v);
    const rvector_t& target_v:  [in] expected output vector
    const rvector_t& outputs_v: [in] output vector
    rvector_t& res_v:           [out] result vector
    Calculates the error vector using the MSE function.
  • static void _calc_xentropy_err_v(const rvector_t& target_v, const rvector_t& outputs_v, rvector_t& res_v);
    const rvector_t& target_v:  [in] expected output vector
    const rvector_t& outputs_v: [in] output vector
    rvector_t& res_v:           [out] result vector
    Calculates the error vector using the cross-entropy function.

Base class

  • xmlp_neural_net_t< rneuron_t<double> >