
xmlp_neural_net_t

template <class Neuron> class xmlp_neural_net_t
This abstract template class is the base class for the mlp_neural_net_t and rmlp_neural_net_t classes.




Header


Namespace

  • nu


Constructors

  • xmlp_neural_net_t() = default;
  • xmlp_neural_net_t(
        const topology_t& topology, 
        double learning_rate = 0.1, 
        double momentum = 0.5, 
        err_cost_t ec = err_cost_t::MSE) noexcept;
    • topology is defined as a vector of positive integers, where the first element is the input layer size and the last element is the output layer size. All values in between are the hidden layer sizes, ordered from input to output. 
      The topology vector must contain at least 3 elements, and all of them must be non-zero positive integers.
    • learning_rate must be a number in the range (0.0-1.0). The learning rate determines how aggressively the back-propagation training algorithm adjusts the weights.
    • momentum can be used to speed up training, although too high a momentum will not benefit training. Setting momentum to 0 is equivalent to not using the momentum parameter at all. The recommended value for this parameter is between 0.0 and 1.0.
    • ec is the error cost function selector, which selects the training error cost function used by the learning algorithm.
      Valid ec values are:
      • err_cost_t::MSE: mean square error
      • err_cost_t::CROSS_ENTROPY: cross entropy
      • err_cost_t::USERDEF: user defined error function (see set_error_cost_function)
  • xmlp_neural_net_t(const xmlp_neural_net_t& nn) = default;
    Copy constructor
  • xmlp_neural_net_t(xmlp_neural_net_t&& nn) noexcept;
    Move constructor

Copy Operators

  • xmlp_neural_net_t& operator=(const xmlp_neural_net_t& nn) = default;
    Copy-assignment operator
  • xmlp_neural_net_t& operator=(xmlp_neural_net_t&& nn) noexcept;
    Move-assignment operator

Public methods

  • void select_error_cost_function(err_cost_t ec) noexcept;
    Select the error cost function
    err_cost_t ec: [in] error cost selector (an enumerator)
    • err_cost_t::MSE: mean square error
    • err_cost_t::CROSS_ENTROPY: cross entropy
    • err_cost_t::USERDEF: user defined error function (see set_error_cost_function)
  • void set_error_cost_function(cost_func_t cf) noexcept;
    cost_func_t cf: [in] user-defined function

    where cost_func_t is an alias of std::function<cf::costfunc_t>.
    Set a user-defined cost function. The selector is automatically set to err_cost_t::USERDEF.
  • err_cost_t get_err_cost() const noexcept;
    Get the current error cost selector value
  • size_t get_inputs_count() const noexcept;
    Return the number of inputs, as specified in the topology.
  • size_t get_outputs_count() const noexcept;
    Return the number of outputs, as specified in the topology.
  • const topology_t& get_topology() const noexcept;
    Return a const reference to the topology vector.
    The topology vector has 3 or more elements, all of them non-zero positive integers.
    The first element is the input layer size and the last element is the output layer size. 
    All other values are the hidden layer sizes, ordered from input layer to output layer.
  • double get_learning_rate() const noexcept;
    Return the current learning rate
  • void set_learning_rate(double new_rate) noexcept;
    double new_rate: [in] new learning rate
    Change the learning rate
  • double get_momentum() const noexcept;
    Return the current momentum
  • void set_momentum(double new_momentum) noexcept;
    double new_momentum: [in] new momentum
    Change the learning momentum
  • void set_inputs(const rvector_t& inputs);
    const rvector_t& inputs: [in] input vector
    Set a new network input vector.
    If inputs.size() != get_inputs_count(), this method will throw an exception_t::size_mismatch exception.
    • const rvector_t& get_inputs() const noexcept;
      Get the net inputs
    • void get_outputs(rvector_t& outputs) noexcept;
      rvector_t& outputs: [out] net output vector
      Get a copy of the net outputs
    • void feed_forward() noexcept;
      Fire all network neurons and calculate the corresponding outputs.
      Use the get_outputs() method to get a copy of the output vector.
    • virtual void back_propagate(const rvector_t& target_v);
    • virtual void back_propagate(const rvector_t& target_v, rvector_t& output_v);
      const rvector_t& target_v: [in]  expected output vector
      rvector_t& output_v:       [out] net output vector calculated during the feed-forward step
      Fire all neurons of the net and calculate the outputs, then apply the back-propagation algorithm to the net.
      If target_v.size() != get_outputs_count(), this method will throw an exception_t::size_mismatch exception.
    • virtual std::stringstream& load(std::stringstream& ss);
      std::stringstream& ss: [in/out] string stream
      Build the net using the data of a given string stream.
      In case of an invalid stream format, this method will throw an exception_t::invalid_sstream_format exception.
    • virtual std::stringstream& save(std::stringstream& ss) noexcept;
      std::stringstream& ss: [in/out] string stream
      Save the net status into a given string stream.
    • virtual std::ostream& dump(std::ostream& os) noexcept;
      std::ostream& os: [in/out] output stream
      Print the net state to a given output stream.
    • double mean_squared_error(const rvector_t& target);
      const rvector_t& target: [in] expected output vector
      Calculate the mean squared error between the net output vector and the target parameter, which represents the expected output value.
      If target.size() != get_outputs_count(), this method will throw an exception_t::size_mismatch exception.
    • double cross_entropy(const rvector_t& target);
      const rvector_t& target: [in] expected output vector

      Calculate the cross-entropy cost, defined as (target*log(output)+(1-target)*log(1-output))/output.size(), where output is the net output vector and target represents the corresponding expected output value.
      If target.size() != get_outputs_count(), this method will throw an exception_t::size_mismatch exception.
    • virtual double calc_error_cost(const rvector_t& target);
      const rvector_t& target: [in] expected output vector
      Calculate the error cost; the result depends on the error cost function selector.
      If target.size() != get_outputs_count(), this method will throw an exception_t::size_mismatch exception.
    • virtual errv_func_t get_errv_func() noexcept;
      Return the error vector function.
      The error vector function is used by the back-propagation algorithm and depends on the error cost selector.
      Users may change the standard back-propagation algorithm implementation by overriding this method in a derived class.
      The predefined error vector functions are mean square error (MSE), which calculates the error vector as (1 - output) * output * (target - output), and cross-entropy error, given by target - output, where output is the net output vector and target represents the expected output value.


    Protected methods 

    • virtual void _back_propagate(const rvector_t& target_v, const rvector_t& output_v);
      const rvector_t& target_v: [in] expected output vector
      const rvector_t& output_v: [in] net output vector

      This method can be redefined in order to provide a specific implementation of the network learning algorithm. 
      If target_v.size() != output_v.size() or output_v.size() != get_outputs_count(), this method will throw an exception_t::size_mismatch exception.
    • double _get_input(size_t layer, size_t idx) noexcept;
      size_t layer: [in] layer index
      size_t idx:   [in] input index
      Return the input value for a neuron belonging to a given layer. 
      If the layer index is 0, idx refers to the corresponding input of the net.
      If the layer index is greater than 0, the input returned corresponds to the output of a neuron in the previous layer; in this case idx refers to the index of that neuron.
    • void _fire_neuron(neuron_layer_t& nlayer, size_t layer_idx, size_t out_idx) noexcept;
      Fire all neurons of a given layer

    Protected abstract methods

    • virtual const char* _get_id_ann() const noexcept = 0;
      Called for serializing network status, must return a unique NN id string
    • virtual const char* _get_id_neuron() const noexcept = 0;
      Called for serializing network status, must return a unique neuron id string
    • virtual const char* _get_id_neuron_layer() const noexcept = 0;
      Called for serializing network status, must return a unique neuron-layer id string
    • virtual const char* _get_id_topology() const noexcept = 0;
      Called for serializing network status, must return a unique topology id string
    • virtual const char* _get_id_inputs() const noexcept = 0;
      Called for serializing network status, must return a unique inputs id string
    • virtual void _update_neuron_weights(Neuron&, size_t) = 0;
      This method must be implemented in order to update network weights according to the specific implementation of learning algorithm

    Protected static functions

    • static void _build(const topology_t& topology, std::vector< neuron_layer_t >& neuron_layers, rvector_t& inputs);
      const topology_t& topology:                   [in] the topology
      std::vector< neuron_layer_t >& neuron_layers: [out] neuron layers
      rvector_t& inputs:                            [out] input vector
      Initialize the inputs and neuron layers of a net using a given topology.
      If topology.size() < 3, this method will throw an exception_t::size_mismatch exception.
    • static void _calc_mse_err_v(const rvector_t& target_v, const rvector_t& outputs_v, rvector_t& res_v);
      const rvector_t& target_v:  [in] expected output vector
      const rvector_t& outputs_v: [in] output vector
      rvector_t& res_v:           [out] result vector
      Calculate the error vector using the MSE function
    • static void _calc_xentropy_err_v(const rvector_t& target_v, const rvector_t& outputs_v, rvector_t& res_v);
      const rvector_t& target_v:  [in] expected output vector
      const rvector_t& outputs_v: [in] output vector
      rvector_t& res_v:           [out] result vector
      Calculate the error vector using the Cross-Entropy function

