AI-NNEasy


Changes

Revision history for Perl extension AI::NNEasy.

0.06  2005-01-16
      - Added the reinforce learning algorithm.
      - Added a check for errors bigger than 1 in learn_set().
      - Fixed some memory leaks for non-mortal SV*s.

0.05  2005-01-15
      - Fixed default values for layers, especially the activation
        function for the output layer, which works better as linear.
      - Added samples.
      - Changed some internal values so that learn_set() learns faster.
      - More XS support: AI::NNEasy::NN::backprop::RMSErr_c

0.04  2005-01-15
      - POD fixes.
      - Added some XS support to the learn_set() method.

0.03  2005-01-14

lib/AI/NNEasy.hploo

  
  ERROR_OK = (1 - 0.5) / 2 = 0.25 ;

*> IN_SIZE
The input size (number of nodes in the input layer).

*> OUT_SIZE
The output size (number of nodes in the output layer).

*> @HIDDEN_LAYERS
A list of hidden layer sizes. By default we have 1 hidden layer whose
size is calculated as I<(IN_SIZE + OUT_SIZE)>. So, for a NN with
2 inputs and 1 output, the hidden layer has 3 nodes.

*> %CONF
Conf can be used to define special parameters of the NN:

Default:

 {networktype=>'feedforward' , random_weights=>1 , learning_algorithm=>'backprop' , learning_rate=>0.1 , bias=>1}
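
For example, a NN for the XOR problem could be created like this (a minimal
sketch: the leading save-file and output-types arguments, and the exact argument
order, follow the module's synopsis and are assumptions not shown in this excerpt):

 use AI::NNEasy ;

 my $nn = AI::NNEasy->new(
   'xor.nne' ,  ## File where the NN is saved (assumed argument).
   [0 , 1] ,    ## Output types of the NN (assumed argument).
   0.1 ,        ## ERROR_OK: maximal acceptable error for an output.
   2 ,          ## IN_SIZE: nodes in the input layer.
   1 ,          ## OUT_SIZE: nodes in the output layer.
   [3] ,        ## @HIDDEN_LAYERS: one hidden layer with 3 nodes.
 ) ;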
 

lib/AI/NNEasy.hploo

==> learn_set (@SET , OK_OUTPUTS , LIMIT , VERBOSE)

Learns a set of inputs until the outputs reach an acceptable error (a usage
sketch follows the parameter list below).

*> @SET
A list of inputs and outputs.

*> OK_OUTPUTS
Minimal number of outputs that should be OK when calculating the errors.

By default I<OK_OUTPUTS> should be equal to the number of different
inputs in @SET.

*> LIMIT
Limit of iterations when learning. Default: 30000

*> VERBOSE
If TRUE, turns verbose mode ON while learning.
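
Putting these parameters together, training the XOR set could look like this
(a hedged sketch: the $nn object comes from the constructor example above, and
passing the set as an array reference follows the module's synopsis, which is
an assumption not shown in this excerpt):

 ## Input => output pairs for XOR.
 my @set = (
   [0,0] => [0] ,
   [0,1] => [1] ,
   [1,0] => [1] ,
   [1,1] => [0] ,
 ) ;

 ## Learn until at least 4 outputs are OK, for at most 30000 iterations,
 ## with verbose output enabled.
 $nn->learn_set( \@set , 4 , 30000 , 1 ) ;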

==> get_set_error (@SET , OK_OUTPUTS)

Gets the actual error of a set in the NN. If the returned error is bigger than
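
A usage sketch, assuming the same $nn and @set as above and that the set is
passed as an array reference, as in learn_set():

 ## If the returned error is still bigger than ERROR_OK,
 ## the set can be learned again.
 my $err = $nn->get_set_error( \@set , 4 ) ;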

lib/AI/NNEasy.hploo



Basically, when we have an input, let's say [0,1], it will activate I<n2>, which will
activate I<n3>, and I<n3> will activate I<n4> and I<n5>. But the link between I<n3> and I<n4> has a I<weight>, and
the link between I<n3> and I<n5> has another I<weight>. The idea is to find the I<weights> between the
nodes that can give us an output near the real output. So, if the output for [0,1]
is [1,1], the nodes I<output1> and I<output2> should give us a number near 1,
let's say 0.98654. And if the output for [0,0] is [0,0], I<output1> and I<output2> should give us a number near 0,
let's say 0.078875.
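
In code, reading those outputs could look like this (a hedged sketch: the run()
and run_get_winner() methods come from the module's synopsis and are assumptions
not documented in this excerpt):

 ## Raw outputs, e.g. [0.98654].
 my $out = $nn->run( [0,1] ) ;

 ## Outputs snapped to the nearest declared output type, e.g. [1].
 my $winner = $nn->run_get_winner( [0,1] ) ;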

What is hard in a NN is to find these I<weights>. By default L<AI::NNEasy> uses
I<backprop> as the learning algorithm. With I<backprop> it passes the inputs through
the Neural Network and adjusts the I<weights> using random numbers until we find
a set of I<weights> that gives us the right output.

The secret of a NN is the number of hidden layers and nodes/neurons in each layer.
Basically, the best way to define the hidden layers is 1 layer of (INPUT_NODES+OUTPUT_NODES) nodes.
So, a NN with 2 input nodes and 1 output node should have 3 nodes in the hidden layer.
This definition exists because the number of inputs defines the maximal variability of
the inputs (2**N for boolean inputs), and the output defines whether the variability is reduced by some logic restriction, like
in the XOR example, where we have 2 inputs and 1 output, so the hidden layer has 3 nodes. And as we can see in the

lib/AI/NNEasy.pm

=item IN_SIZE

The input size (number of nodes in the input layer).

=item OUT_SIZE

The output size (number of nodes in the output layer).

=item @HIDDEN_LAYERS

A list of hidden layer sizes. By default we have 1 hidden layer whose
size is calculated as I<(IN_SIZE + OUT_SIZE)>. So, for a NN with
2 inputs and 1 output, the hidden layer has 3 nodes.

=item %CONF

Conf can be used to define special parameters of the NN:

Default:

 {networktype=>'feedforward' , random_weights=>1 , learning_algorithm=>'backprop' , learning_rate=>0.1 , bias=>1}
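
Written out as a Perl hash, with each key commented (the meanings are inferred
from the key names and the rest of this documentation, so treat them as a best
guess rather than authoritative descriptions):

 my %conf = (
   networktype        => 'feedforward' ,  ## Network topology.
   random_weights     => 1 ,              ## Start with random weights.
   learning_algorithm => 'backprop' ,     ## Weight-adjustment algorithm.
   learning_rate      => 0.1 ,            ## Step size used when adjusting weights.
   bias               => 1 ,              ## Use a bias input on the nodes.
 ) ;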

lib/AI/NNEasy.pm

=over 4

=item @SET

A list of inputs and outputs.

=item OK_OUTPUTS

Minimal number of outputs that should be OK when calculating the errors.

By default I<OK_OUTPUTS> should be equal to the number of different
inputs in @SET.

=item LIMIT

Limit of iterations when learning. Default: 30000

=item VERBOSE

If TRUE, turns verbose mode ON while learning.

lib/AI/NNEasy.pm



Basically, when we have an input, let's say [0,1], it will activate I<n2>, which will
activate I<n3>, and I<n3> will activate I<n4> and I<n5>. But the link between I<n3> and I<n4> has a I<weight>, and
the link between I<n3> and I<n5> has another I<weight>. The idea is to find the I<weights> between the
nodes that can give us an output near the real output. So, if the output for [0,1]
is [1,1], the nodes I<output1> and I<output2> should give us a number near 1,
let's say 0.98654. And if the output for [0,0] is [0,0], I<output1> and I<output2> should give us a number near 0,
let's say 0.078875.

What is hard in a NN is to find these I<weights>. By default L<AI::NNEasy> uses
I<backprop> as the learning algorithm. With I<backprop> it passes the inputs through
the Neural Network and adjusts the I<weights> using random numbers until we find
a set of I<weights> that gives us the right output.

The secret of a NN is the number of hidden layers and nodes/neurons in each layer.
Basically, the best way to define the hidden layers is 1 layer of (INPUT_NODES+OUTPUT_NODES) nodes.
So, a NN with 2 input nodes and 1 output node should have 3 nodes in the hidden layer.
This definition exists because the number of inputs defines the maximal variability of
the inputs (2**N for boolean inputs), and the output defines whether the variability is reduced by some logic restriction, like
in the XOR example, where we have 2 inputs and 1 output, so the hidden layer has 3 nodes. And as we can see in the


