    ## Save the NN:
    $nn->save ;
  }

  ## Use the NN:

  my $out = $nn->run_get_winner([0,0]) ;
  print "0 0 => @$out\n" ; ## 0 0 => 0

  $out = $nn->run_get_winner([0,1]) ;
  print "0 1 => @$out\n" ; ## 0 1 => 1

  $out = $nn->run_get_winner([1,0]) ;
  print "1 0 => @$out\n" ; ## 1 0 => 1

  $out = $nn->run_get_winner([1,1]) ;
  print "1 1 => @$out\n" ; ## 1 1 => 0

  ## ... or just iterate through the @set:

  for (my $i = 0 ; $i < @set ; $i += 2) {
    my $out = $nn->run_get_winner($set[$i]) ;
    print "@{$set[$i]} => @$out\n" ;
  }
=head1 METHODS
=head2 new ( FILE , @OUTPUT_TYPES , ERROR_OK , IN_SIZE , OUT_SIZE , @HIDDEN_LAYERS , %CONF )
=over 4
=item FILE
The file path to save the NN. Default: 'nneasy.nne'.
=item @OUTPUT_TYPES
An array of the output values that the NN can produce, so the NN can find the nearest value in this
list to give you the right output.
=item ERROR_OK
The maximal error of the calculated output.
If not defined, ERROR_OK will be calculated as the minimal difference between 2
adjacent values in @OUTPUT_TYPES divided by 2 (see the sketch after this list):

  @OUTPUT_TYPES = [0 , 0.5 , 1] ;

  ERROR_OK = (1 - 0.5) / 2 = 0.25 ;
=item IN_SIZE
The input size (number of nodes in the input layer).
=item OUT_SIZE
The output size (number of nodes in the output layer).
=item @HIDDEN_LAYERS
A list of the sizes of the hidden layers. By default there is 1 hidden layer, and
its size is calculated as I<(IN_SIZE + OUT_SIZE)>. So, for a NN with
2 inputs and 1 output the hidden layer has 3 nodes.
=item %CONF
Conf can be used to define special parameters of the NN:
Default:

  {networktype=>'feedforward' , random_weights=>1 , learning_algorithm=>'backprop' , learning_rate=>0.1 , bias=>1}
Options:
=over 4
=item networktype
The type of the NN. For now only accepts I<'feedforward'>.
=item random_weights
Maximum value for the initial random weights.
=item learning_algorithm
Algorithm to train the NN. Accepts I<'backprop'> and I<'reinforce'>.
=item learning_rate
Rate used in the learning_algorithm.
=item bias
If true, a BIAS node will be created. Useful when you have NULL inputs, like [0,0].
=back
=back
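Here's a rough sketch of how the default I<ERROR_OK> can be derived from
I<@OUTPUT_TYPES>, as described above (the helper name C<calc_error_ok> is just
an illustration, not part of the module's API):

  use List::Util qw(min) ;

  ## Minimal difference between 2 adjacent output types, divided by 2:
  sub calc_error_ok {
    my @types = sort { $a <=> $b } @{ $_[0] } ;
    my @diffs = map { $types[$_+1] - $types[$_] } 0 .. $#types - 1 ;
    return min(@diffs) / 2 ;
  }

  print calc_error_ok( [0 , 0.5 , 1] ) , "\n" ; ## 0.25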
Here's a complete example of use:
  my $nn = AI::NNEasy->new(
    'xor.nne' , ## file to save the NN.
    [0,1] ,     ## Output types of the NN.
    0.1 ,       ## Maximal error for output.
    2 ,         ## Number of inputs.
    1 ,         ## Number of outputs.
    [3] ,       ## Hidden layers. (this is setting 1 hidden layer with 3 nodes).
    {random_connections=>0 , networktype=>'feedforward' , random_weights=>1 , learning_algorithm=>'backprop' , learning_rate=>0.1 , bias=>1} ,
  ) ;
And a simple example that will create a NN equal to the one above:

  my $nn = AI::NNEasy->new('xor.nne' , [0,1] , 0.1 , 2 , 1) ;
=head2 load
Load the NN if it was previously saved.
=head2 save
Save the NN to a file using L<Storable>.
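A minimal sketch of a save/load round trip, assuming the 'xor.nne' file and the
constructor arguments from the example above:

  $nn->save ;  ## writes the NN state to 'xor.nne' using Storable

  ## Later, e.g. in another process:
  my $nn2 = AI::NNEasy->new('xor.nne' , [0,1] , 0.1 , 2 , 1) ;
  $nn2->load ; ## restores the previously saved state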
=head2 learn (@IN , @OUT , N)
Learn the input.
=over 4
=item @IN
The values of one input.
=item @OUT
The values of the output for the input above.
=item N
Number of times that this input should be learned. Default: 100
Example:

  $nn->learn( [0,1] , [1] , 10 ) ;
=back
=head2 learn_set (@SET , OK_OUTPUTS , LIMIT , VERBOSE)
Learn a set of inputs until the right error for the outputs is reached.
=over 4
=item @SET
A list of inputs and outputs (see the example after this list).
=item OK_OUTPUTS
Minimal number of outputs that should be OK when calculating the errors.
By default I<OK_OUTPUTS> is the number of different inputs in the @SET.
=item LIMIT
Limit of iterations when learning. Default: 30000
=item VERBOSE
If TRUE, turns verbose mode ON when learning.
=back
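For example, a @SET for the XOR problem is a flat list of input/output pairs,
matching the @set iterated over in the synopsis (the OK_OUTPUTS and LIMIT
arguments here just restate the defaults described above; the final 0 turns
verbose off):

  my @set = (
    [0,0] => [0] ,
    [0,1] => [1] ,
    [1,0] => [1] ,
    [1,1] => [0] ,
  ) ;

  ## Learn the set, requiring all 4 different inputs to be OK:
  $nn->learn_set( \@set , 4 , 30000 , 0 ) ;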
=head2 get_set_error (@SET , OK_OUTPUTS)
Get the current error of a set in the NN. If the returned error is bigger than
I<ERROR_OK> defined on I<new()>, you should learn or relearn the set.
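A minimal sketch of the check-then-learn cycle this enables, assuming the @set
and the $nn (created with ERROR_OK = 0.1) from the examples above:

  my $set_err = $nn->get_set_error(\@set) ;

  if ( $set_err > 0.1 ) { ## 0.1 = the ERROR_OK passed to new()
    $nn->learn_set(\@set) ;
    $nn->save ;
  }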
=head2 run (@INPUT)
Run an input and return the output calculated by the NN, based on what the NN has already learned.
=head2 run_get_winner (@INPUT)
Same as I<run()>, but the output returned is the nearest value from the
I<@OUTPUT_TYPES> defined at I<new()>.
For example, an input I<[0,1]> that was learned with the output I<[1]> will
actually produce something like 0.98324 as output, and not 1, since the error
should never be 0. So, with I<run_get_winner()> we take the output of I<run()>,
let's say 0.98324, and find the output type nearest to this number, which in
this case is 1. For an output of [0], I<run()> will return something like
0.078964, and I<run_get_winner()> will return 0.
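A short sketch of the difference (the raw values shown are only examples and
will vary from training run to training run):

  my $raw    = $nn->run([0,1]) ;            ## e.g. [0.98324]
  my $winner = $nn->run_get_winner([0,1]) ; ## [1]

  print "raw: @$raw | winner: @$winner\n" ;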
=head1 Samples
Inside the release sources you can find the directory ./samples, where there are
some examples of code using this module.
=head1 INLINE C
Some functions of this module have I<Inline> versions written in C.
I made a C version only for the functions that are called heavily, like:

  AI::NNEasy::_learn_set_get_output_error
  AI::NNEasy::NN::tanh
  AI::NNEasy::NN::feedforward::run
  AI::NNEasy::NN::backprop::hiddenToOutput
  AI::NNEasy::NN::backprop::hiddenOrInputToHidden
  AI::NNEasy::NN::backprop::RMSErr

This gives us the speed that we need to learn the inputs fast, while at the
same time keeping the NN flexible.
=head1 Class::HPLOO
I used L<Class::HPLOO> to write the module quickly, especially the XS support.
L<Class::HPLOO> enables this kind of syntax for Perl classes:

  class Foo {

    sub bar($x , $y) {
      $this->add($x , $y) ;
    }

    sub[C] int add( int x , int y ) {
      int res = x + y ;
      return res ;
    }

  }

This made it possible to write the module in 2 days! ;-P
=head1 Basics of a Neural Network
I<This is just a simple text for lay people, to try to make them understand
what a Neural Network is and how it works, without needing to read a lot of
books.>
A NN is based on nodes/neurons and layers, where we have the input layer, the hidden layers and the output layer.
For example, here we have a NN with 2 inputs, 1 hidden layer, and 2 outputs:
          Input   Hidden   Output

  input1 ---->n1\      /---->n4---> output1
                 \    /
                   n3
                 /    \
  input2 ---->n2/      \---->n5---> output2
Basically, when we have an input, let's say [0,1], it will activate I<n2>, which will
activate I<n3>, and I<n3> will activate I<n4> and I<n5>; but the link between I<n3> and I<n4> has a I<weight>, and
the link between I<n3> and I<n5> another I<weight>. The idea is to find the I<weights> between the
nodes that give us an output near the real output. So, if the output of [0,1]
is [1,1], the nodes I<output1> and I<output2> should give us a number near 1,
let's say 0.98654. And if the output for [0,0] is [0,0], I<output1> and I<output2> should give us a number near 0,
let's say 0.078875.
What is hard in a NN is to find these I<weights>. By default L<AI::NNEasy> uses
I<backprop> as the learning algorithm: it passes the inputs through
the Neural Network and adjusts the I<weights>, starting from random values, until
it finds a set of I<weights> that gives us the right output.
The secret of a NN is the number of hidden layers and the number of nodes/neurons in each layer.
Basically, the best way to define the hidden layers is 1 layer of (INPUT_NODES + OUTPUT_NODES) nodes.
So, a NN with 2 input nodes and 1 output node should have 3 nodes in the hidden layer.
This rule exists because the number of inputs defines the maximal variability of
the inputs (2**N for N boolean inputs), and the output defines whether the variability is
reduced by some logic restriction, as in the XOR example, where we have 2 inputs and 1 output,
so the hidden layer gets 3 nodes. And as we can see in the
logic we have 3 groups of inputs:
  0 0 => 0  ## group 1: both off
  0 1 => 1  ## group 2: one on
  1 0 => 1  ## group 2: one on
  1 1 => 0  ## group 3: both on
Actually this is not the real explanation, but it is the easiest way to understand that
you need a number of nodes/neurons in the hidden layer that can give the
right output for your problem.
Another important step of a NN is the learning phase, where we pass a set of inputs
through the NN until we get the right output. This process basically
adjusts the nodes' I<weights> until we have an output near the real output that we want.
Another important concept is that the inputs and outputs of the NN should be
in the range 0 to 1. So, you can define sets like:

  0   0   => 0
  0   0.5 => 0.5
  0.5 0.5 => 1
  1   0.5 => 0
  1   1   => 1

But what is really recommended is to always use boolean values, just 0 or 1,
for inputs and outputs, since the learning phase will be faster and will work
better for complex problems.
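If your raw data is not already in the 0 to 1 range, a simple min-max scaling
keeps it compatible (a generic sketch, not part of the AI::NNEasy API):

  ## Scale a raw value into the 0..1 range expected by the NN:
  sub scale01 {
    my ( $value , $min , $max ) = @_ ;
    return ( $value - $min ) / ( $max - $min ) ;
  }

  print scale01( 50 , 0 , 200 ) , "\n" ; ## 0.25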
=head1 SEE ALSO
L<AI::NNFlex>, L<AI::NeuralNet::Simple>, L<Class::HPLOO>, L<Inline>.
=head1 AUTHOR
Graciliano M. P. <gmpassos@cpan.org>
I will appreciate any type of feedback (including your opinions and/or suggestions). ;-P
Thanks a lot to I<Charles Colbourn <charlesc at nnflex.g0n.net>>, the author of
L<AI::NNFlex>: first for writing NNFlex, since it was my starting point for this
NN work, and second for keeping in touch during the development of L<AI::NNEasy>.
=head1 COPYRIGHT
This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.
=cut