    PUSHMARK(SP) ;
      XPUSHs( nn ) ;
      XPUSHs( set_out ) ;
    PUTBACK ;

    call_method("RMSErr", G_SCALAR) ;   /* $nn->RMSErr($set_out) */

    SPAGAIN ;
    ret = POPs ;
    er = SvNV(ret) ;

    if (er < 0) er *= -1 ;              /* Take the absolute error. */
    if (er < error_ok) ++learn_ok ;     /* Count samples inside the error margin. */
    err += er ;

    if ( verbose ) sv_catpvf(print_verbose , "%s => %s > %f\n" ,
      SvPV( _av_join( OBJ_AV(set_in) ) , len) ,
      SvPV( _av_join( OBJ_AV(set_out) ) , len) ,
      er
    ) ;
  }

  err /= ins_ok ;                       /* Mean error over the whole set. */

  /* Return (err, learn_ok) and, in verbose mode, the report string too: */
  if (verbose) {
    EXTEND(SP , 3) ;
    ST(0) = sv_2mortal(newSVnv(err)) ;
    ST(1) = sv_2mortal(newSViv(learn_ok)) ;
    ST(2) = print_verbose ;
    XSRETURN(3) ;
  }
  else {
    EXTEND(SP , 2) ;
    ST(0) = sv_2mortal(newSVnv(err)) ;
    ST(1) = sv_2mortal(newSViv(learn_ok)) ;
    XSRETURN(2) ;
  }
}
__INLINE_C_SRC__
}
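## A rough pure-Perl equivalent of the XS error loop above, kept here as a
## readable sketch only. The run()/RMSErr() split and the variable names are
## assumptions, since the start of the XS function is not shown in this view.
sub _get_set_error_sketch {
  my ( $nn , $set , $error_ok ) = @_ ;
  my ( $err , $learn_ok ) = ( 0 , 0 ) ;
  my $ins_ok = @$set / 2 ;  ## Each sample is an input/output pair.
  for ( my $i = 0 ; $i < @$set ; $i += 2 ) {
    $nn->run( $set->[$i] ) ;                       ## Feed the inputs forward.
    my $er = abs( $nn->RMSErr( $set->[$i+1] ) ) ;  ## Error against the wanted output.
    ++$learn_ok if $er < $error_ok ;               ## Sample inside the error margin.
    $err += $er ;
  }
  return ( $err / $ins_ok , $learn_ok ) ;          ## Mean error and hit count.
}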
1;
__END__
=head1 NAME
AI::NNEasy - Define, learn and use easy Neural Networks of different types, using portable code in Perl and XS.
=head1 DESCRIPTION
The main purpose of this module is to create easy Neural Networks with Perl.

The module was designed so that it can be extended to multiple network types, learning algorithms and activation functions.
This architecture was first based on the module L<AI::NNFlex>; I then rewrote it to fix some
serialization bugs, optimized the code, and added some XS functions to speed up
the learning process. Finally I added an intuitive interface to create and use the NN,
and added a winner algorithm to the output.

I wrote this module because, after testing different NN modules on Perl, I couldn't find
one that is portable across Linux and Windows, easy to use and, most important,
one that really works on a real problem.

With this module you don't need to know much about NNs to be able to construct one: you just
define the construction of the NN, learn your set of inputs, and use it.
=head1 USAGE
Here's an example of a NN to compute XOR:
  use AI::NNEasy ;

  ## Our maximal error for the output calculation.
  my $ERR_OK = 0.1 ;

  ## Create the NN:
  my $nn = AI::NNEasy->new(
    'xor.nne' , ## File to save the NN.
    [0,1] ,     ## Output types of the NN.
    $ERR_OK ,   ## Maximal error for output.
    2 ,         ## Number of inputs.
    1 ,         ## Number of outputs.
    [3] ,       ## Hidden layers. (This sets 1 hidden layer with 3 nodes).
  ) ;

  ## Our set of inputs and outputs to learn:
  my @set = (
    [0,0] => [0],
    [0,1] => [1],
    [1,0] => [1],
    [1,1] => [0],
  );

  ## Calculate the actual error for the set:
  my $set_err = $nn->get_set_error(\@set) ;

  ## If the set error is bigger than the maximal error, let's learn this set:
  if ( $set_err > $ERR_OK ) {
    $nn->learn_set( \@set ) ;
    ## Save the NN:
    $nn->save ;
  }

  ## Use the NN:
  my $out = $nn->run_get_winner([0,0]) ;
  print "0 0 => @$out\n" ; ## 0 0 => 0

  $out = $nn->run_get_winner([0,1]) ;
  print "0 1 => @$out\n" ; ## 0 1 => 1

  $out = $nn->run_get_winner([1,0]) ;
  print "1 0 => @$out\n" ; ## 1 0 => 1

  $out = $nn->run_get_winner([1,1]) ;
  print "1 1 => @$out\n" ; ## 1 1 => 0
  }

    sub[C] int add( int x , int y ) {
      int res = x + y ;
      return res ;
    }

  }
This mix of Perl and Inline C (through L<Class::HPLOO>) is what made it possible to write the module in 2 days! ;-P
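For reference, here's a minimal self-contained sketch of such a hybrid class.
It is based on the fragment above and on Class::HPLOO's documented syntax; the
class name, the plain Perl method and the auto-generated new() are assumptions
made for illustration:

  use Class::HPLOO ;

  class Calc {

    sub double ($x) {      ## A plain Perl method.
      return $x * 2 ;
    }

    sub[C] int add( int x , int y ) {  ## Compiled as C through Inline.
      int res = x + y ;
      return res ;
    }

  }

  ## Assuming the C function is bound into the package as a plain sub:
  print Calc::add(1 , 2) , "\n" ;  ## 3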
=head1 Basics of a Neural Network
I<- This is just a short text for lay people, to help them understand what
a Neural Network is and how it works, without needing to read a lot of books. ->

A NN is based on nodes/neurons and layers, where we have the input layer, the hidden layers and the output layer.

For example, here we have a NN with 2 inputs, 1 hidden layer, and 2 outputs:
     Input        Hidden        Output

  input1 ---->n1\          /---->n4---> output1
                 \        /
                  \      /
                    n3
                  /      \
                 /        \
  input2 ---->n2/          \---->n5---> output2
Basically, when we have an input, let's say [0,1], it will activate I<n2>, which will
activate I<n3>, and I<n3> will activate I<n4> and I<n5>. But the link between I<n3> and I<n4> has a I<weight>, and
the link between I<n3> and I<n5> another I<weight>. The idea is to find the I<weights> between the
nodes that give us an output near the real output. So, if the output of [0,1]
is [1,1], the nodes I<output1> and I<output2> should give us a number near 1,
let's say 0.98654. And if the output for [0,0] is [0,0], I<output1> and I<output2> should give us a number near 0,
let's say 0.078875.
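In code terms, each node just sums its weighted inputs and squashes the result
into that 0..1 range. Here's a minimal sketch in plain Perl, assuming a sigmoid
activation (the module supports more than one activation function, so this is
an illustration, not the module's exact code):

  ## One node: output = sigmoid( sum of input * weight ).
  sub node_output {
    my ( $inputs , $weights ) = @_ ;
    my $sum = 0 ;
    $sum += $inputs->[$_] * $weights->[$_] for 0 .. $#{$inputs} ;
    return 1 / ( 1 + exp(-$sum) ) ;  ## Sigmoid squashes the sum into (0,1).
  }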
What is hard in a NN is to find these I<weights>. By default L<AI::NNEasy> uses
I<backprop> as the learning algorithm. With I<backprop> the inputs are passed through
the Neural Network and the I<weights> (initially random numbers) are adjusted until we find
a set of I<weights> that gives us the right output.
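For reference, the textbook I<backprop> update for a single output weight looks
like this. It is a generic sketch of the delta rule, not the module's exact
code; $lr is an assumed learning rate:

  ## Delta rule for an output node with sigmoid activation:
  ## weight += lr * (target - out) * out * (1 - out) * input
  sub update_weight {
    my ( $w , $input , $out , $target , $lr ) = @_ ;
    my $delta = ( $target - $out ) * $out * ( 1 - $out ) ;
    return $w + $lr * $delta * $input ;
  }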
The secret of a NN is the number of hidden layers and of nodes/neurons in each layer.
Basically, the best way to define the hidden layers is 1 layer of (INPUT_NODES+OUTPUT_NODES) nodes.
So, a NN with 2 input nodes and 1 output node should have 3 nodes in the hidden layer.

This definition exists because the number of inputs defines the maximal variability of
the inputs (2**N for boolean inputs), and the output defines whether the variability is reduced by some logic restriction, like
in the XOR example, where we have 2 inputs and 1 output, so the hidden layer gets 3 nodes. And as we can see in the
logic, we have 3 groups of inputs:
  0 0 => 0  # false
  0 1 => 1  # or
  1 0 => 1  # or
  1 1 => 0  # and
Actually this is not the real explanation, but it is the easiest way to understand that
you need a number of nodes/neurons in the hidden layer that can give the
right output for your problem.
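Applied to the constructor from the USAGE section, that rule of thumb makes the
hidden layer definition mechanical. Here's a sketch; the arguments mirror the XOR
example above:

  my $inputs  = 2 ;
  my $outputs = 1 ;
  my @hidden  = ( $inputs + $outputs ) ;  ## One hidden layer of 3 nodes.
  my $nn = AI::NNEasy->new( 'xor.nne' , [0,1] , 0.1 , $inputs , $outputs , \@hidden ) ;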
Another important step of a NN is the learning phase, where we take a set of inputs
and pass them through the NN until we get the right output. This process basically
adjusts the node I<weights> until we have an output near the real output that we want.
Another important concept is that the inputs and outputs of the NN should be values from 0 to 1.
So, you can define sets like:
  0   0   => 0
  0   0.5 => 0.5
  0.5 0.5 => 1
  1   0.5 => 0
  1   1   => 1
But what is really recommended is to always use boolean values, just 0 or 1, for inputs and outputs,
since the learning phase will be faster and will work better for complex problems.
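If your raw data is not already in the 0..1 range, scale it before building the
sets. Here's a minimal min-max normalization sketch (a generic helper written for
this example, not something the module provides):

  ## Map raw values into the 0..1 range expected by the NN:
  sub normalize {
    my ( $vals , $min , $max ) = @_ ;
    return [ map { ( $_ - $min ) / ( $max - $min ) } @$vals ] ;
  }

  my $in = normalize( [10, 25, 40] , 10 , 40 ) ;  ## => [0, 0.5, 1]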
=head1 SEE ALSO
L<AI::NNFlex>, L<AI::NeuralNet::Simple>, L<Class::HPLOO>, L<Inline>.
=head1 AUTHOR
Graciliano M. P. <gmpassos@cpan.org>
I will appreciate any type of feedback (including your opinions and/or suggestions). ;-P

Thanks a lot to I<Charles Colbourn <charlesc at nnflex.g0n.net>>, the
author of L<AI::NNFlex>: first for writing NNFlex, since it was my starting point for
this NN work, and second for staying in touch during the development of L<AI::NNEasy>.
=head1 COPYRIGHT
This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.
=cut