  AI::NNEasy::NN::backprop::hiddenToOutput
  AI::NNEasy::NN::backprop::hiddenOrInputToHidden
  AI::NNEasy::NN::backprop::RMSErr
This is what gives us the speed that we need to learn the inputs fast, while at the same time
being able to create flexible NNs.
=head1 Class::HPLOO
I have used L<Class::HPLOO> to write the module quickly, especially the XS support.
L<Class::HPLOO> enables this kind of syntax for Perl classes:
  class Foo {
    sub bar($x , $y) {
      $this->add($x , $y) ;
    }
    sub[C] int add( int x , int y ) {
      int res = x + y ;
      return res ;
    }
  }
This is what made it possible to write the module in 2 days! ;-P
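For illustration, here is how the class above could be used, assuming the default C<new> constructor that L<Class::HPLOO> generates (a sketch; check the L<Class::HPLOO> documentation for the exact constructor behavior):

  my $foo = Foo->new ;
  print $foo->bar(3 , 4) , "\n" ; ## bar() calls the C function add(), so this prints 7.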
=head1 Basics of a Neural Network
I<- This is just a simple text for lay people,
to try to make them understand what a Neural Network is and how it works,
without the need to read a lot of books. ->
A NN is based on nodes/neurons and layers, where we have the input layer, the hidden layers and the output layer.
For example, here we have a NN with 2 inputs, 1 hidden layer, and 2 outputs:
   Input          Hidden        Output

  input1 ---->n1\     /---->n4---> output1
                 \   /
                  n3
                 /   \
  input2 ---->n2/     \---->n5---> output2
Basically, when we have an input, let's say [0,1], it will activate I<n2>, which will
activate I<n3>, and I<n3> will activate I<n4> and I<n5>. But the link between I<n3> and I<n4> has a I<weight>, and
the link between I<n3> and I<n5> has another I<weight>. The idea is to find the I<weights> between the
nodes that give us an output near the real output. So, if the output of [0,1]
is [1,1], the nodes I<output1> and I<output2> should give us a number near 1,
let's say 0.98654. And if the output for [0,0] is [0,0], I<output1> and I<output2> should give us a number near 0,
let's say 0.078875.
The hard part of a NN is to find these I<weights>. By default L<AI::NNEasy> uses
I<backprop> as the learning algorithm. With I<backprop> it passes the inputs through
the Neural Network and adjusts the I<weights>, which start as random numbers, until we find
a set of I<weights> that gives us the right output.
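To make the idea of weight adjustment concrete, here is a minimal sketch in plain Perl of a single weight update using the I<delta rule>, a simplified view of what I<backprop> does for each link (this is an illustration, not the actual L<AI::NNEasy> internals):

  my $learning_rate = 0.5 ;
  my $weight = rand() ;             ## The link weight starts as a random number.
  my $input  = 1 ;                  ## The value that crossed the link.
  my $target = 1 ;                  ## The output that we want.
  my $output = $weight * $input ;   ## A real NN also applies an activation function here.
  my $error  = $target - $output ;
  $weight += $learning_rate * $error * $input ; ## Move the weight to reduce the error.

Repeating this update over the whole set of inputs is what slowly drives the I<weights> to values that produce the right outputs.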
The secret of a NN is in the number of hidden layers and the number of nodes/neurons in each layer.
Basically, the best way to define the hidden layers is 1 layer with (INPUT_NODES+OUTPUT_NODES) nodes.
So, a NN with 2 input nodes and 1 output node should have 3 nodes in the hidden layer.
This rule exists because the number of inputs defines the maximal variability of
the inputs (2**N for boolean inputs), and the output defines whether the variability is reduced by some logic restriction, like
in the XOR example, where we have 2 inputs and 1 output, so the hidden layer gets 3 nodes. And as we can see in the
logic, we have 3 groups of inputs:
  0 0 => 0  ## both inputs false
  0 1 => 1  ## one input true
  1 0 => 1  ## one input true
  1 1 => 0  ## both inputs true
Actually this is not the real explanation, but it is the easiest way to understand that
you need to have a number of nodes/neurons in the hidden layer that can give the
right output for your problem.
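As a sketch of how this rule maps to the module, a XOR network with 2 inputs, 1 output and 1 hidden layer of 3 nodes could be created like this (the argument order follows my reading of the L<AI::NNEasy> SYNOPSIS, so check the main documentation for the exact interface):

  use AI::NNEasy ;

  ## 2 input nodes + 1 output node => 3 nodes in the hidden layer:
  my $nn = AI::NNEasy->new(
    'xor.nne' , ## File to save the NN.
    [0,1] ,     ## The output types of the NN.
    0.1 ,       ## Maximal error for the output.
    2 ,         ## Number of inputs.
    1 ,         ## Number of outputs.
    [3] ,       ## Hidden layers: here, 1 layer with 3 nodes.
  ) ;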
Another important step of a NN is the learning phase, where we take a set of inputs
and pass them through the NN until we have the right output. This process basically
adjusts the node I<weights> until we have an output near the real output that we want.
Another important concept is that the inputs and outputs of the NN should be values from 0 to 1.
So, you can define sets like:
  0    0    => 0
  0    0.5  => 0.5
  0.5  0.5  => 1
  1    0.5  => 0
  1    1    => 1
But what is really recommended is to always use boolean values, just 0 or 1, for inputs and outputs,
since the learning phase will be faster and works better for complex problems.
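Continuing with the C<$nn> object created above, a boolean XOR set can be taught like this (a sketch that assumes the C<learn_set>, C<save> and C<run_get_winner> methods of the module's interface):

  my @set = (
    [0,0] => [0] ,
    [0,1] => [1] ,
    [1,0] => [1] ,
    [1,1] => [0] ,
  ) ;

  my $set_err = $nn->learn_set( \@set ) ; ## Learn until the maximal error is reached.
  $nn->save ;

  my $out = $nn->run_get_winner( [1,0] ) ; ## Should give us an output near [1].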
=head1 SEE ALSO
L<AI::NNFlex>, L<AI::NeuralNet::Simple>, L<Class::HPLOO>, L<Inline>.
=head1 AUTHOR
Graciliano M. P. <gmpassos@cpan.org>
I will appreciate any type of feedback (including your opinions and/or suggestions). ;-P
Thanks a lot to I<Charles Colbourn <charlesc at nnflex.g0n.net>>, the
author of L<AI::NNFlex>: first for writing it, since NNFlex was my starting point for
this NN work, and second for being in touch with the development of L<AI::NNEasy>.
=head1 COPYRIGHT
This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.
=cut