AI-NNEasy
lib/AI/NNEasy.hploo
nodes that can give us an output near the real output. So, if the output for [0,1]
is [1,1], the nodes I<output1> and I<output2> should give us a number near 1,
say 0.98654. And if the output for [0,0] is [0,0], I<output1> and I<output2>
should give us a number near 0, say 0.078875.
What is hard in a NN is to find these I<weights>. By default L<AI::NNEasy> uses
I<backprop> as the learning algorithm. With I<backprop> the inputs are passed through
the Neural Network and the I<weights>, initially random numbers, are adjusted until we
find a set of I<weights> that gives us the right output.
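To make the weight-adjustment idea concrete, here is a minimal sketch of a single
I<backprop> step for one sigmoid neuron. This is plain Perl, independent of
L<AI::NNEasy>'s internals, and all names in it are illustrative:

  use strict ;
  use warnings ;

  ## One sigmoid neuron with 2 inputs; weights start as random numbers:
  my @w  = ( rand() - 0.5 , rand() - 0.5 ) ;
  my $b  = rand() - 0.5 ;
  my $lr = 0.5 ;                ## learning rate

  my @in     = ( 0 , 1 ) ;     ## input pattern
  my $target = 1 ;             ## desired output for this pattern

  ## Forward pass: weighted sum, then sigmoid activation.
  my $sum = $b ;
  $sum += $w[$_] * $in[$_] for 0 .. $#in ;
  my $out = 1 / ( 1 + exp(-$sum) ) ;

  ## Backward pass: delta = error * derivative of the sigmoid.
  my $delta = ( $target - $out ) * $out * ( 1 - $out ) ;

  ## Adjust the weights toward the right output.
  $w[$_] += $lr * $delta * $in[$_] for 0 .. $#in ;
  $b     += $lr * $delta ;

Repeating this step over all the patterns of the training set is what drives the
output error down.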
The secret of a NN is the number of hidden layers and the number of nodes/neurons in
each layer. Basically, the best way to define the hidden layers is 1 layer with
(INPUT_NODES+OUTPUT_NODES) nodes. So, a NN with 2 input nodes and 1 output node
should have 3 nodes in the hidden layer. This rule exists because the number of inputs
defines the maximal variability of the inputs (2**N combinations for N boolean inputs),
and the output defines whether that variability is reduced by some logic restriction,
like in the XOR example, where we have 2 inputs and 1 output, so the hidden layer gets
3 nodes. And as we can see in the logic we have 3 groups of inputs:
  0 0 => 0  # false
  0 1 => 1  # or
  1 0 => 1  # or
  1 1 => 0  # and
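A network for this XOR table can be built and trained along the following lines,
adapted from the module's own synopsis (the constructor arguments are: file to save
the NN, output types, maximal error, number of inputs, number of outputs, and the
hidden layer sizes):

  use AI::NNEasy ;

  my $nn = AI::NNEasy->new(
    'xor.nne' ,   ## File to save the NN.
    [0,1] ,       ## Output types of the NN.
    0.1 ,         ## Maximal error for output.
    2 ,           ## Number of inputs.
    1 ,           ## Number of outputs.
    [3] ,         ## 1 hidden layer with 3 nodes (INPUT_NODES+OUTPUT_NODES).
  ) ;

  ## The XOR truth table as a learning set:
  my @set = (
    [0,0] => [0] ,
    [0,1] => [1] ,
    [1,0] => [1] ,
    [1,1] => [0] ,
  ) ;

  $nn->learn_set( \@set ) ;

  ## Run an input and get the output rounded to the nearest output type:
  my $out = $nn->run_get_winner( [0,1] ) ;  ## should be near [1]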
lib/AI/NNEasy/NN/feedforward.hploo
          $node->{activation} += $$inputPatternRef[$counter] ;  ## accumulate the input
        }
        else {
          $node->{activation} = $$inputPatternRef[$counter] ;   ## set the activation directly
        }
      }
      ++$counter ;
  }

  # Now flow activation through the network starting with the second layer
  my ( $function ) ;
  foreach my $layer ( @{$this->{layers}}[1 .. $#{$this->{layers}}] ) {
    foreach my $node ( @{$layer->{nodes}} ) {
      $node->{activation} = 0 if !$node->{persistent_activation} ;  ## reset unless persistent
      $function = $node->{activation_function} ;  ## name of this node's activation function
      foreach my $connectedNode ( @{$node->{connectedNodesWest}->{nodes}} ) {
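The excerpt is cut off inside this inner loop. Conceptually, each node past the input
layer sums the activations of its "west" (previous-layer) nodes weighted by the
connecting weights, then applies its activation function. Here is a minimal
self-contained sketch of that idea, using made-up structures rather than the module's
real ones:

  use strict ;
  use warnings ;

  sub sigmoid { 1 / ( 1 + exp( -$_[0] ) ) }

  ## Two layers: 2 input nodes, then 3 hidden nodes, each hidden node
  ## holding one weight per node to its "west" (the input layer):
  my @layers = (
    [ { activation => 0 } , { activation => 1 } ] ,
    [ map { +{ activation => 0 , weights => [ 0.5 , -0.4 ] } } 1 .. 3 ] ,
  ) ;

  ## Flow activation through the network starting with the second layer:
  foreach my $l ( 1 .. $#layers ) {
    my $west = $layers[ $l - 1 ] ;
    foreach my $node ( @{ $layers[$l] } ) {
      $node->{activation} = 0 ;
      $node->{activation} += $west->[$_]{activation} * $node->{weights}[$_]
          for 0 .. $#$west ;
      $node->{activation} = sigmoid( $node->{activation} ) ;
    }
  }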