AI-NNEasy
lib/AI/NNEasy.hploo
What is hard in a NN is to find these I<weights>. By default
L<AI::NNEasy> uses I<backprop> as the learning algorithm. With I<backprop>
it passes the inputs through the Neural Network and adjusts the I<weights>
using random numbers until we find a set of I<weights> that gives us the
right output.
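
For intuition only (this is not AI::NNEasy's internal code), here is a
minimal delta-rule weight update in Perl: start from random weights and
nudge each one toward the target output until the error is small. Real
backprop does the same kind of adjustment through every layer of the network.

  use strict;
  use warnings;

  ## Toy single-neuron update (delta rule), just to illustrate the idea:
  my @w  = map { rand() - 0.5 } 1 .. 2 ;  ## start from random weights
  my $lr = 0.5 ;                          ## learning rate

  sub nn_out {
    my ($in) = @_ ;
    my $sum = 0 ;
    $sum += $w[$_] * $in->[$_] for 0 .. $#w ;
    return $sum ;
  }

  ## Repeatedly adjust the weights toward the desired output:
  for ( 1 .. 1000 ) {
    my ($in, $target) = ( [1,0] , 1 ) ;
    my $err = $target - nn_out($in) ;                 ## current error
    $w[$_] += $lr * $err * $in->[$_] for 0 .. $#w ;   ## move weights to reduce it
  }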
The secret of a NN is the number of hidden layers and of nodes/neurons in
each layer. Basically the best way to define the hidden layers is 1 layer
of (INPUT_NODES+OUTPUT_NODES) nodes. So, a layer of 2 input nodes and 1
output node should have 3 nodes in the hidden layer.
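
As a sketch, assuming the constructor shown in the module's SYNOPSIS (NN
file, output types, maximal error, number of inputs, number of outputs,
hidden layers), that rule translates to:

  use AI::NNEasy ;

  ## 2 inputs + 1 output => 1 hidden layer with 3 nodes ([3]):
  my $nn = AI::NNEasy->new(
    'xor.nne' ,  ## File to save the NN.
    [0,1] ,      ## Output types of the NN.
    0.1 ,        ## Maximal error for an output.
    2 ,          ## Number of inputs.
    1 ,          ## Number of outputs.
    [3] ,        ## Hidden layers: 1 layer with 3 nodes.
  );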
This definition exists because the number of inputs defines the maximal
variability of the inputs (2**N combinations for N boolean inputs), and
the output defines whether that variability is reduced by some logic
restriction, like in the XOR example, where we have 2 inputs and 1 output,
so the hidden layer has 3 nodes. And as we can see in the logic we have
3 groups of inputs:
  0 0 => 0  # false
  0 1 => 1  # or
  1 0 => 1  # or
  1 1 => 1  # true
Actually this is not the real explanation, but it is the easiest way to
understand that you need a number of nodes/neurons in the hidden layer
that can produce the right output for your problem.
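
To close the loop, a hedged usage sketch, assuming the learn_set() and
run_get_winner() methods from the module's POD. The set below is the XOR
table named above (note that XOR itself gives 1 1 => 0, unlike the logic
grouping shown earlier):

  ## Learn the XOR set, then query the trained NN:
  my @set = (
    [0,0] => [0] ,
    [0,1] => [1] ,
    [1,0] => [1] ,
    [1,1] => [0] ,
  );

  $nn->learn_set( \@set ) ;

  my $out = $nn->run_get_winner( [1,0] ) ;
  print "1 0 => @$out\n" ;  ## expected: 1 0 => 1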