AI-NeuralNet-SOM
lib/AI/NeuralNet/SOM.pm
=item C<learning_rate>: (optional, default C<0.1>)
This is a magic number which controls how strongly the vectors in the grid are influenced. Stronger
movement can mean faster learning if the clusters are very pronounced. If they are not, the movement
behaves more like noise and the convergence is poor. To mitigate that effect, the learning rate is
reduced over the iterations.
=item C<sigma0>: (optional, defaults to radius)
A non-negative number representing the start value for the learning radius. Practically, the value
should be chosen so that it covers a larger part of the map. During the learning process this
value is narrowed down, so that the learning radius affects fewer and fewer neurons.
B<NOTE>: Do not choose C<1>, as the C<log> function is applied to this value.
=back
Subclasses will (re)define some of these parameters and add others:
Example:
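The concrete constructor lives in the subclasses. As a hedged sketch only (assuming the rectangular
subclass C<AI::NeuralNet::SOM::Rect> with its C<output_dim> and C<input_dim> parameters), passing the
options described above might look like this:

    use AI::NeuralNet::SOM::Rect;            # one concrete subclass, used here for illustration

    my $nn = AI::NeuralNet::SOM::Rect->new (
        output_dim    => "5x6",              # a 5 x 6 grid of neurons (subclass-specific)
        input_dim     => 3,                  # dimension of the sample vectors
        learning_rate => 0.1,                # optional, see above
        sigma0        => 4,                  # optional start value for the learning radius
    );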
=item I<train>
I<$nn>->train ( I<$epochs>, I<@vectors> )
I<@mes> = I<$nn>->train ( I<$epochs>, I<@vectors> )
The training uses the list of sample vectors to make the network learn. Each vector is simply a
reference to an array of values.
The C<$epochs> parameter controls how many times the training cycle is repeated. The vectors are
B<NOT> used in sequence, but picked randomly from the list; for this reason it is wise to run several
epochs, not just one. Within one epoch, though, B<all> vectors are visited exactly once.
Example:
    $nn->train (30,
                [  3,  2,  4 ],
                [ -1, -1, -1 ],
                [  0,  4, -3 ]);
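The second signature above suggests that, in list context, C<train> returns a list (written I<@mes>
above, presumably error measures collected during training). A minimal sketch of capturing that
return value, under that assumption:

    # assumes @mes holds one error measure per training step, as the name suggests
    my @mes = $nn->train (30,
                          [  3,  2,  4 ],
                          [ -1, -1, -1 ],
                          [  0,  4, -3 ]);
    print "$_\n" for @mes;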