AI-NeuralNet-SOM

=head1 SYNOPSIS
  use AI::NeuralNet::SOM::Hexa;
  my $nn = new AI::NeuralNet::SOM::Hexa (output_dim => 6,
                                         input_dim  => 4);

  $nn->initialize ( [ 0, 0, 0, 0 ] );        # all get this value

  $nn->value (3, 2, [ 1, 1, 1, 1 ]);         # change value for a neuron
  print $nn->value (3, 2);

  $nn->label (3, 2, 'Danger');               # add a label to the neuron
  print $nn->label (3, 2);
=head1 DESCRIPTION
This package is a stripped-down implementation of Kohonen maps (self-organizing
maps). It is B<NOT> meant as a demonstration or for use together with some
visualisation software. And while it is not (yet) optimized for speed, some
consideration has been given to making it not overly slow.

Particular emphasis has been given to making the package play nicely with
others: no use of files, no arcane dependencies, etc.
=head2 Scenario
The basic idea is that the neural network consists of a 2-dimensional
array of N-dimensional vectors. When training starts, these vectors may
be completely random, but over time the network learns from the sample
data, which is a set of N-dimensional vectors.

Slowly, the vectors in the network will try to approximate the sample
vectors fed in. If there were clusters in the sample vectors, these
clusters will show up as neighbourhoods within the rectangle (or
whatever topology you are using).

Technically, you have reduced your dimension from N to 2.
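To make this concrete, here is a minimal end-to-end sketch using the
C<Rect> subclass. The sample vectors are made up, and it assumes that
C<value> returns the neuron's vector as an array reference:

  use AI::NeuralNet::SOM::Rect;

  my $nn = new AI::NeuralNet::SOM::Rect (output_dim => "5x6",
                                         input_dim  => 3);
  $nn->initialize;                                             # random start vectors
  $nn->train (30, [ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3 ]); # learn the samples

  my $v = $nn->value (0, 0);                                   # inspect one neuron afterwards
  print "@$v\n";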
=head1 INTERFACE
=head2 Constructor
The constructor takes arguments:
=over
=item C<input_dim> : (mandatory, no default)
A positive integer specifying the dimension of the sample vectors (and hence that of the vectors in
the grid).
=item C<learning_rate>: (optional, default C<0.1>)

This is a magic number which controls how strongly the vectors in the grid are pulled
towards each sample. Stronger movement can mean faster learning if the clusters are very
pronounced. If not, then the movement is more like noise and the convergence is not good.
To mitigate that effect, the learning rate is reduced over the iterations.
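As the C<train> implementation below shows, the effective rate decays exponentially
with the epoch:

  my $l = $self->{_L0} * exp ( - $self->{T} / $epochs );   # rate at epoch T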
=item C<sigma0>: (optional, defaults to radius)

A non-negative number representing the start value for the learning radius. Practically,
the value should be chosen so that it covers a larger part of the map. During the learning
process this value will be narrowed down, so that the learning radius impacts fewer and
fewer neurons.

B<NOTE>: Do not choose C<1> as the C<log> function is used on this value (C<log 1> is
zero, which would lead to a division by zero during training).
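The radius follows the same kind of exponential decay as the learning rate, as can be
seen in C<train> below:

  $self->{LAMBDA} = $epochs / log ($self->{_Sigma0});
  my $sigma = $self->{_Sigma0} * exp ( - $self->{T} / $self->{LAMBDA} );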
=back
Subclasses will (re)define some of these parameters and add others.

Example:

  my $nn = new AI::NeuralNet::SOM::Rect (output_dim => "5x6",
                                         input_dim  => 3);
=cut
sub new { die; }   # abstract; instantiate one of the subclasses (e.g. ::Rect, ::Hexa) instead
=pod
=head2 Methods
=over
=item I<initialize>
I<$nn>->initialize
You need to initialize all vectors in the map before training. There are several options
for how this is done:
=over
=item providing data vectors
If you provide a list of vectors, these will be used in turn to seed the neurons. If the
list is shorter than the number of neurons, the list will be started over. That way it is
trivial to zero everything:

  $nn->initialize ( [ 0, 0, 0 ] );
=item providing no data
Then all vectors will get randomized values (in the range [ -0.5 .. 0.5 ]); see also
the sketch after this list.
=item using eigenvectors (see L</HOWTOS>)
=back
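A brief sketch of these options; the seed values are made up, and the no-argument call
for random seeding follows the description above:

  # two seed vectors, applied in turn until every neuron is covered
  $nn->initialize ( [ 0, 0, 0 ], [ 1, 1, 1 ] );

  # no arguments: every neuron gets random values in [ -0.5 .. 0.5 ]
  $nn->initialize;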
=item I<train>
I<$nn>->train ( I<$epochs>, I<@vectors> )
I<@mes> = I<$nn>->train ( I<$epochs>, I<@vectors> )
The training uses the list of sample vectors to make the network learn. Each vector is
simply a reference to an array of values.

The C<$epochs> parameter controls how many times the sample list is processed. The
vectors are B<NOT> used in sequence, but picked randomly from the list; for this reason
it is wise to run several epochs, not just one. But within one epoch B<all> vectors are
visited exactly once.
Example:
  $nn->train (30,
              [  3,  2,  4 ],
              [ -1, -1, -1 ],
              [  0,  4, -3 ]);
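In list context the method returns, for every learning step, the distance of the winning
neuron to the sample, so the returned list can be used to watch the error shrink over the
epochs (C<@samples> is a placeholder for your own data):

  my @mes = $nn->train (30, @samples);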
=cut
sub train {
    my $self   = shift;
    my $epochs = shift || 1;
    die "no data to learn" unless @_;

    $self->{LAMBDA} = $epochs / log ($self->{_Sigma0});                       # educated guess?
    my @mes = ();                                                             # this will contain the errors during the epochs
    for my $epoch (1 .. $epochs) {
        $self->{T} = $epoch;
        my $sigma = $self->{_Sigma0} * exp ( - $self->{T} / $self->{LAMBDA} ); # compute current radius
        my $l     = $self->{_L0}     * exp ( - $self->{T} / $epochs );         # current learning rate

        my @veggies = @_;                                                     # make a local copy, that will be destroyed in the loop
        while (@veggies) {
            my $sample = splice @veggies, int (rand (scalar @veggies) ), 1;   # find (and take out) a random sample
            my @bmu    = $self->bmu ($sample);                                # find the best matching unit
            push @mes, $bmu[2] if wantarray;
            my $neighbors = $self->neighbors ($sigma, @bmu);                  # find its neighbors
            map { _adjust ($self, $l, $sigma, $_, $sample) } @$neighbors;     # bend them like Beckham
        }
    }
    return @mes;
}
sub _adjust {                                                   # http://www.ai-junkie.com/ann/som/som4.html
    my $self  = shift;
    my $l     = shift;                                          # the learning rate
    my $sigma = shift;                                          # the current radius
    my $unit  = shift;                                          # which unit to change
    my ($x, $y, $d) = @$unit;                                   # it contains the distance
    my $v     = shift;                                          # the vector which makes the impact

    my $w     = $self->{map}->[$x]->[$y];                       # find the data behind the unit
    my $theta = exp ( - ($d ** 2) / (2 * $sigma ** 2));         # gaussian impact (using distance and current radius)

    foreach my $i (0 .. $#$w) {                                 # adjusting values
        $w->[$i] = $w->[$i] + $theta * $l * ( $v->[$i] - $w->[$i] );
    }
}
=pod
=item I<bmu>
(I<$x>, I<$y>, I<$distance>) = I<$nn>->bmu (I<$vector>)
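For example, to find the neuron closest to a given vector (the sample vector here is
illustrative):

  my ($x, $y, $d) = $nn->bmu ([ 3, 2, 4 ]);   # grid coordinates of the winner, plus its distance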