AI-NeuralNet-SOM


lib/AI/NeuralNet/SOM.pm

package AI::NeuralNet::SOM;

use strict;
use warnings;

require Exporter;
use base qw(Exporter);

use Data::Dumper;

=pod

=head1 NAME

AI::NeuralNet::SOM - Perl extension for Kohonen Maps

=head1 SYNOPSIS

  use AI::NeuralNet::SOM::Rect;
  my $nn = AI::NeuralNet::SOM::Rect->new (output_dim => "5x6",
                                          input_dim  => 3);
  $nn->initialize;
  $nn->train (30, 
    [ 3, 2, 4 ], 
    [ -1, -1, -1 ],
    [ 0, 4, -3]);

  my @mes = $nn->train (30, ...);      # in list context, also collect
                                       # the errors observed during training

  print $nn->as_data;                  # dump the raw data
  print $nn->as_string;                # dump a formatted, human-readable string

  use AI::NeuralNet::SOM::Torus;
  # similar to above

  use AI::NeuralNet::SOM::Hexa;
  my $nn = AI::NeuralNet::SOM::Hexa->new (output_dim => 6,
                                          input_dim  => 4);
  $nn->initialize ( [ 0, 0, 0, 0 ] );  # all get this value

  $nn->value (3, 2, [ 1, 1, 1, 1 ]);   # change value for a neuron
  print $nn->value (3, 2);

  $nn->label (3, 2, 'Danger');         # add a label to the neuron
  print $nn->label (3, 2);


=head1 DESCRIPTION

This package is a stripped-down implementation of Kohonen Maps
(self-organizing maps). It is B<NOT> meant as a demonstration or for
use together with visualisation software. And while it is not (yet)
optimized for speed, some care has been taken that it is not overly
slow.

Particular emphasis has been given to making the package play nicely
with others: no use of files, no arcane dependencies, etc.

=head2 Scenario

The basic idea is that the neural network consists of a 2-dimensional
array of N-dimensional vectors. When training starts, these vectors
may be completely random, but over time the network learns from the
sample data, which is a set of N-dimensional vectors.

Slowly, the vectors in the network will approximate the sample
vectors fed in. If there were clusters in the sample vectors, then
these clusters will become neighbourhoods within the rectangle (or
whatever topology you are using).