Changes
examples/game_ai.pl
examples/logical_or.pl
Makefile.PL
MANIFEST
META.yml Module meta-data (added by MakeMaker)
README
Simple.xs
lib/AI/NeuralNet/Simple.pm
t/10nn_simple.t
t/20nn_multi.t
t/30nn_storable.t
t/pod-coverage.t
}
return build_rv(av);
}
#define EXPORT_VERSION 1
#define EXPORTED_ITEMS 9
/*
* Exports the C data structures to the Perl world for serialization
* by Storable. We don't want to duplicate the logic of Storable here
* even though we have to do some low-level Perl object construction.
*
* The structure we return is an array reference, which contains the
* following items:
*
* 0 the export version number, in case format changes later
* 1 the amount of neurons in the input layer
* 2 the amount of neurons in the hidden layer
* 3 the amount of neurons in the output layer
* 4 the learning rate
lib/AI/NeuralNet/Simple.pm
else {
require DynaLoader;
push @ISA, 'DynaLoader';
AI::NeuralNet::Simple->bootstrap($VERSION);
}
sub handle { $_[0]->{handle} }
sub new {
my ( $class, @args ) = @_;
logdie "you must supply three positive integers to new()"
unless 3 == @args;
foreach (@args) {
logdie "arguments to new() must be positive integers"
unless defined $_ && /^\d+$/;
}
my $seed = rand(1); # Perl invokes srand() on first call to rand()
my $handle = c_new_network(@args);
logdie "could not create new network" unless $handle >= 0;
my $self = bless {
input => $args[0],
hidden => $args[1],
output => $args[2],
handle => $handle,
}, $class;
$self->iterations(10000); # set a reasonable default
}
sub train {
lib/AI/NeuralNet/Simple.pm
sub train_set {
my ( $self, $set, $iterations, $mse ) = @_;
$iterations ||= $self->iterations;
$mse = -1.0 unless defined $mse;
return c_train_set( $self->handle, $set, $iterations, $mse );
}
sub iterations {
my ( $self, $iterations ) = @_;
if ( defined $iterations ) {
logdie "iterations() value must be a positive integer."
unless $iterations
and $iterations =~ /^\d+$/;
$self->{iterations} = $iterations;
return $self;
}
$self->{iterations};
}
sub delta {
my ( $self, $delta ) = @_;
return c_get_delta( $self->handle ) unless defined $delta;
logdie "delta() value must be a positive number" unless $delta > 0.0;
c_set_delta( $self->handle, $delta );
return $self;
}
sub use_bipolar {
my ( $self, $bipolar ) = @_;
return c_get_use_bipolar( $self->handle ) unless defined $bipolar;
c_set_use_bipolar( $self->handle, $bipolar );
return $self;
}
lib/AI/NeuralNet/Simple.pm
my $largest = 0;
for ( 0 .. $#$arrayref ) {
$largest = $_ if $arrayref->[$_] > $arrayref->[$largest];
}
return $largest;
}
sub learn_rate {
my ( $self, $rate ) = @_;
return c_get_learn_rate( $self->handle ) unless defined $rate;
logdie "learn rate must be between 0 and 1, exclusive"
unless $rate > 0 && $rate < 1;
c_set_learn_rate( $self->handle, $rate );
return $self;
}
sub DESTROY {
my $self = shift;
c_destroy_network( $self->handle );
}
lib/AI/NeuralNet/Simple.pm
__END__
=head1 NAME
AI::NeuralNet::Simple - An easy-to-use backprop neural net.
=head1 SYNOPSIS
use AI::NeuralNet::Simple;
my $net = AI::NeuralNet::Simple->new(2,1,2);
# teach it logical 'or'
for (1 .. 10000) {
$net->train([1,1],[0,1]);
$net->train([1,0],[0,1]);
$net->train([0,1],[0,1]);
$net->train([0,0],[1,0]);
}
printf "Answer: %d\n", $net->winner([1,1]);
printf "Answer: %d\n", $net->winner([1,0]);
printf "Answer: %d\n", $net->winner([0,1]);
printf "Answer: %d\n\n", $net->winner([0,0]);
lib/AI/NeuralNet/Simple.pm
Please note that the following information is terribly incomplete. That's
deliberate. Anyone familiar with neural networks is going to laugh themselves
silly at how simplistic the following information is and the astute reader will
notice that I've raised far more questions than I've answered.
So why am I doing this? Because I'm giving I<just enough> information for
someone new to neural networks to have enough of an idea of what's going on so
they can actually use this module and then move on to something more powerful,
if interested.
=head2 The Biology
A neural network, at its simplest, is merely an attempt to mimic nature's
"design" of a brain. Like many successful ventures in the field of artificial
intelligence, we find that blatantly ripping off natural designs has allowed us
to solve many problems that otherwise might prove intractable. Fortunately,
Mother Nature has not chosen to apply for patents.
Our brains are composed of neurons connected to one another by axons. The
axon makes the actual connection to a neuron via a synapse. When neurons
receive information, they process it and feed this information to other neurons
who in turn process the information and send it further until eventually
commands are sent to various parts of the body and muscles twitch, emotions are
felt and we start eyeing our neighbor's popcorn in the movie theater, wondering
if they'll notice if we snatch some while they're watching the movie.
=head2 A simple example of a neuron
Now that you have a solid biology background (uh, no), how does this work when
we're trying to simulate a neural network? The simplest part of the network is
the neuron (also known as a node or, sometimes, a neurode). We might think
of a neuron as follows (OK, so I won't make a living as an ASCII artist):
Input neurons Synapses Neuron Output
----
n1 ---w1----> / \
n2 ---w2---->| n4 |---w4---->
n3 ---w3----> \ /
lib/AI/NeuralNet/Simple.pm
next synapse, where it will be multiplied by w4 and the process will continue.
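To make that arithmetic concrete, here is a small, purely illustrative Perl
sketch of a single neuron. The sub names are invented for the example; this
is not the module's internal (C) implementation:

  # Squash a weighted sum into the range (0, 1) with a sigmoid.
  sub sigmoid { my $x = shift; return 1 / ( 1 + exp(-$x) ) }

  # One neuron: multiply each input by its synapse weight, sum, then squash.
  sub neuron_output {
      my ( $inputs, $weights ) = @_;    # array refs of equal length
      my $sum = 0;
      $sum += $inputs->[$_] * $weights->[$_] for 0 .. $#$inputs;
      return sigmoid($sum);
  }

  # n1, n2 and n3 feed n4 through synapses w1, w2 and w3:
  my $n4_output = neuron_output( [ 1, 0, 1 ], [ 0.5, -0.3, 0.8 ] );
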
=head2 C<AI::NeuralNet::Simple> architecture
The architecture used by this module has (at present) three fixed layers of
neurons: an input, a hidden, and an output layer. In practice, a three-layer
network is applicable to many problems for which a neural network is
appropriate, but this is not always the case. In this module, we've settled on
a fixed three-layer network for simplicity.
Here's how a three-layer network might learn "logical or". First, we need to
determine how many inputs and outputs we'll have. The inputs are simple: we'll
choose two, as that is the minimum necessary to teach a network this concept.
For the outputs, we'll also choose two neurons; whichever neuron has the
higher output value represents the "true" or "false" response we are looking
for. We'll use only one neuron for the hidden layer. Thus, we get a network
that resembles the following:
Input Hidden Output
input1 ----> n1 -+ +----> n4 ---> output1
lib/AI/NeuralNet/Simple.pm
then adjust the weights on the "output to hidden" synapses, measure the error
on the hidden nodes and then adjust the weights on the "hidden to input"
synapses. This is what is referred to as "back error propagation".
We continue this process until the amount of error is small enough that we are
satisfied. In reality, we will rarely if ever get precise results from the
network, but we learn various strategies to interpret the results. In the
example above, we use a "winner takes all" strategy. Whichever of the output
nodes has the greatest value will be the "winner", and thus the answer.
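The module's C<winner()> method implements this strategy for you;
conceptually, it behaves like the following sketch (illustrative Perl, not the
actual implementation):

  my $outputs = $net->infer([1,1]);     # e.g. [ 0.01, 0.99 ]
  my $winner  = 0;
  for my $i ( 0 .. $#$outputs ) {
      $winner = $i if $outputs->[$i] > $outputs->[$winner];
  }
  print "Answer: $winner\n";            # index 1 means "true" in our encoding
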
In the examples directory, you will find a program named "logical_or.pl" which
demonstrates the above process.
=head2 Building a network
In creating a new neural network, there are three basic steps:
=over 4
=item 1 Designing
lib/AI/NeuralNet/Simple.pm
results are not satisfactory, perhaps a different number of neurons per layer
should be tried or a different set of training data should be supplied.
=back
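Roughly, those steps map onto this module's API as follows (an illustrative
sketch using the "logical or" data from earlier; the iteration count is only
an example):

  use AI::NeuralNet::Simple;

  # 1. Design: choose the number of input, hidden and output neurons.
  my $net = AI::NeuralNet::Simple->new( 2, 1, 2 );

  # 2. Train: present the known input/output pairs many times.
  $net->train_set( [
      [1,1] => [0,1],
      [1,0] => [0,1],
      [0,1] => [0,1],
      [0,0] => [1,0],
  ], 10_000 );

  # 3. Measure: check the network's answers against what you expect.
  print $net->winner([1,0]), "\n";    # expect 1 ("true")
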
=head1 Programming C<AI::NeuralNet::Simple>
=head2 C<new($input, $hidden, $output)>
C<new()> accepts three integers. These numbers represent the number of nodes in
the input, hidden, and output layers, respectively. To create the "logical or"
network described earlier:
my $net = AI::NeuralNet::Simple->new(2,1,2);
By default, the activation function for the neurons is the sigmoid function
S() with delta = 1:
S(x) = 1 / (1 + exp(-delta * x))
but you can change the delta after creation. You can also use a bipolar
lib/AI/NeuralNet/Simple.pm
Returns whether the network currently uses a bipolar activation function.
If an argument is supplied, it instructs the network whether or not to use a
bipolar activation function.
You should not change the activation function during training.
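For example, to switch to the bipolar activation with a steeper curve before
any training begins (the values shown are only illustrative):

  my $net = AI::NeuralNet::Simple->new( 2, 1, 2 );
  $net->delta(2);          # steeper activation curve
  $net->use_bipolar(1);    # bipolar (tanh-like) activation instead of sigmoid
  # ... now train as usual; do not switch activation functions mid-training
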
=head2 C<train(\@input, \@output)>
This method trains the network to associate the input data set with the output
data set. The "logical or" training data is represented as follows:
$net->train([1,1] => [0,1]);
$net->train([1,0] => [0,1]);
$net->train([0,1] => [0,1]);
$net->train([0,0] => [1,0]);
Note that one pass through the data is seldom sufficient to train a network.
In the example "logical or" program, we actually run this data through the
network ten thousand times.
for (1 .. 10000) {
$net->train([1,1] => [0,1]);
$net->train([1,0] => [0,1]);
$net->train([0,1] => [0,1]);
$net->train([0,0] => [1,0]);
}
The routine returns the Mean Squared Error (MSE) representing how far the
lib/AI/NeuralNet/Simple.pm
If you choose a lower learning rate, you will train the network more slowly,
but you may get better accuracy. A higher learning rate will train the network
faster, but it can have a tendency to "overshoot" the answer when learning and
not learn as accurately.
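For instance, you might lower the learning rate and watch the error returned
by C<train()> shrink as the network converges (a sketch; the rate and
iteration count are only examples):

  $net->learn_rate(0.1);    # must be strictly between 0 and 1
  for ( 1 .. 10_000 ) {
      my $mse = $net->train( [1,0], [0,1] );
      # $mse is the Mean Squared Error for this training pass
  }
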
=head2 C<infer(\@input)>
This method, if provided with an input array reference, will return an array
reference corresponding to the output values that it is guessing. Note that
these values will generally be close, but not exact. For example, with the
"logical or" program, you might expect results similar to:
use Data::Dumper;
print Dumper $net->infer([1,1]);
$VAR1 = [
'0.00993729281477686',
'0.990100297418451'
];
That clearly has the second output item being close to 1, so as a helper method
lib/AI/NeuralNet/Simple.pm
"AI Application Programming by M. Tim Jones, copyright (c) by Charles River
Media, Inc.
The C code in this module is based heavily upon Mr. Jones' backpropagation
network in the book. The "game ai" example in the examples directory is based
upon an example he has graciously allowed me to use. I I<had> to use it
because it's more fun than many of the dry examples out there :)
"Naturally Intelligent Systems", by Maureen Caudill and Charles Butler,
copyright (c) 1990 by the Massachusetts Institute of Technology.
This book is a decent introduction to neural networks in general. The
feed-forward back error propagation network is but one of many types.
=head1 AUTHORS
Curtis "Ovid" Poe, C<ovid [at] cpan [dot] org>
Multiple network support, persistence, export of MSE (mean squared error),
training until MSE below a given threshold and customization of the
t/10nn_simple.t
throws_ok {$net->learn_rate(2)}
qr/^\QLearn rate must be between 0 and 1, exclusive\E/,
'... and setting it outside of legal boundaries should die';
is(sprintf("%.1f", $net->learn_rate), "0.2", '... and it should have the correct learn rate');
isa_ok($net->learn_rate(.3), $CLASS => '... and setting it should return the object');
is(sprintf("%.1f", $net->learn_rate), "0.3", '... and should set it correctly');
$net->learn_rate(.2);
can_ok($net, 'train');
# teach the network logical 'or'
ok($net->train([1,1], [0,1]), 'Calling train() with valid data should succeed');
for (1 .. 10000) {
$net->train([1,1],[0,1]);
$net->train([1,0],[0,1]);
$net->train([0,1],[0,1]);
$net->train([0,0],[1,0]);
}
can_ok($net, 'winner');
is($net->winner([1,1]), 1, '... and it should return the index of the highest valued result');
is($net->winner([1,0]), 1, '... and it should return the index of the highest valued result');
is($net->winner([0,1]), 1, '... and it should return the index of the highest valued result');
is($net->winner([0,0]), 0, '... and it should return the index of the highest valued result');
# teach the network logical 'and' using the tanh() activation with delta=2
$net = $CLASS->new(2,1,2);
$net->delta(2);
$net->use_bipolar(1);
my $mse = $net->train_set([
[1,1] => [0,1],
[1,0] => [1,0],
[0,1] => [1,0],
[0,0] => [1,0],
], 10000, 0.2);