AI-NeuralNet-Simple
    }
    my $seed   = rand(1);    # Perl invokes srand() on first call to rand()
    my $handle = c_new_network(@args);
    logdie "could not create new network" unless $handle >= 0;
    my $self = bless {
        input  => $args[0],
        hidden => $args[1],
        output => $args[2],
        handle => $handle,
    }, $class;
    $self->iterations(10000);    # set a reasonable default
}
sub train {
    my ( $self, $inputref, $outputref ) = @_;
    return c_train( $self->handle, $inputref, $outputref );
}

sub train_set {
    my ( $self, $set, $iterations, $mse ) = @_;
    $iterations ||= $self->iterations;
=head1 Programming C<AI::NeuralNet::Simple>
=head2 C<new($input, $hidden, $output)>
C<new()> accepts three integers. These numbers represent the number of nodes in
the input, hidden, and output layers, respectively. To create the "logical or"
network described earlier:
  my $net = AI::NeuralNet::Simple->new(2,1,2);
By default, the activation function for the neurons is the sigmoid function
S() with delta = 1:
  S(x) = 1 / (1 + exp(-delta * x))
but you can change the delta after creation. You can also use a bipolar
activation function T(), using the hyperbolic tangent:
  T(x)    = tanh(delta * x)
  tanh(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x))
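For example, assuming the module exposes C<delta()> and C<use_bipolar()>
mutators for these settings (the method names are an assumption; they are not
shown in this excerpt), switching a network to the bipolar activation might
look like this sketch:

  # sketch only: delta() and use_bipolar() are assumed mutator names
  my $net = AI::NeuralNet::Simple->new(2,1,2);
  $net->delta(2);          # steepen the activation curve
  $net->use_bipolar(1);    # use T(x) = tanh(delta * x) instead of S(x)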
=head2 C<train_set(\@dataset, [$iterations, $mse])>
Similar to C<train()>, this method allows us to train an entire data set at once.
It is typically faster than calling C<train()> for each pair individually. The first
argument is expected to be an array ref of pairs of input and output array
refs.
The second argument is the number of iterations to train the set. If
this argument is not provided here, you may use the C<iterations()> method to
set it (prior to calling C<train_set()>, of course). A default of 10,000 will
be provided if not set.
The third argument is the targeted Mean Square Error (MSE). When provided,
the training sequence will compute the maximum MSE seen during an iteration
over the training set, and if it is less than the supplied target, the
training stops. Computing the MSE at each iteration has a cost, but it
ensures that you do not over-train your network.
  $net->train_set([
    [1,1] => [0,1],
    [1,0] => [0,1],
    [0,1] => [0,1],
    [0,0] => [1,0],
  ]);
  $net->iterations(100000) # let's have lots more iterations!
      ->train_set(\@training_data);
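The optional iteration count and MSE target can also be passed directly to
C<train_set()>; a brief sketch (the particular numbers are arbitrary):

  # stop after 20,000 iterations, or earlier if the MSE falls below 0.01
  $net->train_set(\@training_data, 20_000, 0.01);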
=head2 C<learn_rate($rate)>
This method, if called without an argument, returns the current learning
rate. The default learning rate is .20.
If called with an argument, this argument must be greater than zero and less
than one. This will set the learning rate and return the object.
  $net->learn_rate; # returns the learning rate
  $net->learn_rate(.1)
      ->iterations(100000)
      ->train_set(\@training_data);
If you choose a lower learning rate, you will train the network more slowly,
but you may have a better chance of avoiding a local minimum.
=head2 C<winner(\@input)>
This method returns the index of the highest value from the inferred results:
  print $net->winner([1,1]); # will likely print "1"
For a more comprehensive example of how this is used, see the
F<examples/game_ai.pl> program.
=head1 EXPORT
None by default.
=head1 CAVEATS
This is B<alpha> code. Very alpha. Not even close to ready for production,
don't even think about it. I'm putting it on the CPAN lest it languish on my
hard-drive forever. Hopefully someone will get some use out of it and think to
send me a patch or two.
=head1 TODO