AI-NeuralNet-Simple
output += input.neuron.output * input.neuron.synapse.weight
output = activation_function(output)
The "activation function" is a special function that is applied to the inputs
to generate the actual output. There are a variety of activation functions
available, with three of the most common being the linear, sigmoid, and tanh
activation functions. For technical reasons, the linear activation function
cannot be used with the type of network that C<AI::NeuralNet::Simple> employs:
a multi-layer network of purely linear neurons collapses into a single linear
transformation, so it cannot learn problems that are not linearly separable.
This module uses the sigmoid activation function. (More information about
these can be found in the L<SEE ALSO> section or by searching with Google.)
Once the activation function is applied, the output is then sent through the
next synapse, where it will be multiplied by w4 and the process will continue.
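To make the arithmetic above concrete, here is a minimal Perl sketch of a
single neuron's forward computation; the C<sigmoid> and C<neuron_output>
helpers are illustrative only, not part of the module's API (the module
performs this work internally):

  # Sigmoid activation: squashes any real number into the range (0, 1).
  sub sigmoid {
      my $x = shift;
      return 1 / (1 + exp(-$x));
  }

  # One neuron's output: the weighted sum of its inputs, passed
  # through the activation function.
  sub neuron_output {
      my ($inputs, $weights) = @_;    # parallel array refs
      my $sum = 0;
      $sum += $inputs->[$_] * $weights->[$_] for 0 .. $#$inputs;
      return sigmoid($sum);
  }

  # Example: a neuron with two inputs and two synapse weights.
  print neuron_output([1, 0], [0.5, -0.3]);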
=head2 C<AI::NeuralNet::Simple> architecture
The architecture used by this module has (at present) 3 fixed layers of
neurons: an input, hidden, and output layer. In practice, a 3 layer network is
applicable to many problems for which a neural network is appropriate, but this
is not always the case.
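For example, you create such a 3 layer network by passing the number of
neurons in each layer to the constructor; the 2-10-2 sizes below are
illustrative values, not a recommendation:

  use AI::NeuralNet::Simple;

  # 2 input neurons, 10 hidden neurons, 2 output neurons
  my $net = AI::NeuralNet::Simple->new(2, 10, 2);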
When training on a full data set, C<train_set()> (described next) loops
over the training set in a single call and is more efficient because there are
fewer memory copies back and forth.
=head2 C<train_set(\@dataset, [$iterations, $mse])>
Similar to C<train()>, this method allows us to train an entire data set at
once. It is typically faster than calling C<train()> individually for each
pair. The first argument is expected to be an array ref of pairs of input and
output array refs.
The second argument is the number of iterations to train the set. If
this argument is not provided here, you may use the C<iterations()> method to
set it (prior to calling C<train_set()>, of course). A default of 10,000 will
be provided if not set.
The third argument is the targeted Mean Square Error (MSE). When provided,
the training sequence will compute the maximum MSE seen during an iteration
over the training set, and if it is less than the supplied target, the
training stops. Computing the MSE at each iteration has a cost, but it
guarantees that you will not over-train your network.
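As a sketch, here is what training the "logical or" data set with explicit
iteration and MSE arguments might look like (the layer sizes and the data set
follow the "logical or" example used elsewhere in this documentation):

  my $net = AI::NeuralNet::Simple->new(2, 10, 2);

  # Each input array ref is paired with its expected output array ref.
  $net->train_set([
      [1, 1] => [0, 1],
      [1, 0] => [0, 1],
      [0, 1] => [0, 1],
      [0, 0] => [1, 0],
  ], 10_000, 0.01);    # up to 10,000 iterations; stop early if MSE < 0.01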
"logical or" program, you might expect results similar to:
use Data::Dumper;
print Dumper $net->infer([1,1]);
$VAR1 = [
'0.00993729281477686',
'0.990100297418451'
];
The second output item is clearly close to 1, so as a helper method for use
with a winner-take-all strategy, we have ...
=head2 C<winner(\@input)>
This method returns the index of the highest value from inferred results:
print $net->winner([1,1]); # will likely print "1"
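If you want the same selection by hand, a minimal sketch of a winner-take-all
pass over the inferred outputs (assuming, as shown above, that C<infer()>
returns an array ref of numbers) is:

  my $outputs = $net->infer([1, 1]);
  my $winner  = 0;
  for my $i (1 .. $#$outputs) {
      # Keep the index of the largest output seen so far.
      $winner = $i if $outputs->[$i] > $outputs->[$winner];
  }
  print $winner;    # likely prints "1" for the trained "logical or" network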
For a more comprehensive example of how this is used, see the
"examples/game_ai.pl" program.