AI-NeuralNet-Simple
{
    NEURAL_NETWORK *n;
    int handle;
    SV **sav;
    AV *av;
    int i = 0;

    /*
     * Unfortunately, since those data come from the outside, we need
     * to validate most of the structural information to make sure
     * we're not fed garbage or something we cannot process, like a
     * newer version of the serialized data.  This makes the code heavy.
     *      --RAM
     */
    if (!is_array_ref(rv))
        croak("c_import_network() not given an array reference");

    av = get_array(rv);

    /* Check version number */
lib/AI/NeuralNet/Simple.pm view on Meta::CPAN
=head2 The Biology
A neural network, at its simplest, is merely an attempt to mimic nature's
"design" of a brain. Like many successful ventures in the field of artificial
intelligence, we find that blatantly ripping off natural designs has allowed us
to solve many problems that otherwise might prove intractable. Fortunately,
Mother Nature has not chosen to apply for patents.
Our brains are composed of neurons connected to one another by axons. The
axon makes the actual connection to a neuron via a synapse. When neurons
receive information, they process it and feed this information to other neurons
who in turn process the information and send it further until eventually
commands are sent to various parts of the body and muscles twitch, emotions are
felt and we start eyeing our neighbor's popcorn in the movie theater, wondering
if they'll notice if we snatch some while they're watching the movie.
=head2 A simple example of a neuron
Now that you have a solid biology background (uh, no), how does this work when
we're trying to simulate a neural network? The simplest part of the network is
the neuron (also known as a node or, sometimes, a neurode). We might think
of a neuron as follows (OK, so I won't make a living as an ASCII artist):
The "activation function" is a special function that is applied to the inputs
to generate the actual output. There are a variety of activation functions
available, with three of the most common being the linear, sigmoid, and tanh
activation functions. For technical reasons, the linear activation function
cannot be used with the type of network that C<AI::NeuralNet::Simple> employs.
This module uses the sigmoid activation function. (More information about
these can be found by reading the information in the L<SEE ALSO> section or by
just searching with Google.)
Once the activation function is applied, the output is then sent through the
next synapse, where it will be multiplied by w4 and the process will continue.
=head2 C<AI::NeuralNet::Simple> architecture
The architecture used by this module has (at present) 3 fixed layers of
neurons: an input, hidden, and output layer. In practice, a 3 layer network is
applicable to many problems for which a neural network is appropriate, but this
is not always the case. In this module, we've settled on a fixed 3 layer
network for simplicity.
Here's how a three layer network might learn "logical or". First, we need to
The type of network we use is a feed-forward back error propagation network,
referred to as a back-propagation network for short. The way it works is
simple. When we feed in our input, it travels from the input to hidden layers
and then to the output layers. This is the "feed forward" part. We then
compare the output to the expected results and measure how far off we are. We
then adjust the weights on the "output to hidden" synapses, measure the error
on the hidden nodes and then adjust the weights on the "hidden to input"
synapses. This is what is referred to as "back error propagation".
We continue this process until the amount of error is small enough that we are
satisfied. In reality, we will rarely if ever get precise results from the
network, but we learn various strategies to interpret the results. In the
example above, we use a "winner takes all" strategy. Whichever of the output
nodes has the greatest value will be the "winner", and thus the answer.
In the examples directory, you will find a program named "logical_or.pl" which
demonstrates the above process.
=head2 Building a network
In creating a new neural network, there are three basic steps:
=over 4
=item 1 Designing
This is choosing the number of layers and the number of neurons per layer. In