AI-NeuralNet-BackProp
<LI><A HREF="#notes">NOTES</A></LI>
<LI><A HREF="#other included packages">OTHER INCLUDED PACKAGES</A></LI>
<LI><A HREF="#bugs">BUGS</A></LI>
<LI><A HREF="#author">AUTHOR</A></LI>
<LI><A HREF="#thanks">THANKS</A></LI>
<LI><A HREF="#download">DOWNLOAD</A></LI>
<LI><A HREF="#mailing list">MAILING LIST</A></LI>
<LI><A HREF="#what can it do">WHAT CAN IT DO?</A></LI>
</UL>
<!-- INDEX END -->
<HR SIZE=1 COLOR=BLACK>
<P>
<H1><A NAME="name">NAME</A></H1>
<P>AI::NeuralNet::BackProp - A simple back-prop neural net that uses the Delta rule and Hebb's rule.</P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="synopsis">SYNOPSIS</A></H1>
<PRE>
use AI::NeuralNet::BackProp;
# Create a new network with 1 layer, 5 inputs, and 5 outputs.
my $net = new AI::NeuralNet::BackProp(1,5,5);
# Add a small amount of randomness to the network
$net->random(0.001);
# Demonstrate a simple learn() call
my @inputs = ( 0,0,1,1,1 );
my @outputs = ( 1,0,1,0,1 );
print $net->learn(\@inputs, \@outputs),"\n";
# Create a data set to learn
my @set = (
[ 2,2,3,4,1 ], [ 1,1,1,1,1 ],
[ 1,1,1,1,1 ], [ 0,0,0,0,0 ],
[ 1,1,1,0,0 ], [ 0,0,0,1,1 ]
);
# Demo learn_set()
my $f = $net->learn_set(\@set);
print "Forgetfulness: $f unit\n";
# Crunch a bunch of strings and return array refs
my $phrase1 = $net->crunch("I love neural networks!");
my $phrase2 = $net->crunch("Jay Leno is weird.");
my $phrase3 = $net->crunch("The rain in spain...");
my $phrase4 = $net->crunch("Tired of word crunching yet?");
# Make a data set from the array refs
my @phrases = (
$phrase1, $phrase2,
$phrase3, $phrase4
);
# Learn the data set
$net->learn_set(\@phrases);
# Run a test phrase through the network
my $test_phrase = $net->crunch("I love neural networking!");
my $result = $net->run($test_phrase);
# Get this, it prints "Jay Leno is networking!" ... LOL!
print $net->uncrunch($result),"\n";
</PRE>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="updates">UPDATES</A></H1>
<P>This is version 0.89. In this version I have included a new feature, output range limits, as
well as automatic crunching of <A HREF="#item_run"><CODE>run()</CODE></A> and learn*() inputs. Included in the examples directory
are seven new practical-use example scripts. Also implemented in this version is a much cleaner
learning function for individual neurons, which is more accurate than in previous versions and is
based on the LMS rule. See <A HREF="#item_range"><CODE>range()</CODE></A> for information on output range limits. I have also updated
the <A HREF="#item_load"><CODE>load()</CODE></A> and <A HREF="#item_save"><CODE>save()</CODE></A> methods so that they no longer depend on Storable. In this version
you also have the choice of three network topologies: two that are not as stable, and the
default, which has been in use for the previous four versions.</P>
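<P>For example, here is a minimal sketch of the new features in use. The two-argument form of
<CODE>range()</CODE> shown here is an assumption for illustration; see <A HREF="#item_range"><CODE>range()</CODE></A> for the exact calling convention.</P>
<PRE>
# Save the trained network to disk and restore it later
# (no longer depends on Storable as of this version)
$net->save("my_net.dat");
$net->load("my_net.dat");

# Enable output range limits
# (argument form assumed for illustration; see range())
$net->range(0,1);
</PRE>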
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="description">DESCRIPTION</A></H1>
<P>AI::NeuralNet::BackProp implements a neural network similar to a feed-forward,
back-propagation network, learning via a mix of a generalization
of the Delta rule and a dissection of Hebb's rule. The actual
neurons of the network are implemented via the AI::NeuralNet::BackProp::neuron package.
</P>
You construct a new network via the new constructor:
<PRE>
my $net = new AI::NeuralNet::BackProp(2,3,1);</PRE>
<P>The <CODE>new()</CODE> constructor takes two required arguments and one optional argument: $layers,
$size, and, optionally, $outputs (in this example, $layers is 2, $size is 3, and $outputs is 1).</P>
<P>$layers specifies the number of layers, including the input
and the output layer, to use in each neural grouping. A new
neural grouping is created for each pattern learned. Layers
is typically set to 2. Each layer has $size neurons in it.
Each neuron's output is connected to one input of every neuron
in the layer below it.
</P>
This diagram illustrates a simple network, created with a call
to "new AI::NeuralNet::BackProp(2,2,2)" (2 layers, 2 neurons/layer, 2 outputs).
<PRE>
input
/ \
O O
|\ /|
| \/ |
| /\ |
|/ \|
O O
\ /
mapper</PRE>
<P>In this diagram, each neuron is connected to one input of every
neuron in the layer below it, but there are no connections
between neurons in the same layer. The weight of each connection
is controlled by the receiving neuron, not the sending neuron.
(That is, the sending neuron has no idea how much weight
its output carries; it simply sends its output, and the
weighting is handled by the receiving neuron.) This is the
method used to connect cells in every network built by this package.</P>
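<P>As an illustration only (a simplified sketch in plain Perl, not the module's actual internals),
a receiving neuron could own one weight per incoming connection and apply it the moment input arrives:</P>
<PRE>
package My::ToyNeuron;  # hypothetical package, for illustration only

sub new {
    my ($class, $n_inputs) = @_;
    # The receiving neuron owns one weight per incoming connection
    my $self = { weights => [ (0) x $n_inputs ], inputs => [] };
    return bless $self, $class;
}

# Called by a connecting neuron. The sender passes only its raw
# output value; the weighting is applied here, on the receiving side.
sub input {
    my ($self, $slot, $value) = @_;
    $self->{inputs}[$slot] = $value * $self->{weights}[$slot];
}

1;
</PRE>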
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="notes">NOTES</A></H1>
<P>Inputs of 0 pose a problem for the network: since a weight multiplied by an
input of 0 is always 0, a neuron receiving a 0 can never
add enough weight to get anything other than a 0.</P>
<P>The first option to allow for 0s is to use the <A HREF="#item_random"><CODE>random()</CODE></A> method to add a small
amount of randomness to every output, so the network has something other than a 0 to work with.</P>
<P>The second option to allow for 0s is to enable a maximum error with the 'error' option in
<A HREF="#item_learn"><CODE>learn()</CODE></A> , <A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> , and <A HREF="#item_learn_set_rand"><CODE>learn_set_rand()</CODE></A> . This allows the network to not worry about
learning an output perfectly.</P>
<P>For accuracy reasons, it is recomended that you work with 0s using the <A HREF="#item_random"><CODE>random()</CODE></A> method.</P>
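<P>For example, a hedged sketch of both options together (the named 'error' option shown here
is an assumption based on the description above; see <A HREF="#item_learn"><CODE>learn()</CODE></A> for the exact calling convention):</P>
<PRE>
# Seed the weights so that 0 inputs have something to work with
$net->random(0.001);

# Allow learning to stop once the error falls below a maximum,
# rather than requiring a perfect output (option syntax assumed)
$net->learn(\@inputs, \@outputs, error => 0.5);
</PRE>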
<P>If anyone has any thoughts/arguments/suggestions for using 0s in the network, let me know
at <A HREF="mailto:jdb@wcoil.com.">jdb@wcoil.com.</A></P>
<P></P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="other included packages">OTHER INCLUDED PACKAGES</A></H1>
<DL>
<DT><STRONG><A NAME="item_AI%3A%3ANeuralNet%3A%3ABackProp%3A%3Aneuron">AI::NeuralNet::BackProp::neuron</A></STRONG><BR>
<DD>
AI::NeuralNet::BackProp::neuron is the worker package for AI::NeuralNet::BackProp.
It implements the actual neurons of the neural network.
AI::NeuralNet::BackProp::neuron is not designed to be created directly, as
it is used internally by AI::NeuralNet::BackProp.
<P></P>
<DT><STRONG><A NAME="item_AI%3A%3ANeuralNet%3A%3ABackProp%3A%3A_run">AI::NeuralNet::BackProp::_run</A></STRONG><BR>
<DD>
<DT><STRONG><A NAME="item_AI%3A%3ANeuralNet%3A%3ABackProp%3A%3A_map">AI::NeuralNet::BackProp::_map</A></STRONG><BR>
<DD>
These two packages, _run and _map, are used to insert data into
the network and to get data back out of it. They are connected to the
neurons in such a way that the neurons treat the IO packages as just
other neurons passing data along. The IO packages are special packages
that implement the same methods as neurons but are designed for specific
IO purposes. You will never need to call the IO packages directly;
instead, they are invoked whenever you use the <A HREF="#item_run"><CODE>run()</CODE></A> or <A HREF="#item_learn"><CODE>learn()</CODE></A> methods
of your network, as the short example after this list shows.
<P></P></DL>
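<P>For instance, using only calls already shown in the synopsis, the IO packages are engaged implicitly:</P>
<PRE>
# _run feeds the inputs in and _map collects the outputs,
# both behind the scenes
my $result = $net->run(\@inputs);
</PRE>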
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="bugs">BUGS</A></H1>
<P>This is an alpha release of <CODE>AI::NeuralNet::BackProp</CODE>, so there are probably bugs
in here that I just have not found yet. If you find bugs in this module, I would
appreciate it greatly if you could report them to me at <EM><<A HREF="mailto:jdb@wcoil.com">jdb@wcoil.com</A>></EM>,
or, even better, try to patch them yourself, figure out why the bug is being buggy, and
send me the patched code, again at <EM><<A HREF="mailto:jdb@wcoil.com">jdb@wcoil.com</A>></EM>.</P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="author">AUTHOR</A></H1>
<P>Josiah Bryan <EM><<A HREF="mailto:jdb@wcoil.com">jdb@wcoil.com</A>></EM></P>
<P>Copyright (c) 2000 Josiah Bryan. All rights reserved. This program is free software;
you can redistribute it and/or modify it under the same terms as Perl itself.</P>
<P>The <CODE>AI::NeuralNet::BackProp</CODE> and related modules are free software. THEY COME WITHOUT WARRANTY OF ANY KIND.</P>
<P></P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="thanks">THANKS</A></H1>
<P>Below is a list of people who have helped, made suggestions, sent patches, etc., in no particular order.</P>
<PRE>
Tobias Bronx, tobiasb@odin.funcom.com
Pat Trainor, ptrainor@title14.com
Steve Purkis, spurkis@epn.nu
Rodin Porrata, rodin@ursa.llnl.gov
Daniel Macks, dmacks@sas.upenn.edu</PRE>
<P>Tobias was a great help with the initial releases, assisting with learning options and a great
many helpful suggestions. Rodin gave me some great ideas for the new internals, as well
as the idea of dropping the Storable dependency. Steve is the author of AI::Perceptron and gave some good suggestions for
weighting the neurons. Daniel was a great help with early beta testing of the module and related
ideas. Pat has been a great help in putting the module through its paces. Pat is the author of
the new Inter game, an in-depth strategy game. He is using a group of neural networks internally,
which provides a good test bed for coming up with new ideas for the network. Thank you for all of
your help, everybody.</P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="download">DOWNLOAD</A></H1>
<P>You can always download the latest copy of AI::NeuralNet::BackProp
from <A HREF="http://www.josiah.countystart.com/modules/AI/cgi-bin/rec.pl">http://www.josiah.countystart.com/modules/AI/cgi-bin/rec.pl</A></P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="mailing list">MAILING LIST</A></H1>
<P>A mailing list has been set up for AI::NeuralNet::BackProp for discussion of AI and
neural net related topics as they pertain to AI::NeuralNet::BackProp. I will also
announce to the list each time a new release of AI::NeuralNet::BackProp is available.</P>
<P>The list address is at: <A HREF="mailto:ai-neuralnet-backprop@egroups.com">ai-neuralnet-backprop@egroups.com</A></P>
<P>To subscribe, send a blank email to: <A HREF="mailto:ai-neuralnet-backprop-subscribe@egroups.com">ai-neuralnet-backprop-subscribe@egroups.com</A></P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="what can it do">WHAT CAN IT DO?</A></H1>
<P>Rodin Porrata asked on the ai-neuralnet-backprop mailing list,
"What can they [neural networks] do?". In regard to that question,
consider the following:</P>
<P>Neural nets are formed by simulated neurons connected together much the same
way the brain's neurons are. Because of this, neural networks are able to associate and
generalize without rules. They have solved problems in pattern recognition,
robotics, speech processing, financial prediction and signal processing, to
name a few.</P>
<P>One of the first impressive neural networks was NetTalk, which read in ASCII
text and correctly pronounced the words (producing phonemes which drove a
speech chip), even words it had never seen before. Designed by Johns Hopkins
biophysicist Terry Sejnowski and Charles Rosenberg of Princeton in 1986,
this application made the backpropagation training algorithm famous. Using
the same paradigm, a neural network has been trained to classify sonar
returns from undersea mines and rocks. This classifier, designed by
Sejnowski and R. Paul Gorman, performed better than a nearest-neighbor
classifier.</P>
<P>The kinds of problems best solved by neural networks are those that people
are good at, such as association, evaluation and pattern recognition.
Problems that are difficult to compute and do not require perfect answers,
just very good answers, are also well suited to neural networks. A quick,
very good response is often more desirable than a more accurate answer which
takes longer to compute. This is especially true in robotics and industrial
controller applications. Prediction of behavior and general analysis of
data are also tasks for neural networks. In the financial arena, consumer
loan analysis and financial forecasting make good applications. Neural network
designers are also working on weather forecasting (myself
included). Currently, doctors are developing medical neural networks as an
aid in diagnosis. Attorneys and insurance companies are also working on
neural networks to help estimate the value of claims.</P>
<P>Neural networks are poor at precise calculations and serial processing. They
are also unable to predict or recognize anything that does not inherently
contain some sort of pattern. For example, they cannot predict the lottery,
since this is a random process. It is unlikely that a neural network could
be built which has the capacity to think as well as a person does, for two
reasons: neural networks are terrible at deduction and logical thinking, and
the human brain is just too complex to simulate completely. Also, some
problems are too difficult for present technology. Real vision, for
example, is a long way off.</P>
<P>In short, neural networks are poor at precise calculations, but good at
association, evaluation, and pattern recognition.</P>