AI-NeuralNet-BackProp
=head1 UPDATES

This is version 0.89. This version adds a new feature, output range limits, as well as
automatic crunching of run() and learn*() inputs. Included in the examples directory
are seven new practical-use example scripts. Also implemented in this version is a much
cleaner learning function for individual neurons, which is more accurate than in previous
versions and is based on the LMS rule. See range() for information on output range limits.
The load() and save() methods have also been updated so that they no longer depend on
Storable. This version also gives you the choice between three network topologies: two are
less stable, and the third, the default, has been in use for the previous four versions.
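As a quick illustration of the two headline features, here is a minimal sketch. The exact
argument forms that range() and learn() accept are documented in their own sections (not
all reproduced in this excerpt), so treat the specific values below as placeholders:

    use AI::NeuralNet::BackProp;

    # Two layers, three neurons per layer (see new() below).
    my $net = new AI::NeuralNet::BackProp(2, 3);

    # Clamp network outputs with the new range limits; see range()
    # for the argument forms it actually accepts.
    $net->range(0, 1);

    # run() and learn*() inputs are now crunched automatically, so
    # plain strings can be passed directly.
    $net->learn('hello', 'hi there');
    my $out = $net->run('hello');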
=head1 DESCRIPTION

AI::NeuralNet::BackProp implements a neural network similar to a feed-forward,
back-propagation network, learning via a mix of a generalization
of the Delta rule and a dissection of Hebb's rule. The actual
neurons of the network are implemented via the AI::NeuralNet::BackProp::neuron package.
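To make the basic flow concrete, a minimal train-and-run session might look like the
sketch below. The map contents are made up for illustration, and run() is assumed to
return the output map as an array reference; see learn() and run() for the
authoritative argument and return forms:

    use AI::NeuralNet::BackProp;

    # One layer of five neurons.
    my $net = new AI::NeuralNet::BackProp(1, 5);

    # Teach the network to associate an input map with a result map.
    my @map    = (0, 1, 1, 0, 1);
    my @result = (1, 0, 0, 1, 0);
    $net->learn(\@map, \@result);

    # Run a map through the trained network and print the output.
    my $out = $net->run(\@map);
    print join(',', @{$out}), "\n";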
If you do call $net->debug(1), it is a good idea to redirect the STDOUT of your script
to a file, as a lot of information is output. I often use this command line:

    $ perl some_script.pl > .out

Then I can simply use emacs or any other text editor and read the output at my leisure,
rather than have to wait or use 'more' as it scrolls by on the screen.
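In script form, that looks like the following sketch (it assumes debug() takes a
numeric verbosity level and that passing 0 turns tracing back off, which this excerpt
does not confirm):

    my $net = new AI::NeuralNet::BackProp(2, 3);

    $net->debug(1);           # verbose internal tracing to STDOUT
    $net->run([1, 0, 1]);     # trace output is printed as it runs
    $net->debug(0);           # assumed to disable tracing again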
=head2 METHODS

=over 4

=item new AI::NeuralNet::BackProp($layers, $size [, $outputs, $topology_flag])
Returns a newly created neural network from an C<AI::NeuralNet::BackProp>
object. The network will have C<$layers> layers in it,
and each layer will have C<$size> neurons.
There is an optional parameter, $outputs, which specifies the number
of output neurons to provide. If $outputs is not specified, it
defaults to $size. $outputs may not exceed $size; if it does,
the new() constructor will return undef. A short construction
sketch follows the topology diagram below.
The optional parameter, $topology_flag, defaults to 0 when not used. There are
three valid topology flag values:

B<0> I<default>

My feed-forward style: Each neuron in layer X is connected to one input of every
neuron in layer Y. The best and most proven flag style.
    ^  ^  ^
    O\ O\ /O     Layer Y
    ^\\/^/\/^
    | //|\/\|
    |/ \|/ \|
    O  O  O      Layer X
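As promised above, a short construction sketch (the argument values are arbitrary, and
only flag 0 of the three topologies appears in this excerpt):

    use AI::NeuralNet::BackProp;

    # Two layers, four neurons per layer, two output neurons,
    # topology flag 0 (the default feed-forward style diagrammed above).
    my $net = new AI::NeuralNet::BackProp(2, 4, 2, 0);

    # new() returns undef when $outputs exceeds $size, so check it.
    defined $net or die "bad constructor arguments";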
=back

...designers are working on weather forecasts by neural networks (myself
included). Currently, doctors are developing medical neural networks as an
aid in diagnosis. Attorneys and insurance companies are also working on
neural networks to help estimate the value of claims.

Neural networks are poor at precise calculations and serial processing. They
are also unable to predict or recognize anything that does not inherently
contain some sort of pattern. For example, they cannot predict the lottery,
since this is a random process. It is unlikely that a neural network could
be built which has the capacity to think as well as a person does, for two
reasons: neural networks are terrible at deduction, or logical thinking, and
the human brain is just too complex to completely simulate. Also, some
problems are too difficult for present technology. Real vision, for
example, is a long way off.

In short, neural networks are poor at precise calculations, but good at
association, evaluation, and pattern recognition.
=head1 AUTHOR

AI::NeuralNet::BackProp is written by Josiah Bryan, <jdb@wcoil.com>.

HTML-format documentation is also available at
L<http://www.josiah.countystart.com/modules/AI/rec.pl?docs.htm>.