<PRE>
        my $result = $net-&gt;run(\@map);</PRE>
<P>Now, this call would probably not give what you want, because
the network hasn't ``learned'' any patterns yet. But this
illustrates the call. Run now allows strings to be used as
input. See <A HREF="#item_run"><CODE>run()</CODE></A> for more information.</P>
<P>Run returns an array reference with $size elements (remember $size? $size
is what you passed as the second argument to the network
constructor). This array contains the results of the mapping. If
you ran the example exactly as shown above, $result would probably
contain (1,1) as its elements.</P>
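<P>Since run returns an array reference, you can dereference it to get at the
individual result elements. A minimal sketch, assuming the network above:</P>
<PRE>
        my $result = $net-&gt;run(\@map);
        # Dereference the array ref and print each output element
        print join(',', @{$result}), "\n";   # e.g. prints "1,1"</PRE>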
<P>To make the network learn a new pattern, you simply call the learn
method with a sample input and the desired result, both array
references of $size length. Example:</P>
<PRE>
        use AI;
        my $net = new AI::NeuralNet::BackProp(2,2);

        my @map = (0,1);
        my @res = (1,0);

        $net-&gt;learn(\@map,\@res);

        my $result = $net-&gt;run(\@map);</PRE>
<P>Now $result will contain (1,0), effectively flipping the input pattern
around. Obviously, the larger $size is, the longer it will take
to learn a pattern. <CODE>Learn()</CODE> returns a string in the form of</P>
<PRE>
        Learning took X loops and X wallclock seconds (X.XXX usr + X.XXX sys = X.XXX CPU).</PRE>
<P>The X's are replaced by time or loop values for that learn call. So,
to view the learning stats for every learn call, you can simply do:
</P>
<PRE>
        print $net-&gt;learn(\@map,\@res);</PRE>
<P>If you call ``$net-&gt;debug(4)'' with $net being the
reference returned by the <CODE>new()</CODE> constructor, you will get benchmarking
information for the learn function, as well as plenty of other information output.
See notes on <A HREF="#item_debug"><CODE>debug()</CODE></A> in the METHODS section, below.</P>
<P>If you do call $net-&gt;debug(1), it is a good
idea to redirect STDOUT of your script to a file, as a lot of information is output. I often
use this command line:</P>
<PRE>
        $ perl some_script.pl &gt; .out</PRE>
<P>Then I can simply go and use emacs or any other text editor and read the output at my leisure,
rather than having to wait or pipe it through 'more' as it scrolls by on the screen.</P>
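<P>For example, a short sketch combining the two calls described above:</P>
<PRE>
        # Turn on debug level 4 for benchmarking output, then train
        $net-&gt;debug(4);
        print $net-&gt;learn(\@map, \@res);</PRE>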
<P>
<H2><A NAME="methods">METHODS</A></H2>
<DL>
<DT><STRONG><A NAME="item_BackProp">new AI::NeuralNet::BackProp($layers, $size [, $outputs, $topology_flag])</A></STRONG><BR>
<DD>
Returns a newly created neural network from an <CODE>AI::NeuralNet::BackProp</CODE>
object. The network will have <CODE>$layers</CODE> number of layers in it,
and each layer will have <CODE>$size</CODE> number of neurons in that layer.
<P>There is an optional parameter of $outputs, which specifies the number
of output neurons to provide. If $outputs is not specified, $outputs
defaults to equal $size. $outputs may not exceed $size. If $outputs
exceeds $size, the <CODE>new()</CODE> constructor will return undef.</P>
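<P>A minimal sketch of the constructor as described above (the layer, neuron, and
output counts are arbitrary):</P>
<PRE>
        # 2 layers, 3 neurons per layer, 1 output neuron
        my $net = new AI::NeuralNet::BackProp(2,3,1);
        die "new() returned undef!\n" if(!$net);</PRE>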
<P>The optional parameter, $topology_flag, defaults to 0 when not used. There are
three valid topology flag values:</P>
<P><STRONG>0</STRONG> <EM>default</EM>
My feed-forward style: Each neuron in layer X is connected to one input of every
neuron in layer Y. This is the best and most proven flag style.</P>
<PRE>
        ^   ^   ^               
        O\  O\ /O       Layer Y
        ^\\/^/\/^
        | //|\/\|
        |/ \|/ \|               
        O   O   O       Layer X
        ^   ^   ^</PRE>
<P>(Sorry about the bad art...I am no ASCII artist! :-)</P>
<P><STRONG>1</STRONG>
In addition to flag 0, each neuron in layer X is connected to every input of 
the neurons ahead of itself in layer X.</P>
<P><STRONG>2</STRONG> <EM>(``L-U Style'')</EM>
No, it's not ``Learning-Unit'' style. It gets its name from this: in a 2-layer,
3-neuron network, the connections form an L-U pair, or a W, however you want to look
at it.</P>
<PRE>
        ^   ^   ^
        |   |   |
        O--&gt;O--&gt;O
        ^   ^   ^
        |   |   |
        |   |   |
        O--&gt;O--&gt;O
        ^   ^   ^
        |   |   |</PRE>
<P>As you can see, each neuron is connected to the next one in its layer, as well
as the neuron directly above itself.</P>
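<P>Per the constructor signature above, the topology flag is passed as the fourth
argument. A sketch selecting the ``L-U Style'' topology:</P>
<PRE>
        # 2 layers, 3 neurons per layer, 3 outputs, topology flag 2
        my $net = new AI::NeuralNet::BackProp(2,3,3,2);</PRE>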
<P>Before you can really do anything useful with your new neural network
object, you need to teach it some patterns. See the <A HREF="#item_learn"><CODE>learn()</CODE></A> method, below.</P>
<P></P>
<DT><STRONG><A NAME="item_learn">$net-&gt;learn($input_map_ref, $desired_result_ref [, options ]);</A></STRONG><BR>
<DD>
This will 'teach' a network to associate a new input map with a desired result.
It will return a string containing benchmarking information. You can retrieve the
pattern index that the network stored the new input map in after <A HREF="#item_learn"><CODE>learn()</CODE></A> is complete
with the <CODE>pattern()</CODE> method, below.
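<P>A hypothetical sketch of retrieving that index; the no-argument call shown here is an
assumption, so check <CODE>pattern()</CODE> below for the actual interface:</P>
<PRE>
        $net-&gt;learn(\@map, \@res);
        my $index = $net-&gt;pattern();   # hypothetical call form</PRE>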
<P><B>UPDATED:</B> You can now specify strings as inputs and outputs to learn, and they will be crunched
automatically. Example:</P>
<PRE>
        $net-&gt;learn('corn', 'cob');
        # Before the update, you would have had to do this:
        # $net-&gt;learn($net-&gt;crunch('corn'), $net-&gt;crunch('cob'));</PRE>
<P>Note, the old method of calling crunch on the values still works just as well.</P>
<P><B>UPDATED:</B> You can now learn inputs with a 0 value. Beware, though: it may not <A HREF="#item_learn"><CODE>learn()</CODE></A> a 0 value
in the input map if you have randomness disabled. See NOTES on using a 0 value with randomness
disabled.</P>
<P>The first two arguments may be array refs (or now, strings), and they may be of different lengths.</P>
<P>Options should be given in hash form. There are three options:
</P>
<PRE>
         inc    =&gt;      $learning_gradient
         max    =&gt;      $maximum_iterations
         error  =&gt;      $maximum_allowable_percentage_of_error</PRE>
<P>$learning_gradient is an optional value used to adjust the weights of the internal
connections. If $learning_gradient is omitted, it defaults to 0.20.
</P>
<P>
$maximum_iterations is the maximum number of iterations the loop should do.
It defaults to 1024. Set it to 0 if you never want the loop to quit before
the pattern has been learned.
</P>

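<P>A sketch of passing these options to <CODE>learn()</CODE> (the inc and max values
shown are just the documented defaults; the error value is an arbitrary
percentage for illustration):</P>
<PRE>
        $net-&gt;learn(\@map, \@res,
                    inc   =&gt; 0.20,
                    max   =&gt; 1024,
                    error =&gt; 5);</PRE>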
<P>Josiah Bryan <EM>&lt;<A HREF="mailto:jdb@wcoil.com">jdb@wcoil.com</A>&gt;</EM></P>
<P>Copyright (c) 2000 Josiah Bryan. All rights reserved. This program is free software; 
you can redistribute it and/or modify it under the same terms as Perl itself.</P>
<P>The <CODE>AI::NeuralNet::BackProp</CODE> and related modules are free software. THEY COME WITHOUT WARRANTY OF ANY KIND.</P>
<P></P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="thanks">THANKS</A></H1>
<P>Below is a list of people who have helped, made suggestions, submitted patches, etc., in no particular order.</P>
<PRE>
                Tobias Bronx, tobiasb@odin.funcom.com
                Pat Trainor, ptrainor@title14.com
                Steve Purkis, spurkis@epn.nu
                Rodin Porrata, rodin@ursa.llnl.gov
                Daniel Macks dmacks@sas.upenn.edu</PRE>
<P>Tobias was a great help with the initial releases, and helped with learning options and a great
many helpful suggestions. Rodin has given me some great ideas for the new internals, as well
as disabling Storable. Steve is the author of AI::Perceptron, and gave some good suggestions for
weighting the neurons. Daniel was a great help with early beta testing of the module and related
ideas. Pat has been a great help for running the module through the works. Pat is the author of
the new Inter game, an in-depth strategy game. He is using a group of neural networks internally
which provides a good test bed for coming up with new ideas for the network. Thank you for all of
your help, everybody.</P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="download">DOWNLOAD</A></H1>
<P>You can always download the latest copy of AI::NeuralNet::BackProp
from <A HREF="http://www.josiah.countystart.com/modules/AI/cgi-bin/rec.pl">http://www.josiah.countystart.com/modules/AI/cgi-bin/rec.pl</A></P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="mailing list">MAILING LIST</A></H1>
<P>A mailing list has been set up for AI::NeuralNet::BackProp for discussion of AI and
neural net related topics as they pertain to AI::NeuralNet::BackProp. I will also 
announce in the group each time a new release of AI::NeuralNet::BackProp is available.</P>
<P>The list address is at: <A HREF="mailto:ai-neuralnet-backprop@egroups.com">ai-neuralnet-backprop@egroups.com</A></P>
<P>To subscribe, send a blank email to: <A HREF="mailto:ai-neuralnet-backprop-subscribe@egroups.com">ai-neuralnet-backprop-subscribe@egroups.com</A></P>

<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="what can it do">WHAT CAN IT DO?</A></H1>
<P>Rodin Porrata asked on the ai-neuralnet-backprop mailing list,
"What can they [Neural Networks] do?". In regards to that question,
consider the following:</P>

<P>Neural nets are formed by simulated neurons connected together much the same
way the brain's neurons are. Because of this, neural networks are able to
associate and generalize without rules.  They have solved problems in pattern
recognition, robotics, speech processing, financial predicting and signal
processing, to name a few.</P>

<P>One of the first impressive neural networks was NetTalk, which read in ASCII
text and correctly pronounced the words (producing phonemes which drove a
speech chip), even those it had never seen before.  Designed by Johns Hopkins
biophysicist Terry Sejnowski and Charles Rosenberg of Princeton in 1986,
this application made the Backpropagation training algorithm famous.  Using
the same paradigm, a neural network has been trained to classify sonar
returns from undersea mines and rocks.  This classifier, designed by
Sejnowski and R.  Paul Gorman, performed better than a nearest-neighbor
classifier.</P>

<P>The kinds of problems best solved by neural networks are those that people
are good at such as association, evaluation and pattern recognition.
Problems that are difficult to compute and do not require perfect answers,
just very good answers, are also best done with neural networks.  A quick,
very good response is often more desirable than a more accurate answer which
takes longer to compute.  This is especially true in robotics or industrial
controller applications.  Predictions of behavior and general analysis of
data are also affairs for neural networks.  In the financial arena, consumer
loan analysis and financial forecasting make good applications.  Network
designers are working on weather forecasting with neural networks (myself
included).  Currently, doctors are developing medical neural networks as an
aid in diagnosis.  Attorneys and insurance companies are also working on
neural networks to help estimate the value of claims.</P>

<P>Neural networks are poor at precise calculations and serial processing. They
are also unable to predict or recognize anything that does not inherently
contain some sort of pattern.  For example, they cannot predict the lottery,
since this is a random process.  It is unlikely that a neural network could
be built which has the capacity to think as well as a person does for two
reasons.  Neural networks are terrible at deduction, or logical thinking, and
the human brain is just too complex to completely simulate.  Also, some
problems are too difficult for present technology.  Real vision, for
example, is a long way off.</P>

<P>In short, Neural Networks are poor at precise calculations, but good at
association, evaluation, and pattern recognition.
</P>
<P>
<HR SIZE=1 COLOR=BLACK>
<A HREF="http://www.josiah.countystart.com/modules/AI/rec.pl?docs.htm">AI::NeuralNet::BackProp</a> - <i>Written by Josiah Bryan, &lt;<A HREF="mailto:jdb@wcoil.com">jdb@wcoil.com</A>&gt;</I>
</BODY>

</HTML>


