AI-NeuralNet-Mesh

a <CODE>for()</CODE> loop over the range of numbers, calculating the size instead of simply iterating
$r1a..$r1b, because we use the loop index with the next layer up as well.</P>
<P>$y + $r1a gives the index into the mesh array of the current node, the one we connect the output FROM.
We need to connect this node's output lines to the next layer's input nodes. We do this
with a simple method of the outputting node (the node at $y+$r1a), called add_output_node().</P>
<P><CODE>add_output_node()</CODE> takes one simple argument: a blessed reference to the node it is supposed
to output its final value TO. We get this blessed reference with more simple addition.</P>
<P>$y + $r2a gives us the node directly above the first node (supposedly... I'll get to the ``supposedly''
part in a minute). By adding to or subtracting from this number we get the neighbor nodes.
In the above example you can see we check the $y index to make sure we haven't come close to
any of the edges of the range.</P>
<P>Using $y+$r2a we get the index of the node to pass to <CODE>add_output_node()</CODE>, called on the first node at
$y+<STRONG>$r1a</STRONG>.</P>
<P>And that's all there is to it!</P>
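<P>To make the index arithmetic above concrete, here is a small, self-contained sketch of
that neighbor-connecting pattern. The <CODE>DemoNode</CODE> class below is a hypothetical
stand-in (NOT <CODE>AI::NeuralNet::Mesh::node</CODE>); it implements only the
<CODE>add_output_node()</CODE> behavior described above, so the range math can be run and
checked on its own.</P>

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in node class (hypothetical; NOT AI::NeuralNet::Mesh::node).
# It supports only the add_output_node() method described above.
package DemoNode;
sub new {
    my ($class, $id) = @_;
    return bless { id => $id, outputs => [] }, $class;
}
sub add_output_node {
    my ($self, $node) = @_;
    push @{ $self->{outputs} }, $node;
}

package main;

# Two layers of 5 nodes each, stored flat like the mesh array.
my @mesh = map { DemoNode->new($_) } 0 .. 9;
my ($r1a, $r1b) = (0, 5);    # first layer:  indices 0..4
my ($r2a, $r2b) = (5, 10);   # second layer: indices 5..9
my $size = $r1b - $r1a;

# Connect each node at $y+$r1a to the node directly above it ($y+$r2a)
# and to that node's immediate neighbors, checking $y against the edges
# of the range so we never index outside the second layer.
for my $y (0 .. $size - 1) {
    $mesh[$y + $r1a]->add_output_node($mesh[$y + $r2a]);
    $mesh[$y + $r1a]->add_output_node($mesh[$y + $r2a - 1]) if $y > 0;
    $mesh[$y + $r1a]->add_output_node($mesh[$y + $r2a + 1]) if $y < $size - 1;
}

# Edge nodes end up with 2 output connections, interior nodes with 3.
print scalar(@{ $mesh[0]->{outputs} }), "\n";   # 2
print scalar(@{ $mesh[2]->{outputs} }), "\n";   # 3
```

<P>The edge checks are the whole point: without the <CODE>if $y &gt; 0</CODE> and
<CODE>if $y &lt; $size - 1</CODE> guards, the first and last nodes would reach outside
their layer's range.</P>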
<P>For the fun of it, we'll take a quick look at the default connector.
Below is the actual default connector code, cleaned up a bit, with
line numbers added.</P>
<PRE>
        = line 1  =     sub _c {
        = line 2  =         my $self = shift;
        = line 3  =         my $r1a  = shift;
        = line 4  =         my $r1b  = shift;
        = line 5  =         my $r2a  = shift;
        = line 6  =         my $r2b  = shift;
        = line 7  =         my $mesh = $self-&gt;{mesh};
        = line 8  =         for my $y ($r1a..$r1b-1) {
        = line 9  =             for my $z ($r2a..$r2b-1) {
        = line 10 =                 $mesh-&gt;[$y]-&gt;add_output_node($mesh-&gt;[$z]);
        = line 11 =             }
        = line 12 =         }
        = line 13 =     }
</PRE>
<P>It's that easy! This is the simplest connector (well, almost). It just connects each
node in the first layer, defined by ($r1a..$r1b), to every node in the second layer, as
defined by ($r2a..$r2b).</P>
<P>For those of you still reading: if you do come up with any new connection functions,
PLEASE SEND THEM TO ME. I would love to see what others are doing, as well as get new
network ideas. I will probably include any connectors you send in future releases (with
proper credit and permission, of course).</P>
<P>Anyways, happy coding!</P>
<P>
<HR>
<H1><A NAME="what can it do">WHAT CAN IT DO?</A></H1>
<P>Rodin Porrata asked on the ai-neuralnet-backprop mailing list,
``What can they [Neural Networks] do?''. In regards to that question,
consider the following:</P>
<P>Neural nets are formed by simulated neurons connected together much the same
way the brain's neurons are. Because of this, neural networks are able to associate and
generalize without rules.  They have solved problems in pattern recognition,
robotics, speech processing, financial prediction and signal processing, to
name a few.</P>
<P>One of the first impressive neural networks was NetTalk, which read in ASCII
text and correctly pronounced the words (producing phonemes which drove a
speech chip), even those it had never seen before.  Designed by Johns Hopkins
biophysicist Terry Sejnowski and Charles Rosenberg of Princeton in 1986,
this application made the backpropagation training algorithm famous.  Using
the same paradigm, a neural network has been trained to classify sonar
returns from undersea mines and rocks.  This classifier, designed by
Sejnowski and R. Paul Gorman, performed better than a nearest-neighbor
classifier.</P>
<P>The kinds of problems best solved by neural networks are those that people
are good at, such as association, evaluation and pattern recognition.
Problems that are difficult to compute and do not require perfect answers,
just very good answers, are also well suited to neural networks.  A quick,
very good response is often more desirable than a more accurate answer which
takes longer to compute.  This is especially true in robotics or industrial
controller applications.  Prediction of behavior and general analysis of
data are also tasks for neural networks.  In the financial arena, consumer
loan analysis and financial forecasting make good applications.  Network
designers are also working on weather forecasting with neural networks (myself
included).  Currently, doctors are developing medical neural networks as an
aid in diagnosis.  Attorneys and insurance companies are also working on
neural networks to help estimate the value of claims.</P>
<P>Neural networks are poor at precise calculations and serial processing. They
are also unable to predict or recognize anything that does not inherently
contain some sort of pattern.  For example, they cannot predict the lottery,
since this is a random process.  It is unlikely that a neural network could
be built which has the capacity to think as well as a person does, for two
reasons: neural networks are terrible at deduction, or logical thinking, and
the human brain is just too complex to completely simulate.  Also, some
problems are too difficult for present technology.  Real vision, for
example, is a long way off.</P>
<P>In short, Neural Networks are poor at precise calculations, but good at
association, evaluation, and pattern recognition.</P>
<P>
<HR>
<H1><A NAME="examples">EXAMPLES</A></H1>
<P>Included are several example files in the ``examples'' directory of the
distribution ZIP file. Each example includes a short explanation 
at the top of the file. Each of these is meant to demonstrate simple, yet 
practical (for the most part :-) uses of this module.</P>
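<P>As a taste of what the examples look like, here is a minimal usage sketch in the
style of the module's synopsis: a small mesh trained on XOR. The layer sizes and the
<CODE>learn_set()</CODE> data layout are my reading of the documentation, so double-check
them against the POD before relying on this.</P>

```perl
#!/usr/bin/perl
# Minimal XOR training sketch (requires AI::NeuralNet::Mesh from CPAN).
# Layer sizes and learn_set() layout are assumptions to verify against the POD.
use strict;
use warnings;
use AI::NeuralNet::Mesh;

# 2 layers of 2 nodes each, with 1 output node.
my $net = AI::NeuralNet::Mesh->new(2, 2, 1);

# learn_set() takes a flat list of input => target pairs.
$net->learn_set([
    [0,0] => [0],
    [0,1] => [1],
    [1,0] => [1],
    [1,1] => [0],
]);

# run() returns an array ref of output values.
my $result = $net->run([0, 1]);
print "0 xor 1 => $result->[0]\n";
```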
<P>
<HR>
<H1><A NAME="other included packages">OTHER INCLUDED PACKAGES</A></H1>
<P>These packages are not designed to be called directly; they are for internal use. They are
listed here simply for your reference.</P>
<DL>
<DT><STRONG><A NAME="item_AI%3A%3ANeuralNet%3A%3AMesh%3A%3Anode">AI::NeuralNet::Mesh::node</A></STRONG><BR>
<DD>
This is the worker package of the mesh. It implements all the individual nodes of the mesh.
It might be good to look at the source for this package (in the Mesh.pm file) if you
plan to write a lot of extensive custom node activation types.
<P></P>
<DT><STRONG><A NAME="item_AI%3A%3ANeuralNet%3A%3AMesh%3A%3Acap">AI::NeuralNet::Mesh::cap</A></STRONG><BR>
<DD>
This is applied to the input layer of the mesh to prevent the mesh from trying to recursively
adjust weights out through the inputs.
<P></P>
<DT><STRONG><A NAME="item_AI%3A%3ANeuralNet%3A%3AMesh%3A%3Aoutput">AI::NeuralNet::Mesh::output</A></STRONG><BR>
<DD>
This is simply a data collector package clamped onto the output layer to record the data 
as it comes out of the mesh.
<P></P></DL>
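<P>If you do want a custom activation without digging into the node package, the
constructor's hashref layer form may be enough. The sketch below is hypothetical:
the hashref layer spec, the <CODE>activation</CODE> key, and the (sum, node) code-ref
signature are all assumptions from my reading of the POD, so verify them against the
module's documentation before use.</P>

```perl
#!/usr/bin/perl
# Hypothetical sketch: per-layer custom activation via a code ref.
# The hashref layer spec and the ($sum, $node) signature are assumptions;
# check the AI::NeuralNet::Mesh POD before relying on them.
use strict;
use warnings;
use AI::NeuralNet::Mesh;

my $net = AI::NeuralNet::Mesh->new([
    { nodes => 2 },                     # input layer
    { nodes => 2,                       # hidden layer with custom activation
      activation => sub {
          my ($sum, $node) = @_;
          return $sum < 0 ? 0 : $sum;   # simple ramp-style clamp
      },
    },
    { nodes => 1 },                     # output layer
]);
```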
<P>
<HR>
<H1><A NAME="bugs">BUGS</A></H1>
<P>This is a beta release of <CODE>AI::NeuralNet::Mesh</CODE>, and that being the case, I am sure 
there are bugs in here which I just have not found yet. If you find bugs in this module, I would 
appreciate it greatly if you could report them to me at <EM>&lt;<A HREF="mailto:jdb@wcoil.com">jdb@wcoil.com</A>&gt;</EM>,
or, even better, try to patch them yourself, figure out why the bug is being buggy, and
send me the patched code, again at <EM>&lt;<A HREF="mailto:jdb@wcoil.com">jdb@wcoil.com</A>&gt;</EM>.</P>
<P>
<HR>
<H1><A NAME="author">AUTHOR</A></H1>


