AI-NeuralNet-BackProp
BackProp.pm
# the next 'layer'. Remember, layers only exist in terms of where the connections
# are divided. For example, if a person requested 2 layers and 3 neurons per layer,
# then there would be 6 neurons in the {NET}->[] list, and $div would be set to
# 3. So we would loop over the list, and every 3 neurons we would connect each of
# those 3 neurons to one input of every neuron in the next set of 3 neurons. Of
# course, this is just an example; 3 and 2 are set by the new() constructor.
# Flag values:
# 0 - (default) -
# My feed-forward style: Each neuron in layer X is connected to one input of every
# neuron in layer Y. The best and most proven flag style.
#
# ^ ^ ^
# O\ O\ /O Layer Y
# ^\\/^/\/^
# | //|\/\|
# |/ \|/ \|
# O O O Layer X
# ^ ^ ^
#
# 1 -
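
To make the comment above concrete, here is a minimal, runnable sketch of that connection loop for the 2-layer, 3-neurons-per-layer example. It only prints which neuron indices would be wired together; the flat-list layout and loop bounds are inferred from the comment, not lifted from the module's actual internals.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $layers = 2;   # number of layers, as set by new()
    my $div    = 3;   # neurons per layer, as set by new()

    # Neurons live in one flat list of $layers * $div entries ({NET}->[]).
    # Walk it in groups of $div and connect each neuron in a group to one
    # input of every neuron in the following group.
    for my $l (0 .. $layers - 2) {                            # all but the last layer
        for my $x ($l * $div .. ($l + 1) * $div - 1) {        # layer X
            for my $y (($l + 1) * $div .. ($l + 2) * $div - 1) {  # layer Y
                printf "neuron %d -> one input of neuron %d\n", $x, $y;
            }
        }
    }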
The network will have $layers number of layers, and each layer will have
$size neurons.
There is an optional parameter, $outputs, which specifies the number of
output neurons to provide. If $outputs is not specified, it defaults to
$size. $outputs may not exceed $size; if it does, the new() constructor
will return undef.
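
As a usage sketch (the argument order $layers, $size, $outputs, $topology_flag is inferred from this section; check the module's SYNOPSIS before relying on it):

    use AI::NeuralNet::BackProp;

    # 2 layers, 3 neurons per layer, 1 output neuron, topology flag 0
    # (the default feed-forward style described below)
    my $net = AI::NeuralNet::BackProp->new(2, 3, 1, 0)
        or die "new() returned undef -- \$outputs may have exceeded \$size";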
The optional parameter, $topology_flag, defaults to 0 when not used. There are
three valid topology flag values:
B<0> I<default>
My feed-forward style: Each neuron in layer X is connected to one input of every
neuron in layer Y. The best and most proven flag style.
^ ^ ^
O\ O\ /O Layer Y
^\\/^/\/^
| //|\/\|
|/ \|/ \|
O O O Layer X
^ ^ ^
(Sorry about the bad art...I am no ASCII artist! :-)

B<1>
One of the first impressive neural networks was NetTalk, which read in ASCII
text and correctly pronounced the words (producing phonemes which drove a
speech chip), even those it had never seen before. Designed in 1986 by Johns
Hopkins biophysicist Terry Sejnowski and Charles Rosenberg of Princeton,
this application made the backpropagation training algorithm famous. Using
the same paradigm, a neural network was trained to classify sonar returns
from undersea mines and rocks. This classifier, designed by Sejnowski and
R. Paul Gorman, performed better than a nearest-neighbor classifier.
The kinds of problems best solved by neural networks are those that people
are good at, such as association, evaluation, and pattern recognition.
Problems that are difficult to compute exactly and do not require perfect
answers, just very good ones, are also well suited to neural networks. A
quick, very good response is often more desirable than a more accurate
answer that takes longer to compute; this is especially true in robotics
and industrial-controller applications. Behavior prediction and general
data analysis are also jobs for neural networks. In the financial arena,
consumer loan analysis and financial forecasting make good applications.
Network designers (myself included) are working on weather forecasting
with neural networks. Doctors are developing medical neural networks as an
aid in diagnosis, and attorneys and insurance companies are working on
neural networks to help estimate the value of claims.