AI-NeuralNet-BackProp
BackProp.pm
#!/usr/bin/perl
# $Id: BackProp.pm,v 0.89 2000/08/12 01:05:27 josiah Exp $
#
# Copyright (c) 2000 Josiah Bryan USA
#
# See AUTHOR section in pod text below for usage and distribution rights.
# See UPDATES section in pod text below for info on what has changed in this release.
#
BEGIN {
$AI::NeuralNet::BackProp::VERSION = "0.89";
}
#
# name: AI::NeuralNet::BackProp
#
# author: Josiah Bryan
# Excerpt from the new() constructor. The excerpt begins mid-sub, so the
# opening lines below are a plausible reconstruction, marked as such:
sub new {
    my $type   = shift;   # reconstructed: class name
    my $self   = {};      # reconstructed: the new network object
    my $layers = shift;
    my $size   = shift;
    my $out    = shift || $size;
    my $flag   = shift || 0;
    bless $self, $type;
    # If $layers is a string, it will be numerically equal to 0, so try to
    # load it as a network file.
    if($layers == 0) {
        # We use a "1" flag as the second argument to indicate that we want
        # load() to call the new constructor to make a network the same size
        # as in the file and return a reference to it, instead of loading the
        # network into a pre-existing reference.
        return $self->load($layers,1);
    }
    AI::NeuralNet::BackProp::out2 "Creating $size neurons in each layer for $layers layer(s)...\n";
    # ... (remainder of the constructor omitted in this excerpt)
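A minimal usage sketch of the two constructor forms handled above ('mynet.dat'
is a placeholder filename, not part of the distribution):

    use AI::NeuralNet::BackProp;
    my $fresh  = new AI::NeuralNet::BackProp(2,3,1);        # numeric args: build a new network
    my $reload = new AI::NeuralNet::BackProp("mynet.dat");  # string (numerically 0): load a saved one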
# Returns the mean of the (r,g,b) values of the palette index passed
sub avg {
    my $self  = shift;
    my $color = shift;
    my $p = $self->{palette}->[$color];
    return $self->{parent}->intr(($p->{red} + $p->{green} + $p->{blue}) / 3);
}
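A worked sketch with a hypothetical palette entry (the values are illustrative,
not from the module):

    $pcx->{palette}->[7] = { red => 30, green => 60, blue => 90 };
    my $mean = $pcx->avg(7);   # intr((30+60+90)/3) == intr(60)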
# Loads and decompresses a PCX-format 320x200, 8-bit image file and returns
# two arrays: the first is a 64000-element array in which each element is a
# palette index, and the second is a 256-element palette array in which each
# element is a hash ref with the keys 'red', 'green', and 'blue' holding the
# respective color components for that palette index.
sub load_pcx {
    shift if(substr($_[0],0,4) eq 'AI::');   # discard class/object on method calls
    # open the file (with an error check the original excerpt lacked)
    open(FILE, "<", $_[0]) or die "Cannot open PCX file '$_[0]': $!";
    binmode(FILE);
    my $tmp;
    # ... (remainder of the decoder omitted in this excerpt)
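A hypothetical usage sketch, assuming load_pcx() hands back the image and
palette as two array refs matching the comment above ('img.pcx' is a
placeholder filename):

    my ($img, $palette) = load_pcx('img.pcx');
    my $idx = $img->[0];                # palette index of the first pixel
    my $rgb = $palette->[$idx];
    print "first pixel: ($rgb->{red}, $rgb->{green}, $rgb->{blue})\n";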
This version includes a new learning function for individual neurons which is more
accurate than previous versions and is based on the LMS rule. See range() for
information on output range limits. I have also updated the load() and save()
methods so that they no longer depend on Storable. In this version you also have
the choice of three network topologies: two are less stable, and the third, the
default, has been in use for the previous four versions.
=head1 DESCRIPTION
AI::NeuralNet::BackProp implements a neural network similar to a feed-forward,
back-propagation network, learning via a mix of a generalization of the Delta
rule and a dissection of Hebb's rule. The actual neurons of the network are
implemented via the AI::NeuralNet::BackProp::neuron package.

You construct a new network via the new constructor:

    my $net = new AI::NeuralNet::BackProp(2,3,1);

The new() constructor takes two required arguments and one optional argument:
$layers, $size, and (optionally) $outputs. In this example, $layers is 2, $size
is 3, and $outputs is 1. $layers specifies the number of layers, including the
input and the output layer, to use in each neural grouping; if $outputs is
omitted, it defaults to $size.
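A minimal sketch of the defaulting behavior (grounded in the constructor's
'my $out = shift || $size'): these two calls build equivalent networks.

    my $net_a = new AI::NeuralNet::BackProp(2,3);    # $outputs defaults to $size
    my $net_b = new AI::NeuralNet::BackProp(2,3,3);  # same network, spelled out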
To run an input map through the network:

    my $net = new AI::NeuralNet::BackProp(2,2);
    my @map = (0,1);
    my $result = $net->run(\@map);
Now, this call would probably not give what you want, because the network hasn't
"learned" any patterns yet. But it illustrates the call. run() now also accepts
strings as input; see run() for more information.

run() returns an array reference with $size elements (remember $size? It is what
you passed as the second argument to the constructor). The array contains the
results of the mapping. If you ran the example exactly as shown above, $result
would probably contain (1,1) as its elements.
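A short sketch of consuming that array reference ($result as returned above):

    print "outputs: ", join(',', @$result), "\n";   # $size output values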
To make the network learn a new pattern, you simply call the learn method with a
sample input and the desired result, both array references of $size length.
Example:

    use AI::NeuralNet::BackProp;
    my $net = new AI::NeuralNet::BackProp(2,2);
    my @map = (0,1);
    my @res = (1,0);
    $net->learn(\@map,\@res);
    my $result = $net->run(\@map);

Now $result will contain (1,0), effectively flipping the input pattern around.
Obviously, the larger $size is, the longer it will take to learn a pattern.
learn() returns a string of the form

    Learning took X loops and X wallclock seconds (X.XXX usr + X.XXX sys = X.XXX CPU).

with the X's replaced by the loop and time values for that learn() call. So, to
view the learning stats for every learn() call, you can simply:

    print $net->learn(\@map,\@res);
If you call $net->debug(4), with $net being the reference returned by the new()
constructor, you will get benchmarking information for the learn function, as
well as plenty of other information. See the notes on debug() in the METHODS
section, below.

If you do call $net->debug(1), it is a good idea to redirect your script's
STDOUT to a file, as a lot of information is output. I often use this command
line:

    $ perl some_script.pl > .out

Then I can simply use emacs or any other text editor to read the output at my
leisure, rather than having to wait or pipe it through 'more' as it scrolls by
on the screen.
=head1 METHODS
=item $net->range("string of values");
With this construct you can specify a string of values to be allowed as the outputs. This string
is simply taken an crunch() -ed internally and saved as an array ref. This has the same effect
as calling:
$net->range($net->crunch("string of values"));
=item $net->range("first string","second string");
This is the same as calling:
$net->range($net->crunch("first string"),$net->crunch("second string"));
Or:
@range = ($net->crunch("first string"),
$net->crunch("second string"));
$net->range(\@range);
=item $net->range($value1,$value2);

This is the same as calling:

    $net->range([$value1,$value2]);

Or:

    my @range = ($value1,$value2);
    $net->range(\@range);

All three forms are equivalent.
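A consolidated sketch of the equivalent forms above (the values 0 and 1 are
arbitrary):

    $net->range(0,1);      # two scalars ...
    $net->range([0,1]);    # ... same as one array ref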
=item $net->benchmarked();

UPDATE: benchmarked() now returns just the string from timestr() for the last
run() or learn() call. Exception: if the last call was a learn() loop, the
string will be prefixed with "%d loops and ".

This returns a benchmark info string for the last learn() or run() call,
whichever occurred later. It is easily printed as a string, as follows:

    print $net->benchmarked() . "\n";
When randomness is enabled (that is, when you call random() with a value other
than 0), a bit of randomness is injected into the output of every neuron in the
network, except for the input and output neurons. The randomness is injected as
rand()*$rand, where $rand is the value passed to the random() call. This ensures
that the network never holds a pure 0 internally. A pure 0 internally is bad
because a weight multiplied by 0 cannot change anything; the product stays 0.
Yet when a weight is multiplied by 0.00001, with enough weight it will
eventually be able to learn. With a value of 0 instead of 0.00001 or the like,
it would never be able to add enough weight to produce anything other than 0.
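A minimal sketch (0.001 is an arbitrary illustrative factor):

    $net->random(0.001);          # inject rand()*0.001 into hidden neurons
    $net->learn(\@map, \@res);    # learn with 0s present in the inputs
    $net->random(0);              # turn the randomness back off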
The second option for handling 0s is to enable a maximum error with the 'error'
option in learn(), learn_set(), and learn_set_rand(), e.g. 'error => 5' (you
can use some other error value as well). This relieves the network of having to
learn an output perfectly.
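A sketch of that option (the 'error => 5' form is taken from the text above):

    $net->learn(\@map, \@res, error => 5);   # stop once the error is within tolerance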
For accuracy reasons, it is recommended that you work with 0s using the random()
method.

If anyone has any thoughts/arguments/suggestions for using 0s in the network,
let me know at jdb@wcoil.com.
examples/ex_add.pl
use AI::NeuralNet::BackProp;

# NOTE: the opening lines of this example were cut off in this excerpt; the
# lines down to learn_set() are a plausible reconstruction, not the original
# source.
my $addition = new AI::NeuralNet::BackProp(2,2,1);
if(!$addition->load('add.dat')) {
    $addition->learn_set([
        [   2,   2 ], [    4 ],
        [  20,  20 ], [   40 ],
        [ 100, 100 ], [  200 ],
        [ 150, 150 ], [  300 ],
        [ 500, 500 ], [ 1000 ],
    ]);
    $addition->save('add.dat');
}

print "Enter first number to add : ";  chomp(my $a = <>);
print "Enter second number to add : "; chomp(my $b = <>);
print "Result: ", $addition->run([$a,$b])->[0], "\n";
examples/ex_sub.pl
use AI::NeuralNet::BackProp;

# NOTE: the opening lines of this example were cut off in this excerpt; the
# lines down to learn_set() are a plausible reconstruction, not the original
# source.
my $subtract = new AI::NeuralNet::BackProp(2,2,1);
if(!$subtract->load('sub.dat')) {
    $subtract->learn_set([
        [   2,   1 ], [   1 ],
        [  10,   5 ], [   5 ],
        [  20,  10 ], [  10 ],
        [ 100,  50 ], [  50 ],
        [ 500, 200 ], [ 300 ],
    ]);
    $subtract->save('sub.dat');
}

print "Enter first number to subtract : ";  chomp(my $a = <>);
print "Enter second number to subtract : "; chomp(my $b = <>);
print "Result: ", $subtract->run([$a,$b])->[0], "\n";