AI-NeuralNet-BackProp


BackProp.pm

if(0) {
		# Formula by Steve Purkis.
		# Converges very fast for low-value inputs, but has trouble converging on
		# high-value inputs. Feel free to experiment and try to get it to work for
		# high values.
		my $delta = $ammount * ($what - $value) * $self->{SYNAPSES}->{LIST}->[$i]->{INPUT};
		$self->{SYNAPSES}->{LIST}->[$i]->{WEIGHT}  +=  $delta;
		$self->{SYNAPSES}->{LIST}->[$i]->{PKG}->weight($ammount,$what);
}
			
			# The formula in use by default is, as far as I know, original to me (Josiah Bryan).
			
			# If the value already equals the target, don't adjust.
			#
			### Disabled because this sometimes causes 
			### infinite loops when learning with range limits enabled.
			#
			#next if($value eq $what);
			
			# Adjust the increment by the weight of the synapse of 
			# this neuron & apply the direction delta.
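
			# --- Illustrative sketch (not part of BackProp.pm) ---
			# A minimal standalone sketch of an LMS-style weight update like the
			# one described above; $eta (learning gradient), $target, $output,
			# @inputs and @weights are hypothetical names for illustration:
			#
			#	my $error = $target - $output;
			#	for my $i (0 .. $#inputs) {
			#		# each weight moves in proportion to its input's contribution
			#		$weights[$i] += $eta * $error * $inputs[$i];
			#	}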

BackProp.pm

			
			# The inner loop connects every neuron in this 'layer' to one input of every neuron
			# in the next 'layer'. Remember, layers only exist in terms of where the connections
			# are divided. For example, if a person requested 2 layers and 3 neurons per layer,
			# then there would be 6 neurons in the {NET}->[] list, and $div would be set to
			# 3. So we would loop over the list, and every 3 neurons we would connect each of
			# those 3 neurons to one input of every neuron in the next set of 3 neurons. Of
			# course, this is just an example; the 3 and the 2 are set by the new() constructor.
			
			# Flag values:
			# 0 - (default) - 
			# 	My feed-forward style: Each neuron in layer X is connected to one input of every
			#	neuron in layer Y. The best and most proven flag style.
			#
			#   ^   ^   ^               
			#	O\  O\ /O       Layer Y
			#   ^\\/^/\/^
			#	| //|\/\|
			#   |/ \|/ \|		
			#	O   O   O       Layer X
			#   ^   ^   ^
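
			# --- Illustrative sketch (not part of BackProp.pm) ---
			# One way the flag-0 wiring above could be expressed; @net (flat
			# neuron list), $div (neurons per layer) and the connect() method
			# are hypothetical names for illustration:
			#
			#	for my $x (0 .. $#net - $div) {             # every neuron not in the last layer
			#		my $layer_start = int($x / $div) * $div;
			#		for my $y ($layer_start + $div .. $layer_start + 2 * $div - 1) {
			#			last if $y > $#net;
			#			$net[$x]->connect($net[$y]);    # one input of every neuron in layer Y
			#		}
			#	}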

BackProp.pm


=head1 UPDATES

This is version 0.89. In this version I have included a new feature, output range limits, as
well as automatic crunching of run() and learn*() inputs. Included in the examples directory
are seven new practical-use example scripts. Also implemented in this version is a much cleaner
learning function for individual neurons, which is more accurate than in previous versions and is
based on the LMS rule. See range() for information on output range limits. I have also updated
the load() and save() methods so that they no longer depend on Storable. In this version
you also have the choice between three network topologies: two are not as stable, and the third
is the default, which has been in use for the previous four versions.


=head1 DESCRIPTION

AI::NeuralNet::BackProp implements a neural network similar to a feed-forward,
back-propagation network, learning via a mix of a generalization
of the Delta rule and a dissection of Hebb's rule. The actual
neurons of the network are implemented via the AI::NeuralNet::BackProp::neuron package.

You construct a new network via the new constructor:

BackProp.pm

=over 4

=item new AI::NeuralNet::BackProp($layers, $size [, $outputs, $topology_flag])

Returns a newly created neural network from an C<AI::NeuralNet::BackProp>
object. The network will have C<$layers> layers in it,
and each layer will have C<$size> neurons in that layer.

There is an optional parameter, $outputs, which specifies the number
of output neurons to provide. If $outputs is not specified, it
defaults to $size. $outputs may not exceed $size; if $outputs
exceeds $size, the new() constructor will return undef.

The optional parameter, $topology_flag, defaults to 0 when not used. There are
three valid topology flag values:

B<0> I<default>
My feed-forward style: Each neuron in layer X is connected to one input of every
neuron in layer Y. This is the best and most proven flag style.

	^   ^   ^               
	O\  O\ /O       Layer Y
	^\\/^/\/^
	| //|\/\|
	|/ \|/ \|		
	O   O   O       Layer X
	^   ^   ^
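
For example, here is a minimal construction sketch using the default topology
(flag 0); the layer, size, and output counts are illustrative values, not
recommendations:

	use AI::NeuralNet::BackProp;

	# 2 layers, 3 neurons per layer, 1 output neuron, topology flag 0
	my $net = new AI::NeuralNet::BackProp(2, 3, 1, 0);
	die "new() returned undef" unless defined $net;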

BackProp.pm

The first two arguments may be array refs (or now, strings), and they may be of different lengths.

Options should be written in hash form. There are three options:
	 
	 inc	=>	$learning_gradient
	 max	=>	$maximum_iterations
	 error	=>	$maximum_allowable_percentage_of_error
	 

$learning_gradient is an optional value used to adjust the weights of the internal
connections. If $learning_gradient is omitted, it defaults to 0.20.
 
$maximum_iterations is the maximum number of iterations the loop should perform.
It defaults to 1024. Set it to 0 if you never want the loop to quit before
the pattern is perfectly learned.

$maximum_allowable_percentage_of_error is the maximum allowable error. If
this is set, then learn() will return when the percentage difference between the
actual results and desired results falls below $maximum_allowable_percentage_of_error.
If you do not include 'error', or $maximum_allowable_percentage_of_error is set to -1,
then learn() will not return until it gets an exact match for the desired result OR it
reaches $maximum_iterations.
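
For example, here is a sketch of a learn() call that sets all three options;
the input map and desired result are illustrative values:

	$net->learn(
		[ 0, 1, 0, 1 ],   # input map (array ref or string)
		[ 1 ],            # desired result
		inc   => 0.20,    # learning gradient
		max   => 1024,    # maximum iterations
		error => -1,      # -1: run until exact match or 'max' is reached
	);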


BackProp.pm


	flag     =>  $flag
	pattern  =>  $row

If "flag" is set to some TRUE value, as in "flag => 1" in the hash of options, or if the option "flag"
is not set, then it will return a percentage represting the amount of forgetfullness. Otherwise,
learn_set() will return an integer specifying the amount of forgetfulness when all the patterns 
are learned. 

If "pattern" is set, then learn_set() will use that pattern in the data set to measure forgetfulness by.
If "pattern" is omitted, it defaults to the first pattern in the set. Example:

	my @set = (
		[ 0,1,0,1 ],  [ 0 ],
		[ 0,0,1,0 ],  [ 1 ],
		[ 1,1,0,1 ],  [ 2 ],  #  <---
		[ 0,1,1,0 ],  [ 3 ]
	);
	
If you wish to measure forgetfulness as indicated by the line with the arrow, then you would
pass 2 as the "pattern" option, as in "pattern => 2".
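
For example, a sketch of that call against the @set shown above:

	my $forgetfulness = $net->learn_set(\@set,
		flag    => 1,   # return forgetfulness as a percentage
		pattern => 2,   # measure with the pair marked by the arrow above
	);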

BackProp.pm

=item $net->join_cols($array_ref,$row_length_in_elements,$high_state_character,$low_state_character);

This is more of a utility function than a truly necessary function of the package.
Instead of joining all the elements of the array together in one long string, like join(),
it prints the elements of $array_ref to standard output, adding a newline (\n) after every
$row_length_in_elements elements. Additionally, if you include a $high_state_character and a
$low_state_character, it will print the $high_state_character (which can be more than one
character) for every element that has a true value, and the $low_state_character for every
element that has a false value.
If you do not supply a $high_state_character, or the $high_state_character is a null, empty, or
undefined string, then join_cols() will just print the numerical value of each element separated
by a null character (\0). join_cols() defaults to the latter behaviour.
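
For example, here is a sketch that prints a small bitmap-style map, 4 elements
per row, with '#' for true elements and '.' for false ones; the map values are
illustrative:

	my @map = ( 0, 1, 0, 1,
	            0, 0, 1, 0,
	            1, 1, 0, 1,
	            0, 1, 1, 0 );
	$net->join_cols(\@map, 4, '#', '.');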



=item $net->pdiff($array_ref_A, $array_ref_B);

This function is used VERY heavily internally to calculate the difference in percent
between elements of the two array refs passed. It returns a %.10f (sprintf-format) 
percent string.
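
For example, a sketch of a pdiff() call; the array values are illustrative:

	# prints the "%.10f"-formatted percentage difference between the arrays
	print $net->pdiff([ 1, 2, 3 ], [ 1, 2, 4 ]), "\n";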


README

new feature, output range limits, as well as automatic
crunching of run() and learn*() inputs. Included in the
examples directory are seven new practical-use example
scripts. Also implemented in this version is a much cleaner
learning function for individual neurons which is more
accurate than in previous versions and is based on the LMS
rule. See range() for information on output range limits.
I have also updated the load() and save() methods so that
they no longer depend on Storable. In this version you
also have the choice between three network topologies: two
are not as stable, and the third is the default, which has
been in use for the previous four versions.

Check out the nifty HTML-format docs in "docs.htm"

** What do you think?

Now, I know you people out there are using the module...
I can hear the fists hitting the keyboards in frustration. :-) Relieve
some of that frustration by e-mailing me and letting me know what
you think of the module and any suggestions you have. Especially you

docs.htm

<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="updates">UPDATES</A></H1>
<P>This is version 0.89. In this version I have included a new feature, output range limits, as
well as automatic crunching of <A HREF="#item_run"><CODE>run()</CODE></A> and learn*() inputs. Included in the examples directory
are seven new practical-use example scripts. Also implemented in this version is a much cleaner
learning function for individual neurons, which is more accurate than in previous versions and is
based on the LMS rule. See <A HREF="#item_range"><CODE>range()</CODE></A> for information on output range limits. I have also updated
the <A HREF="#item_load"><CODE>load()</CODE></A> and <A HREF="#item_save"><CODE>save()</CODE></A> methods so that they no longer depend on Storable. In this version
you also have the choice between three network topologies: two are not as stable, and the third
is the default, which has been in use for the previous four versions.</P>
<P>
<HR SIZE=1 COLOR=BLACK>
<H1><A NAME="description">DESCRIPTION</A></H1>
<P>AI::NeuralNet::BackProp implements a neural network similar to a feed-forward,
back-propagation network, learning via a mix of a generalization
of the Delta rule and a dissection of Hebb's rule. The actual
neurons of the network are implemented via the AI::NeuralNet::BackProp::neuron package.
</P>

You construct a new network via the new constructor:

docs.htm

<P>
<H2><A NAME="methods">METHODS</A></H2>
<DL>
<DT><STRONG><A NAME="item_BackProp">new AI::NeuralNet::BackProp($layers, $size [, $outputs, $topology_flag])</A></STRONG><BR>
<DD>
Returns a newly created neural network from an <CODE>AI::NeuralNet::BackProp</CODE>
object. The network will have <CODE>$layers</CODE> layers in it,
and each layer will have <CODE>$size</CODE> neurons in that layer.
<P>There is an optional parameter, $outputs, which specifies the number
of output neurons to provide. If $outputs is not specified, it
defaults to $size. $outputs may not exceed $size; if $outputs
exceeds $size, the <CODE>new()</CODE> constructor will return undef.</P>
<P>The optional parameter, $topology_flag, defaults to 0 when not used. There are
three valid topology flag values:</P>
<P><STRONG>0</STRONG> <EM>default</EM>
My feed-forward style: Each neuron in layer X is connected to one input of every
neuron in layer Y. This is the best and most proven flag style.</P>
<PRE>
        ^   ^   ^               
        O\  O\ /O       Layer Y
        ^\\/^/\/^
        | //|\/\|
        |/ \|/ \|               
        O   O   O       Layer X
        ^   ^   ^</PRE>

docs.htm

in the input map if you have randomness disabled. See NOTES on using a 0 value with randomness
disabled.</P>
<P>The first two arguments may be array refs (or now, strings), and they may be of different lengths.</P>
<P>Options should be written in hash form. There are three options:
</P>
<PRE>
         inc    =&gt;      $learning_gradient
         max    =&gt;      $maximum_iterations
         error  =&gt;      $maximum_allowable_percentage_of_error</PRE>
<P>$learning_gradient is an optional value used to adjust the weights of the internal
connections. If $learning_gradient is omitted, it defaults to 0.20.
</P>
<P>
$maximum_iterations is the maximum number of iterations the loop should perform.
It defaults to 1024. Set it to 0 if you never want the loop to quit before
the pattern is perfectly learned.</P>
<P>$maximum_allowable_percentage_of_error is the maximum allowable error. If
this is set, then <A HREF="#item_learn"><CODE>learn()</CODE></A> will return when the percentage difference between the
actual results and desired results falls below $maximum_allowable_percentage_of_error.
If you do not include 'error', or $maximum_allowable_percentage_of_error is set to -1,
then <A HREF="#item_learn"><CODE>learn()</CODE></A> will not return until it gets an exact match for the desired result OR it
reaches $maximum_iterations.</P>
<P></P>
<DT><STRONG><A NAME="item_learn_set">$net-&gt;learn_set(\@set, [ options ]);</A></STRONG><BR>
<DD>

docs.htm

<P>See the paragraph on measuring forgetfulness, below. There are 
two learn_set()-specific option tags available:</P>
<PRE>
        flag     =&gt;  $flag
        pattern  =&gt;  $row</PRE>
<P>If ``flag'' is set to some TRUE value, as in ``flag =&gt; 1'' in the hash of options, or if the option ``flag''
is not set, then <A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> will return a percentage representing the amount of forgetfulness. Otherwise,
<A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> will return an integer specifying the amount of forgetfulness when all the patterns
are learned.</P>
<P>If ``pattern'' is set, then <A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> will use that pattern in the data set to measure forgetfulness by.
If ``pattern'' is omitted, it defaults to the first pattern in the set. Example:</P>
<PRE>
        my @set = (
                [ 0,1,0,1 ],  [ 0 ],
                [ 0,0,1,0 ],  [ 1 ],
                [ 1,1,0,1 ],  [ 2 ],  #  &lt;---
                [ 0,1,1,0 ],  [ 3 ]
        );
</PRE>
<P>
If you wish to measure forgetfulness as indicated by the line with the arrow, then you would

docs.htm

<DT><STRONG><A NAME="item_join_cols">$net-&gt;join_cols($array_ref,$row_length_in_elements,$high_state_character,$low_state_character);</A></STRONG><BR>
<DD>
This is more of a utility function than a truly necessary function of the package.
Instead of joining all the elements of the array together in one long string, like <CODE>join()</CODE>,
it prints the elements of $array_ref to standard output, adding a newline (\n) after every
$row_length_in_elements elements. Additionally, if you include a $high_state_character and a
$low_state_character, it will print the $high_state_character (which can be more than one
character) for every element that has a true value, and the $low_state_character for every
element that has a false value.
If you do not supply a $high_state_character, or the $high_state_character is a null, empty, or
undefined string, then <A HREF="#item_join_cols"><CODE>join_cols()</CODE></A> will just print the numerical value of each element separated
by a null character (\0). <A HREF="#item_join_cols"><CODE>join_cols()</CODE></A> defaults to the latter behaviour.
<P></P>
<DT><STRONG><A NAME="item_pdiff">$net-&gt;pdiff($array_ref_A, $array_ref_B);</A></STRONG><BR>
<DD>
This function is used VERY heavily internally to calculate the difference in percent
between elements of the two array refs passed. It returns a %.10f (sprintf-format) 
percent string.
<P></P>
<DT><STRONG><A NAME="item_p">$net-&gt;p($a,$b);</A></STRONG><BR>
<DD>
Returns a floating point number which represents $a as a percentage of $b.


