AI-NeuralNet-Mesh
package AI::NeuralNet::Mesh;

BEGIN {
    $AI::NeuralNet::Mesh::ID =
        '$Id: AI::NeuralNet::Mesh.pm, v'.$AI::NeuralNet::Mesh::VERSION.' 2000/15/09 03:29:08 josiah Exp $';
}
require Exporter;
@ISA = qw(Exporter);
@EXPORT = qw(range intr pdiff);
%EXPORT_TAGS = (
'default' => [ qw ( range intr pdiff )],
'all' => [ qw ( p low high ramp and_gate or_gate range intr pdiff ) ],
'p' => [ qw ( p low high intr pdiff ) ],
'acts' => [ qw ( ramp and_gate or_gate range ) ],
);
@EXPORT_OK = ( @{ $EXPORT_TAGS{'all'} }, qw( p low high ramp and_gate or_gate ) );
use strict;
use Benchmark;
# See POD for usage of this variable.
$AI::NeuralNet::Mesh::Connector = '_c';
#
# Loads a CSV-like dataset from disk
#
# Usage:
#     my $set = $net->load_set($file, $column, $separator);
#
# Returns a data set in the same format as required by the
# learn_set() method. $file is the disk file to load the set from.
# $column is an optional variable specifying the column in the
# data set to use as the class attribute. $column defaults to 0.
# $separator is an optional variable specifying the separator
# character between values. $separator defaults to ',' (a single comma).
# NOTE: This does not handle quoted fields, or any record
# separator other than "\n".
#
sub load_set {
    my $self = shift;
    my $file = shift;
    my $attr = shift || 0;    # column to use as the class attribute
    my $sep  = shift || ',';  # field separator
    my $data = [];
    open(FILE, $file) or return undef;
    while (my $line = <FILE>) {
        chomp $line;
        my @fields = split /$sep/, $line;
        my $class  = splice(@fields, $attr, 1);  # pull out the class column
        push @$data, [ @fields ], [ $class ];    # input row, then target row
    }
    close(FILE);
    return $data;
}
# Returns the index of the lowest value in the array ref passed.
sub low {
    shift if substr($_[0], 0, 4) eq 'AI::';   # discard class/instance when called as a method
    my $ref1 = shift;
    my $tmp = 0;
    for my $x (0..$#{$ref1}) {
        $tmp = $x if $ref1->[$x] < $ref1->[$tmp];
    }
    return $tmp;
}
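# For instance, low() (exported via the ':all' or ':p' tags) can be
# called like this:
#
#     my $index = low([5, 2, 9]);   # returns 1, the position of the lowest value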
# Following is a collection of a few nifty custom activation functions.
# range() is exported by default, the rest you can get with:
# use AI::NeuralNet::Mesh ':acts'
# The ':all' tag also gets these into your namespace.
#
# range() returns a closure limiting the output
# of that node to a specified set of values.
# Good for output layers.
#
# usage example:
# $net->activation(4,range(0..5));
#
# Note: when using a range() activator, train the
# net at least TWICE on the data set, because the first
# time the range() function searches for the top value in
# the inputs, and therefore, results could fluctuate.
# The second learning cycle guarantees more accuracy.
#
sub range {
    my @r = @_;
    # Remember the highest sum seen so far in the node's {t} slot, then
    # scale the current sum against it to pick an index into @r.
    sub {
        $_[1]->{t} = $_[0] if $_[0] > $_[1]->{t};
        $r[ intr($_[0] / $_[1]->{t} * $#r) ];
    }
}
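# As an illustration, here is a sketch that calls the closure directly,
# using a bare hash in place of a real node object (the closure only
# touches the {t} slot, which tracks the highest sum seen so far):
#
#     my $act  = range(0..5);
#     my $node = { t => 0 };
#     print $act->(4.2, $node);   # 4.2 is the max seen so far => returns 5
#     print $act->(2.1, $node);   # 2.1/4.2 * 5 = 2.5, intr() rounds it to an index
#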
#
# ramp() performs smooth ramp activation between 0 and 1 if $r is 1,
# or between -1 and 1 if $r is 2. $r defaults to 1, as you can see.
#
# Note: when using a ramp() activator, train the
# net at least TWICE on the data set, because the first
# time the ramp() function searches for the top value in
# the inputs, and therefore, results could fluctuate.
# The second learning cycle guarantees more accuracy.
#
sub ramp {
    my $r = shift || 1;         # 1 => output in [0,1], 2 => output in [-1,1]
    my $t = ($r < 2) ? 0 : -1;  # offset for the two-sided ramp
    sub {
        $_[1]->{t} = $_[0] if $_[0] > $_[1]->{t};
        $_[0] / $_[1]->{t} * $r + $t;
    }
}
In this module I have included a more accurate form of "learning" for the
mesh. This form performs descent toward a local error minimum (0) on a
directional delta, rather than the desired value for that node. This allows
for better, more accurate results with larger datasets. This module also
uses a simpler recursion technique which, surprisingly, is more accurate than
the original technique I've used in other ANNs.
=head1 EXPORTS
This module exports three functions by default:

    range
    intr
    pdiff
See range(), intr(), and pdiff() for descriptions of their respective functions.

Also provided are several export tag sets, used in the form:

    use AI::NeuralNet::Mesh ':tag';
Tag sets are:

    :default
        - These functions are always exported.
        - Exports:
          range()
          intr()
          pdiff()

    :all
        - Exports:
          p()
          low()
          high()
          ramp()
          and_gate()
          or_gate()
          range()
          intr()
          pdiff()

    :p
        - Exports:
          p()
          low()
          high()
          intr()
          pdiff()

    :acts
        - Exports:
          ramp()
          and_gate()
          or_gate()
          range()
There are four ways to construct a new network with new(). Each is detailed below.

P.S. Don't worry, the old C<new($layers, $nodes [, $outputs])> still works like always!
=item AI::NeuralNet::Mesh->new($layers, $nodes [, $outputs]);
Returns a newly created neural network from an C<AI::NeuralNet::Mesh>
object. The network will have C<$layers> number of layers in it
and it will have C<$nodes> number of nodes per layer.
There is an optional parameter, $outputs, which specifies the number
of output neurons to provide. If $outputs is not specified, it
defaults to $nodes.
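For example:

    # 3 layers, 5 nodes per layer, and 1 output neuron
    my $net = AI::NeuralNet::Mesh->new(3, 5, 1);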
=item AI::NeuralNet::Mesh->new($file);
This will automatically create a new network from the file C<$file>. It will
return undef if the file is of an incorrect format or nonexistent. Otherwise,
it will return a blessed reference to a network completely restored from C<$file>.
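For example (the file name here is hypothetical):

    my $net = AI::NeuralNet::Mesh->new('mesh.net')
        or die "couldn't restore network from file";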
=item AI::NeuralNet::Mesh->new(\@layer_sizes);

This constructor will make a network with the number of layers corresponding to the
length in elements of the array ref passed, with each element specifying the number
of nodes in that layer. For example:

    my $net = AI::NeuralNet::Mesh->new([2, 3, 1]);

=item AI::NeuralNet::Mesh->new(\@array_of_hashes);

    my $net = AI::NeuralNet::Mesh->new([
        { nodes => 2 },
        { nodes => 3, activation => 'sigmoid' },
        { nodes => 1, activation => \&code_ref, threshold => 0.50 },
    ]);
You are passing an array ref, each element of which is a hash reference. Each
hash reference (that is, each element in the array reference you are passing
to the constructor) represents a layer in the network. Like the constructor above,
the first element is the input layer, and the last is the output layer. The rest are
hidden layers.

Each hash reference is expected to have AT LEAST the "nodes" key set to the number
of nodes (neurons) in that layer. The other two keys are optional. If "activation" is left
out, it defaults to "linear". If "threshold" is left out, it defaults to 0.50.
The "activation" key can be one of four values:
linear ( simply use sum of inputs as output )
sigmoid [ sigmoid_1 ] ( only positive sigmoid )
sigmoid_2 ( positive / 0 /negative sigmoid )
\&code_ref;
"sigmoid_1" is an alias for "sigmoid".
You can also set the activation and threshold values after network creation with the
activation() and threshold() methods.
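A minimal sketch of that, assuming activation() and threshold() take a layer index
followed by the new value, as in the activation() example shown earlier:

    $net->activation(1, 'sigmoid');   # give layer 1 a sigmoid activation
    $net->threshold(1, 0.75);         # and a custom threshold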
=item $net->learn($input_map_ref, $desired_result_ref [, options ]);

NOTE: learn_set() now has increment-degrading turned OFF by default. See the note
on the degrade flag, below.

This will 'teach' a network to associate a new input map with a desired
result. It will return a string containing benchmarking information.

You can also specify strings as inputs and outputs to learn, and they will be
crunched automatically. Example:

    $net->learn('corn', 'cob');

Note that the old method of calling crunch() on the values still works just as well.
The first two arguments may be array refs (or, now, strings), and they may be of
different lengths.

Options should be written in hash form. There are four options:
    inc     => $learning_gradient
    max     => $maximum_iterations
    error   => $maximum_allowable_percentage_of_error
    degrade => $degrade_increment_flag
$learning_gradient is an optional value used to adjust the weights of the internal
connections. If $learning_gradient is omitted, it defaults to 0.002.

$maximum_iterations is the maximum number of iterations the loop should do.
It defaults to 1024. Set it to 0 if you never want the loop to quit before
the pattern is perfectly learned.
$maximum_allowable_percentage_of_error is the maximum allowable error to have. If
this is set, then learn() will return when the percentage difference between the
actual results and desired results falls below $maximum_allowable_percentage_of_error.
If you do not include 'error', or $maximum_allowable_percentage_of_error is set to -1,
then learn() will not return until it gets an exact match for the desired result OR it
reaches $maximum_iterations.
$degrade_increment_flag is a simple flag used to allow/disallow increment degrading
during learning, based on a product of the error difference with several other factors.
$degrade_increment_flag is off by default. Setting $degrade_increment_flag to a true
value turns increment degrading on.
In previous releases, $degrade_increment_flag was not used, as increment degrading
was always on. For this release I looked at several other network types, as well as
several texts, and decided it would be better not to use increment degrading by
default. The option is still there for those who feel inclined to use it, and I have
found some cases that need the degrade flag in order to learn at a reasonable speed.
See test.pl for an example: if the degrade flag weren't used in test.pl, it would
take a very long time to learn.
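As a sketch (the input and output maps here are hypothetical), a call using all four
options might look like this:

    $net->learn([0, 1], [1],
        inc     => 0.05,   # larger learning gradient
        max     => 4000,   # give up after 4000 iterations
        error   => 5,      # or when error falls below 5%
        degrade => 1,      # with increment degrading turned on
    );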
=item $net->learn_set(\@set, [ options ]);

See the paragraph on measuring forgetfulness, below. There are two
learn_set()-specific option tags available:

    flag    => $flag
    pattern => $row
If "flag" is set to some TRUE value, as in "flag => 1" in the hash of options, or if the option "flag"
is not set, then it will return a percentage represting the amount of forgetfullness. Otherwise,
learn_set() will return an integer specifying the amount of forgetfulness when all the patterns
are learned.
If "pattern" is set, then learn_set() will use that pattern in the data set to measure forgetfulness by.
If "pattern" is omitted, it defaults to the first pattern in the set. Example:
    my @set = (
        [ 0,1,0,1 ], [ 0 ],
        [ 0,0,1,0 ], [ 1 ],
        [ 1,1,0,1 ], [ 2 ], # <---
        [ 0,1,1,0 ], [ 3 ]
    );
If you wish to measure forgetfulness as indicated by the line with the arrow, then you would
pass 2 as the "pattern" option, as in "pattern => 2".
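Using the @set above, such a call might look like this:

    # learn the whole set, measuring forgetfulness against the third pattern
    my $forgetfulness = $net->learn_set(\@set, flag => 1, pattern => 2);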
=item $net->run_set(\@set);

This takes an array ref of the same structure as learn_set(), above, and returns an
array ref of output values. The output values are the target values specified in the
$set passed. Each element in the returned array ref represents the output value for
the corresponding row in the dataset passed. (A row is two elements of the dataset
together; see learn_set() for dataset structure.)
=item $net->load_set($file, $column, $separator);

Loads a CSV-like dataset from disk.

Returns a data set of the same structure as required by the
learn_set() method. $file is the disk file to load the set from.
$column is an optional variable specifying the column in the
data set to use as the class attribute; $column defaults to 0.
$separator is an optional variable specifying the separator
character between values; $separator defaults to ',' (a single comma).
NOTE: This does not handle quoted fields, or any record
separator other than "\n".
The returned array ref is suitable for passing directly to
learn_set() or get_outs().
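A minimal sketch, assuming a hypothetical comma-separated file "data.csv" with the
class value in the first column:

    my $set = $net->load_set('data.csv', 0, ',');
    $net->learn_set($set);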
=item $net->range();
See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions.
=item $net->join_cols($array_ref, $row_length_in_elements, $high_state_character, $low_state_character);

This is more of a utility function than any real necessary function of the package.
Instead of joining all the elements of the array together in one long string, like join(),
it prints the elements of $array_ref to STDOUT, adding a newline (\n) after every
$row_length_in_elements elements have passed. Additionally, if you include a
$high_state_character and a $low_state_character, it will print the $high_state_character
(which can be more than one character) for every element that has a true value, and the
$low_state_character for every element that has a false value.
If you do not supply a $high_state_character, or the $high_state_character is a null,
empty, or undefined string, join_cols() will just print the numerical value of each
element separated by a null character (\0). join_cols() defaults to the latter behaviour.
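For instance, to print an 8-element array as two rows of four, with '#' for true
elements and '.' for false ones:

    $net->join_cols([1,0,0,1, 0,1,1,0], 4, '#', '.');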
=item $net->extend(\@array_of_hashes);
This allows you to re-apply any activations and thresholds with the same array ref which
you created a network with. This is useful for re-applying code ref activations after a load()
call without having to type the code ref twice.
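A sketch of that use, with a hypothetical activation sub and file name:

    my @layers = (
        { nodes => 2 },
        { nodes => 2, activation => \&my_act },   # \&my_act is hypothetical
    );
    my $net = AI::NeuralNet::Mesh->new(\@layers);
    $net->save('net.mesh');
    # ...later: code refs are not saved to disk, so re-apply them after loading
    $net = AI::NeuralNet::Mesh->new('net.mesh');
    $net->extend(\@layers);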
You can also specify the extension in a simple array ref like this:

    $net->extend([2,3,1]);

=item $net->random($rand);

This will set the randomness factor for the network. Default is 0. When called
with no arguments, or an undef value, it will return the current randomness value.
When called with a 0 value, it will disable randomness in the network. The randomness
factor is preserved across load() and save() calls.
=item $net->const($const);

This sets the run constant for the network. The run constant is a value added
to every input line when a set of inputs is run() or learn()-ed, to prevent the
network from hanging on a 0 value. When called with no arguments, it returns the
current constant value. It defaults to 0.0001 on a newly-created network. The run
constant is preserved across load() and save() calls.
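For example:

    my $c = $net->const();   # 0.0001 on a newly-created network
    $net->const(0.001);      # use a larger run constant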
=item $net->error();

Returns the last error message which occurred in the mesh, or undef if no errors have
occurred.
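For example (assuming errors leave the return value undefined):

    my $result = $net->run([1, 1]);
    die $net->error() if !defined $result;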
=item $net->load_pcx($filename);

NOTE: To use this function, you must have PCX::Loader installed. On success it
returns a blessed reference to a PCX::Loader object; see the example files
ex_pcx.pl and ex_pcxl.pl in the ./examples/ directory.

See C<perldoc PCX::Loader> for information on the methods of the object returned.
You can download PCX::Loader from
http://www.josiah.countystart.com/modules/get.pl?pcx-loader:mpod
=head1 CUSTOM ACTIVATION FUNCTIONS
Included in this package are four custom activation functions meant to be used
as a guide to create your own, as well as to be useful to you in normal use of the
module. There is only one function exported by default into your namespace, which
is the range() function. These are not meant to be used as methods, but as functions.
These functions return code refs to a Perl closure which does the actual work when
the time comes.
=item range(0..X);
=item range(@range);
=item range(A,B,C);
range() returns a closure that limits the output of a node to one of the discrete
values in the list passed. It tracks the highest input sum the node has seen and
scales the current sum against that maximum, expanding it to fit smoothly inside
the number of elements in the array. Then we simply round to an integer and pluck
that index from the array and use it as the output value for that node.

See? It's not that hard! Using custom activation functions, you could do
just about anything with the node that you want to, since you have
access to the node just as if you were a blessed member of that node's object.
=item ramp($r);
ramp() performs smooth ramp activation between 0 and 1 if $r is 1,
or between -1 and 1 if $r is 2. $r defaults to 1.
You can get this into your namespace with the ':acts' export
tag as so:
use AI::NeuralNet::Mesh ':acts';
Note: when using a ramp() activator, train the
net at least TWICE on the data set, because the first
time the ramp() function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.
No code to show here, as it is almost exactly the same as range().
=item and_gate($threshold);
Self-explanatory, pretty much. This turns the node into a basic AND gate.
$threshold is used to decide if an input is true or false (1 or 0). If
an input is below $threshold, it is false. $threshold defaults to 0.5.
You can get this into your namespace with the ':acts' export
tag as so:
use AI::NeuralNet::Mesh ':acts';
Let's look at the code real quick, as it shows how to get at the individual
input connections:
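As a sketch (not the module's exact code), assuming the node object stores its
input connections in $self->{_inputs}, an array ref of hashes with a {value} key
(check the module source for the exact member names):

    sub my_and_gate {                # hypothetical name, modeled on and_gate()
        my $threshold = shift || 0.5;
        sub {
            my $sum  = shift;        # the node's summed input
            my $self = shift;        # the node object itself
            for my $input (@{ $self->{_inputs} }) {        # assumed member name
                return 0 if $input->{value} < $threshold;  # any false input shuts the gate
            }
            return $sum;             # all inputs true: pass the sum through
        }
    }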
=item or_gate($threshold);

Pretty much self-explanatory as well: this turns the node into a basic OR gate,
with $threshold deciding whether an input counts as true, just as in and_gate().
You can get this into your namespace with the ':acts' export tag as so:

    use AI::NeuralNet::Mesh ':acts';
=head1 VARIABLES
=item $AI::NeuralNet::Mesh::Connector
Using this variable is a step up from average use of this module. It
should hold the fully qualified name of the function used to make the actual connections
between the nodes in the network. It contains '_c' by default, but if you use
this variable, be sure to give the fully qualified name of the method. For example,
in the ALN example, I use a connector in the main package called tree() instead of
the default connector. Before I call the new() constructor, I use this line of code:
$AI::NeuralNet::Mesh::Connector = 'main::tree'
The tree() function is called as a blessed method when it is used internally, providing
access to the blessed reference in the first argument. See notes on CUSTOM NETWORK CONNECTORS,
below, for more information on creating your own custom connector.
=item $AI::NeuralNet::Mesh::DEBUG
This variable controls the verbosity level. It will not hurt anything to set this
directly, yet most people find it easier to set it using the debug() method, or
any of its aliases.
=head1 CUSTOM NETWORK CONNECTORS
Creating custom network connectors is a step up from average use of this module.
However, it can be very useful in creating other styles of neural networks, other
than the default fully-connected feed-forward network.
You create a custom connector by setting the variable $AI::NeuralNet::Mesh::Connector
to the fully qualified name of the function used to make the actual connections
between the nodes in the network. This variable contains '_c' by default, but if you use
this variable, be sure to give the fully qualified name of the method. For example,
in the ALN example, I use a connector in the main package called tree() instead of
the default connector. Before I call the new() constructor, I use this line of code:
$AI::NeuralNet::Mesh::Connector = 'main::tree'
The tree() function is called as a blessed method when it is used internally, providing
access to the blessed reference in the first argument.
Example connector:
    sub connect_three {
        my $self = shift;
        my $r1a  = shift;    # index of the first node in the source range
        my $r1b  = shift;    # one past the last node in the source range
        my $r2a  = shift;    # index of the first node in the destination range
        my $r2b  = shift;    # one past the last node in the destination range
        my $mesh = $self->{mesh};
        for my $y (0..($r1b-$r1a)-1) {
            # Connect to the node "above" and its two neighbors, staying in range.
            $mesh->[$y+$r1a]->add_output_node($mesh->[$y+$r2a]);
            $mesh->[$y+$r1a]->add_output_node($mesh->[$y+$r2a-1]) if $y > 0;
            $mesh->[$y+$r1a]->add_output_node($mesh->[$y+$r2a+1]) if $y+$r2a+1 < $r2b;
        }
    }
The connections are made with a simple method of the outputting node (the node at
$y+$r1a), called add_output_node(). add_output_node() takes one simple argument:
a blessed reference to the node that it is supposed to output its final value TO.
We get this blessed reference with more simple addition.

$y + $r2a gives us the node directly above the first node (supposedly...I'll get to
the "supposedly" part in a minute.) By adding or subtracting from this number we get
the neighbor nodes. In the above example you can see we check the $y index to see
that we haven't come close to any of the edges of the range.
Using $y+$r2a we get the index of the node to pass to add_output_node() on the first node at
$y+B<$r1a>.
And that's all there is to it!
For the fun of it, we'll take a quick look at the default connector.
Below is the actual default connector code, albeit a bit cleaned up, as well as
line numbers added.
    = line 1  = sub _c {
    = line 2  =     my $self = shift;
    = line 3  =     my $r1a  = shift;
    = line 4  =     my $r1b  = shift;
    = line 5  =     my $r2a  = shift;
    = line 6  =     my $r2b  = shift;
    = line 7  =     my $mesh = $self->{mesh};
    = line 8  =     for my $y ($r1a..$r1b-1) {
    = line 9  =         for my $z ($r2a..$r2b-1) {
    = line 10 =             $mesh->[$y]->add_output_node($mesh->[$z]);
    = line 11 =         }
    = line 12 =     }
    = line 13 = }