AI-NeuralNet-Mesh
# Package constructor
sub new {
	no strict 'refs';
	my $type    = shift;
	my $self    = {};
	my $layers  = shift;
	my $nodes   = shift;
	my $outputs = shift || $nodes;
	my $inputs  = shift || $nodes;
	bless $self, $type;
	# If $layers is a string, then it will be numerically equal to 0, so
	# try to load it as a network file.
	if($layers == 0) {
		# We use a "1" flag as the second argument to indicate that we
		# want load() to call the new constructor to make a network the
		# same size as in the file and return a reference to the network,
		# instead of just creating the network from a pre-existing reference.
		return $self->load($layers,1);
	}
	# ... (mesh construction elided in this excerpt) ...
	return $self;
}
# Set the activation type of a specific layer.
# usage: $net->activation($layer,$type);
# $type can be: "linear", "sigmoid", "sigmoid_2".
# You can use "sigmoid_1" as a synonym to "sigmoid".
# $type can also be a CODE ref ( ref($type) eq "CODE" ).
# If $type is a CODE ref, then the function is called in this form:
#     $output = &$type($sum_of_inputs,$self);
# The code ref then has access to all the data in that node (through the
# blessed reference $self) and is expected to return the value to be used
# as the output for that node. The inputs to that node are already summed,
# and the total is passed as the first argument.
sub activation {
	my $self  = shift;
	my $layer = shift || 0;
	my $value = shift || 'linear';
	my $n     = 0;
	no strict 'refs';
	# Count the nodes in the layers below to find this layer's offset.
	for(0..$layer-1){ $n += $self->{layers}->[$_] }
	for($n..$n+$self->{layers}->[$layer]-1) {
		# Apply the new activation type to every node in the layer.
		$self->{mesh}->[$_]->{activation} = $value;
	}
	use strict;
}

package AI::NeuralNet::Mesh::node;

# Node constructor
sub new {
	my $type = shift;
	my $self = {
		_parent  => shift,
		_inputs  => [],
		_outputs => []
	};
	bless $self, $type;
}
# Receive an input from another node, scaled by the
# weight of the connection it arrived on.
sub input {
	my $self    = shift;
	my $input   = shift;
	my $from_id = shift;
	$self->{_inputs}->[$from_id]->{value} = $input * $self->{_inputs}->[$from_id]->{weight};
}

# Adjust the weights of all input connections, and
# propagate the adjustment back through the mesh.
sub adjust_weight {
	my $self = shift;
	my $inc  = shift;
	for my $i (@{$self->{_inputs}}) {
		$i->{weight} += $inc * $i->{weight};
		$i->{node}->adjust_weight($inc) if($i->{node});
	}
}
1;
# Internal usage, prevents recursion on empty nodes.
package AI::NeuralNet::Mesh::cap;
sub new { bless {}, shift }
sub input {}
sub adjust_weight {}
sub add_output_node {}
sub add_input_node {}
1;
# Internal usage, collects data from output layer.
package AI::NeuralNet::Mesh::output;
use strict;
sub new {
	my $type = shift;
	my $self = {
		_parent => shift,
		_inputs => [],
	};
	bless $self, $type;
}

# Register a connection from an output-layer node;
# returns the index assigned to that connection.
sub add_input_node {
	my $self = shift;
	return (++$self->{_inputs_size})-1;
}

sub input {
	my $self    = shift;
	my $input   = shift;
	my $from_id = shift;
	# Record the final value from the connected output-layer node.
	$self->{_inputs}->[$from_id] = $input;
}
1;
=item AI::NeuralNet::Mesh->new($layers, $nodes [, $outputs]);

Returns a newly created neural network object. The network will have C<$layers>
number of layers in it, and it will have C<$nodes> number of nodes per layer.

There is an optional parameter of $outputs, which specifies the number
of output neurons to provide. If $outputs is not specified, $outputs
defaults to equal $nodes.
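A minimal usage sketch of this form (the sizes here are arbitrary):

	# 3 layers, 2 nodes per layer, 1 output neuron
	my $net = AI::NeuralNet::Mesh->new(3,2,1);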
=item AI::NeuralNet::Mesh->new($file);
This will automatically create a new network from the file C<$file>. It will
return undef if the file was of an incorrect format or non-existent. Otherwise,
it will return a blessed reference to a network completely restored from C<$file>.
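A minimal sketch of loading from a file (assuming "mesh.net" was previously
written with the save() method):

	my $net = AI::NeuralNet::Mesh->new("mesh.net");
	die "Couldn't restore network from file!" unless defined $net;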
=item AI::NeuralNet::Mesh->new(\@layer_sizes);
This constructor will make a network with as many layers as there are elements
in the array ref passed. Each element in the array ref is expected to be an
integer specifying the number of nodes (neurons) in that layer. The first
layer ($layer_sizes[0]) is the input layer, and the last layer in @layer_sizes
is the output layer.
Example (the layer sizes here are illustrative):
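	# Two input nodes, a hidden layer of three nodes, and one output node.
	my $net = AI::NeuralNet::Mesh->new([2,3,1]);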
"sigmoid_1" is an alias for "sigmoid".
The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:
$output = &$code_ref($sum_of_inputs, $self);
The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash refrence to that node.
See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.
Three of the activation syntaxes are shown in the activation list above: the
"linear", "sigmoid" and code ref types.
You can also set the activation and threshold values after network creation with the
activation() and threshold() methods.
"sigmoid_1" is an alias for "sigmoid".
The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:
$output = &$code_ref($sum_of_inputs, $self);
The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash refrence to that node.
See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.
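For example, a sketch of a custom clipping activation applied to layer 1 (the
function body here is purely illustrative):

	$net->activation(1, sub {
		my ($sum, $self) = @_;
		return  1 if($sum >  1);
		return -1 if($sum < -1);
		return $sum;
	});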
The activation type for each layer is preserved across load/save calls.
EXCEPTION: Due to the constraints of Perl, I cannot load/save the actual subs that the code
ref option points to. Therefore, you must re-apply any code ref activation types after a
load() call.
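A minimal sketch of re-applying a code ref after a load (the filename, layer
index, and function name are illustrative):

	my $net = AI::NeuralNet::Mesh->new("mesh.net");
	$net->activation(1, \&my_activation);    # re-apply after every load()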
=item $net->node_activation($layer,$node,$type);

This sets the activation function for a specific node in a layer. The same
notes that apply to activation(), above, also apply here.

=item $net->load_pcx($filename);

NOTE: To use this function, you must have PCX::Loader installed. If you do not have
PCX::Loader installed, it will return undef and store an error for you to retrieve with
the error() method, below.

This is a treat... this routine will load a PCX-format file (yah, I know ... ancient
format ... but it is the only one I could find specs for to write it in Perl. If
anyone can get specs for any other formats, or could write a loader for them, I
would be very grateful!) Anyways, it loads a PCX-format file that is exactly 320x200
with 8 bits per pixel, in pure Perl. It returns a blessed reference to a PCX::Loader
object, which supports a number of routines and members. See example files ex_pcx.pl
and ex_pcxl.pl in the ./examples/ directory.

See C<perldoc PCX::Loader> for information on the methods of the object returned.

You can download PCX::Loader from
http://www.josiah.countystart.com/modules/get.pl?pcx-loader:mpod
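A usage sketch (the filename is illustrative):

	my $img = $net->load_pcx("ex.pcx");
	die $net->error() unless defined $img;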
=head1 CUSTOM ACTIVATION FUNCTIONS
Included in this package are four custom activation functions meant to be used
as a guide to create your own, as well as to be useful in normal use of the
module. The example checks whether the value of the current node is higher
than any previously encountered and, if so, it sets the marker higher. This
also shows that you can use the $self reference to maintain information across
activations. This technique is also used in the ramp() activator. Line 6
computes the index into the allowed-values array by first scaling $sum to be
between 0 and 1 and then expanding it to fit smoothly inside the number of
elements in the array. Then we simply round to an integer, pluck that index
from the array, and use it as the output value for that node.
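As a rough sketch of the kind of activator being described (this is not the
module's actual source; the names and the scaling scheme are illustrative), an
allowed-values activator could be written as:

	sub values_activator {
		my @values = @_;    # the allowed output values
		return sub {
			my ($sum, $self) = @_;
			# Use $self to remember the highest sum seen so far.
			$self->{_max_seen} = $sum
				if(!defined $self->{_max_seen} || $sum > $self->{_max_seen});
			# Scale $sum to 0..1, then expand it across the array.
			my $scaled = $self->{_max_seen} ? $sum / $self->{_max_seen} : 0;
			my $index  = int($scaled * $#values + 0.5);    # round to an index
			$index = 0 if($index < 0);
			return $values[$index];
		};
	}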
See? It's not that hard! Using custom activation functions, you could do
just about anything with the node that you want to, since you have
access to the node just as if you were a blessed member of that node's object.
=item ramp($r);
ramp() performs smooth ramp activation between 0 and 1 if $r is 1,
or between -1 and 1 if $r is 2. $r defaults to 1.

You can get this into your namespace with the ':acts' export
tag, as so:

	use AI::NeuralNet::Mesh ':acts';
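A usage sketch, assuming ramp()'s return value is passed as an activation type
(the layer sizes are arbitrary):

	use AI::NeuralNet::Mesh ':acts';
	my $net = AI::NeuralNet::Mesh->new(2,2);
	$net->activation(1, ramp(2));    # ramp between -1 and 1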
=item $AI::NeuralNet::Mesh::Connector

This option is a step up from average use of this module. This variable
should hold the fully qualified name of the function used to make the actual connections
between the nodes in the network. It contains '_c' by default, but if you set
this variable, be sure to use the fully qualified name of your method. For example,
in the ALN example, I use a connector in the main package called tree() instead of
the default connector. Before I call the new() constructor, I use this line of code:

	$AI::NeuralNet::Mesh::Connector = 'main::tree'

The tree() function is called as a blessed method when it is used internally, providing
access to the blessed reference in the first argument. See notes on CUSTOM NETWORK CONNECTORS,
below, for more information on creating your own custom connector.
=item $AI::NeuralNet::Mesh::DEBUG
This variable controls the verbosity level. It will not hurt anything to set this
directly, yet most people find it easier to set it using the debug() method, or
any of its aliases.
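For example (the level shown is arbitrary):

	$AI::NeuralNet::Mesh::DEBUG = 4;    # or, equivalently: $net->debug(4);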
=head1 CUSTOM NETWORK CONNECTORS

Custom network connectors allow you to create your own network topologies other
than the default fully-connected feed-forward network.

You create a custom connector by setting the variable $AI::NeuralNet::Mesh::Connector
to the fully qualified name of the function used to make the actual connections
between the nodes in the network. This variable contains '_c' by default, but if you set
this variable, be sure to use the fully qualified name of your method. For example,
in the ALN example, I use a connector in the main package called tree() instead of
the default connector. Before I call the new() constructor, I use this line of code:

	$AI::NeuralNet::Mesh::Connector = 'main::tree'

The tree() function is called as a blessed method when it is used internally, providing
access to the blessed reference in the first argument.
Example connector (the loop body here is reconstructed from the walkthrough
below; it connects each node to its neighbor nodes in the next layer up):

	sub connect_three {
		my $self = shift;
		my $r1a  = shift;
		my $r1b  = shift;
		my $r2a  = shift;
		my $r2b  = shift;
		my $mesh = $self->{mesh};
		for my $y (0..$r1b-$r1a) {
			$mesh->[$y+$r1a]->add_output_node($mesh->[$y+$r2a-1]) if($y > 0);
			$mesh->[$y+$r1a]->add_output_node($mesh->[$y+$r2a]);
			$mesh->[$y+$r1a]->add_output_node($mesh->[$y+$r2a+1]) if($y < $r2b-$r2a);
		}
	}

The first two arguments, $r1a and $r1b, define the start and end indices of the first
range, or "layer." Likewise, the next two arguments, $r2a and $r2b, define the start
and end indices of the second layer. We also grab a reference to the mesh array so we
don't have to type the $self reference over and over.
The loop that follows the arguments in the above example is very simple. It opens
a for() loop over the range of numbers, calculating the size instead of just going
$r1a..$r1b because we use the loop index with the next layer up as well.

$y + $r1a gives the index into the mesh array of the current node to connect the output FROM.
We need to connect this node's output lines to the next layer's input nodes. We do this
with a simple method of the outputting node (the node at $y+$r1a), called add_output_node().

add_output_node() takes one simple argument: a blessed reference to a node that it is supposed
to output its final value TO. We get this blessed reference with more simple addition.

$y + $r2a gives us the node directly above the first node (supposedly... I'll get to the "supposedly"
part in a minute). By adding or subtracting from this number we get the neighbor nodes.
In the above example you can see we check the $y index to see that we haven't come close to
any of the edges of the range.

Using $y+$r2a we get the index of the node to pass to add_output_node() on the first node at
$y+B<$r1a>.

And that's all there is to it!
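Putting it together, a sketch of wiring the example connector in before
creating a network (the sizes are arbitrary):

	$AI::NeuralNet::Mesh::Connector = 'main::connect_three';
	my $net = AI::NeuralNet::Mesh->new(2,8);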