AI-NeuralNet-Mesh
#!/usr/bin/perl
# Copyright (c) 2000 Josiah Bryan USA
#
# See AUTHOR section in pod text below for usage and distribution rights.
#
BEGIN {
$AI::NeuralNet::Mesh::VERSION = "0.44";
$AI::NeuralNet::Mesh::ID =
'$Id: AI::NeuralNet::Mesh.pm, v'.$AI::NeuralNet::Mesh::VERSION.' 2000/15/09 03:29:08 josiah Exp $';
}
package AI::NeuralNet::Mesh;

sub new {
my $type = shift;
my $self = {};
my $layers = shift;
my $nodes = shift;
my $outputs = shift || $nodes;
my $inputs = shift || $nodes;
bless $self, $type;
# If $layers is a string, then it will be numerically equal to 0, so
# try to load it as a network file.
if($layers == 0) {
# We use a "1" flag as the second argument to indicate that we
# want load() to call the new constructor to make a network the
# same size as in the file and return a reference to the network,
# instead of just loading the data into a pre-existing network reference.
return $self->load($layers,1);
}
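# Example (illustrative): both construction styles work --
#   my $net = AI::NeuralNet::Mesh->new(2, 2, 1);     # build a 2-layer, 2-node net with 1 output
#   my $net = AI::NeuralNet::Mesh->new('save.mesh'); # reload a saved network from disk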
# Looks like we got ourselves a layer specs array
if(ref($layers) eq "ARRAY") {
if(ref($layers->[0]) eq "HASH") {
$self->{total_nodes} = 0;
# values (not array ref), or you can pass a comma-
# separated list of values as parameters:
#
# $net->activation(4,range(@numbers));
# $net->activation(4,range(6,15,26,106,28,3));
#
# Note: when using a range() activator, train the
# net TWICE on the data set, because the first time
# the range() function searches for the top value in
# the inputs, and therefore results could fluctuate.
# The second learning cycle guarantees more accuracy.
#
sub range {
    my @r = @_;
    sub {
        # Track the top sum seen so far, scale by it, and pick the nearest range value.
        $_[1]->{t} = $_[0] if ($_[0] > $_[1]->{t});
        $r[intr($_[0] / $_[1]->{t} * $#r)];
    }
}
#
# ramp() performs smooth ramp activation between 0 and 1 if $r is 1,
# or between -1 and 1 if $r is 2. $r defaults to 1, as you can see.
#
# Note: when using a ramp() activator, train the
# net at least TWICE on the data set, because the first
# time the ramp() function searches for the top value in
# the inputs, and therefore results could fluctuate.
# The second learning cycle guarantees more accuracy.
#
sub ramp {
    my $r = shift || 1;
    my $t = ($r < 2) ? 0 : -1;   # lower bound of the output range
    sub {
        # Track the top sum seen so far, then scale into [0,1] or [-1,1].
        $_[1]->{t} = $_[0] if ($_[0] > $_[1]->{t});
        $_[0] / $_[1]->{t} * $r + $t;
    }
}
# Self-explanatory, pretty much. $threshold is used to decide if an input
# is true or false (1 or 0). If an input is below $threshold, it is false.
sub and_gate {
my $threshold = shift || 0.5;
\&code_ref;
"sigmoid_1" is an alias for "sigmoid".
The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:
$output = &$code_ref($sum_of_inputs, $self);
The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.
See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.
Three of the activation syntaxes are shown in the first constructor above, the "linear",
"sigmoid" and code ref types.
You can also set the activation and threshold values after network creation with the
activation() and threshold() methods.
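For example, here is a minimal sketch of a custom activation code ref
(the clipping bounds and layer number are illustrative, not from the
module itself):

    # Clip the node's summed input to the range [0, 1].
    my $clip = sub {
        my ($sum, $node) = @_;  # $node is the blessed hash ref for this node
        return 0 if $sum < 0;
        return 1 if $sum > 1;
        return $sum;
    };
    $net->activation(1, $clip); # apply it to layer 1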
\&code_ref;
"sigmoid_1" is an alias for "sigmoid".
The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:
$output = &$code_ref($sum_of_inputs, $self);
The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.
See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.
The activation type for each layer is preserved across load/save calls.
EXCEPTION: Due to the constraints of Perl, I cannot load/save the actual subs that the code
ref option points to. Therefore, you must re-apply any code ref activation types after a
load() call.
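In practice that looks something like this (a sketch; $my_activation
stands for whatever closure you used before saving):

    my $net = AI::NeuralNet::Mesh->new('save.mesh'); # string arg triggers load()
    $net->activation(1, $my_activation);             # re-apply the code ref by hand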
values (not array ref), or you can pass a comma-
separated list of values as parameters:
$net->activation(4,range(@numbers));
$net->activation(4,range(6,15,26,106,28,3));
Note: when using a range() activator, train the
net TWICE on the data set, because the first time
the range() function searches for the top value in
the inputs, and therefore results could fluctuate.
The second learning cycle guarantees more accuracy.
The actual code that implements the range closure is
a bit convoluted, so I will expand on it here as a simple
tutorial for custom activation functions.
= line 1 = sub {
= line 2 = my @values = ( 6..10 );
= line 3 = my $sum = shift;
= line 4 = my $self = shift;
= line 5 = $self->{top_value}=$sum if($sum>$self->{top_value});
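The closure is truncated above at line 5; the remaining lines compute
the index and return the matching range value. Assembled, and assuming
intr() is the module's round-to-integer helper, the whole closure is
roughly:

    sub {
        my @values = ( 6..10 );
        my $sum  = shift;
        my $self = shift;
        # Track the largest sum seen so far on this node.
        $self->{top_value} = $sum if $sum > $self->{top_value};
        # Scale by the running maximum and map onto @values.
        my $index = intr($sum / $self->{top_value} * $#values);
        return $values[$index];
    }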
You can get this into your namespace with the ':acts' export
tag, like so:
use AI::NeuralNet::Mesh ':acts';
Note: when using a ramp() activator, train the
net at least TWICE on the data set, because the first
time the ramp() function searches for the top value in
the inputs, and therefore results could fluctuate.
The second learning cycle guarantees more accuracy.
No code to show here, as it is almost exactly the same as range().
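Usage mirrors range(); for instance (the layer number is illustrative):

    use AI::NeuralNet::Mesh ':acts';
    $net->activation(2, ramp(2)); # squash layer 2's outputs into [-1, 1]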
=item and_gate($threshold);
Self-explanatory, pretty much. This turns the node into a basic AND gate.
$threshold is used to decide if an input is true or false (1 or 0). If
an input is below $threshold, it is false. $threshold defaults to 0.5.
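A usage sketch (the layer number and threshold are illustrative):

    use AI::NeuralNet::Mesh ':acts';
    $net->activation(1, and_gate(0.5)); # inputs below 0.5 count as false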
This is a very simple example. It feeds the outputs of every node in the first layer
to the node directly above it, as well as the nodes on either side of the node directly
above it, checking the range boundaries, of course.
The network is stored internally as one long array of node objects. The goal here
is to connect one range of nodes in that array to another range of nodes. The calling
function has already calculated the indices into the array, and it passes them to you
as the four arguments after the $self reference. The first two arguments we will call
$r1a and $r1b. These define the start and end indices of the first range, or "layer." Likewise,
the next two arguments, $r2a and $r2b, define the start and end indices of the second
layer. We also grab a reference to the mesh array so we don't have to type the $self
reference over and over.
The loop that follows the arguments in the above example is very simple. It opens
a for() loop over the range of numbers, calculating the size instead of just going
$r1a..$r1b because we use the loop index with the next layer up as well.
$y + $r1a gives the index into the mesh array of the current node to connect the output FROM.
We need to connect this node's output lines to the next layer's input nodes. We do this
with a simple method of the outputting node (the node at $y+$r1a), called add_output_node().
= line 6 = my $r2b = shift;
= line 7 = my $mesh = $self->{mesh};
= line 8 = for my $y ($r1a..$r1b-1) {
= line 9 = for my $z ($r2a..$r2b-1) {
= line 10 = $mesh->[$y]->add_output_node($mesh->[$z]);
= line 11 = }
= line 12 = }
= line 13 = }
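Assembled with lines 1 through 5 (not shown in this excerpt), the
complete connector looks roughly like this (a reconstructed sketch; the
name _c follows the module's connector naming):

    sub _c {
        my ($self, $r1a, $r1b, $r2a, $r2b) = @_;
        my $mesh = $self->{mesh};
        # Connect every node in the first range to every node in the second.
        for my $y ($r1a..$r1b-1) {
            for my $z ($r2a..$r2b-1) {
                $mesh->[$y]->add_output_node($mesh->[$z]);
            }
        }
    }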
It's that easy! The simplest connector (well, almost, anyway). It just connects each
node in the first layer, defined by ($r1a..$r1b), to every node in the second layer, as
defined by ($r2a..$r2b).
For those of you still reading: if you do come up with any new connection functions,
PLEASE SEND THEM TO ME. I would love to see what others are doing, as well as get new
network ideas. I will probably include any connectors you send over in future releases (with
proper credit and permission, of course).
Anyway, happy coding!
examples/addnet_data.txt
layers inc top forgetfulness time %diff1 %diff2 %diff3 %diff4
1 0.025 1 0 0 wallclock secs ( 0.06 usr + 0.00 sys = 0.06 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.025 2 0 0 wallclock secs ( 0.06 usr + 0.00 sys = 0.06 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.025 3 0 0 wallclock secs ( 0.11 usr + 0.00 sys = 0.11 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.025 4 0 0 wallclock secs ( 0.17 usr + 0.00 sys = 0.17 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.050 1 0 0 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.050 2 0 0 wallclock secs ( 0.06 usr + 0.00 sys = 0.06 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.050 3 0 0 wallclock secs ( 0.11 usr + 0.00 sys = 0.11 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.050 4 0 0 wallclock secs ( 0.16 usr + 0.00 sys = 0.16 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.075 1 0 0 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.075 2 0 0 wallclock secs ( 0.05 usr + 0.00 sys = 0.05 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.075 3 0 0 wallclock secs ( 0.11 usr + 0.00 sys = 0.11 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.075 4 0 1 wallclock secs ( 0.22 usr + 0.00 sys = 0.22 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.100 1 0 0 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.100 2 0 0 wallclock secs ( 0.11 usr + 0.00 sys = 0.11 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.100 3 0 0 wallclock secs ( 0.11 usr + 0.00 sys = 0.11 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.100 4 0 0 wallclock secs ( 0.16 usr + 0.00 sys = 0.16 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.125 1 0 0 wallclock secs ( 0.05 usr + 0.00 sys = 0.05 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.125 2 0 1 wallclock secs ( 0.11 usr + 0.00 sys = 0.11 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.125 3 0 0 wallclock secs ( 0.11 usr + 0.00 sys = 0.11 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.125 4 0 0 wallclock secs ( 0.16 usr + 0.00 sys = 0.16 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.150 1 0 0 wallclock secs ( 0.05 usr + 0.00 sys = 0.05 CPU) 0.000000 0.000000 0.000000 0.000000
1 0.150 2 0 0 wallclock secs ( 0.11 usr + 0.00 sys = 0.11 CPU) 0.000000 0.000000 0.000000 0.000000
examples/ex_add.pl
[ 150, 150 ], [ 300 ],
[ 500, 500 ], [ 1000 ],
[ 10, 10 ], [ 20 ],
[ 15, 15 ], [ 30 ],
[ 12, 8 ], [ 20 ],
]);
$addition->save('add.mesh');
}
print "Enter first number to add : "; chomp(my $a = <>);
print "Enter second number to add : "; chomp(my $b = <>);
print "Result: ",$addition->run([$a,$b])->[0],"\n";
examples/ex_aln.pl
}
$self->extend($self->{_original_specs});
return $self;
};
# If $leaves is a string, then it will be numerically equal to 0, so
# try to load it as a network file.
if($leaves == 0) {
# We use a "1" flag as the second argument to indicate that we
# want load() to call the new constructor to make a network the
# same size as in the file and return a reference to the network,
# instead of just loading the data into a pre-existing network reference.
my $self = AI::NeuralNet::Mesh->new(1,1);
return $self->load($leaves,1);
}
# Initialize our counter and our specs ref
my $specs = [];
my $level = 0;
examples/ex_aln.pl
# Create a new network from our specs
my $net = AI::NeuralNet::Mesh->new($specs);
$net->{_original_specs} = $specs;
# Return our new network
return $net;
}
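# Example usage (illustrative), mirroring the load branch above: pass a
# leaf count to build a new tree network, or a filename to reload one:
#   my $aln = tree(8);          # build a tree with 8 leaves
#   my $aln = tree('my.tree');  # numerically 0, so treated as a file to load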
# Our custom node connector for the tree function, above.
# This connects every two nodes from the first range
# to one node of the second range. This is only meant
# to be used in a factored layer mesh, such as one with a
# [8,4,2,1] node specification array. You should never
# worry about what the node spec array is, as that is
# built by tree().
sub _c_tree {
my ($self,$r1a,$r1b,$r2a,$r2b)=@_;
my $mesh = $self->{mesh};
my $z=$r2a;
for(my $y=0;$y<($r1b-$r1a);$y+=2) {
$mesh->[$y+$r1a]->add_output_node($mesh->[$z]); # note the $r1a offset into the first range
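# The loop presumably continues like this (a reconstructed sketch, not
# the verbatim file): the second node of the pair feeds the same parent,
# then we advance to the next parent node.
$mesh->[$y+$r1a+1]->add_output_node($mesh->[$z]);
$z++;
}
}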
examples/ex_mult.pl
[ 2, 8 ], [ 16 ],
[ 9, 9 ], [ 81 ],
[ 10, 5 ], [ 50 ],
[ 20, 10 ], [ 200 ],
[ 100, 50 ], [ 5000 ],
]);
$multiply->save('mult.mesh');
}
print "Enter first number to multiply : "; chomp(my $a = <>);
print "Enter second number to multiply : "; chomp(my $b = <>);
print "Result: ",$multiply->run([$a,$b])->[0],"\n";
examples/ex_sub.pl
[ 2, 1 ], [ 1 ],
[ 10, 5 ], [ 5 ],
[ 20, 10 ], [ 10 ],
[ 100, 50 ], [ 50 ],
[ 500, 200 ], [ 300 ],
]);
$subtract->save('sub.mesh');
}
print "Enter first number to subtract : "; chomp(my $a = <>);
print "Enter second number to subtract : "; chomp(my $b = <>);
print "Result: ",$subtract->run([$a,$b])->[0],"\n";
my $result_bit_2 = $net->run([1,1])->[0];
# Display the results
print "AND test with inputs (0,1): $result_bit_1\n";
print "AND test with inputs (1,1): $result_bit_2\n";
</PRE>
<P>
<HR>
<H1><A NAME="version & updates">VERSION & UPDATES</A></H1>
<P>This is version <STRONG>0.43</STRONG>, the second release of this module.</P>
<P>With this version I have gone through and tuned up many areas
of this module, including the descent algorithm in learn(),
as well as four custom activation functions, and several export
tag sets. With this release, I have also included a few
new and more practical example scripts. (See ex_wine.pl.) This release
also includes a simple example of an ALN (Adaptive Logic Network) made
with this module. See ex_aln.pl. Also in this release is support for
loading data sets from simple CSV-like files. See the <A HREF="#item_load_set"><CODE>load_set()</CODE></A> method
for details. This version also fixes a big bug that I never knew about
until writing some demos for this version: that is, when trying to use
sigmoid [ sigmoid_1 ] ( only positive sigmoid )
sigmoid_2 ( positive / 0 / negative sigmoid )
\&code_ref;</PRE>
<P>``sigmoid_1'' is an alias for ``sigmoid''.</P>
<P>The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:</P>
<PRE>
$output = &$code_ref($sum_of_inputs, $self);
</PRE>
<P>The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.</P>
<P>See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.</P>
<P>Three of the activation syntaxes are shown in the first constructor above, the ``linear'',
``sigmoid'' and code ref types.</P>
<P>You can also set the activation and threshold values after network creation with the
<A HREF="#item_activation"><CODE>activation()</CODE></A> and <A HREF="#item_threshold"><CODE>threshold()</CODE></A> methods.</P>
<P></P>
<P></P>
<DT><STRONG><A NAME="item_learn">$net->learn($input_map_ref, $desired_result_ref [, options ]);</A></STRONG><BR>
sigmoid [ sigmoid_1 ] ( only positive sigmoid )
sigmoid_2 ( positive / 0 / negative sigmoid )
\&code_ref;</PRE>
<P>``sigmoid_1'' is an alias for ``sigmoid''.</P>
<P>The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:</P>
<PRE>
$output = &$code_ref($sum_of_inputs, $self);
</PRE>
<P>The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.</P>
<P>See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.</P>
<P>The activation type for each layer is preserved across load/save calls.</P>
<P>EXCEPTION: Due to the constraints of Perl, I cannot load/save the actual subs that the code
ref option points to. Therefore, you must re-apply any code ref activation types after a
<A HREF="#item_load"><CODE>load()</CODE></A> call.</P>
<P></P>
<DT><STRONG><A NAME="item_node_activation">$net->node_activation($layer,$node,$type);</A></STRONG><BR>
<DD>
You can also pass an array containing the range
values (not array ref), or you can pass a comma-
separated list of values as parameters:</P>
<PRE>
$net->activation(4,range(@numbers));
$net->activation(4,range(6,15,26,106,28,3));</PRE>
<P>Note: when using a <A HREF="#item_range"><CODE>range()</CODE></A> activator, train the
net TWICE on the data set, because the first time
the <A HREF="#item_range"><CODE>range()</CODE></A> function searches for the top value in
the inputs, and therefore results could fluctuate.
The second learning cycle guarantees more accuracy.</P>
<P>The actual code that implements the range closure is
a bit convoluted, so I will expand on it here as a simple
tutorial for custom activation functions.</P>
<PRE>
= line 1 = sub {
= line 2 = my @values = ( 6..10 );
= line 3 = my $sum = shift;
= line 4 = my $self = shift;
= line 5 = $self->{top_value}=$sum if($sum>$self->{top_value});
= line 6 = my $index = intr($sum/$self->{top_value}*$#values);
or between -1 and 1 if $r is 2. $r defaults to 1.
<P>You can get this into your namespace with the ':acts' export
tag, like so:
</P>
<PRE>
use AI::NeuralNet::Mesh ':acts';</PRE>
<P>Note: when using a <A HREF="#item_ramp"><CODE>ramp()</CODE></A> activator, train the
net at least TWICE on the data set, because the first
time the <A HREF="#item_ramp"><CODE>ramp()</CODE></A> function searches for the top value in
the inputs, and therefore results could fluctuate.
The second learning cycle guarantees more accuracy.</P>
<P>No code to show here, as it is almost exactly the same as range().</P>
<P></P>
<DT><STRONG><A NAME="item_and_gate">and_gate($threshold);</A></STRONG><BR>
<DD>
Self-explanatory, pretty much. This turns the node into a basic AND gate.
$threshold is used to decide if an input is true or false (1 or 0). If
an input is below $threshold, it is false. $threshold defaults to 0.5.
<P>You can get this into your namespace with the ':acts' export
tag, like so:
</P>
}
}</PRE>
<P>This is a very simple example. It feeds the outputs of every node in the first layer
to the node directly above it, as well as the nodes on either side of the node directly
above it, checking the range boundaries, of course.</P>
<P>The network is stored internally as one long array of node objects. The goal here
is to connect one range of nodes in that array to another range of nodes. The calling
function has already calculated the indices into the array, and it passes them to you
as the four arguments after the $self reference. The first two arguments we will call
$r1a and $r1b. These define the start and end indices of the first range, or ``layer.'' Likewise,
the next two arguments, $r2a and $r2b, define the start and end indices of the second
layer. We also grab a reference to the mesh array so we don't have to type the $self
reference over and over.</P>
<P>The loop that follows the arguments in the above example is very simple. It opens
a <CODE>for()</CODE> loop over the range of numbers, calculating the size instead of just going
$r1a..$r1b because we use the loop index with the next layer up as well.</P>
<P>$y + $r1a gives the index into the mesh array of the current node to connect the output FROM.
We need to connect this node's output lines to the next layer's input nodes. We do this
with a simple method of the outputting node (the node at $y+$r1a), called add_output_node().</P>
<P><CODE>add_output_node()</CODE> takes one simple argument: a blessed reference to the node that it is supposed
to output its final value TO. We get this blessed reference with more simple addition.</P>
= line 6 = my $r2b = shift;
= line 7 = my $mesh = $self->{mesh};
= line 8 = for my $y ($r1a..$r1b-1) {
= line 9 = for my $z ($r2a..$r2b-1) {
= line 10 = $mesh->[$y]->add_output_node($mesh->[$z]);
= line 11 = }
= line 12 = }
= line 13 = }
</PRE>
<P>It's that easy! The simplest connector (well, almost, anyway). It just connects each
node in the first layer, defined by ($r1a..$r1b), to every node in the second layer, as
defined by ($r2a..$r2b).</P>
<P>For those of you still reading: if you do come up with any new connection functions,
PLEASE SEND THEM TO ME. I would love to see what others are doing, as well as get new
network ideas. I will probably include any connectors you send over in future releases (with
proper credit and permission, of course).</P>
<P>Anyway, happy coding!</P>
<P>
<HR>
<H1><A NAME="what can it do">WHAT CAN IT DO?</A></H1>
<P>Rodin Porrata asked on the ai-neuralnet-backprop mailing list,