</PRE>
<P>Tag sets are:</P>
<PRE>
    :default
        - These functions are always exported.
        - Exports:
            range()
            intr()
            pdiff()

    :all
        - Exports:
            p()
            high()
            low()
            range()
            ramp()
            and_gate()
            or_gate()

    :p
        - Exports:
            p()
            high()
            low()

    :acts
        - Exports:
            ramp()
            and_gate()
            or_gate()
</PRE>
<P>See the respective methods/functions below for information about
each one's usage.</P>
<P>
<HR>
<H1><A NAME="methods">METHODS</A></H1>
<DL>
<DT><STRONG><A NAME="item_new">AI::NeuralNet::Mesh->new();</A></STRONG><BR>
<DD>
There are four ways to construct a new network with new(). Each is detailed below.
<P>P.S. Don't worry, the old <A HREF="#item_new"><CODE>new($layers, $nodes [, $outputs])</CODE></A> still works like always!</P>
<P></P>
<DT><STRONG>AI::NeuralNet::Mesh->new($layers, $nodes [, $outputs]);</STRONG><BR>
<DD>
Returns a newly created neural network from an <CODE>AI::NeuralNet::Mesh</CODE>
object. The network will have <CODE>$layers</CODE> number of layers in it
and it will have <CODE>$nodes</CODE> number of nodes per layer.
<P>There is an optional third parameter, $outputs, which specifies the number
of output neurons to provide. If $outputs is not specified, it
defaults to $nodes.</P>
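<P>Example (the sizes are illustrative):</P>
<PRE>
    # 3 layers, 5 nodes per layer, 1 output neuron
    my $net = AI::NeuralNet::Mesh->new(3, 5, 1);</PRE>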
<P></P>
<DT><STRONG>AI::NeuralNet::Mesh->new($file);</STRONG><BR>
<DD>
This will automatically create a new network from the file <CODE>$file</CODE>. It will
return undef if the file was of an incorrect format or non-existent. Otherwise,
it will return a blessed reference to a network completely restored from <CODE>$file</CODE>.
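<P>Example, with error checking (the file name is illustrative):</P>
<PRE>
    my $net = AI::NeuralNet::Mesh->new('mesh.net');
    die "Couldn't create network from file!" if(!$net);</PRE>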
<P></P>
<DT><STRONG>AI::NeuralNet::Mesh->new(\@layer_sizes);</STRONG><BR>
<DD>
This constructor will make a network with as many layers as there are elements
in the array ref passed. Each element in the array ref is expected
to be an integer specifying the number of nodes (neurons) in that layer. The first
layer ($layer_sizes[0]) is the input layer, and the last layer in @layer_sizes is
the output layer.
<P>Example:</P>
<PRE>
    my $net = AI::NeuralNet::Mesh->new([2,3,1]);</PRE>
<P>Creates a network with 2 input nodes, 3 hidden nodes, and 1 output node.</P>
<P></P>
<DT><STRONG>AI::NeuralNet::Mesh->new(\@array_of_hashes);</STRONG><BR>
<DD>
Another dandy constructor...this is my favorite. It allows you to tailor the number of layers,
the size of the layers, the activation type (you can even add anonymous inline subs with this one),
and even the threshold, all with one array ref-ed constructor.
<P>Example:</P>
<PRE>
    my $net = AI::NeuralNet::Mesh->new([
        {
            nodes      => 2,
            activation => 'linear'
        },
        {
            nodes      => 3,
            activation => sub {
                my $sum = shift;
                return $sum + rand();
            }
        },
        {
            nodes      => 1,
            activation => 'sigmoid',
            threshold  => 0.75
        }
    ]);
</PRE>
<P>Interesting, eh? What you are basically passing is this:</P>
<PRE>
    my @info = (
        { },
        { },
        { },
        ...
    );</PRE>
<P>You are passing an array ref whose elements are each a hash reference. Each
hash reference, or more precisely, each element in the array ref you are passing
to the constructor, represents a layer in the network. As with the constructor above,
the first element is the input layer, and the last is the output layer. The rest are
hidden layers.</P>
<P>Each hash reference is expected to have AT LEAST the ``nodes'' key set to the number
of nodes (neurons) in that layer. The other two keys are optional. If ``activation'' is left
out, it defaults to ``linear''. If ``threshold'' is left out, it defaults to 0.50.</P>
<P>The ``activation'' key can be one of four values:</P>
<PRE>
    linear                    ( simply use sum of inputs as output )
    sigmoid  [ sigmoid_1 ]    ( only positive sigmoid )
    sigmoid_2                 ( positive / 0 / negative sigmoid )
    \&code_ref;</PRE>
<P>``sigmoid_1'' is an alias for ``sigmoid''.</P>
<P>The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:</P>
<PRE>
$output = &$code_ref($sum_of_inputs, $self);
</PRE>
<P>The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.</P>
<P>See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.</P>
<P>Three of the activation syntaxes are shown in the constructor example above: the ``linear'',
``sigmoid'' and code ref types.</P>
<P>You can also set the activation and threshold values after network creation with the
<A HREF="#item_activation"><CODE>activation()</CODE></A> and <A HREF="#item_threshold"><CODE>threshold()</CODE></A> methods.</P>
<P></P>
<DT><STRONG><A NAME="item_learn">$net->learn($input_map_ref, $desired_result_ref [, options ]);</A></STRONG><BR>
<DD>
NOTE: <A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> now has increment-degrading turned OFF by default. See note
on the degrade flag, below.
<P>This will 'teach' a network to associate a new input map with a desired
result. It will return a string containing benchmarking information.</P>
<P>You can also specify strings as inputs and outputs to learn, and they will be
crunched automatically. Example:</P>
<PRE>
    $net->learn('corn', 'cob');
</PRE>
<P>Note, the old method of calling crunch on the values still works just as well.</P>
<P>The first two arguments may be array refs (or now, strings), and they may be
of different lengths.</P>
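<P>For example, the string form above is equivalent to crunching the strings yourself with <CODE>crunch()</CODE>:</P>
<PRE>
    $net->learn($net->crunch('corn'), $net->crunch('cob'));</PRE>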
<P>Options are specified in hash form. There are four options:
</P>
<PRE>
    inc     => $learning_gradient
    max     => $maximum_iterations
    error   => $maximum_allowable_percentage_of_error
    degrade => $degrade_increment_flag</PRE>
<P>$learning_gradient is an optional value used to adjust the weights of the internal
connections. If $learning_gradient is omitted, it defaults to 0.002.
</P>
<P>$maximum_iterations is the maximum number of iterations the loop should do.
It defaults to 1024. Set it to 0 if you never want the loop to quit before
the pattern is perfectly learned.</P>
<P>$maximum_allowable_percentage_of_error is the maximum allowable error to have. If
this is set, then <A HREF="#item_learn"><CODE>learn()</CODE></A> will return when the percentage difference between the
actual results and desired results falls below $maximum_allowable_percentage_of_error.
If you do not include 'error', or $maximum_allowable_percentage_of_error is set to -1,
then <A HREF="#item_learn"><CODE>learn()</CODE></A> will not return until it gets an exact match for the desired result OR it
reaches $maximum_iterations.</P>
<P>$degrade_increment_flag is a simple flag used to allow/disallow increment degrading
during learning based on a product of the error difference with several other factors.
$degrade_increment_flag is off by default. Setting $degrade_increment_flag to a true
value turns increment degrading on.</P>
<P>In previous module releases $degrade_increment_flag was not used, as increment degrading
was always on. In this release I have looked at several other network types as well
as several texts and decided that it would be better not to use increment degrading. The
option is still there for those that feel the inclination to use it. I have found some cases
that do need the degrade flag to learn at a reasonable speed. See test.pl for an example: if
the degrade flag weren't used in test.pl, it would take a very long time to learn.</P>
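<P>For example, all four options combined (the values shown are illustrative, not recommendations):</P>
<PRE>
    $net->learn([ 0, 1 ], [ 1 ],
        inc     => 0.05,
        max     => 2000,
        error   => 5,
        degrade => 1
    );</PRE>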
<P></P>
<DT><STRONG><A NAME="item_learn_set">$net->learn_set(\@set, [ options ]);</A></STRONG><BR>
<DD>
This takes the same options as <A HREF="#item_learn"><CODE>learn()</CODE></A> (learn_set() uses <A HREF="#item_learn"><CODE>learn()</CODE></A> internally)
and allows you to specify a set to learn, rather than individual patterns.
A dataset is an array reference with at least two elements in the array,
each element being another array reference (or now, a scalar string). For
each pattern to learn, you must specify an input array ref, and an output
array ref as the next element. Example:
<PRE>
    my @set = (
        # inputs        outputs
        [ 1,2,3,4 ],  [ 1,3,5,6 ],
        [ 0,2,5,6 ],  [ 0,2,1,2 ]
    );</PRE>
<P>Inputs and outputs in the dataset can also be strings.</P>
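<P>For example, a string-based set (the data here is purely illustrative):</P>
<PRE>
    my @set = (
        'morning', 'coffee',
        'evening', 'tea'
    );
    $net->learn_set(\@set);</PRE>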
<P>See the paragraph on measuring forgetfulness, below. There are
two learn_set()-specific option tags available:</P>
<PRE>
    flag    => $flag
    pattern => $row</PRE>
<P>If ``flag'' is set to some TRUE value, as in ``flag => 1'' in the hash of options, or if the option ``flag''
is not set, then it will return a percentage representing the amount of forgetfulness. Otherwise,
<A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> will return an integer specifying the amount of forgetfulness when all the patterns
are learned.</P>
<P>If ``pattern'' is set, then <A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> will use that pattern in the data set to measure forgetfulness by.
If ``pattern'' is omitted, it defaults to the first pattern in the set. Example:</P>
<PRE>
    my @set = (
        # inputs        outputs
        [ 1,2,3,4 ],  [ 1,3,5,6 ],
        [ 0,2,5,6 ],  [ 0,2,1,2 ]
    );
    $net->learn_set(\@set, pattern => 1);</PRE>
<P>This measures forgetfulness by the second pattern in the set (index 1).</P>
<P></P>
<DT><STRONG><A NAME="item_save">$net->save($filename);</A></STRONG><BR>
<DD>
This will save the complete state of the network to disk, including all weights and any
activation types and thresholds you have set.
<P>NOTE: The only activation type NOT saved is the CODE ref type, which must be set again
after loading.</P>
<P>This uses a simple flat-file text storage format, and therefore the network files should
be fairly portable.</P>
<P>This method will return undef if there was a problem with writing the file. If there is an
error, it will set the internal error message, which you can retrieve with the <A HREF="#item_error"><CODE>error()</CODE></A> method,
below.</P>
<P>If there were no errors, it will return a reference to $net.</P>
<P></P>
<DT><STRONG><A NAME="item_load">$net->load($filename);</A></STRONG><BR>
<DD>
This will load from disk any network saved by <A HREF="#item_save"><CODE>save()</CODE></A> and completely restore the internal
state to the point at which <A HREF="#item_save"><CODE>save()</CODE></A> was called.
<P>If the file is of an invalid file type, then <A HREF="#item_load"><CODE>load()</CODE></A> will
return undef. Use the <A HREF="#item_error"><CODE>error()</CODE></A> method, below, to print the error message.</P>
<P>If there were no errors, it will return a reference to $net.</P>
<P>UPDATE: $filename can now be a newline-separated set of mesh data. This enables you
to do $net->load(join(``\n'',<DATA>)) and other fun things. I added this mainly
for a demo I'm writing but am not quite done with yet. So, Cheers!</P>
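<P>For example, loading with error checking (the file name is illustrative):</P>
<PRE>
    my $net = AI::NeuralNet::Mesh->new(1,1);
    $net->load('mesh.net') or die $net->error();</PRE>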
<P></P>
<DT><STRONG><A NAME="item_activation">$net->activation($layer,$type);</A></STRONG><BR>
<DD>
This sets the activation type for layer <CODE>$layer</CODE>.
<P><CODE>$type</CODE> can be one of four values:</P>
<PRE>
    linear                    ( simply use sum of inputs as output )
    sigmoid  [ sigmoid_1 ]    ( only positive sigmoid )
    sigmoid_2                 ( positive / 0 / negative sigmoid )
    \&code_ref;</PRE>
<P>``sigmoid_1'' is an alias for ``sigmoid''.</P>
<P>The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:</P>
<PRE>
$output = &$code_ref($sum_of_inputs, $self);
</PRE>
<P>The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.</P>
<P>See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.</P>
<P>The activation type for each layer is preserved across load/save calls.</P>
<P>EXCEPTION: Due to the constraints of Perl, I cannot load/save the actual subs that the code
ref option points to. Therefore, you must re-apply any code ref activation types after a
<A HREF="#item_load"><CODE>load()</CODE></A> call.</P>
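<P>For example, re-applying a custom activation to layer 1 after a <CODE>load()</CODE> call (the sub shown is an illustrative sketch):</P>
<PRE>
    $net->load('mesh.net') or die $net->error();
    $net->activation(1, sub {
        my ($sum, $node) = @_;
        return $sum * 0.5;      # scale the summed inputs
    });</PRE>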
<P></P>
<DT><STRONG><A NAME="item_node_activation">$net->node_activation($layer,$node,$type);</A></STRONG><BR>
<DD>
This sets the activation function for a specific node in a layer. The same notes apply
here as to the <A HREF="#item_activation"><CODE>activation()</CODE></A> method above.
<P></P>
<DT><STRONG><A NAME="item_threshold">$net->threshold($layer,$value);</A></STRONG><BR>
<DD>
This sets the activation threshold for a specific layer. The threshold is only used
when activation is set to ``sigmoid'', ``sigmoid_1'', or ``sigmoid_2''.
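<P>For example, to use a 0.60 threshold on a sigmoid layer (values illustrative):</P>
<PRE>
    $net->activation(1, 'sigmoid');
    $net->threshold(1, 0.60);</PRE>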
<P></P>
<DT><STRONG><A NAME="item_node_threshold">$net->node_threshold($layer,$node,$value);</A></STRONG><BR>
<DD>
This sets the activation threshold for a specific node in a layer. The threshold is only used
when activation is set to ``sigmoid'', ``sigmoid_1'', or ``sigmoid_2''.
<P></P>
<DT><STRONG><A NAME="item_join_cols">$net->join_cols($array_ref,$row_length_in_elements,$high_state_character,$low_state_character);</A></STRONG><BR>
<DD>
This is more of a utility function than any real necessary function of the package.
Instead of joining all the elements of the array together in one long string, like <CODE>join()</CODE>,
it prints the elements of $array_ref to STDOUT, adding a newline (\n) after every $row_length_in_elements
elements have passed. Additionally, if you include a $high_state_character and a $low_state_character,
it will print the $high_state_character (can be more than one character) for every element that
has a true value, and the $low_state_character for every element that has a false value.
If you do not supply a $high_state_character, or the $high_state_character is a null or empty or
undefined string, <A HREF="#item_join_cols"><CODE>join_cols()</CODE></A> will just print the numerical value of each element separated
by a null character (\0). <A HREF="#item_join_cols"><CODE>join_cols()</CODE></A> defaults to the latter behaviour.
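<P>For example, to print two rows of four elements using ``#'' for true and ``.'' for false values:</P>
<PRE>
    $net->join_cols([ 1,0,0,1,
                      0,1,1,0 ], 4, '#', '.');</PRE>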
<P></P>
<DT><STRONG><A NAME="item_extend">$net->extend(\@array_of_hashes);</A></STRONG><BR>
<DD>
This allows you to re-apply any activations and thresholds with the same array ref which
you created a network with. This is useful for re-applying code ref activations after a <A HREF="#item_load"><CODE>load()</CODE></A>
call without having to type the code ref twice.
<P>You can also specify the extension in a simple array ref like this:</P>
<PRE>
$net->extend([2,3,1]);
</PRE>
<P>This will simply add nodes as needed to bring the number of nodes in each layer up to the
respective elements. It works just like the respective new() constructor, above.</P>
<P>NOTE: Your net will probably require re-training after adding nodes.</P>
<P></P>
<DT><STRONG><A NAME="item_extend_layer">$net->extend_layer($layer,\%hash);</A></STRONG><BR>
<DD>
With this you can modify only one layer with its specifications in a hash reference. This hash
reference uses the same keys as for the last <A HREF="#item_new"><CODE>new()</CODE></A> constructor form, above.
<P>You can also specify just the number of nodes for the layer in this form:</P>
<PRE>
$net->extend_layer(0,5);</PRE>
<P>This will set the number of nodes in layer 0 to 5 nodes. It is the same as calling:
</P>
<PRE>
    $net->add_nodes(0,5);</PRE>
<P>See <A HREF="#item_add_nodes"><CODE>add_nodes()</CODE></A> below.</P>
<P>NOTE: Your net will probably require re-training after adding nodes.</P>
<P></P>
<DT><STRONG><A NAME="item_add_nodes">$net->add_nodes($layer,$total_nodes);</A></STRONG><BR>
<DD>
This method was created mainly to service the extend*() group of functions, but it
can also be called independently. It will add nodes as needed to layer <CODE>$layer</CODE> to
bring the total number of nodes in that layer up to $total_nodes.
<P>NOTE: Your net will probably require re-training after adding nodes.</P>
<P></P>
<DT><STRONG><A NAME="item_p">$net->p($a,$b);</A></STRONG><BR>
<DD>
Returns a floating point number which represents $a as a percentage of $b.
<P></P>
<DT><STRONG><A NAME="item_intr">$net->intr($float);</A></STRONG><BR>
<DD>
Rounds a floating-point number to an integer using <CODE>sprintf()</CODE> and <CODE>int()</CODE>. This provides
better rounding than just calling <CODE>int()</CODE> on the float. Also used very heavily internally.
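<P>A rough pure-Perl equivalent of this kind of rounding (a sketch, not the module's actual source):</P>
<PRE>
    sub intr_sketch {
        my $float = shift;
        # sprintf rounds to the nearest whole number; int truncates the result
        return int(sprintf('%.0f', $float));
    }</PRE>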
<P></P>
<DT><STRONG><A NAME="item_high">$net->high($array_ref);</A></STRONG><BR>
<DD>
Returns the index of the element in the array ref passed with the highest comparative value.
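<P>Example:</P>
<PRE>
    my $index = $net->high([ 2, 7, 5 ]);    # index 1 holds the highest value, 7</PRE>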
<P></P>
<DT><STRONG><A NAME="item_low">$net->low($array_ref);</A></STRONG><BR>
<DD>
Returns the index of the element in the array ref passed with the lowest comparative value.
<P></P>
<DT><STRONG><A NAME="item_pdiff">$net->pdiff($array_ref_A, $array_ref_B);</A></STRONG><BR>
<DD>
Returns a floating-point number representing the percentage difference between the
elements of $array_ref_A and $array_ref_B. Used heavily internally to calculate the
error between actual and desired results during learning.