AI-NeuralNet-Mesh
    linear                    ( simply use sum of inputs as output )
    sigmoid    [ sigmoid_1 ]  ( only positive sigmoid )
    sigmoid_2                 ( positive / 0 / negative sigmoid )
    \&code_ref;</PRE>
<P>``sigmoid_1'' is an alias for ``sigmoid''.</P>
<P>The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:</P>
<PRE>
$output = &$code_ref($sum_of_inputs, $self);
</PRE>
<P>The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.</P>
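<P>For example, here is a minimal custom activation function (a simple hard-limit
step; this particular function is an illustration, not something shipped with the module):</P>
<PRE>
    # Fire 1 if the summed input is positive, near-zero otherwise.
    # (A near-zero value instead of a hard 0 keeps training from
    # stalling on zero products; see the and_gate() notes below.)
    my $step = sub {
        my ($sum, $self) = @_;
        return $sum > 0 ? 1 : 0.000001;
    };
</PRE>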
<P>See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.</P>
<P>Three of the activation syntaxes are shown in the first constructor above, the ``linear'',
``sigmoid'' and code ref types.</P>
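<P>A constructor in that style might look like this (the node counts and the
code ref body here are illustrative):</P>
<PRE>
    my $net = AI::NeuralNet::Mesh->new([
        { nodes => 4, activation => 'linear'  },
        { nodes => 2, activation => 'sigmoid' },
        { nodes => 1, activation => sub {
            my ($sum, $self) = @_;
            return $sum;                    # same as 'linear'
        } }
    ]);
</PRE>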
<P>You can also set the activation and threshold values after network creation with the
<A HREF="#item_activation"><CODE>activation()</CODE></A> and <A HREF="#item_threshold"><CODE>threshold()</CODE></A> methods.</P>
<P></P>
<DT><STRONG><A NAME="item_learn">$net->learn($input_map_ref, $desired_result_ref [, options ]);</A></STRONG><BR>
<DD>
NOTE: <A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> now has increment-degrading turned OFF by default. See note
on the degrade flag, below.
<P>This will 'teach' a network to associate an new input map with a desired
result. It will return a string containg benchmarking information.</P>
<P>You can also specify strings as inputs and outputs to learn, and they will be
crunched automatically. Example:</P>
<PRE>
$net->learn('corn', 'cob');
</PRE>
<P>Note that the old method of calling crunch() on the values still works just as well.</P>
<P>The first two arguments may be array refs (or now, strings), and they may be
of different lengths.</P>
<P>Options should be written in hash form. There are four options:
</P>
<PRE>
    inc     => $learning_gradient
    max     => $maximum_iterations
    error   => $maximum_allowable_percentage_of_error
    degrade => $degrade_increment_flag</PRE>
<P>$learning_gradient is an optional value used to adjust the weights of the internal
connections. If $learning_gradient is omitted, it defaults to 0.002.
</P>
<P>$maximum_iterations is the maximum number of iterations the loop should run.
It defaults to 1024. Set it to 0 if you never want the loop to quit before
the pattern is perfectly learned.</P>
<P>$maximum_allowable_percentage_of_error is the maximum allowable error to have. If
this is set, then <A HREF="#item_learn"><CODE>learn()</CODE></A> will return when the percentage difference between the
actual results and desired results falls below $maximum_allowable_percentage_of_error.
If you do not include 'error', or $maximum_allowable_percentage_of_error is set to -1,
then <A HREF="#item_learn"><CODE>learn()</CODE></A> will not return until it gets an exact match for the desired result OR it
reaches $maximum_iterations.</P>
<P>$degrade_increment_flag is a simple flag used to allow/disallow increment degrading
during learning based on a product of the error difference with several other factors.
$degrade_increment_flag is off by default. Setting $degrade_increment_flag to a true
value turns increment degrading on.</P>
<P>In previous module releases $degrade_increment_flag was not used, as increment degrading
was always on. In this release I have looked at several other network types as well
as several texts and decided that it would be better not to use increment degrading
by default. The option is still there for those that feel the inclination to use it.
I have found some cases that do need the degrade flag in order to learn at a reasonable
speed. See test.pl for an example: if the degrade flag weren't set in test.pl, it would
take a very long time to learn.</P>
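<P>For example, a call combining several of these options might look like this
(the values are illustrative, not recommendations):</P>
<PRE>
    $net->learn(\@inputs, \@outputs,
                inc     => 0.05,   # learning gradient
                max     => 2048,   # give up after 2048 iterations
                error   => 5,      # stop once error falls below 5%
                degrade => 1);     # turn increment degrading on
</PRE>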
<P></P>
<DT><STRONG><A NAME="item_learn_set">$net->learn_set(\@set, [ options ]);</A></STRONG><BR>
<DD>
This takes the same options as <A HREF="#item_learn"><CODE>learn()</CODE></A> (learn_set() uses <A HREF="#item_learn"><CODE>learn()</CODE></A> internally)
and allows you to specify a set to learn, rather than individual patterns.
A dataset is an array reference with at least two elements in the array,
each element being another array reference (or now, a scalar string). For
each pattern to learn, you must specify an input array ref, and an output
array ref as the next element. Example:
<PRE>
    my @set = (
        # inputs        outputs
        [ 1,2,3,4 ],    [ 1,3,5,6 ],
        [ 0,2,5,6 ],    [ 0,2,1,2 ]
    );</PRE>
<P>Inputs and outputs in the dataset can also be strings.</P>
<P>See the paragraph on measuring forgetfulness, below. There are
two learn_set()-specific option tags available:</P>
<PRE>
    flag    => $flag
    pattern => $row</PRE>
<P>If ``flag'' is set to some TRUE value, as in ``flag => 1'' in the hash of options, or if the option ``flag''
is not set, then it will return a percentage representing the amount of forgetfulness. Otherwise,
<A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> will return an integer specifying the amount of forgetfulness when all the patterns
are learned.</P>
<P>If ``pattern'' is set, then <A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> will use that pattern in the data set to measure forgetfulness by.
If ``pattern'' is omitted, it defaults to the first pattern in the set. Example:</P>
<PRE>
    my @set = (
        [ 0,1,0,1 ],  [ 0 ],
        [ 0,0,1,0 ],  [ 1 ],
        [ 1,1,0,1 ],  [ 2 ],   # <---
        [ 0,1,1,0 ],  [ 3 ]
    );
</PRE>
<P>If you wish to measure forgetfulness as indicated by the line with the arrow, then you would
pass 2 as the ``pattern'' option, as in ``pattern => 2''.</P>
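<P>A typical call measuring forgetfulness against that third pattern would then be
(the variable name is illustrative):</P>
<PRE>
    my $forgetfulness = $net->learn_set(\@set,
                                        flag    => 1,   # report as a percentage
                                        pattern => 2);  # measure against pattern 2
</PRE>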
<P>Now why the heck would anyone want to measure forgetfulness, you ask? Maybe you wonder how I
even measure that. Well, it is not a vital value that you have to know. I just put in a
``forgetfulness measure'' one day because I thought it would be neat to know.</P>
<P>How the module measures forgetfulness is this: First, it learns all the patterns
in the set provided, then it will run the very first pattern (or whatever pattern
is specified by the ``pattern'' option) in the set after it has finished learning. It
will compare the <A HREF="#item_run"><CODE>run()</CODE></A> output with the desired output as specified in the dataset.
In a perfect world, the two should match exactly. What we measure is how much
they don't match, thus the amount of forgetfulness the network has.</P>
<P>Example (from examples/ex_dow.pl):</P>
<PRE>
# Data from 1989 (as far as I know..this is taken from example data on BrainMaker)
    my @data = (
        #  Mo  CPI  CPI-1 CPI-3  Oil  Oil-1 Oil-3  Dow  Dow-1 Dow-3   Dow Ave (output)
        [  1,  229,  220,  146, 20.0, 21.9, 19.5, 2645, 2652, 2597], [ 2647 ],
        [  2,  235,  226,  155, 19.8, 20.0, 18.3, 2633, 2645, 2585], [ 2637 ],
        [  3,  244,  235,  164, 19.6, 19.8, 18.1, 2627, 2633, 2579], [ 2630 ],
        [  4,  261,  244,  181, 19.6, 19.6, 18.1, 2611, 2627, 2563], [ 2620 ],
    );</PRE>
..
<P>You can also pass an array containing the range
values (not an array ref), or you can pass a comma-
separated list of values as parameters:</P>
<PRE>
    $net->activation(4,range(@numbers));
    $net->activation(4,range(6,15,26,106,28,3));</PRE>
<P>Note: when using a <A HREF="#item_range"><CODE>range()</CODE></A> activator, train the
net TWICE on the data set, because the first time
the <A HREF="#item_range"><CODE>range()</CODE></A> function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.</P>
<P>The actual code that implements the range closure is
a bit convoluted, so I will expand on it here as a simple
tutorial for custom activation functions.</P>
<PRE>
= line 1 = sub {
= line 2 = my @values = ( 6..10 );
= line 3 = my $sum = shift;
= line 4 = my $self = shift;
= line 5 = $self->{top_value}=$sum if($sum>$self->{top_value});
= line 6 = my $index = intr($sum/$self->{top_value}*$#values);
= line 7 = return $values[$index];
= line 8 = }</PRE>
<P>Now, the actual function fits in one line of code, but I expanded it a bit
here. Line 2 creates our array of allowed output values. Lines 3 and
4 grab our parameters off the stack, which allow us access to the
internals of this node. Line 5 checks to see if the sum output of this
node is higher than any previously encountered, and, if so, it sets
the marker higher. This also shows that you can use the $self reference
to maintain information across activations. This technique is also used
in the <A HREF="#item_ramp"><CODE>ramp()</CODE></A> activator. Line 6 computes the index into the allowed
values array by first scaling the $sum to be between 0 and 1 and then
expanding it to fit smoothly inside the number of elements in the array. Then
we simply round to an integer and pluck that index from the array and
use it as the output value for that node.</P>
<P>See? It's not that hard! Using custom activation functions, you could do
just about anything with the node that you want to, since you have
access to the node just as if you were a blessed member of that node's object.</P>
<P></P>
<DT><STRONG><A NAME="item_ramp">ramp($r);</A></STRONG><BR>
<DD>
<A HREF="#item_ramp"><CODE>ramp()</CODE></A> preforms smooth ramp activation between 0 and 1 if $r is 1,
or between -1 and 1 if $r is 2. $r defaults to 1.
<P>You can get this into your namespace with the ':acts' export
tag as so:
</P>
<PRE>
use AI::NeuralNet::Mesh ':acts';</PRE>
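<P>Once imported, typical usage is (the layer number here is illustrative):</P>
<PRE>
    $net->activation(4, ramp(2));   # ramp between -1 and 1 on layer 4
</PRE>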
<P>Note: when using a <A HREF="#item_ramp"><CODE>ramp()</CODE></A> activator, train the
net at least TWICE on the data set, because the first
time the <A HREF="#item_ramp"><CODE>ramp()</CODE></A> function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.</P>
<P>No code to show here, as it is almost exactly the same as range().</P>
<P></P>
<DT><STRONG><A NAME="item_and_gate">and_gate($threshold);</A></STRONG><BR>
<DD>
Pretty much self-explanatory: this turns the node into a basic AND gate.
$threshold is used to decide if an input is true or false (1 or 0). If
an input is below $threshold, it is false. $threshold defaults to 0.5.
<P>You can get this into your namespace with the ':acts' export
tag as so:
</P>
<PRE>
use AI::NeuralNet::Mesh ':acts';</PRE>
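<P>Once imported, typical usage is (the layer number and threshold here are
illustrative):</P>
<PRE>
    $net->activation(4, and_gate(0.75));
</PRE>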
<P>Let's look at the code real quick, as it shows how to get at the individual
input connections:</P>
<PRE>
= line 1 = sub {
= line 2 = my $sum = shift;
= line 3 = my $self = shift;
= line 4 = my $threshold = 0.50;
= line 5 = for my $x (0..$self->{_inputs_size}-1) {
= line 6 = return 0.000001 if($self->{_inputs}->[$x]->{value} < $threshold);
= line 7 = }
= line 8 = return $sum/$self->{_inputs_size};
= line 9 = }</PRE>
<P>Lines 2 and 3 pull in our sum and self reference. Line 5 opens a loop to go over
all the input lines into this node. Line 6 looks at each input line's value
and compares it to the threshold. If the value of that line is below threshold, then
we return 0.000001 to signify a 0 value. (We don't return a 0 value so that the network
doesn't get hung trying to multiply a 0 by a huge weight during training [it would just
keep getting 0 as the product, and it would never learn].) Line 8 returns the mean
value of all the inputs if all inputs were above threshold.</P>
<P>Very simple, eh? :)
</P>
<P></P>
<DT><STRONG><A NAME="item_or_gate">or_gate($threshold);</A></STRONG><BR>
<DD>
<P>Self-explanatory. This turns the node into a basic OR gate; $threshold is used the same as above.</P>
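<P>A minimal sketch of what such an activator might look like, mirroring the
<CODE>and_gate()</CODE> code above (this is an illustration, not the module's exact source):</P>
<PRE>
    sub {
        my $sum  = shift;
        my $self = shift;
        my $threshold = 0.50;
        for my $x (0..$self->{_inputs_size}-1) {
            # any single input at or above threshold makes the gate fire
            return $sum/$self->{_inputs_size}
                if($self->{_inputs}->[$x]->{value} >= $threshold);
        }
        return 0.000001;   # near-zero, not 0, so training doesn't stall
    }</PRE>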
<P>You can get this into your namespace with the ':acts' export
tag as so:
</P>
<PRE>
use AI::NeuralNet::Mesh ':acts';</PRE>
<P></P></DL>
<P>
<HR>
<H1><A NAME="variables">VARIABLES</A></H1>
<DL>
<DT><STRONG><A NAME="item_%24AI%3A%3ANeuralNet%3A%3AMesh%3A%3AConnector">$AI::NeuralNet::Mesh::Connector</A></STRONG><BR>
<DD>
This option is a step up from average use of this module. This variable
should hold the fully qualified name of the function used to make the actual connections
between the nodes in the network. It contains '_c' by default, but if you use
this variable, be sure to set it to the fully qualified name of the method. For example,
in the ALN example, I use a connector in the main package called <CODE>tree()</CODE> instead of
the default connector. Before I call the <A HREF="#item_new"><CODE>new()</CODE></A> constructor, I use this line of code:
<PRE>
    $AI::NeuralNet::Mesh::Connector = 'main::tree';
</PRE>
<P>The tree() function is called as a blessed method when it is used internally, providing
access to the blessed reference in the first argument. See notes on CUSTOM NETWORK CONNECTORS,
below, for more information on creating your own custom connector.</P>
<P></P>
<DT><STRONG><A NAME="item_%24AI%3A%3ANeuralNet%3A%3AMesh%3A%3ADEBUG">$AI::NeuralNet::Mesh::DEBUG</A></STRONG><BR>
<DD>
This variable controls the verbosity level. It will not hurt anything to set this
directly, yet most people find it easier to set it using the <A HREF="#item_debug"><CODE>debug()</CODE></A> method, or