AI-NeuralNet-Mesh


Changes  view on Meta::CPAN

                - custom node activations
                - increased learning speed

0.43  Wed Sep 14 03:13:01 2000
        - Third release, by Josiah Bryan
        - Several bug fixes
                - fixed 'flag' option on learn_set()
                - fixed multiple-output bug
                - fixed learning gradient error
        - Improved learning function to not degrade increment automatically
        - Added CSV-style dataset loader
        - Added Export tags
        - Added four custom node activations, including range and ramp
        - Added several misc. extra functions
        - Added ALN example demo


Mesh.pm  view on Meta::CPAN

        	$layer_specs = [split(',',"$nodes," x $layers)];
        	$layer_specs->[$#{$layer_specs}+1]=$outputs;
        	$self->{layers}	= $layer_specs;
        }
        
        # First create the individual nodes
		for my $x (0..$tmp-1) {         
			$self->{mesh}->[$x] = AI::NeuralNet::Mesh::node->new($self);
        }              
        
        # Get an instance of an output (data collector) node
		$self->{output} = AI::NeuralNet::Mesh::output->new($self);
		
		# Connect the output layer to the data collector
        for $x (0..$outputs-1) {                    
			$self->{mesh}->[$tmp-$outputs+$x]->add_output_node($self->{output});
		}
		
		# Now we use the _c() method to connect the layers together.
        $y=0;
        my $c = $connector.'($self,$y,$y+$z,$y+$z,$y+$z+$layer_specs->[$x+1])';
        for $x (0..$layers-1) {
        	$z = $layer_specs->[$x];                         
        	d("layer $x size: $z (y:$y)\n,",1);

Mesh.pm  view on Meta::CPAN

   		}  
   		my $str = "Learning took $loop loops and ".timestr(timediff(new Benchmark,$start))."\n";
   		d($str,3); $self->{benchmark} = "$loop loops and ".timestr(timediff(new Benchmark,$start))."\n";
   		return $str;
   	}


	# See POD for usage
	sub learn_set {
		my $self	=	shift;
		my $data	=	shift;
		my %args	=	@_;
		my $len		=	$#{$data}/2;
		my $inc		=	$args{inc};
		my $max		=	$args{max};
	    my $error	=	$args{error};
	    my $degrade	=	$args{degrade};
	    my $p		=	(defined $args{flag}) ?$args{flag} :1;
	    my $row		=	(defined $args{row})  ?$args{row}+1:1;
	    my $leave	=	(defined $args{leave})?$args{leave}:0;
		for my $x (0..$len-$leave) {
			d("Learning set $x...\n",4);
			my $str = $self->learn( $data->[$x*2],
					  		  		$data->[$x*2+1],
					    			inc=>$inc,
					    			max=>$max,
					    			error=>$error,
					    			degrade=>$degrade);
		}
			
		if ($p) {
			return pdiff($data->[$row],$self->run($data->[$row-1]));
		} else {
			return $data->[$row]->[0]-$self->run($data->[$row-1])->[0];
		}
	}
	
	# See POD for usage
	sub run_set {
		my $self	=	shift;
		my $data	=	shift;
		my $len		=	$#{$data}/2;
		my (@results,$res);
		for my $x (0..$len) {
			$res = $self->run($data->[$x*2]);
			for(0..$#{$res}){$results[$x]->[$_]=$res->[$_]}
			d("Running set $x [$res->[0]]...\r",4);
		}
		return \@results;
	}
	
	#
	# Loads a CSV-like dataset from disk
	#
	# Usage:
	#	my $set = $net->load_set($file, $column, $separator);
	#
	# Returns a data set of the same format as required by the
	# learn_set() method. $file is the disk file to load the set from.
	# $column is an optional variable specifying the column in the 
	# data set to use as the class attribute. $column defaults to 0.
	# $separator is an optional variable specifying the separator
	# character between values. $separator defaults to ',' (a single comma). 
	# NOTE: This does not handle quoted fields, or any record
	# separator other than "\n".
	#
	sub load_set {
		my $self	=	shift;
		my $file	=	shift;
		my $attr	=	shift || 0;
		my $sep		=	shift || ',';
		my $data	=	[];
		open(FILE,	$file);
		my @lines	=	<FILE>;
		close(FILE);
		for my $x (0..$#lines) {
			chomp($lines[$x]);
			my @tmp	= split /$sep/, $lines[$x];
			my $c=0;
			for(0..$#tmp){ 
				# Crunch any field containing letters into its word id
				$tmp[$_]=$self->crunch($tmp[$_])->[0] if($tmp[$_]=~/[A-Za-z]/);
				# Collect every column except the class attribute as an input
				if($_!=$attr){$data->[$x*2]->[$c]=$tmp[$_];$c++}
			};             
			d("Loaded line $x, [@tmp]                            \r",4);
			$data->[$x*2+1]=[$tmp[$attr]];
		}
		return $data;
	}
	
	# See POD for usage
	sub get_outs {
		my $self	=	shift;
		my $data	=	shift;
		my $len		=	$#{$data}/2;
		my $outs	=	[];
		for my $x (0..$len) {
			$outs->[$x] = $data->[$x*2+1];
		}
		return $outs;
	}
	
	# Save entire network state to disk.
	sub save {
		my $self	=	shift;
		my $file	=	shift;
		no strict 'refs';
		

Mesh.pm  view on Meta::CPAN

		}
	}
	  
	# Set the activation type of a specific layer.
	# usage: $net->activation($layer,$type);
	# $type can be: "linear", "sigmoid", "sigmoid_2".
	# You can use "sigmoid_1" as a synonym to "sigmoid". 
	# Type can also be a CODE ref, ( ref($type) eq "CODE" ).
	# If $type is a CODE ref, then the function is called in this form:
	# 	$output	= &$type($sum_of_inputs,$self);
	# The code ref then has access to all the data in that node (thru the
	# blessed reference $self) and is expected to return the value to be used
	# as the output for that node. The sum of all the inputs to that node
	# is already summed and passed as the first argument.
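	# For example (hypothetical values), to clamp layer 1's output at zero:
	#	$net->activation(1, sub { my ($sum,$node)=@_; $sum < 0 ? 0 : $sum });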
	sub activation {
		my $self	=	shift;
		my $layer	=	shift || 0;
		my $value	=	shift || 'linear';
		my $n 		=	0;    
		no strict 'refs';
		for(0..$layer-1){$n+=$self->{layers}->[$_]}

Mesh.pm  view on Meta::CPAN

	#	}
	#	..
	# You can also pass an array containing the range
	# values (not array ref), or you can pass a comma-
	# seperated list of values as parameters:
	#
	#	$net->activation(4,range(@numbers));
	#	$net->activation(4,range(6,15,26,106,28,3));
	#
	# Note: when using a range() activator, train the
	# net TWICE on the data set, because the first time
	# the range() function searches for the top value in
	# the inputs, and therefore, results could fluctuate.
	# The second learning cycle guarantees more accuracy.
	#	
	sub range {
		my @r=@_;
		sub{$_[1]->{t}=$_[0]if($_[0]>$_[1]->{t});$r[intr($_[0]/$_[1]->{t}*$#r)]}
	}
	
	#
	# ramp() performs smooth ramp activation between 0 and 1 if $r is 1, 
	# or between -1 and 1 if $r is 2. $r defaults to 1, as you can see.	
	#
	# Note: when using a ramp() activator, train the
	# net at least TWICE on the data set, because the first 
	# time the ramp() function searches for the top value in
	# the inputs, and therefore, results could fluctuate.
	# The second learning cycle guarantees more accuracy.
	#
	sub ramp {
		my $r=shift||1;my $t=($r<2)?0:-1;
		sub{$_[1]->{t}=$_[0]if($_[0]>$_[1]->{t});$_[0]/$_[1]->{t}*$r+$t}
	}

	# Self-explanatory, pretty much. $threshold is used to decide if an input 

Mesh.pm  view on Meta::CPAN

	
# Internal usage, prevents recursion on empty nodes.
package AI::NeuralNet::Mesh::cap;
	sub new     { bless {}, shift }
	sub input           {}
	sub adjust_weight   {}
	sub add_output_node {}
	sub add_input_node  {}
1;

# Internal usage, collects data from output layer.
package AI::NeuralNet::Mesh::output;
	
	use strict;
	
	sub new {
		my $type		=	shift;
		my $self		={ 
			_parent		=>	shift,
			_inputs		=>	[],
		};

Mesh.pm  view on Meta::CPAN


This fixed the usage conflict with perl 5.3.3.

With this version I have gone through and tuned up many areas
of this module, including the descent algorithm in learn(),
as well as four custom activation functions, and several export 
tag sets. With this release, I have also included a few
new and more practical example scripts. (See ex_wine.pl) This release 
also includes a simple example of an ALN (Adaptive Logic Network) made
with this module. See ex_aln.pl. Also in this release is support for 
loading data sets from simple CSV-like files. See the load_set() method 
for details. This version also fixes a big bug that I never knew about 
until writing some demos for this version - that is, when trying to use 
more than one output node, the mesh would freeze in learning. But, that 
is fixed now, and you can have as many outputs as you want (how does 3 
inputs and 50 outputs sound? :-)


=head1 DESCRIPTION

AI::NeuralNet::Mesh is an optimized, accurate neural network Mesh.
It was designed with accuracy and speed in mind. 

This network model is very flexible. It will allow for classic binary
operation or any range of integer or floating-point inputs you care
to provide. With this you can change activation types on a per node or
per layer basis (you can even include your own anonymous subs as 
activation types). You can add sigmoid transfer functions and control
the threshold. You can learn data sets in batch, and load CSV data
set files. You can do almost anything you need to with this module.
This code is designed to be flexible. Any new ideas for this module?
See AUTHOR, below, for contact info.

This module is designed to also be a customizable, extensible 
neural network simulation toolkit. Through a combination of setting
the $Connection variable and using custom activation functions, as
well as basic package inheritance, you can simulate many different
types of neural network structures with very little new code written
by you.

In this module I have included a more accurate form of "learning" for the
mesh. This form performs descent toward a local error minimum (0) on a 
directional delta, rather than the desired value for that node. This allows
for better, and more accurate results with larger datasets. This module also
uses a simpler recursion technique which, surprisingly, is more accurate than
the original technique that I've used in other ANNs.

=head1 EXPORTS

This module exports three functions by default:

	range
	intr
	pdiff

Mesh.pm  view on Meta::CPAN

	\&code_ref;

"sigmoid_1" is an alias for "sigmoid". 

The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:

	$output = &$code_ref($sum_of_inputs, $self);
	
The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.

See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.

Three of the activation syntaxes are shown in the first constructor above, the "linear",
"sigmoid" and code ref types.

You can also set the activation and threshold values after network creation with the
activation() and threshold() methods. 
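
For illustration, here is a minimal sketch of passing a code ref as an activation
type at construction time, assuming the hash-ref constructor form used in
examples/ex_alpha.pl (the layer sizes and the clamping behavior are arbitrary
examples, not module defaults):

	# Hypothetical activation: pass the summed inputs through,
	# but never output more than 10.
	my $clamp = sub {
		my ($sum, $node) = @_;   # $node is the blessed node ref
		return $sum > 10 ? 10 : $sum;
	};

	my $net = AI::NeuralNet::Mesh->new([
		{ nodes => 4, activation => 'linear'  },
		{ nodes => 4, activation => 'sigmoid' },
		{ nodes => 1, activation => $clamp    }
	]);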

Mesh.pm  view on Meta::CPAN

option is still there for those that feel the inclination to use it. I have found some areas
that do need the degrade flag to work at a faster speed. See test.pl for an example. If
the degrade flag wasn't in test.pl, it would take a very long time to learn.



=item $net->learn_set(\@set, [ options ]);

This takes the same options as learn() (learn_set() uses learn() internally) 
and allows you to specify a set to learn, rather than individual patterns. 
A dataset is an array reference with at least two elements in the array, 
each element being another array reference (or now, a scalar string). For 
each pattern to learn, you must specify an input array ref, and an output 
array ref as the next element. Example:
	
	my @set = (
		# inputs        outputs
		[ 1,2,3,4 ],  [ 1,3,5,6 ],
		[ 0,2,5,6 ],  [ 0,2,1,2 ]
	);


Inputs and outputs in the dataset can also be strings.

See the paragraph on measuring forgetfulness, below. There are 
two learn_set()-specific option tags available:

	flag     =>  $flag
	pattern  =>  $row

If "flag" is set to some TRUE value, as in "flag => 1" in the hash of options, or if the option "flag"
is not set, then it will return a percentage representing the amount of forgetfulness. Otherwise,
learn_set() will return an integer specifying the amount of forgetfulness when all the patterns 
are learned. 

If "pattern" is set, then learn_set() will use that pattern in the data set to measure forgetfulness by.
If "pattern" is omitted, it defaults to the first pattern in the set. Example:

	my @set = (
		[ 0,1,0,1 ],  [ 0 ],
		[ 0,0,1,0 ],  [ 1 ],
		[ 1,1,0,1 ],  [ 2 ],  #  <---
		[ 0,1,1,0 ],  [ 3 ]
	);
	
If you wish to measure forgetfulness as indicated by the line with the arrow, then you would
pass 2 as the "pattern" option, as in "pattern => 2".

Now why the heck would anyone want to measure forgetfulness, you ask? Maybe you wonder how I 
even measure that. Well, it is not a vital value that you have to know. I just put in a 
"forgetfulness measure" one day because I thought it would be neat to know. 

How the module measures forgetfulness is this: First, it learns all the patterns 
in the set provided, then it will run the very first pattern (or whatever pattern
is specified by the "row" option) in the set after it has finished learning. It 
will compare the run() output with the desired output as specified in the dataset. 
In a perfect world, the two should match exactly. What we measure is how much that 
they don't match, thus the amount of forgetfulness the network has.

Example (from examples/ex_dow.pl):

	# Data from 1989 (as far as I know..this is taken from example data on BrainMaker)
	my @data = ( 
		#	Mo  CPI  CPI-1 CPI-3 	Oil  Oil-1 Oil-3    Dow   Dow-1 Dow-3   Dow Ave (output)
		[	1, 	229, 220,  146, 	20.0, 21.9, 19.5, 	2645, 2652, 2597], 	[	2647  ],
		[	2, 	235, 226,  155, 	19.8, 20.0, 18.3, 	2633, 2645, 2585], 	[	2637  ],
		[	3, 	244, 235,  164, 	19.6, 19.8, 18.1, 	2627, 2633, 2579], 	[	2630  ],
		[	4, 	261, 244,  181, 	19.6, 19.6, 18.1, 	2611, 2627, 2563], 	[	2620  ],
		[	5, 	276, 261,  196, 	19.5, 19.6, 18.0, 	2630, 2611, 2582], 	[	2638  ],
		[	6, 	287, 276,  207, 	19.5, 19.5, 18.0, 	2637, 2630, 2589], 	[	2635  ],
		[	7, 	296, 287,  212, 	19.3, 19.5, 17.8, 	2640, 2637, 2592], 	[	2641  ] 		
	);
	
	# Learn the set
	my $f = $net->learn_set(\@data, 
					  inc	=>	0.1,	
					  max	=>	500,
					 );
			
	# Print it 
	print "Forgetfullness: $f%";

    
This is a snippet from the example script examples/ex_dow.pl, which demonstrates DOW average
prediction for the next month. A simpler set definition would be as such:

	my @data = (
		[ 0,1 ], [ 1 ],
		[ 1,0 ], [ 0 ]
	);
	
	$net->learn_set(\@data);
	
Same effect as above, but not the same data (obviously).


=item $net->run($input_map_ref);

This method will apply the given array ref at the input layer of the neural network, and
it will return an array ref to the output of the network. run() will now automatically crunch() 
a string given as an input (See the crunch() method for info on crunching).

Example Usage:
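(A minimal usage sketch; the input values below are arbitrary.)

	my $inputs = [ 1,1,0,1 ];
	my $out    = $net->run($inputs);   # array ref of output values
	print "First output: $out->[0]\n";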
	

Mesh.pm  view on Meta::CPAN

	$net->uncrunch($net->run($input_map_ref));

All run_uc() does is automatically call uncrunch() on the output, regardless
of whether the input was crunch()-ed or not.
	

=item $net->run_set($set);
                                                                                    
This takes an array ref of the same structure as the learn_set() method, above. It returns
an array ref. Each element in the returned array ref represents the output for the corresponding
element in the dataset passed. Uses run() internally.


=item $net->get_outs($set);

Simple utility function which takes an array ref of the same structure as the learn_set() method,
above. It returns an array ref of the same type as run_set() wherein each element contains an
output value. The output values are the target values specified in the $set passed. Each element
in the returned array ref represents the output value for the corresponding row in the dataset
passed. (A row is two elements of the dataset together, see learn_set() for dataset structure.)

=item $net->load_set($file,$column,$separator);

Loads a CSV-like dataset from disk.

Returns a data set of the same structure as required by the
learn_set() method. $file is the disk file to load the set from.
$column is an optional variable specifying the column in the 
data set to use as the class attribute. $column defaults to 0.
$separator is an optional variable specifying the separator
character between values. $separator defaults to ',' (a single comma). 
NOTE: This does not handle quoted fields, or any record
separator other than "\n".

The returned array ref is suitable for passing directly to
learn_set() or get_outs().
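
For illustration, a minimal sketch of the load_set()/learn_set()/run_set()/get_outs()
round trip (the file name, column index, and option values below are arbitrary
examples):

	my $set    = $net->load_set('mydata.csv', 0);   # class attribute in column 0
	$net->learn_set($set, inc => 0.1, max => 500);

	my $got    = $net->run_set($set);    # actual network outputs
	my $wanted = $net->get_outs($set);   # target outputs from the set

	for my $i (0..$#{$got}) {
		print "Row $i: got $got->[$i]->[0], wanted $wanted->[$i]->[0]\n";
	}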
	

=item $net->range();

Mesh.pm  view on Meta::CPAN

=item $net->load($filename);

This will load from disk any network saved by save() and completely restore the internal
state to the point at which save() was called.

If the file is of an invalid file type, then load() will
return undef. Use the error() method, below, to print the error message.

If there were no errors, it will return a reference to $net.

UPDATE: $filename can now be a newline-separated set of mesh data. This enables you
to do $net->load(join("\n",<DATA>)) and other fun things. I added this mainly
for a demo I'm writing but am not quite done with yet. So, Cheers!
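
A minimal save-and-restore sketch (the file name is an arbitrary example):

	$net->save('my.mesh');
	# ... later, or in another script ...
	$net->load('my.mesh')
		or die "Could not restore network: ", $net->error();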



=item $net->activation($layer,$type);

This sets the activation type for layer C<$layer>.

C<$type> can be one of four values:

Mesh.pm  view on Meta::CPAN

	\&code_ref;

"sigmoid_1" is an alias for "sigmoid". 

The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:

	$output = &$code_ref($sum_of_inputs, $self);
	
The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.

See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.

The activation type for each layer is preserved across load/save calls. 

EXCEPTION: Due to the constraints of Perl, I cannot load/save the actual subs that the code
ref option points to. Therefore, you must re-apply any code ref activation types after a 
load() call.
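
For illustration, a minimal sketch of re-applying a code-ref activation after a
load() call (the layer index and the activation body are arbitrary examples):

	my $act = sub {
		my ($sum, $node) = @_;   # $node is the blessed node ref
		return abs($sum);        # hypothetical: output the absolute value of the sum
	};

	$net->load('my.mesh');
	$net->activation(1, $act);   # code refs are not saved, so re-apply them here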

Mesh.pm  view on Meta::CPAN

	}
	..
You can also pass an array containing the range
values (not array ref), or you can pass a comma-
separated list of values as parameters:

	$net->activation(4,range(@numbers));
	$net->activation(4,range(6,15,26,106,28,3));

Note: when using a range() activator, train the
net TWICE on the data set, because the first time
the range() function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.

The actual code that implements the range closure is
a bit convoluted, so I will expand on it here as a simple
tutorial for custom activation functions.

	= line 1 = 	sub {
	= line 2 =		my @values = ( 6..10 );

Mesh.pm  view on Meta::CPAN


ramp() performs smooth ramp activation between 0 and 1 if $r is 1, 
or between -1 and 1 if $r is 2. $r defaults to 1.	

You can get this into your namespace with the ':acts' export 
tag as so:
	
	use AI::NeuralNet::Mesh ':acts';

Note: when using a ramp() activator, train the
net at least TWICE on the data set, because the first 
time the ramp() function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.

No code to show here, as it is almost exactly the same as range().
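
A usage sketch (the layer index and $r value are arbitrary examples):

	use AI::NeuralNet::Mesh ':acts';
	$net->activation(2, ramp(2));   # ramp layer 2's output between -1 and 1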


=item and_gate($threshold);

Self-explanatory, pretty much. This turns the node into a basic AND gate.

Mesh.pm  view on Meta::CPAN

Sejnowski and R.  Paul Gorman, performed better than a nearest-neighbor
classifier.

The kinds of problems best solved by neural networks are those that people
are good at such as association, evaluation and pattern recognition.
Problems that are difficult to compute and do not require perfect answers,
just very good answers, are also best done with neural networks.  A quick,
very good response is often more desirable than a more accurate answer which
takes longer to compute.  This is especially true in robotics or industrial
controller applications.  Predictions of behavior and general analysis of
data are also affairs for neural networks.  In the financial arena, consumer
loan analysis and financial forecasting make good applications.  New network
designers are working on weather forecasts by neural networks (Myself
included).  Currently, doctors are developing medical neural networks as an
aid in diagnosis.  Attorneys and insurance companies are also working on
neural networks to help estimate the value of claims.

Neural networks are poor at precise calculations and serial processing. They
are also unable to predict or recognize anything that does not inherently
contain some sort of pattern.  For example, they cannot predict the lottery,
since this is a random process.  It is unlikely that a neural network could

Mesh.pm  view on Meta::CPAN

It might be good to look at the source for this package (in the Mesh.pm file) if you
plan to do a lot of or extensive custom node activation types.

=item AI::NeuralNet::Mesh::cap

This is applied to the input layer of the mesh to prevent the mesh from trying to recursively
adjust weights out through the inputs.

=item AI::NeuralNet::Mesh::output

This is simply a data collector package clamped onto the output layer to record the data 
as it comes out of the mesh. 


=head1 BUGS

This is a beta release of C<AI::NeuralNet::Mesh>, and that holding true, I am sure 
there are probably bugs in here which I just have not found yet. If you find bugs in this module, I would 
appreciate it greatly if you could report them to me at F<E<lt>jdb@wcoil.comE<gt>>,
or, even better, try to patch them yourself and figure out why the bug is being buggy, and
send me the patched code, again at F<E<lt>jdb@wcoil.comE<gt>>. 

README  view on Meta::CPAN

** What is this?

AI::NeuralNet::Mesh is an optimized, accurate neural network Mesh.
It was designed with accuracy and speed in mind. 

This network model is very flexible. It will allow for classic binary
operation or any range of integer or floating-point inputs you care
to provide. With this you can change activation types on a per node or
per layer basis (you can even include your own anonymous subs as 
activation types). You can add sigmoid transfer functions and control
the threshold. You can learn data sets in batch, and load CSV data
set files. You can do almost anything you need to with this module.
This code is designed to be flexible. Any new ideas for this module?
Contact Josiah Bryan at <jdb@wcoil.com>

This module is designed to also be a customizable, extensible 
neural network simulation toolkit. Through a combination of setting
the $Connection variable and using custom activation functions, as
well as basic package inheritance, you can simulate many different
types of neural network structures with very little new code written
by you. (See ex_aln.pl)

README  view on Meta::CPAN


This fixed a compatibility issue that 0.43 had with Perl 5.3.3

With this version I have gone through and tuned up many areas
of this module, including the descent algorithm in learn(),
as well as four custom activation functions, and several export 
tag sets. With this release, I have also included a few
new and more practical example scripts. (See ex_wine.pl) This release 
also includes a simple example of an ALN (Adaptive Logic Network) made
with this module. See ex_aln.pl. Also in this release is support for 
loading data sets from simple CSV-like files. See the load_set() method 
for details. This version also fixes a big bug that I never knew about 
until writing some demos for this version - that is, when trying to use 
more than one output node, the mesh would freeze in learning. But, that 
is fixed now, and you can have as many outputs as you want (how does 3 
inputs and 50 outputs sound? :-) Also in this release is output range
limiting via the range() activation function.

** What do you think?

Now I know you people are out there that are using the module...

examples/ex_add2.pl  view on Meta::CPAN

=begin
    
    File:	examples/ex_add2.pl
	Author: Rodin Porrata, <rodin@ursa.llnl.gov>
	Desc: 

		This script runs a test of the network's ability to add 
		and remember data sets, as well as testing the optimum "inc" to 
		learn and the optimum number of layers for a network.

=cut

	use AI::NeuralNet::Mesh;
	use Benchmark;
	use English;
	
	my $ofile = "addnet_data.txt";
	
	open( OUTFP, ">$ofile" ) or die "Could not open output file\n";
	
	my ( $layers, $inputs, $outputs, $top, $inc, $runtime,
	$forgetfulness );
	my @answers;
	my @predictions;
	my @percent_diff;
	
	$inputs = 3;

examples/ex_add2.pl  view on Meta::CPAN

	#....................................................
	sub addnet
	{
	 print "\nCreate a new net with $layers layers, 3 inputs, and 1 output\n";
	 my $net = AI::NeuralNet::Mesh->new($layers,3,1);
	
	 # Disable debugging
	 $net->debug(0);
	
	
	 my @data = (
	  [   2633, 2665, 2685],  [ 2633 + 2665 + 2685 ],
	  [   2623, 2645, 2585],  [ 2623 + 2645 + 2585 ],
	  [  2627, 2633, 2579],  [ 2627 + 2633 + 2579 ],
	  [   2611, 2627, 2563],  [ 2611 + 2627 + 2563 ],
	  [  2640, 2637, 2592],  [ 2640 + 2637 + 2592 ]
	 );
	
	 print "Learning started, will cycle $top times with inc = $inc\n";
	
	  # Make it learn the whole dataset $top times
	  my @list;
	
	 my $t1=new Benchmark;
	 for my $a (1..$top)
	 {
	  print "Outer Loop: $a : ";
	
	  $forgetfulness = $net->learn_set( \@data,
	           inc  => $inc,
	           max  => 500,
	           error => -1);
	
	  print "Forgetfulness: $forgetfulness %\n";
	
	 }
	 my $t2=new Benchmark;
	
	 $runtime = timediff($t2,$t1);

examples/ex_add2.pl  view on Meta::CPAN

	      [ 2345, 2543, 3000 ],
	      [ 2654, 2234, 2534 ] );
	
	    test_net( $net, @input );
	}
	#.....................................................................
	 sub test_net {
	  my @set;
	  my $fb;
	  my $net = shift;
	  my @data = @_;
	  undef @percent_diff; #@answers; undef @predictions;
	
	  for( $i=0; defined( $data[$i] ); $i++ ){
	   @set = @{ $data[$i] };
	   $fb = $net->run(\@set)->[0];
	   # Print output
	   print "Test Factors: (",join(',',@set),")\n";
	   $answer = eval( join( '+',@set ));
	   push @percent_diff, 100.0 * abs( $answer - $fb )/ $answer;
	   print "Prediction : $fb      answer: $answer\n";
	  }
	 }
	
	

examples/ex_alpha.pl  view on Meta::CPAN

		{
			nodes		=>	35,
			activation	=>	linear
		},
		{
			nodes		=>	1,
			activation	=>	linear,
		}
	]);
	
	# Debug level of 4 gives JUST learn loop iteration benchmark and comparison data 
	# as learning progresses.
	$net->debug(4);

	my $letters = [            # All prototype inputs        
        [
        0,1,1,1,0,             # Inputs are   
        1,0,0,0,1,             #  5*7 digitized characters 
        1,0,0,0,1,              
        1,1,1,1,1,
        1,0,0,0,1,             # This is the alphabet of the

examples/ex_bmp.pl  view on Meta::CPAN

	# Set resolution
	my $xres=5;
	my $yres=5;
	
	# Create a new net with 1 layer, $xres*$yres inputs, and 1 output
	my $net = AI::NeuralNet::Mesh->new(1,$xres*$yres,1);
	
	# Enable debugging
	$net->debug(4);
	
	# Create datasets.
	my @data = ( 
		[	0,1,1,0,0,
			0,0,1,0,0,
			0,0,1,0,0,
			0,0,1,0,0,
			0,1,1,1,2	],		[	1	],
		
		[	1,1,1,0,0,
			0,0,0,1,0,
			0,1,1,1,0,
			1,0,0,0,0,

examples/ex_bmp.pl  view on Meta::CPAN

			0,0,0,1,0,
			1,1,1,1,2	],		[	5	],
		
	);
    
    
	# If we haven't saved the net already, do the learning
	if(!$net->load('images.mesh')) {
		print "\nLearning started...\n";
		
		# Make it learn the whole dataset $top times
		my @list;
		my $top=3;
		for my $a (0..$top) {
			my $t1=new Benchmark;
			print "\n\nOuter Loop: $a\n";
			
			# Test forgetfulness
			my $f = $net->learn_set(\@data,	inc => 0.1);
			
			# Print it 
			print "\n\nForgetfullness: $f%\n";

			# Save net to disk				
			$net->save('images.mesh');

			my $t2=new Benchmark;
			my $td=timediff($t2,$t1);
			print "\nLoop $a took ",timestr($td),"\n";

examples/ex_bmp2.pl  view on Meta::CPAN

		corrupted "J" and displays the results of the networks 
		output.

=cut

    use AI::NeuralNet::Mesh;

	# Create a new network with 1 layer of 35 nodes and 1 output node.
    my $net = new AI::NeuralNet::Mesh(1,35,1);
	
	# Debug level of 4 gives JUST learn loop iteration benchmark and comparison data 
	# as learning progresses.
	$net->debug(4);
	
	# Create our model input
	my @map	=	(1,1,1,1,1,
				 0,0,1,0,0,
				 0,0,1,0,0,
				 0,0,1,0,0,
				 1,0,1,0,0,
				 1,0,1,0,0,

examples/ex_dow.pl  view on Meta::CPAN


    use AI::NeuralNet::Mesh;
	use Benchmark;

	# Create a new net with 2 layers, 9 inputs, and 1 output
        my $net = AI::NeuralNet::Mesh->new(2,9,1);
	
	# Set debug level to 2
        $net->debug(2);
	
	# Create datasets.
	#	Note that these are fictitious values shown for illustration purposes
	#	only.  In the example, CPI is a certain month's consumer price
	#	index, CPI-1 is the index one month before, CPI-3 is the index 3
	#	months before, etc.

	my @data = ( 
		#	Mo  CPI  CPI-1 CPI-3 	Oil  Oil-1 Oil-3    Dow   Dow-1 Dow-3   Dow Ave (output)
		[	1, 	229, 220,  146, 	20.0, 21.9, 19.5, 	2645, 2652, 2597], 	[	2647  ],
		[	2, 	235, 226,  155, 	19.8, 20.0, 18.3, 	2633, 2645, 2585], 	[	2637  ],
		[	3, 	244, 235,  164, 	19.6, 19.8, 18.1, 	2627, 2633, 2579], 	[	2630  ],
		[	4, 	261, 244,  181, 	19.6, 19.6, 18.1, 	2611, 2627, 2563], 	[	2620  ],
		[	5, 	276, 261,  196, 	19.5, 19.6, 18.0, 	2630, 2611, 2582], 	[	2638  ],
		[	6, 	287, 276,  207, 	19.5, 19.5, 18.0, 	2637, 2630, 2589], 	[	2635  ],
		[	7, 	296, 287,  212, 	19.3, 19.5, 17.8, 	2640, 2637, 2592], 	[	2641  ] 		
	);
    
    
	# If we haven't saved the net already, do the learning
        if(!$net->load('DOW.mesh')) {
		print "\nLearning started...\n";
		
		# Make it learn the whole dataset $top times
		my @list;
		my $top=1;
		for my $a (0..$top) {
			my $t1=new Benchmark;
			print "\n\nOuter Loop: $a\n";
			
			# Test forgetfulness
			my $f = $net->learn_set(\@data,	inc		=>	0.2,	
											max		=>	2000,
											error	=>	-1);
			
			# Print it 
			print "\n\nForgetfullness: $f%\n";

			# Save net to disk				
            $net->save('DOW.mesh');
            
			my $t2=new Benchmark;
			my $td=timediff($t2,$t1);
			print "\nLoop $a took ",timestr($td),"\n";
		}
	}
                                                                          
	# Run a prediction using fake data
	#			Month	CPI  CPI-1 CPI-3 	Oil  Oil-1 Oil-3    Dow   Dow-1 Dow-3    
	my @set=(	10,		352, 309,  203, 	18.3, 18.7, 16.1, 	2592, 2641, 2651	  ); 
	
	# Dow Ave (output)	
	my $fb=$net->run(\@set)->[0];
	
	# Print output
	print "\nTest Factors: (",join(',',@set),")\n";
	print "DOW Prediction for Month #11: $fb\n";
	

examples/ex_synop.pl  view on Meta::CPAN

	
	# Add a small amount of randomness to the network
	$net->random(0.001);

	# Demonstrate a simple learn() call
	my @inputs = ( 0,0,1,1,1 );
	my @outputs = ( 1,0,1,0,1 );
	
	print $net->learn(\@inputs, \@outputs),"\n";

	# Create a data set to learn
	my @set = (
		[ 2,2,3,4,1 ], [ 1,1,1,1,1 ],
		[ 1,1,1,1,1 ], [ 0,0,0,0,0 ],
		[ 1,1,1,0,0 ], [ 0,0,0,1,1 ]	
	);
	
	# Demo learn_set()
	my $f = $net->learn_set(\@set);
	print "Forgetfulness: $f unit\n";
	
	# Crunch a bunch of strings and return array refs
	my $phrase1 = $net->crunch("I love neural networks!");
	my $phrase2 = $net->crunch("Jay Leno is weird.");
	my $phrase3 = $net->crunch("The rain in spain...");
	my $phrase4 = $net->crunch("Tired of word crunching yet?");

	# Make a data set from the array refs
	my @phrases = (
		$phrase1, $phrase2,
		$phrase3, $phrase4
	);

	# Learn the data set	
	$net->learn_set(\@phrases);
	
	# Run a test phrase through the network
	my $test_phrase = $net->crunch("I love neural networking!");
	my $result = $net->run($test_phrase);
	
	# Get this, it prints "Jay Leno is  networking!" ...  LOL!
	print $net->uncrunch($result),"\n";

examples/ex_wine.pl  view on Meta::CPAN

=begin

	File:   examples/ex_wine.pl
	Author: Josiah Bryan, <jdb@wcoil.com>
    Desc:
		
		This demonstrates wine cultivar prediction using the
		AI::NeuralNet::Mesh module.
		
        This script uses data that are the results of a chemical analysis 
        of wines grown in the same region in Italy but derived from three
	    different cultivars. The analysis determined the quantities 
	    of 13 constituents found in each of the three types of wines. 

		The inputs of the net represent 13 separate attributes
		of the wine's chemical analysis, as follows:
		
		 	1)  Alcohol
		 	2)  Malic acid
		 	3)  Ash

examples/ex_wine.pl  view on Meta::CPAN

		 	12) OD280/OD315 of diluted wines
		 	13) Proline            
		
		There are 168 total examples, with the class distribution
		as follows:
		
			class 1: 59 instances
			class 2: 71 instances
			class 3: 48 instances
			
		The datasets are stored in wine.dat, and the first
		column on every row is the class attribute for that
		row.

=cut

    use AI::NeuralNet::Mesh;
	use Benchmark;

	# Create a new net
    my $net = AI::NeuralNet::Mesh->new([13,45,1]);

	# Set activation on output node to constrain values
	# to a specific range of values.    
    $net->activation(2,range(1..3));
	
	# Enable debugging
	$net->verbose(4);

	# Load the data set
	my $data = $net->load_set('wine.dat',0);
	
	# Separate data based on class
	my $sets=[];
	for my $i (0..$#{$data}/2) {
		my $c = $data->[$i*2+1]->[0];
		print "Class of set $i: $c                                  \r";
		# inputs
		$sets->[$c]->[++$#{$sets->[$c]}] = $data->[$i*2];
		# class
		$sets->[$c]->[++$#{$sets->[$c]}] = $data->[$i*2+1];
	}                                  
	
			
	for(0..$#{$sets}) {
		next if(!defined $sets->[$_]->[0]);
		print "Size of set $_: ",$#{$sets->[$_]}/2,"\n";
	}
	
	# If we haven't saved the net already, do the learning
    if(!$net->load('wine.mesh')) {
		print "\nLearning started...\n";
		
		# Make it learn the whole dataset $top times
		my @list;
		my $top=5;
		for my $a (0..$top) {
			print "\n\nOuter Loop: $a\n";
			
			for(0..$#{$sets}) {
				next if(!defined $sets->[$_]->[0]);
				my $t1=new Benchmark;
				
				# Test forgetfulness

mesh.htm  view on Meta::CPAN

<HR>
<H1><A NAME="version & updates">VERSION &amp; UPDATES</A></H1>
<P>This is version <STRONG>0.43</STRONG>, the third release of this module.</P>
<P>With this version I have gone through and tuned up many areas
of this module, including the descent algorithm in learn(),
as well as four custom activation functions, and several export 
tag sets. With this release, I have also included a few
new and more practical example scripts. (See ex_wine.pl) This release 
also includes a simple example of an ALN (Adaptive Logic Network) made
with this module. See ex_aln.pl. Also in this release is support for 
loading data sets from simple CSV-like files. See the <A HREF="#item_load_set"><CODE>load_set()</CODE></A> method 
for details. This version also fixes a big bug that I never knew about 
until writing some demos for this version - that is, when trying to use 
more than one output node, the mesh would freeze in learning. But, that 
is fixed now, and you can have as many outputs as you want (how does 3 
inputs and 50 outputs sound? :-)</P>
<P>
<HR>
<H1><A NAME="description">DESCRIPTION</A></H1>
<P>AI::NeuralNet::Mesh is an optimized, accurate neural network Mesh.
It was designed with accuracy and speed in mind.</P>
<P>This network model is very flexible. It will allow for classic binary
operation or any range of integer or floating-point inputs you care
to provide. With this you can change activation types on a per node or
per layer basis (you can even include your own anonymous subs as 
activation types). You can add sigmoid transfer functions and control
the threshold. You can learn data sets in batch, and load CSV data
set files. You can do almost anything you need to with this module.
This code is designed to be flexible. Any new ideas for this module?
See AUTHOR, below, for contact info.</P>
<P>This module is designed to also be a customizable, extensible 
neural network simulation toolkit. Through a combination of setting
the $Connection variable and using custom activation functions, as
well as basic package inheritance, you can simulate many different
types of neural network structures with very little new code written
by you.</P>
<P>In this module I have included a more accurate form of ``learning'' for the
mesh. This form performs descent toward a local error minimum (0) on a 
directional delta, rather than the desired value for that node. This allows
for better, and more accurate results with larger datasets. This module also
uses a simpler recursion technique which, surprisingly, is more accurate than
the original technique that I've used in other ANNs.</P>
<P>
<HR>
<H1><A NAME="exports">EXPORTS</A></H1>
<P>This module exports three functions by default:</P>
<PRE>
        range
        intr
        pdiff

mesh.htm  view on Meta::CPAN

        sigmoid    [ sigmoid_1 ]  ( only positive sigmoid )
        sigmoid_2                 ( positive / 0 /negative sigmoid )
        \&amp;code_ref;</PRE>
<P>``sigmoid_1'' is an alias for ``sigmoid''.</P>
<P>The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:</P>
<PRE>
        $output = &amp;$code_ref($sum_of_inputs, $self);
</PRE>
<P>The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.</P>
<P>See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.</P>
<P>Three of the activation syntaxes are shown in the first constructor above, the ``linear'',
``sigmoid'' and code ref types.</P>
<P>You can also set the activation and threshold values after network creation with the
<A HREF="#item_activation"><CODE>activation()</CODE></A> and <A HREF="#item_threshold"><CODE>threshold()</CODE></A> methods.</P>
<P></P>
<P></P>
<DT><STRONG><A NAME="item_learn">$net-&gt;learn($input_map_ref, $desired_result_ref [, options ]);</A></STRONG><BR>

mesh.htm  view on Meta::CPAN

was always on. In this release I have looked at several other network types as well
as several texts and decided that it would be better to not use increment degrading. The
option is still there for those that feel the inclination to use it. I have found some areas
that do need the degrade flag to work at a faster speed. See test.pl for an example. If
the degrade flag wasn't in test.pl, it would take a very long time to learn.</P>
<P></P>
<DT><STRONG><A NAME="item_learn_set">$net-&gt;learn_set(\@set, [ options ]);</A></STRONG><BR>
<DD>
This takes the same options as <A HREF="#item_learn"><CODE>learn()</CODE></A> (learn_set() uses <A HREF="#item_learn"><CODE>learn()</CODE></A> internally) 
and allows you to specify a set to learn, rather than individual patterns. 
A dataset is an array reference with at least two elements in the array, 
each element being another array reference (or now, a scalar string). For 
each pattern to learn, you must specify an input array ref, and an output 
array ref as the next element. Example:

<PRE>

        my @set = (
                # inputs        outputs
                [ 1,2,3,4 ],  [ 1,3,5,6 ],
                [ 0,2,5,6 ],  [ 0,2,1,2 ]
        );</PRE>
<P>Inputs and outputs in the dataset can also be strings.</P>
<P>See the paragraph on measuring forgetfulness, below. There are 
two learn_set()-specific option tags available:</P>
<PRE>
        flag     =&gt;  $flag
        pattern  =&gt;  $row</PRE>
<P>If ``flag'' is set to some TRUE value, as in ``flag =&gt; 1'' in the hash of options, or if the option ``flag''
is not set, then it will return a percentage representing the amount of forgetfulness. Otherwise,
<A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> will return an integer specifying the amount of forgetfulness when all the patterns 
are learned.</P>
<P>If ``pattern'' is set, then <A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> will use that pattern in the data set to measure forgetfulness by.
If ``pattern'' is omitted, it defaults to the first pattern in the set. Example:</P>
<PRE>
        my @set = (
                [ 0,1,0,1 ],  [ 0 ],
                [ 0,0,1,0 ],  [ 1 ],
                [ 1,1,0,1 ],  [ 2 ],  #  &lt;---
                [ 0,1,1,0 ],  [ 3 ]
        );
</PRE>
<P>If you wish to measure forgetfulness as indicated by the line with the arrow, then you would
pass 2 as the &quot;pattern&quot; option, as in &quot;pattern =&gt; 2&quot;.</P>
<P>Now why the heck would anyone want to measure forgetfulness, you ask? Maybe you wonder how I 
even measure that. Well, it is not a vital value that you have to know. I just put in a 
``forgetfulness measure'' one day because I thought it would be neat to know.</P>
<P>How the module measures forgetfulness is this: First, it learns all the patterns 
in the set provided, then it will run the very first pattern (or whatever pattern
is specified by the ``row'' option) in the set after it has finished learning. It 
will compare the <A HREF="#item_run"><CODE>run()</CODE></A> output with the desired output as specified in the dataset. 
In a perfect world, the two should match exactly. What we measure is how much that 
they don't match, thus the amount of forgetfulness the network has.</P>
<P>Example (from examples/ex_dow.pl):</P>
<PRE>
        # Data from 1989 (as far as I know..this is taken from example data on BrainMaker)
        my @data = ( 
                #       Mo  CPI  CPI-1 CPI-3    Oil  Oil-1 Oil-3    Dow   Dow-1 Dow-3   Dow Ave (output)
                [       1,      229, 220,  146,         20.0, 21.9, 19.5,       2645, 2652, 2597],      [       2647  ],
                [       2,      235, 226,  155,         19.8, 20.0, 18.3,       2633, 2645, 2585],      [       2637  ],
                [       3,      244, 235,  164,         19.6, 19.8, 18.1,       2627, 2633, 2579],      [       2630  ],
                [       4,      261, 244,  181,         19.6, 19.6, 18.1,       2611, 2627, 2563],      [       2620  ],
                [       5,      276, 261,  196,         19.5, 19.6, 18.0,       2630, 2611, 2582],      [       2638  ],
                [       6,      287, 276,  207,         19.5, 19.5, 18.0,       2637, 2630, 2589],      [       2635  ],
                [       7,      296, 287,  212,         19.3, 19.5, 17.8,       2640, 2637, 2592],      [       2641  ]                 
        );

        # Learn the set
        my $f = $net-&gt;learn_set(\@data, 
                                          inc   =&gt;      0.1,    
                                          max   =&gt;      500,
                                         );

        # Print it 
        print &quot;Forgetfullness: $f%&quot;;</PRE>
<P></P>
<P>This is a snippet from the example script examples/ex_dow.pl, which demonstrates DOW average
prediction for the next month. A simpler set definition would be as such:</P>
<PRE>
        my @data = (
                [ 0,1 ], [ 1 ],
                [ 1,0 ], [ 0 ]
        );

        $net-&gt;learn_set(\@data);</PRE>
<P>Same effect as above, but not the same data (obviously).</P>
<P></P>
<DT><STRONG><A NAME="item_run">$net-&gt;run($input_map_ref);</A></STRONG><BR>
<DD>
This method will apply the given array ref at the input layer of the neural network, and
it will return an array ref to the output of the network. <A HREF="#item_run"><CODE>run()</CODE></A> will now automatically <A HREF="#item_crunch"><CODE>crunch()</CODE></A> 
a string given as an input (See the <A HREF="#item_crunch"><CODE>crunch()</CODE></A> method for info on crunching).
<P>Example Usage:
</P>
<PRE>
        my $inputs  = [ 1,1,0,1 ];

mesh.htm  view on Meta::CPAN


<PRE>
        $net-&gt;uncrunch($net-&gt;run($input_map_ref));</PRE>
<P>All <A HREF="#item_run_uc"><CODE>run_uc()</CODE></A> does is automatically call <A HREF="#item_uncrunch"><CODE>uncrunch()</CODE></A> on the output, regardless
of whether the input was <A HREF="#item_crunch"><CODE>crunch()</CODE></A>-ed or not.</P>
<P></P>
<DT><STRONG><A NAME="item_run_set">$net-&gt;run_set($set);</A></STRONG><BR>
<DD>
<P>This takes an array ref of the same structure as the learn_set() method, above. It returns
an array ref. Each element in the returned array ref represents the output for the corresponding
element in the dataset passed. Uses run() internally.</P>
<DT><STRONG><A NAME="item_get_outs">$net-&gt;get_outs($set);</A></STRONG><BR>
<DD>
Simple utility function which takes an array ref of the same structure as the <A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> method,
above. It returns an array ref of the same type as <A HREF="#item_run_set"><CODE>run_set()</CODE></A> wherein each element contains an
output value. The output values are the target values specified in the $set passed. Each element
in the returned array ref represents the output value for the corresponding row in the dataset
passed. (A row is two elements of the dataset together, see <A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> for dataset structure.)
<P></P>
<DT><STRONG><A NAME="item_load_set">$net-&gt;load_set($file,$column,$separator);</A></STRONG><BR>
<DD>
Loads a CSV-like dataset from disk.
<P>Returns a data set of the same structure as required by the
<A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> method. $file is the disk file to load the set from.
$column is an optional variable specifying the column in the 
data set to use as the class attribute. $column defaults to 0.
$separator is an optional variable specifying the separator
character between values. $separator defaults to ',' (a single comma). 
NOTE: This does not handle quoted fields, or any record
separator other than ``\n''.</P>
<P>The returned array ref is suitable for passing directly to
<A HREF="#item_learn_set"><CODE>learn_set()</CODE></A> or get_outs().</P>
<P></P>
<DT><STRONG><A NAME="item_range">$net-&gt;range();</A></STRONG><BR>
<DD>
See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions.

mesh.htm  view on Meta::CPAN

below.</P>
<P>If there were no errors, it will return a reference to $net.</P>
<P></P>
<DT><STRONG><A NAME="item_load">$net-&gt;load($filename);</A></STRONG><BR>
<DD>
This will load from disk any network saved by <A HREF="#item_save"><CODE>save()</CODE></A> and completely restore the internal
state to the point at which <A HREF="#item_save"><CODE>save()</CODE></A> was called.
<P>If the file is of an invalid file type, then <A HREF="#item_load"><CODE>load()</CODE></A> will
return undef. Use the <A HREF="#item_error"><CODE>error()</CODE></A> method, below, to print the error message.</P>
<P>If there were no errors, it will return a reference to $net.</P>
<P>UPDATE: $filename can now be a newline-separated set of mesh data. This enables you
to do $net-&gt;load(join(``\n'',&lt;DATA&gt;)) and other fun things. I added this mainly
for a demo I'm writing but am not quite done with yet. So, Cheers!</P>
<P></P>
<DT><STRONG><A NAME="item_activation">$net-&gt;activation($layer,$type);</A></STRONG><BR>
<DD>
This sets the activation type for layer <CODE>$layer</CODE>.
<P><CODE>$type</CODE> can be one of four values:</P>
<PRE>
        linear                    ( simply use sum of inputs as output )
        sigmoid    [ sigmoid_1 ]  ( only positive sigmoid )
        sigmoid_2                 ( positive / 0 /negative sigmoid )
        \&amp;code_ref;</PRE>
<P>``sigmoid_1'' is an alias for ``sigmoid''.</P>
<P>The code ref option allows you to have a custom activation function for that layer.
The code ref is called with this syntax:</P>
<PRE>
        $output = &amp;$code_ref($sum_of_inputs, $self);
</PRE>
<P>The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.</P>
<P>See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.</P>
<P>The activation type for each layer is preserved across load/save calls.</P>
<P>EXCEPTION: Due to the constraints of Perl, I cannot load/save the actual subs that the code
ref option points to. Therefore, you must re-apply any code ref activation types after a 
<A HREF="#item_load"><CODE>load()</CODE></A> call.</P>
<P></P>
<DT><STRONG><A NAME="item_node_activation">$net-&gt;node_activation($layer,$node,$type);</A></STRONG><BR>
<DD>

mesh.htm  view on Meta::CPAN

		activation	=&gt;	range 5..2
	}
	..
You can also pass an array containing the range
values (not array ref), or you can pass a comma-
separated list of values as parameters:</P>
<PRE>
        $net-&gt;activation(4,range(@numbers));
        $net-&gt;activation(4,range(6,15,26,106,28,3));</PRE>
<P>Note: when using a <A HREF="#item_range"><CODE>range()</CODE></A> activator, train the
net TWICE on the data set, because the first time
the <A HREF="#item_range"><CODE>range()</CODE></A> function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.</P>
<P>The actual code that implements the range closure is
a bit convoluted, so I will expand on it here as a simple
tutorial for custom activation functions.</P>
<PRE>
        = line 1 =      sub {
        = line 2 =              my @values = ( 6..10 );
        = line 3 =              my $sum   = shift;

mesh.htm  view on Meta::CPAN

<DT><STRONG><A NAME="item_ramp">ramp($r);</A></STRONG><BR>
<DD>
<A HREF="#item_ramp"><CODE>ramp()</CODE></A> performs smooth ramp activation between 0 and 1 if $r is 1, 
or between -1 and 1 if $r is 2. $r defaults to 1.
<P>You can get this into your namespace with the ':acts' export 
tag as so:
</P>
<PRE>
        use AI::NeuralNet::Mesh ':acts';</PRE>
<P>Note: when using a <A HREF="#item_ramp"><CODE>ramp()</CODE></A> activator, train the
net at least TWICE on the data set, because the first 
time the <A HREF="#item_ramp"><CODE>ramp()</CODE></A> function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.</P>
<P>No code to show here, as it is almost exactly the same as range().</P>
<P></P>
<DT><STRONG><A NAME="item_and_gate">and_gate($threshold);</A></STRONG><BR>
<DD>
Self-explanatory, pretty much. This turns the node into a basic AND gate.
$threshold is used to decide if an input is true or false (1 or 0). If 
an input is below $threshold, it is false. $threshold defaults to 0.5.

mesh.htm  view on Meta::CPAN

returns from an undersea mine and rock.  This classifier, designed by
Sejnowski and R.  Paul Gorman, performed better than a nearest-neighbor
classifier.</P>
<P>The kinds of problems best solved by neural networks are those that people
are good at such as association, evaluation and pattern recognition.
Problems that are difficult to compute and do not require perfect answers,
just very good answers, are also best done with neural networks.  A quick,
very good response is often more desirable than a more accurate answer which
takes longer to compute.  This is especially true in robotics or industrial
controller applications.  Predictions of behavior and general analysis of
data are also affairs for neural networks.  In the financial arena, consumer
loan analysis and financial forecasting make good applications.  New network
designers are working on weather forecasts by neural networks (Myself
included).  Currently, doctors are developing medical neural networks as an
aid in diagnosis.  Attorneys and insurance companies are also working on
neural networks to help estimate the value of claims.</P>
<P>Neural networks are poor at precise calculations and serial processing. They
are also unable to predict or recognize anything that does not inherently
contain some sort of pattern.  For example, they cannot predict the lottery,
since this is a random process.  It is unlikely that a neural network could
be built which has the capacity to think as well as a person does for two

mesh.htm  view on Meta::CPAN

It might be good to look at the source for this package (in the Mesh.pm file) if you
plan to do a lot of or extensive custom node activation types.
<P></P>
<DT><STRONG><A NAME="item_AI%3A%3ANeuralNet%3A%3AMesh%3A%3Acap">AI::NeuralNet::Mesh::cap</A></STRONG><BR>
<DD>
This is applied to the input layer of the mesh to prevent the mesh from trying to recursively
adjust weights out through the inputs.
<P></P>
<DT><STRONG><A NAME="item_AI%3A%3ANeuralNet%3A%3AMesh%3A%3Aoutput">AI::NeuralNet::Mesh::output</A></STRONG><BR>
<DD>
This is simply a data collector package clamped onto the output layer to record the data 
as it comes out of the mesh.
<P></P></DL>
<P>
<HR>
<H1><A NAME="bugs">BUGS</A></H1>
<P>This is a beta release of <CODE>AI::NeuralNet::Mesh</CODE>, and that holding true, I am sure 
there are probably bugs in here which I just have not found yet. If you find bugs in this module, I would 
appreciate it greatly if you could report them to me at <EM>&lt;<A HREF="mailto:jdb@wcoil.com">jdb@wcoil.com</A>&gt;</EM>,
or, even better, try to patch them yourself and figure out why the bug is being buggy, and
send me the patched code, again at <EM>&lt;<A HREF="mailto:jdb@wcoil.com">jdb@wcoil.com</A>&gt;</EM>.</P>


