Result:
found more than 436 distributions - search limited to the first 2001 files matching your query (run in 0.733)


AI-NNFlex


examples/add.pl  view on Meta::CPAN



$network->init();

# Taken from Mesh ex_add.pl
my $dataset = AI::NNFlex::Dataset->new([
[ 1,   1   ], [ 2    ],
[ 1,   2   ], [ 3    ],
[ 2,   2   ], [ 4    ],
[ 20,  20  ], [ 40   ],
[ 10,  10  ], [ 20   ],

examples/add.pl  view on Meta::CPAN

]);

my $err = 10;
# Stop after 4096 epochs -- don't want to wait more than that
for ( my $i = 0; ($err > 0.0001) && ($i < 4096); $i++ ) {
    $err = $dataset->learn($network);
    print "Epoch = $i error = $err\n";
}

foreach (@{$dataset->run($network)})
{
    foreach (@$_){print $_}
    print "\n";    
}



AI-NaiveBayes


lib/AI/NaiveBayes.pm  view on Meta::CPAN

algorithm.  This is a low-level class that accepts only pre-computed feature vectors
as input; see L<AI::Classifier::Text> for a text classifier that uses
this class.

Creation of an C<AI::NaiveBayes> classifier object out of training
data is done by L<AI::NaiveBayes::Learner>. For a quick start
you can use the limited C<train> class method, which trains the
classifier in a default way.

The classifier object is immutable.

lib/AI/NaiveBayes.pm  view on Meta::CPAN

=over 4

=item new( model => $model )

Internal. See L<AI::NaiveBayes::Learner> to learn how to create an C<AI::NaiveBayes>
classifier from training data.

=item train( LIST of HASHREFS )

Shortcut for creating a trained classifier using L<AI::NaiveBayes::Learner>'s default
settings.
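
For example (a sketch: the hashref layout follows the C<attributes>/C<labels>
convention used by L<AI::NaiveBayes::Learner>; the feature names here are made up):

    my $classifier = AI::NaiveBayes->train(
        { attributes => { sheep => 1, wool   => 1 }, labels => ['farming'] },
        { attributes => { fangs => 1, garlic => 1 }, labels => ['vampire'] },
    );
    my $result = $classifier->classify( { attributes => { wool => 1 } } );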



AI-NaiveBayes1


NaiveBayes1.pm  view on Meta::CPAN


Number of training instances.

=item C<{stat_labels}{$l}>

Label count in training data.

=item C<{stat_attributes}{$a}>

Statistics for an attribute: C<...{$value}{$label}> = count of
instances.

NaiveBayes1.pm  view on Meta::CPAN


For an attribute A one can specify:

    $nb->{smoothing}{A} = 'unseen count=0.5';

to provide a count for unseen data.  The count is taken into
consideration in training and prediction whenever unseen attribute
values are observed.  Zero probabilities can be prevented in this way.
A count other than 0.5 can be provided, but if it is <= 0 it will be
set to 0.5.  The method is similar to add-one smoothing.  A special
attribute value '*' is used for all unseen data.
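
A sketch of typical use around this option (attribute and label names are made
up; the C<add_instance>/C<train>/C<predict> calls are the module's usual workflow):

    my $nb = AI::NaiveBayes1->new;
    $nb->{smoothing}{color} = 'unseen count=0.5';    # smooth the 'color' attribute
    $nb->add_instance( attributes => { color => 'red' },   label => 'ripe'   );
    $nb->add_instance( attributes => { color => 'green' }, label => 'unripe' );
    $nb->train;
    # 'purple' was never seen in training; the 0.5 count prevents a zero probability
    my $p = $nb->predict( attributes => { color => 'purple' } );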

=head1 METHODS

=head2 Constructor Methods

NaiveBayes1.pm  view on Meta::CPAN

 # @attribute temperature real
 # @attribute humidity real
 # @attribute windy {TRUE, FALSE}
 # @attribute play {yes, no}
 # 
 # @data
 # sunny,85,85,FALSE,no
 # sunny,80,90,TRUE,no
 # overcast,83,86,FALSE,yes
 # rainy,70,96,FALSE,yes
 # rainy,68,80,FALSE,yes



AI-Nerl


examples/digits/deep_digits.pl  view on Meta::CPAN


use FindBin qw($Bin); 
chdir $Bin;

unless (-e "t10k-labels-idx1-ubyte.fits"){ die <<"NODATA";}
pull this data by running get_digits.sh
convert it to FITS by running idx_to_fits.pl
NODATA


my $images = rfits('t10k-images-idx3-ubyte.fits');
my $labels = rfits('t10k-labels-idx1-ubyte.fits');
my $y = identity(10)->range($labels->transpose)->sever;   # one-hot encode the labels
say 't10k data loaded';

my $nerl = AI::Nerl->new(
   # type => image,dims=>[28,28],...
   scale_input => 1/256,
);



AI-NeuralNet-BackProp


BackProp.pm  view on Meta::CPAN

		shift if(substr($_[0],0,4) eq 'AI::'); 
		my ($fa,$fb)=(shift,shift);
		# absolute percentage difference: |$fb - $fa| / $fa * 100, to 3 decimal places
		sprintf("%.3f",((($fb-$fa)*((($fb-$fa)<0)?-1:1))/$fa)*100);
	}
	
	# This sub will take an array ref of a data set, which it expects in this format:
	#   my @data_set = (	[ ...inputs... ], [ ...outputs ... ],
	#				   				   ... rows ...
	#				   );
	#
	# This sub returns the percentage of 'forgetfulness' when the net learns all the
	# data in the set in order. Usage:
	#
	#	 learn_set(\@data,[ options ]);
	#
	# Options are options in hash form. They can be of any form that $net->learn takes.
	#
	# It returns a percentage string.
	#
	sub learn_set {
		my $self	=	shift if(substr($_[0],0,4) eq 'AI::'); 
		my $data	=	shift;
		my %args	=	@_;
		my $len		=	$#{$data}/2-1;
		my $inc		=	$args{inc};
		my $max		=	$args{max};
	    my $error	=	$args{error};
	    my $p		=	(defined $args{flag})	?$args{flag}	   :1;
	    my $row		=	(defined $args{pattern})?$args{pattern}*2+1:1;
	    my ($fa,$fb);
		for my $x (0..$len) {
			print "\nLearning index $x...\n" if($AI::NeuralNet::BackProp::DEBUG);
			my $str = $self->learn( $data->[$x*2],			# The list of data to input to the net
					  		  		$data->[$x*2+1], 		# The output desired
					    			inc=>$inc,				# The starting learning gradient
					    			max=>$max,				# The maximum num of loops allowed
					    			error=>$error);			# The maximum (%) error allowed
			print $str if($AI::NeuralNet::BackProp::DEBUG); 
		}
			
		
		my $res;
		# if the pattern is a string (numerically 0) rather than an array ref, crunch it first
		$data->[$row] = $self->crunch($data->[$row]) if($data->[$row] == 0);
		
		if ($p) {
			$res=pdiff($data->[$row],$self->run($data->[$row-1]));
		} else {
			$res=$data->[$row]->[0]-$self->run($data->[$row-1])->[0];
		}
		return $res;
	}
	
	# This sub will take an array ref of a data set, which it expects in this format:
	#   my @data_set = (	[ ...inputs... ], [ ...outputs ... ],
	#				   				   ... rows ...
	#				   );
	#
	# This sub returns the percentage of 'forgetfulness' when the net learns all the
	# data in the set in RANDOM order. Usage:
	#
	#	 learn_set_rand(\@data,[ options ]);
	#
	# Options are options in hash form. They can be of any form that $net->learn takes.
	#
	# It returns a true value.
	#
	sub learn_set_rand {
		my $self	=	shift if(substr($_[0],0,4) eq 'AI::'); 
		my $data	=	shift;
		my %args	=	@_;
		my $len		=	$#{$data}/2-1;
		my $inc		=	$args{inc};
		my $max		=	$args{max};
	    my $error	=	$args{error};
	    my @learned;
		while(1) {
			_GET_X:
			my $x=$self->intr(rand()*$len);
			goto _GET_X if($learned[$x]);
			$learned[$x]=1;
			print "\nLearning index $x...\n" if($AI::NeuralNet::BackProp::DEBUG); 
			my $str =  $self->learn($data->[$x*2],			# The list of data to input to the net
					  		  		$data->[$x*2+1], 		# The output desired
					    			inc=>$inc,				# The starting learning gradient
			 		    			max=>$max,				# The maximum num of loops allowed
					    			error=>$error);			# The maximum (%) error allowed
			print $str if($AI::NeuralNet::BackProp::DEBUG); 
		}

BackProp.pm  view on Meta::CPAN

			}
			AI::NeuralNet::BackProp::out1 "\n";             
		}
		
		# These next two loops connect the _run and _map packages (the IO interface) to 
		# the start and end 'layers', respectively. These are how we insert data into
		# the network and how we get data from the network. The _run and _map packages 
		# are connected to the neurons so that the neurons think that the IO packages are
		# just another neuron, sending data on. But the IO packages are special packages designed
		# with the same methods as neurons, just meant for specific IO purposes. You will
		# never need to call any of the IO packages directly. Instead, they are called whenever
		# you use the run(), map(), or learn() methods of your network.
        
    	AI::NeuralNet::BackProp::out2 "\nMapping I (_run package) connections to network...\n";

BackProp.pm  view on Meta::CPAN

		binmode(FILE);
		
		my $tmp;
		my @image;
		my @palette;
		my $data;
		
		# Read header
		read(FILE,$tmp,128);
		
		# load the data and decompress into buffer
		my $count=0;
		
		while($count<320*200) {
		     # get the first piece of data
		     read(FILE,$data,1);
	         $data=ord($data);
	         
		     # is this a rle?
		     if ($data>=192 && $data<=255) {
		        # how many bytes in run?
		        my $num_bytes = $data-192;
		
		        # get the actual $data for the run
		        read(FILE, $data, 1);
				$data=ord($data);
		        # replicate $data in buffer num_bytes times
		        while($num_bytes-->0) {
	            	$image[$count++] = $data;
		        } # end while
		     } else {
		        # actual $data, just copy it into buffer at next location
		        $image[$count++] = $data;
		     } # end else not rle
		}
		
		# move to end of file then back up 768 bytes i.e. to beginning of palette
		seek(FILE,-768,2);

BackProp.pm  view on Meta::CPAN

	my @inputs = ( 0,0,1,1,1 );
	my @outputs = ( 1,0,1,0,1 );
	
	print $net->learn(\@inputs, \@outputs),"\n";

	# Create a data set to learn
	my @set = (
		[ 2,2,3,4,1 ], [ 1,1,1,1,1 ],
		[ 1,1,1,1,1 ], [ 0,0,0,0,0 ],
		[ 1,1,1,0,0 ], [ 0,0,0,1,1 ]	
	);

BackProp.pm  view on Meta::CPAN

	my $phrase1 = $net->crunch("I love neural networks!");
	my $phrase2 = $net->crunch("Jay Leno is weird.");
	my $phrase3 = $net->crunch("The rain in spain...");
	my $phrase4 = $net->crunch("Tired of word crunching yet?");

	# Make a data set from the array refs
	my @phrases = (
		$phrase1, $phrase2,
		$phrase3, $phrase4
	);

	# Learn the data set	
	$net->learn_set(\@phrases);
	
	# Run a test phrase through the network
	my $test_phrase = $net->crunch("I love neural networking!");
	my $result = $net->run($test_phrase);

BackProp.pm  view on Meta::CPAN

reaches $maximum_iterations.


=item $net->learn_set(\@set, [ options ]);

UPDATE: Inputs and outputs in the dataset can now be strings. See the information on auto-crunching
in learn().

This takes the same options as learn() and allows you to specify a set to learn, rather
than individual patterns. A dataset is an array reference with at least two elements in the
array, each element being another array reference (or now, a scalar string). For each pattern to
learn, you must specify an input array ref, and an output array ref as the next element. Example:
	
	my @set = (
		# inputs        outputs

BackProp.pm  view on Meta::CPAN

If "flag" is set to some TRUE value, as in "flag => 1" in the hash of options, or if the option "flag"
is not set, then it will return a percentage representing the amount of forgetfulness. Otherwise,
learn_set() will return an integer specifying the amount of forgetfulness when all the patterns
are learned.

If "pattern" is set, then learn_set() will use that pattern in the data set to measure forgetfulness by.
If "pattern" is omitted, it defaults to the first pattern in the set. Example:

	my @set = (
		[ 0,1,0,1 ],  [ 0 ],
		[ 0,0,1,0 ],  [ 1 ],

BackProp.pm  view on Meta::CPAN

"forgetfulness measure" one day because I thought it would be neat to know. 

How the module measures forgetfulness is this: First, it learns all the patterns in the set provided,
then it will run the very first pattern (or whatever pattern is specified by the "row" option)
in the set after it has finished learning. It will compare the run() output with the desired output
as specified in the dataset. In a perfect world, the two should match exactly. What we measure is
how much they don't match, and thus the amount of forgetfulness the network has.

NOTE: In version 0.77 percentages were disabled because of a bug. Percentages are now enabled.

Example (from examples/ex_dow.pl):

	# Data from 1989 (as far as I know..this is taken from example data on BrainMaker)
	my @data = ( 
		#	Mo  CPI  CPI-1 CPI-3 	Oil  Oil-1 Oil-3    Dow   Dow-1 Dow-3   Dow Ave (output)
		[	1, 	229, 220,  146, 	20.0, 21.9, 19.5, 	2645, 2652, 2597], 	[	2647  ],
		[	2, 	235, 226,  155, 	19.8, 20.0, 18.3, 	2633, 2645, 2585], 	[	2637  ],
		[	3, 	244, 235,  164, 	19.6, 19.8, 18.1, 	2627, 2633, 2579], 	[	2630  ],
		[	4, 	261, 244,  181, 	19.6, 19.6, 18.1, 	2611, 2627, 2563], 	[	2620  ],

BackProp.pm  view on Meta::CPAN

		[	6, 	287, 276,  207, 	19.5, 19.5, 18.0, 	2637, 2630, 2589], 	[	2635  ],
		[	7, 	296, 287,  212, 	19.3, 19.5, 17.8, 	2640, 2637, 2592], 	[	2641  ] 		
	);
	
	# Learn the set
	my $f = $net->learn_set(\@data, 
					  inc	=>	0.1,	
					  max	=>	500,
					  p		=>	1
					 );
			

BackProp.pm  view on Meta::CPAN


    
This is a snippet from the example script examples/ex_dow.pl, which demonstrates DOW average
prediction for the next month. A simpler set definition would be:

	my @data = (
		[ 0,1 ], [ 1 ],
		[ 1,0 ], [ 0 ]
	);
	
	$net->learn_set(\@data);
	
Same effect as above, but not the same data (obviously).

=item $net->learn_set_rand(\@set, [ options ]);

UPDATE: Inputs and outputs in the dataset can now be strings. See the information on auto-crunching
in learn().

This takes the same options as learn() and allows you to specify a set to learn, rather
than individual patterns. 

BackProp.pm  view on Meta::CPAN

each pattern once, rather than in the order that they are in the array. This returns a true
value (1) instead of a forgetfulness factor.

Example:

	my @data = (
		[ 0,1 ], [ 1 ],
		[ 1,0 ], [ 0 ]
	);
	
	$net->learn_set_rand(\@data);
	


=item $net->run($input_map_ref);

BackProp.pm  view on Meta::CPAN

=item $net->range();

This allows you to limit the possible outputs to a specific set of values. There are several
ways you can specify the set of values to limit the output to. Each method is shown below.
When called without any arguments, it will disable output range limits. You will need to re-learn
any data previously learned after disabling ranging, as disabling ranging invalidates the current
weight matrix in the network.

range() automatically scales the network's outputs to fit inside the range you allow, and therefore
it keeps track of the maximum output it can expect to scale. As a result, you will need to learn()
the whole data set again after calling range() on a network.

Subsequent calls to range() invalidate any previous calls to range().

NOTE: It is recommended that you call range() before you call learn(), or else you will get
unexpected results from any run() call after range().
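
A minimal sketch using only the behavior documented above (the value-set calling
forms are described elsewhere in this POD):

	$net->range();            # no arguments: disable output range limits
	$net->learn_set(\@data);  # disabling ranging invalidates the weights, so re-learn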

BackProp.pm  view on Meta::CPAN

Level 1 ($level = 1) : This causes ALL debugging information for the network to be dumped
as the network runs. In this mode, it is a good idea to pipe your STDIO to a file, especially
for large programs.

Level 2 ($level = 2) : A slightly-less verbose form of debugging, not as many internal 
data dumps.

Level 3 ($level = 3) : JUST prints weight mapping as weights change.

Level 4 ($level = 4) : JUST prints the benchmark info for EACH learn loop iteration, not just
learning as a whole. Also prints the percentage difference for each loop between current network

BackProp.pm  view on Meta::CPAN


=item AI::NeuralNet::BackProp::_run

=item AI::NeuralNet::BackProp::_map

These two packages, _run and _map, are used to insert data into
the network and to get data from the network. The _run and _map packages
are connected to the neurons so that the neurons think that the IO packages are
just another neuron, sending data on. But the IO packages are special packages designed
with the same methods as neurons, just meant for specific IO purposes. You will
never need to call any of the IO packages directly. Instead, they are called whenever
you use the run() or learn() methods of your network.
        



AI-NeuralNet-FastSOM


lib/AI/NeuralNet/FastSOM.pm  view on Meta::CPAN

sub mean_error {
    my $self = shift;
    my $error = 0;
    map { $error += $_ }                    # then add them all up
        map { ( $self->bmu($_) )[2] }       # then find the distance
           @_;                              # take all data vectors
    return ($error / scalar @_);            # return the mean value
}
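
# Usage sketch (an assumption for illustration: $som is a trained FastSOM
# instance and @vectors is a list of data vectors, i.e. array refs, as
# accepted by bmu()):
#
#     my $me = $som->mean_error(@vectors);    # average BMU distance
#     print "mean error: $me\n";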

XSLoader::load(__PACKAGE__);



AI-NeuralNet-Hopfield


MANIFEST  view on Meta::CPAN

README
t/00-load.t
t/manifest.t
t/pod-coverage.t
t/pod.t
META.yml                                 Module YAML meta-data (added by MakeMaker)
META.json                                Module JSON meta-data (added by MakeMaker)



AI-NeuralNet-Kohonen-Visual


MANIFEST  view on Meta::CPAN

Makefile.PL
MANIFEST
README
t/AI-NeuralNet-Kohonen-Visual.t
lib/AI/NeuralNet/Kohonen/Visual.pm
META.yml                                 Module meta-data (added by MakeMaker)



AI-NeuralNet-Kohonen


lib/AI/NeuralNet/Kohonen.pm  view on Meta::CPAN

	1 .5 0 orange
	1 .5 1 pink"
	);

	$_->train;
	$_->save_file('mydata.txt');
	exit;

=head1 DESCRIPTION

An illustrative implementation of Kohonen's Self-organising Feature Maps (SOMs)

lib/AI/NeuralNet/Kohonen.pm  view on Meta::CPAN

before the exponential function is applied: the default value is 2.5,
but you may wish to use 2 or 4.

=item missing_mask

Used to signify data is missing in an input vector. Defaults
to C<x>.

=back

Private fields:

lib/AI/NeuralNet/Kohonen.pm  view on Meta::CPAN

	#- Neighborhood type, either bubble or gaussian (string, optional, case-sensitive).
	print OUT "gaussian ";
	# End of header
	print OUT "\n";

	# Format input data
	foreach (@{$self->{input}}){
		print OUT join("\t",@{$_->{values}});
		if ($_->{class}){
			print OUT " $_->{class} " ;
		}

lib/AI/NeuralNet/Kohonen.pm  view on Meta::CPAN

	$_				      = shift @specs;
	$self->{map_dim_y}    = $_ if defined $_;
	#- Neighborhood type, either bubble or gaussian (string, optional, case-sensitive).
	# not implemented

	# Format input data
	foreach (@_){
		$self->_add_input_from_str($_);
	}
	return 1;
}

lib/AI/NeuralNet/Kohonen.pm  view on Meta::CPAN


Adds to the C<input> field an input vector in SOM_PAK-format
whitespace-delimited ASCII.

Returns C<undef> on failure to add an item (perhaps because
the data passed was a comment, or the C<weight_dim> flag was
not set); a true value on success.

=cut

sub _add_input_from_str { my ($self) = (shift);

lib/AI/NeuralNet/Kohonen.pm  view on Meta::CPAN

I<SOM_PAK> file format version 3.1 (April 7, 1995),
Helsinki University of Technology, Espoo:

=over 4

The input data is stored in ASCII-form as a list of entries, one line
...for each vectorial sample.

The first line of the file is reserved for status knowledge of the
entries; in the present version it is used to define the following
items (these items MUST occur in the indicated order):

lib/AI/NeuralNet/Kohonen.pm  view on Meta::CPAN

...

Subsequent lines consist of n floating-point numbers followed by an
optional class label (that can be any string) and two optional
qualifiers (see below) that determine the usage of the corresponding
data entry in training programs.  The data files can also contain an
arbitrary number of comment lines that begin with '#', and are
ignored. (One '#' for each comment line is needed.)

If some components of some data vectors are missing (due to data
collection failures or any other reason) those components should be
marked with 'x'...[in processing, these] are ignored.

...

Each data line may have two optional qualifiers that determine the
usage of the data entry during training. The qualifiers are of the
form codeword=value, where spaces are not allowed between the parts of
the qualifier. The optional qualifiers are the following:

=over 4

lib/AI/NeuralNet/Kohonen.pm  view on Meta::CPAN


Enhancement factor: e.g. weight=3.  The training rate for the
corresponding input pattern vector is multiplied by this
parameter so that the reference vectors are updated as if this
input vector were repeated 3 times during training (i.e., as if
the same vector had been stored 2 extra times in the data file).

=item -

Fixed-point qualifier: e.g. fixed=2,5.  The map unit defined by
the fixed-point coordinates (x = 2; y = 5) is selected instead of

lib/AI/NeuralNet/Kohonen.pm  view on Meta::CPAN


I<neighbourhood type> is always gaussian.

=item *

C<x> for missing data.

=item *

the two optional qualifiers



AI-NeuralNet-Mesh


Mesh.pm  view on Meta::CPAN

        # First create the individual nodes
		for my $x (0..$tmp-1) {         
			$self->{mesh}->[$x] = AI::NeuralNet::Mesh::node->new($self);
        }              
        
        # Get an instance of an output (data collector) node
		$self->{output} = AI::NeuralNet::Mesh::output->new($self);
		
		# Connect the output layer to the data collector
        for my $x (0..$outputs-1) {
			$self->{mesh}->[$tmp-$outputs+$x]->add_output_node($self->{output});
		}
		
		# Now we use the _c() method to connect the layers together.

Mesh.pm  view on Meta::CPAN



	# See POD for usage
	sub learn_set {
		my $self	=	shift;
		my $data	=	shift;
		my %args	=	@_;
		my $len		=	$#{$data}/2;
		my $inc		=	$args{inc};
		my $max		=	$args{max};
	    my $error	=	$args{error};
	    my $degrade	=	$args{degrade};
	    my $p		=	(defined $args{flag}) ?$args{flag} :1;
	    my $row		=	(defined $args{row})  ?$args{row}+1:1;
	    my $leave	=	(defined $args{leave})?$args{leave}:0;
		for my $x (0..$len-$leave) {
			d("Learning set $x...\n",4);
			my $str = $self->learn( $data->[$x*2],
					  		  		$data->[$x*2+1],
					    			inc=>$inc,
					    			max=>$max,
					    			error=>$error,
					    			degrade=>$degrade);
		}
			
		if ($p) {
			return pdiff($data->[$row],$self->run($data->[$row-1]));
		} else {
			return $data->[$row]->[0]-$self->run($data->[$row-1])->[0];
		}
	}
	
	# See POD for usage
	sub run_set {
		my $self	=	shift;
		my $data	=	shift;
		my $len		=	$#{$data}/2;
		my (@results,$res);
		for my $x (0..$len) {
			$res = $self->run($data->[$x*2]);
			for(0..$#{$res}){$results[$x]->[$_]=$res->[$_]}
			d("Running set $x [$res->[0]]...\r",4);
		}
		return \@results;
	}
	
	#
	# Loads a CSV-like dataset from disk
	#
	# Usage:
	#	my $set = $net->load_set($file, $column, $separator);
	#
	# Returns a data set of the same format as required by the
	# learn_set() method. $file is the disk file to load the set from.
	# $column is an optional variable specifying the column in the
	# data set to use as the class attribute. $column defaults to 0.
	# $separator is an optional variable specifying the separator
	# character between values. $separator defaults to ',' (a single comma).
	# NOTE: This does not handle quoted fields, or any other record
	# separator other than "\n".
	#
	sub load_set {
		my $self	=	shift;
		my $file	=	shift;
		my $attr	=	shift || 0;
		my $sep		=	shift || ',';
		my $data	=	[];
		open(FILE, $file) or die "Could not open $file: $!";
		my @lines	=	<FILE>;
		close(FILE);
		for my $x (0..$#lines) {
			chomp($lines[$x]);
			my @tmp	= split /$sep/, $lines[$x];
			my $c=0;
			for(0..$#tmp){ 
				$tmp[$_]=$self->crunch($tmp[$_])->[0] if($tmp[$_]=~/[A-Za-z]/);
				if($_!=$attr){$data->[$x*2]->[$c]=$tmp[$_];$c++} # copy feature columns, skipping the class column
			};             
			d("Loaded line $x, [@tmp]                            \r",4);
			$data->[$x*2+1]=[$tmp[$attr]];
		}
		return $data;
	}
	
	# See POD for usage
	sub get_outs {
		my $self	=	shift;
		my $data	=	shift;
		my $len		=	$#{$data}/2;
		my $outs	=	[];
		for my $x (0..$len) {
			$outs->[$x] = $data->[$x*2+1];
		}
		return $outs;
	}
	
	# Save entire network state to disk.

Mesh.pm  view on Meta::CPAN

	# $type can be: "linear", "sigmoid", "sigmoid_2".
	# You can use "sigmoid_1" as a synonym to "sigmoid". 
	# Type can also be a CODE ref, ( ref($type) eq "CODE" ).
	# If $type is a CODE ref, then the function is called in this form:
	# 	$output	= &$type($sum_of_inputs,$self);
	# The code ref then has access to all the data in that node (through the
	# blessed reference $self) and is expected to return the value to be used
	# as the output for that node. The sum of all the inputs to that node
	# is already summed and passed as the first argument.
	sub activation {
		my $self	=	shift;

Mesh.pm  view on Meta::CPAN

	#
	#	$net->activation(4,range(@numbers));
	#	$net->activation(4,range(6,15,26,106,28,3));
	#
	# Note: when using a range() activator, train the
	# net TWICE on the data set, because the first time
	# the range() function searches for the top value in
	# the inputs, and therefore results could fluctuate.
	# The second learning cycle guarantees more accuracy.
	#	
	sub range {

Mesh.pm  view on Meta::CPAN

	#
	# ramp() performs smooth ramp activation between 0 and 1 if $r is 1, 
	# or between -1 and 1 if $r is 2. $r defaults to 1, as you can see.	
	#
	# Note: when using a ramp() activator, train the
	# net at least TWICE on the data set, because the first 
	# time the ramp() function searches for the top value in
	# the inputs, and therefore results could fluctuate.
	# The second learning cycle guarantees more accuracy.
	#
	sub ramp {

Mesh.pm  view on Meta::CPAN

	sub adjust_weight   {}
	sub add_output_node {}
	sub add_input_node  {}
1;

# Internal usage, collects data from output layer.
package AI::NeuralNet::Mesh::output;
	
	use strict;
	
	sub new {

Mesh.pm  view on Meta::CPAN

as well as four custom activation functions, and several export 
tag sets. With this release, I have also included a few
new and more practical example scripts. (See ex_wine.pl) This release 
also includes a simple example of an ALN (Adaptive Logic Network) made
with this module. See ex_aln.pl. Also in this release is support for 
loading data sets from simple CSV-like files. See the load_set() method 
for details. This version also fixes a big bug that I never knew about 
until writing some demos for this version - that is, when trying to use 
more than one output node, the mesh would freeze in learning. But, that 
is fixed now, and you can have as many outputs as you want (how does 3 
inputs and 50 outputs sound? :-)

Mesh.pm  view on Meta::CPAN

This network model is very flexible. It will allow for classic binary
operation or any range of integer or floating-point inputs you care
to provide. With this you can change activation types on a per-node or
per-layer basis (you can even include your own anonymous subs as
activation types). You can add sigmoid transfer functions and control
the threshold. You can learn data sets in batch, and load CSV data
set files. You can do almost anything you need to with this module.
This code is designed to be flexible. Any new ideas for this module?
See AUTHOR, below, for contact info.

This module is designed to also be a customizable, extensible 

Mesh.pm  view on Meta::CPAN

by you.

In this module I have included a more accurate form of "learning" for the
mesh. This form performs descent toward a local error minimum (0) on a
directional delta, rather than the desired value for that node. This allows
for better and more accurate results with larger datasets. This module also
uses a simpler recursion technique which, surprisingly, is more accurate than
the original technique that I've used in other ANNs.

=head1 EXPORTS

Mesh.pm  view on Meta::CPAN

The code ref is called with this syntax:

	$output = &$code_ref($sum_of_inputs, $self);
	
The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.

See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.

Mesh.pm  view on Meta::CPAN


=item $net->learn_set(\@set, [ options ]);

This takes the same options as learn() (learn_set() uses learn() internally)
and allows you to specify a set to learn, rather than individual patterns.
A dataset is an array reference with at least two elements in the array,
each element being another array reference (or now, a scalar string). For
each pattern to learn, you must specify an input array ref, and an output
array ref as the next element. Example:
	
	my @set = (

Mesh.pm  view on Meta::CPAN

		[ 1,2,3,4 ],  [ 1,3,5,6 ],
		[ 0,2,5,6 ],  [ 0,2,1,2 ]
	);


Inputs and outputs in the dataset can also be strings.

See the paragraph on measuring forgetfulness, below. There are 
two learn_set()-specific option tags available:

	flag     =>  $flag

Mesh.pm  view on Meta::CPAN

If "flag" is set to some TRUE value, as in "flag => 1" in the hash of options, or if the option "flag"
is not set, then it will return a percentage representing the amount of forgetfulness. Otherwise,
learn_set() will return an integer specifying the amount of forgetfulness when all the patterns
are learned.

If "pattern" is set, then learn_set() will use that pattern in the data set to measure forgetfulness by.
If "pattern" is omitted, it defaults to the first pattern in the set. Example:

	my @set = (
		[ 0,1,0,1 ],  [ 0 ],
		[ 0,0,1,0 ],  [ 1 ],

Mesh.pm  view on Meta::CPAN

"forgetfulness measure" one day because I thought it would be neat to know. 

How the module measures forgetfulness is this: First, it learns all the patterns 
in the set provided, then it will run the very first pattern (or whatever pattern
is specified by the "row" option) in the set after it has finished learning. It 
will compare the run() output with the desired output as specified in the dataset. 
In a perfect world, the two should match exactly. What we measure is how much
they don't match, and thus the amount of forgetfulness the network has.

Example (from examples/ex_dow.pl):

	# Data from 1989 (as far as I know..this is taken from example data on BrainMaker)
	my @data = ( 
		#	Mo  CPI  CPI-1 CPI-3 	Oil  Oil-1 Oil-3    Dow   Dow-1 Dow-3   Dow Ave (output)
		[	1, 	229, 220,  146, 	20.0, 21.9, 19.5, 	2645, 2652, 2597], 	[	2647  ],
		[	2, 	235, 226,  155, 	19.8, 20.0, 18.3, 	2633, 2645, 2585], 	[	2637  ],
		[	3, 	244, 235,  164, 	19.6, 19.8, 18.1, 	2627, 2633, 2579], 	[	2630  ],
		[	4, 	261, 244,  181, 	19.6, 19.6, 18.1, 	2611, 2627, 2563], 	[	2620  ],

Mesh.pm  view on Meta::CPAN

		[	6, 	287, 276,  207, 	19.5, 19.5, 18.0, 	2637, 2630, 2589], 	[	2635  ],
		[	7, 	296, 287,  212, 	19.3, 19.5, 17.8, 	2640, 2637, 2592], 	[	2641  ] 		
	);
	
	# Learn the set
	my $f = $net->learn_set(\@data, 
					  inc	=>	0.1,	
					  max	=>	500,
					 );
			
	# Print it 

Mesh.pm  view on Meta::CPAN


    
This is a snippet from the example script examples/finance.pl, which demonstrates DOW average
prediction for the next month. A simpler set definition would be:

	my @data = (
		[ 0,1 ], [ 1 ],
		[ 1,0 ], [ 0 ]
	);
	
	$net->learn_set(\@data);
	
Same effect as above, but not the same data (obviously).


=item $net->run($input_map_ref);

This method will apply the given array ref at the input layer of the neural network, and

Mesh.pm  view on Meta::CPAN


=item $net->run_set($set);
                                                                                    
This takes an array ref of the same structure as the learn_set() method, above. It returns
an array ref. Each element in the returned array ref represents the output for the corresponding
element in the dataset passed. Uses run() internally.


=item $net->get_outs($set);

Simple utility function which takes an array ref of the same structure as the learn_set() method,
above. It returns an array ref of the same type as run_set() wherein each element contains an
output value. The output values are the target values specified in the $set passed. Each element
in the returned array ref represents the output value for the corresponding row in the dataset
passed. (A row is two elements of the dataset together, see learn_set() for dataset structure.)
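
Used together, run_set() and get_outs() make it easy to compare actual and
target outputs. A short sketch, assuming @set has the structure described for
learn_set():

	my $got  = $net->run_set(\@set);   # actual outputs, one array ref per pattern
	my $want = $net->get_outs(\@set);  # target outputs from the same set
	for my $i (0 .. $#{$got}) {
		print "pattern $i: got @{$got->[$i]}, wanted @{$want->[$i]}\n";
	}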

=item $net->load_set($file,$column,$separator);

Loads a CSV-like dataset from disk

Returns a data set of the same structure as required by the
learn_set() method. $file is the disk file to load the set from.
$column is an optional variable specifying the column in the
data set to use as the class attribute. $column defaults to 0.
$separator is an optional variable specifying the separator
character between values. $separator defaults to ',' (a single comma).
NOTE: This does not handle quoted fields, or any other record
separator other than "\n".
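
For example ("winedata.csv" is a hypothetical file whose column 0 holds the
class attribute):

	my $set = $net->load_set("winedata.csv", 0, ',');
	$net->learn_set($set);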

Mesh.pm  view on Meta::CPAN

If the file is of an invalid file type, then load() will
return undef. Use the error() method, below, to print the error message.

If there were no errors, it will return a reference to $net.

UPDATE: $filename can now be a newline-separated set of mesh data. This enables you
to do $net->load(join("\n",<DATA>)) and other fun things. I added this mainly
for a demo I'm writing but am not quite done with yet. So, Cheers!



Mesh.pm  view on Meta::CPAN

The code ref is called with this syntax:

	$output = &$code_ref($sum_of_inputs, $self);
	
The code ref is expected to return a value to be used as the output of the node.
The code ref also has access to all the data of that node through the second argument,
a blessed hash reference to that node.

See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.

Mesh.pm  view on Meta::CPAN


	$net->activation(4,range(@numbers));
	$net->activation(4,range(6,15,26,106,28,3));

Note: when using a range() activator, train the
net TWICE on the data set, because the first time
the range() function searches for the top value in
the inputs, and therefore results could fluctuate.
The second learning cycle guarantees more accuracy.

The actual code that implements the range closure is

Mesh.pm  view on Meta::CPAN

tag as so:
	
	use AI::NeuralNet::Mesh ':acts';

Note: when using a ramp() activator, train the
net at least TWICE on the data set, because the first
time the ramp() function searches for the top value in
the inputs, and therefore results could fluctuate.
The second learning cycle guarantees more accuracy.

No code to show here, as it is almost exactly the same as range().

Mesh.pm  view on Meta::CPAN

Problems that are difficult to compute and do not require perfect answers,
just very good answers, are also best done with neural networks.  A quick,
very good response is often more desirable than a more accurate answer which
takes longer to compute.  This is especially true in robotics or industrial
controller applications.  Predictions of behavior and general analysis of
data are also affairs for neural networks.  In the financial arena, consumer
loan analysis and financial forecasting make good applications.  New network
designers are working on weather forecasts by neural networks (Myself
included).  Currently, doctors are developing medical neural networks as an
aid in diagnosis.  Attorneys and insurance companies are also working on
neural networks to help estimate the value of claims.

Mesh.pm  view on Meta::CPAN

This is applied to the input layer of the mesh to prevent the mesh from trying to recursively
adjust weights back out through the inputs.

=item AI::NeuralNet::Mesh::output

This is simply a data collector package clamped onto the output layer to record the data 
as it comes out of the mesh. 


=head1 BUGS



AI-NeuralNet-SOM


lib/AI/NeuralNet/SOM.pm  view on Meta::CPAN

    [ 0, 4, -3]);

  my @mes = $nn->train (30, ...);      # learn about the smallest errors
                                       # during training

  print $nn->as_data;                  # dump the raw data
  print $nn->as_string;                # prepare a somehow formatted string

  use AI::NeuralNet::SOM::Torus;
  # similar to above

lib/AI/NeuralNet/SOM.pm  view on Meta::CPAN

=head2 Scenario

The basic idea is that the neural network consists of a 2-dimensional
array of N-dimensional vectors. When the training is started these
vectors may be completely random, but over time the network learns
from the sample data, which is a set of N-dimensional vectors.

Slowly, the vectors in the network will try to approximate the sample
vectors fed in. If in the sample vectors there were clusters, then
these clusters will be neighbourhoods within the rectangle (or
whatever topology you are using).

lib/AI/NeuralNet/SOM.pm  view on Meta::CPAN

You need to initialize all vectors in the map before training. There are several options
for how this is done:

=over

=item providing data vectors

If you provide a list of vectors, these will be used in turn to seed the neurons. If the list is
shorter than the number of neurons, the list will be started over. That way it is trivial to
zero everything:

  $nn->initialize ( [ 0, 0, 0 ] );

=item providing no data

Then all vectors will get randomized values (in the range [ -0.5 .. 0.5 ]).

=item using eigenvectors (see L</HOWTOS>)

lib/AI/NeuralNet/SOM.pm  view on Meta::CPAN

=cut

sub train {
    my $self   = shift;
    my $epochs = shift || 1;
    die "no data to learn" unless @_;

    $self->{LAMBDA} = $epochs / log ($self->{_Sigma0});                                 # educated guess?

    my @mes    = ();                                                                    # this will contain the errors during the epochs
    for my $epoch (1..$epochs)  {

lib/AI/NeuralNet/SOM.pm  view on Meta::CPAN

    my $sigma = shift;                                                                  # the current radius
    my $unit  = shift;                                                                  # which unit to change
    my ($x, $y, $d) = @$unit;                                                           # it contains the distance
    my $v     = shift;                                                                  # the vector which makes the impact

    my $w     = $self->{map}->[$x]->[$y];                                               # find the data behind the unit
    my $theta = exp ( - ($d ** 2) / (2 * $sigma ** 2));                                 # gaussian impact (using distance and current radius)

    foreach my $i (0 .. $#$w) {                                                         # adjusting values
	$w->[$i] = $w->[$i] + $theta * $l * ( $v->[$i] - $w->[$i] );
    }

lib/AI/NeuralNet/SOM.pm  view on Meta::CPAN

sub mean_error {
    my $self = shift;
    my $error = 0;
    map { $error += $_ }                    # then add them all up
        map { ( $self->bmu($_) )[2] }       # then find the distance
           @_;                              # take all data vectors
    return ($error / scalar @_);            # return the mean value
}

=pod

lib/AI/NeuralNet/SOM.pm  view on Meta::CPAN


=item I<map>

I<$m> = I<$nn>->map

This method returns a reference to the map data. See the appropriate subclass for the data
representation.

=cut

sub map {

lib/AI/NeuralNet/SOM.pm  view on Meta::CPAN


sub as_string { die; }

=pod

=item I<as_data>

print I<$nn>->as_data

This method creates a string containing the raw vector data, row by
row. This can be fed into gnuplot, for instance.

=cut

sub as_data { die; }

=pod

=back

lib/AI/NeuralNet/SOM.pm  view on Meta::CPAN

results are not as good as I had thought).

=item I<loading and saving a SOM>

See the example script in the directory C<examples>. It uses
C<Storable> to directly dump the data structure onto disk. Storage and
retrieval is quite fast.

=back

=head1 FAQs



AI-NeuralNet-Simple


examples/game_ai.pl  view on Meta::CPAN

    return $response;
}

sub display_result
{
    my ($net,@data) = @_;
    my $result      = $net->winner(\@data);
    my @health      = qw/Poor Average Good/;
    my @knife       = qw/No Yes/;
    my @gun         = qw/No Yes/;
    printf $format, 
        $health[$_[1]], 



AI-Ollama-Client


lib/AI/Ollama/Client.pm  view on Meta::CPAN


=head2 C<< deleteModel >>

  my $res = $client->deleteModel()->get;

Delete a model and its data.


=cut

=head2 C<< generateEmbedding >>



AI-PBDD


MANIFEST  view on Meta::CPAN

Makefile.PL
MANIFEST
README
t/PBDD.t
XS.xs
META.yml                                 Module YAML meta-data (added by MakeMaker)
META.json                                Module JSON meta-data (added by MakeMaker)



AI-PSO


examples/NeuralNet/pso_ann.pl  view on Meta::CPAN

	print ANN "\n";
	close(ANN);
}

sub runANN($$) {
	my ($configFile, $dataFile) = @_;
	# run the external ann_compute program and capture its numeric output
	my $networkValue = `ann_compute $configFile $dataFile`;
	chomp($networkValue);
	return $networkValue;
}



AI-ParticleSwarmOptimization-MCE


LICENSE  view on Meta::CPAN

distributed under the terms of this Lesser General Public
License (also called "this License"). Each licensee is
addressed as "you".

A "library" means a collection of software functions and/or
data prepared so as to be conveniently linked with
application programs (which use some of those functions
and data) to form executables.

The "Library", below, refers to any such software library or
work which has been distributed under these terms. A "work
based on the Library" means either the Library or any
derivative work under copyright law: that is to say, a work

LICENSE  view on Meta::CPAN

     files and the date of any change.
     c) You must cause the whole of the work to be
     licensed at no charge to all third parties under
     the terms of this License.
     d) If a facility in the modified Library refers to a
     function or a table of data to be supplied by an
     application program that uses the facility, other
     than as an argument passed when the facility
     is invoked, then you must make a good faith
     effort to ensure that, in the event an application
     does not supply such function or table, the

LICENSE  view on Meta::CPAN

source code is not. Whether this is true is especially
significant if the work can be linked without the Library, or if
the work is itself a library. The threshold for this to be true is
not precisely defined by law.

If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and
small inline functions (ten lines or less in length), then the
use of the object file is unrestricted, regardless of whether it
is legally a derivative work. (Executables containing this
object code plus portions of the Library will still fall under

LICENSE  view on Meta::CPAN

     e) Verify that the user has already received a
     copy of these materials or that you have
     already sent this user a copy.

For an executable, the required form of the "work that uses
the Library" must include any data and utility programs
needed for reproducing the executable from it. However, as a
special exception, the materials to be distributed need not
include anything that is normally distributed (in either source
or binary form) with the major components (compiler, kernel,
and so on) of the operating system on which the executable



AI-ParticleSwarmOptimization-Pmap


LICENSE  view on Meta::CPAN

distributed under the terms of this Lesser General Public
License (also called "this License"). Each licensee is
addressed as "you".

A "library" means a collection of software functions and/or
data prepared so as to be conveniently linked with
application programs (which use some of those functions
and data) to form executables.

The "Library", below, refers to any such software library or
work which has been distributed under these terms. A "work
based on the Library" means either the Library or any
derivative work under copyright law: that is to say, a work

LICENSE  view on Meta::CPAN

     files and the date of any change.
     c) You must cause the whole of the work to be
     licensed at no charge to all third parties under
     the terms of this License.
     d) If a facility in the modified Library refers to a
     function or a table of data to be supplied by an
     application program that uses the facility, other
     than as an argument passed when the facility
     is invoked, then you must make a good faith
     effort to ensure that, in the event an application
     does not supply such function or table, the

LICENSE  view on Meta::CPAN

source code is not. Whether this is true is especially
significant if the work can be linked without the Library, or if
the work is itself a library. The threshold for this to be true is
not precisely defined by law.

If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and
small inline functions (ten lines or less in length), then the
use of the object file is unrestricted, regardless of whether it
is legally a derivative work. (Executables containing this
object code plus portions of the Library will still fall under

LICENSE  view on Meta::CPAN

     e) Verify that the user has already received a
     copy of these materials or that you have
     already sent this user a copy.

For an executable, the required form of the "work that uses
the Library" must include any data and utility programs
needed for reproducing the executable from it. However, as a
special exception, the materials to be distributed need not
include anything that is normally distributed (in either source
or binary form) with the major components (compiler, kernel,
and so on) of the operating system on which the executable



AI-ParticleSwarmOptimization


MANIFEST  view on Meta::CPAN

README
Samples/PSOPlatTest.pl
Samples/PSOTest.pl
t/01_pso_oo.t
Todo
META.yml                                 Module YAML meta-data (added by MakeMaker)
META.json                                Module JSON meta-data (added by MakeMaker)



AI-Pathfinding-AStar-Rectangle


MANIFEST  view on Meta::CPAN

t/06-setstart.t
t/07-dastar.t

examples/snake_labirint.pl
Benchmark/perl-vs-xs.pl
META.yml                                 Module meta-data (added by MakeMaker)



AI-Pathfinding-OptimizeMultiple


lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN

use Scalar::Util qw/ blessed /;

has chosen_scans     => ( isa => 'ArrayRef', is => 'rw' );
has _iter_idx        => ( isa => 'Int', is => 'rw', default  => sub { 0; }, );
has _num_boards      => ( isa => 'Int', is => 'ro', init_arg => 'num_boards', );
has _orig_scans_data => ( isa => 'PDL', is => 'rw' );
has _optimize_for => ( isa => 'Str', is => 'ro', init_arg => 'optimize_for', );
has _scans_data   => ( isa => 'PDL', is => 'rw' );
has _selected_scans =>
    ( isa => 'ArrayRef', is => 'ro', init_arg => 'selected_scans', );
has _status => ( isa => 'Str',           is => 'rw' );
has _quotas => ( isa => 'ArrayRef[Int]', is => 'ro', init_arg => 'quotas' );
has _total_boards_solved => ( isa => 'Int', is => 'rw' );
has _total_iters         => ( is  => 'rw' );
has _trace_cb =>
    ( isa => 'Maybe[CodeRef]', is => 'ro', init_arg => 'trace_cb' );
has _scans_meta_data => ( isa => 'ArrayRef', is => 'ro', init_arg => 'scans' );
has _scans_iters_pdls =>
    ( isa => 'HashRef', is => 'rw', init_arg => 'scans_iters_pdls' );
has _stats_factors => (
    isa      => 'HashRef',
    is       => 'ro',

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN

{
    my $self = shift;

    my $args = shift;

    my $scans_data = PDL::cat(
        map {
            my $id     = $_->id();
            my $pdl    = $self->_scans_iters_pdls()->{$id};
            my $factor = $self->_stats_factors->{$id};
            (

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN

                : $pdl
            );
        } @{ $self->_selected_scans() }
    );

    $self->_orig_scans_data($scans_data);
    $self->_scans_data( $self->_orig_scans_data()->copy() );

    return 0;
}

my $BOARDS_DIM     = 0;

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN

                error => "No q_more", );
        }

        $iters_quota += $q_more;

        my $iters        = $self->_scans_data()->slice(":,:,0");   # iteration counts per board/scan
        my $solved       = ( ( $iters <= $iters_quota ) & ( $iters > 0 ) );   # mask: solved within the quota
        my $num_moves    = $self->_scans_data->slice(":,:,2");
        my $solved_moves = $solved * $num_moves;   # move counts, masked to solved boards only

        my $solved_moves_sums   = _my_sum_over($solved_moves);
        my $solved_moves_counts = _my_sum_over($solved);
        my $solved_moves_avgs   = $solved_moves_sums / $solved_moves_counts;

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN

                error => "No q_more", );
        }

        $iters_quota += $q_more;

        my $iters        = $self->_scans_data()->slice(":,:,0");
        my $solved       = ( ( $iters <= $iters_quota ) & ( $iters > 0 ) );
        my $num_moves    = $self->_scans_data->slice(":,:,2");
        my $solved_moves = $solved * $num_moves;

        my $solved_moves_maxima = $solved_moves->maximum()->slice(":,(0),(0)");
        my $solved_moves_counts = _my_sum_over($solved);

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN

        $iters_quota += $q_more;

        ( undef, $num_solved_in_iter, undef, $selected_scan_idx ) =
            PDL::minmaximum(
            PDL::sumover(
                ( $self->_scans_data() <= $iters_quota ) &
                    ( $self->_scans_data() > 0 )
            )
            );
    }

    return {

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN


sub _get_num_scans
{
    my $self = shift;

    return ( ( $self->_scans_data()->dims() )[$SCANS_DIM] );
}

sub _calc_chosen_scan
{
    my ( $self, $selected_scan_idx, $iters_quota ) = @_;

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN

        PDL::Core::pdl( [ map { [1] } ( 1 .. $self->_get_num_scans() ) ] );

    my $next_num_iters_for_each_scan_x_scan =
        ( ( $ones_constant x $flares_num_iters ) );

    my $num_moves = $self->_scans_data->slice(":,:,1");

    # The number of moves for dimension 0,1,2 above.
    my $num_moves_repeat = $num_moves->clump( 1 .. 2 )->xchg( 0, 1 )
        ->dummy( 0, $self->_get_num_scans() );

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN

            )
        );

        # print "\$next_num_iters = $next_num_iters\n";

        my $iters = $self->_scans_data()->slice(":,:,0");

        my $iters_repeat =
            $iters->dummy( 0, $self->_get_num_scans() )->xchg( 1, 2 )
            ->clump( 2 .. 3 );

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN

    my $self  = shift;
    my $board = shift;

    my $board_iters = 0;

    my @info      = PDL::list( $self->_orig_scans_data()->slice("$board,:") );
    my @orig_info = @info;

    foreach my $s ( @{ $self->chosen_scans() } )
    {
        if (   ( $info[ $s->scan_idx() ] > 0 )

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN


    $args ||= {};

    my $chosen_scans = ( $args->{chosen_scans} || $self->chosen_scans );

    my @info = PDL::list( $self->_orig_scans_data()->slice("$board_idx,:") );

    my $board_iters = 0;

    my @scan_runs;

lib/AI/Pathfinding/OptimizeMultiple.pm  view on Meta::CPAN

=head1 ACKNOWLEDGEMENTS

B<popl> from Freenode's #perl for trying to dig up some references to an existing
algorithm in the scientific literature.

=for :stopwords cpan testmatrix url bugtracker rt cpants kwalitee diff irc mailto metadata placeholders metacpan

=head1 SUPPORT

=head2 Websites



AI-Pathfinding-SMAstar


lib/AI/Pathfinding/SMAstar.pm  view on Meta::CPAN

	_state_eval_func => undef,	
	_state_goal_p_func => undef,
	_state_num_successors_func => undef,
	_state_successors_iterator => undef,
	_show_prog_func => undef,
	_state_get_data_func => undef,


	@_, # attribute override
    };
    return bless $self, $class;

lib/AI/Pathfinding/SMAstar.pm  view on Meta::CPAN

    my $self = shift;
    if (@_) { $self->{_state_successors_iterator} = shift }
    return $self->{_state_successors_iterator};    
}

sub state_get_data_func {
    my $self = shift;
    if (@_) { $self->{_state_get_data_func} = shift }
    return $self->{_state_get_data_func};    
}

sub show_prog_func {
    my $self = shift;
    if (@_) { $self->{_show_prog_func} = shift }

lib/AI/Pathfinding/SMAstar.pm  view on Meta::CPAN


    my $state_eval_func = $self->{_state_eval_func};
    my $state_goal_p_func = $self->{_state_goal_p_func};
    my $state_num_successors_func = $self->{_state_num_successors_func};
    my $state_successors_iterator = $self->{_state_successors_iterator};
    my $state_get_data_func = $self->{_state_get_data_func};
    
    # make sure required functions have been defined
    if(!defined($state_eval_func)){
	croak "SMAstar:  evaluation function is not defined\n";
    }

lib/AI/Pathfinding/SMAstar.pm  view on Meta::CPAN

	_state           => $state,
	_eval_func      => $state_eval_func,
	_goal_p_func    => $state_goal_p_func,
	_num_successors_func => $state_num_successors_func,
	_successors_iterator => $state_successors_iterator,
	_get_data_func  => $state_get_data_func,
	);
    
    
    my $fcost = AI::Pathfinding::SMAstar::Path::fcost($state_obj);
    # check if the fcost of this node looks OK (is numeric)

lib/AI/Pathfinding/SMAstar.pm  view on Meta::CPAN


        # must return *one* successor at a time
        _state_successors_iterator => \&FrontierObj::get_successors_iterator,   

        # can be any suitable string representation 
        _state_get_data_func       => \&FrontierObj::string_representation,  

        # gets called once per iteration, useful for showing algorithm progress
        _show_prog_func            => \&FrontierObj::progress_callback,      
    );

lib/AI/Pathfinding/SMAstar.pm  view on Meta::CPAN

to maintain the memory-bounded constraint of SMA* search.


=item *

B<State get-data function> (C<_state_get_data_func> above)

This function returns a string representation of this node.


=item *

lib/AI/Pathfinding/SMAstar.pm  view on Meta::CPAN


Set/get the handle to the function that returns an iterator that produces the
next successor of this node.


=head2 state_get_data_func()

 $smastar->state_get_data_func(\&FrontierObj::string_representation);

Set/get the handle to the function that returns a string 
representation of this node.




AI-Perceptron-Simple


lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    } );

    # train
    $nerve->tame( ... );
    $nerve->exercise( ... );
    $nerve->train( $training_data_csv, $expected_column_name, $save_nerve_to );
    # or
    $nerve->train(
        $training_data_csv, $expected_column_name, $save_nerve_to, 
        $show_progress, $identifier); # these two parameters must go together


    # validate
    $nerve->take_lab_test( ... );
    $nerve->take_mock_exam( ... );

    # fill results to original file
    $nerve->validate( { 
        stimuli_validate => $validation_data_csv, 
        predicted_column_index => 4,
     } );
    # or        
    # fill results to a new file
    $nerve->validate( {
        stimuli_validate => $validation_data_csv,
        predicted_column_index => 4,
        results_write_to => $new_csv
    } );


lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $nerve->test( ... );


    # confusion matrix
    my %c_matrix = $nerve->get_confusion_matrix( { 
        full_data_file => $file_csv, 
        actual_output_header => $header_name,
        predicted_output_header => $predicted_header_name,
        more_stats => 1, # optional
    } );

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

        zero_as => "bad apples", # cat  milk   green  etc.
        one_as => "good apples", # dog  honey  pink   etc.
    } );


    # saving and loading perceptron data locally
    # NOTE: nerve data is automatically saved after each training process
    use AI::Perceptron::Simple ":local_data";

    my $nerve_file = "apples.nerve";
    preserve( ... );
    save_perceptron( $nerve, $nerve_file );

    # load perceptron data for use in the actual program
    my $apple_nerve = revive( ... );
    my $apple_nerve = load_perceptron( $nerve_file );


    # for portability of nerve data
    use AI::Perceptron::Simple ":portable_data";

    my $yaml_nerve_file = "pearls.yaml";
    preserve_as_yaml ( ... );
    save_perceptron_yaml ( $nerve, $yaml_nerve_file );

    # load nerve data on the other computer
    my $pearl_nerve = revive_from_yaml ( ... );
    my $pearl_nerve = load_perceptron_yaml ( $yaml_nerve_file );


    # processing data
    use AI::Perceptron::Simple ":process_data";
    shuffle_stimuli ( ... );
    shuffle_data ( ORIGINAL_STIMULI, $new_file_1, $new_file_2, ... );
    shuffle_data ( $original_stimuli => $new_file_1, $new_file_2, ... );

=head1 EXPORT

None by default.

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


The tags available include the following:

=over 4

=item C<:process_data> - subroutines under C<DATA PROCESSING RELATED SUBROUTINES> section.

=item C<:local_data> - subroutines under C<NERVE DATA RELATED SUBROUTINES> section.

=item C<:portable_data> - subroutines under C<NERVE PORTABILITY RELATED SUBROUTINES> section.

=back

Most of the subroutines here are OO.

=cut

use Exporter qw( import );
our @EXPORT_OK = qw( 
    shuffle_data shuffle_stimuli
    preserve save_perceptron revive load_perceptron
    preserve_as_yaml save_perceptron_yaml revive_from_yaml load_perceptron_yaml
);
our %EXPORT_TAGS = ( 
    process_data => [ qw( shuffle_data shuffle_stimuli ) ],
    local_data => [ qw( preserve save_perceptron revive load_perceptron ) ],
    portable_data => [ qw( preserve_as_yaml save_perceptron_yaml revive_from_yaml load_perceptron_yaml ) ],
);

=head1 DESCRIPTION

This module provides methods to build, train, validate and test a perceptron. It can also save the data of the perceptron for future use in any actual AI program.

This module is also aimed to help newbies grasp hold of the concept of perceptron, training, validation and testing as much as possible. Hence, all the methods and subroutines in this module are decoupled as much as possible so that the actual script...

The implementation here is super basic: it only takes in the inputs of the dendrites and calculates the output. If the output is higher than the threshold, the final result (category) will 
be 1, i.e. the perceptron is activated. If not, the result will be 0 (not activated).

Depending on how you view or categorize the final result, the perceptron will fine-tune itself (i.e. train) based on the learning rate until the desired result is met. Everything from 
here on is all mathematics and numbers which only make sense to the computer and not to humans anymore.

Whenever the perceptron fine-tunes itself, it will increase/decrease all the dendrites that are significant (attributes labelled 1) for each input. This means that even when the 
perceptron successfully fine-tunes itself to suit all the data in your file for the first round, it might still get some things wrong in the next round of training. 
Therefore, the perceptron should be trained for as many rounds as possible. The more "confusion" the perceptron is able to correctly handle, the more "mature" the perceptron is. 
No one defines how "mature" it is except the programmer himself/herself :)
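The activation and tuning described above can be sketched in a few lines of plain Perl (all names and numbers below are illustrative, not the module's internals):

    use strict;
    use warnings;

    # illustrative weights for two dendrites
    my %weights       = ( has_author => 0.3, has_cover => 0.4 );
    my $threshold     = 0.5;
    my $learning_rate = 0.05;

    # the input of a significant attribute is always 1, so the output
    # is just the sum of the weights of the attributes labelled 1
    my $sum = 0;
    $sum += $weights{$_} for keys %weights;
    my $result = $sum >= $threshold ? 1 : 0;    # 1 = activated

    # if $result differs from the expected category, tune every
    # significant weight up (or down) by the learning rate
    $weights{$_} += $learning_rate for keys %weights;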

=head1 CONVENTIONS USED

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


=head1 DATASET STRUCTURE

I<This module can only process CSV files.>

Any field, i.e. column, that will be used for processing must be binary, i.e. C<0> or C<1> only. Your dataset can contain other columns with non-binary data as long as they are not one of the dendrites.

There are some sample datasets which can be found in the C<t> directory. The original dataset can also be found in C<docs/book_list.csv>. The files can also be found L<here|https://github.com/Ellednera/AI-Perceptron-Simple>.
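For illustration, a made-up stimuli file of the expected shape might look like this (the header names are hypothetical):

    book_name,has_author,has_cover,expected_category
    "Some Book",1,0,1
    "Another Book",0,1,0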

=head1 PERCEPTRON DATA

The perceptron/neuron data is stored using the C<Storable> module. 

See C<Portability of Nerve Data> section below for more info on some known issues.

=head1 DATA PROCESSING RELATED SUBROUTINES

These subroutines can be imported using the tag C<:process_data>.

These subroutines should be called in the procedural way.

=head2 shuffle_stimuli ( ... )

The parameters and usage are the same as C<shuffle_data>. See the next two subroutines.

=head2 shuffle_data ( $original_data => $shuffled_1, $shuffled_2, ... )

=head2 shuffle_data ( ORIGINAL_DATA, $shuffled_1, $shuffled_2, ... )

Shuffles C<$original_data> or C<ORIGINAL_DATA> and saves them to other files.

=cut

sub shuffle_stimuli {
    shuffle_data( @_ );
}

sub shuffle_data {
    my $stimuli = shift or croak "Please specify the original file name";
    my @shuffled_stimuli_names = @_ 
        or croak "Please specify the output files for the shuffled data";
    
    my @aoa;
    for ( @shuffled_stimuli_names ) {
        # copied from _real_validate_or_test
        # open for shuffling

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

        @aoa = shuffle( @$aoa ); # this can only process actual array
        unshift @aoa, $attrib_array_ref; # put back the headers before saving file

        csv( in => \@aoa, out => $_, encoding => ":encoding(utf-8)" ) 
        and
        print "Saved shuffled data into ", basename($_), "!\n";

    }
}

=head1 CREATION RELATED SUBROUTINES/METHODS

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

=cut

sub new {
    my $class = shift;
    
    my $data_ref = shift;
    my %data = %{ $data_ref };
    
    # check keys
    $data{ learning_rate } = LEARNING_RATE if not exists $data{ learning_rate };
    $data{ threshold } = THRESHOLD if not exists $data{ threshold };
    
    #####
    # don't pack this key checking process into a subroutine for now
    # this is also used in &_real_validate_or_test
    my @missing_keys;
    for ( qw( initial_value attribs ) ) {
        push @missing_keys, $_ unless exists $data{ $_ };
    }
    
    croak "Missing keys: @missing_keys" if @missing_keys;
    #####
    
    # continue to process the rest of the data
    my %attributes;
    for ( @{ $data{ attribs } } ) {
        $attributes{ $_ } = $data{ initial_value };
    }
    
    my %processed_data = (
        learning_rate => $data{ learning_rate },
        threshold => $data{ threshold },
        attributes_hash_ref => \%attributes,
    );
    
    bless \%processed_data, $class;
}

=head2 get_attributes

Obtains a hash of all the attributes of the perceptron

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


=head2 train ( $stimuli_train_csv, $expected_output_header, $save_nerve_to_file, $display_stats, $identifier )

Trains the perceptron. 

C<$stimuli_train_csv> is the set of data / input (in CSV format) to train the perceptron while C<$save_nerve_to_file> is 
the filename that will be generated each time the perceptron finishes the training process. This data file holds the data of the C<AI::Perceptron::Simple> 
object and it is used in the C<validate> method.

C<$expected_output_header> is the header name of the column in the csv file with the actual category or the expected values. This is used to determine whether to tune the nerve up or down. This value should only be 0 or 1 for the sake of simplicity.

C<$display_stats> is B<optional> and the default is 0. It will display more output about the tuning process. It will show the followings:

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


The new sum of all C<weightage * input> after fine-tuning the nerve

=back

If C<$display_stats> is specified, i.e. set to C<1>, then you B<MUST> specify the C<$identifier>. C<$identifier> is the column / header name that is used to identify a specific row of data in C<$stimuli_train_csv>.
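For example (the file and header names here are assumptions for illustration only):

    # plain run
    $nerve->train( "book_list_train.csv", "expected_category", "books.nerve" );

    # verbose run; the identifier column is mandatory in this form
    $nerve->train( "book_list_train.csv", "expected_category", "books.nerve",
                   1, "book_name" );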

=cut

sub tame {
    train( @_ );

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    if ( $display_stats and not defined $identifier ) {
        croak "Please specifiy a string for \$identifier if you are trying to display stats";
    }
    
    # CSV processing is all according to the documentation of Text::CSV
    open my $data_fh, "<:encoding(UTF-8)", $stimuli_train_csv 
        or croak "Can't open $stimuli_train_csv: $!";
    
    my $csv = Text::CSV->new( {auto_diag => 1, binary => 1} );
    
    my $attrib = $csv->getline($data_fh);
    $csv->column_names( $attrib );

    # individual row
    ROW: while ( my $row = $csv->getline_hr($data_fh) ) {
        # print $row->{book_name}, " -> ";
        # print $row->{$expected_output_header} ? "意林\n" : "魅丽优品\n";

        # calculate the output and fine tune parameters if necessary
        while (1) {

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

                next ROW;
            } #else { print "Something's not right\n'" }
        }
    }

    close $data_fh;
    
    save_perceptron( $self, $save_nerve_to_file ); # this doesn't return anything
    
}

=head2 &_calculate_output( $self, \%stimuli_hash )

Calculates and returns the C<sum(weightage*input)> for each individual row of data. Actually, it just adds up all the existing weights since the C<input> is always 1 for now :)

C<%stimuli_hash> is the actual data to be used for training. It might contain useless columns.

This will get all the available dendrites using the C<get_attributes> method and then use all the keys i.e. headers to access the corresponding values.

This subroutine should be called in the procedural way for now.
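Called procedurally, that looks something like this (the variable names are illustrative):

    my $sum = _calculate_output( $nerve, \%stimuli_hash );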

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


=head2 take_lab_test (...)

=head2 validate ( \%options )

This method validates the perceptron against another set of data after it has undergone the training process.

This method calculates the output of each row of data and writes the result into the predicted column. The data written into the new file or the original file will maintain its sequence.

Please take note that this method will load all the data of the validation stimuli, so please split your stimuli into multiple files if possible and call this method a few more times.

For C<%options>, the followings are needed unless mentioned:

=over 4

=item stimuli_validate => $csv_file

This is the CSV file containing the validation data, make sure that it contains a column with the predicted values as it is needed in the next key mentioned: C<predicted_column_index>

=item predicted_column_index => $column_number

This is the index of the column that contains the predicted output values. C<$column_number> starts from C<0>.

This column will be filled with binary numbers and the full new data will be saved to the file specified in the C<results_write_to> key.

=item results_write_to => $new_csv_file

Optional.

The default behaviour will write the predicted output back into C<stimuli_validate>, i.e. the original file. The sequence of the data will be maintained.

=back

I<*This method will call C<_real_validate_or_test> to do the actual work.>

=cut

sub take_mock_exam {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

sub take_lab_test {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

sub validate {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

=head1 TESTING RELATED SUBROUTINES/METHODS

All the testing methods here have the same parameters as the actual C<test> method and they all do the same stuff. They are also used in the same way.

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


=cut

# redirect to _real_validate_or_test
sub take_real_exam {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

sub work_in_real_world {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

sub test {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

=head2 _real_validate_or_test ( $data_hash_ref )

This is where the actual validation or testing takes place. 

C<$data_hash_ref> is the hash reference of parameters passed into the C<validate> or C<test> methods.

This is a B<method>, so use the OO way. This is one of the exceptions to the rules where private subroutines are treated as methods :)

=cut

sub _real_validate_or_test {

    my $self = shift;   my $data_hash_ref = shift;
    
    #####
    my @missing_keys;
    for ( qw( stimuli_validate predicted_column_index ) ) {
        push @missing_keys, $_ unless exists $data_hash_ref->{ $_ };
    }
    
    croak "Missing keys: @missing_keys" if @missing_keys;
    #####
    
    my $stimuli_validate = $data_hash_ref->{ stimuli_validate };
    my $predicted_index = $data_hash_ref->{ predicted_column_index };
    
    # actual processing starts here
    my $output_file = defined $data_hash_ref->{ results_write_to } 
                        ? $data_hash_ref->{ results_write_to }
                        : $stimuli_validate;
    
    # open for writing results
    my $aoa = csv (in => $stimuli_validate, encoding => ":encoding(utf-8)");
    

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $aoa = _fill_predicted_values( $self, $stimuli_validate, $predicted_index, $aoa );

    # put back the array of headers before saving file
    unshift @$aoa, $attrib_array_ref;

    print "Saving data to $output_file\n";
    csv( in => $aoa, out => $output_file, encoding => ":encoding(utf-8)" );
    print "Done saving!\n";

}

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


sub _fill_predicted_values {
    my ( $self, $stimuli_validate, $predicted_index, $aoa ) = @_;

    # CSV processing is all according to the documentation of Text::CSV
    open my $data_fh, "<:encoding(UTF-8)", $stimuli_validate 
        or croak "Can't open $stimuli_validate: $!";
    
    my $csv = Text::CSV->new( {auto_diag => 1, binary => 1} );
    
    my $attrib = $csv->getline($data_fh);
    
    $csv->column_names( $attrib );

    # individual row
    my $row = 0;
    while ( my $data = $csv->getline_hr($data_fh) ) {
        
        if ( _calculate_output( $self, $data )  >= $self->threshold ) {
            # write 1 into aoa
            $aoa->[ $row ][ $predicted_index ] = 1;
        } else {
            #write 0 into aoa
            $aoa->[ $row ][ $predicted_index ] = 0;
        }
        
        $row++;
    }
    
    close $data_fh;
    
    $aoa;
}

=head1 RESULTS RELATED SUBROUTINES/METHODS

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


For C<%options>, the followings are needed unless mentioned:

=over 4

=item full_data_file => $filled_test_file

This is the CSV file filled with the predicted values. 

Make sure that you don't do anything to the actual and predicted output in this file after testing the nerve. These two columns must contain binary values only!

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


=cut

sub _collect_stats {
    my $info = shift;
    my $file = $info->{ full_data_file };
    my $actual_header = $info->{ actual_output_header };
    my $predicted_header = $info->{ predicted_output_header };
    my $more_stats = defined ( $info->{ more_stats } ) ? 1 : 0;
    
    my %c_matrix = ( 
        true_positive => 0, true_negative => 0, false_positive => 0, false_negative => 0,
        accuracy => 0, sensitivity => 0
    );
    
    # CSV processing is all according to the documentation of Text::CSV
    open my $data_fh, "<:encoding(UTF-8)", $file
        or croak "Can't open $file: $!";
    
    my $csv = Text::CSV->new( {auto_diag => 1, binary => 1} );
    
    my $attrib = $csv->getline($data_fh); # get the row of headers, can't specify any column
    # shouldn't be a problem, since we're reading line by line :)

    $csv->column_names( $attrib );

    # individual row
    while ( my $row = $csv->getline_hr($data_fh) ) {
        
        # don't pack this part into another subroutine, number of rows can be very big
        if ( $row->{ $actual_header } == 1 and $row->{ $predicted_header } == 1 ) {

            # true positive

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

            "Make sure that the actual and predicted values in your file are binary ie 0 or 1" ;
            
        }
    }
    
    close $data_fh;

    _calculate_total_entries( \%c_matrix );

    _calculate_sensitivity( \%c_matrix );
    

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    %c_matrix;
}

=head2 &_calculate_total_entries ( $c_matrix_ref )

Calculates and adds the data for the C<total_entries> key in the confusion matrix hash.

=cut

sub _calculate_total_entries {

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


}

=head2 &_calculate_accuracy ( $c_matrix_ref )

Calculates and adds the data for the C<accuracy> key in the confusion matrix hash.

=cut

sub _calculate_accuracy {

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    # no need to return anything, we're using ref
}

=head2 &_calculate_sensitivity ( $c_matrix_ref )

Calculates and adds the data for the C<sensitivity> key in the confusion matrix hash.

=cut

sub _calculate_sensitivity {
    my $c_matrix = shift;

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    # no need to return anything, we're using ref
}

=head2 &_calculate_precision ( $c_matrix_ref )

Calculates and adds the data for the C<precision> key in the confusion matrix hash.
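Assuming the elided body computes the textbook definition of precision, it presumably amounts to this (a sketch, not the verbatim source):

    my $numerator   = $c_matrix->{ true_positive };
    my $denominator = $c_matrix->{ true_positive } + $c_matrix->{ false_positive };
    $c_matrix->{ precision } = $numerator / $denominator * 100;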

=cut

sub _calculate_precision {
    my $c_matrix = shift;

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $c_matrix->{ precision } = $numerator / $denominator * 100;
}

=head2 &_calculate_specificity ( $c_matrix_ref )

Calculates and adds the data for the C<specificity> key in the confusion matrix hash.

=cut

sub _calculate_specificity {
    my $c_matrix = shift;

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $c_matrix->{ specificity } = $numerator / $denominator * 100;
}

=head2 &_calculate_f1_score ( $c_matrix_ref )

Calculates and adds the data for the C<F1_Score> key in the confusion matrix hash.
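Assuming the standard F1 definition, the elided body presumably amounts to this (a sketch, not the verbatim source):

    my $numerator   = 2 * $c_matrix->{ true_positive };
    my $denominator = 2 * $c_matrix->{ true_positive }
                      + $c_matrix->{ false_positive }
                      + $c_matrix->{ false_negative };
    $c_matrix->{ F1_Score } = $numerator / $denominator * 100;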

=cut

sub _calculate_f1_score {
    my $c_matrix = shift;

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $c_matrix->{ F1_Score } = $numerator / $denominator * 100;
}       

=head2  &_calculate_negative_predicted_value( $c_matrix_ref )

Calculates and adds the data for the C<negative_predicted_value> key in the confusion matrix hash.

=cut

sub _calculate_negative_predicted_value {
    my $c_matrix = shift;

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $c_matrix->{ negative_predicted_value } = $numerator / $denominator * 100;
}

=head2  &_calculate_false_negative_rate( $c_matrix_ref )

Calculates and adds the data for the C<false_negative_rate> key in the confusion matrix hash.

=cut

sub _calculate_false_negative_rate {
    my $c_matrix = shift;

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $c_matrix->{ false_negative_rate } = $numerator / $denominator * 100;
}

=head2  &_calculate_false_positive_rate( $c_matrix_ref )

Calculates and adds the data for the C<false_positive_rate> key in the confusion matrix hash.

=cut

sub _calculate_false_positive_rate {
    my $c_matrix = shift;

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $c_matrix->{ false_positive_rate } = $numerator / $denominator * 100;
}

=head2  &_calculate_false_discovery_rate( $c_matrix_ref )

Calculates and adds the data for the C<false_discovery_rate> key in the confusion matrix hash.

=cut

sub _calculate_false_discovery_rate {
    my $c_matrix = shift;

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $c_matrix->{ false_discovery_rate } = $numerator / $denominator * 100;
}

=head2  &_calculate_false_omission_rate( $c_matrix_ref )

Calculates and adds the data for the C<false_omission_rate> key in the confusion matrix hash.

=cut

sub _calculate_false_omission_rate {
    my $c_matrix = shift;

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $c_matrix->{ false_omission_rate } = $numerator / $denominator * 100;
}

=head2  &_calculate_balanced_accuracy( $c_matrix_ref )

Calculates and adds the data for the C<balanced_accuracy> key in the confusion matrix hash.

=cut

sub _calculate_balanced_accuracy {
    my $c_matrix = shift;

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    my $actual_1_sum = $c_matrix->{ false_negative } + $c_matrix->{ true_positive };
    # column sum
    my $predicted_0_sum = $c_matrix->{ true_negative } + $c_matrix->{ false_negative };
    my $predicted_1_sum = $c_matrix->{ false_positive } + $c_matrix->{ true_positive };
    
    my $data = [ 
        [ $c_matrix->{ true_negative },  $c_matrix->{ false_positive }, $actual_0_sum ],
        [ $c_matrix->{ false_negative }, $c_matrix->{ true_positive }, $actual_1_sum ],
        [ $predicted_0_sum, $predicted_1_sum, $c_matrix->{ total_entries } ],
    ];
    my $matrix = Text::Matrix->new(
        rows => $actual_rows,
        columns => $predicted_columns,
        data => $data,
    );
    
    $matrix, $c_matrix;
}

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    print "~~" x24, "\n";
}

=head1 NERVE DATA RELATED SUBROUTINES

This part is about saving the data of the nerve. These subroutines can be imported using the C<:local_data> tag.

B<The subroutines are to be called in the procedural way>. No checking is done currently.

See C<PERCEPTRON DATA> and C<KNOWN ISSUES> sections for more details on the subroutines in this section.

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


The parameters and usage are the same as C<load_perceptron>. See the next subroutine.

=head2 load_perceptron ( $nerve_file_to_load )

Loads the data and turns it into a C<AI::Perceptron::Simple> object as the return value.

=cut

sub revive {
    load_perceptron( @_ );

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

    $loaded_nerve;
}

=head1 NERVE PORTABILITY RELATED SUBROUTINES

These subroutines can be imported using the C<:portable_data> tag.

The file type currently supported is YAML. Please be careful with the data as you don't want the nerve data accidentally modified.

=head2 preserve_as_yaml ( ... )

The parameters and usage are the same as C<save_perceptron_yaml>. See the next subroutine.

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


The parameters and usage are the same as C<load_perceptron>. See the next subroutine.

=head2 load_perceptron_yaml ( $yaml_nerve_file )

Loads the YAML data and turns it into a C<AI::Perceptron::Simple> object as the return value.

=cut

sub revive_from_yaml {
    load_perceptron_yaml( @_ );

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


=over 4

=item * Clean up and refactor the source code

=item * Add more useful data for confusion matrix

=item * Implement shuffling data feature

=item * Implement fast/smart training feature

=item * Write a tutorial or something for this module

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN


=head1 KNOWN ISSUES

=head2 Portability of Nerve Data

Take note that the C<Storable> nerve data is not compatible across different versions.

If you really need to send the nerve data to different computers with different versions of C<Storable> module, see the docs of the following subroutines: 

=over 4

=item * C<&preserve_as_yaml> or C<&save_perceptron_yaml> for storing data.

=item * C<&revive_from_yaml> or C<&load_perceptron_yaml> for retrieving the data.

=back

=head1 AUTHOR

 view all matches for this distribution


AI-PredictionClient-Alien-TensorFlowServingProtos

 view release on metacpan or  search on metacpan

alien_packages/tds/base64.proto  view on Meta::CPAN

option cc_enable_arenas = true;
option java_outer_classname = "base64";
option java_multiple_files = true;
option java_package = "org.tds";

// Protocol buffer to encode/decode base64 data for JSON transport.
// Protocol Buffers encode bytes to base64 when transforming to JSON.

message Base64Proto {
 repeated bytes base64  = 1;
};

 view all matches for this distribution


AI-PredictionClient

 view release on metacpan or  search on metacpan

lib/AI/PredictionClient.pm  view on Meta::CPAN


If you don't have a server to talk to, but want to see if most everything else is working, use 
the --debug_loopback_interface option. This will provide a sample response you can test the client with. 
The module can use the same loopback interface for debugging your bespoke clients.

The --debug_verbose option will dump the data structures of the request and response to allow
you to see what is going on.

=head3 The response from a live server to the camel image looks like this:

 Inception.pl --image_file=zzzzz --debug_camel --host=107.170.xx.xxx --port=9000    

lib/AI/PredictionClient.pm  view on Meta::CPAN


 https://www.tomstall.com/content/create-a-globally-distributed-tensorflow-serving-cluster-with-nearly-no-pain/

=head1 ADDITIONAL INFO

The design of this client is meant to make it fairly easy for a developer to see how the data is formed and received. 
The TensorFlow interface is based on Protocol Buffers and gRPC. 
That implementation is built on a complex architecture of nested protofiles.

In this design I flattened the architecture out; where the native data handling of Perl works best, 
the modules use plain old Perl data structures rather than creating another layer of accessors.

The Tensor interface is used repeatedly, so this package includes a simplified Tensor class 
to pack and unpack data to and from the models.

In the case of most clients, the Tensor class is simply sending and receiving rank-one tensors, i.e. vectors. 
In the case of higher-rank tensors, the tensor data is sent and received flattened. 
The size property would be used for importing/exporting the tensors in/out of a math package.   
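As a plain-Perl illustration of that flattening (this is not the module's own Tensor API):

  # a 2x3 matrix sent as a flat list plus its size
  my @matrix = ( [ 1, 2, 3 ], [ 4, 5, 6 ] );
  my @flat   = map { @$_ } @matrix;                         # (1, 2, 3, 4, 5, 6)
  my @size   = ( scalar @matrix, scalar @{ $matrix[0] } );  # (2, 3)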

The design takes advantage of the native JSON serialization capabilities built into the C++ Protocol Buffers. 
Serialization allows a much simpler, more robust interface to be created between the Perl environment 
and the C++ environment. 
One of the biggest advantages is for the developer who would like to quickly extend what this package does. 
You can see how the data structures are built and directly manipulate them in Perl. 
Of course, if you can be more forward looking, building the proper roles and classes and contributing them would be great. 

=head1 DEPENDENCIES

This module is dependent on gRPC. This module will use the cpan module Alien::Google::GRPC to 

 view all matches for this distribution


AI-Prolog

 view release on metacpan or  search on metacpan

examples/data_structures.pl  view on Meta::CPAN


use AI::Prolog;
use Data::Dumper;   # needed for the Dumper() calls below

# note that the following line sets an experimental interface option
AI::Prolog->raw_results(0);
my $database = <<'END_PROLOG';
append([], X, X).
append([W|X],Y,[W|Z]) :- append(X,Y,Z).
END_PROLOG

my $logic = AI::Prolog->new($database);
$logic->query('append(LIST1,LIST2,[a,b,c,d]).');
while (my $result = $logic->results) {
    print Dumper($result->LIST1);
    print Dumper($result->LIST2);
}

 view all matches for this distribution


AI-SimulatedAnnealing

 view release on metacpan or  search on metacpan

t/annealing_tests.t  view on Meta::CPAN

        when ($Text::BSV::Exception::IO_ERROR) {
            say STDERR "Couldn't open $DQ$bsv_file_path$DQ for reading.";
            exit(1);
        }
        when ($Text::BSV::Exception::INVALID_DATA_FORMAT) {
            say STDERR "Invalid BSV data:  " . $exception->get_message();
            exit(1);
        }
        default {
            say STDERR $exception->get_message();
            exit(1);
        } # end when
    } # end given
} # end if

# Generate a list of distances for each probability from the data in the
# BSV file:
my $field_names = $bsv_file_reader->get_field_names();
my @mapped_distances; # indexes 2-5 = Probability constants;
                      # values = references to number arrays

t/annealing_tests.t  view on Meta::CPAN

unless ($field_names->[0] eq "Time"
  && $field_names->[1] =~ /$Probability::ONE_FIFTH\z/s
  && $field_names->[2] =~ /$Probability::ONE_FOURTH\z/s
  && $field_names->[3] =~ /$Probability::ONE_THIRD\z/s
  && $field_names->[4] =~ /$Probability::ONE_HALF\z/s) {
    die "ERROR:  The input file does not contain market-distance data in "
      . "the expected format.\n";
} # end unless

while ($bsv_file_reader->has_next()) {
    my $record;

t/annealing_tests.t  view on Meta::CPAN

    };

    if ($EVAL_ERROR) {
        given ($EVAL_ERROR->get_type()) {
            when ($Text::BSV::Exception::INVALID_DATA_FORMAT) {
                die "ERROR:  Invalid BSV data:  "
                  . $EVAL_ERROR->get_message() . $LF;
            }
            default {
                die "ERROR:  " . $EVAL_ERROR->get_message() . $LF;
            } # end when

t/annealing_tests.t  view on Meta::CPAN


    $dex = $record->{"Time"} - 3;

    unless ($dex >= 0
      && $dex <= scalar(@{ $mapped_distances[$Probability::ONE_FIFTH] })) {
        die "ERROR:  The input file does not contain market-distance data "
          . "in the expected format.\n";
    } # end unless

    for my $p (2..5) {
        push @{ $mapped_distances[$p] }, $record->{$field_names->[6 - $p]};

 view all matches for this distribution


AI-TensorFlow-Libtensorflow

 view release on metacpan or  search on metacpan

lib/AI/TensorFlow/Libtensorflow/Buffer.pm  view on Meta::CPAN

package AI::TensorFlow::Libtensorflow::Buffer;
# ABSTRACT: Buffer that holds pointer to data with length
$AI::TensorFlow::Libtensorflow::Buffer::VERSION = '0.0.7';
use strict;
use warnings;
use namespace::autoclean;
use AI::TensorFlow::Libtensorflow::Lib qw(arg);

lib/AI/TensorFlow/Libtensorflow/Buffer.pm  view on Meta::CPAN





FFI::C->struct( 'TF_Buffer' => [
	data => 'opaque',
	length => 'size_t',
	_data_deallocator => 'opaque', # data_deallocator_t
	# this does not work?
	#_data_deallocator => 'data_deallocator_t',
]);
use Sub::Delete;
delete_sub 'DESTROY';

sub data_deallocator {
	# getter when called without a coderef, setter otherwise
	my ($self, $coderef) = @_;

	return $self->{_data_deallocator_closure} unless $coderef;

	my $closure = $ffi->closure( $coderef );

	$closure->sticky;
	$self->{_data_deallocator_closure} = $closure;

	my $opaque = $ffi->cast('data_deallocator_t', 'opaque', $closure);
	$self->_data_deallocator( $opaque );
}


$ffi->attach( [ 'NewBuffer' => 'New' ] => [] => 'TF_Buffer' );

lib/AI/TensorFlow/Libtensorflow/Buffer.pm  view on Meta::CPAN


=encoding UTF-8

=head1 NAME

AI::TensorFlow::Libtensorflow::Buffer - Buffer that holds pointer to data with length

=head1 SYNOPSIS

  use aliased 'AI::TensorFlow::Libtensorflow::Buffer' => 'Buffer';

=head1 DESCRIPTION

C<TFBuffer> is a data structure that stores a pointer to a block of data, the
length of the data, and optionally a deallocator function for memory
management.

This structure is typically used in C<libtensorflow> to store the data for a
serialized protocol buffer.

=head1 CONSTRUCTORS

=head2 New

lib/AI/TensorFlow/Libtensorflow/Buffer.pm  view on Meta::CPAN

=back

Makes a copy of the input and sets an appropriate deallocator. Useful for
passing in read-only input protobufs.

  my $data = 'bytes';
  my $buffer = Buffer->NewFromString(\$data);
  ok $buffer, 'create buffer from string';
  is $buffer->length, bytes::length($data), 'same length as string';

B<Parameters>

=over 4

lib/AI/TensorFlow/Libtensorflow/Buffer.pm  view on Meta::CPAN


=over 4

=item L<TFBuffer|AI::TensorFlow::Libtensorflow::Lib::Types/TFBuffer>

Contains a copy of the input data from C<$proto>.

=back

B<C API>: L<< C<TF_NewBufferFromString>|AI::TensorFlow::Libtensorflow::Manual::CAPI/TF_NewBufferFromString >>

=head1 ATTRIBUTES

=head2 data

An C<opaque> pointer to the buffer.

=head2 length

Length of the buffer as a C<size_t>.

=head2 data_deallocator

A C<CodeRef> for the deallocator.
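A sketch of setting one (the callback body is illustrative; it receives the pointer and length declared by C<data_deallocator_t>):

  $buffer->data_deallocator( sub {
      my ($data, $length) = @_;
      # release the block pointed to by $data here
  } );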

=head1 DESTRUCTORS

 view all matches for this distribution


AI-Termites

 view release on metacpan or  search on metacpan

MANIFEST  view on Meta::CPAN

Makefile.PL
MANIFEST			This list of files
README
samples/termites.pl
t/AI-Termites.t
META.yml                                 Module meta-data (added by MakeMaker)

 view all matches for this distribution


AI-XGBoost

 view release on metacpan or  search on metacpan

examples/basic.pl  view on Meta::CPAN

use AI::XGBoost qw(train);

# We are going to solve a binary classification problem:
#  Mushroom poisonous or not

my $train_data = DMatrix->From(file => 'agaricus.txt.train');
my $test_data = DMatrix->From(file => 'agaricus.txt.test');

# With XGBoost we can solve this problem using 'gbtree' booster
#  and as loss function a logistic regression 'binary:logistic'
#  (Gradient Boosting Regression Tree)
# XGBoost Tree Booster has a lot of parameters that we can tune
# (https://github.com/dmlc/xgboost/blob/master/doc/parameter.md)

my $booster = train(data => $train_data, number_of_rounds => 10, params => {
        objective => 'binary:logistic',
        eta => 1.0,
        max_depth => 2,
        silent => 1
    });

# For binary classification, predictions are probability confidence scores in [0, 1]
#  indicating that the label is positive (1 in the first column of agaricus.txt.test)
my $predictions = $booster->predict(data => $test_data);

say join "\n", @$predictions[0 .. 10];

 view all matches for this distribution


AIIA-GMT

 view release on metacpan or  search on metacpan

MANIFEST  view on Meta::CPAN

MANIFEST
ppport.h
README
t/AIIA-GMT.t
lib/AIIA/GMT.pm
META.yml                                 Module meta-data (added by MakeMaker)

 view all matches for this distribution

