FuzzyInference.pm view on Meta::CPAN
return $obj;
}
# sub _init() - private method.
#
# no arguments. Initializes the data structures we will need.
# It also defines the default logic operations we might need.
sub _init {
my $self = shift;
FuzzyInference.pm view on Meta::CPAN
return undef unless exists $self->{RESULTS}{$var};
return $self->{RESULTS}{$var};
}
# sub reset() - public method
#
# cleans the data structures used.
sub reset {
my $self = shift;
my @list = $self->{SET}->listMatching(q|:implicated$|);
push @list => $self->{SET}->listMatching(q|:aggregated$|);
FuzzyInference.pm view on Meta::CPAN
sub compute {
my ($self,
%vars,
) = @_;
$self->reset();
# First thing we do is to fuzzify the inputs.
$self->_fuzzify(%vars);
# Now, apply the rules to see which ones fire.
FuzzyInference.pm view on Meta::CPAN
my $_defuzzification = $self->{DEFUZZIFICATION};
# iterate through all output vars.
for my $var (keys %{$self->{OUTVARS}}) {
my $result = 0;
if ($self->{SET}->exists("$var:aggregated")) {
$result = $self->{SET}->$_defuzzification("$var:aggregated");
}
$self->{RESULTS}{$var} = $result;
}
}
# sub _aggregate() - private method.
#
FuzzyInference.pm view on Meta::CPAN
In this step, all the defined rules are examined. Each rule has two parts:
the I<precedent> and the I<consequent>. The degree of support for each
rule is computed by applying fuzzy operators (I<and>, I<or>) to combine
all parts of its precedent, generating a single crisp value. This value
indicates the "strength of firing" of the rule, and is used to reshape
(I<implicate>) the consequent part of the rule, generating modified
fuzzy sets.
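As an illustrative sketch (not code from this module; the membership values are made up), the firing strength of a two-part precedent under the default C<min> operator could be computed like this:

    use List::Util qw(min);

    # "height=short & weight=big": combine the two membership degrees
    # with the default 'min' operator for the fuzzy AND.
    my ($mu_short, $mu_big) = (0.7, 0.4);     # assumed membership values
    my $strength = min($mu_short, $mu_big);   # 0.4 = strength of firing
    # min-implication then clips the consequent's fuzzy set at 0.4.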
=head2 Aggregation
FuzzyInference.pm view on Meta::CPAN
=over 8
=item min
The result of C<A and B> is C<min(A, B)>. This is the default.
=item product
The result of C<A and B> is C<A * B>.
=back
=item |
=over 8
=item max
The result of C<A or B> is C<max(A, B)>. This is the default.
=item sum
The result of C<A or B> is C<min(A + B, 1)>.
=back
=item !
=over 8
=item complement
The result of C<not A> is C<1 - A>. This is the default.
=back
The method returns the name of the method to be used for the given
operation.
FuzzyInference.pm view on Meta::CPAN
This method is used to add the fuzzy rules. Its arguments are hash-value
pairs; the keys are the precedents and the values are the consequents.
Each precedent has to be a combination of 1 or more strings. The
strings have to be separated by C<&> or C<|>, indicating the fuzzy
I<AND> and I<OR> operations respectively. Each consequent must be a
single string. Each string has the form: C<var = term_set>. Spaces
are completely optional. Example:
$obj->addRule('height=short & weight=big' => 'diet = necessary',
'height=tall & weight=tiny' => 'diet = are_you_kidding_me');
FuzzyInference.pm view on Meta::CPAN
=item compute()
This method takes as input a set of hash-value pairs; the keys are names
of input variables, and the values are the values of the variables. It
runs those values through the FIS, generating corresponding values for
the output variables. It always returns a true value. To get the actual
values of the output variables, look at the C<value()> method below.
Example:
$obj->compute(x => 5,
y => 24);
Note that any subsequent call to C<compute()> will implicitly clear out
the old computed values before recomputing the new ones. This is done
through a call to the C<reset()> method below.
=item value()
This method returns the value of the supplied output variable. It only
works for output variables (defined using the C<outVar()> method),
and only returns useful results after a call to C<compute()> has been
made.
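For example, following the C<compute()> call above (the output variable name C<z> here is hypothetical):

    my $z = $obj->value('z');   # crisp value of the output variable 'z'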
=item reset()
This method resets all the data structures used to compute crisp values
of the output variables. It is implicitly called by the C<compute()>
method above.
=back
FuzzyInference.pm view on Meta::CPAN
make install
=head1 AUTHOR
Copyright 2002, Ala Qumsieh. All rights reserved.
This library is free software; you can redistribute it and/or modify
it under the same terms as Perl itself.
Address bug reports and comments to: aqumsieh@cpan.org
AI/Gene/Sequence.pm view on Meta::CPAN
=head2 Anatomy of a gene
A gene is a sequence of tokens, each a member of some group
of similar tokens (they can of course all be members of a
single group). This module encodes genes as a string
representing token types, and an array containing the
tokens themselves; this allows arbitrary data to be
stored as a token in a gene.
For instance, a regular expression could be encoded as:
$self = ['ccartm',['a', 'b', '|', '[A-Z]', '\W', '*?'] ]
Using a string to indicate the sort of thing held at the
corresponding part of the gene allows for a simple test
of the validity of a proposed gene by using a regular
expression.
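A minimal sketch of such a validity check (the type string and the grammar regex below are hypothetical):

    # Each character of the type string names the group of the token at that
    # position; a single regex can then accept or reject the proposed layout.
    my $types = 'ccartm';
    my $valid = qr/^c[cart]*m$/;    # assumed grammar, for illustration only
    print "looks valid\n" if $types =~ $valid;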
=head2 Using the module
To use the genetic sequences, you must write your own
implementations of the following methods:
AI/Gene/Sequence.pm view on Meta::CPAN
If positions are not defined, then random ones are chosen. If
lengths are not defined, a length of 1 is assumed (i.e. working on
single tokens only). If a length of 0 is requested, then a random
length is chosen.
Also, if a mutation is suggested but would result in an invalid
sequence, then the mutation will not be carried out.
If a mutation is attempted which could corrupt your gene (copying
from a region beyond the end of the gene for instance) then it
will be silently skipped. Mutation methods all return the number
of mutations carried out (not the number of tokens affected).
AI/Gene/Sequence.pm view on Meta::CPAN
For an illustration of the use of this module, see Regexgene.pm,
Musicgene.pm, spamscan.pl and music.pl from the gzipped distribution.
=head1 COPYRIGHT
Copyright (c) 2000 Alex Gough <F<alex@rcon.org>>. All rights reserved.
This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.
=head1 BUGS
make test
make install
DEPENDENCIES
This module requires these other modules and libraries:
blah blah blah
COPYRIGHT AND LICENCE
lib/AI/Genetic/Pro.pm view on Meta::CPAN
mutation _mutator
strategy _strategist
selection _selector
_translations
generation
preserve
variable_length
_fix_range
_package
_length
strict _strict
lib/AI/Genetic/Pro.pm view on Meta::CPAN
}
#=======================================================================
sub _state {
my ( $self ) = @_;
my @res;
if( $self->_package ){
@res = map {
[
${ tied( @{ $self->chromosomes->[ $_ ] } ) },
$self->_fitness->{ $_ },
]
} 0 .. $self->population - 1
}else{
@res = map {
[
$self->chromosomes->[ $_ ],
$self->_fitness->{ $_ },
]
} 0 .. $self->population - 1
}
return \@res;
}
#=======================================================================
sub evolve {
my ($self, $generations) = @_;
lib/AI/Genetic/Pro.pm view on Meta::CPAN
croak(qq/Chromosome was modified outside the 'evolve' function from "@tmp0" to "@tmp1"!/) unless compare(\@tmp0, \@tmp1);
}
}
# split into two loops just for speed
unless($self->preserve){
for(my $i = 0; $i != $generations; $i++){
# terminate ----------------------------------------------------
last if $self->terminate and $self->terminate->($self);
# update generation --------------------------------------------
$self->generation($self->generation + 1);
lib/AI/Genetic/Pro.pm view on Meta::CPAN
$self->_crossover();
# mutation -----------------------------------------------------
$self->_mutation();
}
}else{
croak('You cannot preserve more chromosomes than is in population!') if $self->preserve > $self->population;
my @preserved;
for(my $i = 0; $i != $generations; $i++){
# terminate ----------------------------------------------------
last if $self->terminate and $self->terminate->($self);
# update generation --------------------------------------------
$self->generation($self->generation + 1);
# update history -----------------------------------------------
$self->_save_history;
#---------------------------------------------------------------
# preservation of N unique chromosomes
@preserved = map { clone($_) } @{ $self->getFittest_as_arrayref($self->preserve - 1, 1) };
# selection ----------------------------------------------------
$self->_select_parents();
# crossover ----------------------------------------------------
$self->_crossover();
# mutation -----------------------------------------------------
$self->_mutation();
#---------------------------------------------------------------
for(@preserved){
my $idx = int rand @{$self->chromosomes};
$self->chromosomes->[$idx] = $_;
$self->_fitness->{$idx} = $self->fitness()->($self, $_);
}
}
lib/AI/Genetic/Pro.pm view on Meta::CPAN
return oct('0b' . $ga->as_string($chromosome));
}
sub terminate {
my ($ga) = @_;
my $result = oct('0b' . $ga->as_string($ga->getFittest));
return $result == 4294967295 ? 1 : 0;
}
my $ga = AI::Genetic::Pro->new(
-fitness => \&fitness, # fitness function
-terminate => \&terminate, # terminate function
lib/AI/Genetic/Pro.pm view on Meta::CPAN
-crossover => 0.9, # probab. of crossover
-mutation => 0.01, # probab. of mutation
-parents => 2, # number of parents
-selection => [ 'Roulette' ], # selection strategy
-strategy => [ 'Points', 2 ], # crossover strategy
-cache => 0, # cache results
-history => 1, # remember best results
-preserve => 3, # remember the bests
-variable_length => 1, # turn variable length ON
-mce => 1, # optional MCE support
-workers => 3, # number of workers (MCE)
);
lib/AI/Genetic/Pro.pm view on Meta::CPAN
This defines the size of the population, i.e. how many chromosomes
simultaneously exist at each generation.
=item -crossover
This defines the crossover rate. The best results are achieved with a
crossover rate of ~0.95.
=item -mutation
This defines the mutation rate. The best results are achieved with a mutation
rate of ~0.01.
=item -preserve
This defines the injection of the best chromosomes into the next generation. It causes a slight slowdown; however, much better results are very often achieved. You can specify how many chromosomes will be preserved, i.e.
-preserve => 1, # only one chromosome will be preserved
# or
-preserve => 9, # 9 chromosomes will be preserved
# and so on...
Attention! You cannot preserve more chromosomes than exist in your population.
=item -variable_length
This defines whether variable-length chromosomes are turned on (default off)
and which types of mutation are allowed. See below.
lib/AI/Genetic/Pro.pm view on Meta::CPAN
X^($aa - 1) * (1 - X)^($bb - 1) / B($aa , $bb) for 0 < X < 1.
C<$aa> and C<$bb> are set by default to the number of parents.
B<Argument restrictions:> Both $aa and $bb must not be less than 1.0E-37.
=item C<-selection =E<gt> [ 'RouletteDistribution', 'binomial' ]>
Binomial distribution. No additional parameters are needed.
lib/AI/Genetic/Pro.pm view on Meta::CPAN
X^($aa - 1) * (1 - X)^($bb - 1) / B($aa , $bb) for 0 < X < 1.
C<$aa> and C<$bb> are set by default to the number of parents.
B<Argument restrictions:> Both $aa and $bb must not be less than 1.0E-37.
=item C<-selection =E<gt> [ 'Distribution', 'binomial' ]>
Binomial distribution. No additional parameters are needed.
lib/AI/Genetic/Pro.pm view on Meta::CPAN
X^($aa - 1) * (1 - X)^($bb - 1) / B($aa , $bb) for 0 < X < 1.
C<$aa> and C<$bb> are set by default to the number of parents.
B<Argument restrictions:> Both $aa and $bb must not be less than 1.0E-37.
=item C<-strategy =E<gt> [ 'Distribution', 'binomial' ]>
Binomial distribution. No additional parameters are needed.
lib/AI/Genetic/Pro.pm view on Meta::CPAN
This initializes a population where each individual/chromosome has 10 genes.
=item B<listvector>
For listvectors, the argument is an anonymous list of lists. The number of sub-lists is equal to the number of genes of each individual/chromosome. Each sub-list defines the possible string values that the corresponding gene can assume.
$ga->init([
[qw/red blue green/],
[qw/big medium small/],
[qw/very_fat fat fit thin very_thin/],
lib/AI/Genetic/Pro.pm view on Meta::CPAN
This initializes a population where each individual/chromosome has 3 genes and each gene can assume one of the given values.
=item B<rangevector>
For rangevectors, the argument is an anonymous list of lists. The number of sub-lists is equal to the number of genes of each individual/chromosome. Each sub-list defines the minimum and maximum integer values that the corresponding gene can assume.
$ga->init([
[1, 5],
[0, 20],
[4, 9],
]);
This initializes a population where each individual/chromosome has 3 genes and each gene can assume an integer within the corresponding range.
=item B<combination>
For combination, the argument is an anonymous list of the possible values of a gene.
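A hypothetical sketch, mirroring the listvector and rangevector examples above (the values are made up):

    $ga->init( [ qw/red green blue yellow/ ] );

This would initialize a population where each individual/chromosome is built as a combination of the given values.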
lib/AI/Genetic/Pro.pm view on Meta::CPAN
Alias for C<people>.
=item I<$ga>-E<gt>B<chart>(%options)
Generate a chart describing changes of min, mean, and max scores in your
population. To satisfy your needs, you can pass the following options:
=over 4
=item -filename
lib/AI/Genetic/Pro.pm view on Meta::CPAN
Load a state of the genetic algorithm from the specified file.
=item I<$ga>-E<gt>B<as_array>($chromosome)
In list context return an array representing the specified chromosome.
In scalar context return a reference to an array representing the specified
chromosome. If I<variable_length> is turned on and is set to level 2, an array
can have some C<undef> values. To get only C<not undef> values use
C<as_array_def_only> instead of C<as_array>.
=item I<$ga>-E<gt>B<as_array_def_only>($chromosome)
In list context return an array representing the specified chromosome.
In scalar context return a reference to an array representing the specified
chromosome. If I<variable_length> is turned off, this function is just an
alias for C<as_array>. If I<variable_length> is turned on and is set to
level 2, this function will return only C<not undef> values from chromosome.
See example below:
lib/AI/Genetic/Pro.pm view on Meta::CPAN
# @chromosome looks something like that
# ( 1, 0, 1, 1, 1, 0 )
=item I<$ga>-E<gt>B<as_string>($chromosome)
Return a string representation of the specified chromosome. See example below:
# -type => 'bitvector'
my $string = $ga->as_string($chromosome);
# $string looks something like that
lib/AI/Genetic/Pro.pm view on Meta::CPAN
string C<undef> values will be replaced with B<spaces>. If you don't want
to see any I<spaces>, use C<as_string_def_only> instead of C<as_string>.
=item I<$ga>-E<gt>B<as_string_def_only>($chromosome)
Return a string representation of specified chromosome. If I<variable_length>
is turned off, this function is just an alias for C<as_string>. If I<variable_length>
is turned on and is set to level 2, this function will return a string without
C<undef> values. See example below:
# -variable_length => 2, -type => 'bitvector'
lib/AI/Genetic/Pro.pm view on Meta::CPAN
L<AI::Genetic>
L<Algorithm::Evolutionary>
=head1 COPYRIGHT
Copyright (c) Strzelecki Lukasz. All rights reserved.
This program is free software; you can redistribute it and/or modify it
under the same terms as Perl itself.
=cut
sub generation { $_[0]{GENERATION} }
# sub inject():
# This method is used to add individuals to the current population.
# The point of it is that sometimes the population gets stagnant,
# so it could be useful to add "fresh blood".
# Takes a variable number of arguments. The first argument is the
# total number, N, of new individuals to add. The remaining arguments
# are genomes to inject. There must be at most N genomes to inject.
# If the number, n, of genomes to inject is less than N, N - n random
# genomes are added. Perhaps an example will help?
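# For example (hypothetical call and values):
#   $ga->inject(5, $genomeA, $genomeB);
# would add the two given genomes plus 3 randomly generated individuals
# (N = 5, n = 2, so N - n = 3 random genomes are created).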
}
sub terminateFunc {
my $ga = shift;
# terminate if reached some threshold.
return 1 if $ga->getFittest->score > $THRESHOLD;
return 0;
}
=head1 DESCRIPTION
along the way.
B<PLEASE NOTE:> As of v0.02, AI::Genetic has been re-written from
scratch to be more modular and expandable. To achieve this, I had
to modify the API, so it is not backward-compatible with v0.01.
As a result, I do not plan on supporting v0.01.
I will not go into the details of GAs here, but here are the
bare basics. Plenty of information can be found on the web.
In a GA, a population of individuals compete for survival. Each
individual is designated by a set of genes that define its
behaviour. Individuals that perform better (as defined by the
fitness function) have a higher chance of mating with other
individuals. When two individuals mate, they swap some of
their genes, resulting in an individual that has properties
from both of its "parents". Every now and then, a mutation
occurs where some gene randomly changes value, resulting in
a different individual. If all is well defined, after a few
generations, the population should converge on a "good-enough"
solution to the problem being tackled.
A GA implementation runs for a discrete number of time steps
=item B<2. Crossover>
Here, individuals selected are randomly paired up for
crossover (aka I<sexual reproduction>). This is further
controlled by the crossover rate specified and may result in
a new offspring individual that contains genes common to
both parents. New individuals are injected into the current
population.
=item B<3. Mutation>
=item o
For listvectors, the argument is an anonymous list of lists. The
number of sub-lists is equal to the number of genes of each individual.
Each sub-list defines the possible string values that the corresponding gene
can assume.
$ga->init([
[qw/red blue green/],
[qw/big medium small/],
=item o
For rangevectors, the argument is an anonymous list of lists. The
number of sub-lists is equal to the number of genes of each individual.
Each sub-list defines the minimum and maximum integer values that the
corresponding gene can assume.
$ga->init([
[1, 5],
[0, 20],
[4, 9],
]);
This initializes a population where each individual has 3 genes, and each gene
can assume an integer within the corresponding range.
=back
=item I<$ga>-E<gt>B<inject>(I<N>, ?I<args>?)
The population is sorted according to the individuals' fitnesses.
=item o
The subroutine corresponding to the named strategy is called with one argument,
the AI::Genetic object. This subroutine is expected to alter the object itself.
=item o
If a termination subroutine is given, it is executed and the return value is
This returns the I<N> fittest individuals. If not specified,
I<N> defaults to 1. As a side effect, it sorts the population by
fitness score. The actual AI::Genetic::Individual objects are returned.
You can use the C<genes()> and C<score()> methods to get the genes and the
scores of the individuals. Please check L<AI::Genetic::Individual> for details.
=item I<$ga>-E<gt>B<sortPopulation>
This method sorts the population according to fitness function. The results
are cached for speed.
=item I<$ga>-E<gt>B<sortIndividuals>(?[I<ListOfIndividuals>]?)
Given an anonymous list of individuals, this method sorts them according
Very quickly you will realize that properly defining the fitness function
is the most important aspect of a GA. Most of the time that a genetic
algorithm takes to run is spent in running the fitness function for each
separate individual to get its fitness. AI::Genetic tries to minimize this
time by caching the fitness result for each individual. But, B<you should
spend a lot of time optimizing your fitness function to achieve decent run
times.>
The fitness function should expect only one argument, an anonymous list of
genes, corresponding to the individual being analyzed. It is expected
to return a number which defines the fitness score of the said individual.
The higher the score, the more fit the individual, the more the chance it
has to be chosen for crossover.
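A minimal sketch of such a fitness function (illustrative only; it simply counts the 1-genes of a bitvector individual):

    sub fitness {
        my $genes = shift;           # anonymous list of genes
        my $score = 0;
        $score += $_ for @$genes;    # more 1s => a higher score
        return $score;
    }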
=head1 STRATEGIES
Perl can be pretty fast, but will never reach the speed of optimized
C code (at least my Perl coding will not). I wrote AI::Genetic mainly
for my own learning experience, but still tried to optimize it as
much as I can while trying to keep it as flexible as possible.
To do that, I resorted to some well-known tricks like passing a
reference of a long list instead of the list itself (for example,
when calling the fitness function, a reference of the gene list
is passed), and caching fitness scores (if you try to evaluate
the fitness of the same individual more than once, then the fitness
function will not be called, and the cached result is returned).
To help speed up your run times, you should pay special attention
to the design of your fitness function since this will be called once
for each unique individual in each generation. If you can shave off a
few clock cycles here and there, then it will be greatly magnified in
discussions and great suggestions. Daniel Martin and Ivan Tubert-Brohman
uncovered various bugs and for this I'm grateful.
=head1 COPYRIGHTS
(c) 2003-2005 Ala Qumsieh. All rights reserved.
This module is distributed under the same terms as Perl itself.
=cut
lib/AI/Image.pm view on Meta::CPAN
# Get URL from image prompt
sub image {
my ($self, $prompt) = @_;
my $response = $http->post($url{$self->{'api'}}, {
'headers' => {
'Authorization' => 'Bearer ' . $self->{'key'},
'Content-type' => 'application/json'
},
content => encode_json {
model => $self->{'model'},
size => $self->{'size'},
prompt => $prompt,
}
});
if ($response->{'content'} =~ 'invalid_api_key') {
croak 'Incorrect API Key - check your API Key is correct';
}
if ($self->{'debug'} and !$response->{'success'}) {
croak $response if $self->{'debug'} eq 'verbose';
croak $response->{'content'};
}
my $reply = decode_json($response->{'content'});
return $reply->{'data'}[0]->{'url'};
}
__END__
lib/AI/Image.pm view on Meta::CPAN
=head1 MODELS
Although the API Key is free to obtain, each use incurs a cost. This depends on the
model chosen and the size. The 'dall-e-3' model produces better images but at a
higher cost. Likewise, bigger images cost more.
The default model C<dall-e-2> with the default size of C<512x512> produces reasonable
results at a low cost and is a good place to start using this module.
See also L<https://platform.openai.com/docs/models/overview>
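A hypothetical usage sketch; the constructor parameter names below are assumed from the attributes visible in C<image()> above and may differ from the real API:

    use AI::Image;

    my $ai = AI::Image->new(
        key   => 'sk-...',       # your OpenAI API key (parameter name assumed)
        model => 'dall-e-2',
        size  => '512x512',
    );
    my $url = $ai->image('A watercolour lighthouse at dawn');
    print "$url\n";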
=head1 METHODS
lib/AI/Image.pm view on Meta::CPAN
=head1 BUGS
Please report any bugs or feature requests to C<bug-ai-image at rt.cpan.org>, or through
the web interface at L<https://rt.cpan.org/NoAuth/ReportBug.html?Queue=bug-ai-image>. I will be notified, and then you'll
automatically be notified of progress on your bug as I make changes.
=head1 SUPPORT
You can find documentation for this module with the perldoc command.
LibNeural.pm view on Meta::CPAN
$nn->train( [ 0, 0 ], [ 0.05 ], 0.0000000005, 0.2 );
$nn->train( [ 0, 1 ], [ 0.05 ], 0.0000000005, 0.2 );
$nn->train( [ 1, 0 ], [ 0.05 ], 0.0000000005, 0.2 );
$nn->train( [ 1, 1 ], [ 0.95 ], 0.0000000005, 0.2 );
my $result = $nn->run( [ 1, 1 ] );
# result should be ~ 0.95
$result = $nn->run( [ 0, 1 ] );
# result should be ~ 0.05
$nn->save('and.mem');
=head1 ABSTRACT
LibNeural.pm view on Meta::CPAN
nodes, and OUTPUTS output nodes.
=item $nn->train([I1,I2,...],[O1,O2,...],MINERR,TRAINRATE)
Completes a training cycle for the given inputs I1-IN, with the expected
results of O1-OM, where N is the number of inputs and M is the number of
outputs. MINERR is the mean squared error at the output that you wish to achieve. TRAINRATE is the learning rate to be used.
=item (O1,O2) = $nn->run([I1,I2,...])
Calculate the corresponding outputs (O1-OM) for the given inputs (I1-IN) based
on the previous training. Should only be called after the network has been
suitably trained.
=item NUM = $nn->get_layersize(WHICH)
LibNeural.pm view on Meta::CPAN
=item status = $nn->load(FILENAME)
=item status = $nn->save(FILENAME)
Loads and saves, respectively, the network's 'memory' (its node configuration
and weights). FILENAME should be the location of the file in which the
memory is stored/retrieved. A short round-trip sketch follows this list.
=back
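A minimal round-trip sketch using only the methods documented above (file name as in the SYNOPSIS):

    $nn->save('and.mem');   # persist node configuration and weights
    # ... later, on a network constructed with the same layer sizes ...
    $nn->load('and.mem');   # restore them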
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
my @args = ("./dlv", "$_[1]");
system(@args) == 0
or die "system @args failed: $?";
open(STDOUT,">&SAVESTDOUT"); #close file and restore STDOUT
close OUTPUT;
}
sub executeAndSave { #Executes DLV and saves the output of the program written by the user in a file
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
open(STDOUT, ">$_[0]") or die "Can't open STDOUT to $_[0]", "$!\n";
my @args = ("./dlv --");
system(@args) == 0 or die "system @args failed: $?";
open(STDOUT,">&SAVESTDOUT"); #close file and restore STDOUT
close OUTPUT;
}
sub iterativeExec { # Executes an input program with several instances and stores them in a bidimensional array
my @input = @_;
my @returned_value;
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
return @returned_value;
}
sub singleExec { # Executes a single input program or opens the DLV terminal and stores it in an array
my @input = @_;
my @returned_value;
if(@input) {
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
}
sub getASFromFile { #Gets the Answer Set from the file where the output was saved
open RESULT, "<", "$_[0]" or die $!;
my @result = <RESULT>;
my @arr;
foreach my $line (@result) {
if($line =~ /\{\w*/) {
$line =~ s/(\{|\})//g;
#$line =~ s/\n//g; # delete \n from $line
my @tmp = split(', ', $line);
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
return @arr;
}
sub getAS { #Returns the Answer Sets from the array where the output was saved
my @result = @_;
my @arr;
foreach my $line (@result) {
if($line =~ /\{\w*/) {
$line =~ s/(\{|\})//g;
$line =~ s/(Best model:)//g;
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
sub getProjection { #Returns the values selected by the user
my @pr = @{$_[0]};
my @projection;
my @res = @{$pr[$_[1]]{$_[2]}};
my $size = @res;
my $fieldsStr;
for(my $i = 0; $i < $size; $i++) {
my $pred = @{$pr[$_[1]]{$_[2]}}[$i];
if($pred =~ /(\w+)\((.+)\)/) {
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
# invoke DLV( AnwerSetProgramming-based system) and save the stdoutput
my @stdoutput = AI::Logic::AnswerSet::singleExec("3-colorability.txt");
# parse the output
my @res = AI::Logic::AnswerSet::getAS(@stdoutput);
# map the results
my @mappedAS = AI::Logic::AnswerSet::mapAS(\@res);
# get a predicate from the results
my @col = AI::Logic::AnswerSet::getPred(\@mappedAS,1,"col");
# get a term of a predicate
my @term = AI::Logic::AnswerSet::getProjection(\@mappedAS,1,"col",2);
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
The module was originally published as "ASPerl", but suffered from
some problems with the namespace, which has now been changed. The module has
also been significantly rearranged according to the advice coming from the
community. Thank you all!
If you are using this module, please let us know: we are always
interested in end-users desires, and we wish to improve our library:
comments are truly welcome!
=head2 Methods
=head3 executeFromFileAndSave
This method allows you to execute DLV with an input file and save the output in another file.
AI::Logic::AnswerSet::executeFromFileAndSave("outprog.txt","dlvprog.txt","");
In this case the file "outprog.txt" consists of the result of the DLV invocation
with the file "dlvprog.txt".
No code is specified in the third argument of the method here; that argument can be used to add code
to an existing file or to a new one.
AI::Logic::AnswerSet::executeFromFileAndSave("outprog.txt","dlvprog.txt",
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
To call DLV without an input file, directly writing the ASP code from the terminal,
use this method, passing only the name of the output file.
AI::Logic::AnswerSet::executeAndSave("outprog.txt");
Press Ctrl+D to stop using the DLV terminal and execute the program.
=head3 singleExec
Use this method to execute DLV with several input files, optionally including
DLV options like "-nofacts", as in the sketch below.
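For example (the same call also appears in the C<getAS> section below):

    my @out = AI::Logic::AnswerSet::singleExec("3col.txt","nodes.txt","edges.txt","-nofacts");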
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
=head3 iterativeExec
This method allows you to run multiple DLV executions over several instances of the same problem.
Suppose you have a program that calculates the 3-colorability of a graph; in this case
one might have more than one graph, and each graph instance can be stored in a different file.
A Perl programmer might want to work with the results of all the graphs she has in her files,
so this function is useful for that purpose.
Use it like the following:
my @outputs = AI::Logic::AnswerSet::iterativeExec("3col.txt","nodes.txt","./instances");
In this case the nodes of each graph are the same, but not the edges.
Notice that in order to correctly use this method, the user must specify the path
to the instances (the edges, in this case).
The output of this function is a two-dimensional array; each element corresponds to the result
of a single DLV execution, exactly as in the case of the function C<singleExec()>.
=head3 selectOutput
This method allows you to get one of the results of C<iterativeExec>.
my @outputs = AI::Logic::AnswerSet::iterativeExec("3col.txt","nodes.txt","./instances");
my @out = AI::Logic::AnswerSet::selectOutput(\@outputs,0);
In this case the first output is selected.
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
=head3 getASFromFile
Parses the output of a DLV execution saved in a file and gathers the answer sets.
AI::Logic::AnswerSet::executeFromFileAndSave("outprog.txt","dlvprog.txt","");
my @result = AI::Logic::AnswerSet::getASFromFile("outprog.txt");
=head3 getAS
Parses the output of a DLV execution and gathers the answer sets.
my @out = AI::Logic::AnswerSet::singleExec("3col.txt","nodes.txt","edges.txt","-nofacts");
my @result = AI::Logic::AnswerSet::getAS(@out);
=head3 mapAS
Parses the new output in order to save and organize the results into a hashmap.
my @out = AI::Logic::AnswerSet::singleExec("3col.txt","nodes.txt","edges.txt","-nofacts");
my @result = AI::Logic::AnswerSet::getAS(@out);
my @mappedAS = AI::Logic::AnswerSet::mapAS(@result);
The user can set some constraints on the data to be saved in the hashmap, such as predicates, or answer sets, or both.
my @mappedAS = AI::Logic::AnswerSet::mapAS(@result,@predicates,@answerSets);
For instance, think about the 3-colorability problem: imagine
having the edges in the hashmap, and printing the edges contained in the third answer set
returned by DLV; this is an example of the print instruction, useful for understanding how
the hashmap works:
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
Suppose that we have the predicate "person" C<person(Name,Surename);> and
that we just want the surenames of all the instances of "person":
my @surenames = AI::Logic::AnswerSet::getProjection(\@mappedAS,3,"person",2);
The parameters are, respectively: hashmap, number of the answer set, name of the predicate,
position of the term.
=head3 statistics
This method returns an array of hashes with some stats for every predicate of every answer set,
namely the number of occurrences of the specified predicates in each answer set.
If a condition is specified (on the number of predicate occurrences), only the answer sets that satisfy
the condition are returned.
my @res = AI::Logic::AnswerSet::getAS(@output);
my @predicates = ("node","edge");
my @stats = AI::Logic::AnswerSet::statistics(\@res,\@predicates);
In this case the data structure returned is the same as the one returned by C<mapAS()>.
Hence, for each answer set (each element of the array of hashes), the hashmap will appear
like this:
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
}
This means that for a particular answer set we have 6 nodes and 9 edges.
In addition, this method can be used with some constraints:
my @res = AI::Logic::AnswerSet::getAS(@output);
my @predicates = ("node,"edge");
my @numbers = (4,15);
my @operators = (">","<");
my @stats = AI::Logic::AnswerSet::statistics(\@res,\@predicates,\@numbers,\@operators);
Now the function returns the answer sets that satisfy the conditions, i.e., an answer set
is returned only if the number of occurrences of the predicate "node" is higher than 4, and the number of occurrences of the predicate "edge" is less than 15.
=head3 getFacts
lib/AI/Logic/AnswerSet.pm view on Meta::CPAN
AI::Logic::AnswerSet::createNewFile($file,"b(3). b(4).");
=head3 addFacts
Quickly adds facts to a file. Imagine you have some data (representing facts)
stored inside an array; just use this method to put it in a file and give it a name.
AI::Logic::AnswerSet::addFacts("villagers",\@villagers,">","villagersFile.txt");
In the example above, "villagers" will be the name of the facts; C<@villagers> is the array
lib/AI/ML/Expr.pm view on Meta::CPAN
=head2 prediction
=cut
sub prediction {
my ($self, %opts) = @_;
my $t = exists $opts{threshold} ? $opts{threshold} : 0.50;
return _bless _predict_binary_classification($self->matrix_id, $t);
}
=head2 precision
lib/AI/MXNet/Gluon/Contrib.pm view on Meta::CPAN
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
package AI::MXNet::Gluon::Contrib;
use strict;
examples/image_classification.pl view on Meta::CPAN
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
use strict;
use warnings;
examples/image_classification.pl view on Meta::CPAN
GetOptions(
## my Pembroke Welsh Corgi Kyuubi, enjoying the Solar eclipse of August 21, 2017
'image=s' => \(my $image = 'http://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/'.
'gluon/dataset/kyuubi.jpg'),
'model=s' => \(my $model = 'resnet152_v2'),
'help' => sub { HelpMessage(0) },
) or HelpMessage(1);
## get a pretrained model (download parameters file if necessary)
my $net = get_model($model, pretrained => 1);
examples/image_classification.pl view on Meta::CPAN
# And then perform a center crop to obtain a 224-by-224 image.
# The following code uses the image processing functions provided
# in the AI::MXNet::Image module.
$image = mx->image->imread($image);
$image = mx->image->resize_short($image, $model =~ /inception/ ? 330 : 256);
($image) = mx->image->center_crop($image, [($model =~ /inception/ ? 299 : 224)x2]);
## CV that is used to read image is column major (as PDL)
$image = $image->transpose([2,0,1])->expand_dims(axis=>0);
## normalizing the image
my $rgb_mean = nd->array([0.485, 0.456, 0.406])->reshape([1,3,1,1]);
my $rgb_std = nd->array([0.229, 0.224, 0.225])->reshape([1,3,1,1]);
$image = ($image->astype('float32') / 255 - $rgb_mean) / $rgb_std;
# Now we can recognize the object in the image.
# We perform an additional softmax on the output to obtain probability scores.
# And then print the top-5 recognized objects.
my $prob = $net->($image)->softmax;
for my $idx (@{ $prob->topk(k=>5)->at(0) })
{
my $i = $idx->asscalar;
examples/calculator.pl view on Meta::CPAN
## creates a pdl with $n rows and two columns with random
## floats in the range between 0 and 1
my $data = PDL->random(2, $n);
## creates the pdl with $n rows and one column with labels
## labels are floats that are the sum, product, etc. of the
## two random values in each corresponding row of the data pdl
my $label = $func->($data->slice('0,:'), $data->slice('1,:'));
# partition into train/eval sets
my $edge = int($n / 8);
my $validation_data = $data->slice(":,0:@{[ $edge - 1 ]}");
my $validation_label = $label->slice(":,0:@{[ $edge - 1 ]}");
examples/calculator.pl view on Meta::CPAN
my $wide = mx->sym->Concat($data, $ln);
my $fc = mx->sym->FullyConnected(
$wide,
num_hidden => 1
);
return mx->sym->MAERegressionOutput(data => $fc, name => 'softmax');
}
sub learn_function {
my(%args) = @_;
my $func = $args{func};
examples/calculator.pl view on Meta::CPAN
$model->fit($train_iter,
eval_data => $eval_iter,
optimizer => 'adam',
optimizer_params => {
learning_rate => $args{lr}//0.01,
rescale_grad => 1/$batch_size,
lr_scheduler => AI::MXNet::FactorScheduler->new(
step => 100,
factor => 0.99
)
},
examples/calculator.pl view on Meta::CPAN
my $iter = mx->io->NDArrayIter(
batch_size => 1,
data => PDL->pdl([[ 0, 0 ]]),
label => PDL->pdl([[ 0 ]]),
);
$model->reshape(
data_shapes => $iter->provide_data,
label_shapes => $iter->provide_label,
);
# wrap a helper around making predictions
"inc"
]
},
"prereqs" : {
"build" : {
"requires" : {
"ExtUtils::MakeMaker" : "0"
}
},
"configure" : {
"requires" : {
"ExtUtils::MakeMaker" : "0"
}
},
"runtime" : {
"requires" : {
"Test::More" : "0"
}
}
},
"release_status" : "stable",
inc/Module/AutoInstall.pm view on Meta::CPAN
$VERSION = '1.03';
}
# special map on pre-defined feature sets
my %FeatureMap = (
'' => 'Core Features', # XXX: deprecated
'-core' => 'Core Features',
);
# various lexical flags
my ( @Missing, @Existing, %DisabledTests, $UnderCPAN, $HasCPANPLUS );
my ( $Config, $CheckOnly, $SkipInstall, $AcceptDefault, $TestOnly );
inc/Module/AutoInstall.pm view on Meta::CPAN
_update_to( $modules, @_ ) and return if $option eq 'version';
# sets CPAN configuration options
$Config = $modules if $option eq 'config';
# promote every features to core status
$core_all = ( $modules =~ /^all$/i ) and next
if $option eq 'core';
next unless $option eq 'core';
}
inc/Module/AutoInstall.pm view on Meta::CPAN
my $site = shift;
return (
( _load('Socket') and Socket::inet_aton($site) ) or _prompt(
qq(
*** Your host cannot resolve the domain name '$site', which
probably means the Internet connections are unavailable.
==> Should we try to install the required module(s) anyway?), 'n'
) =~ /^[Yy]/
);
}
inc/Module/AutoInstall.pm view on Meta::CPAN
ExtUtils::MakeMaker::WriteMakefile(%args);
print << "." unless $PostambleUsed;
*** WARNING: Makefile written with customized MY::postamble() without
including contents from Module::AutoInstall::postamble() --
auto installation features disabled. Please contact the author.
.
return 1;
}
lib/AI/MegaHAL.pm view on Meta::CPAN
Creates a new AI::MegaHAL object. The object constructor can optionally receive the following named parameters (a short constructor sketch follows the list):
=over 4
=item B<Path> - The path to MegaHAL's brain or training file (megahal.brn and megahal.trn respectively). If 'Path' is not specified, the current working directory is assumed.
=item B<Banner> - A flag which enables/disables the banner which is displayed when MegaHAL starts up. The default is to disable the banner.
=item B<Prompt> - A flag which enables/disables the prompt. This flag is only useful when MegaHAL is run interactively and is disabled by default.
=item B<Wrap> - A flag which enables/disables word wrapping of MegaHAL's responses when the lines exceed 80 characters in length. The default is to disable word wrapping.
=back
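A minimal constructor sketch using the parameters documented above (assuming the constructor is C<new()>; the path is made up):

    my $megahal = AI::MegaHAL->new(
        Path   => './brain',   # directory containing megahal.brn / megahal.trn
        Banner => 0,
        Prompt => 0,
        Wrap   => 0,
    );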
=head1 METHODS
lib/AI/MegaHAL.pm view on Meta::CPAN
=head2 learn
$megahal->learn($message);
Learns from $message without generating a response.
=head1 BUGS
None known at this time.
bin/from-folder.pl view on Meta::CPAN
p @{[keys %$files,reverse @ARGV,$storage]};
__DATA__
our $c = AI::MicroStructure::Context->new(@ARGV);
$c->retrieveIndex($PWD."/t/docs"); #"/home/santex/data-hub/data-hub" structures=0 text=1 json=1
my $style = {};
$style->{explicit} = 1;
lib/AI/NNEasy.pm view on Meta::CPAN
$def ||= {} ;
$conf = { nodes=>$conf } if !ref($conf) ;
foreach my $Key ( keys %$def ) { $$conf{$Key} = $$def{$Key} if !exists $$conf{$Key} ;}
my $layer_conf = {nodes=>1 , persistent_activation=>0 , decay=>0 , random_activation=>0 , threshold=>0 , activation_function=>'tanh' , random_weights=>1} ;
foreach my $Key ( keys %$layer_conf ) { $$layer_conf{$Key} = $$conf{$Key} if exists $$conf{$Key} ;}
return $layer_conf ;
}
sub reset_nn {
my $this = ref($_[0]) ? shift : undef ;
my $CLASS = ref($this) || __PACKAGE__ ;
$this->{NN} = AI::NNEasy::NN->new( @{ $this->{NN_ARGS} } ) ;
}
lib/AI/NNEasy.pm view on Meta::CPAN
if ( -s $file ) {
open (my $fh, $file) ;
my $dump = join '' , <$fh> ;
close ($fh) ;
my $restored = thaw($dump) ;
if ($restored) {
my $fl = $this->{FILE} ;
%$this = %$restored ;
$this->{FILE} = $fl if $fl ;
return 1 ;
}
}
return ;
lib/AI/NNEasy.pm view on Meta::CPAN
my $error_ok = $this->{ERROR_OK} ;
my $check_diff_count = 1000 ;
my ($learn_ok,$counter,$err,$err_last,$err_count,$err_static, $reset_count1 , $reset_count2 ,$print) ;
$err_static = 0 ;
while ( ($learn_ok < $ins_ok) && ($counter < $limit) ) {
($err , $learn_ok , $print) = $this->_learn_set_get_output_error(\@set , $error_ok , $ins_ok , $verbose) ;
lib/AI/NNEasy.pm view on Meta::CPAN
print "err_static = $err_static\n" if $verbose && $err_static ;
$err_last = $err ;
my $reseted ;
if ( $err_static >= $err_static_limit || ($err > 1 && $err_static >= $err_static_limit_positive) ) {
$err_static = 0 ;
$counter -= 2000 ;
$reseted = 1 ;
++$reset_count1 ;
if ( ( $reset_count1 + $reset_count2 ) > 2 ) {
$reset_count1 = $reset_count2 = 0 ;
print "** Reseting NN...\n" if $verbose ;
$this->reset_nn ;
}
else {
print "** Reseting weights due NULL diff...\n" if $verbose ;
$this->{NN}->init ;
}
lib/AI/NNEasy.pm view on Meta::CPAN
if ( !($counter % $check_diff_count) ) {
$err_count /= ($check_diff_count/100) ;
print "ERR COUNT> $err_count\n" if $verbose ;
if ( !$reseted && $err_count < 0.001 ) {
$err_static = 0 ;
$counter -= 1000 ;
++$reset_count2 ;
if ( ($reset_count1 + $reset_count2) > 2 ) {
$reset_count1 = $reset_count2 = 0 ;
print "** Reseting NN...\n" if $verbose ;
$this->reset_nn ;
}
else {
print "** Reseting weights due LOW diff...\n" if $verbose ;
$this->{NN}->init ;
}
lib/AI/NNEasy.pm view on Meta::CPAN
The file path to save the NN. Default: 'nneasy.nne'.
=item @OUTPUT_TYPES
An array of outputs that the NN can have, so the NN can find the nearest number in this
list to give you the right output.
=item ERROR_OK
The maximal error of the calculated output.
lib/AI/NNEasy.pm view on Meta::CPAN
Run an input and return the output calculated by the NN based on what the NN has already learned.
=head2 run_get_winner (@INPUT)
Same as I<run()>, but the output will be the nearest output value from the
I<@OUTPUT_TYPES> defined at I<new()>.
For example, an input I<[0,1]> that was learned with
the output I<[1]> will actually return something like 0.98324 as output and
not 1, since the error should never be 0. So, with I<run_get_winner()>
lib/AI/NNEasy.pm view on Meta::CPAN
sub bar($x , $y) {
$this->add($x , $y) ;
}
sub[C] int add( int x , int y ) {
int res = x + y ;
return res ;
}
}
Which made it possible to write the module in 2 days! ;-P
lib/AI/NNEasy.pm view on Meta::CPAN
The secret of a NN is the number of hidden layers and of nodes/neurons in each layer.
Basically, the best way to define the hidden layers is 1 layer of (INPUT_NODES+OUTPUT_NODES) nodes.
So, a network with 2 input nodes and 1 output node should have 3 nodes in the hidden layer.
This definition exists because the number of inputs defines the maximal variability of
the inputs (N**2 for boolean inputs), and the output defines whether that variability is reduced by some logic restriction, as
in the XOR example, where we have 2 inputs and 1 output, so the hidden layer has 3 nodes. And as we can see in the
logic we have 3 groups of inputs:
0 0 => 0 # false
0 1 => 1 # or
examples/bp.pl view on Meta::CPAN
print "epoch = ".$j." RMS Error = ".$RMSerror."\n";
}
#training has finished
#display the results
displayResults();
}
#============================================================
examples/bp.pl view on Meta::CPAN
{
print "initialising data\n";
# the data here is the XOR data
# it has been rescaled to the range
# [-1, 1]
# an extra input valued 1 is also added
# to act as the bias
$trainInputs[0][0] = 1;
lib/AI/NNVMCAPI.pm view on Meta::CPAN
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
package AI::NNVMCAPI;
use strict;
lib/AI/NaiveBayes.pm view on Meta::CPAN
my $m = $self->model;
# Note that we're using the log(prob) here. That's why we add instead of multiply.
my %scores = %{$m->{prior_probs}};
my %features;
while (my ($feature, $value) = each %$newattrs) {
next unless exists $m->{attributes}{$feature}; # Ignore totally unseen features
while (my ($label, $attributes) = each %{$m->{probs}}) {
my $score = ($attributes->{$feature} || $m->{smoother}{$label})*$value; # P($feature|$label)**$value
$scores{$label} += $score;
$features{$feature}{$label} = $score;
}
}
rescale(\%scores);
return AI::NaiveBayes::Classification->new( label_sums => \%scores, features => \%features );
}
sub rescale {
my ($scores) = @_;
# Scale everything back to a reasonable area in logspace (near zero), un-loggify, and normalize
my $total = 0;
my $max = max(values %$scores);
foreach (values %$scores) {
$_ = exp($_ - $max);
$total += $_**2;
}
$total = sqrt($total);
foreach (values %$scores) {
$_ /= $total;
}
}
lib/AI/NaiveBayes.pm view on Meta::CPAN
},
labels => ['farming']
},
{
attributes => {
vampires => 1, cannot => 1, see => 1, their => 1,
images => 1, mirrors => 1
},
labels => ['vampire']
},
);
# Classify a feature vector
my $result = $classifier->classify({bar => 3, blurp => 2});
# $result is now a AI::NaiveBayes::Classification object
my $best_category = $result->best_category;
=head1 DESCRIPTION
This module implements the classic "Naive Bayes" machine learning
algorithm. This is a low level class that accepts only pre-computed feature-vectors
lib/AI/NaiveBayes.pm view on Meta::CPAN
The classifier object is immutable.
It is a well-studied probabilistic algorithm often used in
automatic text categorization. Compared to other algorithms (kNN,
SVM, Decision Trees), it's pretty fast and reasonably competitive in
the quality of its results.
A paper by Fabrizio Sebastiani provides a really good introduction to
text categorization:
L<http://faure.iei.pi.cnr.it/~fabrizio/Publications/ACMCS02.pdf>
lib/AI/NaiveBayes.pm view on Meta::CPAN
Classifies a feature-vector of the form:
{ feature1 => weight1, feature2 => weight2, ... }
The result is an C<AI::NaiveBayes::Classification> object; a short usage sketch follows this list.
=item rescale
Internal
=back
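A short usage sketch of C<classify()> (the feature names are made up; the classifier is constructed as in the SYNOPSIS above):

    my $result = $classifier->classify({ mirrors => 2, images => 1 });
    print $result->best_category, "\n";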
lib/AI/NaiveBayes.pm view on Meta::CPAN
We have applied Bayes' Theorem because C<P(cat | words)> is a difficult
quantity to compute directly, but C<P(words | cat)> and C<P(cat)> are accessible
(see below).
The greater the expression above, the greater the probability that the given
document belongs to the given category. So we want to find the maximum
value. We write this as
                              P(words | cat) P(cat)
    Best category =  ArgMax   ----------------------
                  cat in cats        P(words)
lib/AI/NaiveBayes.pm view on Meta::CPAN
Best category = ArgMax P(words | cat) P(cat)
cat in cats
Finally, we note that if C<w1, w2, ... wn> are the words in the document,
then this expression is equivalent to:
Best category = ArgMax P(w1|cat)*P(w2|cat)*...*P(wn|cat)*P(cat)
cat in cats
That's the formula I use in my document categorization code. The last
step is the only non-rigorous one in the derivation, and this is the
"naive" part of the Naive Bayes technique. It assumes that the
probability of each word appearing in a document is unaffected by the
presence or absence of each other word in the document. We assume
this even though we know this isn't true: for example, the word
"iodized" is far more likely to appear in a document that contains the
word "salt" than it is to appear in a document that contains the word
"subroutine". Luckily, as it turns out, making this assumption even
when it isn't true may have little effect on our results, as the
following paper by Pedro Domingos argues:
L<"http://www.cs.washington.edu/homes/pedrod/mlj97.ps.gz">
=head1 SEE ALSO
NaiveBayes1.pm view on Meta::CPAN
sub predict {
my ($self, %params) = @_;
my $newattrs = $params{attributes} or die "Missing 'attributes' parameter for predict()";
my $m = $self->{model}; # For convenience
my %scores;
my @labels = @{ $self->{labels} };
$scores{$_} = $m->{labelprob}{$_} foreach (@labels);
foreach my $att (keys(%{ $newattrs })) {
if (!defined($self->{attribute_type}{$att})) { die "Unknown attribute: `$att'" }
next if $self->{attribute_type}{$att} eq 'real';
die unless exists($self->{stat_attributes}{$att});
my $attval = $newattrs->{$att};
NaiveBayes1.pm view on Meta::CPAN
exists($self->{smoothing}{$att});
foreach my $label (@labels) {
if (exists($m->{condprob}{$att}{$attval}) and
exists($m->{condprob}{$att}{$attval}{$label}) and
$m->{condprob}{$att}{$attval}{$label} > 0 ) {
$scores{$label} *=
$m->{condprob}{$att}{$attval}{$label};
} elsif (exists($self->{smoothing}{$att})) {
$scores{$label} *=
$m->{condprob}{$att}{'*'}{$label};
} else { $scores{$label} = 0 }
}
}
foreach my $att (keys %{$newattrs}){
next unless $self->{attribute_type}{$att} eq 'real';
my $sum=0; my %nscores;
foreach my $label (@labels) {
die unless exists $m->{real_stat}{$att}{$label}{mean};
$nscores{$label} =
0.398942280401433 / $m->{real_stat}{$att}{$label}{stddev}*
exp( -0.5 *
( ( $newattrs->{$att} -
$m->{real_stat}{$att}{$label}{mean})
/ $m->{real_stat}{$att}{$label}{stddev}
) ** 2
);
$sum += $nscores{$label};
}
if ($sum==0) { print STDERR "Ignoring all Gaussian probabilities: all=0!\n" }
else {
foreach my $label (@labels) { $scores{$label} *= $nscores{$label} }
}
}
my $sumPx = 0.0;
$sumPx += $scores{$_} foreach (keys(%scores));
$scores{$_} /= $sumPx foreach (keys(%scores));
return \%scores;
}
sub print_model {
my $self = shift;
my $withcounts = '';
NaiveBayes1.pm view on Meta::CPAN
$nb->train;
print "Model:\n" . $nb->print_model;
# Find results for unseen instances
my $result = $nb->predict
(attributes => {model=>'T', place=>'N'});
foreach my $k (keys(%{ $result })) {
print "for label $k P = " . $result->{$k} . "\n";
}
# export the model into a string
my $string = $nb->export_to_YAML();
NaiveBayes1.pm view on Meta::CPAN
Constructor. Creates a new C<AI::NaiveBayes1> object and returns it.
=item import_from_YAML($string)
Constructor. Creates a new C<AI::NaiveBayes1> object from a string where it is
represented in C<YAML>. Requires YAML module.
=item import_from_YAML_file($file_name)
Constructor. Creates a new C<AI::NaiveBayes1> object from a file where it is
represented in C<YAML>. Requires YAML module.
=back
=head2 Non-Constructor Methods
NaiveBayes1.pm view on Meta::CPAN
Delete attributes after adding instances.
=item set_real(list_of_attributes)
Declares a list of attributes to be real-valued. During training,
their conditional probabilities will be modeled with Gaussian (normal)
distributions.
=item C<add_instance(attributes=E<gt>HASH,label=E<gt>STRING|ARRAY)>
NaiveBayes1.pm view on Meta::CPAN
Adds a number of identical instances to the categorizer.
=item export_to_YAML()
Returns a C<YAML> string representation of an C<AI::NaiveBayes1>
object. Requires YAML module.
=item C<export_to_YAML_file( $file_name )>
Writes a C<YAML> string representation of an C<AI::NaiveBayes1>
object to a file. Requires YAML module.
=item C<print_model( OPTIONAL 'with counts' )>
Returns a human-friendly string representation of the model.
The model is supposed to be trained before calling this method.
One argument 'with counts' can be supplied, in which case explanatory
expressions with counts are printed as well.
=item train()
Calculates the probabilities that will be necessary for categorization
using the C<predict()> method.
NaiveBayes1.pm view on Meta::CPAN
=item C<predict( attributes =E<gt> HASH )>
Use this method to predict the label of an unknown instance. The
attributes should be of the same format as you passed to
C<add_instance()>. C<predict()> returns a hash reference whose keys
are the names of labels, and whose values are corresponding
probabilities.
=item C<labels>
Returns a list of all the labels the object knows about (in no
NaiveBayes1.pm view on Meta::CPAN
                        1                 (a-m)^2
    P(A=a|C=c) = --------------- * exp( - ------- )
                   sqrt(2*Pi)*s            2*s^2
this boils down to the following lines of code:
$scores{$label} *=
0.398942280401433 / $m->{real_stat}{$att}{$label}{stddev}*
exp( -0.5 *
( ( $newattrs->{$att} -
$m->{real_stat}{$att}{$label}{mean})
/ $m->{real_stat}{$att}{$label}{stddev}
NaiveBayes1.pm view on Meta::CPAN
Copyright 2003-21 Vlado Keselj L<https://web.cs.dal.ca/~vlado>.
In 2004 Yung-chung Lin provided an implementation of the Gaussian model for
continuous variables.
This script is provided "as is" without expressed or implied warranty.
This is free software; you can redistribute it and/or modify it under
the same terms as Perl itself.
The module is available on CPAN (L<https://metacpan.org/author/VLADO>), and
L<https://web.cs.dal.ca/~vlado/srcperl/>. The latter site is
examples/digits/digits.pl view on Meta::CPAN
$w = $w->squeeze;
my $min = $w->minimum;
$w -= $min;
my $max = $w->maximum;
$w /= $max;
$w = $w->reshape(28,28);
imag2d $w;
}
sub sigmoid{
my $foo = shift;
return 1/(1+E**-$foo);
BackProp.pm view on Meta::CPAN
error=>$error); # The maximum (%) error allowed
print $str if($AI::NeuralNet::BackProp::DEBUG);
}
my $res;
$data->[$row] = $self->crunch($data->[$row]) if($data->[$row] == 0);
if ($p) {
$res=pdiff($data->[$row],$self->run($data->[$row-1]));
} else {
$res=$data->[$row]->[0]-$self->run($data->[$row-1])->[0];
}
return $res;
}
# This sub will take an array ref of a data set, which it expects in this format:
# my @data_set = ( [ ...inputs... ], [ ...outputs ... ],
# ... rows ...
BackProp.pm view on Meta::CPAN
}
AI::NeuralNet::BackProp::out1 "\n";
}
# These next two loops connect the _run and _map packages (the IO interface) to
# the start and end 'layers', respectively. These are how we insert data into
# the network and how we get data from the network. The _run and _map packages
# are connected to the neurons so that the neurons think that the IO packages are
# just another neuron, sending data on. But the IO packs. are special packages designed
# with the same methods as neurons, just meant for specific IO purposes. You will
# never need to call any of the IO packs. directly. Instead, they are called whenever
BackProp.pm view on Meta::CPAN
$self->{RUN}->run($map);
$self->{LAST_TIME}=timestr(timediff(new Benchmark, $t0));
return $self->map();
}
# This automatically uncrunches a response after running it
sub run_uc {
$_[0]->uncrunch(run(@_));
}
# Returns benchmark and loop's ran or learned
BackProp.pm view on Meta::CPAN
my $self = shift;
$self->{MAP}->map();
}
# Forces network to learn pattern passed and give desired
# results. See usage in POD.
sub learn {
my $self = shift;
my $omap = shift;
my $res = shift;
my %args = @_;
my $inc = $args{inc} || 0.20;
my $max = $args{max} || 1024;
my $_mx = intr($max/10);
my $_mi = 0;
BackProp.pm view on Meta::CPAN
my ($t0,$it0);
no strict 'refs';
# Take care of crunching strings passed
$omap = $self->crunch($omap) if($omap == 0);
$res = $self->crunch($res) if($res == 0);
# Fill in empty spaces at end of results matrix with a 0
if($#{$res}<$out) {
for my $x ($#{$res}+1..$out) {
#$res->[$x] = 0;
}
}
# Debug
AI::NeuralNet::BackProp::out1 "Num output neurons: $out, Input neurons: $size, Division: $divide\n";
BackProp.pm view on Meta::CPAN
my $cdiff = 0;
$diff = 100;
$error = ($error>-1)?$error:-1;
# $flag only goes high when all neurons in output map compare exactly with
# desired result map or $max loops is reached
#
while(!$flag && ($max ? $loop<$max : 1)) {
$it0 = new Benchmark;
# Run the map
BackProp.pm view on Meta::CPAN
# Retrieve last mapping and initialize a few variables.
$map = $self->map();
$y = $size-$div;
$flag = 1;
# Compare the result map we just ran with the desired result map.
$diff = pdiff($map,$res);
# This adjusts the increment multiplier to decrease as the loops increase
if($_mi > $_mx) {
$dinc *= 0.1;
$_mi = 0;
BackProp.pm view on Meta::CPAN
# We decrement the learning increment to prevent infinite learning loops.
# In old versions of this module, if you used too high an initial input
# $inc, then the network would keep jumping back and forth over your desired
# results because the increment was too high... it would try to push close to
# the desired result, only to fly too far over the other edge, thereby trying
# to come back and overshooting again.
# This simply adjusts the learning gradient proportionally to the amount of
# convergence left as the difference decreases.
$inc -= ($dinc*$diff);
$inc = 0.0000000001 if($inc < 0.0000000001);
BackProp.pm view on Meta::CPAN
last;
}
# Debugging
AI::NeuralNet::BackProp::out4 "Difference: $diff\%\t Increment: $inc\tMax Error: $error\%\n";
AI::NeuralNet::BackProp::out1 "\n\nMapping results from $map:\n";
# This loop compares each element of the output map with the desired result map.
# If they don't match exactly, we call weight() on the offending output neuron
# and tell it what it should be aiming for, and then the offending neuron will
# try to adjust the weights of its synapses to get closer to the desired output.
# See comments in the weight() method of AI::NeuralNet::BackProp for how this works.
my $l=$self->{NET};
for my $i (0..$out-1) {
$a = $map->[$i];
$b = $res->[$i];
AI::NeuralNet::BackProp::out1 "\nmap[$i] is $a\n";
AI::NeuralNet::BackProp::out1 "res[$i] is $b\n";
for my $j (0..$divide-1) {
if($a!=$b) {
AI::NeuralNet::BackProp::out1 "Punishing $self->{NET}->[($i*$divide)+$j] at ",(($i*$divide)+$j)," ($i with $a) by $inc.\n";
$l->[$y+($i*$divide)+$j]->weight($inc,$b) if($l->[$y+($i*$divide)+$j]);
BackProp.pm view on Meta::CPAN
AI::NeuralNet::BackProp::out1 "\n\n";
# Benchmark this loop.
AI::NeuralNet::BackProp::out4 "Learning itetration $loop complete, timed at".timestr(timediff(new Benchmark, $it0),'noc','5.3f')."\n";
# Map the results from this loop.
AI::NeuralNet::BackProp::out4 "Map: \n";
AI::NeuralNet::BackProp::join_cols($map,$self->{col_width}) if ($AI::NeuralNet::BackProp::DEBUG);
AI::NeuralNet::BackProp::out4 "Res: \n";
AI::NeuralNet::BackProp::join_cols($res,$self->{col_width}) if ($AI::NeuralNet::BackProp::DEBUG);
}
# Compile benchmarking info for entire learn() process and return it, save it, and
# display it.
$self->{LAST_TIME}="$loop loops and ".timestr(timediff(new Benchmark, $t0));
BackProp.pm view on Meta::CPAN
$self->{RMAP}->{$sid-1} = $self->{PARENT}->{_tmp_synapse};
return $sid-1;
}
# Here is the real meat of this package.
# run() does one thing: It fires values
# into the first layer of the network.
sub run {
my $self = shift;
my $map = shift;
my $x = 0;
BackProp.pm view on Meta::CPAN
return $sid-1;
}
# This acts just like a regular neuron by receiving
# values from input synapses. Yet, unlike a regular
# neuron, it doesn't weight the values, just stores
# them to be retrieved by a call to map().
sub input {
no strict 'refs';
my $self = shift;
my $sid = shift;
BackProp.pm view on Meta::CPAN
my $self = shift;
my $color = shift;
return $self->{parent}->intr(($self->{palette}->[$color]->{red}+$self->{palette}->[$color]->{green}+$self->{palette}->[$color]->{blue})/3);
}
# Loads and decompresses a PCX-format 320x200, 8-bit image file and returns
# two arrays: the first is a 64000-element array in which each element is a palette
# index, and the second is a 255-element array in which each element is a hash
# ref with the keys 'red', 'green', and 'blue', each holding the respective color
# component for that color index in the palette.
sub load_pcx {
shift if(substr($_[0],0,4) eq 'AI::');
# open the file
BackProp.pm view on Meta::CPAN
my $data;
# Read header
read(FILE,$tmp,128);
# load the data and decompress into buffer
my $count=0;
while($count<320*200) {
# get the first piece of data
read(FILE,$data,1);
BackProp.pm view on Meta::CPAN
# Learn the data set
$net->learn_set(\@phrases);
# Run a test phrase through the network
my $test_phrase = $net->crunch("I love neural networking!");
my $result = $net->run($test_phrase);
# Get this, it prints "Jay Leno is networking!" ... LOL!
print $net->uncrunch($result),"\n";
=head1 UPDATES
BackProp.pm view on Meta::CPAN
use AI::NeuralNet::BackProp;
my $net = new AI::NeuralNet::BackProp(2,2);
my @map = (0,1);
my $result = $net->run(\@map);
Now, this call would probably not give what you want, because
the network hasn't "learned" any patterns yet. But this
illustrates the call. Run now allows strings to be used as
input. See run() for more information.
Run returns a reference with $size elements (Remember $size? $size
is what you passed as the second argument to the network
constructor.) This array contains the results of the mapping. If
you ran the example exactly as shown above, $result would probably
contain (1,1) as its elements.
To make the network learn a new pattern, you simply call the learn
method with a sample input and the desired result, both array
references of $size length. Example:
use AI::NeuralNet::BackProp;
my $net = new AI::NeuralNet::BackProp(2,2);
my @map = (0,1);
my @res = (1,0);
$net->learn(\@map,\@res);
my $result = $net->run(\@map);
Now $result will contain (1,0), effectively flipping the input pattern
around. Obviously, the larger $size is, the longer it will take
to learn a pattern. Learn() returns a string in the form of
Learning took X loops and X wallclock seconds (X.XXX usr + X.XXX sys = X.XXX CPU).
With the X's replaced by time or loop values for that loop call. So,
to view the learning stats for every learn call, you can just:
print $net->learn(\@map,\@res);
If you call "$net->debug(4)" with $net being the
reference returned by the new() constructor, you will get benchmarking
information for the learn function, as well as plenty of other information output.
BackProp.pm view on Meta::CPAN
Before you can really do anything useful with your new neural network
object, you need to teach it some patterns. See the learn() method, below.
=item $net->learn($input_map_ref, $desired_result_ref [, options ]);
This will 'teach' a network to associate a new input map with a desired result.
It will return a string containing benchmarking information. You can retrieve the
pattern index that the network stored the new input map in after learn() is complete
with the pattern() method, below.
UPDATED: You can now specify strings as inputs and outputs to learn, and they will be crunched
BackProp.pm view on Meta::CPAN
It defaults to 1024. Set it to 0 if you never want the loop to quit before
the pattern is perfectly learned.
$maximum_allowable_percentage_of_error is the maximum allowable error to have. If
this is set, then learn() will return when the percentage difference between the
actual results and desired results falls below $maximum_allowable_percentage_of_error.
If you do not include 'error', or $maximum_allowable_percentage_of_error is set to -1,
then learn() will not return until it gets an exact match for the desired result OR it
reaches $maximum_iterations.
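For example, a call combining these options might look like the following (the
specific values are illustrative only):

    # stop after 4096 iterations, or sooner if the error drops below 5%
    $net->learn(\@map, \@res,
                inc   => 0.1,      # learning increment
                max   => 4096,     # maximum iterations
                error => 5);       # maximum allowable % error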
=item $net->learn_set(\@set, [ options ]);
BackProp.pm view on Meta::CPAN
flag => $flag
pattern => $row
If "flag" is set to some TRUE value, as in "flag => 1" in the hash of options, or if the option "flag"
is not set, then it will return a percentage representing the amount of forgetfulness. Otherwise,
learn_set() will return an integer specifying the amount of forgetfulness when all the patterns
are learned.
If "pattern" is set, then learn_set() will use that pattern in the data set to measure forgetfulness by.
If "pattern" is omitted, it defaults to the first pattern in the set. Example:
BackProp.pm view on Meta::CPAN
Now why the heck would anyone want to measure forgetfulness, you ask? Maybe you wonder how I
even measure that. Well, it is not a vital value that you have to know. I just put in a
"forgetfulness measure" one day because I thought it would be neat to know.
How the module measures forgetfulness is this: First, it learns all the patterns in the set provided,
then it will run the very first pattern (or whatever pattern is specified by the "row" option)
in the set after it has finished learning. It will compare the run() output with the desired output
as specified in the dataset. In a perfect world, the two should match exactly. What we measure is
how much that they don't match, thus the amount of forgetfulness the network has.
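Conceptually, the measurement amounts to something like this sketch (not the
module's literal internals; the variable names are illustrative):

    # after learning the whole set, re-run the reference pattern and take the
    # percentage difference between its output and the desired output;
    # pdiff() is the module's internal percentage-difference helper
    my $out           = $net->run($set->[$pattern * 2]);
    my $forgetfulness = pdiff($out, $set->[$pattern * 2 + 1]);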
BackProp.pm view on Meta::CPAN
the whole data set again after calling range() on a network.
Subsequent calls to range() invalidate any previous calls to range().
NOTE: It is recommended that you call range() before you call learn(), or else you will get unexpected
results from any run() call after range().
=item $net->range($bottom..$top);
This is a common form often used in a C<for my $x (0..20)> type of for() constructor. It works
BackProp.pm view on Meta::CPAN
Level 3 ($level = 3) : JUST prints weight mapping as weights change.
Level 4 ($level = 4) : JUST prints the benchmark info for EACH learn loop iteration, not just
learning as a whole. Also prints the percentage difference for each loop between current network
results and desired results, as well as the learning gradient ('increment').
Level 4 is useful for seeing if you need to give a smaller learning increment to learn().
I used level 4 debugging quite often in creating the letters.pl example script and the small_1.pl
example script.
BackProp.pm view on Meta::CPAN
=item $net->load($filename);
This will load from disk any network saved by save() and completely restore the internal
state at the point save() was called.
BackProp.pm view on Meta::CPAN
percent string.
=item $net->p($a,$b);
Returns a floating point number which represents $a as a percentage of $b.
=item $net->intr($float);
BackProp.pm view on Meta::CPAN
UPDATE: Now you can use a variable instead of using qw(). Strings will be split internally.
Do not use qw() to pass strings to crunch.
This splits a string passed with /[\s\t]/ into an array ref containing unique indexes
to the words. The words are stored in an internal array and preserved across load() and save()
calls. This is designed to be used to generate unique maps suitable for passing to learn() and
run() directly. It returns an array ref.
The words are not duplicated internally. For example:
BackProp.pm view on Meta::CPAN
$net->learn($net->crunch("I love apples."), $net->crunch("Good, Healthy Food."));
$net->learn($net->crunch("I love pop."), $net->crunch("That's Junk Food!"));
$net->learn($net->crunch("I love oranges."),$net->crunch("Good, Healthy Food."));
}
my $response = $net->run($net->crunch("I love corn."));
print $net->uncrunch($response),"\n";
On my system, this responds with, "Good, Healthy Food." If you try to run crunch() with
"I love pop.", though, you will probably get "Food! apples. apples." (At least it returns
that on my system.) As you can see, the associations are not yet perfect, but it can make
for some interesting demos!
=item $net->crunched($word);
BackProp.pm view on Meta::CPAN
=item $net->load_pcx($filename);
Oh, here's a treat... this routine will load a PCX-format file (yah, I know ... ancient format ... but
it is the only one I could find specs for to write it in Perl. If anyone can get specs for
any other formats, or could write a loader for them, I would be very grateful!) Anyways, it loads a PCX-format
file that is exactly 320x200 with 8 bits per pixel, in pure Perl. It returns a blessed reference to
an AI::NeuralNet::BackProp::PCX object, which supports the following routines/members. See the example
files ex_pcxl.pl and ex_pcx.pl in the ./examples/ directory.
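A hypothetical usage sketch (the file name is made up):

    my $pcx = $net->load_pcx("digits_scan.pcx");   # 320x200, 8-bit PCX file
    my @rgb = $pcx->rgb(0);                        # palette entry 0 as (red, green, blue)
    my $avg = $pcx->avg(0);                        # its average intensity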
BackProp.pm view on Meta::CPAN
=item $pcx->{image}
This is an array reference to the entire image. The array contains exactly 64000 elements; each
element contains a number corresponding to an index in the palette array, detailed below.
=item $pcx->{palette}
BackProp.pm view on Meta::CPAN
$pcx->{palette}->[0]->{red};
$pcx->{palette}->[0]->{green};
$pcx->{palette}->[0]->{blue};
Each is in the range of 0..63, corresponding to their named color component.
=item $pcx->get_block($array_ref);
BackProp.pm view on Meta::CPAN
=item $pcx->rgb($index);
Returns a 3-element array (not array ref) with each element corresponding to the red, green, or
blue color components, respectively.
=item $pcx->avg($index);
BackProp.pm view on Meta::CPAN
You can now use 0 values in any input maps. This is a good improvement over versions 0.40
and 0.42, where no 0s were allowed because the network would never completely finish learning
with a 0 in the input.
Yet with the allowance of 0s, it requires one of two factors to learn correctly. Either you
must enable randomness with $net->random(0.0001) (Any values work [other than 0], see random() ),
or you must set an error-minimum with the 'error => 5' option (you can use some other error value
as well).
When randomness is enabled (that is, when you call random() with a value other than 0), it interjects
a bit of randomness into the output of every neuron in the network, except for the input and output
neurons. The randomness is interjected with rand()*$rand, where $rand is the value that was
passed to random() call. This assures the network that it will never have a pure 0 internally. It is
bad to have a pure 0 internally because the weights cannot change a 0 when multiplied by a 0, the
product stays a 0. Yet when a weight is multiplied by 0.00001, eventually with enough weight, it will
be able to learn. With a 0 value instead of 0.00001 or whatever, then it would never be able
to add enough weight to get anything other than a 0.
BackProp.pm view on Meta::CPAN
=head1 AUTHOR
Josiah Bryan F<E<lt>jdb@wcoil.comE<gt>>
Copyright (c) 2000 Josiah Bryan. All rights reserved. This program is free software;
you can redistribute it and/or modify it under the same terms as Perl itself.
The C<AI::NeuralNet::BackProp> and related modules are free software. THEY COME WITHOUT WARRANTY OF ANY KIND.
BackProp.pm view on Meta::CPAN
A mailing list has been set up for AI::NeuralNet::BackProp for discussion of AI and
neural net related topics as they pertain to AI::NeuralNet::BackProp. I will also
announce in the group each time a new release of AI::NeuralNet::BackProp is available.
The list address is at: ai-neuralnet-backprop@egroups.com
To subscribe, send a blank email to: ai-neuralnet-backprop-subscribe@egroups.com
=cut
examples/eigenvector_initialization.pl view on Meta::CPAN
}
return undef;
}
for (@es_idx) { # from the highest values downwards, take the index
push @training_vectors, [ list $E->dice($_) ] ; # get the corresponding vector
}
}
$nn->initialize (@training_vectors[0..0]); # take only the biggest ones (the eigenvalues are big, actually)
#warn $nn->as_string;
lib/AI/NeuralNet/Hopfield.pm view on Meta::CPAN
sub convert_array() {
my $rows = shift;
my $cols = shift;
my @pattern = @_;
my $result = Math::SparseMatrix->new(1, $cols);
for (my $i = 0; $i < ($#pattern + 1); $i++) {
if ($pattern[$i] =~ m/true/ig) {
$result->set(1, ($i +1 ), 1);
} else {
$result->set(1, ($i + 1), -1);
}
}
return $result;
}
sub transpose() {
my $matrix = shift;
my $rows = $matrix->{_rows};
lib/AI/NeuralNet/Hopfield.pm view on Meta::CPAN
my $a_cols = $matrix_a->{_cols};
my $b_rows = $matrix_b->{_rows};
my $b_cols = $matrix_b->{_cols};
my $result = Math::SparseMatrix->new($a_rows, $b_cols);
if ($matrix_a->{_cols} != $matrix_b->{_rows}) {
die "To use ordinary matrix multiplication the number of columns on the first matrix must mat the number of rows on the second";
}
for (my $result_row = 1; $result_row <= $a_rows; $result_row++) {
for(my $result_col = 1; $result_col <= $b_cols; $result_col++) {
my $value = 0;
for (my $i = 1; $i <= $a_cols; $i++) {
$value += ($matrix_a->get($result_row, $i)) * ($matrix_b->get($i, $result_col));
}
$result->set($result_row, $result_col, $value);
}
}
return $result;
}
sub identity() {
my $size = shift;
if ($size < 1) {
die "Identity matrix must be at least of size 1.";
}
my $result = Math::SparseMatrix->new ($size, $size);
for (my $i = 1; $i <= $size; $i++) {
$result->set($i, $i, 1);
}
return $result;
}
sub subtract() {
my $matrix_a = shift;
my $matrix_b = shift;
lib/AI/NeuralNet/Hopfield.pm view on Meta::CPAN
if ($a_cols != $b_cols) {
die "To subtract the matrixes they must have the same number of rows and columns. Matrix a has ";
}
my $result = Math::SparseMatrix->new($a_rows, $a_cols);
for (my $result_row = 1; $result_row <= $a_rows; $result_row++) {
for (my $result_col = 1; $result_col <= $a_cols; $result_col++) {
my $value = ( $matrix_a->get($result_row, $result_col) ) - ( $matrix_b->get($result_row, $result_col));
if ($value == 0) {
$value += 2;
}
$result->set($result_row, $result_col, $value);
}
}
return $result;
}
sub add() {
#weight matrix.
my $matrix_a = shift;
lib/AI/NeuralNet/Hopfield.pm view on Meta::CPAN
if ($a_cols != $b_cols) {
die "To add the matrixes they must have the same number of rows and columns.";
}
my $result = Math::SparseMatrix->new($a_rows, $a_cols);
for (my $result_row = 1; $result_row <= $a_rows; $result_row++) {
for (my $result_col = 1; $result_col <= $a_cols; $result_col++) {
my $value = $matrix_b->get($result_row, $result_col);
$result->set($result_row, $result_col, $matrix_a->get($result_row, $result_col) + $value )
}
}
return $result;
}
sub dot_product() {
my $matrix_a = shift;
my $matrix_b = shift;
lib/AI/NeuralNet/Hopfield.pm view on Meta::CPAN
if ($#array_a != $#array_b) {
die "To take the dot product, both matrixes must be of the same length.";
}
my $result = 0;
my $length = $#array_a + 1;
for (my $i = 0; $i < $length; $i++) {
$result += $array_a[$i] * $array_b[$i];
}
return $result;
}
sub packed_array() {
my $matrix = shift;
my @result = ();
for (my $r = 1; $r <= $matrix->{_rows}; $r++) {
for (my $c = 1; $c <= $matrix->{_cols}; $c++) {
push(@result, $matrix->get($r, $c));
}
}
return @result;
}
sub get_col() {
my $self = shift;
my $col = shift;
lib/AI/NeuralNet/Hopfield.pm view on Meta::CPAN
=cut
=head2 Evaluation
The evaluation method compares the new input with the information stored in the matrix memory.
The output is a new array with the boolean evaluation of each neuron.
my @input_2 = qw(true true true false);
my @result = $hop->evaluate(@input_2);
=cut
=head1 AUTHOR
lib/AI/NeuralNet/Hopfield.pm view on Meta::CPAN
=head1 BUGS
Please report any bugs or feature requests to C<bug-ai-neuralnet-hopfield at rt.cpan.org>, or through
the web interface at L<http://rt.cpan.org/NoAuth/ReportBug.html?Queue=AI-NeuralNet-Hopfield>. I will be notified, and then you'll
automatically be notified of progress on your bug as I make changes.
=head1 SUPPORT
You can find documentation for this module with the perldoc command.
lib/AI/NeuralNet/Hopfield.pm view on Meta::CPAN
This license does not grant you the right to use any trademark, service
mark, tradename, or logo of the Copyright Holder.
This license includes the non-exclusive, worldwide, free-of-charge
patent license to make, have made, use, offer to sell, sell, import and
otherwise transfer the Package with respect to any patent claims
licensable by the Copyright Holder that are necessarily infringed by the
Package. If you institute patent litigation (including a cross-claim or
counterclaim) against any party alleging that the Package constitutes
direct or contributory patent infringement, then this Artistic License
to you shall terminate on the date that such litigation is filed.
view release on metacpan or search on metacpan
make test
make install
DEPENDENCIES
This module requires these other modules and libraries:
AI::NeuralNet::Kohonen
Tk
Tk::Canvas
Tk::Label
lib/AI/NeuralNet/Kohonen/Visual.pm view on Meta::CPAN
$self->{_mw} = MainWindow->new(
-width => $w + 20,
-height => $h + 20,
);
$self->{_mw}->fontCreate(qw/TAG -family verdana -size 8 -weight bold/);
$self->{_mw}->resizable( 0, 0);
$self->{_quit_flag} = 0;
$self->{_mw}->protocol('WM_DELETE_WINDOW' => sub {$self->{_quit_flag}=1});
$self->{_canvas} = $self->{_mw}->Canvas(
-width => $w,
-height => $h,
lib/AI/NeuralNet/Kohonen/Visual.pm view on Meta::CPAN
=head1 METHOD plot_map
Plots the map on the existing canvas. Arguments are supplied
in a hash with the following keys as options:
The values of C<bmu_x> and C<bmu_y> represent the I<x> and I<y>
co-ordinates of the unit to highlight, using the value in
C<hicol> as the highlight colour. If no C<hicol> is provided,
it defaults to red.
When called, this method also sets the object field flag C<plotted>:
lib/AI/NeuralNet/Kohonen.pm view on Meta::CPAN
See also L</FILE FORMAT> and L</METHOD load_input>.
=item input
A reference to an array of training vectors, within which each vector
is represented by an array:
[ [v1a, v1b, v1c], [v2a,v2b,v2c], ..., [vNa,vNb,vNc] ]
See also C<table>.
lib/AI/NeuralNet/Kohonen.pm view on Meta::CPAN
return $self->{map}->[$x]->[$y]->{weight};
}
=head1 METHOD get_results
Finds and returns the results for all input vectors in the supplied
reference to an array of arrays,
placing the values in the C<results> field (an array reference)
and returning it either as an array or as a reference, depending on
the calling context.
If no array reference of input vectors is supplied, the values in
the C<input> field are used.
Individual results are in the array format as described in
L<METHOD find_bmu>.
See L<METHOD find_bmu>, and L</METHOD get_weight_at>.
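An illustrative call (assuming C<$nn> is a constructed AI::NeuralNet::Kohonen
object and that the made-up vectors match the map's dimensionality):

    my @results = $nn->get_results([ [1, 0, 0], [0, 1, 0] ]);
    # each element is the find_bmu() result for that vector,
    # with the class label (or "?") appended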
=cut
sub get_results { my ($self,$targets)=(shift,shift);
$self->{results} = [];
if (not defined $targets){
$targets = $self->{input};
} elsif (not $targets eq $self->{input}){
foreach (@$targets){
next if ref $_ eq 'AI::NeuralNet::Kohonen::Input';
lib/AI/NeuralNet/Kohonen.pm view on Meta::CPAN
}
}
foreach my $target (@{ $targets}){
$_ = $self->find_bmu($target);
push @$_, $target->{class}||"?";
push @{$self->{results}}, $_;
}
# Make it a scalar if it's a scalar
# if ($#{$self->{results}} == 0){
# $self->{results} = @{$self->{results}}[0];
# }
return wantarray? @{$self->{results}} : $self->{results};
}
=head1 METHOD map_results
Clears the C<map> and fills it with the results.
The sole parameter is passed to L<METHOD clear_map>.
L<METHOD get_results> is then called, and the results
returned are fed into the object field C<map>.
This may change, as it seems misleading to re-use that field.
=cut
sub map_results { my $self=shift;
}
=head1 METHOD dump
lib/AI/NeuralNet/Kohonen.pm view on Meta::CPAN
Loads a SOM_PAK-format file of input vectors.
This method is automatically accessed if the constructor is supplied
with an C<input_file> field.
Requires: a path to a file.
Returns C<undef> on failure.
See L</FILE FORMAT>.
lib/AI/NeuralNet/Kohonen.pm view on Meta::CPAN
=head2 ADJUSTING THE NEIGHBOURS OF THE BMU
W(t+1) = W(t) + THETA(t) L(t)( V(t)-W(t) )
Where C<L> is the learning rate, C<V> the target vector,
and C<W> the weight. THETA(t) represents the influence
of distance from the BMU upon a node's learning, and
is calculated by the C<Node> class - see
L<AI::NeuralNet::Kohonen::Node/distance_effect>.
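As a rough sketch of that update for a single node's weight vector (the names
here are illustrative, not the module's internals):

    # W(t+1) = W(t) + THETA(t) * L(t) * ( V(t) - W(t) ), applied per component
    for my $i (0 .. $#{$weight}) {
        $weight->[$i] += $theta * $l * ($target->[$i] - $weight->[$i]);
    }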
=cut
lib/AI/NeuralNet/Kohonen.pm view on Meta::CPAN
Returns: reference to a 2d array that is the mask.
=cut
sub _make_gaussian_mask { my ($smooth) = (shift);
my $f = 4; # Cut-off threshold
my $g_mask_2d = [];
for my $x (0..$smooth){
$g_mask_2d->[$x] = [];
for my $y (0..$smooth){
$g_mask_2d->[$x]->[$y] =
lib/AI/NeuralNet/Kohonen.pm view on Meta::CPAN
}
}
# Receives an array of ONE element,
# which should be an array of an array of elements
my @bmu = $self->get_results($targets);
# Check input and output dims are the same
if ($#{$self->{map}->[0]->[1]->{weight}} != $targets->[0]->{dim}){
confess "target input and map dimensions differ";
}
lib/AI/NeuralNet/Kohonen.pm view on Meta::CPAN
=over 4
The input data is stored in ASCII-form as a list of entries, one line
...for each vectorial sample.
The first line of the file is reserved for status knowledge of the
entries; in the present version it is used to define the following
items (these items MUST occur in the indicated order):
- Dimensionality of the vectors (integer, compulsory).
- Topology type, either hexa or rect (string, optional, case-sensitive).
- Map dimension in x-direction (integer, optional).
lib/AI/NeuralNet/Kohonen.pm view on Meta::CPAN
...
Subsequent lines consist of n floating-point numbers followed by an
optional class label (that can be any string) and two optional
qualifiers (see below) that determine the usage of the corresponding
data entry in training programs. The data files can also contain an
arbitrary number of comment lines that begin with '#', and are
ignored. (One '#' for each comment line is needed.)
If some components of some data vectors are missing (due to data
collection failures or any other reason) those components should be
marked with 'x'...[in processing, these] are ignored.
...
Each data line may have two optional qualifiers that determine the
lib/AI/NeuralNet/Kohonen.pm view on Meta::CPAN
=over 4
=item -
Enhancement factor: e.g. weight=3. The training rate for the
corresponding input pattern vector is multiplied by this
parameter so that the reference vectors are updated as if this
input vector were repeated 3 times during training (i.e., as if
the same vector had been stored 2 extra times in the data file).
=item -
Fixed-point qualifier: e.g. fixed=2,5. The map unit defined by
the fixed-point coordinates (x = 2; y = 5) is selected instead of
the best-matching unit for training. (See below for the definition
of coordinates over the map.) If several inputs are forced to
known locations, a wanted orientation results in the map.
=back
=back
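Putting the pieces above together, a tiny input file in this format might look
like the following (the values, class names and qualifiers are invented):

    3 hexa 12 8
    # dimensionality 3, hexagonal topology, 12 x 8 map
    0.1 0.9 0.4 classA weight=3
    0.8 x   0.2 classB fixed=2,5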
view release on metacpan or search on metacpan
$self->{connector} = $AI::NeuralNet::Mesh::Connector;
# Build mesh
$self->_init();
# Initialize activation, thresholds, etc., if provided
if(ref($layers->[0]) eq "HASH") {
for (0..$self->{total_layers}) {
$self->activation($_,$layers->[$_]->{activation});
$self->threshold($_,$layers->[$_]->{threshold});
$self->mean($_,$layers->[$_]->{mean});
}
}
# Done!
$self->{error} = "extend_layer(): You must provide specs to extend layer $layer with.\n";
return undef;
}
if(ref($specs) eq "HASH") {
$self->activation($layer,$specs->{activation}) if($specs->{activation});
$self->threshold($layer,$specs->{threshold}) if($specs->{threshold});
$self->mean($layer,$specs->{mean}) if($specs->{mean});
return $self->add_nodes($layer,$specs->{nodes});
} else {
return $self->add_nodes($layer,$specs);
}
my $inc = $args{inc} || 0.002; # learning gradient
my $max = $args{max} || 1024; # max iterations
my $degrade = $args{degrade} || 0; # enable gradient degrading
my $error = ($args{error}>-1 && defined $args{error}) ? $args{error} : -1;
my $dinc = 0.0002; # amount to adjust gradient by
my $diff = 100; # error margin between results
my $start = new Benchmark;
$inputs = $self->crunch($inputs) if($inputs == 0);
$outputs = $self->crunch($outputs) if($outputs == 0);
my ($flag,$ldiff,$cdiff,$_mi,$loop,$y);
while(!$flag && ($max ? $loop<$max : 1)) {
# See POD for usage
sub run_set {
my $self = shift;
my $data = shift;
my $len = $#{$data}/2;
my (@results,$res);
for my $x (0..$len) {
$res = $self->run($data->[$x*2]);
for(0..$#{$res}){$results[$x]->[$_]=$res->[$_]}
d("Running set $x [$res->[0]]...\r",4);
}
return \@results;
}
#
# Loads a CSV-like dataset from disk
#
for my $y (0..$self->{layers}->[$x]-1) {
my $w='';
for my $z (0..$self->{layers}->[$x-1]-1) {
$w.="$self->{mesh}->[$n]->{_inputs}->[$z]->{weight},";
}
print FILE "n$n=$w$self->{mesh}->[$n]->{activation},$self->{mesh}->[$n]->{threshold},$self->{mesh}->[$n]->{mean}\n";
$n++;
}
}
close(FILE);
for my $z (0..$self->{layers}->[$x-1]-1) {
$self->{mesh}->[$n]->{_inputs}->[$z]->{weight} = $l[$z];
}
my $z = $self->{layers}->[$x-1];
$self->{mesh}->[$n]->{activation} = $l[$z];
$self->{mesh}->[$n]->{threshold} = $l[$z+1];
$self->{mesh}->[$n]->{mean} = $l[$z+2];
$n++;
}
}
no strict 'refs';
for(0..$layer-1){$n+=$self->{layers}->[$_]}
$self->{mesh}->[$n+$node]->{activation} = $value;
}
# Set the activation threshold for a specific layer.
# Only applicable if that layer uses "sigmoid" or "sigmoid_2"
# usage: $net->threshold($layer,$threshold);
sub threshold {
my $self = shift;
my $layer = shift || 0;
my $value = shift || 0.5;
my $n = 0;
no strict 'refs';
for(0..$layer-1){$n+=$self->{layers}->[$_]}
for($n..$n+$self->{layers}->[$layer]-1) {
$self->{mesh}->[$_]->{threshold} = $value;
}
}
# Applies a threshold to a specific node
sub node_threshold {
my $self = shift;
my $layer = shift || 0;
my $node = shift || 0;
my $value = shift || 0.5;
my $n = 0;
no strict 'refs';
for(0..$layer-1){$n+=$self->{layers}->[$_]}
$self->{mesh}->[$n+$node]->{threshold} = $value;
}
# Set mean (avg.) flag for a layer.
# usage: $net->mean($layer,$flag);
# If $flag is true, it enables finding the mean for that layer,
for my $x (0..$len-1) { $tmp = $x if($ref1->[$x] < $ref1->[$tmp]) }
return $tmp;
}
# Following is a collection of a few nifty custom activation functions.
# range() is exported by default, the rest you can get with:
# use AI::NeuralNet::Mesh ':acts'
# The ':all' tag also gets these into your namespace.
#
# range() returns a closure limiting the output
# $net->activation(4,range(6,15,26,106,28,3));
#
# Note: when using a range() activator, train the
# net TWICE on the data set, because the first time
# the range() function searches for the top value in
# the inputs, and therefore, results could fluctuate.
# The second learning cycle guarantees more accuracy.
#
sub range {
my @r=@_;
sub{$_[1]->{t}=$_[0]if($_[0]>$_[1]->{t});$r[intr($_[0]/$_[1]->{t}*$#r)]}
# or between -1 and 1 if $r is 2. $r defaults to 1, as you can see.
#
# Note: when using a ramp() activator, train the
# net at least TWICE on the data set, because the first
# time the ramp() function searches for the top value in
# the inputs, and therefore, results could fluctuate.
# The second learning cycle guarantees more accuracy.
#
sub ramp {
my $r=shift||1;my $t=($r<2)?0:-1;  # $t is the output offset: 0 for a 0..1 range, -1 for -1..1
sub{$_[1]->{t}=$_[0]if($_[0]>$_[1]->{t});$_[0]/$_[1]->{t}*$r+$t}
}
# Self-explanatory, pretty much. $threshold is used to decide if an input
# is true or false (1 or 0). If an input is below $threshold, it is false.
sub and_gate {
my $threshold = shift || 0.5;
sub {
my $sum = shift;
my $self = shift;
for my $x (0..$self->{_inputs_size}-1) { return $self->{_parent}->{const} if($self->{_inputs}->[$x]->{value}<$threshold) }
return $sum/$self->{_inputs_size};
}
}
# Self-explanatory; $threshold is used the same as above.
sub or_gate {
my $threshold = shift || 0.5;
sub {
my $sum = shift;
my $self = shift;
for my $x (0..$self->{_inputs_size}-1) { return $sum/$self->{_inputs_size} if($self->{_inputs}->[$x]->{value}>=$threshold) }
return $self->{_parent}->{const};
}
}
1;
# Sum
for my $i (@{$self->{_inputs}}) {
$output += $i->{value};
}
# Handle activations, thresholds, and means
$output /= $self->{_inputs_size} if($self->{flag_mean});
#$output += (rand()*$self->{_parent}->{random});
$output = ($output>=$self->{threshold})?1:0 if(($self->{activation} eq "sigmoid") || ($self->{activation} eq "sigmoid_1"));
if($self->{activation} eq "sigmoid_2") {
$output = 1 if($output >$self->{threshold});
$output = -1 if($output <$self->{threshold});
$output = 0 if($output==$self->{threshold});
}
# Handle CODE refs
$output = &{$self->{activation}}($output,$self) if(ref($self->{activation}) eq "CODE");
$net->learn([0,0],[0]);
$net->learn([0,1],[0]);
$net->learn([1,0],[0]);
$net->learn([1,1],[1]);
# Present it with two test cases
my $result_bit_1 = $net->run([0,1])->[0];
my $result_bit_2 = $net->run([1,1])->[0];
# Display the results
print "AND test with inputs (0,1): $result_bit_1\n";
print "AND test with inputs (1,1): $result_bit_2\n";
=head1 VERSION & UPDATES
This is version B<0.44>, an update release for version 0.43.
This network model is very flexible. It will allow for classic binary
operation or any range of integer or floating-point inputs you care
to provide. With this you can change activation types on a per-node or
per-layer basis (you can even include your own anonymous subs as
activation types). You can add sigmoid transfer functions and control
the threshold. You can learn data sets in batch, and load CSV data
set files. You can do almost anything you need to with this module.
This code is designed to be flexible. Any new ideas for this module?
See AUTHOR, below, for contact info.
This module is designed to also be a customizable, extensible
neural network simulation toolkit. Through a combination of setting
the $Connection variable and using custom activation functions, as
well as basic package inheritance, you can simulate many different
types of neural network structures with very little new code written
by you.
In this module I have included a more accurate form of "learning" for the
mesh. This form performs descent toward a local error minimum (0) on a
directional delta, rather than the desired value for that node. This allows
for better and more accurate results with larger datasets. This module also
uses a simpler recursion technique which, surprisingly, is more accurate than
the original technique that I've used in other ANNs.
=head1 EXPORTS
range
intr
pdiff
See range() intr() and pdiff() for description of their respective functions.
Also provided are several export tag sets for usage in the form of:
use AI::NeuralNet::Mesh ':tag';
- Exports:
ramp()
and_gate()
or_gate()
See the respective methods/functions for information about
each method/functions usage.
=head1 METHODS
=item AI::NeuralNet::Mesh->new($file);
This will automatically create a new network from the file C<$file>. It will
return undef if the file was of an incorrect format or non-existent. Otherwise,
it will return a blessed reference to a network completely restored from C<$file>.
=item AI::NeuralNet::Mesh->new(\@layer_sizes);
This constructor will make a network with the number of layers corresponding to the length
in elements of the array ref passed. Each element in the array ref passed is expected
to contain an integer specifying the number of nodes (neurons) in that layer. The first
layer ($layer_sizes[0]) is to be the input layer, and the last layer in @layer_sizes is to be
the output layer.
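For instance (the layer sizes here are arbitrary):

    # 2 input nodes, 2 hidden nodes, 1 output node
    my $net = AI::NeuralNet::Mesh->new([2, 2, 1]);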
=item AI::NeuralNet::Mesh->new(\@array_of_hashes);
Another dandy constructor...this is my favorite. It allows you to tailor the number of layers,
the size of the layers, the activation type (you can even add anonymous inline subs with this one),
and even the threshold, all with one array ref-ed constructor.
Example:
my $net = AI::NeuralNet::Mesh->new([
{
}
},
{
nodes => 1,
activation => sigmoid,
threshold => 0.75
}
]);
Interesting, eh? What you are basically passing is this:
my @info = (
{ },
{ },
{ },
...
);
You are passing an array ref whose elements are each a hash reference. Each
hash reference, or more precisely, each element in the array reference you are passing
to the constructor, represents a layer in the network. Like the constructor above,
the first element is the input layer, and the last is the output layer. The rest are
hidden layers.
Each hash reference is expected to have AT LEAST the "nodes" key set to the number
of nodes (neurons) in that layer. The other two keys are optional. If "activation" is left
out, it defaults to "linear". If "threshold" is left out, it defaults to 0.50.
The "activation" key can be one of four values:
linear ( simply use sum of inputs as output )
sigmoid [ sigmoid_1 ] ( only positive sigmoid )
other than the ones listed above.
Three of the activation syntaxes are shown in the first constructor above, the "linear",
"sigmoid" and code ref types.
You can also set the activation and threshold values after network creation with the
activation() and threshold() methods.
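For example (the layer number and values are illustrative):

    $net->activation(1, 'sigmoid');   # make layer 1 a sigmoid layer
    $net->threshold(1, 0.75);         # with a 0.75 activation threshold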
=item $net->learn($input_map_ref, $desired_result_ref [, options ]);
NOTE: learn_set() now has increment-degrading turned OFF by default. See note
on the degrade flag, below.
This will 'teach' a network to associate a new input map with a desired
result. It will return a string containing benchmarking information.
You can also specify strings as inputs and outputs to learn, and they will be
crunched automatically. Example:
$net->learn('corn', 'cob');
It defaults to 1024. Set it to 0 if you never want the loop to quit before
the pattern is perfectly learned.
$maximum_allowable_percentage_of_error is the maximum allowable error to have. If
this is set, then learn() will return when the percentage difference between the
actual results and desired results falls below $maximum_allowable_percentage_of_error.
If you do not include 'error', or $maximum_allowable_percentage_of_error is set to -1,
then learn() will not return until it gets an exact match for the desired result OR it
reaches $maximum_iterations.
$degrade_increment_flag is a simple flag used to allow/disallow increment degrading
during learning based on a product of the error difference with several other factors.
$degrade_increment_flag is off by default. Setting $degrade_increment_flag to a true
flag => $flag
pattern => $row
If "flag" is set to some TRUE value, as in "flag => 1" in the hash of options, or if the option "flag"
is not set, then it will return a percentage representing the amount of forgetfulness. Otherwise,
learn_set() will return an integer specifying the amount of forgetfulness when all the patterns
are learned.
If "pattern" is set, then learn_set() will use that pattern in the data set to measure forgetfulness by.
If "pattern" is omitted, it defaults to the first pattern in the set. Example:
Now why the heck would anyone want to measure forgetfulness, you ask? Maybe you wonder how I
even measure that. Well, it is not a vital value that you have to know. I just put in a
"forgetfulness measure" one day because I thought it would be neat to know.
How the module measures forgetfulness is this: First, it learns all the patterns
in the set provided, then it will run the very first pattern (or whatever pattern
is specified by the "row" option) in the set after it has finished learning. It
will compare the run() output with the desired output as specified in the dataset.
In a perfect world, the two should match exactly. What we measure is how much that
they don't match, thus the amount of forgetfulness the network has.
=item $net->run_set($set);
This takes an array ref of the same structure as the learn_set() method, above. It returns
an array ref. Each element in the returned array ref represents the output for the corresponding
element in the dataset passed. Uses run() internally.
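An illustrative call (assuming C<@set> is laid out as described for learn_set()):

    my $outputs = $net->run_set(\@set);
    print "output for row 0: @{ $outputs->[0] }\n";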
=item $net->get_outs($set);
Simple utility function which takes an array ref of the same structure as the learn_set() method,
above. It returns an array ref of the same type as run_set() wherein each element contains an
output value. The output values are the target values specified in the $set passed. Each element
in the returned array ref represents the output value for the corresponding row in the dataset
passed. (A row is two elements of the dataset together, see learn_set() for dataset structure.)
=item $net->load_set($file,$column,$seperator);
Loads a CSV-like dataset from disk
If there were no errors, it will return a reference to $net.
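A hypothetical call (the file name, column number and separator are made up;
see the full parameter descriptions in the distribution's POD for their exact roles):

    $net->load_set("training_data.csv", 4, ',');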
=item $net->load($filename);
This will load from disk any network saved by save() and completely restore the internal
state at the point save() was called.
If the file is of an invalid file type, then load() will
return undef. Use the error() method, below, to print the error message.
a blessed hash reference to that node.
See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions
other than the ones listed above.
The activation type for each layer is preserved across load/save calls.
EXCEPTION: Due to the constraints of Perl, I cannot load/save the actual subs that the code
ref option points to. Therefore, you must re-apply any code ref activation types after a
load() call.
This sets the activation function for a specific node in a layer. The same notes apply
here as to the activation() method above.
=item $net->threshold($layer,$value);
This sets the activation threshold for a specific layer. The threshold is only used
when activation is set to "sigmoid", "sigmoid_1", or "sigmoid_2".
=item $net->node_threshold($layer,$node,$value);
This sets the activation threshold for a specific node in a layer. The threshold is only used
when activation is set to "sigmoid", "sigmoid_1", or "sigmoid_2".
=item $net->join_cols($array_ref,$row_length_in_elements,$high_state_character,$low_state_character);
This is more of a utility function than any real necessary function of the package.
=item $net->extend(\@array_of_hashes);
This allows you to re-apply any activations and thresholds with the same array ref which
you created a network with. This is useful for re-applying code ref activations after a load()
call without having to type the code ref twice.
You can also specify the extension in a simple array ref like this:
$net->extend([2,3,1]);
Which will simply add more nodes if needed to set the number of nodes in each layer to their
respective elements. This works just like the respective new() constructor, above.
NOTE: Your net will probably require re-training after adding nodes.
=item $net->extend_layer($layer,\%hash);
=item $net->p($a,$b);
Returns a floating point number which represents $a as a percentage of $b.
=item $net->intr($float);
=item $net->crunch($string);
This splits a string passed with /[\s\t]/ into an array ref containing unique indexes
to the words. The words are stored in an internal array and preserved across load() and save()
calls. This is designed to be used to generate unique maps suitable for passing to learn() and
run() directly. It returns an array ref.
The words are not duplicated internally. For example:
}
print $net->run_uc("I love corn."),"\n";
On my system, this responds with, "Good, Healthy Food." If you try to run crunch() with
"I love pop.", though, you will probably get "Food! apples. apples." (At least it returns
that on my system.) As you can see, the associations are not yet perfect, but it can make
for some interesting demos!
=item $net->crunched($word);
bitmaps. This will set the debugger to automatically insert a line break after that many
elements in the map output when dumping the currently run map during a learn loop.
It will return the current width when called with a 0 or undef value.
The column width is preserved across load() and save() calls.
=item $net->random($rand);
This will set the randomness factor for the network. Default is 0. When called
with no arguments, or an undef value, it will return the current randomness value. When
called with a 0 value, it will disable randomness in the network. The randomness factor
is preserved across load() and save() calls.
=item $net->const($const);
This sets the run const. for the network. The run const. is a value that is added
to every input line when a set of inputs are run() or learn() -ed, to prevent the
network from hanging on a 0 value. When called with no arguments, it returns the current
const. value. It defaults to 0.0001 on a newly-created network. The run const. value
is preserved across load() and save() calls.
=item $net->error();
Returns the last error message which occured in the mesh, or undef if no errors have
$net->activation(4,range(6,15,26,106,28,3));
Note: when using a range() activator, train the
net TWICE on the data set, because the first time
the range() function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.
The actual code that implements the range closure is
a bit convoluted, so I will expand on it here as a simple
tutorial for custom activation functions.
use AI::NeuralNet::Mesh ':acts';
Note: when using a ramp() activator, train the
net at least TWICE on the data set, because the first
time the ramp() function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.
No code to show here, as it is almost exactly the same as range().
=item and_gate($threshold);
Self-explanatory, pretty much. This turns the node into a basic AND gate.
$threshold is used to decide if an input is true or false (1 or 0). If
an input is below $threshold, it is false. $threshold defaults to 0.5.
You can get this into your namespace with the ':acts' export
tag as so:
use AI::NeuralNet::Mesh ':acts';
input connections:
= line 1 = sub {
= line 2 = my $sum = shift;
= line 3 = my $self = shift;
= line 4 = my $threshold = 0.50;
= line 5 = for my $x (0..$self->{_inputs_size}-1) {
= line 6 = return 0.000001 if($self->{_inputs}->[$x]->{value}<$threshold)
= line 7 = }
= line 8 = return $sum/$self->{_inputs_size};
= line 9 = }
Lines 2 and 3 pull in our sum and self reference. Line 5 opens a loop to go over
all the input lines into this node. Line 6 looks at each input line's value
and compares it to the threshold. If the value of that line is below threshold, then
we return 0.000001 to signify a 0 value. (We don't return a 0 value so that the network
doesn't get hung trying to multiply a 0 by a huge weight during training [it would just
keep getting a 0 as the product, and it would never learn]). Line 8 returns the mean
value of all the inputs if all inputs were above threshold.
Very simple, eh? :)
=item or_gate($threshold)
Self-explanatory. This turns the node into a basic OR gate; $threshold is used the same as above.
You can get this into your namespace with the ':acts' export
tag as so:
use AI::NeuralNet::Mesh ':acts';
way the brain's neurons are, neural networks are able to associate and
generalize without rules. They have solved problems in pattern recognition,
robotics, speech processing, financial predicting and signal processing, to
name a few.
One of the first impressive neural networks was NetTalk, which read in ASCII
text and correctly pronounced the words (producing phonemes which drove a
speech chip), even those it had never seen before. Designed by John Hopkins
biophysicist Terry Sejnowski and Charles Rosenberg of Princeton in 1986,
this application made the Backpropagation training algorithm famous. Using
the same paradigm, a neural network has been trained to classify sonar
returns from an undersea mine and rock. This classifier, designed by
Sejnowski and R. Paul Gorman, performed better than a nearest-neighbor
classifier.
The kinds of problems best solved by neural networks are those that people
are good at such as association, evaluation and pattern recognition.
Problems that are difficult to compute and do not require perfect answers,
just very good answers, are also best done with neural networks. A quick,
very good response is often more desirable than a more accurate answer which
takes longer to compute. This is especially true in robotics or industrial
controller applications. Predictions of behavior and general analysis of
data are also affairs for neural networks. In the financial arena, consumer
loan analysis and financial forecasting make good applications. New network
designers are working on weather forecasts by neural networks (Myself
contain some sort of pattern. For example, they cannot predict the lottery,
since this is a random process. It is unlikely that a neural network could
be built which has the capacity to think as well as a person does for two
reasons. Neural networks are terrible at deduction, or logical thinking and
the human brain is just too complex to completely simulate. Also, some
problems are too difficult for present technology. Real vision, for
example, is a long way off.
In short, Neural Networks are poor at precise calculations, but good at
association, evaluation, and pattern recognition.
=head1 AUTHOR
Josiah Bryan F<E<lt>jdb@wcoil.comE<gt>>
Copyright (c) 2000 Josiah Bryan. All rights reserved. This program is free software;
you can redistribute it and/or modify it under the same terms as Perl itself.
The C<AI::NeuralNet::Mesh> and related modules are free software. THEY COME WITHOUT WARRANTY OF ANY KIND.
$Id: AI::NeuralNet::Mesh.pm, v0.44 2000/15/09 03:29:08 josiah Exp $
A mailing list has been set up for AI::NeuralNet::Mesh and AI::NeuralNet::BackProp.
The list is for discussion of AI and neural net related topics as they pertain to
AI::NeuralNet::BackProp and AI::NeuralNet::Mesh. I will also announce in the group
each time a new release of AI::NeuralNet::Mesh is available.
The list address is at:
ai-neuralnet-backprop@egroups.com
To subscribe, send a blank email:
ai-neuralnet-backprop-subscribe@egroups.com