Also about 50% faster (do you sense a theme developing here?)
Removed the momentum module (strictly speaking, removed the backprop module
and renamed the momentum module). There were few code differences, and it's
easier to maintain this way. The default is vanilla backprop; momentum &
Fahlman adjustments are only applied if specified in the network config.
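A minimal sketch of such a config (parameter names as documented in the
Backprop perldoc further down; the values are illustrative only):

    use AI::NNFlex::Backprop;

    # momentum is optional - omit it for vanilla backprop
    my $network = AI::NNFlex::Backprop->new(
                      learningrate    => 0.2,
                      momentum        => 0.6,
                      fahlmanconstant => 0.1);  # 0.1 is also the documented default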
Bundled all the maths functions into AI::NNFlex::Mathlib
Implemented calls to an error transformation function, as per Fahlman. In
testing it doesn't seem to make much difference, but at least the facility
is there now.
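The transform is set per layer with the errorfunction parameter, as in
examples/test.pl below:

    $network->add_layer( nodes=>8,
                         errorfunction=>'atanh',
                         activationfunction=>"tanh");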
#############################################################
0.20
20050308
v0.17 was never released, as I rejigged the whole lot for
object inheritance before I got around to uploading it to CPAN.
will be put back later (perhaps) or incorporated into a separate GUI
frontend.
Removed feedforward_pdl.pm from the distribution - it shouldn't have
been there in the first place!
Fixed the lesion subs so they expect parameter=>value pairs instead of an
anonymous hash (a leftover from when I didn't know you could do that).
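For example (the same call appears in examples/lesion.pl below):

    $network->lesion(nodes=>0.5, connections=>0.5);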
Fixed an error - random weights were bounded by 1, not by the parameter
'randomweights'. It's now positive only. Some benchmarking is needed, as it
appears that positive-only random starting weights, rather than a mix of
positive and negative, make the network quicker to converge, at least with
momentum.
Weights now default to rand(1) instead of 0 - at least for backprop-type
nets, a default 0 weight will never work. For other types of nets the random
weights can be overridden with the 'fixedweights' parameter.
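A sketch of the two options (the exact semantics of 'fixedweights' are
assumed here, not confirmed by the docs in this listing):

    my $network = AI::NNFlex::Backprop->new(randomweights => 0.5); # weights start in [0, 0.5)
    # or, for net types that want non-random weights:
    my $network = AI::NNFlex::Backprop->new(fixedweights  => 0.5); # assumed: every weight starts at 0.5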
Fixed load_state to correctly read weights from the bias node.
#############################################################
0.17
it's at the user's risk.
See xor.pl or the perldoc & readme for examples.
Cleaned up the perldoc some more. Commented out all the method perldocs, so
there is just the single block defining the distribution's documentation, as
advocated on PerlMonks. Method perldocs in importable modules have not been
commented out.
Removed the weight bounding in backprop & momentum. If the network
is going into an unstable state the weight bounding won't help,
and it causes errors under perl -w.
Implemented tests (apologies to the CPAN testers for not having
done so before!).
#################################################################
0.13
20050121
my $dataset = AI::NNFlex::Dataset->new([
    [0,0],[0],
    [0,1],[1],
    [1,0],[1],
    [1,1],[0]]);

my $counter=0;
my $err = 10;
# train until the network error falls below the target
while ($err > .001)
{
    $err = $dataset->learn($network);
    print "Epoch = $counter error = $err\n";
    $counter++;
}

# run the trained network over the dataset and print each output pattern
foreach (@{$dataset->run($network)})
{
    foreach (@$_){print $_}
    print "\n";
}
Put in some more error checking, particularly around attempts to create
connections between layers/nodes that don't exist.
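For instance, a call like the following (connect syntax as used in
examples/cars/cars.pl below; the out-of-range layer number is hypothetical)
should now be trapped by the new checks rather than failing silently:

    $network->connect(fromlayer=>9, tolayer=>2);  # layer 9 doesn't exist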
Write a simple net simulator with syntax loosely based on Xerion. At present
this lot is API driven; it should be straightforward to write a basic
simulator that calls the API in the backend.
Read & write methods for both networks and datasets, modelled on the SNNS
format (for use with the frontend script). The data should be SNNS format;
the network definition file will probably have to differ.
Implement an error method in addition to dbug, and clean up the dbug & error
calls.
examples/add.pl
[ 1, 1 ], [ 2 ],
[ 1, 2 ], [ 3 ],
[ 2, 2 ], [ 4 ],
[ 20, 20 ], [ 40 ],
[ 10, 10 ], [ 20 ],
[ 15, 15 ], [ 30 ],
[ 12, 8 ], [ 20 ],
]);
my $err = 10;
# Stop after 4096 epochs -- don't want to wait more than that
for ( my $i = 0; ($err > 0.0001) && ($i < 4096); $i++ ) {
$err = $dataset->learn($network);
print "Epoch = $i error = $err\n";
}
foreach (@{$dataset->run($network)})
{
foreach (@$_){print $_}
print "\n";
}
print "this should be 4000 - ";
$network->run([2000,2000]);
examples/bp.pl
#Translated into perl - ccolbourn oct 2004
my $numEpochs = 500;
my $numInputs = 3;
my $numHidden = 4;
my $numPatterns = 4;
my $LR_IH = 0.7;
my $LR_HO = 0.07;
my $patNum;
my $errThisPat;
my $outPred;
my $RMSerror;
my @trainInputs;
my @trainOutput;
# the outputs of the hidden neurons
my @hiddenVal;
# the weights
my @weightsIH;
examples/bp.pl
for(my $j = 0;$j <= $numEpochs;$j++)
{
for(my $i = 0;$i<$numPatterns;$i++)
{
#select a pattern at random (the fractional index is truncated
#when used as an array subscript)
$patNum = (rand()*$numPatterns)-0.001;
#calculate the current network output
#and error for this pattern
calcNet();
#change network weights
WeightChangesHO();
WeightChangesIH();
}
#display the overall network error
#after each epoch
calcOverallError();
print "epoch = ".$j." RMS Error = ".$RMSerror."\n";
}
#training has finished
#display the results
displayResults();
}
#============================================================
examples/bp.pl
}
#calculate the output of the network
#the output neuron is linear
$outPred = 0.0;
for(my $i = 0;$i<$numHidden;$i++)
{
$outPred = $outPred + $hiddenVal[$i] * $weightsHO[$i];
}
#calculate the error
$errThisPat = $outPred - $trainOutput[$patNum];
}
#************************************
sub WeightChangesHO()
#adjust the weights hidden-output
{
for(my $k = 0;$k<$numHidden;$k++)
{
$weightChange = $LR_HO * $errThisPat * $hiddenVal[$k];
$weightsHO[$k] = $weightsHO[$k] - $weightChange;
#regularisation: clamp the output weights to [-5, 5]
if ($weightsHO[$k] < -5)
{
$weightsHO[$k] = -5;
}
elsif ($weightsHO[$k] > 5)
{
$weightsHO[$k] = 5;
examples/bp.pl
#************************************
sub WeightChangesIH()
#adjust the weights input-hidden
{
for(my $i = 0;$i<$numHidden;$i++)
{
for(my $k = 0;$k<$numInputs;$k++)
{
# slope of tanh at the hidden unit's output: 1 - tanh^2
my $x = 1 - ($hiddenVal[$i] * $hiddenVal[$i]);
$x = $x * $weightsHO[$i] * $errThisPat * $LR_IH;
$x = $x * $trainInputs[$patNum][$k];
my $weightChange = $x;
$weightsIH[$k][$i] = $weightsIH[$k][$i] - $weightChange;
}
}
}
#************************************
sub initWeights()
examples/bp.pl
$patNum = $i;
calcNet();
print "pat = ".($patNum+1)." actual = ".$trainOutput[$patNum]." neural model = ".$outPred."\n";
}
}
#************************************
sub calcOverallError()
{
$RMSerror = 0.0;
for(my $i = 0;$i<$numPatterns;$i++)
{
$patNum = $i;
calcNet();
$RMSerror = $RMSerror + ($errThisPat * $errThisPat);
}
$RMSerror = $RMSerror/$numPatterns;
$RMSerror = sqrt($RMSerror);
}
examples/cars/cars.pl
$network->add_layer( nodes=>2,
activationfunction=>"linear");
$network->init();
# connect layer 2 back onto itself (recurrent connections)
$network->connect(fromlayer=>2,tolayer=>2);
my $counter=0;
my $err = 10;
while ($err >.001)
{
$err = $dataset->learn($network);
print "Epoch $counter: Error = $err\n";
$counter++;
}
foreach (@{$dataset->run($network)})
{
foreach (@$_){print $_}
print "\n";
}
examples/lesion.pl
my $dataset = AI::NNFlex::Dataset->new([
[0,0],[0],
[0,1],[1],
[1,0],[1],
[1,1],[0]]);
my $counter=0;
my $err = 10;
while ($err >.001)
{
$err = $dataset->learn($network);
print "Epoch $counter: Error = $err\n";
$counter++;
}
# damage the trained network: deactivate 50% of nodes and 50% of connections
$network->lesion(nodes=>0.5,connections=>0.5);
# save the weights (and activations) to file
$network->dump_state(filename=>"weights-learned.wts",activations=>1);
foreach (@{$dataset->run($network)})
{
foreach (@$_){print $_}
examples/reinforceTest.pl
$object->run([1,0]);
$output = $object->output();
foreach (@$output)
{
print "1,0 - $_ ";
}
print "\n";
$object->run([0,1]);
$err = $object->learn([1]);
$output = $object->output();
foreach (@$output)
{
print "0,1 - $_ ";
}
print "\n";
$object->run([0,1]);
$err = $object->learn([1]);
$output = $object->output();
foreach (@$output)
{
print "0,1 - $_ ";
}
print "\n";
$object->run([0,1]);
$output = $object->output();
foreach (@$output)
examples/test.pl
momentum=>0.6,
round=>1);
$network->add_layer( nodes=>8,
activationfunction=>"tanh");
$network->add_layer( nodes=>8,
errorfunction=>'atanh',
activationfunction=>"tanh");
$network->add_layer( nodes=>8,
activationfunction=>"linear");
$network->init();
my $dataset = AI::NNFlex::Dataset->new(\@data);
my $counter=0;
my $err = 10;
while ($err >.01)
{
$err = $dataset->learn($network);
print "Epoch = $counter error = $err\n";
$counter++;
}
$network->run([0,0,0,0,0,1,0,1]);
my $output = $network->output();
# $output is an array reference - print the individual output values
foreach (@$output){print $_}
print "\n";
examples/xor.pl
[0,0],[0],
[0,1],[1],
[1,0],[1],
[1,1],[0]]);
$dataset->save(filename=>'xor.pat');
$dataset->load(filename=>'xor.pat');
my $counter=0;
my $err = 10;
while ($err >.001)
#for (1..1500)
{
$err = $dataset->learn($network);
print "Epoch = $counter error = $err\n";
$counter++;
}
foreach (@{$dataset->run($network)})
{
foreach (@$_){print $_}
print "\n";
}
examples/xor_minimal.pl
my $dataset = AI::NNFlex::Dataset->new([
[0,0],[0],
[0,1],[1],
[1,0],[1],
[1,1],[0]]);
my $counter=0;
my $err = 10;
while ($err >.001)
{
$err = $dataset->learn($network);
print "Epoch $counter: Error = $err\n";
$counter++;
}
foreach (@{$dataset->run($network)})
{
foreach (@$_){print $_}
print "\n";
}
examples/xorminus.pl
[-1,-1],[-1],
[-1,1],[1],
[1,-1],[1],
[1,1],[-1]]);
$dataset->save(filename=>'xor.pat');
$dataset->load(filename=>'xor.pat');
my $counter=0;
my $err = 10;
while ($err >.001)
#for (1..1500)
{
$err = $dataset->learn($network);
print "Epoch = $counter error = $err\n";
$counter++;
}
foreach (@{$dataset->run($network)})
{
foreach (@$_){print $_}
print "\n";
}
lib/AI/NNFlex.pm
if (!$network->{'debug'})
{
@DEBUGLEVELS=@DEBUG;
}
else
{
@DEBUGLEVELS = @{$network->{'debug'}};
}
# 0 is error so ALWAYS display
if (!(grep /0/,@DEBUGLEVELS)){push @DEBUGLEVELS,0}
foreach (@DEBUGLEVELS)
{
if ($level == $_)
{
print "$message\n";
}
}
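A sketch of how the levels might be switched on for a network (assuming the
constructor accepts a 'debug' parameter holding an array ref of levels, as
the code above implies):

    my $network = AI::NNFlex::Backprop->new(debug=>[0,4]);  # errors plus level-4 trace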
lib/AI/NNFlex.pm
$$node{'activation'} =
rand($$params{'random'});
AI::NNFlex::dbug($params,"Randomly activated at ".$$node{'activation'},2);
}
else
{
$$node{'activation'} = 0;
}
$$node{'active'} = 1;
$$node{'error'} = 0;
bless $node,$class;
AI::NNFlex::dbug($params,"Created node $node",2);
return $node;
}
##############################################################################
# sub lesion
##############################################################################
sub lesion
lib/AI/NNFlex/Backprop.pm
###########################################################
#
package AI::NNFlex::Backprop;
use AI::NNFlex;
use AI::NNFlex::Feedforward;
use base qw(AI::NNFlex::Feedforward AI::NNFlex);
use strict;
sub calc_error
{
my $network = shift;
my $outputPatternRef = shift;
my @outputPattern = @$outputPatternRef;
my @debug = @{$network->{'debug'}};
if (scalar @debug > 0)
{$network->dbug ("Output pattern @outputPattern received by Backprop",4);}
my $outputLayer = $network->{'layers'}->[-1]->{'nodes'};
if (scalar @$outputLayer != scalar @outputPattern)
{
$network->dbug ("Wrong number of output values, net has ".scalar @$outputLayer." nodes",0);
return 0;
}
# Now calculate the error
my $counter=0;
foreach (@$outputLayer)
{
my $value = $_->{'activation'} - $outputPattern[$counter];
if ($_->{'errorfunction'})
{
my $errorfunction = $_->{'errorfunction'};
$value = $network->$errorfunction($value);
}
$_->{'error'} = $value;
$counter++;
if (scalar @debug > 0)
{$network->dbug ("Error on output node $_ = ".$_->{'error'},4);}
}
}
########################################################
# AI::NNFlex::Backprop::learn
########################################################
sub learn
lib/AI/NNFlex/Backprop.pm
# Set a default value on the Fahlman constant
if (!$network->{'fahlmanconstant'})
{
$network->{'fahlmanconstant'} = 0.1;
}
my @outputPattern = @$outputPatternRef;
$network->calc_error($outputPatternRef);
#calculate & apply dWs
$network->hiddenToOutput;
if (scalar @{$network->{'layers'}} > 2)
{
$network->hiddenOrInputToHidden;
}
# calculate the network error for this pattern
my $Err = $network->RMSErr($outputPatternRef);
lib/AI/NNFlex/Backprop.pm
if ($network->{'momentum'})
{
if ($node->{'connectedNodesWest'}->{'lastdelta'}->[$connectedNodeCounter])
{
$momentum = ($network->{'momentum'})*($node->{'connectedNodesWest'}->{'lastdelta'}->[$connectedNodeCounter]);
}
}
if (scalar @debug > 0)
{$network->dbug("Learning rate is ".$network->{'learningrate'},4);}
my $deltaW = (($network->{'learningrate'}) * ($node->{'error'}) * ($connectedNode->{'activation'}));
$deltaW = $deltaW+$momentum;
$node->{'connectedNodesWest'}->{'lastdelta'}->[$connectedNodeCounter] = $deltaW;
if (scalar @debug > 0)
{$network->dbug("Applying delta $deltaW on hiddenToOutput $connectedNode to $node",4);}
#
$node->{'connectedNodesWest'}->{'weights'}->[$connectedNodeCounter] -= $deltaW;
$connectedNodeCounter++;
}
lib/AI/NNFlex/Backprop.pm
my @layers = @{$network->{'layers'}};
my @debug = @{$network->{'debug'}};
# remove the last element (The output layer) from the stack
# because we've already calculated dW on that
pop @layers;
if (scalar @debug > 0)
{$network->dbug("Starting Backprop of error on ".scalar @layers." hidden layers",4);}
foreach my $layer (reverse @layers)
{
foreach my $node (@{$layer->{'nodes'}})
{
my $connectedNodeCounter=0;
if (!$node->{'connectedNodesWest'}) {last}
my $nodeError = 0;   # initialise to avoid uninitialized-value warnings
foreach my $connectedNode (@{$node->{'connectedNodesEast'}->{'nodes'}})
{
$nodeError += ($connectedNode->{'error'}) * ($connectedNode->{'connectedNodesWest'}->{'weights'}->[$connectedNodeCounter]);
$connectedNodeCounter++;
}
if (scalar @debug > 0)
{$network->dbug("Hidden node $node error = $nodeError",4);}
# Apply error function
if ($node->{'errorfunction'})
{
my $functioncall = $node->{'errorfunction'};
$nodeError = $network->$functioncall($nodeError);
}
$node->{'error'} = $nodeError;
# update the weights from nodes inputting to here
$connectedNodeCounter=0;
foreach my $westNodes (@{$node->{'connectedNodesWest'}->{'nodes'}})
{
my $momentum = 0;
if ($network->{'momentum'})
{
lib/AI/NNFlex/Backprop.pm
# get the slope from the activation function component
my $value = $node->{'activation'};
my $functionSlope = $node->{'activationfunction'}."_slope";
$value = $network->$functionSlope($value);
# Add the Fahlman constant
$value += $network->{'fahlmanconstant'};
$value = $value * $node->{'error'} * $network->{'learningrate'} * $westNodes->{'activation'};
my $dW = $value;
$dW = $dW + $momentum;
if (scalar @debug > 0)
{$network->dbug("Applying deltaW $dW to inputToHidden connection from $westNodes to $node",4);}
$node->{'connectedNodesWest'}->{'lastdelta'}->{$westNodes} = $dW;
$node->{'connectedNodesWest'}->{'weights'}->[$connectedNodeCounter] -= $dW;
lib/AI/NNFlex/Backprop.pm
my $sqrErr;
my $outputLayer = $network->{'layers'}->[-1]->{'nodes'};
if (scalar @$outputLayer != scalar @outputPattern)
{
$network->dbug("Wrong number of output values, net has ".scalar @$outputLayer." nodes",0);
return 0;
}
# Now calculate the error
my $counter=0;
foreach (@$outputLayer)
{
my $value = $_->{'activation'} - $outputPattern[$counter];
$sqrErr += $value *$value;
$counter++;
if (scalar @debug > 0)
{$network->dbug("Error on output node $_ = ".$_->{'error'},4);}
}
# note: this is the square root of the summed squared error,
# with no division by the number of output nodes
my $error = sqrt($sqrErr);
return $error;
}
1;
=pod
=head1 NAME
AI::NNFlex::Backprop - a fast, pure perl backprop Neural Net simulator
lib/AI/NNFlex/Backprop.pm
momentum
fahlmanconstant
If randomweights is not specified the network will default to a random value from 0 to 1.
If momentum is not specified the network will default to vanilla (non momentum) backprop.
The Fahlman constant modifies the slope of the error curve. 0.1 is the standard value, and speeds the network up immensely. If no Fahlman constant is set, the network will default to 0.1.
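In the weight update (see hiddenOrInputToHidden above) the constant is added to the slope of the activation function, so each delta is effectively:

    dW = learningrate * error * input_activation * (slope(activation) + fahlmanconstant)

(plus the momentum term, where momentum is enabled).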
=head2 AI::NNFlex::Dataset
new ( [[INPUT VALUES],[OUTPUT VALUES],
[INPUT VALUES],[OUTPUT VALUES],..])
=head2 INPUT VALUES
These should be comma-separated values. They can be applied to the network with ::run or ::learn.
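For example (mirroring examples/reinforceTest.pl above):

    $network->run([0,1]);
    my $err = $network->learn([1]);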
lib/AI/NNFlex/Backprop.pm
=head2 add_layer
Syntax:
$network->add_layer( nodes=>NUMBER OF NODES IN LAYER,
persistentactivation=>RETAIN ACTIVATION BETWEEN PASSES,
decay=>RATE OF ACTIVATION DECAY PER PASS,
randomactivation=>MAXIMUM STARTING ACTIVATION,
threshold=>NYI,
activationfunction=>"ACTIVATION FUNCTION",
errorfunction=>'ERROR TRANSFORMATION FUNCTION',
randomweights=>MAX VALUE OF STARTING WEIGHTS);
The activation function must be defined in AI::NNFlex::Mathlib. Valid predefined activation functions are tanh & linear.
The error transformation function defines a transform that is done on the error value. It must be a valid function in AI::NNFlex::Mathlib. Using a non-linear transformation function on the error value can sometimes speed up training.
The following parameters are optional:
persistentactivation
decay
randomactivation
threshold
errorfunction
randomweights
=head2 init
Syntax:
$network->init();
lib/AI/NNFlex/Dataset.pm
}
###############################################################
# AI::NNFlex::Dataset::learn
###############################################################
sub learn
{
my $self = shift;
my $network = shift;
my $error;
for (my $itemCounter=0;$itemCounter<(scalar @{$self->{'data'}});$itemCounter +=2)
{
$network->run(@{$self->{'data'}}[$itemCounter]);
$error += $network->learn(@{$self->{'data'}}[$itemCounter+1]);
}
# return the square of the accumulated error
$error = $error*$error;
return $error;
}
#################################################################
# AI::NNFlex::Dataset::save
#################################################################
# save a dataset in an snns .pat file
#################################################################
sub save
{
my $dataset = shift;
my %config = @_;
open (OFILE,">".$config{'filename'}) or return "File error $!";
print OFILE "No. of patterns : ".((scalar @{$dataset->{'data'}})/2)."\n";
print OFILE "No. of input units : ".(scalar @{$dataset->{'data'}->[0]})."\n";
print OFILE "No. of output units : ".(scalar @{$dataset->{'data'}->[1]})."\n\n";
my $counter = 1;
my @values = @{$dataset->{'data'}};
while (@values)
{
print OFILE "# Input pattern $counter:\n";
lib/AI/NNFlex/Feedforward.pm
# This is the first propagation module for NNFlex
#
##########################################################
# Versions
# ========
#
# 1.0 20040910 CColbourn New module
#
# 1.1 20050116 CColbourn Added call to
# datasets where run
# is erroneously called
# with a dataset
#
# 1.2 20050206 CColbourn Fixed a bug where
# transfer function
# was called on every
# input to a node
# instead of total
#
# 1.3 20050218 CColbourn Changed to reflect
# new weight indexing
lib/AI/NNFlex/Mathlib.pm
{
my $network = shift;
my $value = shift;
if ($value <0){return -1}
if ($value >0){return 1}
return $value;
}
############################################################
# atanh error function
############################################################
sub atanh
{
my $network = shift;
my $value = shift;
# only transform error values in the open interval (-0.5,0.5);
# values outside it are passed through unchanged
if ($value >-0.5 && $value <0.5)
{
$value = log((1+$value)/(1-$value))/2;
}
return $value;
lib/AI/NNFlex/Mathlib.pm
1;
=pod
=head1 NAME
AI::NNFlex::Mathlib - miscellaneous mathematical functions for the AI::NNFlex NN package
=head1 DESCRIPTION
The AI::NNFlex::Mathlib package contains activation and error functions. At present there are the following:
Activation functions
=over
=item *
tanh
=item *
linear
lib/AI/NNFlex/Mathlib.pm
Error functions
=over
=item *
atanh
=back
If you want to implement your own activation/error functions, you can add them to this module. All activation functions to be used by certain types of net (like Backprop) require an additional function <function name>_slope, which returns the 1st ord...
This rule doesn't apply to all network types. Hopfield for example requires no slope calculation.
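A minimal sketch of that convention for a user-added activation function
(the name 'mylinear' is hypothetical; both subs take the network object and
then the value, following the pattern of atanh above):

    sub mylinear
    {
        my $network = shift;
        my $value = shift;
        return $value;
    }

    sub mylinear_slope
    {
        my $network = shift;
        my $value = shift;
        return 1;   # the first derivative of a linear function is constant
    }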
=head1 CHANGES
v1.2 includes hopfield_threshold
=head1 COPYRIGHT
Copyright (c) 2004-2005 Charles Colbourn. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
t/Backprop.t
my $dataset = AI::NNFlex::Dataset->new([
[0,0],[1,1],
[0,1],[1,0],
[1,0],[0,1],
[1,1],[0,0]]);
ok ($dataset); #test 4
##
# Test a learning pass
my $err = $dataset->learn($network);
ok($err); #test 5
##
# Test a run pass
$result = $dataset->run($network);
ok($result); #test 8
##
# test saving weights
$result = $network->dump_state(filename=>'state.wts',activations=>1);
t/Dataset.t
# compare original & loaded dataset
my $comparison;
if (scalar @{$dataset->{'data'}} == scalar @{$dataset2->{'data'}}){$comparison=1}
ok($comparison);
# delete a pair from the dataset
$result = $dataset->delete([4,5]);
ok($result);
# Test a learning pass
my $err = $dataset->learn($network);
ok($err); #test 5
##
# Test a run pass
$result = $dataset->run($network);
ok($result); #test 8
##
t/backprop.t
my $dataset = AI::NNFlex::Dataset->new([
[0,0],[1,1],
[0,1],[1,0],
[1,0],[0,1],
[1,1],[0,0]]);
ok ($dataset); #test 4
##
# Test a learning pass
my $err = $dataset->learn($network);
ok($err); #test 5
##
# Test a run pass
$result = $dataset->run($network);
ok($result); #test 8
##