AI-NeuralNet-BackProp
BackProp.pm (excerpt)
}
#
# name: AI::NeuralNet::BackProp
#
# author: Josiah Bryan
# date: Tuesday August 15 2000
# desc: A simple back-propagation, feed-forward neural network with
# learning implemented via a generalization of Dobbs rule and
# several principles of Hopfield networks.
# online: http://www.josiah.countystart.com/modules/AI/cgi-bin/rec.pl
#
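#
# Typical use of the module looks roughly like the sketch below. This is a
# minimal, illustrative example based on the documented interface; the layer
# sizes and the training pair are made up for illustration only:
#
#     use AI::NeuralNet::BackProp;
#
#     # Create a new network with 1 layer, 5 neurons per layer, and 5 outputs.
#     my $net = new AI::NeuralNet::BackProp(1,5,5);
#
#     # Teach the net one input/output pair, then run new data through it.
#     $net->learn([0,0,1,1,1], [1,0,1,0,1]);
#     my $result = $net->run([0,1,1,1,0]);
#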
package AI::NeuralNet::BackProp::neuron;
use strict;
# Dummy constructor
sub new {
bless {}, shift
}
BackProp.pm (excerpt)
my $inc = $args{inc};
my $max = $args{max};
my $error = $args{error};
my $p = (defined $args{flag}) ? $args{flag} : 1;
my $row = (defined $args{pattern}) ? $args{pattern}*2+1 : 1;
my ($fa,$fb);
for my $x (0..$len) {
print "\nLearning index $x...\n" if($AI::NeuralNet::BackProp::DEBUG);
my $str = $self->learn($data->[$x*2],   # The list of data to input to the net
                       $data->[$x*2+1], # The output desired
                       inc=>$inc,       # The starting learning gradient
                       max=>$max,       # The maximum num of loops allowed
                       error=>$error);  # The maximum (%) error allowed
print $str if($AI::NeuralNet::BackProp::DEBUG);
}
my $res;
# If the requested pattern row was given as a string, crunch() it into an array ref
$data->[$row] = $self->crunch($data->[$row]) if(!ref($data->[$row]));
if ($p) {
BackProp.pm (excerpt)
my $error = $args{error};
my @learned;
my $left = $len + 1;   # pairs not yet learned (indices 0..$len, as in learn_set() above)
while($left > 0) {
	# Pick a random index we haven't learned yet
	my $x = $self->intr(rand()*$len);
	next if($learned[$x]);
	$learned[$x] = 1;
	$left--;
	print "\nLearning index $x...\n" if($AI::NeuralNet::BackProp::DEBUG);
	my $str = $self->learn($data->[$x*2],   # The list of data to input to the net
	                       $data->[$x*2+1], # The output desired
	                       inc=>$inc,       # The starting learning gradient
	                       max=>$max,       # The maximum num of loops allowed
	                       error=>$error);  # The maximum (%) error allowed
	print $str if($AI::NeuralNet::BackProp::DEBUG);
}
return 1;
}
# Returns the index of the element with the highest value in the array ref passed
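# (The sub itself falls outside this excerpt. A standalone sketch of the same
#  idea -- purely illustrative, not the module's actual implementation -- might
#  look like this:
#
#      sub highest_index {
#          my $ref = shift;
#          my $idx = 0;
#          for my $i (1..$#{$ref}) {
#              $idx = $i if($ref->[$i] > $ref->[$idx]);
#          }
#          return $idx;
#      }
#  )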
BackProp.pm (excerpt)
if($flag == 2) {
$self->{NET}->[$y+$z]->connect($self->{NET}->[$y+$div+$z]);
$self->{NET}->[$y+$z]->connect($self->{NET}->[$y+$z+1]) if($z<$div-1);
}
AI::NeuralNet::BackProp::out1 "\n";
}
AI::NeuralNet::BackProp::out1 "\n";
}
# These next two loops connect the _run and _map packages (the IO interface) to
# the start and end 'layers', respectively. They are how we insert data into
# the network and how we get data back out of it. The _run and _map packages
# are connected to the neurons so that the neurons treat the IO packages as
# just another neuron passing data along. In reality, the IO packages are special
# packages that implement the same methods as neurons but are designed for specific
# IO purposes. You will never need to call any of the IO packages directly. Instead,
# they are called whenever you use the run(), map(), or learn() methods of your network.
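# (As a purely illustrative sketch of that idea -- the method name below is an
#  assumption for illustration, not the module's real internal API -- an IO
#  package only needs to expose the same entry point a neuron does, e.g.:
#
#      package My::FakeOutput;
#      sub new   { bless { got => [] }, shift }
#      sub input { my ($self,$value) = @_; push @{$self->{got}}, $value; }
#
#  so a neuron can hand it data without knowing it is not a real neuron.)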
AI::NeuralNet::BackProp::out2 "\nMapping I (_run package) connections to network...\n";
for($y=0; $y<$div; $y++) {
BackProp.pm (excerpt)
as disabling Storable. Steve is the author of AI::Perceptron, and gave some good suggestions for
weighting the neurons. Daniel was a great help with early beta testing of the module and related
ideas. Pat has been a great help in running the module through the works. Pat is the author of
the new Inter game, an in-depth strategy game. He is using a group of neural networks internally
which provides a good test bed for coming up with new ideas for the network. Thank you for all of
your help, everybody.
=head1 DOWNLOAD
You can always download the latest copy of AI::NeuralNet::BackProp
from http://www.josiah.countystart.com/modules/AI/cgi-bin/rec.pl
=head1 MAILING LIST
A mailing list has been set up for AI::NeuralNet::BackProp for discussion of AI and
neural net related topics as they pertain to AI::NeuralNet::BackProp. I will also
announce in the group each time a new release of AI::NeuralNet::BackProp is available.
The list address is at: ai-neuralnet-backprop@egroups.com
To subscribe, send a blank email to: ai-neuralnet-backprop-subscribe@egroups.com
from operating theory, not math theory. Any die-hard neural
networking gurus out there? Let me know how far off I am with
this code! :-)
Regards,
~ Josiah Bryan, <jdb@wcoil.com>
Latest Version:
http://www.josiah.countystart.com/modules/AI/cgi-bin/rec.pl?README
reasons. Neural networks are terrible at deduction, or logical thinking, and
the human brain is just too complex to completely simulate. Also, some
problems are too difficult for present technology. Real vision, for
example, is a long way off.
In short, neural networks are poor at precise calculations, but good at
association, evaluation, and pattern recognition.
examples/ex_add2.pl (excerpt)
my @data = (
[ 2633, 2665, 2685], [ 2633 + 2665 + 2685 ],
[ 2623, 2645, 2585], [ 2623 + 2645 + 2585 ],
[ 2627, 2633, 2579], [ 2627 + 2633 + 2579 ],
[ 2611, 2627, 2563], [ 2611 + 2627 + 2563 ],
[ 2640, 2637, 2592], [ 2640 + 2637 + 2592 ]
);
print "Learning started, will cycle $top times with inc = $inc\n";
# Make it learn the whole dataset $top times
my @list;
my $t1=new Benchmark;
for my $a (1..$top)
{
print "Outer Loop: $a : ";
$forgetfulness = $net->learn_set( \@data,
examples/ex_bmp.pl (excerpt)
2,2,2,1,2,
1,1,1,1,2 ], [ 5 ],
);
$net->range(1,5);
# If we haven't saved the net already, do the learning
if(!$net->load('images.net')) {
print "\nLearning started...\n";
# Make it learn the whole dataset $top times
my @list;
my $top=3;
for my $a (0..$top) {
my $t1=new Benchmark;
print "\n\nOuter Loop: $a\n";
# Test forgetfulness
my $f = $net->learn_set(\@data, inc => 0.1,
examples/ex_bmp2.pl (excerpt)
# Create our model input
my @map = (1,1,1,1,1,
0,0,1,0,0,
0,0,1,0,0,
0,0,1,0,0,
1,0,1,0,0,
1,0,1,0,0,
1,1,1,0,0);
print "\nLearning started...\n";
print $net->learn(\@map,'J');
print "Learning done.\n";
# Build a test map
my @tmp = (0,0,1,1,1,
1,1,1,0,0,
0,0,0,1,0,
0,0,0,1,0,
examples/ex_dow.pl (excerpt)
[ 3, 244, 235, 164, 19.6, 19.8, 18.1, 2627, 2633, 2579], [ 2630 ],
[ 4, 261, 244, 181, 19.6, 19.6, 18.1, 2611, 2627, 2563], [ 2620 ],
[ 5, 276, 261, 196, 19.5, 19.6, 18.0, 2630, 2611, 2582], [ 2638 ],
[ 6, 287, 276, 207, 19.5, 19.5, 18.0, 2637, 2630, 2589], [ 2635 ],
[ 7, 296, 287, 212, 19.3, 19.5, 17.8, 2640, 2637, 2592], [ 2641 ]
);
# If we haven't saved the net already, do the learning
if(!$net->load('dow.dat')) {
print "\nLearning started...\n";
# Make it learn the whole dataset $top times
my @list;
my $top=1;
for my $a (0..$top) {
my $t1=new Benchmark;
print "\n\nOuter Loop: $a\n";
# Test forgetfulness
my $f = $net->learn_set(\@data, inc => 0.2,