AI-Perceptron


Build.PL


use Module::Build;

my $build = Module::Build->new
  (
   module_name        => 'AI::Perceptron',
   dist_version_from  => 'lib/AI/Perceptron.pm',
   create_makefile_pl => 'passthrough',
   create_readme      => 1,
   license            => 'perl',
   build_requires     => {
			  'Test::More'    => '0.01',
			  'Module::Build' => '0.20',
		         },
   requires           => {
			  'accessors'     => '0.01',
		         },
  );

$build->create_build_script;


__END__

=head1 AUTHOR

Changes

----------------
 AI::Perceptron
----------------

Revision history for AI::Perceptron.

1.0
    +	moved to Module::Build
    +	restructured project directory tree
    +	fixed bug in algorithm [reported by Himanshu Garg]
    +	refactored api & code, things should be more obvious now.
    +	more documentation & bugfixes
    +	wrote a proper test-suite
    +	weights & threshold no longer random

0.01
    +	original dist created Fri Feb 18 16:05:09 2000 by h2xs 1.19
    +	project began July 20, 1999, originally designed on a napkin which,
	sadly enough, I probably still have lying around somewhere.

META.yml

--- #YAML:1.0
name: AI-Perceptron
version: 1.0
license: perl
distribution_type: module
requires:
  accessors: 0.01
recommends: {}
build_requires:
  Module::Build: 0.20
  Test::More: 0.01
conflicts: {}
provides:
  AI::Perceptron:
    file: lib/AI/Perceptron.pm
    version: 1.0
generated_by: Module::Build version 0.20_02

Makefile.PL


    unless (eval "use Module::Build::Compat 0.02; 1" ) {
      print "This module requires Module::Build to install itself.\n";
      
      require ExtUtils::MakeMaker;
      my $yn = ExtUtils::MakeMaker::prompt
	('  Install Module::Build now from CPAN?', 'y');
      
      unless ($yn =~ /^y/i) {
	warn " *** Cannot install without Module::Build.  Exiting ...\n";
	exit 1;
      }

README

NAME
    AI::Perceptron - example of a node in a neural network.

SYNOPSIS
     use AI::Perceptron;

     my $p = AI::Perceptron->new
               ->num_inputs( 2 )
               ->learning_rate( 0.04 )
               ->threshold( 0.02 )
               ->weights([ 0.1, 0.2 ]);

     my @inputs  = ( 1.3, -0.45 );   # input can be any number
     my $target  = 1;                # output is always -1 or 1
     my $current = $p->compute_output( @inputs );

     print "current output: $current, target: $target\n";

     $p->add_examples( [ $target, @inputs ] );

README

MODEL

                  +-----+---------+
     X[1] o------ |W[1] |         |
     X[2] o------ |W[2] |         |
      .           | .   |   ___   |_________|    __  Squarewave |_______\  Output
      .           | .   |   \     |    S    | __|    Generator  |       /
      .           | .   |   /__   |         +-------------------+
     X[n] o------ |W[n] |   Sum   |
                  +-----+---------+

                 S  =  T + Sum( W[i]*X[i] )  as i goes from 1 -> n
            Output  =  1 if S > 0; else -1

    Where "X[n]" are the perceptron's *inputs*, "W[n]" are the *Weights*
    that get applied to the corresponding input, and "T" is the *Threshold*.

    The *squarewave generator* just turns the result into a positive or
    negative number.

    So in summary, when you feed the perceptron some numeric inputs you get
    either a positive or negative output depending on the inputs, their
    weights, and the threshold.
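
    As a worked example, take the values from the SYNOPSIS: T = 0.02, W =
    [0.1, 0.2], and inputs X = [1.3, -0.45]. Then:

        S = 0.02 + (0.1 * 1.3) + (0.2 * -0.45)
          = 0.02 + 0.13 - 0.09
          = 0.06

    Since S > 0, the perceptron outputs 1.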

TRAINING
    Usually you have to train a perceptron before it will give you the
    outputs you expect. This is done by giving the perceptron a set of
    examples containing the output you want for some given inputs:

        -1 => -1, -1
        -1 =>  1, -1
        -1 => -1,  1
         1 =>  1,  1

    If you've ever studied boolean logic, you should recognize that as the
    truth table for an "AND" gate (ok so we're using -1 instead of the
    commonly used 0, same thing really).

    You *train* the perceptron by iterating over the examples and adjusting
    the *weights* and *threshold* by some value until the perceptron's
    output matches the expected output of each example:

        while some examples are incorrectly classified
            update weights for each example that fails

    The value each weight is adjusted by is calculated as follows:

        delta[i] = learning_rate * (expected_output - output) * input[i]

    Which is known as a negative feedback loop - it uses the current output
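
    As a rough sketch of one update pass in Perl (the variable names here are
    illustrative, not part of the module's API):

        my $err = $expected_output - $output;     # +2 or -2 for a miss
        $threshold += $learning_rate * $err;      # the threshold adjusts too
        for my $i (0 .. $#inputs) {
            $weights[$i] += $learning_rate * $err * $inputs[$i];
        }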

README


    Also, note that this means you can get stuck in an infinite loop. It's
    not a bad idea to set the maximum number of iterations to prevent that.

CONSTRUCTOR
    new( [%args] )
        Creates a new perceptron with the following default properties:

            num_inputs    = 1
            learning_rate = 0.05
            threshold     = 0.0
            weights       = empty list

        Ideally you should use the accessors to set the properties, but for
        backwards compatibility you can still use the following arguments:

            Inputs => $number_of_inputs  (positive int)
            N      => $learning_rate     (float)
            W      => [ @weights ]       (floats)

        The number of elements in *W* must be equal to the number of inputs
        plus one. This is because older versions of AI::Perceptron combined
        the threshold and the weights into a single list, where W[0] was the
        threshold and W[1] was the first weight. Great idea, eh? :) That's
        why it's *DEPRECATED*.
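
        For example, "W => [ 0.5, 0.1, 0.2 ]" sets the threshold to 0.5 and
        the weights to (0.1, 0.2).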

ACCESSORS
    num_inputs( [ $int ] )
        Set/get the perceptron's number of inputs.

    learning_rate( [ $float ] )
        Set/get the perceptron's learning rate.

    weights( [ \@weights ] )
        Set/get the perceptron's weights (floats).

        For backwards compatibility, returns a list containing the
        *threshold* as the first element in list context:

          ($threshold, @weights) = $p->weights;

        This usage is *DEPRECATED*.

    threshold( [ $float ] )
        Set/get the perceptron's threshold.

    training_examples( [ \@examples ] )
        Set/get the perceptron's list of training examples. This should be a
        list of arrayrefs of the form:

            [ $expected_result => @inputs ]

    max_iterations( [ $int ] )
        Set/get the maximum number of training iterations; a negative value
        implies no maximum.

METHODS
    compute_output( @inputs )
        Computes and returns the perceptron's output (either -1 or 1) for
        the given inputs. See the above model for more details.

README

        Adds the @training_examples to the current list of examples. See
        training_examples() for more details.

    train( [ @training_examples ] )
        Uses the *Stochastic Approximation of the Gradient-Descent* model to
        adjust the perceptron's weights until all training examples are
        classified correctly.

        @training_examples can be passed for convenience. These are passed
        to add_examples(). If you want to re-train the perceptron with an
        entirely new set of examples, reset the training_examples().
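
        For example, to re-train from scratch (a minimal sketch; the
        @new_examples list is a placeholder):

            $p->training_examples( [] );   # discard the old examples
            $p->train( @new_examples );    # train on the new set only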

AUTHOR
    Steve Purkis <spurkis@epn.nu>

COPYRIGHT
    Copyright (c) 1999-2003 Steve Purkis. All rights reserved.

    This package is free software; you can redistribute it and/or modify it
    under the same terms as Perl itself.

REFERENCES
    *Machine Learning*, by Tom M. Mitchell.

THANKS
    Himanshu Garg <himanshu@gdit.iiit.net> for his bug-report and feedback.
    Many others for their feedback.

examples/and.pl

# and.pl - the AND function using a perceptron
# Steve Purkis <spurkis@epn.nu>
# July 20, 1999
##


use strict;
use warnings;
use Data::Dumper;
use AI::Perceptron;

print( "Example: training a perceptron to recognize an 'AND' function.\n",
       "usage: $0 [<threshold> <weight1> <weight2>]\n" );

my $p = AI::Perceptron->new
                      ->num_inputs( 2 )
                      ->learning_rate( 0.1 );
if (@ARGV) {
    $p->threshold( shift(@ARGV) )
      ->weights([ shift(@ARGV), shift(@ARGV) ]);
}

my @training_exs = (
		    [-1 => -1, -1],
		    [-1 =>  1, -1],
		    [-1 => -1,  1],
		    [ 1 =>  1,  1],
		   );

examples/and.pl

dump_perceptron( $p );

print "\nTraining...\n";
$p->train( @training_exs );

print "\nAfter Training\n";
dump_perceptron( $p );

sub dump_perceptron {
    my $p = shift;
    print "\tThreshold: ", $p->threshold, " Weights: ", join(', ', @{ $p->weights }), "\n";
    foreach my $inputs (@training_exs) {
	my $target = $inputs->[0];
	print "\tInputs = {", join(',', @$inputs[1..2]), "}, target=$target, output=", $p->compute_output( @$inputs[1..2] ), "\n";
    }
}

lib/AI/Perceptron.pm


=head1 NAME

AI::Perceptron - example of a node in a neural network.

=head1 SYNOPSIS

 use AI::Perceptron;

 my $p = AI::Perceptron->new
           ->num_inputs( 2 )
           ->learning_rate( 0.04 )
           ->threshold( 0.02 )
           ->weights([ 0.1, 0.2 ]);

 my @inputs  = ( 1.3, -0.45 );   # input can be any number
 my $target  = 1;                # output is always -1 or 1
 my $current = $p->compute_output( @inputs );

 print "current output: $current, target: $target\n";

 $p->add_examples( [ $target, @inputs ] );

lib/AI/Perceptron.pm

   warn "couldn't train in 10 iterations!";

 print "training until it gets it right\n";
 $p->max_iterations( -1 )->train; # watch out for infinite loops

=cut

package AI::Perceptron;

use strict;
use accessors qw( num_inputs learning_rate _weights threshold
		  training_examples max_iterations );

our $VERSION = '1.0';
our $Debug   = 0;

sub new {
    my $class = shift;
    my $self  = bless {}, $class;
    return $self->init( @_ );
}

sub init {
    my $self = shift;
    my %args = @_;

    $self->num_inputs( $args{Inputs} || 1 )
         ->learning_rate( $args{N} || 0.05 )
	 ->max_iterations( -1 )
	 ->threshold( $args{T} || 0.0 )
	 ->training_examples( [] )
	 ->weights( [] );

    # DEPRECATED: backwards compat
    if ($args{W}) {
	$self->threshold( shift @{ $args{W} } )
	     ->weights( [ @{ $args{W} } ] );
    }

    return $self;
}

sub verify_weights {
    my $self = shift;

    for my $i (0 .. $self->num_inputs-1) {
	$self->weights->[$i] ||= 0.0;
    }

    return $self;
}

# DEPRECATED: backwards compat
sub weights {
    my $self = shift;
    my $ret  = $self->_weights(@_);
    return wantarray ? ( $self->threshold, @{ $self->_weights } ) : $ret;
}

sub add_examples {
    my $self = shift;

    foreach my $ex (@_) {
	die "training examples must be arrayrefs!" unless (ref $ex eq 'ARRAY');
	my @inputs = @{$ex}; # be nice, take a copy
	my $target = shift @inputs;
	die "expected result must be either -1 or 1, not $target!"
	  unless (abs $target == 1);
	# TODO: avoid duplicate entries
	push @{ $self->training_examples }, [$target, @inputs];
    }

    return $self;
}

sub add_example {
    shift->add_examples(@_);
}

sub compute_output {
    my $self   = shift;
    my @inputs = @_;

    my $sum = $self->threshold; # start at threshold
    for my $i (0 .. $self->num_inputs-1) {
	$sum += $self->weights->[$i] * $inputs[$i];
    }

    # binary (returning the real $sum is not part of this model)
    return ($sum > 0) ? 1 : -1;
}

##
# $p->train( [ @training_examples ] )

lib/AI/Perceptron.pm

	    my ($expected_output, @inputs) = @$training_example;

	    $self->emit( "Training X=<", join(',', @inputs),
			 "> with target $expected_output" ) if $Debug > 1;

	    # want the perceptron's output to equal the training output
	    # TODO: this duplicates work done by classifies_examples_correctly()
	    my $output = $self->compute_output(@inputs);
	    next if ($output == $expected_output);

	    $self->adjust_threshold( $expected_output, $output )
	         ->adjust_weights( \@inputs, $expected_output, $output );
	}
    }

    $self->emit( "completed in $iter iterations." );

    return $self;
}

# return true only if all training examples are correctly classified

lib/AI/Perceptron.pm

    my $training_examples = $self->training_examples;

    foreach my $training_example (@$training_examples) {
	my ($output, @inputs) = @{$training_example};
	return if ($self->compute_output( @inputs ) != $output);
    }

    return 1;
}

sub adjust_threshold {
    my $self            = shift;
    my $expected_output = shift;
    my $output          = shift;
    my $n               = $self->learning_rate;

    my $delta = $n * ($expected_output - $output);
    $self->threshold( $self->threshold + $delta );

    return $self;
}

sub adjust_weights {
    my $self            = shift;
    my $inputs          = shift;
    my $expected_output = shift;
    my $output          = shift;
    my $n               = $self->learning_rate;
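
    # NOTE: the rest of this sub is cut off in this view; the following is a
    # sketch based on the documented update rule:
    #   delta[i] = learning_rate * (expected_output - output) * input[i]
    for my $i (0 .. $self->num_inputs-1) {
	$self->weights->[$i] += $n * ($expected_output - $output) * $inputs->[$i];
    }

    return $self;
}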

lib/AI/Perceptron.pm

=head1 MODEL

              +-----+---------+
 X[1] o------ |W[1] |         |
 X[2] o------ |W[2] |         |
  .           | .   |   ___   |_________|    __  Squarewave |_______\  Output
  .           | .   |   \     |    S    | __|    Generator  |       /
  .           | .   |   /__   |         +-------------------+
 X[n] o------ |W[n] |   Sum   |
              +-----+---------+

	     S  =  T + Sum( W[i]*X[i] )  as i goes from 1 -> n
	Output  =  1 if S > 0; else -1

Where C<X[n]> are the perceptron's I<inputs>, C<W[n]> are the I<Weights> that
get applied to the corresponding input, and C<T> is the I<Threshold>.

The I<squarewave generator> just turns the result into a positive or negative
number.

So in summary, when you feed the perceptron some numeric inputs you get either
a positive or negative output depending on the inputs, their weights, and the
threshold.
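
As a worked example, take the values from the SYNOPSIS: C<T> = 0.02, C<W> =
[0.1, 0.2], and inputs C<X> = [1.3, -0.45]. Then:

    S = 0.02 + (0.1 * 1.3) + (0.2 * -0.45)
      = 0.02 + 0.13 - 0.09
      = 0.06

Since S > 0, the perceptron outputs 1.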

=head1 TRAINING

Usually you have to train a perceptron before it will give you the outputs you
expect.  This is done by giving the perceptron a set of examples containing the
output you want for some given inputs:

    -1 => -1, -1
    -1 =>  1, -1
    -1 => -1,  1
     1 =>  1,  1

If you've ever studied boolean logic, you should recognize that as the truth
table for an C<AND> gate (ok so we're using -1 instead of the commonly used 0,
same thing really).

You I<train> the perceptron by iterating over the examples and adjusting the
I<weights> and I<threshold> by some value until the perceptron's output matches
the expected output of each example:

    while some examples are incorrectly classified
        update weights for each example that fails

The value each weight is adjusted by is calculated as follows:

    delta[i] = learning_rate * (expected_output - output) * input[i]

Which is known as a negative feedback loop - it uses the current output as an
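
For instance, with a learning rate of 0.1, an expected output of 1, an actual
output of -1, and C<input[i]> = 1:

    delta[i] = 0.1 * (1 - -1) * 1 = 0.2

so that weight is nudged upwards by 0.2.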

lib/AI/Perceptron.pm

=head1 CONSTRUCTOR

=over 4

=item new( [%args] )

Creates a new perceptron with the following default properties:

    num_inputs    = 1
    learning_rate = 0.05
    threshold     = 0.0
    weights       = empty list

Ideally you should use the accessors to set the properties, but for backwards
compatibility you can still use the following arguments:

    Inputs => $number_of_inputs  (positive int)
    N      => $learning_rate     (float)
    W      => [ @weights ]       (floats)

The number of elements in I<W> must be equal to the number of inputs plus one.
This is because older versions of AI::Perceptron combined the threshold and the
weights into a single list, where W[0] was the threshold and W[1] was the first
weight.  Great idea, eh? :)  That's why it's I<DEPRECATED>.
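
For example, C<< W => [ 0.5, 0.1, 0.2 ] >> sets the threshold to 0.5 and the
weights to (0.1, 0.2).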

=back

=head1 ACCESSORS

=over 4

=item num_inputs( [ $int ] )

Set/get the perceptron's number of inputs.

=item learning_rate( [ $float ] )

Set/get the perceptron's learning rate.

=item weights( [ \@weights ] )

Set/get the perceptron's weights (floats).

For backwards compatibility, returns a list containing the I<threshold> as the
first element in list context:

  ($threshold, @weights) = $p->weights;

This usage is I<DEPRECATED>.

=item threshold( [ $float ] )

Set/get the perceptron's threshold.

=item training_examples( [ \@examples ] )

Set/get the perceptron's list of training examples.  This should be a list of
arrayrefs of the form:

    [ $expected_result => @inputs ]

=item max_iterations( [ $int ] )

Set/get the maximum number of training iterations; a negative value implies no
maximum.

=back

=head1 METHODS

=over 4

lib/AI/Perceptron.pm

Adds the @training_examples to the current list of examples.  See
L<training_examples()> for more details.

=item train( [ @training_examples ] )

Uses the I<Stochastic Approximation of the Gradient-Descent> model to adjust
the perceptron's weights until all training examples are classified correctly.

@training_examples can be passed for convenience.  These are passed to
L<add_examples()>.  If you want to re-train the perceptron with an entirely new
set of examples, reset the L<training_examples()>.
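
For example, to re-train from scratch (a minimal sketch; C<@new_examples> is a
placeholder):

    $p->training_examples( [] );   # discard the old examples
    $p->train( @new_examples );    # train on the new set only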

=back

=head1 AUTHOR

Steve Purkis E<lt>spurkis@epn.nuE<gt>

=head1 COPYRIGHT

Copyright (c) 1999-2003 Steve Purkis.  All rights reserved.

This package is free software; you can redistribute it and/or modify it under
the same terms as Perl itself.

=head1 REFERENCES

I<Machine Learning>, by Tom M. Mitchell.

=head1 THANKS

t/01_basic.t

## Test suite for AI::Perceptron
##

use Test::More 'no_plan'; # tests => 3;

use_ok( 'AI::Perceptron' );

my $p = AI::Perceptron->new
          ->num_inputs( 2 )
          ->learning_rate( 0.01 )
          ->threshold( 0.8 )
          ->weights([ -0.5, 0.5 ])
          ->max_iterations( 20 );

# get the current output of the node given a training example:
my @inputs = ( 1, 1 );
my $target_output  = 1;
my $current_output = $p->compute_output( @inputs );

ok( defined $current_output,         'compute_output' );
is( $current_output, $target_output, 'expected output for preset weights' );

# train the perceptron until it gets it right:
my @training_examples = ( [ -$target_output, @inputs ] );
is( $p->add_examples( @training_examples ), $p, 'add_examples' );
is( $p->train, $p, 'train' );
is( $p->compute_output( @inputs ), -$target_output, 'perceptron re-trained' );

t/02_api.t

## API tests for AI::Perceptron
##

use Test::More 'no_plan'; # tests => 3;

use AI::Perceptron;

my $p = AI::Perceptron->new
                      ->num_inputs( 3 )
                      ->learning_rate( 0.01 )
                      ->threshold( 0.02 )
                      ->weights([ 0.1, 0.2, -0.3 ])
                      ->max_iterations( 5 )
                      ->training_examples( [ [ -1 => 1, 2 ] ] );

is( $p->num_inputs,             3,      'num_inputs' );
is( $p->learning_rate,          0.01,   'learning_rate' );
is( $p->threshold,              0.02,   'threshold' );
is( $p->max_iterations,         5,      'max_iterations' );
isa_ok( $p->weights,           'ARRAY', 'weights' );
isa_ok( $p->training_examples, 'ARRAY', 'examples' );

is( $p->add_examples( [-1 => 1, 1] ), $p, 'add_examples' );
can_ok( $p, 'add_example',                'add_example' );

##
## backwards compat
##

my $pc = AI::Perceptron->new(
    Inputs => 2,
    N      => 0.001,
    W      => [ -0.1, 0.2, 0.3 ],
);

is( $pc->num_inputs,     2,       'num_inputs() [compat]' );
is( $pc->learning_rate,  0.001,   'learning_rate() [compat]' );
is( $pc->threshold,     -0.1,     'threshold() [compat]' );

my @weights = $pc->weights;
is( @weights, 3, 'weights() in list context [compat]' );


