AI-Perceptron


README


    The *squarewave generator* just turns the result into a positive or
    negative number.

    So in summary, when you feed the perceptron some numeric inputs you get
    either a positive or negative output depending on the input's weights
    and a threshold.
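
    In code, that computation looks something like the following minimal
    sketch (an illustration of the idea, not necessarily the module's
    exact internals):

        # weighted sum of the inputs, compared against the threshold,
        # then squashed to +1/-1 by the squarewave generator
        sub compute_output {
            my ($self, @inputs) = @_;
            my $sum = 0;
            my @weights = @{ $self->weights };
            $sum += $weights[$_] * $inputs[$_] for 0 .. $#inputs;
            return $sum > $self->threshold ? 1 : -1;
        }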

TRAINING
    Usually you have to train a perceptron before it will give you the
    outputs you expect. This is done by giving the perceptron a set of
    examples containing the output you want for some given inputs:

        -1 => -1, -1
        -1 =>  1, -1
        -1 => -1,  1
         1 =>  1,  1

    If you've ever studied boolean logic, you should recognize that as the
    truth table for an "AND" gate (ok so we're using -1 instead of the
    commonly used 0, same thing really).
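
    In code, each example can be written as an array ref of the form
    [ $expected_output, @inputs ] (the format used in the module's tests),
    so given a constructed perceptron $p, the AND table above becomes:

        my @and_examples = (
            [ -1, -1, -1 ],
            [ -1,  1, -1 ],
            [ -1, -1,  1 ],
            [  1,  1,  1 ],
        );
        $p->add_examples( @and_examples );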

README

            update weights for each example that fails

    The value each weight is adjusted by is calculated as follows:

        delta[i] = learning_rate * (expected_output - output) * input[i]

    This is known as a negative feedback loop - it uses the current output
    as an input to determine what the next output will be.

    Also, note that this means training can get stuck in an infinite loop:
    if the examples are not linearly separable, no set of weights will ever
    classify them all correctly. It's a good idea to set the maximum number
    of iterations to prevent that.
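
    Putting the pieces together, the training loop looks roughly like the
    sketch below. It uses the documented accessors and assumes that
    training_examples() returns the examples as [ $expected, @inputs ]
    array refs; it's an outline of the algorithm, not the module's actual
    code:

        # repeat until every example is classified correctly,
        # or until max_iterations is exhausted
        for my $iter ( 1 .. $p->max_iterations ) {
            my $failures = 0;
            for my $example ( $p->training_examples ) {
                my ($expected, @inputs) = @$example;
                my $output = $p->compute_output( @inputs );
                next if $output == $expected;
                $failures++;

                # delta rule: adjust each weight by
                # learning_rate * (expected - output) * input
                my @w = @{ $p->weights };
                $w[$_] += $p->learning_rate * ($expected - $output) * $inputs[$_]
                    for 0 .. $#inputs;
                $p->weights( \@w );
            }
            last unless $failures;   # converged
        }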

CONSTRUCTOR
    new( [%args] )
        Creates a new perceptron with the following default properties:

            num_inputs    = 1
            learning_rate = 0.01
            threshold     = 0.0
            weights       = empty list

        Ideally you should use the accessors to set the properties, but for
        backwards compatibility you can still use the following arguments:

            Inputs => $number_of_inputs  (positive int)
            N      => $learning_rate     (float)
            W      => [ @weights ]       (floats)

        The number of elements in *W* must be equal to the number of inputs
        plus one. This is because older versions of AI::Perceptron combined
        the threshold and the weights into a single list, where W[0] was the
        threshold and W[1] was the first weight. Great idea, eh? :) That's
        why it's *DEPRECATED*.
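
        For example, here are both styles side by side (with the deprecated
        W argument, W[0] carries the threshold):

            # preferred: construct, then use the accessors
            my $p = AI::Perceptron->new;
            $p->num_inputs( 2 );
            $p->learning_rate( 0.01 );

            # deprecated: W[0] is the threshold, W[1..2] are the weights
            my $old = AI::Perceptron->new(
                Inputs => 2,
                N      => 0.01,
                W      => [ 0.0, -0.5, 0.5 ],
            );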

README

        Adds the @training_examples to the current list of examples. See
        training_examples() for more details.

    train( [ @training_examples ] )
        Uses the *Stochastic Approximation of the Gradient-Descent* model to
        adjust the perceptron's weights until all training examples are
        classified correctly.

        @training_examples can be passed for convenience. These are passed
        to add_examples(). If you want to re-train the perceptron with an
        entirely new set of examples, reset the training_examples().
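
        For example, using the @and_examples list from TRAINING above:

            $p->train( @and_examples );        # passed through add_examples()
            print $p->compute_output( 1, 1 );  # prints 1 once trained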

AUTHOR
    Steve Purkis <spurkis@epn.nu>

COPYRIGHT
    Copyright (c) 1999-2003 Steve Purkis. All rights reserved.

    This package is free software; you can redistribute it and/or modify it
    under the same terms as Perl itself.

lib/AI/Perceptron.pm


The I<squarewave generator> just turns the result into a positive or negative
number.

So in summary, when you feed the perceptron some numeric inputs you get either
a positive or negative output depending on the input's weights and a threshold.

=head1 TRAINING

Usually you have to train a perceptron before it will give you the outputs you
expect.  This is done by giving the perceptron a set of examples containing the
output you want for some given inputs:

    -1 => -1, -1
    -1 =>  1, -1
    -1 => -1,  1
     1 =>  1,  1

If you've ever studied boolean logic, you should recognize that as the truth
table for an C<AND> gate (ok so we're using -1 instead of the commonly used 0,
same thing really).

lib/AI/Perceptron.pm

        update weights for each example that fails

The value each weight is adjusted by is calculated as follows:

    delta[i] = learning_rate * (expected_output - output) * input[i]

This is known as a negative feedback loop - it uses the current output as an
input to determine what the next output will be.

Also, note that this means training can get stuck in an infinite loop: if the
examples are not linearly separable, no set of weights will ever classify them
all correctly.  It's a good idea to set the maximum number of iterations to
prevent that.

=head1 CONSTRUCTOR

=over 4

=item new( [%args] )

Creates a new perceptron with the following default properties:

    num_inputs    = 1
    learning_rate = 0.01
    threshold     = 0.0
    weights       = empty list

Ideally you should use the accessors to set the properties, but for backwards
compatibility you can still use the following arguments:

    Inputs => $number_of_inputs  (positive int)
    N      => $learning_rate     (float)
    W      => [ @weights ]       (floats)

The number of elements in I<W> must be equal to the number of inputs plus one.
This is because older versions of AI::Perceptron combined the threshold and the
weights into a single list, where W[0] was the threshold and W[1] was the first
weight.  Great idea, eh? :)  That's why it's I<DEPRECATED>.

lib/AI/Perceptron.pm

Adds the @training_examples to the current list of examples.  See
L<training_examples()> for more details.

=item train( [ @training_examples ] )

Uses the I<Stochastic Approximation of the Gradient-Descent> model to adjust
the perceptron's weights until all training examples are classified correctly.

@training_examples can be passed for convenience.  These are passed to
L<add_examples()>.  If you want to re-train the perceptron with an entirely new
set of examples, reset the L<training_examples()>.

=back

=head1 AUTHOR

Steve Purkis E<lt>spurkis@epn.nuE<gt>

=head1 COPYRIGHT

Copyright (c) 1999-2003 Steve Purkis.  All rights reserved.

t/01_basic.t

# construct a perceptron and preset its state (the opening lines of this
# excerpt are reconstructed - the accessors return the object, so they chain)
my $p = AI::Perceptron->new
          ->num_inputs( 2 )
          ->threshold( 0.8 )
          ->weights([ -0.5, 0.5 ])
          ->max_iterations( 20 );

# get the current output of the node given a training example:
my @inputs = ( 1, 1 );
my $target_output  = 1;
my $current_output = $p->compute_output( @inputs );

ok( defined $current_output,         'compute_output' );
is( $current_output, $target_output, 'expected output for preset weights' );

# train the perceptron until it gets it right:
my @training_examples = ( [ -$target_output, @inputs ] );
is( $p->add_examples( @training_examples ), $p, 'add_examples' );
is( $p->train, $p, 'train' );
is( $p->compute_output( @inputs ), -$target_output, 'perceptron re-trained' );


