AI-Perceptron
                +---------------+
 X[1] o------->|W[1]   T       |
 X[2] o------->|W[2]           |      +-------------------+
   .           | .      ___    |______|  Squarewave       |_______\  Output
   .           | .      \      |  S   |  Generator        |       /
   .           | .      /__    |      +-------------------+
 X[n] o------->|W[n]    Sum    |
               +---------------+

    S = T + Sum( W[i]*X[i] ) as i goes from 1 -> n

    Output = 1 if S > 0; else -1
Where C<X[n]> are the perceptron's I<inputs>, C<W[n]> are the I<Weights> that
get applied to the corresponding input, and C<T> is the I<Threshold>.
The I<squarewave generator> just turns the sum C<S> into either C<1> or C<-1>.
So in summary, when you feed the perceptron some numeric inputs you get either
a positive or a negative output depending on the inputs, their weights, and the
threshold.
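For example, here is a small standalone sketch of that computation in plain
Perl (illustrative only; the values and variable names are made up, and this
is not the module's own code):

    my @x = (   1,  -1  );    # inputs X[1..n]
    my @w = ( 0.5, 0.3  );    # weights W[1..n]
    my $t = 0.1;              # threshold T

    # S = T + Sum( W[i]*X[i] )
    my $s = $t;
    $s += $w[$_] * $x[$_] for 0 .. $#x;

    my $output = $s > 0 ? 1 : -1;   # the "squarewave generator"
    print "$output\n";              # prints 1 for these values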
=head1 TRAINING
Usually you have to train a perceptron before it will give you the outputs you
expect. This is done by giving the perceptron a set of examples containing the
output you want for some given inputs:
    -1 => -1, -1
    -1 =>  1, -1
    -1 => -1,  1
     1 =>  1,  1
If you've ever studied boolean logic, you should recognize that as the truth
table for an C<AND> gate (ok so we're using -1 instead of the commonly used 0,
same thing really).
You I<train> the perceptron by iterating over the examples and adjusting the
I<weights> and I<threshold> by some value until the perceptron's output matches
the expected output of each example:
    while some examples are incorrectly classified
        update weights for each example that fails
The value each weight is adjusted by is calculated as follows:
    delta[i] = learning_rate * (expected_output - output) * input[i]
This is known as a I<negative feedback loop>: it uses the current output as an
input when determining what the next output will be.
Also, note that this means training can get stuck in an infinite loop. It's
not a bad idea to set a maximum number of iterations to prevent that.
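To make the procedure concrete, here is a rough standalone sketch of the above
loop in plain Perl, trained on the C<AND> examples from earlier (illustrative
only, not the module's internals; it assumes the threshold is adjusted the same
way as a weight):

    my @examples = ( [ -1 => -1, -1 ],
                     [ -1 =>  1, -1 ],
                     [ -1 => -1,  1 ],
                     [  1 =>  1,  1 ] );
    my @w    = ( 0, 0 );   # weights
    my $t    = 0;          # threshold
    my $rate = 0.01;       # learning rate

    for my $iter ( 1 .. 1000 ) {          # cap iterations, as noted above
        my $failed = 0;
        for my $ex (@examples) {
            my ( $expected, @in ) = @$ex;
            my $s = $t;
            $s += $w[$_] * $in[$_] for 0 .. $#in;
            my $out = $s > 0 ? 1 : -1;
            next if $out == $expected;
            $failed++;
            # delta[i] = learning_rate * (expected_output - output) * input[i]
            $w[$_] += $rate * ( $expected - $out ) * $in[$_] for 0 .. $#in;
            $t += $rate * ( $expected - $out );
        }
        last unless $failed;              # all examples classified correctly
    }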
=head1 CONSTRUCTOR
=over 4
=item new( [%args] )
Creates a new perceptron with the following default properties:
    num_inputs    = 1
    learning_rate = 0.01
    threshold     = 0.0
    weights       = empty list
Ideally you should use the accessors to set the properties (as in the sketch
after this list), but for backwards compatibility you can still use the
following arguments:
    Inputs => $number_of_inputs   (positive int)
    N      => $learning_rate      (float)
    W      => [ @weights ]        (floats)
The number of elements in I<W> must be equal to the number of inputs plus one.
This is because older versions of AI::Perceptron combined the threshold and
the weights into a single list, where C<W[0]> was the threshold and C<W[1]>
was the first weight. Great idea, eh? :) That's why it's I<DEPRECATED>.
=back
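For example, a perceptron for the C<AND> gate from the TRAINING section might
be set up like so, using the accessors rather than the deprecated constructor
arguments (a sketch based only on the methods documented here):

    use AI::Perceptron;

    my $p = AI::Perceptron->new;   # all defaults
    $p->num_inputs( 2 );           # two inputs, as in the AND example
    $p->learning_rate( 0.05 );
    $p->threshold( 0.0 );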
=head1 ACCESSORS
=over 4
=item num_inputs( [ $int ] )
Set/get the perceptron's number of inputs.
=item learning_rate( [ $float ] )
Set/get the perceptron's learning rate.
=item weights( [ \@weights ] )
Set/get the perceptron's weights (floats).
For backwards compatibility, returns a list containing the I<threshold> as the
first element when called in list context:

    ($threshold, @weights) = $p->weights;
This usage is I<DEPRECATED>.
=item threshold( [ $float ] )
Set/get the perceptron's threshold.
=item training_examples( [ \@examples ] )
Set/get the perceptron's list of training examples (see the sketch after this
list). This should be a list of arrayrefs of the form:

    [ $expected_result => @inputs ]
=item max_iterations( [ $int ] )
Set/get the maximum number of training iterations; a negative value implies no
maximum.
=back
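Putting the accessors together, loading the C<AND> training data from earlier
might look like this (again just a sketch using the methods above):

    $p->training_examples( [
        [ -1 => -1, -1 ],
        [ -1 =>  1, -1 ],
        [ -1 => -1,  1 ],
        [  1 =>  1,  1 ],
    ] );
    $p->max_iterations( 1000 );   # avoid a possible infinite loop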