AI-Perceptron
TRAINING
Usually you have to train a perceptron before it will give you the
outputs you expect. This is done by giving the perceptron a set of
examples containing the output you want for some given inputs:
    -1 => -1, -1
    -1 =>  1, -1
    -1 => -1,  1
     1 =>  1,  1
If you've ever studied boolean logic, you should recognize that as the
truth table for an "AND" gate (ok, so we're using -1 instead of the
commonly used 0, same thing really).
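In Perl terms, that example set might be written down as a list of array
refs, each holding the expected output followed by the two inputs. This
layout is just an assumption for illustration (see the module's SYNOPSIS
for the exact format it accepts); the training sketch after the
pseudocode below reuses it:

    # AND truth table as [ expected_output, input_1, input_2 ] triples
    my @examples = (
        [ -1, -1, -1 ],
        [ -1,  1, -1 ],
        [ -1, -1,  1 ],
        [  1,  1,  1 ],
    );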
You *train* the perceptron by iterating over the examples and adjusting
the *weights* and *threshold* by some value until the perceptron's
output matches the expected output of each example:
    while some examples are incorrectly classified
        update weights for each example that fails
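To make that pseudocode concrete, here is a minimal, self-contained Perl
sketch of the classic perceptron learning rule applied to the AND
examples above. It illustrates the algorithm only, not the module's
internals; the zero starting weights, the 0.1 learning rate, and the
helper name compute_output are assumptions made for this sketch:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # AND truth table: [ expected_output, input_1, input_2 ]
    my @examples = (
        [ -1, -1, -1 ],
        [ -1,  1, -1 ],
        [ -1, -1,  1 ],
        [  1,  1,  1 ],
    );

    my @weights       = ( 0, 0 );   # one weight per input
    my $threshold     = 0;
    my $learning_rate = 0.1;        # assumed value for this sketch

    # The perceptron fires (+1) when the weighted sum of its inputs
    # exceeds the threshold, and outputs -1 otherwise.
    sub compute_output {
        my @inputs = @_;
        my $sum = 0;
        $sum += $weights[$_] * $inputs[$_] for 0 .. $#inputs;
        return $sum > $threshold ? 1 : -1;
    }

    # while some examples are incorrectly classified...
    my $misclassified = 1;
    while ($misclassified) {
        $misclassified = 0;
        for my $example (@examples) {
            my ( $expected, @inputs ) = @$example;
            my $output = compute_output(@inputs);
            next if $output == $expected;

            # ...update weights (and threshold) for each example that fails
            $misclassified = 1;
            my $delta = $learning_rate * ( $expected - $output );
            $weights[$_] += $delta * $inputs[$_] for 0 .. $#inputs;
            $threshold   -= $delta;   # equivalent to nudging a bias term
        }
    }

    printf "weights = (%s), threshold = %s\n",
           join( ', ', @weights ), $threshold;

A loop like this only terminates when the examples are linearly
separable (as the AND table is), which is why capping the number of
training passes is a sensible precaution in practice.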