AI-NaiveBayes


MANIFEST

README.pod
dist.ini
lib/AI/NaiveBayes.pm
lib/AI/NaiveBayes/Classification.pm
lib/AI/NaiveBayes/Learner.pm
t/01-learner.t
t/02-predict.t
t/author-pod-coverage.t
t/author-pod-syntax.t
t/default_training.t

README.pod



# ABSTRACT: A Bayesian classifier

=encoding utf8

=head1 SYNOPSIS

    # AI::NaiveBayes objects are created by AI::NaiveBayes::Learner
    # but for quick start you can use the 'train' class method
    # that is a shortcut using default AI::NaiveBayes::Learner settings

    my $classifier = AI::NaiveBayes->train( 
        {
            attributes => {
                sheep => 1, very => 1,  valuable => 1, farming => 1
            },
            labels => ['farming']
        },
        {
            # illustrative second training example
            attributes => { vampires => 1, mirrors => 1 },
            labels => ['vampire']
        },
    );

    my $result = $classifier->classify( { sheep => 1, farming => 1 } );
    my $best_category = $result->best_category;

=head1 DESCRIPTION

This module implements the classic "Naive Bayes" machine learning
algorithm. It is a low-level class that accepts only pre-computed
feature vectors as input; see L<AI::Classifier::Text> for a text
classifier that uses this class.

Creation of an C<AI::NaiveBayes> classifier object from training
data is done by L<AI::NaiveBayes::Learner>. For a quick start
you can use the limited C<train> class method, which trains the
classifier with default settings.

The classifier object is immutable.
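
For example, here is a minimal sketch of the explicit path through
L<AI::NaiveBayes::Learner> (the attribute weights and labels are purely
illustrative):

    use AI::NaiveBayes::Learner;

    my $learner = AI::NaiveBayes::Learner->new();    # default settings
    $learner->add_example(
        attributes => { sheep => 1, valuable => 1, farming => 1 },
        labels     => ['farming'],
    );
    my $classifier = $learner->classifier;           # an AI::NaiveBayes object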

Naive Bayes is a well-studied probabilistic algorithm often used in
automatic text categorization. Compared to other algorithms (kNN,
SVM, decision trees) it is fast and reasonably competitive in the
quality of its results.

A paper by Fabrizio Sebastiani provides a really good introduction to
text categorization.

=head1 METHODS

=over 4

=item new( model => $model )

Internal. See L<AI::NaiveBayes::Learner> to learn how to create an C<AI::NaiveBayes>
classifier from training data.

=item train( LIST of HASHREFS )

Shortcut for creating a trained classifier using the default
L<AI::NaiveBayes::Learner> settings.
Arguments are passed to the C<add_example> method of the
L<AI::NaiveBayes::Learner> object one by one.

=item classify( HASHREF )

Classifies a feature-vector of the form:

    { feature1 => weight1, feature2 => weight2, ... }
    

lib/AI/NaiveBayes.pm

=head1 NAME

AI::NaiveBayes - A Bayesian classifier

=head1 VERSION

version 0.04

lib/AI/NaiveBayes/Learner.pm

package AI::NaiveBayes::Learner;
$AI::NaiveBayes::Learner::VERSION = '0.04';
use strict;
use warnings;
use 5.010;

use List::Util qw( min sum );
use Moose;
use AI::NaiveBayes;

# accumulated attribute (feature) counts across all training examples
has attributes => (is => 'ro', isa => 'HashRef', default => sub { {} }, clearer => '_clear_attrs');
# per-label statistics gathered from the training examples
has labels     => (is => 'ro', isa => 'HashRef', default => sub { {} }, clearer => '_clear_labels');
# number of training examples seen so far
has examples   => (is => 'ro', isa => 'Int',     default => 0, clearer => '_clear_examples');

# optional feature-selection limit; the 'limit_features' predicate is true when it is set
has features_kept => (is => 'ro', predicate => 'limit_features');

# class of the object built by the 'classifier' method
has classifier_class => ( is => 'ro', isa => 'Str', default => 'AI::NaiveBayes' );

sub add_example {
    my ($self, %params) = @_;
    # both 'attributes' and 'labels' are required
    for ('attributes', 'labels') {
        die "Missing required '$_' parameter" unless exists $params{$_};
    }

    # bump the counter directly, as the 'examples' accessor is read-only
    $self->{examples}++;

    my $attributes = $params{attributes};

lib/AI/NaiveBayes/Learner.pm

The learner gathers the training data into internal structures and then
constructs a classifier when the C<classifier> method is called.
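
As a rough sketch (the example data below is made up for illustration), the
typical flow is to feed every example to the learner and only then ask it for
the classifier:

    use AI::NaiveBayes::Learner;

    my @examples = (
        { attributes => { sheep => 1, farming => 1 },    labels => ['farming'] },
        { attributes => { vampires => 1, mirrors => 1 }, labels => ['vampire'] },
    );

    my $learner = AI::NaiveBayes::Learner->new();
    $learner->add_example( %$_ ) for @examples;

    my $classifier = $learner->classifier;   # built once; the result is immutable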

=head1 ATTRIBUTES

=over 4

=item C<features_kept>

Indicates how many features should remain after calculating probabilities. By
default all of them are kept. For values greater than 1, that many features
are preserved; for values lower than 1, the given fraction of features is kept
(e.g. the top 20% of features for C<features_kept> = 0.2). A short usage
sketch follows this list.

The rest of the attributes are for the class's internal use and are therefore
not documented.

=item C<classifier_class>

The class of the classifier to be created. By default it is
C<AI::NaiveBayes>.

=back
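
For instance, a short sketch of constructing learners with a feature limit
(the concrete numbers are arbitrary):

    use AI::NaiveBayes::Learner;

    # keep only the top 20% of features when the model is built
    my $fractional = AI::NaiveBayes::Learner->new( features_kept => 0.2 );

    # keep an absolute number of features instead
    my $absolute   = AI::NaiveBayes::Learner->new( features_kept => 500 );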

=head1 METHODS

=over 4

=item C<add_example( attributes => HASHREF, labels => LIST )>

Adds a training example to the learner's internal structures: a feature
vector of attribute weights together with the list of labels it belongs to.
