AI-NaiveBayes


INSTALL

## Installing with cpanm

If you have cpanm, you only need one line:

    % cpanm AI::NaiveBayes

If it does not have permission to install modules to the current perl, cpanm
will automatically set up and install to a local::lib in your home directory.
See the local::lib documentation (https://metacpan.org/pod/local::lib) for
details on enabling it in your environment.

## Installing with the CPAN shell

Alternatively, if your CPAN shell is set up, you should just be able to do:

    % cpan AI::NaiveBayes

## Manual installation

As a last resort, you can manually install it. Download the tarball, untar it,
then build it:

    % perl Makefile.PL
    % make && make test

Then install it:

    % make install

If your perl is system-managed, you can create a local::lib in your home
directory to install modules to. For details, see the local::lib
documentation: https://metacpan.org/pod/local::lib
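
For example, with cpanm you can point the installation at a local::lib
directory explicitly (~/perl5 is just an illustrative path):

    % cpanm --local-lib=~/perl5 AI::NaiveBayes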

LICENSE

software and to any other program whose authors commit to using it.
You can use it for your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Specifically, the General Public License is designed to make
sure that you have the freedom to give away or sell copies of free
software, that you receive source code or can get it if you want it,
that you can change the software or use pieces of it in new free
programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.

  For example, if you distribute copies of a such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have.  You must make sure that they, too, receive or can get the
source code.  And you must tell them their rights.

  We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.

LICENSE


                    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License Agreement applies to any program or other work which
contains a notice placed by the copyright holder saying it may be
distributed under the terms of this General Public License.  The
"Program", below, refers to any such program or work, and a "work based
on the Program" means either the Program or any work containing the
Program or a portion of it, either verbatim or with modifications.  Each
licensee is addressed as "you".

  1. You may copy and distribute verbatim copies of the Program's source
code as you receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice and
disclaimer of warranty; keep intact all the notices that refer to this
General Public License and to the absence of any warranty; and give any
other recipients of the Program a copy of this General Public License
along with the Program.  You may charge a fee for the physical act of
transferring a copy.

LICENSE

    exchange for a fee.

Mere aggregation of another independent work with the Program (or its
derivative) on a volume of a storage or distribution medium does not bring
the other work under the scope of these terms.

  3. You may copy and distribute the Program (or a portion or derivative of
it, under Paragraph 2) in object code or executable form under the terms of
Paragraphs 1 and 2 above provided that you also do one of the following:

    a) accompany it with the complete corresponding machine-readable
    source code, which must be distributed under the terms of
    Paragraphs 1 and 2 above; or,

    b) accompany it with a written offer, valid for at least three
    years, to give any third party free (except for a nominal charge
    for the cost of distribution) a complete machine-readable copy of the
    corresponding source code, to be distributed under the terms of
    Paragraphs 1 and 2 above; or,

    c) accompany it with the information you received as to where the
    corresponding source code may be obtained.  (This alternative is
    allowed only for noncommercial distribution and only if you
    received the program in object code or executable form alone.)

Source code for a work means the preferred form of the work for making
modifications to it.  For an executable file, complete source code means
all the source code for all modules it contains; but, as a special
exception, it need not include source code for modules which are standard
libraries that accompany the operating system on which the executable
file runs, or for standard header files or definitions files that
accompany that operating system.

  4. You may not copy, modify, sublicense, distribute or transfer the
Program except as expressly provided under this General Public License.
Any attempt otherwise to copy, modify, sublicense, distribute or transfer
the Program is void, and will automatically terminate your rights to use
the Program under this License.  However, parties who have received
copies, or rights to use copies, from you under this General Public
License will not have their licenses terminated so long as such parties
remain in full compliance.

  5. By copying, distributing or modifying the Program (or any work based
on the Program) you indicate your acceptance of this license to do so,
and all its terms and conditions.

  6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the original
licensor to copy, distribute or modify the Program subject to these
terms and conditions.  You may not impose any further restrictions on the
recipients' exercise of the rights granted herein.

  7. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number.  If the Program
specifies a version number of the license which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation.  If the Program does not specify a version number of
the license, you may choose any version ever published by the Free Software
Foundation.

  8. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission.  For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this.  Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.

                            NO WARRANTY

  9. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS

LICENSE

The hypothetical commands `show w' and `show c' should show the
appropriate parts of the General Public License.  Of course, the
commands you use may be called something other than `show w' and `show
c'; they could even be mouse-clicks or menu items--whatever suits your
program.

You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary.  Here a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in the
  program `Gnomovision' (a program to direct compilers to make passes
  at assemblers) written by James Hacker.

  <signature of Ty Coon>, 1 April 1989
  Ty Coon, President of Vice

That's all there is to it!


--- The Artistic License 1.0 ---

This software is Copyright (c) 2012 by Opera Software ASA.

This is free software, licensed under:

LICENSE

  - "Reasonable copying fee" is whatever you can justify on the basis of media
    cost, duplication charges, time of people involved, and so on. (You will
    not be required to justify it to the Copyright Holder, but only to the
    computing community at large as a market that must bear the fee.) 
  - "Freely Available" means that no fee is charged for the item itself, though
    there may be fees involved in handling the item. It also means that
    recipients of the item may redistribute it under the same conditions they
    received it. 

1. You may make and give away verbatim copies of the source form of the
Standard Version of this Package without restriction, provided that you
duplicate all of the original copyright notices and associated disclaimers.

2. You may apply bug fixes, portability fixes and other modifications derived
from the Public Domain or from the Copyright Holder. A Package modified in such
a way shall still be considered the Standard Version.

3. You may otherwise modify your copy of this Package in any way, provided that
you insert a prominent notice in each changed file stating how and when you
changed that file, and provided that you do at least ONE of the following:

LICENSE

4. You may distribute the programs of this Package in object code or executable
form, provided that you do at least ONE of the following:

  a) distribute a Standard Version of the executables and library files,
     together with instructions (in the manual page or equivalent) on where to
     get the Standard Version.

  b) accompany the distribution with the machine-readable source of the Package
     with your modifications.

  c) accompany any non-standard executables with their corresponding Standard
     Version executables, giving the non-standard executables non-standard
     names, and clearly documenting the differences in manual pages (or
     equivalent), together with instructions on where to get the Standard
     Version.

  d) make other distribution arrangements with the Copyright Holder.

5. You may charge a reasonable copying fee for any distribution of this
Package.  You may charge any fee you choose for support of this Package. You
may not charge a fee for this Package itself. However, you may distribute this

META.json

   },
   "name" : "AI-NaiveBayes",
   "no_index" : {
      "directory" : [
         "examples",
         "t/lib"
      ]
   },
   "prereqs" : {
      "configure" : {
         "requires" : {
            "ExtUtils::MakeMaker" : "0"
         }
      },
      "develop" : {
         "requires" : {
            "Pod::Coverage::TrustPod" : "0",
            "Test::Pod" : "1.41",
            "Test::Pod::Coverage" : "1.08"
         }
      },
      "runtime" : {
         "requires" : {
            "File::Find::Rule" : "0.32",
            "List::Util" : "0",
            "Moose" : "1.15",
            "MooseX::Storage" : "0.25",
            "perl" : "5.010",
            "strict" : "0",
            "warnings" : "0"
         }
      },
      "test" : {
         "requires" : {
            "Test::More" : "0"
         }
      }
   },
   "release_status" : "stable",
   "resources" : {
      "repository" : {
         "type" : "git",
         "web" : "http://github.com/zby/AI-NaiveBayes"
      }
   },
   "version" : "0.04",
   "x_contributors" : [
      "schweickism <schweickism@hotmail.com>",
      "Zbigniew \u0141ukasiak <zzbbyy@gmail.com>"
   ],

META.yml

---
abstract: 'A Bayesian classifier'
author:
  - 'Zbigniew Lukasiak <zlukasiak@opera.com>'
  - 'Tadeusz Sośnierz <tsosnierz@opera.com>'
  - 'Ken Williams <ken@mathforum.org>'
build_requires:
  Test::More: '0'
configure_requires:
  ExtUtils::MakeMaker: '0'
dynamic_config: 0
generated_by: 'Dist::Zilla version 6.008, CPAN::Meta::Converter version 2.150005'
license: perl
meta-spec:
  url: http://module-build.sourceforge.net/META-spec-v1.4.html
  version: '1.4'
name: AI-NaiveBayes
no_index:
  directory:
    - examples
    - t/lib
requires:
  File::Find::Rule: '0.32'
  List::Util: '0'
  Moose: '1.15'
  MooseX::Storage: '0.25'
  perl: '5.010'
  strict: '0'
  warnings: '0'
resources:
  repository: http://github.com/zby/AI-NaiveBayes
version: '0.04'
x_contributors:
  - 'schweickism <schweickism@hotmail.com>'
  - 'Zbigniew Łukasiak <zzbbyy@gmail.com>'
x_serialization_backend: 'YAML::Tiny version 1.69'

lib/AI/NaiveBayes.pm



sub classify {
    my ($self, $newattrs) = @_;
    $newattrs or die "Missing parameter for classify()";

    my $m = $self->model;

    # Note that we're using the log(prob) here.  That's why we add instead of multiply.
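    # %scores starts from each label's prior log-probability; every known
    # feature then adds $value * log P($feature|$label) to each label's score.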

    my %scores = %{$m->{prior_probs}};
    my %features;
    while (my ($feature, $value) = each %$newattrs) {
        next unless exists $m->{attributes}{$feature};  # Ignore totally unseen features
        while (my ($label, $attributes) = each %{$m->{probs}}) {
            my $score = ($attributes->{$feature} || $m->{smoother}{$label})*$value;  # P($feature|$label)**$value
            $scores{$label} += $score;
            $features{$feature}{$label} = $score;
        }
    }

    rescale(\%scores);

    return AI::NaiveBayes::Classification->new( label_sums => \%scores, features => \%features );
}

sub rescale {
    my ($scores) = @_;

    # Scale everything back to a reasonable area in logspace (near zero), un-loggify, and normalize
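    # The normalization below is to unit Euclidean length (the squared
    # scores sum to 1), so the results are comparable scores rather than
    # probabilities that sum to 1.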
    my $total = 0;
    my $max = max(values %$scores);
    foreach (values %$scores) {
        $_ = exp($_ - $max);
        $total += $_**2;
    }
    $total = sqrt($total);
    foreach (values %$scores) {
        $_ /= $total;
    }
}


__PACKAGE__->meta->make_immutable;

1;

=pod

lib/AI/NaiveBayes.pm


    my $classifier = AI::NaiveBayes->train( 
        {
            attributes => {
                sheep => 1, very => 1,  valuable => 1, farming => 1
            },
            labels => ['farming']
        },
        {
            attributes => {
                vampires => 1, cannot => 1, see => 1, their => 1,
                images => 1, mirrors => 1
            },
            labels => ['vampire']
        },
    );

    # Classify a feature vector
    my $result = $classifier->classify({bar => 3, blurp => 2});
    
    # $result is now a AI::NaiveBayes::Classification object
    
    my $best_category = $result->best_category;

=head1 DESCRIPTION

This module implements the classic "Naive Bayes" machine learning
algorithm.  It is a low-level class that accepts only pre-computed
feature vectors as input; see L<AI::Classifier::Text> for a text
classifier that uses this class.

An C<AI::NaiveBayes> classifier object is created from training data by
L<AI::NaiveBayes::Learner>. For a quick start you can use the limited
C<train> class method, which trains the classifier in a default way.

The classifier object is immutable.
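
For more control than C<train> offers, the classifier can be built through the
learner directly. A minimal sketch, using the options shown in the
L<AI::NaiveBayes::Learner> synopsis:

    use AI::NaiveBayes::Learner;

    # keep only the best half of the features (see AI::NaiveBayes::Learner)
    my $learner = AI::NaiveBayes::Learner->new(features_kept => 0.5);
    $learner->add_example(
        attributes => { sheep => 1, very => 1, valuable => 1, farming => 1 },
        labels     => ['farming'],
    );
    my $classifier = $learner->classifier;   # an immutable AI::NaiveBayes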

Naive Bayes is a well-studied probabilistic algorithm, often used in
automatic text categorization.  Compared to other algorithms (kNN,
SVM, decision trees), it is fast and reasonably competitive in the
quality of its results.

A paper by Fabrizio Sebastiani provides a really good introduction to
text categorization:
L<http://faure.iei.pi.cnr.it/~fabrizio/Publications/ACMCS02.pdf>

=head1 METHODS

=over 4

=item new( model => $model )

lib/AI/NaiveBayes.pm

settings. 
Arguments are passed to the C<add_example> method of the L<AI::NaiveBayes::Learner>
object one by one.

=item classify( HASHREF )

Classifies a feature-vector of the form:

    { feature1 => weight1, feature2 => weight2, ... }

The result is a C<AI::NaiveBayes::Classification> object.
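
For example, reusing the classifier trained in the SYNOPSIS above:

    my $result = $classifier->classify({ sheep => 2, farming => 1 });
    my $best   = $result->best_category;     # 'farming'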

=item rescale

Internal helper used by C<classify> to normalize scores; not part of the
public API.

=back

=head1 ATTRIBUTES 

=over 4

=item model

lib/AI/NaiveBayes.pm

certain string of words in a document, so we have:

                     P(words | cat) P(cat)
    P(cat | words) = ---------------------
                            P(words)

We have applied Bayes' Theorem because C<P(cat | words)> is a difficult
quantity to compute directly, but C<P(words | cat)> and C<P(cat)> are accessible
(see below).

The greater the expression above, the greater the probability that the given
document belongs to the given category.  So we want to find the maximum
value.  We write this as

                                  P(words | cat) P(cat)
    Best category =   ArgMax      ---------------------
                    cat in cats          P(words)

Since C<P(words)> doesn't change over the range of categories, we can get rid
of it.  That's good, because we didn't want to have to compute these values
anyway.  So our new formula is:

    Best category =   ArgMax      P(words | cat) P(cat)
                    cat in cats

Finally, we note that if C<w1, w2, ... wn> are the words in the document,
then this expression is equivalent to:

    Best category =   ArgMax      P(w1|cat)*P(w2|cat)*...*P(wn|cat)*P(cat)
                    cat in cats

That's the formula I use in my document categorization code.  The last
step is the only non-rigorous one in the derivation, and this is the
"naive" part of the Naive Bayes technique.  It assumes that the
probability of each word appearing in a document is unaffected by the
presence or absence of each other word in the document.  We assume
this even though we know this isn't true: for example, the word
"iodized" is far more likely to appear in a document that contains the
word "salt" than it is to appear in a document that contains the word
"subroutine".  Luckily, as it turns out, making this assumption even
when it isn't true may have little effect on our results, as the
following paper by Pedro Domingos argues:
L<"http://www.cs.washington.edu/homes/pedrod/mlj97.ps.gz">

=head1 SEE ALSO

Algorithm::NaiveBayes(3), AI::Classifier::Text(3)

=head1 BASED ON

Much of the code and description is from L<Algorithm::NaiveBayes>.

lib/AI/NaiveBayes/Classification.pm

package AI::NaiveBayes::Classification;
$AI::NaiveBayes::Classification::VERSION = '0.04';
use strict;
use warnings;
use 5.010;
use Moose;

has features => (is => 'ro', isa => 'HashRef[HashRef]', required => 1);
has label_sums => (is => 'ro', isa => 'HashRef', required => 1);
has best_category => (is => 'ro', isa => 'Str', lazy_build => 1);

sub _build_best_category {
    my $self = shift;
    my $sc = $self->label_sums;

    my ($best_cat, $best_score) = each %$sc;
    while (my ($key, $val) = each %$sc) {
        ($best_cat, $best_score) = ($key, $val) if $val > $best_score;
    }
    return $best_cat;
}

sub find_predictors {
    my $self = shift;

    my $best_cat = $self->best_category;
    my $features = $self->features;
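    # A feature's influence is its log-score under the best category minus
    # its log-score under a competing category; entries with large absolute
    # values are the strongest predictors either way.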
    my @predictors; 
    for my $feature ( keys %$features  ) {
        for my $cat ( keys %{ $features->{$feature } } ){
            next if $cat eq $best_cat;
            push @predictors, [ $feature, $features->{$feature}{$best_cat} - $features->{$feature}{$cat} ];
        }
    }
    @predictors = sort { abs( $b->[1] ) <=> abs( $a->[1] ) } @predictors;
    return $best_cat, @predictors;
}


__PACKAGE__->meta->make_immutable;

1;

=pod

=encoding UTF-8

=head1 NAME

AI::NaiveBayes::Classification - The result of a Bayesian classification

=head1 VERSION

version 0.04

=head1 SYNOPSIS

    my $result = $classifier->classify({bar => 3, blurp => 2});
    # $result is an AI::NaiveBayes::Classification object
    say $result->best_category;
    my $predictors = $result->find_predictors;

=head1 DESCRIPTION

AI::NaiveBayes::Classification represents the result of a Bayesian
classification, produced by the AI::NaiveBayes classifier.

=head1 METHODS

=over 4

=item C<best_category()>

Returns the label that best suits the given document.

=item C<find_predictors()>

This method returns the C<best_category()>, as well as a list of all the
predictors along with their influence on the selected category. The second
value returned is a list of array references, each containing a feature name
and a number describing that feature's influence on the result. The second
part of the result may look like this:

    (
        [ 'activities',  1.2511540632952 ],
        [ 'over',       -1.0269523272981 ],
        [ 'provide',     0.8280157033269 ],
        [ 'natural',     0.7361042359385 ],
        [ 'against',    -0.6923354975173 ],
    )

=back

lib/AI/NaiveBayes/Classification.pm


This software is copyright (c) 2012 by Opera Software ASA.

This is free software; you can redistribute it and/or modify it under
the same terms as the Perl 5 programming language system itself.

=cut

__END__

# ABSTRACT: The result of a Bayesian classification

lib/AI/NaiveBayes/Learner.pm

use 5.010;

use List::Util qw( min sum );
use Moose;
use AI::NaiveBayes;

has attributes => (is => 'ro', isa => 'HashRef', default => sub { {} }, clearer => '_clear_attrs');
has labels     => (is => 'ro', isa => 'HashRef', default => sub { {} }, clearer => '_clear_labels');
has examples  => (is => 'ro', isa => 'Int',     default => 0, clearer => '_clear_examples');

has features_kept => (is => 'ro', predicate => 'limit_features');

has classifier_class => ( is => 'ro', isa => 'Str', default => 'AI::NaiveBayes' );

sub add_example {
    my ($self, %params) = @_;
    for ('attributes', 'labels') {
        die "Missing required '$_' parameter" unless exists $params{$_};
    }

    $self->{examples}++;

lib/AI/NaiveBayes/Learner.pm

        # P(attr|label) = $count/$label_tokens                         (simple)
        # P(attr|label) = ($count + 1)/($label_tokens + $vocab_size)   (with smoothing)
        # log P(attr|label) = log($count + 1) - log($label_tokens + $vocab_size)

        my $denominator = log($label_tokens + $vocab_size);

        while (my ($attribute, $count) = each %{ $labels->{$label}{attributes} }) {
            $model->{probs}{$label}{$attribute} = log($count + 1) - $denominator;
        }
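        # Worked example with hypothetical counts: $label_tokens = 10 and
        # $vocab_size = 6 give $denominator = log(16); an attribute counted
        # 3 times scores log(4) - log(16) = log(0.25) ~ -1.386, and an
        # unseen attribute (count 0) would score log(1) - log(16) ~ -2.773.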

        if ($self->limit_features) {
            my %old  = %{$model->{probs}{$label}};
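            # Log probabilities are negative, so a smaller absolute value
            # means a higher probability; sorting ascending puts the most
            # probable features first, and those are the ones kept below.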
            my @features = sort { abs($old{$a}) <=> abs($old{$b}) } keys(%old);
            my $limit = min($self->features_kept, 0+@features);
            if ($limit < 1) {
                $limit = int($limit * keys(%old));
            }
            my @top = @features[0..$limit-1];
            my %kept = map { $_ => $old{$_} } @top;
            $model->{probs}{$label} = \%kept;
        }
    }
    my $classifier_class = $self->classifier_class;
    return $classifier_class->new( model => $model );
}

sub add_hash {
    my ($first, $second) = @_;

lib/AI/NaiveBayes/Learner.pm

=head1 NAME

AI::NaiveBayes::Learner - Build an AI::NaiveBayes classifier from a set of training examples.

=head1 VERSION

version 0.04

=head1 SYNOPSIS

    my $learner = AI::NaiveBayes::Learner->new(features_kept => 0.5);
    $learner->add_example(
        attributes => { sheep => 1, very => 1, valuable => 1, farming => 1 },
        labels => ['farming'] 
    );

    my $classifier = $learner->classifier;

=head1 DESCRIPTION

This is a trainer of AI::NaiveBayes classifiers.  It stores the training
data passed to the C<add_example> method in internal structures, and
constructs a classifier from them when the C<classifier> method is called.

=head1 ATTRIBUTES

=over 4

=item C<features_kept>

Indicates how many features should remain after calculating probabilities. By
default all of them will be kept. For values greater than 1, that many of the
most probable features will be preserved. For values lower than 1, the
specified fraction of features will be kept (e.g. the top 20% of features for
C<features_kept> = 0.2).
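
For example (hypothetical settings):

    # keep the 50 most probable features per label
    AI::NaiveBayes::Learner->new(features_kept => 50);

    # keep the top 20% of features per label
    AI::NaiveBayes::Learner->new(features_kept => 0.2);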

The rest of the attributes are for the class's internal usage, and thus not
documented.

=item C<classifier_class>

The class of the classifier to be created.  By default it is
C<AI::NaiveBayes>

=back

=head1 METHODS

=over 4

=item C<add_example( attributes => HASHREF, labels => LIST )>

Saves the information from a training example into internal data structures.
C<attributes> should be of the form:

    { feature1 => weight1, feature2 => weight2, ... }

C<labels> should be a list of strings denoting one or more classes to which
the example belongs.

=item C<classifier()>

Creates an AI::NaiveBayes classifier based on the data accumulated before.

=back

t/01-learner.t

use AI::NaiveBayes::Learner;
ok(1); # If we made it this far, we're loaded.

my $learner = AI::NaiveBayes::Learner->new();

# Populate
$learner->add_example( attributes => _hash(qw(sheep very valuable farming)),
		   labels => ['farming'] );
is $learner->{labels}{farming}{count}, 1;

$learner->add_example( attributes => _hash(qw(farming requires many kinds animals)),
		   labels => ['farming'] );
is $learner->{labels}{farming}{count}, 2;
is keys %{$learner->{labels}}, 1;

$learner->add_example( attributes => _hash(qw(vampires drink blood vampires may staked)),
		   labels => ['vampire'] );
is $learner->{labels}{vampire}{count}, 1;

$learner->add_example( attributes => _hash(qw(vampires cannot see their images mirrors)),
		   labels => ['vampire'] );
is $learner->{labels}{vampire}{count}, 2;
is keys %{$learner->{labels}}, 2;

# features_kept > 1
$learner = AI::NaiveBayes::Learner->new(features_kept => 5);
$learner->add_example( attributes => _hash(qw(one two three four)),
		   labels => ['farming'] );
$learner->add_example( attributes => _hash(qw(five six seven eight)),
		   labels => ['farming'] );
$learner->add_example( attributes => _hash(qw(one two three four five)),
		   labels => ['farming'] );
my $model = $learner->classifier->model;
is keys %{$model->{probs}{farming}}, 5, '5 features kept';
is join(" ", sort { $a cmp $b } keys %{$model->{probs}{farming}}), 'five four one three two';

# features_kept < 1
$learner = AI::NaiveBayes::Learner->new(features_kept => 0.5);
$learner->add_example( attributes => _hash(qw(one two three four)),
		   labels => ['farming'] );
$learner->add_example( attributes => _hash(qw(five six seven eight)),
		   labels => ['farming'] );
$learner->add_example( attributes => _hash(qw(one two three four)),
		   labels => ['farming'] );
$model = $learner->classifier->model;
is keys %{$model->{probs}{farming}}, 4, 'half features kept';
is join(" ", sort { $a cmp $b } keys %{$model->{probs}{farming}}), 'four one three two';

sub _hash { +{ map {$_,1} @_ } }

t/02-predict.t

use Test::More tests => 12;
use AI::NaiveBayes;
use AI::NaiveBayes::Learner;
ok(1); # If we made it this far, we're loaded.

my $lr = AI::NaiveBayes::Learner->new();

# Populate
$lr->add_example( attributes => _hash(qw(sheep very valuable farming)),
           labels => ['farming'] );
$lr->add_example( attributes => _hash(qw(farming requires many kinds animals)),
           labels => ['farming'] );
$lr->add_example( attributes => _hash(qw(vampires drink blood vampires may staked)),
           labels => ['vampire'] );
$lr->add_example( attributes => _hash(qw(vampires cannot see their images mirrors)),
           labels => ['vampire'] );

my $classifier = $lr->classifier;
ok $classifier;

# Predict
my $s = $classifier->classify( _hash(qw(i would like to begin farming sheep)) );
my $h = $s->label_sums;
ok $h;
ok $h->{farming} > 0.5;
ok $h->{vampire} < 0.5;

$s = $classifier->classify( _hash(qw(i see that many vampires may have eaten my beautiful daughter's blood)) );
$h = $s->label_sums;
ok $h;
ok $h->{farming} < 0.5;
ok $h->{vampire} > 0.5;

# Find predictors

my $p = $classifier->classify( _hash( qw(i would like to begin farming sheep)) );
my( $best_cat, @predictors ) = $p->find_predictors();
is( $best_cat, 'farming', 'Best category' );
is( scalar @predictors, 2, 'farming and sheep - two predictors' );
is( $predictors[0][0], 'farming', 'Farming is the best predictor' );

# Prior probs
$lr = AI::NaiveBayes::Learner->new();

# Populate
$lr->add_example( attributes => _hash(qw(sheep very valuable farming)),
           labels => ['farming'] );
$lr->add_example( attributes => _hash(qw(farming requires many kinds animals)),
           labels => ['farming'] );
$lr->add_example( attributes => _hash(qw(good soil)),
           labels => ['farming'] );
$lr->add_example( attributes => _hash(qw(vampires drink blood vampires may staked)),
           labels => ['vampire'] );

$classifier = $lr->classifier;

# Predict
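# Every feature here is unseen by the model, so each label's score reduces to
# its prior; with three farming examples and one vampire example the
# farming/vampire ratio should be about 3.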
$s = $classifier->classify( _hash(qw(jakis tekst po polsku)) );
$h = $s->label_sums;
ok(abs( 3 - $h->{farming} / $h->{vampire} ) < 0.01, 'Prior probabilities' );


t/default_training.t

use Test::More tests => 2;
use AI::NaiveBayes;
ok(1); # If we made it this far, we're loaded.

my $classifier = AI::NaiveBayes->train( 
    {
        attributes => _hash(qw(sheep very valuable farming)),
        labels => ['farming']
    },
    {
        attributes => _hash(qw(vampires cannot see their images mirrors)),
        labels => ['vampire']
    },
);

isa_ok( $classifier, 'AI::NaiveBayes' );


################################################################
sub _hash { +{ map {$_,1} @_ } }


