AI-Calibrate


Changes  view on Meta::CPAN

Revision history for Perl extension AI::Calibrate.

1.5   Fri Aug 3 2012
      - Changes to ./t/AI-Calibrate-1.t to let it pass with almost-equal
        numbers.

1.4   Thu Aug 2 2012
      - Revised calibration algorithm based on a bug report
      - Updated tests in ./t
      - Added ./t/AI-Calibrate-KL.t using Kun Liu's dataset.
      - Added ./t/AI-Calibrate-pathologies.t to test for pathological cases.

1.3   Fri Nov 4
      - Removed dependency on Test::Deep, added explicit declaration of
        dependency on Test::More to Makefile.PL

1.2   Thu Nov 3
      - Fixed test ./t/AI-Calibrate-NB.t so that test wouldn't fail.  Used to
        call is_deeply, which was failing on slight differences between
        floating point numbers.  Now compares with a small tolerance.

1.1   Thu Feb 28 19:00:06 2008
      - Added new function print_mapping
      - Added new test file AI-Calibrate-NB.t which, if AI::NaiveBayes1 is
        present, trains a classifier and calibrates it.

1.0   Thu Feb 05 11:37:31 2008
      - First public release to CPAN.

0.01  Thu Jan 24 11:37:31 2008
	- original version; created by h2xs 1.23 with options
		-XA -n AI::Calibrate

MANIFEST  view on Meta::CPAN

Changes
Makefile.PL
MANIFEST
README
t/AI-Calibrate-1.t
t/AI-Calibrate-pathologies.t
t/AI-Calibrate-NB.t
t/AI-Calibrate-KL.t
lib/AI/Calibrate.pm
META.yml                                 Module YAML meta-data (added by MakeMaker)
META.json                                Module JSON meta-data (added by MakeMaker)

META.json  view on Meta::CPAN

{
   "abstract" : "Perl module for producing probabilities from classifier scores",
   "author" : [
      "Tom Fawcett <tfawcett@acm.org>"
   ],
   "dynamic_config" : 1,
   "generated_by" : "ExtUtils::MakeMaker version 6.62, CPAN::Meta::Converter version 2.112150",
   "license" : [
      "unknown"
   ],
   "meta-spec" : {
      "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec",
      "version" : "2"
   },
   "name" : "AI-Calibrate",
   "no_index" : {
      "directory" : [
         "t",
         "inc"
      ]
   },
   "prereqs" : {
      "build" : {
         "requires" : {
            "ExtUtils::MakeMaker" : 0
         }
      },
      "configure" : {
         "requires" : {
            "ExtUtils::MakeMaker" : 0
         }
      },
      "runtime" : {
         "requires" : {
            "Test::More" : 0
         }
      }
   },
   "release_status" : "stable",
   "version" : "1.5"
}

META.yml  view on Meta::CPAN

---
abstract: 'Perl module for producing probabilities from classifier scores'
author:
  - 'Tom Fawcett <tfawcett@acm.org>'
build_requires:
  ExtUtils::MakeMaker: 0
configure_requires:
  ExtUtils::MakeMaker: 0
dynamic_config: 1
generated_by: 'ExtUtils::MakeMaker version 6.62, CPAN::Meta::Converter version 2.112150'
license: unknown
meta-spec:
  url: http://module-build.sourceforge.net/META-spec-v1.4.html
  version: 1.4
name: AI-Calibrate
no_index:
  directory:
    - t
    - inc
requires:
  Test::More: 0
version: 1.5

Makefile.PL  view on Meta::CPAN

use 5.008008;
use ExtUtils::MakeMaker;
# See lib/ExtUtils/MakeMaker.pm for details of how to influence
# the contents of the Makefile that is written.
WriteMakefile(
    NAME              => 'AI::Calibrate',
    VERSION_FROM      => 'lib/AI/Calibrate.pm', # finds $VERSION
    PREREQ_PM         => {Test::More => 0}, # e.g., Module::Name => 1.1
    ($] >= 5.005 ?     ## Add these new keywords supported since 5.005
      (ABSTRACT_FROM  => 'lib/AI/Calibrate.pm', # retrieve abstract from module
       AUTHOR         => 'Tom Fawcett <tfawcett@acm.org>') : ()),
);

README  view on Meta::CPAN

-*- Mode: Text -*-

AI-Calibrate version 1.5
========================

AI::Calibrate - Perl module for producing probabilities from classifier scores

In AI, classifiers usually return some sort of instance score along with their
classifications.  These scores can be used as probabilities in various
calculations, but first they need to be calibrated.  Naive Bayes, for example,
is a very useful classifier, but the scores it produces are usually "bunched"
around 0 and 1, making these scores poor probability estimates.  Support
vector machines have a similar problem.  Both classifier types should be
calibrated before their scores are used as probability estimates.  This module
calibrates a classifier using the Pool Adjacent Violators algorithm.
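
A minimal usage sketch (the point set and variable names here are
illustrative only):

   use AI::Calibrate ':all';

   # Each element is [classifier_score, true_class], class being 0 or 1.
   my $points = [ [0.9, 1], [0.7, 0], [0.4, 1], [0.1, 0] ];

   my $calibrated = calibrate($points);       # [ [score, prob], ... ]
   my $prob = score_prob($calibrated, 0.65);  # probability for a new score
   print_mapping($calibrated);                # print score-to-probability ranges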

INSTALLATION

To install this module type the following:

   perl Makefile.PL
   make
   make test
   make install

DEPENDENCIES

No other modules are required to use AI::Calibrate itself.  Test::More is
needed to run the test suite, and the optional test t/AI-Calibrate-NB.t will
also exercise AI::NaiveBayes1 if it is installed.

COPYRIGHT AND LICENCE

Copyright (C) 2008 by Tom Fawcett

This library is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either Perl version 5.8.8 or,
at your option, any later version of Perl 5 you may have available.


lib/AI/Calibrate.pm  view on Meta::CPAN

package AI::Calibrate;

use 5.008008;
use strict;
use warnings;

use vars qw($VERSION);
$VERSION = "1.5";

require Exporter;

our @ISA = qw(Exporter);

# This allows declaration:
#	use AI::Calibrate ':all';
# If you do not need this, moving things directly into @EXPORT or @EXPORT_OK
# will save memory.
our %EXPORT_TAGS = (
    'all' => [
        qw(
              calibrate
              score_prob
              print_mapping
            )
    ]
);

our @EXPORT_OK = ( @{ $EXPORT_TAGS{'all'} } );
our @EXPORT = qw( );

use constant DEBUG => 0;

# Structure slot names
use constant SCORE => 0;
use constant PROB  => 1;

=head1 NAME

AI::Calibrate - Perl module for producing probabilities from classifier scores

=head1 SYNOPSIS

  use AI::Calibrate ':all';
  ... train a classifier ...
  ... test classifier on $points ...
  $calibrated = calibrate($points);

=head1 DESCRIPTION

Classifiers usually return some sort of instance score along with their
classifications.  These scores can be used as probabilities in various
calculations, but first they need to be I<calibrated>.  Naive Bayes, for
example, is a very useful classifier, but the scores it produces are usually
"bunched" around 0 and 1, making these scores poor probability estimates.
Support vector machines have a similar problem.  Both classifier types should
be calibrated before their scores are used as probability estimates.

This module calibrates classifier scores using a method called the Pool
Adjacent Violators (PAV) algorithm.  After you train a classifier, you take a
(usually separate) set of test instances and run them through the classifier,
collecting the scores assigned to each.  You then supply this set of instances
to the calibrate function defined here, and it will return a set of ranges
mapping from a score range to a probability estimate.

For example, assume you have the following set of instance results from your
classifier.  Each result is of the form C<[ASSIGNED_SCORE, TRUE_CLASS]>:

 my $points = [
              [.9, 1],
              [.8, 1],
              [.7, 0],
              [.6, 1],
              [.55, 1],
              [.5, 1],
              [.45, 0],
              [.4, 1],
              [.35, 1],
              [.3, 0 ],
              [.27, 1],
              [.2, 0 ],
              [.18, 0],
              [.1, 1 ],
              [.02, 0]
             ];

If you then call calibrate($points), it will return this structure:

 [
   [.8,    1 ],
   [.5,  3/4 ],
   [.35, 2/3 ],
   [.27, 1/2 ],
   [.1,  1/3 ]
  ]

This means that, given a SCORE produced by the classifier, you can map the
SCORE onto a probability like this:

               SCORE >= .8        prob = 1
         .8  > SCORE >= .5        prob = 3/4
         .5  > SCORE >= .35       prob = 2/3
         .35 > SCORE >= .27       prob = 1/2
         .27 > SCORE >= .1        prob = 1/3
         .1  > SCORE              prob = 0
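
Scores below the lowest bound fall off the end of the structure and map to a
probability of 0.  The B<score_prob> function (described below) performs this
lookup; for instance, with an arbitrary new score of 0.62:

  my $calibrated = calibrate($points);
  my $p = score_prob($calibrated, 0.62);   # 3/4, since .8 > 0.62 >= .5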

For a realistic example of classifier calibration, see the test file
t/AI-Calibrate-NB.t, which uses the AI::NaiveBayes1 module to train a Naive
Bayes classifier then calibrates it using this module.

=cut

=head1 FUNCTIONS

=over 4

=item B<calibrate>

This is the main calibration function.  The calling form is:

  my $calibrated = calibrate( $data, $sorted);

$data looks like: C<[ [score, class], [score, class], [score, class]...]>
Each score is a number.  Each class is either 0 (negative class) or 1
(positive class).

$sorted is boolean (0 by default) indicating whether the data are already
sorted by score.  Unless this is set to 1, calibrate() will sort the data
itself.
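
For example, if C<@scored> (a hypothetical array of C<[score, class]> pairs)
is already sorted by descending score, the internal sort can be skipped:

  my $calibrated = calibrate(\@scored, 1);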

Calibrate returns a reference to an ordered list of references:

  [ [score, prob], [score, prob], [score, prob] ... ]

Scores will be in descending numerical order.  See the DESCRIPTION section for
how this structure is interpreted.  You can pass this structure to the
B<score_prob> function, along with a new score, to get a probability.

=cut

sub calibrate {
    my($data, $sorted) = @_;

    if (DEBUG) {
        print "Original data:\n";
        for my $pair (@$data) {
            my($score, $prob) = @$pair;
            print "($score, $prob)\n";
        }
    }

    #  Copy the data over so PAV can clobber the PROB field
    my $new_data = [ map([@$_], @$data) ];

    #   If not already sorted, sort data decreasing by score
    if (!$sorted) {
        $new_data = [ sort { $b->[SCORE] <=> $a->[SCORE] } @$new_data ];
    }

    PAV($new_data);

    if (DEBUG) {
        print("After PAV, vector is:\n");
        print_vector($new_data);
    }

    my(@result);
    my( $last_prob, $last_score);

    push(@$new_data, [-1e10, 0]);

    for my $pair (@$new_data) {
        print "Seeing @$pair\n" if DEBUG;
        my($score, $prob) = @$pair;
        if (defined($last_prob) and $prob < $last_prob) {
            print("Pushing [$last_score, $last_prob]\n") if DEBUG;
            push(@result, [$last_score, $last_prob] );
        }
        $last_prob = $prob;
        $last_score = $score;
    }

    return \@result;
}


sub PAV {
    my ( $result ) = @_;

    for ( my $i = 0; $i < @$result - 1; $i++ ) {
        if ( $result->[$i][PROB] < $result->[ $i + 1 ][PROB] ) {
            $result->[$i][PROB] =
                ( $result->[$i][PROB] + $result->[ $i + 1 ][PROB] ) / 2;
            $result->[ $i + 1 ][PROB] = $result->[$i][PROB];
            print "Averaging elements $i and ", $i + 1, "\n" if DEBUG;

            for ( my $j = $i - 1; $j >= 0; $j-- ) {
                if ( $result->[$j][PROB] < $result->[ $i + 1 ][PROB] ) {
                    my $d = ( $i + 1 ) - $j + 1;
                    flatten( $result, $j, $d );
                }
                else {
                    last;
                }
            }
        }
    }
}

sub print_vector {
    my($vec) = @_;
    for my $pair (@$vec) {
        print join(", ", @$pair), "\n";
    }
}


sub flatten {
    my ( $vec, $start, $len ) = @_;
    if (DEBUG) {
        print "Flatten called on vec, $start, $len\n";
        print "Vector before: \n";
        print_vector($vec);
    }

    my $sum = 0;
    for my $i ( $start .. $start + $len-1 ) {
        $sum += $vec->[$i][PROB];
    }
    my $avg = $sum / $len;
    print "Sum = $sum, avg = $avg\n" if DEBUG;
    for my $i ( $start .. $start + $len -1) {
        $vec->[$i][PROB] = $avg;
    }
    if (DEBUG) {
        print "Vector after: \n";
        print_vector($vec);
    }
}

=item B<score_prob>

This is a simple utility function that takes the structure returned by
B<calibrate>, along with a new score, and returns the probability estimate.
Example calling form:

  $p = score_prob($calibrated, $score);

Once you have a trained, calibrated classifier, you could imagine using it
like this:

 $calibrated = calibrate( $calibration_set );
 print "Input instances, one per line:\n";
 while (<>) {
    chomp;
    my(@fields) = split;
    my $score = classifier(@fields);
    my $prob = score_prob($calibrated, $score);
    print "Estimated probability: $prob\n";
 }

=cut

sub score_prob {
    my($calibrated, $score) = @_;

    my $last_prob = 1.0;

    for my $tuple (@$calibrated) {
        my($bound, $prob) = @$tuple;
        return $prob if $score >= $bound;
        $last_prob = $prob;
    }
    #  If we drop off the end, probability estimate is zero
    return 0;
}


=item B<print_mapping>

This is a simple utility function that takes the structure returned by
B<calibrate> and prints out a simple list of lines describing the mapping
created.

Example calling form:

  print_mapping($calibrated);

Sample output:

  1.00 > SCORE >= 1.00     prob = 1.000
  1.00 > SCORE >= 0.71     prob = 0.667
  0.71 > SCORE >= 0.39     prob = 0.000
  0.39 > SCORE >= 0.00     prob = 0.000

These ranges are not necessarily compressed/optimized, as this sample output
shows.

=back

=cut
sub print_mapping {
    my($calibrated) = @_;
    my $last_bound = 1.0;
    for my $tuple (@$calibrated) {
        my($bound, $prob) = @$tuple;
        printf("%0.3f > SCORE >= %0.3f     prob = %0.3f\n",
               $last_bound, $bound, $prob);
        $last_bound = $bound;
    }
    if ($last_bound != 0) {
        printf("%0.3f > SCORE >= %0.3f     prob = %0.3f\n",
               $last_bound, 0, 0);
    }
}

=head1 DETAILS

The PAV algorithm is conceptually straightforward.  Given a set of training
cases ordered by the scores assigned by the classifier, it first assigns a
probability of one to each positive instance and a probability of zero to each
negative instance, and puts each instance in its own group.  It then looks, at
each iteration, for adjacent violators: adjacent groups whose probabilities
locally increase rather than decrease.  When it finds such groups, it pools
them and replaces their probability estimates with the average of the group's
values.  It continues this process of averaging and replacement until the
entire sequence is monotonically decreasing.  The result is a sequence of
instances, each of which has a score and an associated probability estimate,
which can then be used to map scores into probability estimates.
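
As a small hand-worked illustration (the scores and class labels are invented
for this example), start from:

  scores (descending):  .9   .7   .6   .5
  true class:            1    0    1    0
  initial estimates:     1    0    1    0

The estimates at scores .7 and .6 form an adjacent violation (a 0 followed by
a 1), so PAV pools them and replaces both with their average:

  pooled estimates:      1   .5   .5    0    (now monotonically non-increasing)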

For further information on the PAV algorithm, you can read the section in my
paper referenced below.

=head1 EXPORT

This module exports three functions: calibrate, score_prob and print_mapping.

=head1 BUGS

None known.  This implementation is straightforward but inefficient (its time
is O(n^2) in the length of the data series).  A linear time algorithm is
known, and in a later version of this module I'll probably implement it.
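
For reference, the linear-time variant is usually written as a single pass
that keeps a stack of pooled blocks.  The sketch below is an illustration of
that idea only; it is not part of this module's API, and the function name
and data layout are invented here:

  sub pav_linear {
      my ($data) = @_;  # [ [score, class], ... ], sorted descending by score
      my @blocks;       # each block: { sum => #positives, n => #instances,
                        #               score => lowest score in the block }
      for my $pair (@$data) {
          my ($score, $class) = @$pair;
          push @blocks, { sum => $class, n => 1, score => $score };
          # Pool while the newest block's mean would break the non-increasing
          # order (>= also collapses runs of equal probability).
          while (@blocks >= 2
                 && $blocks[-1]{sum} * $blocks[-2]{n}
                    >= $blocks[-2]{sum} * $blocks[-1]{n}) {
              my $top = pop @blocks;
              $blocks[-1]{sum}   += $top->{sum};
              $blocks[-1]{n}     += $top->{n};
              $blocks[-1]{score}  = $top->{score};
          }
      }
      return [ map { [ $_->{score}, $_->{sum} / $_->{n} ] } @blocks ];
  }

Each instance is pushed once and popped at most once, so the whole pass runs
in time linear in the number of instances.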

=head1 SEE ALSO

The AI::NaiveBayes1 perl module.

My paper "PAV and the ROC Convex Hull" has a good discussion of the PAV
algorithm, including examples:
L<http://home.comcast.net/~tom.fawcett/public_html/papers/PAV-ROCCH-dist.pdf>

If you want to read more about the general issue of classifier calibration,
here are some good papers, which are freely available on the web:

I<"Transforming classifier scores into accurate multiclass probability estimates">
by Bianca Zadrozny and Charles Elkan

I<"Predicting Good Probabilities With Supervised Learning">
by A. Niculescu-Mizil and R. Caruana


=head1 AUTHOR

Tom Fawcett, E<lt>tom.fawcett@gmail.comE<gt>

=head1 COPYRIGHT AND LICENSE

Copyright (C) 2008-2012 by Tom Fawcett

This library is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either Perl version 5.8.8 or,
at your option, any later version of Perl 5 you may have available.

=cut
1;

t/AI-Calibrate-1.t  view on Meta::CPAN

#  -*- Mode: CPerl -*-
use strict;
# Before `make install' is performed this script should be runnable with
# `make test'. After `make install' it should work as `perl AI-Calibrate.t'

#########################

use Test::More tests => 34;
BEGIN { use_ok('AI::Calibrate', ':all') };

srand;

sub deeply_approx {
    # Like Test::More::is_deeply but uses approx() to compare elements.
    my( $got, $expected ) = @_;
    my $EPSILON = 1.0e-6;
    sub max {  $_[0] > $_[1] ? $_[0] : $_[1] }
    sub approx {
        my($x, $y) = @_;
        print("approx($x, $y)\n");
        if ($x == 0 and $y == 0) {
            return(1);
        } else {
            return(abs($x-$y) / max($x,$y) < $EPSILON);
        }
    }
    for my $i (0 .. $#{$got}) {
        my $g = $got->[$i];
        if (defined($expected->[$i])) {
            my $e = $expected->[$i];
            if (!approx($g->[0], $e->[0])) {
                return(0);
            }
            if (!approx($g->[1], $e->[1])) {
                return(0);
            }
        } else {
            return(0);
        }
    }
    return(1);
}

#  Given an array reference, shuffle the array.  This is the Fisher-Yates code
#  from The Perl Cookbook.
sub shuffle_array {
   my($array) = shift;
   my($i);
   for ($i = @$array ; --$i; ) {
      my $j = int rand ($i+1);
      next if $i == $j;
      @$array[$i,$j] = @$array[$j,$i]
   }
}

#  These points are from the ROCCH-PAV paper, Table 1
#  Format of each point is [Threshold, Class].
my $points = [
              [.9, 1],
              [.8, 1],
              [.7, 0],
              [.6, 1],
              [.55, 1],
              [.5, 1],
              [.45, 0],
              [.4, 1],
              [.35, 1],
              [.3, 0 ],
              [.27, 1],
              [.2, 0 ],
              [.18, 0],
              [.1, 1 ],
              [.02, 0]
             ];

my $calibrated_expected =
  [
   [0.8, 1],
   [0.5, 0.75],
   [0.35, 0.666666666666667],
   [0.27, 0.5],
   [0.1, 0.333333333333333]
  ];

my $calibrated_got = calibrate( $points, 1 );

pass("ran_ok");

ok(deeply_approx($calibrated_got, $calibrated_expected),
   "pre-sorted calibration");

#  Shuffle the arrays a bit and try calibrating again

for (1 .. 10) {
    shuffle_array($points);
    my $calibrated_got = calibrate($points, 0);
    ok(deeply_approx($calibrated_got, $calibrated_expected),
       "unsorted cal $_");
}

#  Tweak the thresholds

for (1 .. 10) {
    my $delta = rand;
    my @delta_points;
    for my $point (@$points) {
        my($thresh, $class) = @$point;
        push(@delta_points, [ $thresh+$delta, $class]);
    }
    my @delta_expected;
    for my $point (@$calibrated_expected) {
        my($thresh, $class) = @$point;
        push(@delta_expected, [ $thresh+$delta, $class]);
    }
    my $delta_got = calibrate(\@delta_points, 0);
    ok(deeply_approx($delta_got, \@delta_expected), "unsorted cal $_");
}

my @test_estimates =
  ( [100, 1],
    [.9,    1 ],
    [.8,   1],
    [.7,  3/4 ],
    [.5,  3/4 ],
    [.45, 2/3 ],
    [.35, 2/3 ],
    [.3,  1/2 ],
    [.2,  1/3 ],
    [.02,   0 ],
    [.00001, 0]
);


print "Using this mapping:\n";
print_mapping($calibrated_got);
print;

for my $pair (@test_estimates) {
    my($score, $prob_expected) = @$pair;
    my $prob_got = score_prob($calibrated_got, $score);
    is($prob_got, $prob_expected, "score_prob test @$pair");
}

t/AI-Calibrate-KL.t  view on Meta::CPAN

#  -*- Mode: CPerl -*-
use strict;
# Before `make install' is performed this script should be runnable with
# `make test'. After `make install' it should work as `perl AI-Calibrate.t'

#########################

use Test::More tests => 4;
BEGIN { use_ok('AI::Calibrate', ':all') };

sub trim($) {
    my $string = shift;
    $string =~ s/^\s+//;
    $string =~ s/\s+$//;
    return $string;
}

#  These points are from Kun Liu
#  Format of each point is [Threshold, Class].
my $points = [
              [0.999,	1],
              [0.998,	1],
              [0.742,	0],
              [0.737,	1],
              [0.685,	1],
              [0.636,	1],
              [0.613,	1],
              [0.598,	1],
              [0.559,	1],
              [0.542,	1],
              [0.541,	1],
              [0.505,	1],
              [0.490,	0],
              [0.477,	1],
              [0.475,	1],
              [0.442,	0],
              [0.442,	0],
              [0.439,	1],
              [0.425,	1],
              [0.413,	0],
              [0.411,	0],
              [0.409,	0],
              [0.401,	1],
              [0.399,	0],
              [0.386,	0],
              [0.385,	0],
              [0.375,	1],
              [0.374,	0],
              [0.369,	0],
              [0.367,	1],
              [0.362,	1],
              [0.359,	1],
              [0.359,	0],
             ];

my $calibrated_expected =
  [[0.998, 1],
   [0.505, 0.9],
   [0.475, 0.666666666666667],
   [0.425, 0.5],
   [0.359, 0.384615384615384]
  ];

my $calibrated_got = calibrate( $points, 1 );

pass("ran_ok");

is_deeply($calibrated_got, $calibrated_expected, "calibration");



my $expected_mapping = "
1.000 > SCORE >= 0.998     prob = 1.000
0.998 > SCORE >= 0.505     prob = 0.900
0.505 > SCORE >= 0.475     prob = 0.667
0.475 > SCORE >= 0.425     prob = 0.500
0.425 > SCORE >= 0.359     prob = 0.385
0.359 > SCORE >= 0.000     prob = 0.000
";

my $output = '';
open TOOUTPUT, '>', \$output or die "Can't open TOOUTPUT: $!";
my $stdout = select(TOOUTPUT);
print_mapping($calibrated_got);
close(TOOUTPUT);
select $stdout;

is(trim($output), trim($expected_mapping), "printed mapping");

t/AI-Calibrate-NB.t  view on Meta::CPAN

#  -*- Mode: CPerl -*-
use English;
use strict;
# Before `make install' is performed this script should be runnable with
# `make test'. After `make install' it should work as `perl AI-Calibrate.t'

use Test::More;

eval("use AI::NaiveBayes1");
if ($EVAL_ERROR) {
    plan skip_all => 'AI::NaiveBayes1 does not seem to be present';
} else {
    plan tests => 2;
}

use_ok('AI::Calibrate', ':all');

my @instances =
  ( [ { outlook=>'sunny',temperature=>85,humidity=>85,windy=>'FALSE'},
      'no'],
    [ {outlook=>'sunny',temperature=>80,humidity=>90,windy=>'TRUE'},
      'no'],
    [ {outlook=>'overcast',temperature=>83,humidity=>86,windy=>'FALSE'},
      'yes'],
    [ {outlook=>'rainy',temperature=>70,humidity=>96,windy=>'FALSE'},
      'yes'],
    [ {outlook=>'rainy',temperature=>68,humidity=>80,windy=>'FALSE'},
      'yes'],
    [ {outlook=>'rainy',temperature=>65,humidity=>70,windy=>'TRUE'},
      'no'],
    [ {outlook=>'overcast',temperature=>64,humidity=>65,windy=>'TRUE'},
      'yes'],
    [ {outlook=>'sunny',temperature=>72,humidity=>95,windy=>'FALSE'},
      'no'],
    [ {outlook=>'sunny',temperature=>69,humidity=>70,windy=>'FALSE'},
      'yes'],
    [ {outlook=>'rainy',temperature=>75,humidity=>80,windy=>'FALSE'},
      'yes'],
    [ {outlook=>'sunny',temperature=>75,humidity=>70,windy=>'TRUE'},
      'yes'],
    [ {outlook=>'overcast',temperature=>72,humidity=>90,windy=>'TRUE'},
      'yes'],
    [ {outlook=>'overcast',temperature=>81,humidity=>75,windy=>'FALSE'},
      'yes'],
    [ {outlook=>'rainy',temperature=>71,humidity=>91,windy=>'TRUE'},
      'no']
    );

my $nb = AI::NaiveBayes1->new;
$nb->set_real('temperature', 'humidity');

for my $inst (@instances) {
    my($attrs, $play) = @$inst;
    $nb->add_instance(attributes=>$attrs, label=>"play=$play");
}

$nb->train;

my @points;
for my $inst (@instances) {
    my($attrs, $play) = @$inst;

    my $ph = $nb->predict(attributes=>$attrs);

    my $play_score = $ph->{"play=yes"};
    push(@points, [$play_score, ($play eq "yes" ? 1 : 0)]);
}

my $calibrated = calibrate(\@points, 0); # not sorted

print "Mapping:\n";
print_mapping($calibrated);

my(@expected) =
  (
   [0.779495793582905, 1],
   [0.535425255450615, 0.666666666666667]
  );

for my $i (0 .. $#expected) {
    print "$i = @{$expected[$i]}\n";
}

# This fails because two numbers differ at the 15th digit:
# is_deeply($calibrated, \@expected, "Naive Bayes calibration test");

sub close_enough {
    my($x, $y) = @_;
    return(abs($x - $y) < 1.0e-5);
}

sub lists_close_enough {
    my($got, $expected) = @_;
    if (@$got != @$expected) {
        return 0;
    }
    for my $i (0 .. $#{$got}) {
        for my $elem (0, 1) {
            if (! close_enough($got->[$i][$elem], $expected->[$i][$elem])) {
                diag(sprintf( "Got: %f\n", $got->[$i]));
                diag(sprintf( "Expected: %f\n", $expected->[$i]));
                return 0;
            }
        }
    }
    return 1;
}

ok(lists_close_enough($calibrated, \@expected),
   'Calibration of NB1 results');

t/AI-Calibrate-pathologies.t  view on Meta::CPAN

#  -*- Mode: CPerl -*-
use strict;
# Before `make install' is performed this script should be runnable with
# `make test'. After `make install' it should work as `perl AI-Calibrate.t'

use Test::More tests => 6;
BEGIN { use_ok('AI::Calibrate', ':all') };

my $points0 = [ ];


use Data::Dumper;

is_deeply( calibrate($points0), [], "empty point set");

my $points1 = [
    [.9, 1]
    ];

is_deeply(calibrate($points1), [[0.9,1]], "Singleton point set");

my $points2 = [
    [.8, 1],
    [.7, 0],
    ];

is_deeply(calibrate($points2), [[0.8, 1]], "two-point perfect");

my $points3 = [
    [.8, 0],
    [.7, 1],
    ];

is_deeply(calibrate($points3), [[0.7, 0.5]], "two-point anti-perfect");

my $points4 = [
    [.8, 0],
    [.8, 1],
    ];

is_deeply(calibrate($points4), [[0.8, 0.5]], "two-point conflicting");
