* shuffle data
* added import tag ":process_data"
* added more useful data to the confusion matrix:
* sums of columns and rows to make it look more classic :)
* more_stats option to show more stats:
* precision, specificity, F1 score, negative predictive value, false negative rate, false positive rate
* false discovery rate, false omission rate, balanced accuracy
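These extra stats are standard derivations from the four confusion-matrix counts (TP, TN, FP, FN). For reference, a minimal sketch with made-up counts (variable and key names here are illustrative, not necessarily the module's):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# illustrative counts; the module derives these from the filled CSV
my %m = ( tp => 40, tn => 45, fp => 5, fn => 10 );

my $precision   = $m{tp} / ( $m{tp} + $m{fp} );
my $sensitivity = $m{tp} / ( $m{tp} + $m{fn} );    # aka recall
my $specificity = $m{tn} / ( $m{tn} + $m{fp} );
my $f1_score    = 2 * $precision * $sensitivity
                / ( $precision + $sensitivity );
my $balanced_accuracy = ( $sensitivity + $specificity ) / 2;

printf "precision=%.2f F1=%.2f balanced_accuracy=%.2f\n",
    $precision, $f1_score, $balanced_accuracy;
# precision=0.89 F1=0.84 balanced_accuracy=0.85
```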
Version 1.02 26 AUGUST 2021
* minimum perl version changed to 5.8.1 due to YAML
* fix test for display_confusion_matrix
* regex modifier "n" ( requires perl >= 5.22 ) changed to the primitive '?:' non-capturing group, as requiring 5.22 is too high
* fixed inaccurate test for output
* YAML (nerve file) for portability supported
* made subroutines exportable, as the full names are too long
* :local_data
* :portable_data
* cleaned & refactored code
* &display_confusion_matrix
* improved the documentation
Version 1.01 24 AUGUST 2021
* Fixed some technical issues
* fixed test scripts not run in correct sequence
* must be creation -> train -> validate/test
* local::lib issue should be fixed by now
Version 1.00 23 AUGUST 2021
* The following features were implemented over the course of time (see also My::Perceptron v0.04 on github):
* create perceptron
* process data: &train method
* read csv - for training stage
* save and load the perceptron
* output algorithm for train
* read and calculate data line by line
* validate method
* read csv bulk
* write predicted values into original file
* write predicted values into new file
* test method
* read csv bulk
* write predicted values into original file
* write predicted values into new file
* confusion matrix
* read only expected and predicted columns, line by line
* return a hash of data
* TP, TN, FP, FN
* total entries
* accuracy
* sensitivity
* display confusion matrix data to console
* use Text::Matrix
* synonyms
* synonyms MUST call the actual subroutines, not copy-paste them!
* train: tame, exercise
* validate: take_mock_exam, take_lab_test
* test: take_real_exam, work_in_real_world
* generate_confusion_matrix: get_exam_results
* display_confusion_matrix: display_exam_results
* save_perceptron: preserve
* load_perceptron: revive
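The "synonyms MUST call actual subroutines" rule amounts to thin delegating wrappers around one real implementation. A minimal sketch of the pattern (the Nerve package and its contents are made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Nerve;    # illustrative stand-in, not the real module

sub new { bless { rounds => 0 }, shift }

# the real implementation lives in exactly one place
sub train {
    my ( $self, @args ) = @_;
    $self->{rounds}++;
    return "trained with @args";
}

# synonyms simply forward their arguments to the real method
sub tame     { my $self = shift; $self->train( @_ ) }
sub exercise { my $self = shift; $self->train( @_ ) }

package main;

my $nerve = Nerve->new;
print $nerve->tame( "data.csv" ), "\n";    # trained with data.csv
print "rounds: $nerve->{rounds}\n";        # rounds: 1
```

Because the wrappers pass `@_` straight through, a bug fix in `train` automatically covers `tame` and `exercise`.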
t/02-state_portable.t
t/02-state_synonyms.t
t/04-train.t
t/04-train_synonyms_exercise.t
t/04-train_synonyms_tame.t
t/06-validate.t
t/06-validate_synonyms_lab.t
t/06-validate_synonyms_mock.t
t/08-confusion_matrix.t
t/08-confusion_matrix_synonyms.t
t/10-test.t
t/10-test_synonyms_exam.t
t/10-test_synonyms_work.t
t/12-shuffle_data.t
t/12-shuffle_data_synonym.t
t/book_list_test-filled-non-binary.csv
t/book_list_test-filled.csv
t/book_list_test.csv
t/book_list_test_exam-filled.csv
t/book_list_test_work-filled.csv
t/book_list_to_shuffle.csv
t/book_list_train.csv
t/book_list_validate-filled.csv
t/book_list_validate.csv
t/book_list_validate_lab-filled.csv
t/book_list_validate_mock-filled.csv
t/perceptron_1.nerve
t/perceptron_exercise.nerve
t/perceptron_state_synonyms.nerve
t/perceptron_tame.nerve
"List::Util" : "0",
"Storable" : "0",
"Text::CSV" : "2.01",
"Text::Matrix" : "1.00",
"YAML" : "0",
"local::lib" : "0",
"perl" : "5.008001",
"utf8" : "0"
}
},
"test" : {
"requires" : {
"FindBin" : "0",
"Test::More" : "0",
"Test::Output" : "1.033"
}
}
},
"release_status" : "stable",
"version" : "1.04",
"x_serialization_backend" : "JSON::PP version 4.02"
Makefile.PL
'YAML' => '0',
'File::Basename' => '0',
'List::Util' => '0',
},
dist => { COMPRESS => 'gzip -9f', SUFFIX => 'gz', },
clean => { FILES => 'AI-Perceptron-Simple-*' },
);
# Compatibility with old versions of ExtUtils::MakeMaker
unless (eval { ExtUtils::MakeMaker->VERSION('6.64'); 1 }) {
my $test_requires = delete $WriteMakefileArgs{TEST_REQUIRES} || {};
@{$WriteMakefileArgs{PREREQ_PM}}{keys %$test_requires} = values %$test_requires;
}
unless (eval { ExtUtils::MakeMaker->VERSION('6.55_03'); 1 }) {
my $build_requires = delete $WriteMakefileArgs{BUILD_REQUIRES} || {};
@{$WriteMakefileArgs{PREREQ_PM}}{keys %$build_requires} = values %$build_requires;
}
delete $WriteMakefileArgs{CONFIGURE_REQUIRES}
unless eval { ExtUtils::MakeMaker->VERSION('6.52'); 1 };
delete $WriteMakefileArgs{MIN_PERL_VERSION}
AI-Perceptron-Simple (v1.04)
A Newbie Friendly Module to Create, Train, Validate and Test Perceptrons / Neurons
This module provides methods to build, train, validate and test a perceptron. It can also save the data of the perceptron for future use for any actual AI programs.
This module is also aimed to help newbies grasp hold of the concept of perceptron, training, validation and testing as much as possible. Hence, all the methods and subroutines in this module are decoupled as much as possible so that the actual script...
INSTALLATION
To install this module, run the following commands:
perl Makefile.PL
make
make test
make install
SUPPORT AND DOCUMENTATION
After installing, you can find documentation for this module with the
perldoc command.
perldoc AI::Perceptron::Simple
You can also look for information at:
docs/AI-Perceptron-Simple-1.04.html
<li><a href="#exercise">exercise ( ... )</a></li>
<li><a href="#train-stimuli_train_csv-expected_output_header-save_nerve_to_file">train ( $stimuli_train_csv, $expected_output_header, $save_nerve_to_file )</a></li>
<li><a href="#train-stimuli_train_csv-expected_output_header-save_nerve_to_file-display_stats-identifier">train ( $stimuli_train_csv, $expected_output_header, $save_nerve_to_file, $display_stats, $identifier )</a></li>
<li><a href="#calculate_output-self-stimuli_hash">&_calculate_output( $self, \%stimuli_hash )</a></li>
<li><a href="#tune-self-stimuli_hash-tune_up_or_down">&_tune( $self, \%stimuli_hash, $tune_up_or_down )</a></li>
</ul>
</li>
<li><a href="#VALIDATION-RELATED-METHODS">VALIDATION RELATED METHODS</a>
<ul>
<li><a href="#take_mock_exam">take_mock_exam (...)</a></li>
<li><a href="#take_lab_test">take_lab_test (...)</a></li>
<li><a href="#validate-options">validate ( \%options )</a></li>
</ul>
</li>
<li><a href="#TESTING-RELATED-SUBROUTINES-METHODS">TESTING RELATED SUBROUTINES/METHODS</a>
<ul>
<li><a href="#take_real_exam">take_real_exam (...)</a></li>
<li><a href="#work_in_real_world">work_in_real_world (...)</a></li>
<li><a href="#test-options">test ( \%options )</a></li>
<li><a href="#real_validate_or_test-data_hash_ref">_real_validate_or_test ( $data_hash_ref )</a></li>
<li><a href="#fill_predicted_values-self-stimuli_validate-predicted_index-aoa">&_fill_predicted_values ( $self, $stimuli_validate, $predicted_index, $aoa )</a></li>
</ul>
</li>
<li><a href="#RESULTS-RELATED-SUBROUTINES-METHODS">RESULTS RELATED SUBROUTINES/METHODS</a>
<ul>
<li><a href="#get_exam_results">get_exam_results ( ... )</a></li>
<li><a href="#get_confusion_matrix-options">get_confusion_matrix ( \%options )</a></li>
<li><a href="#collect_stats-options">&_collect_stats ( \%options )</a></li>
<li><a href="#calculate_total_entries-c_matrix_ref">&_calculate_total_entries ( $c_matrix_ref )</a></li>
<li><a href="#calculate_accuracy-c_matrix_ref">&_calculate_accuracy ( $c_matrix_ref )</a></li>
docs/AI-Perceptron-Simple-1.04.html
$nerve->tame( ... );
$nerve->exercise( ... );
$nerve->train( $training_data_csv, $expected_column_name, $save_nerve_to );
# or
$nerve->train(
$training_data_csv, $expected_column_name, $save_nerve_to,
$show_progress, $identifier); # these two parameters must go together
# validate
$nerve->take_lab_test( ... );
$nerve->take_mock_exam( ... );
# fill results to original file
$nerve->validate( {
stimuli_validate => $validation_data_csv,
predicted_column_index => 4,
} );
# or
# fill results to a new file
$nerve->validate( {
stimuli_validate => $validation_data_csv,
predicted_column_index => 4,
results_write_to => $new_csv
} );
# test - see "validate" method, same usage
$nerve->take_real_exam( ... );
$nerve->work_in_real_world( ... );
$nerve->test( ... );
# confusion matrix
my %c_matrix = $nerve->get_confusion_matrix( {
full_data_file => $file_csv,
actual_output_header => $header_name,
predicted_output_header => $predicted_header_name,
more_stats => 1, # optional
} );
docs/AI-Perceptron-Simple-1.04.html
<dt id="portable_data---subroutines-under-NERVE-PORTABILITY-RELATED-SUBROUTINES-section"><code>:portable_data</code> - subroutines under <code>NERVE PORTABILITY RELATED SUBROUTINES</code> section.</dt>
<dd>
</dd>
</dl>
<p>Most of the stuff is OO.</p>
<h1 id="DESCRIPTION">DESCRIPTION</h1>
<p>This module provides methods to build, train, validate and test a perceptron. It can also save the data of the perceptron for future use for any actual AI programs.</p>
<p>This module is also aimed to help newbies grasp hold of the concept of perceptron, training, validation and testing as much as possible. Hence, all the methods and subroutines in this module are decoupled as much as possible so that the actual scr...
<p>The implementation here is super basic as it only takes in the input of the dendrites and calculates the output. If the output is higher than the threshold, the final result (category) will be 1 aka perceptron is activated. If not, then the result will...
<p>Depending on how you view or categorize the final result, the perceptron will fine tune itself (aka train) based on the learning rate until the desired result is met. Everything from here on is all mathematics and numbers which only makes sense to...
<p>Whenever the perceptron fine tunes itself, it will increase/decrease all the dendrites that are significant (attributes labelled 1) for each input. This means that even when the perceptron successfully fine tunes itself to suit all the data in you...
<h1 id="CONVENTIONS-USED">CONVENTIONS USED</h1>
<p>Please take note that not all subroutines/methods must be used to make things work. All the subroutines and methods are listed out for the sake of writing the documentation.</p>
docs/AI-Perceptron-Simple-1.04.html
</dl>
<p>This subroutine should be called in the procedural way for now.</p>
<h1 id="VALIDATION-RELATED-METHODS">VALIDATION RELATED METHODS</h1>
<p>All the validation methods here have the same parameters as the actual <code>validate</code> method and they all do the same stuff. They are also used in the same way.</p>
<h2 id="take_mock_exam">take_mock_exam (...)</h2>
<h2 id="take_lab_test">take_lab_test (...)</h2>
<h2 id="validate-options">validate ( \%options )</h2>
<p>This method validates the perceptron against another set of data after it has undergone the training process.</p>
<p>This method calculates the output of each row of data and writes the result into the predicted column. The data being written into the new file or the original file will maintain its sequence.</p>
<p>Please take note that this method will load all the data of the validation stimuli, so please split your stimuli into multiple files if possible and call this method once for each file.</p>
<p>For <code>%options</code>, the followings are needed unless mentioned:</p>
docs/AI-Perceptron-Simple-1.04.html
<dt id="results_write_to-new_csv_file">results_write_to => $new_csv_file</dt>
<dd>
<p>Optional.</p>
<p>The default behaviour will write the predicted output back into <code>stimuli_validate</code> ie the original data. The sequence of the data will be maintained.</p>
</dd>
</dl>
<p><i>*This method will call <code>_real_validate_or_test</code> to do the actual work.</i></p>
<h1 id="TESTING-RELATED-SUBROUTINES-METHODS">TESTING RELATED SUBROUTINES/METHODS</h1>
<p>All the testing methods here have the same parameters as the actual <code>test</code> method and they all do the same stuff. They are also used in the same way.</p>
<h2 id="take_real_exam">take_real_exam (...)</h2>
<h2 id="work_in_real_world">work_in_real_world (...)</h2>
<h2 id="test-options">test ( \%options )</h2>
<p>This method is used to put the trained nerve to the test. You can think of it as deploying the nerve for the actual work or maybe putting the nerve into an empty brain and see how well the brain survives :)</p>
<p>This method works and behaves the same way as the <code>validate</code> method. See <code>validate</code> for the details.</p>
<p><i>*This method will call &_real_validate_or_test to do the actual work.</i></p>
<h2 id="real_validate_or_test-data_hash_ref">_real_validate_or_test ( $data_hash_ref )</h2>
<p>This is where the actual validation or testing takes place.</p>
<p><code>$data_hash_ref</code> is the list of parameters passed into the <code>validate</code> or <code>test</code> methods.</p>
<p>This is a <b>method</b>, so use the OO way. This is one of the exceptions to the rules where private subroutines are treated as methods :)</p>
<h2 id="fill_predicted_values-self-stimuli_validate-predicted_index-aoa">&_fill_predicted_values ( $self, $stimuli_validate, $predicted_index, $aoa )</h2>
<p>This is where the filling in of the predicted values takes place. Take note that the parameter naming is the same as the ones used in the <code>validate</code> and <code>test</code> methods.</p>
<p>This subroutine should be called in the procedural way.</p>
<h1 id="RESULTS-RELATED-SUBROUTINES-METHODS">RESULTS RELATED SUBROUTINES/METHODS</h1>
<p>This part is related to generating the confusion matrix.</p>
<h2 id="get_exam_results">get_exam_results ( ... )</h2>
<p>The parameters and usage are the same as <code>get_confusion_matrix</code>. See the next method.</p>
docs/AI-Perceptron-Simple-1.04.html
<h2 id="get_confusion_matrix-options">get_confusion_matrix ( \%options )</h2>
<p>Returns the confusion matrix in the form of a hash. The hash will contain these keys: <code>true_positive</code>, <code>true_negative</code>, <code>false_positive</code>, <code>false_negative</code>, <code>accuracy</code>, <code>sensitivity</code>...
<p>If you are trying to manipulate the confusion matrix hash or something, take note that all the stats are percentages expressed as decimals (where applicable), except the total entries.</p>
<p>For <code>%options</code>, the followings are needed unless mentioned:</p>
<dl>
<dt id="full_data_file-filled_test_file">full_data_file => $filled_test_file</dt>
<dd>
<p>This is the CSV file filled with the predicted values.</p>
<p>Make sure that you don't do anything to the actual and predicted output in this file after testing the nerve. These two columns must contain binary values only!</p>
</dd>
<dt id="actual_output_header-actual_column_name">actual_output_header => $actual_column_name</dt>
<dd>
</dd>
<dt id="predicted_output_header-predicted_column_name">predicted_output_header => $predicted_column_name</dt>
<dd>
<p>The binary values are treated as follows:</p>
docs/specifications.t
#!/usr/bin/perl
use AI::Perceptron::Simple;
use Test::More;
plan( skip_all => "This is just the specification" );
done_testing;
######### specifications ##############
#
# This specification is based on My::Perceptron (see my github repo) and packed into AI::Perceptron::Simple v1.00
#
# Version 0.01 - completed on 8 August 2021
# [v] able to create perceptron
# [v] able to process data: &train method
# [v] read csv - for training stage
# [v] able to save the actual perceptron object and load it back
#
# Version 0.02 - completed on 17 August 2021
# [v] implement output algorithm for train and finalize it
# [v] read and calculate data line by line, not bulk, so no shuffling method
# [v] implement validate method
# [v] read csv bulk - for validating and testing stages
# [v] write into a new csv file - validation and testing stages
# [v] implement testing method
# [v] read csv bulk - for validating and testing stages
# [v] write into a new csv file - validation and testing stages
#
# Version 0.03 - completed on 19 August 2021
# [v] implement confusion matrix
# [v] read only expected and predicted columns, line by line
# [v] return a hash of data
# [v] TP, TN, FP, FN
# [v] accuracy
# [v] sensitivity
# [v] remove the return value for "train" method
# [v] display confusion matrix data to console
# [v] use Text::Matrix
#
# Version 0.04 / Version 1.0 - completed on 23 AUGUST 2021
# [v] add synonyms
# [v] synonyms MUST call the actual subroutines, not copy-paste them!
# train: [v] tame [v] exercise
# validate: [v] take_mock_exam [v] take_lab_test
# test: [v] take_real_exam [v] work_in_real_world
# generate_confusion_matrix: [v] get_exam_results
# display_confusion_matrix: [v] display_exam_results
# save_perceptron: [v] preserve
# load_perceptron: [v] revive
#
# Version 1.01
# [v] fixed currently known issues as much as possible (see 'Changes')
# - "long size integer" === "byte order not compatible"
#
# Version 1.02
# [v] minimum perl version changed to 5.8 due to Test::Output
# [v] YAML (nerve file) for portability
# [v] make subroutines exportable, the names are too long
# [v] :local_data
# [v] :portable_data
# [v] fix test for display_confusion_matrix
# [v] modifier "n" (perl 5.22 and above) changed to primitive '?:', 5.22 is too high
# [v] fixed inaccurate test for output part
# [v] clean & refactor code
# [v] refactored &display_confusion_matrix
# [v] improve the documentation
#
# Version 1.03
# [v] data processing: shuffle data + import tag
# [v] add more useful data to the confusion matrix
# [v] sums of columns and rows to make it look more classic :)
# [v] optional option to show more stats
# [v] precision [v] specificity [v] F1 score
docs/specifications.t
# shouldn't give any output upon completion
# should save a copy of itself into a new file
# returns the nerve's data filename to be used in validate()
# these two can go into a loop with conditions checking
# which means that we can actually write this
# $perceptron->validate( $stimuli_validate,
# $perceptron->train( $stimuli_train, $save_nerve_to_file )
# );
# and then check the confusion matrix, if not satisfied, run the loop again :)
$perceptron->validate( $stimuli_validate, $nerve_data_to_read );
$perceptron->test( $stimuli_test ); # loads nerve data from data file, turn into a object, then do the following:
# reads from csv :
# validation stimuli
# testing stimuli
# both will call the same subroutine to do calculation
# both will write predicted data into the original data file
# show results ie confusion matrix (TP-true positive, TN-true negative, FP-false positive, FN-false negative)
# this should only be done during validation and testing
$perceptron->generate_confusion_matrix( { 1 => $csv_header_true, 0 => $csv_header_false } );
# calculates the 4 thingy based on the current data on hand (RAM), don't read from file again, it shouldn't be a problem
# returns a hash
# ie it must be used together with validate() and test() to avoid problems
# ie validate() and test() must be in different scripts, which makes sense
# unless, create 3 similar objects to do the work in one go
# save data of the trained perceptron
$perceptron->save_data( $data_file );
# see train() on saving copy of the perceptron
# load data of perceptron for use in actual program
My::Perceptron::load_data( $data_file );
# loads the perceptron and returns the actual My::Perceptron object
# should work though as Storable claims it can do that
lib/AI/Perceptron/Simple.pm
package AI::Perceptron::Simple;
use 5.008001;
use strict;
use warnings;
use Carp "croak";
use utf8;
binmode STDOUT, ":utf8";
require local::lib; # no local::lib in tests, this is also to avoid loading local::lib multiple times
use Text::CSV qw( csv );
use Text::Matrix;
use File::Basename qw( basename );
use List::Util qw( shuffle );
=head1 NAME
AI::Perceptron::Simple
A Newbie Friendly Module to Create, Train, Validate and Test Perceptrons / Neurons
lib/AI/Perceptron/Simple.pm
$nerve->tame( ... );
$nerve->exercise( ... );
$nerve->train( $training_data_csv, $expected_column_name, $save_nerve_to );
# or
$nerve->train(
$training_data_csv, $expected_column_name, $save_nerve_to,
$show_progress, $identifier); # these two parameters must go together
# validate
$nerve->take_lab_test( ... );
$nerve->take_mock_exam( ... );
# fill results to original file
$nerve->validate( {
stimuli_validate => $validation_data_csv,
predicted_column_index => 4,
} );
# or
# fill results to a new file
$nerve->validate( {
stimuli_validate => $validation_data_csv,
predicted_column_index => 4,
results_write_to => $new_csv
} );
# test - see "validate" method, same usage
$nerve->take_real_exam( ... );
$nerve->work_in_real_world( ... );
$nerve->test( ... );
# confusion matrix
my %c_matrix = $nerve->get_confusion_matrix( {
full_data_file => $file_csv,
actual_output_header => $header_name,
predicted_output_header => $predicted_header_name,
more_stats => 1, # optional
} );
lib/AI/Perceptron/Simple.pm
preserve_as_yaml save_perceptron_yaml revive_from_yaml load_perceptron_yaml
);
our %EXPORT_TAGS = (
process_data => [ qw( shuffle_data shuffle_stimuli ) ],
local_data => [ qw( preserve save_perceptron revive load_perceptron ) ],
portable_data => [ qw( preserve_as_yaml save_perceptron_yaml revive_from_yaml load_perceptron_yaml ) ],
);
=head1 DESCRIPTION
This module provides methods to build, train, validate and test a perceptron. It can also save the data of the perceptron for future use for any actual AI programs.
This module is also aimed to help newbies grasp hold of the concept of perceptron, training, validation and testing as much as possible. Hence, all the methods and subroutines in this module are decoupled as much as possible so that the actual script...
The implementation here is super basic as it only takes in the input of the dendrites and calculates the output. If the output is higher than the threshold, the final result (category) will
be 1 aka perceptron is activated. If not, then the result will be 0 (not activated).
Depending on how you view or categorize the final result, the perceptron will fine tune itself (aka train) based on the learning rate until the desired result is met. Everything from
here on is all mathematics and numbers which only makes sense to the computer and not humans anymore.
Whenever the perceptron fine tunes itself, it will increase/decrease all the dendrites that are significant (attributes labelled 1) for each input. This means that even when the
perceptron successfully fine tunes itself to suit all the data in your file for the first round, the perceptron might still get some of the things wrong for the next round of training.
Therefore, the perceptron should be trained for as many rounds as possible. The more "confusion" the perceptron is able to correctly handle, the more "mature" the perceptron is.
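The output-and-tuning behaviour described above can be sketched in a few lines (the weights, threshold, learning rate and attribute names below are illustrative; the module's internals may differ):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw( sum0 );

# illustrative starting weights, threshold and learning rate
my %dendrites     = ( glossy_look => 0.5, has_flowers => 0.5 );
my $threshold     = 0.5;
my $learning_rate = 0.05;

# one row of stimuli: attribute => 0/1, plus the expected category
my %row      = ( glossy_look => 1, has_flowers => 1 );
my $expected = 0;

# output is 1 (activated) if the weighted sum exceeds the threshold
my $sum    = sum0( map { $dendrites{$_} * $row{$_} } keys %row );
my $output = $sum > $threshold ? 1 : 0;

# tune: nudge only the significant (value 1) attributes up or down
if ( $output != $expected ) {
    my $delta = $expected ? $learning_rate : -$learning_rate;
    for my $attr ( keys %row ) {
        $dendrites{$attr} += $delta if $row{$attr};
    }
}

printf "output=%d glossy_look=%.2f\n", $output, $dendrites{glossy_look};
# output=1 glossy_look=0.45
```

Repeating this over every row, for many rounds, is what the training stage does; the weights settle only when the outputs stop disagreeing with the expected values.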
lib/AI/Perceptron/Simple.pm
shuffle_data( @_ );
}
sub shuffle_data {
my $stimuli = shift or croak "Please specify the original file name";
my @shuffled_stimuli_names = @_
or croak "Please specify the output files for the shuffled data";
my @aoa;
for ( @shuffled_stimuli_names ) {
# copied from _real_validate_or_test
# open for shuffling
my $aoa = csv (in => $stimuli, encoding => ":encoding(utf-8)");
my $attrib_array_ref = shift @$aoa; # 'remove' the header, it's annoying :)
@aoa = shuffle( @$aoa ); # this can only process actual array
unshift @aoa, $attrib_array_ref; # put back the headers before saving file
csv( in => \@aoa, out => $_, encoding => ":encoding(utf-8)" )
and
print "Saved shuffled data into ", basename($_), "!\n";
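The header juggling in &shuffle_data (shift the header row off, shuffle the remaining rows, unshift the header back) can be demonstrated without the CSV layer. A self-contained sketch with made-up rows (the real code gets its array-of-arrays from Text::CSV's csv()):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw( shuffle );

# made-up rows; in the module these come from the stimuli CSV
my @aoa = (
    [ 'glossy_look', 'has_flowers', 'is_book' ],    # header row
    [ 1, 0, 1 ],
    [ 0, 1, 0 ],
    [ 1, 1, 1 ],
);

my $header = shift @aoa;    # 'remove' the header first
@aoa = shuffle @aoa;        # shuffle() works on a real array, not a ref
unshift @aoa, $header;      # put the header back on top

print "first row: @{ $aoa[0] }\n";    # always the header row
```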
lib/AI/Perceptron/Simple.pm
my $data_ref = shift;
my %data = %{ $data_ref };
# check keys
$data{ learning_rate } = LEARNING_RATE if not exists $data{ learning_rate };
$data{ threshold } = THRESHOLD if not exists $data{ threshold };
#####
# don't pack this key checking process into a subroutine for now
# this is also used in &_real_validate_or_test
my @missing_keys;
for ( qw( initial_value attribs ) ) {
push @missing_keys, $_ unless exists $data{ $_ };
}
croak "Missing keys: @missing_keys" if @missing_keys;
#####
# continue to process the rest of the data
my %attributes;
lib/AI/Perceptron/Simple.pm
}
}
=head1 VALIDATION RELATED METHODS
All the validation methods here have the same parameters as the actual C<validate> method and they all do the same stuff. They are also used in the same way.
=head2 take_mock_exam (...)
=head2 take_lab_test (...)
=head2 validate ( \%options )
This method validates the perceptron against another set of data after it has undergone the training process.
This method calculates the output of each row of data and writes the result into the predicted column. The data being written into the new file or the original file will maintain its sequence.
Please take note that this method will load all the data of the validation stimuli, so please split your stimuli into multiple files if possible and call this method once for each file.
For C<%options>, the followings are needed unless mentioned:
lib/AI/Perceptron/Simple.pm
This column will be filled with binary numbers and the full new data will be saved to the file specified in the C<results_write_to> key.
=item results_write_to => $new_csv_file
Optional.
The default behaviour will write the predicted output back into C<stimuli_validate> ie the original data. The sequence of the data will be maintained.
=back
I<*This method will call C<_real_validate_or_test> to do the actual work.>
=cut
sub take_mock_exam {
my ( $self, $data_hash_ref ) = @_;
$self->_real_validate_or_test( $data_hash_ref );
}
sub take_lab_test {
my ( $self, $data_hash_ref ) = @_;
$self->_real_validate_or_test( $data_hash_ref );
}
sub validate {
my ( $self, $data_hash_ref ) = @_;
$self->_real_validate_or_test( $data_hash_ref );
}
=head1 TESTING RELATED SUBROUTINES/METHODS
All the testing methods here have the same parameters as the actual C<test> method and they all do the same stuff. They are also used in the same way.
=head2 take_real_exam (...)
=head2 work_in_real_world (...)
=head2 test ( \%options )
This method is used to put the trained nerve to the test. You can think of it as deploying the nerve for the actual work or maybe putting the nerve into an empty brain and see how
well the brain survives :)
This method works and behaves the same way as the C<validate> method. See C<validate> for the details.
I<*This method will call &_real_validate_or_test to do the actual work.>
=cut
# redirect to _real_validate_or_test
sub take_real_exam {
my ( $self, $data_hash_ref ) = @_;
$self->_real_validate_or_test( $data_hash_ref );
}
sub work_in_real_world {
my ( $self, $data_hash_ref ) = @_;
$self->_real_validate_or_test( $data_hash_ref );
}
sub test {
my ( $self, $data_hash_ref ) = @_;
$self->_real_validate_or_test( $data_hash_ref );
}
=head2 _real_validate_or_test ( $data_hash_ref )
This is where the actual validation or testing takes place.
C<$data_hash_ref> is the list of parameters passed into the C<validate> or C<test> methods.
This is a B<method>, so use the OO way. This is one of the exceptions to the rules where private subroutines are treated as methods :)
=cut
sub _real_validate_or_test {
my ( $self, $data_hash_ref ) = @_;
#####
my @missing_keys;
for ( qw( stimuli_validate predicted_column_index ) ) {
push @missing_keys, $_ unless exists $data_hash_ref->{ $_ };
}
croak "Missing keys: @missing_keys" if @missing_keys;
lib/AI/Perceptron/Simple.pm
unshift @$aoa, $attrib_array_ref;
print "Saving data to $output_file\n";
csv( in => $aoa, out => $output_file, encoding => ":encoding(utf-8)" );
print "Done saving!\n";
}
=head2 &_fill_predicted_values ( $self, $stimuli_validate, $predicted_index, $aoa )
This is where the filling in of the predicted values takes place. Take note that the parameter naming is the same as the ones used in the C<validate> and C<test> methods.
This subroutine should be called in the procedural way.
=cut
sub _fill_predicted_values {
my ( $self, $stimuli_validate, $predicted_index, $aoa ) = @_;
# CSV processing is all according to the documentation of Text::CSV
open my $data_fh, "<:encoding(UTF-8)", $stimuli_validate
lib/AI/Perceptron/Simple.pm
=head2 get_confusion_matrix ( \%options )
Returns the confusion matrix in the form of a hash. The hash will contain these keys: C<true_positive>, C<true_negative>, C<false_positive>, C<false_negative>, C<accuracy>, C<sensitivity>. More stats like C<precision>, C<specificity> and C<F1_Score> ...
If you are trying to manipulate the confusion matrix hash or something, take note that all the stats are percentages expressed as decimals (where applicable), except the total entries.
For C<%options>, the followings are needed unless mentioned:
=over 4
=item full_data_file => $filled_test_file
This is the CSV file filled with the predicted values.
Make sure that you don't do anything to the actual and predicted output in this file after testing the nerve. These two columns must contain binary values only!
=item actual_output_header => $actual_column_name
=item predicted_output_header => $predicted_column_name
The binary values are treated as follows:
=over 4
=item C<0> is negative
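Given that C<1> is treated as positive and C<0> as negative, the four counts fall out of a single pass over the actual/predicted column pairs. A minimal sketch (the data and layout are made up; the module reads these two columns from the filled CSV):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# made-up [ actual, predicted ] pairs, binary values only
my @pairs = ( [1,1], [1,0], [0,0], [0,1], [1,1], [0,0] );

my %c_matrix = ( true_positive  => 0, true_negative  => 0,
                 false_positive => 0, false_negative => 0 );

for ( @pairs ) {
    my ( $actual, $predicted ) = @$_;
    if    ( $actual == 1 && $predicted == 1 ) { $c_matrix{true_positive}++ }
    elsif ( $actual == 0 && $predicted == 0 ) { $c_matrix{true_negative}++ }
    elsif ( $actual == 0 && $predicted == 1 ) { $c_matrix{false_positive}++ }
    else                                      { $c_matrix{false_negative}++ }
}

$c_matrix{total_entries} = scalar @pairs;
$c_matrix{accuracy}      = ( $c_matrix{true_positive} + $c_matrix{true_negative} )
                           / $c_matrix{total_entries} * 100;

printf "TP=%d FP=%d accuracy=%.2f%%\n",
    $c_matrix{true_positive}, $c_matrix{false_positive}, $c_matrix{accuracy};
# TP=2 FP=1 accuracy=66.67%
```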
t/00-load.t
#!perl
use 5.006;
use strict;
use warnings;
use Test::More;
plan tests => 1;
BEGIN {
use_ok( 'AI::Perceptron::Simple' ) || print "Bail out!\n";
}
diag( "Testing AI::Perceptron::Simple $AI::Perceptron::Simple::VERSION, Perl $], $^X" );
t/00-manifest.t
#!perl
use 5.006;
use strict;
use warnings;
use Test::More;
unless ( $ENV{RELEASE_TESTING} ) {
plan( skip_all => "Author tests not required for installation" );
}
my $min_tcm = 0.9;
eval "use Test::CheckManifest $min_tcm";
plan skip_all => "Test::CheckManifest $min_tcm required" if $@;
ok_manifest();
t/00-pod-coverage.t
#!perl
use 5.006;
use strict;
use warnings;
use Test::More;
unless ( $ENV{RELEASE_TESTING} ) {
plan( skip_all => "Author tests not required for installation" );
}
# Ensure a recent version of Test::Pod::Coverage
my $min_tpc = 1.08;
eval "use Test::Pod::Coverage $min_tpc";
plan skip_all => "Test::Pod::Coverage $min_tpc required for testing POD coverage"
if $@;
# Test::Pod::Coverage doesn't require a minimum Pod::Coverage version,
# but older versions don't recognize some common documentation styles
my $min_pc = 0.18;
eval "use Pod::Coverage $min_pc";
plan skip_all => "Pod::Coverage $min_pc required for testing POD coverage"
if $@;
all_pod_coverage_ok();
#!perl
use 5.006;
use strict;
use warnings;
use Test::More;
unless ( $ENV{RELEASE_TESTING} ) {
plan( skip_all => "Author tests not required for installation" );
}
# Ensure a recent version of Test::Pod
my $min_tp = 1.22;
eval "use Test::Pod $min_tp";
plan skip_all => "Test::Pod $min_tp required for testing POD" if $@;
all_pod_files_ok();
t/02-creation.t
use Test::More;
use AI::Perceptron::Simple;
my $module_name = "AI::Perceptron::Simple";
my $initial_value = 1.5;
my @attributes = qw( glossy_look has_flowers );
# print AI::Perceptron::Simple::LEARNING_RATE;
# all important parameter test
my $perceptron = AI::Perceptron::Simple->new( {
initial_value => $initial_value,
attribs => \@attributes
} );
is( ref $perceptron, $module_name, "Correct object" );
# default learning_rate() threshold()
is( $perceptron->learning_rate, 0.05, "Correct default learning rate -> ".$perceptron->learning_rate );
is( $perceptron->threshold, 0.5, "Correct default passing rate -> ".$perceptron->threshold );
t/02-creation.t
is( $perceptron->learning_rate, 0.3, "Correct custom learning_rate -> ".$perceptron->learning_rate );
is( $perceptron->threshold, 0.85, "Correct custom passing rate -> ".$perceptron->threshold );
# get_attributes()
my %attributes = $perceptron->get_attributes;
for ( @attributes ) {
ok( $attributes{ $_ }, "Attribute \'$_\' present" );
is( $attributes{ $_ }, $initial_value, "Correct initial value (".$attributes{$_}.") for \'$_\'" );
}
# don't try to use Test::Carp, it won't work, it only tests for direct calling of carp and croak etc
subtest "Caught missing mandatory parameters" => sub {
eval {
my $no_attribs = AI::Perceptron::Simple->new( { initial_value => $initial_value} );
};
like( $@, qr/attribs/, "Caught missing attribs" );
eval {
my $perceptron = AI::Perceptron::Simple->new( { attribs => \@attributes} );
};
like($@, qr/initial_value/, "Caught missing initial_value");
#my $no_both = AI::Perceptron::Simple->new; # this will fail and give output to use hash ref, nice
eval { my $no_both = AI::Perceptron::Simple->new( {} ); };
like( $@, qr/Missing keys: initial_value attribs/, "Caught missing initial_value and attribs" );
};
done_testing();
# besiyata d'shmaya
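The creation test above exercises the constructor and its defaults. A minimal usage sketch, assuming AI::Perceptron::Simple is installed and that the learning-rate and threshold mutators take the value as an argument (the custom-value tests imply setters, but their exact calls are not shown in this excerpt):

```perl
use strict;
use warnings;
use AI::Perceptron::Simple;

# new() takes a hash ref; initial_value and attribs are mandatory
my $nerve = AI::Perceptron::Simple->new( {
    initial_value => 1.5,                         # starting weight for every attribute
    attribs       => [ qw( glossy_look has_flowers ) ],
} );

# defaults per the tests: learning_rate 0.05, threshold 0.5
$nerve->learning_rate( 0.3 );   # assumed setter form
$nerve->threshold( 0.85 );      # assumed setter form
```

Omitting either mandatory key croaks with a message naming the missing key, which is what the "Caught missing mandatory parameters" subtest relies on.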
t/02-state_portable.t view on Meta::CPAN
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;
use AI::Perceptron::Simple qw( :portable_data );
# for the :local_data test, see 04-train.t; 02-state_synonyms.t utilizes the full invocation
use FindBin;
use constant MODULE_NAME => "AI::Perceptron::Simple";
my @attributes = qw ( has_trees trees_coverage_more_than_half has_other_living_things );
my $total_headers = scalar @attributes;
my $perceptron = AI::Perceptron::Simple->new( {
initial_value => 0.01,
attribs => \@attributes
} );
subtest "All data related subroutines found" => sub {
# this only checks if the subroutines are contained in the package
ok( AI::Perceptron::Simple->can("preserve_as_yaml"), "&preserve_as_yaml is present" );
ok( AI::Perceptron::Simple->can("save_perceptron_yaml"), "&save_perceptron_yaml is present" );
ok( AI::Perceptron::Simple->can("revive_from_yaml"), "&revive_from_yaml is present" );
ok( AI::Perceptron::Simple->can("load_perceptron_yaml"), "&load_perceptron_yaml is present" );
};
my $yaml_nerve_file = $FindBin::Bin . "/portable_nerve.yaml";
# save file
save_perceptron_yaml( $perceptron, $yaml_nerve_file );
ok( -e $yaml_nerve_file, "Found the YAML perceptron." );
# load and check
ok( my $transferred_nerve = load_perceptron_yaml( $yaml_nerve_file ), "&load_perceptron_yaml loaded the perceptron" );
is_deeply( $transferred_nerve, $perceptron, "&load_perceptron_yaml - correct data after loading" );
is ( ref ($transferred_nerve), "AI::Perceptron::Simple", "Loaded back as a blessed object" );
# test synonyms
AI::Perceptron::Simple::preserve_as_yaml( $perceptron, $yaml_nerve_file );
ok( -e $yaml_nerve_file, "Synonym - Found the YAML perceptron." );
ok( $transferred_nerve = AI::Perceptron::Simple::revive_from_yaml( $yaml_nerve_file ), "&revive_from_yaml is working correctly" );
is_deeply( $transferred_nerve, $perceptron, "&revive_from_yaml - correct data after loading" );
is ( ref ($transferred_nerve), "AI::Perceptron::Simple", "Loaded back as a blessed object" );
done_testing();
# besiyata d'shmaya
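The portable-state test above round-trips a perceptron through YAML. A minimal sketch of that cycle, assuming the :portable_data export tag as used in the test (the file name here is illustrative):

```perl
use strict;
use warnings;
use AI::Perceptron::Simple qw( :portable_data );

my $nerve = AI::Perceptron::Simple->new( {
    initial_value => 0.01,
    attribs       => [ qw( has_trees has_other_living_things ) ],
} );

# short exported names
save_perceptron_yaml( $nerve, "nerve.yaml" );
my $revived = load_perceptron_yaml( "nerve.yaml" );   # blessed back into the class

# the descriptive synonyms call the same subroutines:
# AI::Perceptron::Simple::preserve_as_yaml( $nerve, "nerve.yaml" );
# AI::Perceptron::Simple::revive_from_yaml( "nerve.yaml" );
```

The is_deeply checks in the test confirm the revived object carries identical data, and ref() confirms it is re-blessed rather than returned as a plain hash.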
t/02-state_synonyms.t view on Meta::CPAN
ok( AI::Perceptron::Simple->can("revive"), "&revive is present" );
my $nerve_file = $FindBin::Bin . "/perceptron_state_synonyms.nerve";
ok( AI::Perceptron::Simple::preserve( $perceptron, $nerve_file ), "&preserve is working well so far" );
ok( -e $nerve_file, "Found the perceptron file" );
ok( AI::Perceptron::Simple::revive( $nerve_file ), "Perceptron loaded" );
my $loaded_perceptron = AI::Perceptron::Simple::revive( $nerve_file );
is( ref $loaded_perceptron, MODULE_NAME, "Correct class after loading" );
done_testing();
# besiyata d'shmaya
t/04-train.t view on Meta::CPAN
} );
my %attribs = $perceptron->get_attributes; # merging will cause problems
my $perceptron_headers = keys %attribs; # scalar context directly returns the number of keys
is( $perceptron_headers, $total_headers, "Correct headers" );
# print $FindBin::Bin, "\n";
# print TRAINING_DATA, "\n";
ok ( -e TRAINING_DATA, "Found the training file" );
# saving and loading - this should go into a new test script
my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
#ok( $perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER ), "No problem with \'train\' method so far" );
{
local $@ = "";
eval { $perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER) };
is ( $@, "", "No problem with \'train\' method (verbose) so far" );
}
ok ( $perceptron->train( TRAINING_DATA, "brand", $nerve_file), "No problem with \'train\' method (non-verbose) so far" );
# no longer returns the file anymore since v0.03
# is ( $perceptron->train( TRAINING_DATA, "brand", $nerve_file), $nerve_file, "\'train\' method returns the correct value" );
subtest "Data related subroutine found" => sub {
ok( AI::Perceptron::Simple->can("save_perceptron"), "&save_perceptron is present" );
ok( AI::Perceptron::Simple->can("load_perceptron"), "&load_perceptron is present" );
};
ok( save_perceptron( $perceptron, $nerve_file ), "save_perceptron is working well so far" );
ok( -e $nerve_file, "Found the perceptron file" );
ok( load_perceptron( $nerve_file ), "Perceptron loaded" );
my $loaded_perceptron = load_perceptron( $nerve_file );
is( ref $loaded_perceptron, MODULE_NAME, "Correct class after loading" );
done_testing();
# besiyata d'shmaya
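The train test above exercises both the verbose and non-verbose calls. A minimal sketch of the train/save/load flow, assuming the argument order shown in the test (data file, expected-output header, nerve file, then the optional want-stats flag and identifier column) and that save_perceptron/load_perceptron are exported via the :local_data tag; file names are illustrative:

```perl
use strict;
use warnings;
use AI::Perceptron::Simple qw( :local_data );

my $nerve = AI::Perceptron::Simple->new( {
    initial_value => 0.5,
    attribs       => [ qw( glossy_look has_flowers ) ],
} );

# non-verbose: no stats printed; verbose adds want-stats flag and identifier column
$nerve->train( "book_list_train.csv", "brand", "perceptron.nerve" );
$nerve->train( "book_list_train.csv", "brand", "perceptron.nerve", 1, "book_name" );

save_perceptron( $nerve, "perceptron.nerve" );
my $loaded = load_perceptron( "perceptron.nerve" );   # blessed object again
```

Note the comment in the test: since v0.03, train no longer returns the nerve file path.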
t/04-train_synonyms_exercise.t view on Meta::CPAN
} );
my %attribs = $perceptron->get_attributes; # merging will cause problems
my $perceptron_headers = keys %attribs; # scalar context directly returns the number of keys
is( $perceptron_headers, $total_headers, "Correct headers" );
# print $FindBin::Bin, "\n";
# print TRAINING_DATA, "\n";
ok ( -e TRAINING_DATA, "Found the training file" );
# saving and loading - this should go into a new test script
my $nerve_file = $FindBin::Bin . "/perceptron_exercise.nerve";
#ok( $perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER ), "No problem with \'train\' method so far" );
{
local $@ = "";
eval { $perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER) };
is ( $@, "", "No problem with \'train\' method (verbose) so far" );
}
ok ( $perceptron->train( TRAINING_DATA, "brand", $nerve_file), "No problem with \'train\' method (non-verbose) so far" );
t/04-train_synonyms_exercise.t view on Meta::CPAN
ok( AI::Perceptron::Simple->can("save_perceptron"), "&save_perceptron is present" );
ok( AI::Perceptron::Simple->can("load_perceptron"), "&load_perceptron is present" );
ok( AI::Perceptron::Simple::save_perceptron( $perceptron, $nerve_file ), "save_perceptron is working well so far" );
ok( -e $nerve_file, "Found the perceptron file" );
ok( AI::Perceptron::Simple::load_perceptron( $nerve_file ), "Perceptron loaded" );
my $loaded_perceptron = AI::Perceptron::Simple::load_perceptron( $nerve_file );
is( ref $loaded_perceptron, MODULE_NAME, "Correct class after loading" );
done_testing();
# besiyata d'shmaya
t/04-train_synonyms_tame.t view on Meta::CPAN
} );
my %attribs = $perceptron->get_attributes; # merging will cause problems
my $perceptron_headers = keys %attribs; # scalar context directly returns the number of keys
is( $perceptron_headers, $total_headers, "Correct headers" );
# print $FindBin::Bin, "\n";
# print TRAINING_DATA, "\n";
ok ( -e TRAINING_DATA, "Found the training file" );
# saving and loading - this should go into a new test script
my $nerve_file = $FindBin::Bin . "/perceptron_tame.nerve";
#ok( $perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER ), "No problem with \'train\' method so far" );
{
local $@ = "";
eval { $perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER) };
is ( $@, "", "No problem with \'train\' method (verbose) so far" );
}
ok ( $perceptron->train( TRAINING_DATA, "brand", $nerve_file), "No problem with \'train\' method (non-verbose) so far" );
t/04-train_synonyms_tame.t view on Meta::CPAN
ok( AI::Perceptron::Simple->can("save_perceptron"), "&save_perceptron is present" );
ok( AI::Perceptron::Simple->can("load_perceptron"), "&load_perceptron is present" );
ok( AI::Perceptron::Simple::save_perceptron( $perceptron, $nerve_file ), "save_perceptron is working well so far" );
ok( -e $nerve_file, "Found the perceptron file" );
ok( AI::Perceptron::Simple::load_perceptron( $nerve_file ), "Perceptron loaded" );
my $loaded_perceptron = AI::Perceptron::Simple::load_perceptron( $nerve_file );
is( ref $loaded_perceptron, MODULE_NAME, "Correct class after loading" );
done_testing();
# besiyata d'shmaya
t/06-validate.t view on Meta::CPAN
use strict;
use warnings;
use Test::More;
use Test::Output;
use AI::Perceptron::Simple;
use FindBin;
# TRAINING_DATA & VALIDATION_DATA have the same contents, in real world, don't do this
# use different sets of data for training and validating the nerve. Same goes to testing data.
# I'm doing this only to make sure the nerve is working correctly
use constant TRAINING_DATA => $FindBin::Bin . "/book_list_train.csv";
use constant VALIDATION_DATA => $FindBin::Bin . "/book_list_validate.csv";
use constant VALIDATION_DATA_OUTPUT_FILE => $FindBin::Bin . "/book_list_validate-filled.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";
t/06-validate.t view on Meta::CPAN
predicted_column_index => 4,
results_write_to => VALIDATION_DATA_OUTPUT_FILE
} ),
"Validate succeeded!" );
} qr/book_list_validate\-filled\.csv/, "Correct output for validate when saving to NEW file";
ok( -e VALIDATION_DATA_OUTPUT_FILE, "New validation file found" );
isnt( -s VALIDATION_DATA_OUTPUT_FILE, 0, "New output file is not empty" );
done_testing;
# besiyata d'shmaya
t/06-validate_synonyms_lab.t view on Meta::CPAN
use strict;
use warnings;
use Test::More;
use Test::Output;
use AI::Perceptron::Simple;
use FindBin;
# TRAINING_DATA & VALIDATION_DATA have the same contents, in real world, don't do this
# use different sets of data for training and validating the nerve. Same goes to testing data.
# I'm doing this only to make sure the nerve is working correctly
use constant TRAINING_DATA => $FindBin::Bin . "/book_list_train.csv";
use constant VALIDATION_DATA => $FindBin::Bin . "/book_list_validate.csv";
use constant VALIDATION_DATA_OUTPUT_FILE => $FindBin::Bin . "/book_list_validate_lab-filled.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";
t/06-validate_synonyms_lab.t view on Meta::CPAN
for ( 0..5 ) {
print "Round $_\n";
$perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER );
print "\n";
}
#print Dumper($perceptron), "\n";
# write back to original file
my $ori_file_size = -s VALIDATION_DATA;
stdout_like {
ok ( $perceptron->take_lab_test( {
stimuli_validate => VALIDATION_DATA,
predicted_column_index => 4,
} ),
"Validate succeeded!" );
} qr/book_list_validate\.csv/, "Correct output for take_lab_test when saving file";
# with new output file
stdout_like {
ok ( $perceptron->take_lab_test( {
stimuli_validate => VALIDATION_DATA,
predicted_column_index => 4,
results_write_to => VALIDATION_DATA_OUTPUT_FILE
} ),
"Validate succeeded!" );
} qr/book_list_validate_lab\-filled\.csv/, "Correct output for take_lab_test when saving to NEW file";
ok( -e VALIDATION_DATA_OUTPUT_FILE, "New validation file found" );
isnt( -s VALIDATION_DATA_OUTPUT_FILE, 0, "New output file is not empty" );
done_testing;
# besiyata d'shmaya
t/06-validate_synonyms_mock.t view on Meta::CPAN
use strict;
use warnings;
use Test::More;
use Test::Output;
use AI::Perceptron::Simple;
use FindBin;
# TRAINING_DATA & VALIDATION_DATA have the same contents, in real world, don't do this
# use different sets of data for training and validating the nerve. Same goes to testing data.
# I'm doing this only to make sure the nerve is working correctly
use constant TRAINING_DATA => $FindBin::Bin . "/book_list_train.csv";
use constant VALIDATION_DATA => $FindBin::Bin . "/book_list_validate.csv";
use constant VALIDATION_DATA_OUTPUT_FILE => $FindBin::Bin . "/book_list_validate_mock-filled.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";
t/06-validate_synonyms_mock.t view on Meta::CPAN
predicted_column_index => 4,
results_write_to => VALIDATION_DATA_OUTPUT_FILE
} ),
"Validate succeeded!" );
} qr/book_list_validate_mock\-filled\.csv/, "Correct output for take_mock_exam when saving to NEW file";
ok( -e VALIDATION_DATA_OUTPUT_FILE, "New validation file found" );
isnt( -s VALIDATION_DATA_OUTPUT_FILE, 0, "New output file is not empty" );
done_testing;
# besiyata d'shmaya
t/08-confusion_matrix.t view on Meta::CPAN
use Test::More;
use Test::Output;
use AI::Perceptron::Simple;
#use local::lib;
use Text::Matrix;
use FindBin;
use constant TEST_FILE => $FindBin::Bin . "/book_list_test-filled.csv";
use constant NON_BINARY_FILE => $FindBin::Bin . "/book_list_test-filled-non-binary.csv";
my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
my $perceptron = AI::Perceptron::Simple::load_perceptron( $nerve_file );
ok ( my %c_matrix = $perceptron->get_confusion_matrix( {
full_data_file => TEST_FILE,
actual_output_header => "brand",
predicted_output_header => "predicted",
} ),
"get_confusion_matrix method is working");
is ( ref \%c_matrix, ref {}, "Confusion matrix in correct data structure" );
is ( $c_matrix{ true_positive }, 2, "Correct true_positive" );
is ( $c_matrix{ true_negative }, 4, "Correct true_negative" );
is ( $c_matrix{ false_positive }, 1, "Correct false_positive" );
is ( $c_matrix{ false_negative }, 3, "Correct false_negative" );
is ( $c_matrix{ total_entries }, 10, "Total entries is correct" );
ok ( AI::Perceptron::Simple::_calculate_total_entries( \%c_matrix ),
"Testing the 'untestable' &_calculate_total_entries" );
is ( $c_matrix{ total_entries }, 10, "'illegal' calculation of total entries is correct" );
like ( $c_matrix{ accuracy }, qr/60/, "Accuracy seems correct to me" );
ok ( AI::Perceptron::Simple::_calculate_accuracy( \%c_matrix ),
"Testing the 'untestable' &_calculate_accuracy" );
like ( $c_matrix{ accuracy }, qr/60/, "'illegal' calculation of accuracy seems correct to me" );
like ( $c_matrix{ sensitivity }, qr/40/, "Sensitivity seems correct to me" );
ok ( AI::Perceptron::Simple::_calculate_sensitivity( \%c_matrix ),
"Testing the 'untestable' &_calculate_sensitivity" );
like ( $c_matrix{ sensitivity }, qr/40/, "'illegal' calculation of sensitivity seems correct to me" );
{
local $@;
eval {
$perceptron->get_confusion_matrix( {
full_data_file => NON_BINARY_FILE,
actual_output_header => "brand",
predicted_output_header => "predicted",
} );
t/08-confusion_matrix.t view on Meta::CPAN
eval {
$perceptron->display_confusion_matrix( \%c_matrix );
};
like ( $@, qr/zero_as one_as/, "Both keys not found" );
}
# more_stats enabled
subtest "More stats" => sub {
my %c_matrix_more_stats = $perceptron->get_confusion_matrix( {
full_data_file => TEST_FILE,
actual_output_header => "brand",
predicted_output_header => "predicted",
more_stats => 1,
} );
like ( $c_matrix_more_stats{ precision }, qr/66\.66/, "Precision seems correct to me" );
is ( $c_matrix_more_stats{ specificity }, 80, "Specificity seems correct to me" );
t/08-confusion_matrix.t view on Meta::CPAN
"display_exam_results is working");
} qr /(?:$piece)/, "$piece displayed";
}
$perceptron->display_exam_results( \%c_matrix_more_stats, {
zero_as => "MP520",
one_as => "Yi Lin" } );
};
done_testing;
# besiyata d'shmaya
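The expected values in the confusion-matrix tests above follow directly from the counts TP=2, TN=4, FP=1, FN=3 over 10 entries. A plain-Perl worked check of that arithmetic (no module needed; the formulas are the standard confusion-matrix definitions, not taken from the module's source):

```perl
use strict;
use warnings;

my ( $tp, $tn, $fp, $fn ) = ( 2, 4, 1, 3 );
my $total       = $tp + $tn + $fp + $fn;         # 10 entries
my $accuracy    = ( $tp + $tn ) / $total * 100;  # (2+4)/10  = 60
my $sensitivity = $tp / ( $tp + $fn ) * 100;     # 2/(2+3)   = 40
my $precision   = $tp / ( $tp + $fp ) * 100;     # 2/(2+1)   = 66.66...
my $specificity = $tn / ( $tn + $fp ) * 100;     # 4/(4+1)   = 80

printf "accuracy=%.2f sensitivity=%.2f precision=%.2f specificity=%.2f\n",
    $accuracy, $sensitivity, $precision, $specificity;
```

These match the qr/60/, qr/40/, qr/66\.66/ and specificity 80 expectations in the tests.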
t/08-confusion_matrix_synonyms.t view on Meta::CPAN
use Test::More;
use Test::Output;
use AI::Perceptron::Simple;
#use local::lib;
use Text::Matrix;
use FindBin;
use constant TEST_FILE => $FindBin::Bin . "/book_list_test-filled.csv";
use constant NON_BINARY_FILE => $FindBin::Bin . "/book_list_test-filled-non-binary.csv";
my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
my $perceptron = AI::Perceptron::Simple::load_perceptron( $nerve_file );
ok ( my %c_matrix = $perceptron->get_exam_results( {
full_data_file => TEST_FILE,
actual_output_header => "brand",
predicted_output_header => "predicted",
} ),
"get_exam_results method is working");
is ( ref \%c_matrix, ref {}, "Confusion matrix in correct data structure" );
is ( $c_matrix{ true_positive }, 2, "Correct true_positive" );
is ( $c_matrix{ true_negative }, 4, "Correct true_negative" );
is ( $c_matrix{ false_positive }, 1, "Correct false_positive" );
is ( $c_matrix{ false_negative }, 3, "Correct false_negative" );
is ( $c_matrix{ total_entries }, 10, "Total entries is correct" );
ok ( AI::Perceptron::Simple::_calculate_total_entries( \%c_matrix ),
"Testing the 'untestable' &_calculate_total_entries" );
is ( $c_matrix{ total_entries }, 10, "'illegal' calculation of total entries is correct" );
like ( $c_matrix{ accuracy }, qr/60/, "Accuracy seems correct to me" );
ok ( AI::Perceptron::Simple::_calculate_accuracy( \%c_matrix ),
"Testing the 'untestable' &_calculate_accuracy" );
like ( $c_matrix{ accuracy }, qr/60/, "'illegal' calculation of accuracy seems correct to me" );
like ( $c_matrix{ sensitivity }, qr/40/, "Sensitivity seems correct to me" );
ok ( AI::Perceptron::Simple::_calculate_sensitivity( \%c_matrix ),
"Testing the 'untestable' &_calculate_sensitivity" );
like ( $c_matrix{ sensitivity }, qr/40/, "'illegal' calculation of sensitivity seems correct to me" );
{
local $@;
eval {
$perceptron->get_exam_results( {
full_data_file => NON_BINARY_FILE,
actual_output_header => "brand",
predicted_output_header => "predicted",
} );
t/08-confusion_matrix_synonyms.t view on Meta::CPAN
local $@;
eval {
$perceptron->display_exam_results( \%c_matrix );
};
like ( $@, qr/zero_as one_as/, "Both keys not found" );
}
# more_stats enabled
subtest "More stats" => sub {
my %c_matrix_more_stats = $perceptron->get_exam_results( {
full_data_file => TEST_FILE,
actual_output_header => "brand",
predicted_output_header => "predicted",
more_stats => 1,
} );
like ( $c_matrix_more_stats{ precision }, qr/66\.66/, "Precision seems correct to me" );
is ( $c_matrix_more_stats{ specificity }, 80, "Specificity seems correct to me" );
t/08-confusion_matrix_synonyms.t view on Meta::CPAN
ok ( $perceptron->display_exam_results( \%c_matrix_more_stats, { zero_as => "MP520", one_as => "Yi Lin" } ),
"display_exam_results is working");
} qr /(?:$piece)/, "$piece displayed";
}
$perceptron->display_exam_results( \%c_matrix_more_stats, {
zero_as => "MP520",
one_as => "Yi Lin" } );
};
done_testing;
# besiyata d'shmaya
t/10-test.t view on Meta::CPAN
use strict;
use warnings;
use Test::More;
use Test::Output;
use AI::Perceptron::Simple;
use FindBin;
use constant TEST_DATA => $FindBin::Bin . "/book_list_test.csv";
use constant TEST_DATA_NEW_FILE => $FindBin::Bin . "/book_list_test-filled.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";
my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
ok( -s $nerve_file, "Found nerve file to load" );
my $mature_nerve = AI::Perceptron::Simple::load_perceptron( $nerve_file );
# write to original file
stdout_like {
ok ( $mature_nerve->test( {
stimuli_validate => TEST_DATA,
predicted_column_index => 4,
} ),
"Testing stage succeeded!" );
} qr/book_list_test\.csv/, "Correct output for testing when saving back to original file";
# with new output file
stdout_like {
ok ( $mature_nerve->test( {
stimuli_validate => TEST_DATA,
predicted_column_index => 4,
results_write_to => TEST_DATA_NEW_FILE
} ),
"Testing stage succeeded!" );
} qr/book_list_test\-filled\.csv/, "Correct output for testing when saving to NEW file";
ok( -e TEST_DATA_NEW_FILE, "New testing file found" );
isnt( -s TEST_DATA_NEW_FILE, 0, "New output file is not empty" );
done_testing;
# besiyata d'shmaya
t/10-test_synonyms_exam.t view on Meta::CPAN
use strict;
use warnings;
use Test::More;
use Test::Output;
use AI::Perceptron::Simple;
use FindBin;
use constant TEST_DATA => $FindBin::Bin . "/book_list_test.csv";
use constant TEST_DATA_NEW_FILE => $FindBin::Bin . "/book_list_test_exam-filled.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";
my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
ok( -s $nerve_file, "Found nerve file to load" );
my $mature_nerve = AI::Perceptron::Simple::load_perceptron( $nerve_file );
# write to original file
stdout_like {
ok ( $mature_nerve->take_real_exam( {
stimuli_validate => TEST_DATA,
predicted_column_index => 4,
} ),
"Testing stage succeeded!" );
} qr/book_list_test\.csv/, "Correct output for testing when saving back to original file";
# with new output file
stdout_like {
ok ( $mature_nerve->take_real_exam( {
stimuli_validate => TEST_DATA,
predicted_column_index => 4,
results_write_to => TEST_DATA_NEW_FILE
} ),
"Testing stage succeeded!" );
} qr/book_list_test_exam\-filled\.csv/, "Correct output for testing when saving to NEW file";
ok( -e TEST_DATA_NEW_FILE, "New testing file found" );
isnt( -s TEST_DATA_NEW_FILE, 0, "New output file is not empty" );
done_testing;
# besiyata d'shmaya
t/10-test_synonyms_work.t view on Meta::CPAN
use strict;
use warnings;
use Test::More;
use Test::Output;
use AI::Perceptron::Simple;
use FindBin;
use constant TEST_DATA => $FindBin::Bin . "/book_list_test.csv";
use constant TEST_DATA_NEW_FILE => $FindBin::Bin . "/book_list_test_work-filled.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";
my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
ok( -s $nerve_file, "Found nerve file to load" );
my $mature_nerve = AI::Perceptron::Simple::load_perceptron( $nerve_file );
# write to original file
stdout_like {
ok ( $mature_nerve->work_in_real_world( {
stimuli_validate => TEST_DATA,
predicted_column_index => 4,
} ),
"Testing stage succeeded!" );
} qr/book_list_test\.csv/, "Correct output for testing when saving back to original file";
# with new output file
stdout_like {
ok ( $mature_nerve->work_in_real_world( {
stimuli_validate => TEST_DATA,
predicted_column_index => 4,
results_write_to => TEST_DATA_NEW_FILE
} ),
"Testing stage succeeded!" );
} qr/book_list_test_work\-filled\.csv/, "Correct output for testing when saving to NEW file";
ok( -e TEST_DATA_NEW_FILE, "New testing file found" );
isnt( -s TEST_DATA_NEW_FILE, 0, "New output file is not empty" );
done_testing;
# besiyata d'shmaya
t/12-shuffle_data.t view on Meta::CPAN
stdout_like {
shuffle_data( ORIGINAL_STIMULI, $shuffled_data_1, $shuffled_data_2, $shuffled_data_3 );
} qr/^Saved/, "Correct output after saving file";
ok( -e $shuffled_data_1, "Found the first shuffled file" );
ok( -e $shuffled_data_2, "Found the second shuffled file" );
ok( -e $shuffled_data_3, "Found the third shuffled file" );
done_testing();
# besiyata d'shmaya
t/12-shuffle_data_synonym.t view on Meta::CPAN
stdout_like {
shuffle_stimuli( ORIGINAL_STIMULI, $shuffled_data_1, $shuffled_data_2, $shuffled_data_3 );
} qr/^Saved/, "Correct output after saving file";
ok( -e $shuffled_data_1, "Found the first shuffled file" );
ok( -e $shuffled_data_2, "Found the second shuffled file" );
ok( -e $shuffled_data_3, "Found the third shuffled file" );
done_testing();
# besiyata d'shmaya
xt/boilerplate.t view on Meta::CPAN
#!perl
use 5.006;
use strict;
use warnings;
use Test::More;
plan tests => 3;
sub not_in_file_ok {
my ($filename, %regex) = @_;
open( my $fh, '<', $filename )
or die "couldn't open $filename for reading: $!";
my %violated;
while (my $line = <$fh>) {
while (my ($desc, $regex) = each %regex) {