AI-Perceptron-Simple

Changes

Revision history for AI-Perceptron-Simple

The specifications and offline documentation can be found in the "docs" directory

Version 1.04    17 SEPTEMBER 2021
* fixed some critical problems
    * yml nerve not loading back as an AI::Perceptron::Simple object
    * fix docs: missing parameter $nerve for:
        * save_perceptron
        * save_perceptron_yaml
* changed die to croak for file opening

Version 1.03    9 SEPTEMBER 2021
* data processing subroutines available:
    * shuffle data
    * added import tag ":process_data"
* added more useful data to the confusion matrix:
    * sums of columns and rows to make it look more classic :)
    * more_stats option to show more stats:
        * precision, specificity, F1 score, negative predicted value, false negative rate, false positive rate
        * false discovery rate, false omission rate, balanced accuracy

Version 1.02    26 AUGUST 2021
* minimum perl version changed to 5.8.1 due to YAML
* fix test for display_confusion_matrix
    * modifier "n" ( >= perl 5.22 ) changed to primitive '?:', 5.22 is too high
    * fixed inaccurate test for output
* YAML (nerve file) for portability supported
    * made subroutines exportable, as the names are too long
       * :local_data
       * :portable_data
* cleaned & refactored codes
    * &display_confusion_matrix
* improved the documentation

Version 1.01    24 AUGUST 2021
* Fixed some technical issues
    * fixed test scripts not running in the correct sequence
        * must be creation -> train -> validate/test
    * local::lib issue should be fixed by now

Version 1.00    23 AUGUST 2021
* The following features were implemented over the course of time (see also My::Perceptron v0.04 on github):
    * create perceptron
    * process data: &train method
        * read csv - for training stage
    * save and load the perceptron

    * output algorithm for train
        * read and calculate data line by line
    * validate method
        * read csv bulk
        * write predicted values into original file
        * write predicted values into new file
    * test method
        * read csv bulk
        * write predicted values into original file
        * write predicted values into new file

    * confusion matrix
        * read only expected and predicted columns, line by line
        * return a hash of data
            * TP, TN, FP, FN
            * total entries
            * accuracy
            * sensitivity
    * display confusion matrix data to console
        * use Text::Matrix

    * synonyms
        * synonyms MUST call the actual subroutines and not be copy-pasted!
        * train: tame, exercise
        * validate: take_mock_exam, take_lab_test
        * test:  take_real_exam, work_in_real_world
        * generate_confusion_matrix: get_exam_results
        * display_confusion_matrix: display_exam_results
        * save_perceptron: preserve
        * load_perceptron: revive



LICENSE

		       The Artistic License 2.0

	    Copyright (c) 2000-2006, The Perl Foundation.

     Everyone is permitted to copy and distribute verbatim copies
      of this license document, but changing it is not allowed.

Preamble

This license establishes the terms under which a given free software
Package may be copied, modified, distributed, and/or redistributed.
The intent is that the Copyright Holder maintains some artistic
control over the development of that Package while still keeping the
Package available as open source and free software.

You are always permitted to make arrangements wholly outside of this
license directly with the Copyright Holder of a given Package.  If the
terms of this license do not permit the full use that you propose to
make of the Package, you should contact the Copyright Holder and seek
a different licensing arrangement. 

Definitions

    "Copyright Holder" means the individual(s) or organization(s)
    named in the copyright notice for the entire Package.

    "Contributor" means any party that has contributed code or other
    material to the Package, in accordance with the Copyright Holder's
    procedures.

    "You" and "your" means any person who would like to copy,
    distribute, or modify the Package.

    "Package" means the collection of files distributed by the
    Copyright Holder, and derivatives of that collection and/or of
    those files. A given Package may consist of either the Standard
    Version, or a Modified Version.

    "Distribute" means providing a copy of the Package or making it
    accessible to anyone else, or in the case of a company or
    organization, to others outside of your company or organization.

    "Distributor Fee" means any fee that you charge for Distributing
    this Package or providing support for this Package to another
    party.  It does not mean licensing fees.

    "Standard Version" refers to the Package if it has not been
    modified, or has been modified only in ways explicitly requested
    by the Copyright Holder.

    "Modified Version" means the Package, if it has been changed, and
    such changes were not explicitly requested by the Copyright
    Holder. 

    "Original License" means this Artistic License as Distributed with
    the Standard Version of the Package, in its current version or as
    it may be modified by The Perl Foundation in the future.

    "Source" form means the source code, documentation source, and
    configuration files for the Package.

    "Compiled" form means the compiled bytecode, object code, binary,
    or any other form resulting from mechanical transformation or
    translation of the Source form.


Permission for Use and Modification Without Distribution

(1)  You are permitted to use the Standard Version and create and use
Modified Versions for any purpose without restriction, provided that
you do not Distribute the Modified Version.


Permissions for Redistribution of the Standard Version

(2)  You may Distribute verbatim copies of the Source form of the
Standard Version of this Package in any medium without restriction,
either gratis or for a Distributor Fee, provided that you duplicate
all of the original copyright notices and associated disclaimers.  At
your discretion, such verbatim copies may or may not include a
Compiled form of the Package.

(3)  You may apply any bug fixes, portability changes, and other
modifications made available from the Copyright Holder.  The resulting
Package will still be considered the Standard Version, and as such
will be subject to the Original License.


Distribution of Modified Versions of the Package as Source 

(4)  You may Distribute your Modified Version as Source (either gratis
or for a Distributor Fee, and with or without a Compiled form of the
Modified Version) provided that you clearly document how it differs
from the Standard Version, including, but not limited to, documenting
any non-standard features, executables, or modules, and provided that
you do at least ONE of the following:

    (a)  make the Modified Version available to the Copyright Holder
    of the Standard Version, under the Original License, so that the
    Copyright Holder may include your modifications in the Standard
    Version.

    (b)  ensure that installation of your Modified Version does not
    prevent the user installing or running the Standard Version. In
    addition, the Modified Version must bear a name that is different
    from the name of the Standard Version.

    (c)  allow anyone who receives a copy of the Modified Version to
    make the Source form of the Modified Version available to others
    under
		
	(i)  the Original License or

	(ii)  a license that permits the licensee to freely copy,
	modify and redistribute the Modified Version using the same
	licensing terms that apply to the copy that the licensee
	received, and requires that the Source form of the Modified
	Version, and of any works derived from it, be made freely
	available in that license fees are prohibited but Distributor
	Fees are allowed.


Distribution of Compiled Forms of the Standard Version 
or Modified Versions without the Source

(5)  You may Distribute Compiled forms of the Standard Version without
the Source, provided that you include complete instructions on how to
get the Source of the Standard Version.  Such instructions must be
valid at the time of your distribution.  If these instructions, at any
time while you are carrying out such distribution, become invalid, you
must provide new instructions on demand or cease further distribution.
If you provide valid instructions or cease distribution within thirty
days after you become aware that the instructions are invalid, then
you do not forfeit any of your rights under this license.

(6)  You may Distribute a Modified Version in Compiled form without
the Source, provided that you comply with Section 4 with respect to
the Source of the Modified Version.


Aggregating or Linking the Package 

(7)  You may aggregate the Package (either the Standard Version or
Modified Version) with other packages and Distribute the resulting
aggregation provided that you do not charge a licensing fee for the
Package.  Distributor Fees are permitted, and licensing fees for other
components in the aggregation are permitted. The terms of this license
apply to the use and Distribution of the Standard or Modified Versions
as included in the aggregation.

(8) You are permitted to link Modified and Standard Versions with
other works, to embed the Package in a larger work of your own, or to
build stand-alone binary or bytecode versions of applications that
include the Package, and Distribute the result without restriction,
provided the result does not expose a direct interface to the Package.


Items That are Not Considered Part of a Modified Version 

(9) Works (including, but not limited to, modules and scripts) that
merely extend or make use of the Package, do not, by themselves, cause
the Package to be a Modified Version.  In addition, such works are not
considered parts of the Package itself, and are not subject to the
terms of this license.


General Provisions

(10)  Any use, modification, and distribution of the Standard or
Modified Versions is governed by this Artistic License. By using,
modifying or distributing the Package, you accept this license. Do not
use, modify, or distribute the Package, if you do not accept this
license.

(11)  If your Modified Version has been derived from a Modified
Version made by someone other than you, you are nevertheless required
to ensure that your Modified Version complies with the requirements of
this license.

(12)  This license does not grant you the right to use any trademark,
service mark, tradename, or logo of the Copyright Holder.

(13)  This license includes the non-exclusive, worldwide,
free-of-charge patent license to make, have made, use, offer to sell,
sell, import and otherwise transfer the Package with respect to any
patent claims licensable by the Copyright Holder that are necessarily
infringed by the Package. If you institute patent litigation
(including a cross-claim or counterclaim) against any party alleging
that the Package constitutes direct or contributory patent
infringement, then this Artistic License to you shall terminate on the
date that such litigation is filed.

(14)  Disclaimer of Warranty:
THE PACKAGE IS PROVIDED BY THE COPYRIGHT HOLDER AND CONTRIBUTORS "AS
IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES. THE IMPLIED
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR
NON-INFRINGEMENT ARE DISCLAIMED TO THE EXTENT PERMITTED BY YOUR LOCAL
LAW. UNLESS REQUIRED BY LAW, NO COPYRIGHT HOLDER OR CONTRIBUTOR WILL
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES ARISING IN ANY WAY OUT OF THE USE OF THE PACKAGE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

META.json

{
   "abstract" : "unknown",
   "author" : [
      "Raphael Jong Jun Jie <ellednera@cpan.org>"
   ],
   "dynamic_config" : 1,
   "generated_by" : "ExtUtils::MakeMaker version 7.62, CPAN::Meta::Converter version 2.150010",
   "license" : [
      "artistic_2"
   ],
   "meta-spec" : {
      "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec",
      "version" : 2
   },
   "name" : "AI-Perceptron-Simple",
   "no_index" : {
      "directory" : [
         "t",
         "inc"
      ]
   },
   "prereqs" : {
      "build" : {
         "requires" : {
            "ExtUtils::MakeMaker" : "0"
         }
      },
      "configure" : {
         "requires" : {
            "ExtUtils::MakeMaker" : "0"
         }
      },
      "runtime" : {
         "requires" : {
            "Carp" : "0",
            "File::Basename" : "0",
            "List::Util" : "0",
            "Storable" : "0",
            "Text::CSV" : "2.01",
            "Text::Matrix" : "1.00",
            "YAML" : "0",
            "local::lib" : "0",
            "perl" : "5.008001",
            "utf8" : "0"
         }
      },
      "test" : {
         "requires" : {
            "FindBin" : "0",
            "Test::More" : "0",
            "Test::Output" : "1.033"
         }
      }
   },
   "release_status" : "stable",
   "version" : "1.04",
   "x_serialization_backend" : "JSON::PP version 4.02"
}

META.yml

---
abstract: unknown
author:
  - 'Raphael Jong Jun Jie <ellednera@cpan.org>'
build_requires:
  ExtUtils::MakeMaker: '0'
  FindBin: '0'
  Test::More: '0'
  Test::Output: '1.033'
configure_requires:
  ExtUtils::MakeMaker: '0'
dynamic_config: 1
generated_by: 'ExtUtils::MakeMaker version 7.62, CPAN::Meta::Converter version 2.150010'
license: artistic_2
meta-spec:
  url: http://module-build.sourceforge.net/META-spec-v1.4.html
  version: '1.4'
name: AI-Perceptron-Simple
no_index:
  directory:
    - t
    - inc
requires:
  Carp: '0'
  File::Basename: '0'
  List::Util: '0'
  Storable: '0'
  Text::CSV: '2.01'
  Text::Matrix: '1.00'
  YAML: '0'
  local::lib: '0'
  perl: '5.008001'
  utf8: '0'
version: '1.04'
x_serialization_backend: 'CPAN::Meta::YAML version 0.018'

Makefile.PL

use 5.006;
use strict;
use warnings;
use ExtUtils::MakeMaker;

my %WriteMakefileArgs = (
    NAME             => 'AI::Perceptron::Simple',
    AUTHOR           => q{Raphael Jong Jun Jie <ellednera@cpan.org>},
    VERSION_FROM     => 'lib/AI/Perceptron/Simple.pm',
    #ABSTRACT_FROM    => 'lib/AI/Perceptron/Simple.pm',
    LICENSE          => 'artistic_2',
    MIN_PERL_VERSION => '5.008001',
    CONFIGURE_REQUIRES => {
        'ExtUtils::MakeMaker' => '0',
    },
    TEST_REQUIRES => {
        'Test::More' => '0',
        'Test::Output' => '1.033',
        'FindBin' => '0',
    },
    PREREQ_PM => {
        'utf8' => '0',
        'local::lib' => '0',
        'Carp' => '0',
        'Storable' => '0',
        'Text::CSV' => '2.01',
        'Text::Matrix' => '1.00',
        'YAML' => '0',
        'File::Basename' => '0',
        'List::Util' => '0',
    },
    dist  => { COMPRESS => 'gzip -9f', SUFFIX => 'gz', },
    clean => { FILES => 'AI-Perceptron-Simple-*' },
);

# Compatibility with old versions of ExtUtils::MakeMaker
unless (eval { ExtUtils::MakeMaker->VERSION('6.64'); 1 }) {
    my $test_requires = delete $WriteMakefileArgs{TEST_REQUIRES} || {};
    @{$WriteMakefileArgs{PREREQ_PM}}{keys %$test_requires} = values %$test_requires;
}

unless (eval { ExtUtils::MakeMaker->VERSION('6.55_03'); 1 }) {
    my $build_requires = delete $WriteMakefileArgs{BUILD_REQUIRES} || {};
    @{$WriteMakefileArgs{PREREQ_PM}}{keys %$build_requires} = values %$build_requires;
}

delete $WriteMakefileArgs{CONFIGURE_REQUIRES}
    unless eval { ExtUtils::MakeMaker->VERSION('6.52'); 1 };
delete $WriteMakefileArgs{MIN_PERL_VERSION}
    unless eval { ExtUtils::MakeMaker->VERSION('6.48'); 1 };
delete $WriteMakefileArgs{LICENSE}
    unless eval { ExtUtils::MakeMaker->VERSION('6.31'); 1 };

WriteMakefile(%WriteMakefileArgs);

README

A Newbie Friendly Module to Create, Train, Validate and Test Perceptrons / Neurons

This module provides methods to build, train, validate and test a perceptron. It can also save the data of the perceptron for future use in any actual AI programs.

This module also aims to help newbies grasp the concepts of the perceptron, training, validation and testing as much as possible. Hence, all the methods and subroutines in this module are decoupled as much as possible so that the actual script...
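
For a quick taste of the interface described in the documentation, a minimal session might look roughly like the sketch below (the file, attribute and header names here are made up for illustration only):

    use AI::Perceptron::Simple;

    # attribute names and CSV headers below are only examples
    my $nerve = AI::Perceptron::Simple->new( {
        initial_value => 0.01,
        attribs       => [ 'has_stem', 'is_red', 'is_round' ],
    } );

    $nerve->train( "train.csv", "is_apple", "apple.nerve" );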

INSTALLATION

To install this module, run the following commands:

	perl Makefile.PL
	make
	make test
	make install

SUPPORT AND DOCUMENTATION

After installing, you can find documentation for this module with the
perldoc command.

    perldoc AI::Perceptron::Simple

You can also look for information at:

    RT, CPAN's request tracker (report bugs here)
        https://rt.cpan.org/NoAuth/Bugs.html?Dist=AI-Perceptron-Simple

    CPAN Ratings
        https://cpanratings.perl.org/d/AI-Perceptron-Simple

    Search CPAN
        https://metacpan.org/release/AI-Perceptron-Simple


LICENSE AND COPYRIGHT

This software is Copyright (c) 2021 by Raphael Jong Jun Jie.

This is free software, licensed under:

  The Artistic License 2.0 (GPL Compatible)

docs/AI-Perceptron-Simple-1.04.html

<title></title>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<link rev="made" href="mailto:root@localhost" />
</head>

<body>



<ul id="index">
  <li><a href="#NAME">NAME</a></li>
  <li><a href="#VERSION">VERSION</a></li>
  <li><a href="#SYNOPSIS">SYNOPSIS</a></li>
  <li><a href="#EXPORT">EXPORT</a></li>
  <li><a href="#DESCRIPTION">DESCRIPTION</a></li>
  <li><a href="#CONVENTIONS-USED">CONVENTIONS USED</a></li>
  <li><a href="#DATASET-STRUCTURE">DATASET STRUCTURE</a></li>
  <li><a href="#PERCEPTRON-DATA">PERCEPTRON DATA</a></li>
  <li><a href="#DATA-PROCESSING-RELATED-SUBROUTINES">DATA PROCESSING RELATED SUBROUTINES</a>
    <ul>
      <li><a href="#shuffle_stimuli">shuffle_stimuli ( ... )</a></li>
      <li><a href="#shuffle_data-original_data-shuffled_1-shuffled_2">shuffle_data ( $original_data =&gt; $shuffled_1, $shuffled_2, ... )</a></li>
      <li><a href="#shuffle_data-ORIGINAL_DATA-shuffled_1-shuffled_2">shuffle_data ( ORIGINAL_DATA, $shuffled_1, $shuffled_2, ... )</a></li>
    </ul>
  </li>
  <li><a href="#CREATION-RELATED-SUBROUTINES-METHODS">CREATION RELATED SUBROUTINES/METHODS</a>
    <ul>
      <li><a href="#new-options">new ( \%options )</a></li>
      <li><a href="#get_attributes">get_attributes</a></li>
      <li><a href="#learning_rate-value">learning_rate ( $value )</a></li>
      <li><a href="#learning_rate">learning_rate</a></li>
      <li><a href="#threshold-value">threshold ( $value )</a></li>
      <li><a href="#threshold">threshold</a></li>
    </ul>
  </li>
  <li><a href="#TRAINING-RELATED-SUBROUTINES-METHODS">TRAINING RELATED SUBROUTINES/METHODS</a>
    <ul>
      <li><a href="#tame">tame ( ... )</a></li>
      <li><a href="#exercise">exercise ( ... )</a></li>
      <li><a href="#train-stimuli_train_csv-expected_output_header-save_nerve_to_file">train ( $stimuli_train_csv, $expected_output_header, $save_nerve_to_file )</a></li>
      <li><a href="#train-stimuli_train_csv-expected_output_header-save_nerve_to_file-display_stats-identifier">train ( $stimuli_train_csv, $expected_output_header, $save_nerve_to_file, $display_stats, $identifier )</a></li>
      <li><a href="#calculate_output-self-stimuli_hash">&amp;_calculate_output( $self, \%stimuli_hash )</a></li>
      <li><a href="#tune-self-stimuli_hash-tune_up_or_down">&amp;_tune( $self, \%stimuli_hash, $tune_up_or_down )</a></li>
    </ul>
  </li>
  <li><a href="#VALIDATION-RELATED-METHODS">VALIDATION RELATED METHODS</a>
    <ul>
      <li><a href="#take_mock_exam">take_mock_exam (...)</a></li>
      <li><a href="#take_lab_test">take_lab_test (...)</a></li>
      <li><a href="#validate-options">validate ( \%options )</a></li>
    </ul>
  </li>
  <li><a href="#TESTING-RELATED-SUBROUTINES-METHODS">TESTING RELATED SUBROUTINES/METHODS</a>
    <ul>
      <li><a href="#take_real_exam">take_real_exam (...)</a></li>
      <li><a href="#work_in_real_world">work_in_real_world (...)</a></li>
      <li><a href="#test-options">test ( \%options )</a></li>
      <li><a href="#real_validate_or_test-data_hash_ref">_real_validate_or_test ( $data_hash_ref )</a></li>
      <li><a href="#fill_predicted_values-self-stimuli_validate-predicted_index-aoa">&amp;_fill_predicted_values ( $self, $stimuli_validate, $predicted_index, $aoa )</a></li>
    </ul>
  </li>
  <li><a href="#RESULTS-RELATED-SUBROUTINES-METHODS">RESULTS RELATED SUBROUTINES/METHODS</a>
    <ul>
      <li><a href="#get_exam_results">get_exam_results ( ... )</a></li>
      <li><a href="#get_confusion_matrix-options">get_confusion_matrix ( \%options )</a></li>
      <li><a href="#collect_stats-options">&amp;_collect_stats ( \%options )</a></li>
      <li><a href="#calculate_total_entries-c_matrix_ref">&amp;_calculate_total_entries ( $c_matrix_ref )</a></li>
      <li><a href="#calculate_accuracy-c_matrix_ref">&amp;_calculate_accuracy ( $c_matrix_ref )</a></li>
      <li><a href="#calculate_sensitivity-c_matrix_ref">&amp;_calculate_sensitivity ( $c_matrix_ref )</a></li>
      <li><a href="#calculate_precision-c_matrix_ref">&amp;_calculate_precision ( $c_matrix_ref )</a></li>
      <li><a href="#calculate_specificity-c_matrix_ref">&amp;_calculate_specificity ( $c_matrix_ref )</a></li>
      <li><a href="#calculate_f1_score-c_matrix_ref">&amp;_calculate_f1_score ( $c_matrix_ref )</a></li>
      <li><a href="#calculate_negative_predicted_value-c_matrix_ref">&amp;_calculate_negative_predicted_value( $c_matrix_ref )</a></li>
      <li><a href="#calculate_false_negative_rate-c_matrix_ref">&amp;_calculate_false_negative_rate( $c_matrix_ref )</a></li>
      <li><a href="#calculate_false_positive_rate-c_matrix_ref">&amp;_calculate_false_positive_rate( $c_matrix_ref )</a></li>
      <li><a href="#calculate_false_discovery_rate-c_matrix_ref">&amp;_calculate_false_discovery_rate( $c_matrix_ref )</a></li>
      <li><a href="#calculate_false_omission_rate-c_matrix_ref">&amp;_calculate_false_omission_rate( $c_matrix_ref )</a></li>
      <li><a href="#calculate_balanced_accuracy-c_matrix_ref">&amp;_calculate_balanced_accuracy( $c_matrix_ref )</a></li>
      <li><a href="#display_exam_results">display_exam_results ( ... )</a></li>
      <li><a href="#display_confusion_matrix-confusion_matrix-labels">display_confusion_matrix ( \%confusion_matrix, \%labels )</a></li>
      <li><a href="#build_matrix-c_matrix-labels">&amp;_build_matrix ( $c_matrix, $labels )</a></li>
      <li><a href="#print_extended_matrix-matrix-c_matrix">&amp;_print_extended_matrix ( $matrix, $c_matrix )</a></li>
    </ul>
  </li>
  <li><a href="#NERVE-DATA-RELATED-SUBROUTINES">NERVE DATA RELATED SUBROUTINES</a>
    <ul>
      <li><a href="#preserve">preserve ( ... )</a></li>
      <li><a href="#save_perceptron-nerve-nerve_file">save_perceptron ( $nerve, $nerve_file )</a></li>
      <li><a href="#revive">revive (...)</a></li>
      <li><a href="#load_perceptron-nerve_file_to_load">load_perceptron ( $nerve_file_to_load )</a></li>
    </ul>
  </li>
  <li><a href="#NERVE-PORTABILITY-RELATED-SUBROUTINES">NERVE PORTABILITY RELATED SUBROUTINES</a>
    <ul>
      <li><a href="#preserve_as_yaml">preserve_as_yaml ( ... )</a></li>
      <li><a href="#save_perceptron_yaml-nerve-yaml_nerve_file">save_perceptron_yaml ( $nerve, $yaml_nerve_file )</a></li>
      <li><a href="#revive_from_yaml">revive_from_yaml (...)</a></li>
      <li><a href="#load_perceptron_yaml-yaml_nerve_file">load_perceptron_yaml ( $yaml_nerve_file )</a></li>
    </ul>
  </li>
  <li><a href="#TO-DO">TO DO</a></li>
  <li><a href="#KNOWN-ISSUES">KNOWN ISSUES</a>
    <ul>
      <li><a href="#Portability-of-Nerve-Data">Portability of Nerve Data</a></li>
    </ul>
  </li>
  <li><a href="#AUTHOR">AUTHOR</a></li>
  <li><a href="#BUGS">BUGS</a></li>
  <li><a href="#SUPPORT">SUPPORT</a></li>
  <li><a href="#ACKNOWLEDGEMENTS">ACKNOWLEDGEMENTS</a></li>
  <li><a href="#SEE-ALSO">SEE ALSO</a></li>
  <li><a href="#LICENSE-AND-COPYRIGHT">LICENSE AND COPYRIGHT</a></li>
</ul>

<h1 id="NAME">NAME</h1>

<p>AI::Perceptron::Simple</p>

<p>A Newbie Friendly Module to Create, Train, Validate and Test Perceptrons / Neurons</p>

<h1 id="VERSION">VERSION</h1>

<p>Version 1.04</p>

<h1 id="SYNOPSIS">SYNOPSIS</h1>

<pre><code>    #!/usr/bin/perl

    use AI::Perceptron::Simple qw(...);

    # create a new nerve / neuron / perceptron
    $nerve = AI::Perceptron::Simple-&gt;new( {
        initial_value =&gt; $size_of_each_dendrite,
        learning_rate =&gt; 0.3, # optional
        threshold =&gt; 0.85, # optional
        attribs =&gt; \@dendrites,
    } );

    # train
    $nerve-&gt;tame( ... );
    $nerve-&gt;exercise( ... );
    $nerve-&gt;train( $training_data_csv, $expected_column_name, $save_nerve_to );
    # or
    $nerve-&gt;train(
        $training_data_csv, $expected_column_name, $save_nerve_to, 
        $show_progress, $identifier); # these two parameters must go together


    # validate
    $nerve-&gt;take_lab_test( ... );
    $nerve-&gt;take_mock_exam( ... );

    # fill results to original file
    $nerve-&gt;validate( { 
        stimuli_validate =&gt; $validation_data_csv, 
        predicted_column_index =&gt; 4,
     } );
    # or        
    # fill results to a new file
    $nerve-&gt;validate( {
        stimuli_validate =&gt; $validation_data_csv,
        predicted_column_index =&gt; 4,
        results_write_to =&gt; $new_csv
    } );


    # test - see &quot;validate&quot; method, same usage
    $nerve-&gt;take_real_exam( ... );
    $nerve-&gt;work_in_real_world( ... );
    $nerve-&gt;test( ... );


    # confusion matrix
    my %c_matrix = $nerve-&gt;get_confusion_matrix( { 
        full_data_file =&gt; $file_csv, 
        actual_output_header =&gt; $header_name,
        predicted_output_header =&gt; $predicted_header_name,
        more_stats =&gt; 1, # optional
    } );

    # accessing the confusion matrix
    my @keys = qw( true_positive true_negative false_positive false_negative 
                   total_entries accuracy sensitivity );
    for ( @keys ) {
        print $_, &quot; =&gt; &quot;, $c_matrix{ $_ }, &quot;\n&quot;;
    }

    # output to console
    $nerve-&gt;display_confusion_matrix( \%c_matrix, { 
        zero_as =&gt; &quot;bad apples&quot;, # cat  milk   green  etc.
        one_as =&gt; &quot;good apples&quot;, # dog  honey  pink   etc.
    } );


    # saving and loading data of perceptron locally
    # NOTE: nerve data is automatically saved after each training process
    use AI::Perceptron::Simple &quot;:local_data&quot;;

    my $nerve_file = &quot;apples.nerve&quot;;
    preserve( ... );
    save_perceptron( $nerve, $nerve_file );

    # load data of perceptron for use in actual program
    my $apple_nerve = revive( ... );
    my $apple_nerve = load_perceptron( $nerve_file );


    # for portability of nerve data
    use AI::Perceptron::Simple &quot;:portable_data&quot;;

    my $yaml_nerve_file = &quot;pearls.yaml&quot;;
    preserve_as_yaml ( ... );
    save_perceptron_yaml ( $nerve, $yaml_nerve_file );

    # load nerve data on the other computer
    my $pearl_nerve = revive_from_yaml ( ... );
    my $pearl_nerve = load_perceptron_yaml ( $yaml_nerve_file );


    # processing data
    use AI::Perceptron::Simple &quot;:process_data&quot;;
    shuffle_stimuli ( ... );
    shuffle_data ( ORIGINAL_STIMULI, $new_file_1, $new_file_2, ... );
    shuffle_data ( $original_stimuli =&gt; $new_file_1, $new_file_2, ... );</code></pre>

<h1 id="EXPORT">EXPORT</h1>

<p>None by default.</p>

<p>All the subroutines from <code>DATA PROCESSING RELATED SUBROUTINES</code>, <code>NERVE DATA RELATED SUBROUTINES</code> and <code>NERVE PORTABILITY RELATED SUBROUTINES</code> sections are importable through tags or manually specifying them.</p>

<p>The tags available include the following:</p>

<dl>

docs/specifications.t

#   ? implement shuffling system into training stage, bulk data processing   
#   ? Data processing: splitting data, k-fold
#   -...
#
#
############ "flow" of the codes ############

# these three steps could be done in separate scripts if necessary
# &train and &validate could be put inside a loop or something
# the parameters make more sense when they are taken from @ARGV
    # so when it's the first time training, it will create the nerve_file,
    # the second time and up it will directly override that file since everything is read from it
    # ... anyway :) after all, the training stage wasn't meant to be a fully working program, so it shouldn't be a problem
# just assume that 
$perceptron->train( $stimuli_train, $save_nerve_to_file ); 
    # reads training stimuli from csv
    # tune attributes based on csv data
        # calls the same subroutine to do the calculation
    # shouldn't give any output upon completion
    # should save a copy of itself into a new file
    # returns the nerve's data filename to be used in validate()
        # these two can go into a loop with conditions checking
        # which means that we can actually write this
            # $perceptron->validate( $stimuli_validate, 
            #                        $perceptron->train( $stimuli_train, $save_nerve_to_file ) 
            #                       );
            # and then check the confusion matrix, if not satisfied, run the loop again :)
$perceptron->validate( $stimuli_validate, $nerve_data_to_read );
$perceptron->test( $stimuli_test ); # loads nerve data from data file, turn into a object, then do the following:
    # reads from csv :
        # validation stimuli
        # testing stimuli
    # both will call the same subroutine to do calculation
    # both will write predicted data into the original data file

# show results ie confusion matrix (TP-true positive, TN-true negative, FP-false positive, FN-false negative)
# this should only be done during validation and testing
$perceptron->generate_confusion_matrix( { 1 => $csv_header_true, 0 => $csv_header_false } );
    # calculates the 4 thingy based on the current data on hand (RAM), don't read from file again, it shouldn't be a problem
        # returns a hash
    # ie it must be used together with validate() and test() to avoid problems
        # ie validate() and test() must be in different scripts, which makes sense
        # unless, create 3 similar objects to do the work in one go
        
# save data of the trained perceptron
$perceptron->save_data( $data_file );
    # see train() on saving copy of the perceptron

# load data of perceptron for use in actual program
My::Perceptron::load_data( $data_file );
    # loads the perceptron and returns the actual My::Perceptron object
        # should work though as Storable claims it can do that


# besiyata d'shmaya




lib/AI/Perceptron/Simple.pm

our $VERSION = '1.04';

# default values
use constant LEARNING_RATE => 0.05;
use constant THRESHOLD => 0.5;
use constant TUNE_UP => 1;
use constant TUNE_DOWN => 0;

=head1 SYNOPSIS

    #!/usr/bin/perl

    use AI::Perceptron::Simple qw(...);

    # create a new nerve / neuron / perceptron
    $nerve = AI::Perceptron::Simple->new( {
        initial_value => $size_of_each_dendrite,
        learning_rate => 0.3, # optional
        threshold => 0.85, # optional
        attribs => \@dendrites,
    } );

    # train
    $nerve->tame( ... );
    $nerve->exercise( ... );
    $nerve->train( $training_data_csv, $expected_column_name, $save_nerve_to );
    # or
    $nerve->train(
        $training_data_csv, $expected_column_name, $save_nerve_to, 
        $show_progress, $identifier); # these two parameters must go together


    # validate
    $nerve->take_lab_test( ... );
    $nerve->take_mock_exam( ... );

    # fill results to original file
    $nerve->validate( { 
        stimuli_validate => $validation_data_csv, 
        predicted_column_index => 4,
     } );
    # or        
    # fill results to a new file
    $nerve->validate( {
        stimuli_validate => $validation_data_csv,
        predicted_column_index => 4,
        results_write_to => $new_csv
    } );


    # test - see "validate" method, same usage
    $nerve->take_real_exam( ... );
    $nerve->work_in_real_world( ... );
    $nerve->test( ... );


    # confusion matrix
    my %c_matrix = $nerve->get_confusion_matrix( { 
        full_data_file => $file_csv, 
        actual_output_header => $header_name,
        predicted_output_header => $predicted_header_name,
        more_stats => 1, # optional
    } );

    # accessing the confusion matrix
    my @keys = qw( true_positive true_negative false_positive false_negative 
                   total_entries accuracy sensitivity );
    for ( @keys ) {
        print $_, " => ", $c_matrix{ $_ }, "\n";
    }

    # output to console
    $nerve->display_confusion_matrix( \%c_matrix, { 
        zero_as => "bad apples", # cat  milk   green  etc.
        one_as => "good apples", # dog  honey  pink   etc.
    } );


    # saving and loading data of perceptron locally
    # NOTE: nerve data is automatically saved after each training process
    use AI::Perceptron::Simple ":local_data";

    my $nerve_file = "apples.nerve";
    preserve( ... );
    save_perceptron( $nerve, $nerve_file );

    # load data of perceptron for use in actual program
    my $apple_nerve = revive( ... );
    my $apple_nerve = load_perceptron( $nerve_file );


    # for portability of nerve data
    use AI::Perceptron::Simple ":portable_data";

    my $yaml_nerve_file = "pearls.yaml";
    preserve_as_yaml ( ... );
    save_perceptron_yaml ( $nerve, $yaml_nerve_file );

    # load nerve data on the other computer
    my $pearl_nerve = revive_from_yaml ( ... );
    my $pearl_nerve = load_perceptron_yaml ( $yaml_nerve_file );


    # processing data
    use AI::Perceptron::Simple ":process_data";
    shuffle_stimuli ( ... );
    shuffle_data ( ORIGINAL_STIMULI, $new_file_1, $new_file_2, ... );
    shuffle_data ( $original_stimuli => $new_file_1, $new_file_2, ... );

=head1 EXPORT

None by default.

All the subroutines from C<DATA PROCESSING RELATED SUBROUTINES>, C<NERVE DATA RELATED SUBROUTINES> and C<NERVE PORTABILITY RELATED SUBROUTINES> sections are importable through tags or manually specifying them.

The tags available include the following:

=over 4

lib/AI/Perceptron/Simple.pm

=item C<:portable_data> - subroutines under C<NERVE PORTABILITY RELATED SUBROUTINES> section.

=back

Most of the other subroutines and methods are OO.

=cut

use Exporter qw( import );
our @EXPORT_OK = qw( 
    shuffle_data shuffle_stimuli
    preserve save_perceptron revive load_perceptron
    preserve_as_yaml save_perceptron_yaml revive_from_yaml load_perceptron_yaml
);
our %EXPORT_TAGS = ( 
    process_data => [ qw( shuffle_data shuffle_stimuli ) ],
    local_data => [ qw( preserve save_perceptron revive load_perceptron ) ],
    portable_data => [ qw( preserve_as_yaml save_perceptron_yaml revive_from_yaml load_perceptron_yaml ) ],
);

=head1 DESCRIPTION

This module provides methods to build, train, validate and test a perceptron. It can also save the data of the perceptron for future use in any actual AI programs.

This module also aims to help newbies grasp the concepts of the perceptron, training, validation and testing as much as possible. Hence, all the methods and subroutines in this module are decoupled as much as possible so that the actual script...

The implementation here is very basic: it only takes in the inputs of the dendrites and calculates the output. If the output is equal to or higher than the threshold, the final result (category) will be 1, aka the perceptron is activated. If not, then the result will be 0 (not activated).
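
For illustration only, the decision rule boils down to something like the sketch below (the variable names here are made up and are not part of the module's API; the real calculation lives in C<_calculate_output>):

    # minimal sketch of the activation rule, with made-up weights and stimuli
    my %weight    = ( has_stem => 0.35, is_red => 0.45 ); # dendrite sizes
    my %input     = ( has_stem => 1,    is_red => 0    ); # binary stimuli
    my $threshold = 0.5;

    my $sum = 0;
    $sum += $weight{ $_ } for grep { $input{ $_ } } keys %weight;
    my $category = $sum >= $threshold ? 1 : 0; # here 0.35 < 0.5, so 0 ie not activated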

lib/AI/Perceptron/Simple.pm


=head2 shuffle_data ( $original_data => $shuffled_1, $shuffled_2, ... )

=head2 shuffle_data ( ORIGINAL_DATA, $shuffled_1, $shuffled_2, ... )

Shuffles C<$original_data> or C<ORIGINAL_DATA> and saves them to other files.

=cut

sub shuffle_stimuli {
    shuffle_data( @_ );
}

sub shuffle_data {
    my $stimuli = shift or croak "Please specify the original file name";
    my @shuffled_stimuli_names = @_ 
        or croak "Please specify the output files for the shuffled data";
    
    my @aoa;
    for ( @shuffled_stimuli_names ) {
        # copied from _real_validate_or_test
        # open for shuffling
        my $aoa = csv (in => $stimuli, encoding => ":encoding(utf-8)");
        my $attrib_array_ref = shift @$aoa; # 'remove' the header, it's annoying :)
        @aoa = shuffle( @$aoa ); # this can only process actual array
        unshift @aoa, $attrib_array_ref; # put back the headers before saving file

        csv( in => \@aoa, out => $_, encoding => ":encoding(utf-8)" ) 
        and
        print "Saved shuffled data into ", basename($_), "!\n";

    }
}

=head1 CREATION RELATED SUBROUTINES/METHODS

=head2 new ( \%options )

Creates a brand new perceptron and initializes the value of each attribute / dendrite aka. weight. Think of it as the thickness or plasticity of the dendrites.

For C<%options>, the followings are needed unless mentioned:

lib/AI/Perceptron/Simple.pm


This is the passing rate to determine the neuron output (C<0> or C<1>).

Generally speaking, this value is usually between C<0> and C<1>. However, it all depends on your combination of values for the other options.

=back
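
For example, a nerve with three dendrites might be created like this (the attribute names and values are only an illustration; the C<learning_rate> and C<threshold> shown are the module's defaults):

    my $nerve = AI::Perceptron::Simple->new( {
        initial_value => 0.01, # starting weight of every dendrite
        learning_rate => 0.05, # optional, defaults to 0.05
        threshold     => 0.5,  # optional, defaults to 0.5
        attribs       => [ 'has_stem', 'is_red', 'is_round' ],
    } );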

=cut

sub new {
    my $class = shift;
    
    my $data_ref = shift;
    my %data = %{ $data_ref };
    
    # check keys
    $data{ learning_rate } = LEARNING_RATE if not exists $data{ learning_rate };
    $data{ threshold } = THRESHOLD if not exists $data{ threshold };
    
    #####
    # don't pack this key checking process into a subroutine for now
    # this is also used in &_real_validate_or_test
    my @missing_keys;
    for ( qw( initial_value attribs ) ) {
        push @missing_keys, $_ unless exists $data{ $_ };
    }
    
    croak "Missing keys: @missing_keys" if @missing_keys;
    #####
    
    # continue to process the rest of the data
    my %attributes;
    for ( @{ $data{ attribs } } ) {
        $attributes{ $_ } = $data{ initial_value };
    }
    
    my %processed_data = (
        learning_rate => $data{ learning_rate },
        threshold => $data{ threshold },
        attributes_hash_ref => \%attributes,
    );
    
    bless \%processed_data, $class;
}

=head2 get_attributes

Obtains a hash of all the attributes of the perceptron.

=cut

sub get_attributes {
    my $self = shift;
    %{ $self->{attributes_hash_ref} };
}

=head2 learning_rate ( $value )

=head2 learning_rate

If C<$value> is given, sets the learning rate to C<$value>. If not, then it returns the learning rate.

=cut

sub learning_rate {
    my $self = shift;
    if ( @_ ) {
        $self->{learning_rate} = shift;
    } else {
        $self->{learning_rate}
    }
}

=head2 threshold ( $value )

=head2 threshold

If C<$value> is given, sets the threshold / passing rate to C<$value>. If not, then it returns the passing rate.

=cut

sub threshold {
    my $self = shift;
    if ( @_ ) {
        $self->{ threshold } = shift;
    } else {
        $self->{ threshold };
    }
}

=head1 TRAINING RELATED SUBROUTINES/METHODS

All the training methods here have the same parameters as the two forms of the actual C<train> method and they all do the same thing. They are also used in the same way.

=head2 tame ( ... )

=head2 exercise ( ... )

lib/AI/Perceptron/Simple.pm


The new sum of all C<weightage * input> after fine-tuning the nerve

=back

If C<$display_stats> is specified ie. set to C<1>, then you B<MUST> specify the C<$identifier>. C<$identifier> is the column / header name that is used to identify a specific row of data in C<$stimuli_train_csv>.

=cut

sub tame {
    train( @_ );
}

sub exercise {
    train( @_ );
}

sub train {
    my $self = shift;
    my( $stimuli_train_csv, $expected_output_header, $save_nerve_to_file, $display_stats, $identifier ) = @_;
    
    $display_stats = 0 if not defined $display_stats;
    if ( $display_stats and not defined $identifier ) {
        croak "Please specifiy a string for \$identifier if you are trying to display stats";
    }
    
    # CSV processing is all according to the documentation of Text::CSV
    open my $data_fh, "<:encoding(UTF-8)", $stimuli_train_csv 
        or croak "Can't open $stimuli_train_csv: $!";
    
    my $csv = Text::CSV->new( {auto_diag => 1, binary => 1} );
    
    my $attrib = $csv->getline($data_fh);
    $csv->column_names( $attrib );

    # individual row
    ROW: while ( my $row = $csv->getline_hr($data_fh) ) {
        # print $row->{book_name}, " -> ";
        # print $row->{$expected_output_header} ? "意林\n" : "魅丽优品\n";

        # calculate the output and fine tune parameters if necessary
        while (1) {
            my $output = _calculate_output( $self, $row );
            
            #print "Sum = ", $output, "\n";
            
            # $expected_output_header to be checked together over here
            # if output >= threshold
            #    then category/result aka output is considered 1
            # else output considered 0
            
            # output expected/actual tuning
            #    0       0             -
            #    1       0             down
            #    0       1             up
            #    1       1             -
            if ( ($output >= $self->threshold) and ( $row->{$expected_output_header} eq 0 ) ) {
                _tune( $self, $row, TUNE_DOWN );

                if ( $display_stats ) {
                    print $row->{$identifier}, "\n";
                    print "   -> TUNED DOWN";
                    print "   Old sum = ", $output;
                    print "   Threshold = ", $self->threshold;
                    print "   New Sum = ", _calculate_output( $self, $row ), "\n";                
                }
                
            } elsif ( ($output < $self->threshold) and ( $row->{$expected_output_header} eq 1 ) ) {
                _tune( $self, $row, TUNE_UP );
                
                if ( $display_stats ) {
                    print $row->{$identifier}, "\n";
                    print "   -> TUNED UP";
                    print "   Old sum = ", $output;
                    print "   Threshold = ", $self->threshold;
                    print "   New Sum = ", _calculate_output( $self, $row ), "\n";
                }

            } elsif ( ($output < $self->threshold) and ( $row->{$expected_output_header} eq 0 ) ) {
            
                if ( $display_stats ) {
                    print $row->{$identifier}, "\n";
                    print "   -> NO TUNING NEEDED";
                    print "   Sum = ", _calculate_output( $self, $row );
                    print "   Threshold = ", $self->threshold, "\n";
                }
                
                next ROW;
                
            } elsif ( ($output >= $self->threshold) and ( $row->{$expected_output_header} eq 1 ) ) {
            
                if ( $display_stats ) {
                    print $row->{$identifier}, "\n";
                    print "   -> NO TUNING NEEDED";
                    print "   Sum = ", _calculate_output( $self, $row );
                    print "   Threshold = ", $self->threshold, "\n";
                }
                
                next ROW;
            } #else { print "Something's not right\n'" }
        }
    }

    close $data_fh;
    
    save_perceptron( $self, $save_nerve_to_file ); # this doesn't return anything
    
}

=head2 &_calculate_output( $self, \%stimuli_hash )

Calculates and returns the C<sum(weightage*input)> for each individual row of data. Actually, it just adds up all the relevant weights, since the C<input> is always 1 for now :)

C<%stimuli_hash> is the actual data to be used for training. It might contain useless columns.

This will get all the available dendrites using the C<get_attributes> method and then use all the keys ie. headers to access the corresponding values.

This subroutine should be called in the procedural way for now.
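
As a worked example with made-up numbers:

    # dendrites: has_stem => 0.40, is_red => 0.30, is_round => 0.25
    # stimuli  : has_stem => 1,    is_red => 0,    is_round => 1
    # returned sum = 0.40 + 0.25 = 0.65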

=cut

sub _calculate_output {
    my $self = shift; 
    my $stimuli_hash_ref = shift;
    
    my %dendrites = $self->get_attributes;
    my $sum = 0; # this is the output; start at 0 so a row with no active dendrites still gives a defined sum
    
    for ( keys %dendrites ) {
        # if input is 1 for a dendrite, then calculate it
        if ( $stimuli_hash_ref->{ $_ } ) {
            # $sum += $dendrites{ $_ } * 1; # no need, if 1 then it is always the value itself
            # this is very efficient, nice :)
            $sum += $dendrites{ $_ };
        }
    }
    
    $sum;
}

=head2 &_tune( $self, \%stimuli_hash, $tune_up_or_down )

Fine tunes the nerve. This will directly alter the attributes values in C<$self> according to the attributes / dendrites specified in C<new>.

The C<%stimuli_hash> here is the same as the one in the C<_calculate_output> method.

C<%stimuli_hash> will be used to determine which dendrite in C<$self> needs to be fine-tuned. As long as the value of any key in C<%stimuli_hash> returns true (1) then that dendrite in C<$self> will be tuned.

lib/AI/Perceptron/Simple.pm


Value is C<0>

=back

This subroutine should be called in the procedural way for now.
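
As a worked example with made-up numbers: using the default learning rate of C<0.05>, tuning up a dendrite that currently holds C<0.50> gives C<0.55>, while tuning it down gives C<0.45>. Dendrites whose corresponding stimuli value is C<0> are left untouched.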

=cut

sub _tune {
    my $self = shift; 
    my ( $stimuli_hash_ref, $tuning_status ) = @_;

    my %dendrites = $self->get_attributes;

    for ( keys %dendrites ) {
        if ( $tuning_status == TUNE_DOWN ) {
            
            if ( $stimuli_hash_ref->{ $_ } ) { # must check this one, it must be 1 before we can alter the actual dendrite size in the nerve :)
                $self->{ attributes_hash_ref }{ $_ } -= $self->learning_rate;
            }
            #print $_, ": ", $self->{ attributes_hash_ref }{ $_ }, "\n";
            
        } elsif ( $tuning_status == TUNE_UP ) {
            
            if ( $stimuli_hash_ref->{ $_ } ) {
                $self->{ attributes_hash_ref }{ $_ } += $self->learning_rate;
            }
            #print $_, ": ", $self->{ attributes_hash_ref }{ $_ }, "\n";
            
        }
    }

}

=head1 VALIDATION RELATED METHODS

All the validation methods here have the same parameters as the actual C<validate> method and they all do the same stuff. They are also used in the same way.

=head2 take_mock_exam (...)

=head2 take_lab_test (...)

lib/AI/Perceptron/Simple.pm


The default behaviour will write the predicted output back into C<stimuli_validate> ie the original data. The sequence of the data will be maintained.

=back

I<*This method will call C<_real_validate_or_test> to do the actual work.>

=cut

sub take_mock_exam {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

sub take_lab_test {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

sub validate {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

=head1 TESTING RELATED SUBROUTINES/METHODS

All the testing methods here have the same parameters as the actual C<test> method and they all do the same stuff. They are also used in the same way.

=head2 take_real_exam (...)

=head2 work_in_real_world (...)

lib/AI/Perceptron/Simple.pm

well the brain survives :)

This method works and behaves the same way as the C<validate> method. See C<validate> for the details.

I<*This method will call &_real_validate_or_test to do the actual work.>

=cut

# redirect to _real_validate_or_test
sub take_real_exam {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

sub work_in_real_world {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

sub test {
    my ( $self, $data_hash_ref ) = @_;
    $self->_real_validate_or_test( $data_hash_ref );
}

=head2 _real_validate_or_test ( $data_hash_ref )

This is where the actual validation or testing takes place. 

C<$data_hash_ref> is the list of parameters passed into the C<validate> or C<test> methods.

This is a B<method>, so use the OO way. This is one of the exceptions to the rules where private subroutines are treated as methods :)

=cut

sub _real_validate_or_test {

    my $self = shift;   my $data_hash_ref = shift;
    
    #####
    my @missing_keys;
    for ( qw( stimuli_validate predicted_column_index ) ) {
        push @missing_keys, $_ unless exists $data_hash_ref->{ $_ };
    }
    
    croak "Missing keys: @missing_keys" if @missing_keys;
    #####
    
    my $stimuli_validate = $data_hash_ref->{ stimuli_validate };
    my $predicted_index = $data_hash_ref->{ predicted_column_index };
    
    # actual processing starts here
    my $output_file = defined $data_hash_ref->{ results_write_to } 
                        ? $data_hash_ref->{ results_write_to }
                        : $stimuli_validate;
    
    # open for writing results
    my $aoa = csv (in => $stimuli_validate, encoding => ":encoding(utf-8)");
    
    my $attrib_array_ref = shift @$aoa; # 'remove' the header, it's annoying :)

    $aoa = _fill_predicted_values( $self, $stimuli_validate, $predicted_index, $aoa );

    # put back the array of headers before saving file
    unshift @$aoa, $attrib_array_ref;

    print "Saving data to $output_file\n";
    csv( in => $aoa, out => $output_file, encoding => ":encoding(utf-8)" );
    print "Done saving!\n";

}

=head2 &_fill_predicted_values ( $self, $stimuli_validate, $predicted_index, $aoa )

This is where the filling in of the predicted values takes place. Take note that the parameter names are the same as the ones used in the C<validate> and C<test> methods.

This subroutine should be called in the procedural way.

=cut

sub _fill_predicted_values {
    my ( $self, $stimuli_validate, $predicted_index, $aoa ) = @_;

    # CSV processing is all according to the documentation of Text::CSV
    open my $data_fh, "<:encoding(UTF-8)", $stimuli_validate 
        or croak "Can't open $stimuli_validate: $!";
    
    my $csv = Text::CSV->new( {auto_diag => 1, binary => 1} );
    
    my $attrib = $csv->getline($data_fh);
    
    $csv->column_names( $attrib );

    # individual row
    my $row = 0;
    while ( my $data = $csv->getline_hr($data_fh) ) {
        
        if ( _calculate_output( $self, $data )  >= $self->threshold ) {
            # write 1 into aoa
            $aoa->[ $row ][ $predicted_index ] = 1;
        } else {
            #write 0 into aoa
            $aoa->[ $row ][ $predicted_index ] = 0;
        }
        
        $row++;
    }
    
    close $data_fh;
    
    $aoa;
}

=head1 RESULTS RELATED SUBROUTINES/METHODS

This part is related to generating the confusion matrix.

=head2 get_exam_results ( ... )

The parameters and usage are the same as C<get_confusion_matrix>. See the next method.

lib/AI/Perceptron/Simple.pm

Optional.

Setting it to C<1> will process more stats that are usually not so important eg. C<precision>, C<specificity> and C<F1_Score>

=back
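
For example, the extra keys can then be read straight from the returned hash (the file and header names below are made up):

    my %c_matrix = $nerve->get_confusion_matrix( {
        full_data_file          => "results.csv",
        actual_output_header    => "is_novel",
        predicted_output_header => "predicted_is_novel",
        more_stats              => 1,
    } );

    for ( qw( precision specificity F1_Score ) ) {
        print $_, " => ", $c_matrix{ $_ }, "\n";
    }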

=cut

sub get_exam_results {

    my ( $self, $info ) = @_;
    
    $self->get_confusion_matrix( $info );
}

sub get_confusion_matrix {

    my ( $self, $info ) = @_;

    my %c_matrix = _collect_stats( $info ); # processes total_entries, accuracy, sensitivity etc
    
    %c_matrix;
}


=head2 &_collect_stats ( \%options )

Generates a confusion matrix hash based on the C<%options> given in the C<get_confusion_matrix> method.

=cut

sub _collect_stats {
    my $info = shift;
    my $file = $info->{ full_data_file };
    my $actual_header = $info->{ actual_output_header };
    my $predicted_header = $info->{ predicted_output_header };
    my $more_stats = defined ( $info->{ more_stats } ) ? 1 : 0;
    
    my %c_matrix = ( 
        true_positive => 0, true_negative => 0, false_positive => 0, false_negative => 0,
        accuracy => 0, sensitivity => 0
    );
    
    # CSV processing is all according to the documentation of Text::CSV
    open my $data_fh, "<:encoding(UTF-8)", $file
        or croak "Can't open $file: $!";
    
    my $csv = Text::CSV->new( {auto_diag => 1, binary => 1} );
    
    my $attrib = $csv->getline($data_fh); # get the row of headers, can't specify any column
    # shouldn't be a problem, since we're reading line by line :)

    $csv->column_names( $attrib );

    # individual row
    while ( my $row = $csv->getline_hr($data_fh) ) {
        
        # don't pack this part into another subroutine, number of rows can be very big
        if ( $row->{ $actual_header } == 1 and $row->{ $predicted_header } == 1 ) {

            # true positive
            $c_matrix{ true_positive }++;
            
        } elsif ( $row->{ $actual_header } == 0 and $row->{ $predicted_header } == 0 ) {
            
            # true negative
            $c_matrix{ true_negative }++;
            
        } elsif ( $row->{ $actual_header } == 1 and $row->{ $predicted_header } == 0 ) {
            
            # false negative
            $c_matrix{ false_negative }++;
            
        } elsif ( $row->{ $actual_header } == 0 and $row->{ $predicted_header } == 1 ) {
            
            # false positive
            $c_matrix{ false_positive }++;
            
        } else {
        
            croak "Something's wrong!\n".
            "Make sure that the actual and predicted values in your file are binary ie 0 or 1" ;
            
        }
    }
    
    close $data_fh;

    _calculate_total_entries( \%c_matrix );

    _calculate_sensitivity( \%c_matrix );
    
    _calculate_accuracy( \%c_matrix );
    
    if ( $more_stats == 1 ) {
        _calculate_precision( \%c_matrix );
        
        _calculate_specificity( \%c_matrix );
        
        _calculate_f1_score( \%c_matrix );
        
        # the remaining optional stats
        _calculate_negative_predicted_value( \%c_matrix ); #
        _calculate_false_negative_rate( \%c_matrix ); #
        _calculate_false_positive_rate( \%c_matrix ); #
        _calculate_false_discovery_rate( \%c_matrix ); #
        _calculate_false_omission_rate( \%c_matrix ); #
        _calculate_balanced_accuracy( \%c_matrix ); #
    }
    
    %c_matrix;
}

=head2 &_calculate_total_entries ( $c_matrix_ref )

Calculates and adds the data for the C<total_entries> key in the confusion matrix hash.

=cut

sub _calculate_total_entries {

    my $c_matrix = shift;
    my $total = $c_matrix->{ true_negative } + $c_matrix->{ false_positive };
       $total += $c_matrix->{ false_negative } + $c_matrix->{ true_positive };

    $c_matrix->{ total_entries } = $total;

}

=head2 &_calculate_accuracy ( $c_matrix_ref )

Calculates and adds the data for the C<accuracy> key in the confusion matrix hash.

=cut

sub _calculate_accuracy {

    my $c_matrix = shift;
    
    my $numerator = $c_matrix->{ true_positive } + $c_matrix->{ true_negative };
    my $denominator = $numerator + $c_matrix->{ false_positive } + $c_matrix->{ false_negative };
    
    $c_matrix->{ accuracy } = $numerator / $denominator * 100;
    
    # no need to return anything, we're using ref
}

=head2 &_calculate_sensitivity ( $c_matrix_ref )

Calculates and adds the data for the C<sensitivity> key in the confusion matrix hash.

=cut

sub _calculate_sensitivity {
    my $c_matrix = shift;
    
    my $numerator = $c_matrix->{ true_positive };
    my $denominator = $numerator + $c_matrix->{ false_negative };
    
    $c_matrix->{ sensitivity } = $numerator / $denominator * 100;

    # no need to return anything, we're using ref
}

=head2 &_calculate_precision ( $c_matrix_ref )

Calculates and adds the data for the C<precision> key in the confusion matrix hash.

=cut

sub _calculate_precision {
    my $c_matrix = shift;
    
    my $numerator = $c_matrix->{ true_positive };
    my $denominator = $numerator + $c_matrix->{ false_positive };
    
    $c_matrix->{ precision } = $numerator / $denominator * 100;
}

=head2 &_calculate_specificity ( $c_matrix_ref )

Calculates and adds the data for the C<specificity> key in the confusion matrix hash.

=cut

sub _calculate_specificity {
    my $c_matrix = shift;
    
    my $numerator = $c_matrix->{ true_negative };
    my $denominator = $numerator + $c_matrix->{ false_positive };
    
    $c_matrix->{ specificity } = $numerator / $denominator * 100;
}

=head2 &_calculate_f1_score ( $c_matrix_ref )

Calculates and adds the data for the C<F1_Score> key in the confusion matrix hash.

=cut

sub _calculate_f1_score {
    my $c_matrix = shift;
    
    my $numerator = 2 * $c_matrix->{ true_positive };
    my $denominator = $numerator + $c_matrix->{ false_positive } + $c_matrix->{ false_negative };
    
    $c_matrix->{ F1_Score } = $numerator / $denominator * 100;
}       

=head2  &_calculate_negative_predicted_value( $c_matrix_ref )

Calculates and adds the data for the C<negative_predicted_value> key in the confusion matrix hash.

=cut

sub _calculate_negative_predicted_value {
    my $c_matrix = shift;
    
    my $numerator = $c_matrix->{ true_negative };
    my $denominator = $numerator + $c_matrix->{ false_negative };
    
    $c_matrix->{ negative_predicted_value } = $numerator / $denominator * 100;
}

=head2  &_calculate_false_negative_rate( $c_matrix_ref )

Calculates and adds the data for the C<false_negative_rate> key in the confusion matrix hash.

=cut

sub _calculate_false_negative_rate {
    my $c_matrix = shift;
    
    my $numerator = $c_matrix->{ false_negative };
    my $denominator = $numerator + $c_matrix->{ true_positive };
    
    $c_matrix->{ false_negative_rate } = $numerator / $denominator * 100;
}

=head2  &_calculate_false_positive_rate( $c_matrix_ref )

Calculates and adds the data for the C<false_positive_rate> key in the confusion matrix hash.

=cut

sub _calculate_false_positive_rate {
    my $c_matrix = shift;
    
    my $numerator = $c_matrix->{ false_positive };
    my $denominator = $numerator + $c_matrix->{ true_negative };
    
    $c_matrix->{ false_positive_rate } = $numerator / $denominator * 100;
}

=head2  &_calculate_false_discovery_rate( $c_matrix_ref )

Calculates and adds the data for the C<false_discovery_rate> key in the confusion matrix hash.

=cut

sub _calculate_false_discovery_rate {
    my $c_matrix = shift;
    
    my $numerator = $c_matrix->{ false_positive };
    my $denominator = $numerator + $c_matrix->{ true_positive };
    
    $c_matrix->{ false_discovery_rate } = $numerator / $denominator * 100;
}

=head2  &_calculate_false_omission_rate( $c_matrix_ref )

Calculates and adds the data for the C<false_omission_rate> key in the confusion matrix hash.

=cut

sub _calculate_false_omission_rate {
    my $c_matrix = shift;
    
    my $numerator = $c_matrix->{ false_negative };
    my $denominator = $numerator + $c_matrix->{ true_negative };
    
    $c_matrix->{ false_omission_rate } = $numerator / $denominator * 100;
}

=head2  &_calculate_balanced_accuracy( $c_matrix_ref )

Calculates and adds the data for the C<balanced_accuracy> key in the confusion matrix hash.
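
A quick sanity check, using the sensitivity and specificity percentages produced by the bundled test data:

    # balanced_accuracy = (sensitivity + specificity) / 2, both already in percent
    my $balanced_accuracy = ( 40 + 80 ) / 2;    # 60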

=cut

sub _calculate_balanced_accuracy {
    my $c_matrix = shift;
    
    my $numerator = $c_matrix->{ sensitivity } + $c_matrix->{ specificity };
    my $denominator = 2;
    
    $c_matrix->{ balanced_accuracy } = $numerator / $denominator; # numerator already in %
}

=head2 display_exam_results ( ... )

The parameters are the same as C<display_confusion_matrix>. See the next method.

=head2 display_confusion_matrix ( \%confusion_matrix, \%labels ) 

Displays the confusion matrix. If C<%confusion_matrix> contains the C<more_stats> elements, they will be displayed as well. The default elements, i.e. C<accuracy> and C<sensitivity>, must be present, while the rest can be absent.

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

=back

Please take note that non-ASCII characters, i.e. non-English alphabets, B<might> cause the output alignment to go off :)

For C<%labels>, there is no need to enter "actual X", "predicted X" etc.; the labels will be prefixed with C<A: > for the actual and C<P: > for the predicted values by default.
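
A short usage sketch, based on the calls made in the bundled test scripts (the file name and the label strings are illustrative):

    # $nerve is a trained AI::Perceptron::Simple object
    my %c_matrix = $nerve->get_confusion_matrix( {
        full_data_file          => "book_list_test-filled.csv",
        actual_output_header    => "brand",
        predicted_output_header => "predicted",
        more_stats              => 1,    # optional extended statistics
    } );

    $nerve->display_confusion_matrix( \%c_matrix, {
        zero_as => "MP520",
        one_as  => "Yi Lin",
    } );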

=cut

sub display_exam_results {

    my ( $self, $c_matrix, $labels ) = @_;
    
    $self->display_confusion_matrix( $c_matrix, $labels );
}

sub display_confusion_matrix {
    my ( $self, $c_matrix, $labels ) = @_;
    
    #####
    my @missing_keys;
    for ( qw( zero_as one_as ) ) {
        push @missing_keys, $_ unless exists $labels->{ $_ };
    }
    
    croak "Missing keys: @missing_keys" if @missing_keys;
    #####
    
    _print_extended_matrix ( _build_matrix( $c_matrix, $labels ) );

}

=head2 &_build_matrix ( $c_matrix, $labels )

Builds the matrix using the C<Text::Matrix> module.

C<$c_matrix> and C<$labels> are the same as the ones passed to C<display_exam_results> and C<display_confusion_matrix>.

Returns a list C<( $matrix, $c_matrix )> which can directly be passed to C<_print_extended_matrix>.
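
Roughly, the internal call chain (as used by C<display_confusion_matrix>) looks like this:

    my ( $matrix, $cm ) = _build_matrix( $c_matrix, $labels );
    _print_extended_matrix( $matrix, $cm );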

=cut

sub _build_matrix {

    my ( $c_matrix, $labels ) = @_;

    my $predicted_columns = [ "P: ".$labels->{ zero_as }, "P: ".$labels->{ one_as }, "Sum" ];
    my $actual_rows = [ "A: ".$labels->{ zero_as }, "A: ".$labels->{ one_as }, "Sum"];
    
    # row sum
    my $actual_0_sum = $c_matrix->{ true_negative } + $c_matrix->{ false_positive };
    my $actual_1_sum = $c_matrix->{ false_negative } + $c_matrix->{ true_positive };
    # column sum
    my $predicted_0_sum = $c_matrix->{ true_negative } + $c_matrix->{ false_negative };
    my $predicted_1_sum = $c_matrix->{ false_positive } + $c_matrix->{ true_positive };
    
    my $data = [ 
        [ $c_matrix->{ true_negative },  $c_matrix->{ false_positive }, $actual_0_sum ],
        [ $c_matrix->{ false_negative }, $c_matrix->{ true_positive }, $actual_1_sum ],
        [ $predicted_0_sum, $predicted_1_sum, $c_matrix->{ total_entries } ],
    ];
    my $matrix = Text::Matrix->new(
        rows => $actual_rows,
        columns => $predicted_columns,
        data => $data,
    );
    
    $matrix, $c_matrix;
}

=head2 &_print_extended_matrix ( $matrix, $c_matrix )

Extends and outputs the matrix on the screen.

C<$matrix> and C<$c_matrix> are the same as returned by C<&_build_matrix>.

=cut

sub _print_extended_matrix {

    my ( $matrix, $c_matrix ) = @_;
    
    print "~~" x24, "\n";
    print "CONFUSION MATRIX (A:actual  P:predicted)\n";
    print "~~" x24, "\n";

    print $matrix->matrix();

    print "~~" x24, "\n";
    print "Total of ", $c_matrix->{ total_entries } , " entries\n";
    print "  Accuracy: $c_matrix->{ accuracy } %\n";
    print "  Sensitivity: $c_matrix->{ sensitivity } %\n";
    # more stats
    print "  Precision: $c_matrix->{ precision } %\n" if exists $c_matrix->{ precision };
    print "  Specificity: $c_matrix->{ specificity } %\n" if exists $c_matrix->{ specificity };
    print "  F1 Score: $c_matrix->{ F1_Score } %\n" if exists $c_matrix->{ F1_Score };
    print "  Negative Predicted Value: $c_matrix->{ negative_predicted_value } %\n" if exists $c_matrix->{ negative_predicted_value };
    print "  False Negative Rate: $c_matrix->{ false_negative_rate } %\n" if exists $c_matrix->{ false_negative_rate };
    print "  False Positive Rate: $c_matrix->{ false_positive_rate } %\n" if exists $c_matrix->{ false_positive_rate };
    print "  False Discovery Rate: $c_matrix->{ false_discovery_rate } %\n" if exists $c_matrix->{ false_discovery_rate };
    print "  False Omission Rate: $c_matrix->{ false_omission_rate } %\n" if exists $c_matrix->{ false_omission_rate };
    print "  Balanced Accuracy: $c_matrix->{ balanced_accuracy } %\n" if exists $c_matrix->{ balanced_accuracy };
    print "~~" x24, "\n";
}

=head1 NERVE DATA RELATED SUBROUTINES

This part is about saving the data of the nerve. These subroutines can be imported using the C<:local_data> tag.

B<The subroutines are to be called in the procedural way>. No checking is done currently.

See C<PERCEPTRON DATA> and C<KNOWN ISSUES> sections for more details on the subroutines in this section.
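
A minimal import sketch for this group of subroutines:

    use AI::Perceptron::Simple qw( :local_data );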

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

=head2 preserve ( ... )

The parameters and usage are the same as C<save_perceptron>. See the next subroutine.

=head2 save_perceptron ( $nerve, $nerve_file )

Saves the C<AI::Perceptron::Simple> object into a C<Storable> file. There shouldn't be a need to call this method manually, since it is called automatically after every training process.
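
A minimal usage sketch (the file name is illustrative):

    save_perceptron( $nerve, "perceptron_1.nerve" );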

=cut

sub preserve {
    save_perceptron( @_ );
}

sub save_perceptron {
    my $self = shift;
    my $nerve_file = shift;
    use Storable;
    store $self, $nerve_file;
    no Storable;
}

=head2 revive (...)

The parameters and usage are the same as C<load_perceptron>. See the next subroutine.

=head2 load_perceptron ( $nerve_file_to_load )

Loads the data and turns it into a C<AI::Perceptron::Simple> object as the return value.
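
And the corresponding loading sketch:

    my $loaded_nerve = load_perceptron( "perceptron_1.nerve" );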

=cut

sub revive {
    load_perceptron( @_ );
}

sub load_perceptron {
    my $nerve_file_to_load = shift;
    use Storable;
    my $loaded_nerve = retrieve( $nerve_file_to_load );
    no Storable;
    
    $loaded_nerve;
}

=head1 NERVE PORTABILITY RELATED SUBROUTINES

These subroutines can be imported using the C<:portable_data> tag.

The file type currently supported is YAML. Please be careful with the data, as you wouldn't want the nerve data to be accidentally modified.
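
Similarly, a minimal import sketch:

    use AI::Perceptron::Simple qw( :portable_data );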

=head2 preserve_as_yaml ( ... )

The parameters and usage are the same as C<save_perceptron_yaml>. See the next subroutine.

=head2 save_perceptron_yaml ( $nerve, $yaml_nerve_file )

Saves the C<AI::Perceptron::Simple> object into a C<YAML> file.
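
A usage sketch with an illustrative file name:

    save_perceptron_yaml( $nerve, "portable_nerve.yaml" );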

=cut

sub preserve_as_yaml {
    save_perceptron_yaml( @_ );
}

sub save_perceptron_yaml {
    my $self = shift;
    my $nerve_file = shift;
    use YAML;
    YAML::DumpFile( $nerve_file, $self );
    no YAML;
}

=head2 revive_from_yaml (...)

The parameters and usage are the same as C<load_perceptron_yaml>. See the next subroutine.

=head2 load_perceptron_yaml ( $yaml_nerve_file )

Loads the YAML data and turns it into a C<AI::Perceptron::Simple> object as the return value.
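
And loading it back:

    my $loaded_nerve = load_perceptron_yaml( "portable_nerve.yaml" );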

=cut

sub revive_from_yaml {
    load_perceptron_yaml( @_ );
}

sub load_perceptron_yaml {
    my $nerve_file_to_load = shift;
    use YAML;
    local $YAML::LoadBlessed = 1;
    my $loaded_nerve = YAML::LoadFile( $nerve_file_to_load );
    no YAML;
    
    $loaded_nerve;
}

=head1 TO DO

These are the to-do's that B<MIGHT> be done in the future. Don't put too much hope in them please :)

=over 4

=item * Clean up and refactor the source code

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

=head1 BUGS

Please report any bugs or feature requests to C<bug-ai-perceptron-simple at rt.cpan.org>, or through
the web interface at L<https://rt.cpan.org/NoAuth/ReportBug.html?Queue=AI-Perceptron-Simple>.  I will be notified, and then you'll
automatically be notified of progress on your bug as I make changes.

=head1 SUPPORT

You can find documentation for this module with the perldoc command.

    perldoc AI::Perceptron::Simple


You can also look for information at:

=over 4

=item * RT: CPAN's request tracker (report bugs here)

L<https://rt.cpan.org/NoAuth/Bugs.html?Dist=AI-Perceptron-Simple>

lib/AI/Perceptron/Simple.pm  view on Meta::CPAN

=head1 SEE ALSO

AI::Perceptron, Text::Matrix, YAML

=head1 LICENSE AND COPYRIGHT

This software is Copyright (c) 2021 by Raphael Jong Jun Jie.

This is free software, licensed under:

  The Artistic License 2.0 (GPL Compatible)


=cut

1; # End of AI::Perceptron::Simple

t/00-load.t  view on Meta::CPAN

#!perl
use 5.006;
use strict;
use warnings;
use Test::More;

plan tests => 1;

BEGIN {
    use_ok( 'AI::Perceptron::Simple' ) || print "Bail out!\n";
}

diag( "Testing AI::Perceptron::Simple $AI::Perceptron::Simple::VERSION, Perl $], $^X" );

t/00-manifest.t  view on Meta::CPAN

#!perl
use 5.006;
use strict;
use warnings;
use Test::More;

unless ( $ENV{RELEASE_TESTING} ) {
    plan( skip_all => "Author tests not required for installation" );
}

my $min_tcm = 0.9;
eval "use Test::CheckManifest $min_tcm";
plan skip_all => "Test::CheckManifest $min_tcm required" if $@;

ok_manifest();

t/00-pod-coverage.t  view on Meta::CPAN

#!perl
use 5.006;
use strict;
use warnings;
use Test::More;

unless ( $ENV{RELEASE_TESTING} ) {
    plan( skip_all => "Author tests not required for installation" );
}

# Ensure a recent version of Test::Pod::Coverage
my $min_tpc = 1.08;
eval "use Test::Pod::Coverage $min_tpc";
plan skip_all => "Test::Pod::Coverage $min_tpc required for testing POD coverage"
    if $@;

# Test::Pod::Coverage doesn't require a minimum Pod::Coverage version,
# but older versions don't recognize some common documentation styles
my $min_pc = 0.18;
eval "use Pod::Coverage $min_pc";
plan skip_all => "Pod::Coverage $min_pc required for testing POD coverage"
    if $@;

all_pod_coverage_ok();

t/00-pod.t  view on Meta::CPAN

#!perl
use 5.006;
use strict;
use warnings;
use Test::More;

unless ( $ENV{RELEASE_TESTING} ) {
    plan( skip_all => "Author tests not required for installation" );
}

# Ensure a recent version of Test::Pod
my $min_tp = 1.22;
eval "use Test::Pod $min_tp";
plan skip_all => "Test::Pod $min_tp required for testing POD" if $@;

all_pod_files_ok();

t/02-creation.t  view on Meta::CPAN

use AI::Perceptron::Simple;

my $module_name = "AI::Perceptron::Simple";
my $initial_value = 1.5;
my @attributes = qw( glossy_look has_flowers );

# print AI::Perceptron::Simple::LEARNING_RATE;

# all important parameter test
my $perceptron = AI::Perceptron::Simple->new( {
    initial_value => $initial_value,
    attribs => \@attributes
} );

is( ref $perceptron, $module_name, "Correct object" );

# default learning_rate() threshold()
is( $perceptron->learning_rate, 0.05, "Correct default learning rate -> ".$perceptron->learning_rate );
is( $perceptron->threshold, 0.5, "Correct default passing rate -> ".$perceptron->threshold );

# new learning_rate() threshold()
# direct invocation is seldom used, but might be useful in some ways if there's a loop
$perceptron->learning_rate(0.123);
is( $perceptron->learning_rate, 0.123, "Correct new learning_rate -> ".$perceptron->learning_rate );

$perceptron->threshold(0.4);
is( $perceptron->threshold, 0.4, "Correct new passing rate -> ".$perceptron->threshold );

$perceptron = AI::Perceptron::Simple->new( {
    initial_value => $initial_value,
    learning_rate => 0.3,
    threshold => 0.85,
    attribs => \@attributes
} );
is( $perceptron->learning_rate, 0.3, "Correct custom learning_rate -> ".$perceptron->learning_rate );
is( $perceptron->threshold, 0.85, "Correct custom passing rate -> ".$perceptron->threshold );

# get_attributes()
my %attributes = $perceptron->get_attributes;
for ( @attributes ) {
    ok( $attributes{ $_ }, "Attribute \'$_\' present" );
    is( $attributes{ $_ }, $initial_value, "Correct initial value (".$attributes{$_}.") for  \'$_\'" );
}

# don't try to use Test::Carp; it won't work, since it only tests for direct calling of carp, croak, etc.
subtest "Caught missing mandatory parameters" => sub {
    eval {
        my $no_attribs = AI::Perceptron::Simple->new( { initial_value => $initial_value} );
    };
    like( $@, qr/attribs/, "Caught missing attribs" );
    
    eval {
        my $perceptron = AI::Perceptron::Simple->new( { attribs => \@attributes} );
    };
    like($@, qr/initial_value/, "Caught missing initial_value");
    
    #my $no_both = AI::Perceptron::Simple->new; # this will fail and give output to use hash ref, nice
    eval { my $no_both = AI::Perceptron::Simple->new( {} ); };
    like( $@, qr/Missing keys: initial_value attribs/, "Caught missing initial_value and attribs" );
};


done_testing();

# besiyata d'shmaya




t/02-state_portable.t  view on Meta::CPAN

# for the :local_data test, see 04-train.t; 02-state_synonyms.t utilizes the full invocation

use FindBin;
use constant MODULE_NAME => "AI::Perceptron::Simple";

my @attributes = qw ( has_trees trees_coverage_more_than_half has_other_living_things );

my $total_headers = scalar @attributes;

my $perceptron = AI::Perceptron::Simple->new( {
    initial_value => 0.01,
    attribs => \@attributes
} );

subtest "All data related subroutines found" => sub {
    # this only checks if the subroutines are contained in the package
    ok( AI::Perceptron::Simple->can("preserve_as_yaml"), "&preserve_as_yaml is present" );
    ok( AI::Perceptron::Simple->can("save_perceptron_yaml"), "&save_perceptron_yaml is present" );

    ok( AI::Perceptron::Simple->can("revive_from_yaml"), "&revive_from_yaml is present" );
    ok( AI::Perceptron::Simple->can("load_perceptron_yaml"), "&load_perceptron_yaml is present" );

};

my $yaml_nerve_file = $FindBin::Bin . "/portable_nerve.yaml";

# save file
save_perceptron_yaml( $perceptron, $yaml_nerve_file );
ok( -e $yaml_nerve_file, "Found the YAML perceptron." );
# load and check
ok( my $transfered_nerve = load_perceptron_yaml( $yaml_nerve_file ), "Perceptron loaded from the YAML file" );

t/02-state_synonyms.t  view on Meta::CPAN

use warnings;
use Test::More;

use AI::Perceptron::Simple;

use FindBin;
use constant MODULE_NAME => "AI::Perceptron::Simple";

# 36 headers
my @attributes = qw ( 
    glossy_cover	has_plastic_layer_on_cover	male_present	female_present	total_people_1	total_people_2	total_people_3
	total_people_4	total_people_5_n_above	has_flowers	flower_coverage_more_than_half	has_leaves	leaves_coverage_more_than_half	has_trees
	trees_coverage_more_than_half	has_other_living_things	has_fancy_stuff	has_obvious_inanimate_objects	red_shades	blue_shades	yellow_shades
	orange_shades	green_shades	purple_shades	brown_shades	black_shades	overall_red_dominant	overall_green_dominant
	overall_yellow_dominant	overall_pink_dominant	overall_purple_dominant	overall_orange_dominant	overall_blue_dominant	overall_brown_dominant
	overall_black_dominant	overall_white_dominant );

my $total_headers = scalar @attributes;

my $perceptron = AI::Perceptron::Simple->new( {
    initial_value => 0.01,
    attribs => \@attributes
} );

ok( AI::Perceptron::Simple->can("preserve"), "&preserve is present" );
ok( AI::Perceptron::Simple->can("revive"), "&revive is present" );

my $nerve_file = $FindBin::Bin . "/perceptron_state_synonyms.nerve";
ok( AI::Perceptron::Simple::preserve( $perceptron, $nerve_file ), "preserve is working good so far" );
ok( -e $nerve_file, "Found the perceptron file" );

ok( AI::Perceptron::Simple::revive( $nerve_file ), "Perceptron loaded" );

t/04-train.t  view on Meta::CPAN

# pwd is the actual .pm module in blib
# ie. My-Perceptron/blib/lib/My/Perceptron.pm
use FindBin;
use constant TRAINING_DATA => $FindBin::Bin . "/book_list_train.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";

# 36 headers
my @attributes = qw ( 
    glossy_cover	has_plastic_layer_on_cover	male_present	female_present	total_people_1	total_people_2	total_people_3
	total_people_4	total_people_5_n_above	has_flowers	flower_coverage_more_than_half	has_leaves	leaves_coverage_more_than_half	has_trees
	trees_coverage_more_than_half	has_other_living_things	has_fancy_stuff	has_obvious_inanimate_objects	red_shades	blue_shades	yellow_shades
	orange_shades	green_shades	purple_shades	brown_shades	black_shades	overall_red_dominant	overall_green_dominant
	overall_yellow_dominant	overall_pink_dominant	overall_purple_dominant	overall_orange_dominant	overall_blue_dominant	overall_brown_dominant
	overall_black_dominant	overall_white_dominant );

my $total_headers = scalar @attributes;

my $perceptron = AI::Perceptron::Simple->new( {
    initial_value => 0.01,
    attribs => \@attributes
} );

my %attribs = $perceptron->get_attributes; # merging will cause problems
my $perceptron_headers = keys %attribs; # scalar context directly returns the number of keys

is( $perceptron_headers, $total_headers, "Correct headers" );
# print $FindBin::Bin, "\n";
# print TRAINING_DATA, "\n";
ok ( -e TRAINING_DATA, "Found the training file" );

t/04-train.t  view on Meta::CPAN

eval { $perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER) };
is ( $@, "", "No problem with \'train\' method (verbose) so far" );
}

ok ( $perceptron->train( TRAINING_DATA, "brand", $nerve_file), "No problem with \'train\' method (non-verbose) so far" );

# no longer returns the file since v0.03
# is ( $perceptron->train( TRAINING_DATA, "brand", $nerve_file), $nerve_file, "\'train\' method returns the correct value" );

subtest "Data related subroutine found" => sub {
    ok( AI::Perceptron::Simple->can("save_perceptron"), "&save_perceptron is present" );
    ok( AI::Perceptron::Simple->can("load_perceptron"), "&load_perceptron is present" );
};


ok( save_perceptron( $perceptron, $nerve_file ), "save_perceptron is working good so far" );
ok( -e $nerve_file, "Found the perceptron file" );

ok( load_perceptron( $nerve_file ), "Perceptron loaded" );
my $loaded_perceptron = load_perceptron( $nerve_file );
is( ref $loaded_perceptron, MODULE_NAME, "Correct class after loading" );

t/04-train_synonyms_exercise.t  view on Meta::CPAN

use AI::Perceptron::Simple;

use FindBin;
use constant TRAINING_DATA => $FindBin::Bin . "/book_list_train.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";

# 36 headers
my @attributes = qw ( 
    glossy_cover	has_plastic_layer_on_cover	male_present	female_present	total_people_1	total_people_2	total_people_3
	total_people_4	total_people_5_n_above	has_flowers	flower_coverage_more_than_half	has_leaves	leaves_coverage_more_than_half	has_trees
	trees_coverage_more_than_half	has_other_living_things	has_fancy_stuff	has_obvious_inanimate_objects	red_shades	blue_shades	yellow_shades
	orange_shades	green_shades	purple_shades	brown_shades	black_shades	overall_red_dominant	overall_green_dominant
	overall_yellow_dominant	overall_pink_dominant	overall_purple_dominant	overall_orange_dominant	overall_blue_dominant	overall_brown_dominant
	overall_black_dominant	overall_white_dominant );

my $total_headers = scalar @attributes;

my $perceptron = AI::Perceptron::Simple->new( {
    initial_value => 0.01,
    attribs => \@attributes
} );

my %attribs = $perceptron->get_attributes; # merging will cause problems
my $perceptron_headers = keys %attribs; # scalar context directly returns the number of keys

is( $perceptron_headers, $total_headers, "Correct headers" );
# print $FindBin::Bin, "\n";
# print TRAINING_DATA, "\n";
ok ( -e TRAINING_DATA, "Found the training file" );

t/04-train_synonyms_tame.t  view on Meta::CPAN

use AI::Perceptron::Simple;

use FindBin;
use constant TRAINING_DATA => $FindBin::Bin . "/book_list_train.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";

# 36 headers
my @attributes = qw ( 
    glossy_cover	has_plastic_layer_on_cover	male_present	female_present	total_people_1	total_people_2	total_people_3
	total_people_4	total_people_5_n_above	has_flowers	flower_coverage_more_than_half	has_leaves	leaves_coverage_more_than_half	has_trees
	trees_coverage_more_than_half	has_other_living_things	has_fancy_stuff	has_obvious_inanimate_objects	red_shades	blue_shades	yellow_shades
	orange_shades	green_shades	purple_shades	brown_shades	black_shades	overall_red_dominant	overall_green_dominant
	overall_yellow_dominant	overall_pink_dominant	overall_purple_dominant	overall_orange_dominant	overall_blue_dominant	overall_brown_dominant
	overall_black_dominant	overall_white_dominant );

my $total_headers = scalar @attributes;

my $perceptron = AI::Perceptron::Simple->new( {
    initial_value => 0.01,
    attribs => \@attributes
} );

my %attribs = $perceptron->get_attributes; # merging will cause problems
my $perceptron_headers = keys %attribs; # scalar context directly returns the number of keys

is( $perceptron_headers, $total_headers, "Correct headers" );
# print $FindBin::Bin, "\n";
# print TRAINING_DATA, "\n";
ok ( -e TRAINING_DATA, "Found the training file" );

t/06-validate.t  view on Meta::CPAN

use strict;
use warnings;
use Test::More;
use Test::Output;

use AI::Perceptron::Simple;

use FindBin;

# TRAINING_DATA & VALIDATION_DATA have the same contents; in the real world, don't do this
    # use different sets of data for training and validating the nerve. The same goes for the testing data.
    # I'm doing this only to make sure the nerve is working correctly

use constant TRAINING_DATA => $FindBin::Bin . "/book_list_train.csv";
use constant VALIDATION_DATA => $FindBin::Bin . "/book_list_validate.csv";


use constant VALIDATION_DATA_OUTPUT_FILE => $FindBin::Bin . "/book_list_validate-filled.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";

# 36 headers
my @attributes = qw ( 
    glossy_cover	has_plastic_layer_on_cover	male_present	female_present	total_people_1	total_people_2	total_people_3
	total_people_4	total_people_5_n_above	has_flowers	flower_coverage_more_than_half	has_leaves	leaves_coverage_more_than_half	has_trees
	trees_coverage_more_than_half	has_other_living_things	has_fancy_stuff	has_obvious_inanimate_objects	red_shades	blue_shades	yellow_shades
	orange_shades	green_shades	purple_shades	brown_shades	black_shades	overall_red_dominant	overall_green_dominant
	overall_yellow_dominant	overall_pink_dominant	overall_purple_dominant	overall_orange_dominant	overall_blue_dominant	overall_brown_dominant
	overall_black_dominant	overall_white_dominant );

my $total_headers = scalar @attributes;

my $perceptron = AI::Perceptron::Simple->new( {
    initial_value => 0.01,
    learning_rate => 0.001,
    threshold => 0.8,
    attribs => \@attributes
} );

my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";

for ( 0..5 ) {
    print "Round $_\n";
    $perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER );
    print "\n";
}
#print Dumper($perceptron), "\n";

# write back to original file
my $ori_file_size = -s VALIDATION_DATA;
stdout_like {
    ok ( $perceptron->validate( {
                stimuli_validate => VALIDATION_DATA,
                predicted_column_index => 4,
            } ), 
            "Validate succedded!" );
} qr/book_list_validate\.csv/, "Correct output for validate when saving file";

# with new output file
stdout_like {
    ok ( $perceptron->validate( {
            stimuli_validate => VALIDATION_DATA,
            predicted_column_index => 4,
            results_write_to => VALIDATION_DATA_OUTPUT_FILE
        } ), 
        "Validate succedded!" );

} qr/book_list_validate\-filled\.csv/, "Correct output for validate when saving to NEW file";

ok( -e VALIDATION_DATA_OUTPUT_FILE, "New validation file found" );
isnt( -s VALIDATION_DATA_OUTPUT_FILE, 0, "New output file is not empty" );

done_testing;
# besiyata d'shmaya


t/06-validate_synonyms_lab.t  view on Meta::CPAN

use strict;
use warnings;
use Test::More;
use Test::Output;

use AI::Perceptron::Simple;

use FindBin;

# TRAINING_DATA & VALIDATION_DATA have the same contents; in the real world, don't do this
    # use different sets of data for training and validating the nerve. The same goes for the testing data.
    # I'm doing this only to make sure the nerve is working correctly

use constant TRAINING_DATA => $FindBin::Bin . "/book_list_train.csv";
use constant VALIDATION_DATA => $FindBin::Bin . "/book_list_validate.csv";


use constant VALIDATION_DATA_OUTPUT_FILE => $FindBin::Bin . "/book_list_validate_lab-filled.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";

# 36 headers
my @attributes = qw ( 
    glossy_cover	has_plastic_layer_on_cover	male_present	female_present	total_people_1	total_people_2	total_people_3
	total_people_4	total_people_5_n_above	has_flowers	flower_coverage_more_than_half	has_leaves	leaves_coverage_more_than_half	has_trees
	trees_coverage_more_than_half	has_other_living_things	has_fancy_stuff	has_obvious_inanimate_objects	red_shades	blue_shades	yellow_shades
	orange_shades	green_shades	purple_shades	brown_shades	black_shades	overall_red_dominant	overall_green_dominant
	overall_yellow_dominant	overall_pink_dominant	overall_purple_dominant	overall_orange_dominant	overall_blue_dominant	overall_brown_dominant
	overall_black_dominant	overall_white_dominant );

my $total_headers = scalar @attributes;

my $perceptron = AI::Perceptron::Simple->new( {
    initial_value => 0.01,
    learning_rate => 0.001,
    threshold => 0.8,
    attribs => \@attributes
} );

my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";

for ( 0..5 ) {
    print "Round $_\n";
    $perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER );
    print "\n";
}
#print Dumper($perceptron), "\n";

# write back to original file
my $ori_file_size = -s VALIDATION_DATA;
stdout_like {
    ok ( $perceptron->take_lab_test( {
                stimuli_validate => VALIDATION_DATA,
                predicted_column_index => 4,
            } ), 
            "Validate succedded!" );
} qr/book_list_validate\.csv/, "Correct output for take_lab_test when saving file";

# with new output file
stdout_like {
    ok ( $perceptron->take_lab_test( {
            stimuli_validate => VALIDATION_DATA,
            predicted_column_index => 4,
            results_write_to => VALIDATION_DATA_OUTPUT_FILE
        } ), 
        "Validate succedded!" );

} qr/book_list_validate_lab\-filled\.csv/, "Correct output for take_lab_test when saving to NEW file";

ok( -e VALIDATION_DATA_OUTPUT_FILE, "New validation file found" );
isnt( -s VALIDATION_DATA_OUTPUT_FILE, 0, "New output file is not empty" );

done_testing;
# besiyata d'shmaya


t/06-validate_synonyms_mock.t  view on Meta::CPAN

use strict;
use warnings;
use Test::More;
use Test::Output;

use AI::Perceptron::Simple;

use FindBin;

# TRAINING_DATA & VALIDATION_DATA have the same contents; in the real world, don't do this
    # use different sets of data for training and validating the nerve. The same goes for the testing data.
    # I'm doing this only to make sure the nerve is working correctly

use constant TRAINING_DATA => $FindBin::Bin . "/book_list_train.csv";
use constant VALIDATION_DATA => $FindBin::Bin . "/book_list_validate.csv";


use constant VALIDATION_DATA_OUTPUT_FILE => $FindBin::Bin . "/book_list_validate_mock-filled.csv";
use constant MODULE_NAME => "AI::Perceptron::Simple";
use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";

# 36 headers
my @attributes = qw ( 
    glossy_cover	has_plastic_layer_on_cover	male_present	female_present	total_people_1	total_people_2	total_people_3
	total_people_4	total_people_5_n_above	has_flowers	flower_coverage_more_than_half	has_leaves	leaves_coverage_more_than_half	has_trees
	trees_coverage_more_than_half	has_other_living_things	has_fancy_stuff	has_obvious_inanimate_objects	red_shades	blue_shades	yellow_shades
	orange_shades	green_shades	purple_shades	brown_shades	black_shades	overall_red_dominant	overall_green_dominant
	overall_yellow_dominant	overall_pink_dominant	overall_purple_dominant	overall_orange_dominant	overall_blue_dominant	overall_brown_dominant
	overall_black_dominant	overall_white_dominant );

my $total_headers = scalar @attributes;

my $perceptron = AI::Perceptron::Simple->new( {
    initial_value => 0.01,
    learning_rate => 0.001,
    threshold => 0.8,
    attribs => \@attributes
} );

my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";

for ( 0..5 ) {
    print "Round $_\n";
    $perceptron->train( TRAINING_DATA, "brand", $nerve_file, WANT_STATS, IDENTIFIER );
    print "\n";
}
#print Dumper($perceptron), "\n";

# write back to original file
my $ori_file_size = -s VALIDATION_DATA;
stdout_like {
    ok ( $perceptron->take_mock_exam( {
                stimuli_validate => VALIDATION_DATA,
                predicted_column_index => 4,
            } ), 
            "Validate succedded!" );
} qr/book_list_validate\.csv/, "Correct output for take_mock_exam when saving file";

# with new output file
stdout_like {
    ok ( $perceptron->take_mock_exam( {
            stimuli_validate => VALIDATION_DATA,
            predicted_column_index => 4,
            results_write_to => VALIDATION_DATA_OUTPUT_FILE
        } ), 
        "Validate succedded!" );

} qr/book_list_validate_mock\-filled\.csv/, "Correct output for take_mock_exam when saving to NEW file";

ok( -e VALIDATION_DATA_OUTPUT_FILE, "New validation file found" );
isnt( -s VALIDATION_DATA_OUTPUT_FILE, 0, "New output file is not empty" );

done_testing;
# besiyata d'shmaya


t/08-confusion_matrix.t  view on Meta::CPAN


use FindBin;

use constant TEST_FILE => $FindBin::Bin . "/book_list_test-filled.csv";
use constant NON_BINARY_FILE => $FindBin::Bin . "/book_list_test-filled-non-binary.csv";

my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
my $perceptron = AI::Perceptron::Simple::load_perceptron( $nerve_file );

ok ( my %c_matrix = $perceptron->get_confusion_matrix( { 
        full_data_file => TEST_FILE, 
        actual_output_header => "brand",
        predicted_output_header => "predicted",
    } ), 
    "get_confusion_matrix method is working");

is ( ref \%c_matrix, ref {}, "Confusion matrix in correct data structure" );

is ( $c_matrix{ true_positive }, 2, "Correct true_positive" );
is ( $c_matrix{ true_negative }, 4, "Correct true_negative" );
is ( $c_matrix{ false_positive }, 1, "Correct false_positive" );
is ( $c_matrix{ false_negative }, 3, "Correct false_negative" );

is ( $c_matrix{ total_entries }, 10, "Total entries is correct" );
ok ( AI::Perceptron::Simple::_calculate_total_entries( \%c_matrix ), 
    "Testing the 'untestable' &_calculate_total_entries" );
is ( $c_matrix{ total_entries }, 10, "'illegal' calculation of total entries is correct" );

like ( $c_matrix{ accuracy }, qr/60/, "Accuracy seems correct to me" );
ok ( AI::Perceptron::Simple::_calculate_accuracy( \%c_matrix ), 
    "Testing the 'untestable' &_calculate_accuracy" );
like ( $c_matrix{ accuracy }, qr/60/, "'illegal' calculation of accuracy seems correct to me" );

like ( $c_matrix{ sensitivity }, qr/40/, "Sensitivity seems correct to me" );
ok ( AI::Perceptron::Simple::_calculate_sensitivity( \%c_matrix ), 
    "Testing the 'untestable' &_calculate_sensitivity" );
like ( $c_matrix{ sensitivity }, qr/40/, "'illegal' calculation of sensitivity seems correct to me" );

{
    local $@;
    eval {
        $perceptron->get_confusion_matrix( { 
            full_data_file => NON_BINARY_FILE,
            actual_output_header => "brand",
            predicted_output_header => "predicted",
        } );
    };

    like ( $@, qr/Something\'s wrong\!/, "Croaked! Found non-binary values in file");
}

my $piece;
my @pieces = ('A: ', 'P: ', 'actual', 'predicted', 'entries', 'Accuracy', 'Sensitivity', 'MP520', 'Yi Lin');

for $piece ( @pieces ) {
    stdout_like {
    
        ok ( $perceptron->display_exam_results( \%c_matrix, { zero_as => "MP520", one_as => "Yi Lin"  } ),
            "display_exam_results is working");
        
    } qr /(?:$piece)/, "$piece displayed";

}

{
    local $@;
    
    eval {
        $perceptron->display_confusion_matrix( \%c_matrix, { one_as => "Yi Lin" } );
    };
    
    like ( $@, qr/zero_as/, "Missing keys found: zero_as!" );
    unlike ( $@, qr/one_as/, "Confirmed one_as is present but not zero_as" );
}

{
    local $@;
    
    eval {
        $perceptron->display_confusion_matrix( \%c_matrix, { zero_as => "MP520" } );
    };
    
    like ( $@, qr/one_as/, "Missing keys found: one_as!" );
    unlike ( $@, qr/zero_as/, "Confirmed zero_as is present but not one_as" );
}

{
    local $@;
    
    eval {
        $perceptron->display_confusion_matrix( \%c_matrix );
    };
    
    like ( $@, qr/zero_as one_as/, "Both keys not found" );
}

# more_stats enabled

subtest "More stats" => sub {

    my %c_matrix_more_stats = $perceptron->get_confusion_matrix( { 
            full_data_file => TEST_FILE, 
            actual_output_header => "brand",
            predicted_output_header => "predicted",
            more_stats => 1,
        } );

    like ( $c_matrix_more_stats{ precision }, qr/66.66/, "Precision seems correct to me" );
    is ( $c_matrix_more_stats{ specificity }, 80, "Specificity seems correct to me" );
    is ( $c_matrix_more_stats{ F1_Score }, 50, "F1 Score seems correct to me" );
    like ( $c_matrix_more_stats{ negative_predicted_value }, qr/57.142/, "Negative Predicted Value seems correct to me" );
    is ( $c_matrix_more_stats{ false_negative_rate }, 60, "False Negative Rate seems correct to me" );
    is ( $c_matrix_more_stats{ false_positive_rate }, 20, "False positive Rate seems correct to me" );
    like ( $c_matrix_more_stats{ false_discovery_rate }, qr/33.33/, "False Discovery Rate seems correct to me" );
    like ( $c_matrix_more_stats{ false_omission_rate }, qr/42.85/, "False Omission Rate seems correct to me" );
    is ( $c_matrix_more_stats{ balanced_accuracy }, 60, "Balanced Accuracy seems correct to me" );


    my $piece;
    my @pieces = ('A: ', 'P: ', 'actual', 'predicted', 'entries', 'Accuracy', 'Sensitivity', 'MP520', 'Yi Lin', "Precision", "Specificity", "F1 Score", "Negative Predicted Value", "False Negative Rate", "False Positive Rate", "False Discovery Rate", ...

    for $piece ( @pieces ) {
        stdout_like {
        
            ok ( $perceptron->display_exam_results( \%c_matrix_more_stats, { zero_as => "MP520", one_as => "Yi Lin"  } ),
                "display_exam_results is working");
            
        } qr /(?:$piece)/, "$piece displayed";

    }
    $perceptron->display_exam_results( \%c_matrix_more_stats, { 
        zero_as => "MP520", 
        one_as => "Yi Lin"  } );
};

done_testing;

# besiyata d'shmaya




t/08-confusion_matrix_synonyms.t  view on Meta::CPAN


use FindBin;

use constant TEST_FILE => $FindBin::Bin . "/book_list_test-filled.csv";
use constant NON_BINARY_FILE => $FindBin::Bin . "/book_list_test-filled-non-binary.csv";

my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
my $perceptron = AI::Perceptron::Simple::load_perceptron( $nerve_file );

ok ( my %c_matrix = $perceptron->get_exam_results( { 
        full_data_file => TEST_FILE, 
        actual_output_header => "brand",
        predicted_output_header => "predicted",
    } ), 
    "get_exam_results method is working");

is ( ref \%c_matrix, ref {}, "Confusion matrix in correct data structure" );

is ( $c_matrix{ true_positive }, 2, "Correct true_positive" );
is ( $c_matrix{ true_negative }, 4, "Correct true_negative" );
is ( $c_matrix{ false_positive }, 1, "Correct false_positive" );
is ( $c_matrix{ false_negative }, 3, "Correct false_negative" );

is ( $c_matrix{ total_entries }, 10, "Total entries is correct" );
ok ( AI::Perceptron::Simple::_calculate_total_entries( \%c_matrix ), 
    "Testing the 'untestable' &_calculate_total_entries" );
is ( $c_matrix{ total_entries }, 10, "'illegal' calculation of total entries is correct" );

like ( $c_matrix{ accuracy }, qr/60/, "Accuracy seems correct to me" );
ok ( AI::Perceptron::Simple::_calculate_accuracy( \%c_matrix ), 
    "Testing the 'untestable' &_calculate_accuracy" );
like ( $c_matrix{ accuracy }, qr/60/, "'illegal' calculation of accuracy seems correct to me" );

like ( $c_matrix{ sensitivity }, qr/40/, "Sensitivity seems correct to me" );
ok ( AI::Perceptron::Simple::_calculate_sensitivity( \%c_matrix ), 
    "Testing the 'untestable' &_calculate_sensitivity" );
like ( $c_matrix{ sensitivity }, qr/40/, "'illegal' calculation of sensitivity seems correct to me" );

{
    local $@;
    eval {
        $perceptron->get_exam_results( { 
            full_data_file => NON_BINARY_FILE,
            actual_output_header => "brand",
            predicted_output_header => "predicted",
        } );
    };

    like ( $@, qr/Something\'s wrong\!/, "Croaked! Found non-binary values in file");
}


my $piece;
my @pieces = ('A: ', 'P: ', 'actual', 'predicted', 'entries', 'Accuracy', 'Sensitivity', 'MP520', 'Yi Lin');

for $piece ( @pieces ) {
    stdout_like {
    
        ok ( $perceptron->display_exam_results( \%c_matrix, { zero_as => "MP520", one_as => "Yi Lin"  } ),
            "display_exam_results is working");
        
    } qr /(?:$piece)/, "$piece displayed";

}


{
    local $@;
    
    eval {
        $perceptron->display_exam_results( \%c_matrix, { one_as => "Yi Lin" } );
    };
    
    like ( $@, qr/zero_as/, "Missing keys found: zero_as!" );
    unlike ( $@, qr/one_as/, "Confirmed one_as is present but not zero_as" );
}

{
    local $@;
    
    eval {
        $perceptron->display_exam_results( \%c_matrix, { zero_as => "MP520" } );
    };
    
    like ( $@, qr/one_as/, "Missing keys found: one_as!" );
    unlike ( $@, qr/zero_as/, "Confirmed zero_as is present but not one_as" );
}

{
    local $@;
    
    eval {
        $perceptron->display_exam_results( \%c_matrix );
    };
    
    like ( $@, qr/zero_as one_as/, "Both keys not found" );
}
# more_stats enabled

subtest "More stats" => sub {

    my %c_matrix_more_stats = $perceptron->get_confusion_matrix( { 
            full_data_file => TEST_FILE, 
            actual_output_header => "brand",
            predicted_output_header => "predicted",
            more_stats => 1,
        } );

    like ( $c_matrix_more_stats{ precision }, qr/66.66/, "Precision seems correct to me" );
    is ( $c_matrix_more_stats{ specificity }, 80, "Specificity seems correct to me" );
    is ( $c_matrix_more_stats{ F1_Score }, 50, "F1 Score seems correct to me" );
    like ( $c_matrix_more_stats{ negative_predicted_value }, qr/57.142/, "Negative Predicted Value seems correct to me" );
    is ( $c_matrix_more_stats{ false_negative_rate }, 60, "False Negative Rate seems correct to me" );
    is ( $c_matrix_more_stats{ false_positive_rate }, 20, "False positive Rate seems correct to me" );
    like ( $c_matrix_more_stats{ false_discovery_rate }, qr/33.33/, "False Discovery Rate seems correct to me" );
    like ( $c_matrix_more_stats{ false_omission_rate }, qr/42.85/, "False Omission Rate seems correct to me" );
    is ( $c_matrix_more_stats{ balanced_accuracy }, 60, "Balanced Accuracy seems correct to me" );


    my $piece;
    my @pieces = ('A: ', 'P: ', 'actual', 'predicted', 'entries', 'Accuracy', 'Sensitivity', 'MP520', 'Yi Lin', "Precision", "Specificity", "F1 Score", "Negative Predicted Value", "False Negative Rate", "False Positive Rate", "False Discovery Rate", ...

    for $piece ( @pieces ) {
        stdout_like {
        
            ok ( $perceptron->display_exam_results( \%c_matrix_more_stats, { zero_as => "MP520", one_as => "Yi Lin"  } ),
                "display_exam_results is working");
            
        } qr /(?:$piece)/, "$piece displayed";

    }
    $perceptron->display_exam_results( \%c_matrix_more_stats, { 
        zero_as => "MP520", 
        one_as => "Yi Lin"  } );
};
done_testing;

# besiyata d'shmaya




t/10-test.t  view on Meta::CPAN

use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";

my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
ok( -s $nerve_file, "Found nerve file to load" );

my $mature_nerve = AI::Perceptron::Simple::load_perceptron( $nerve_file );

# write to original file
stdout_like {
    ok ( $mature_nerve->test( {
            stimuli_validate => TEST_DATA,
            predicted_column_index => 4,
        } ), 
        "Testing stage succedded!" );

} qr/book_list_test\.csv/, "Correct output for testing when saving back to original file";


# with new output file
stdout_like {
    ok ( $mature_nerve->test( {
            stimuli_validate => TEST_DATA,
            predicted_column_index => 4,
            results_write_to => TEST_DATA_NEW_FILE
        } ), 
        "Testing stage succedded!" );

} qr/book_list_test\-filled\.csv/, "Correct output for testing when saving to NEW file";

ok( -e TEST_DATA_NEW_FILE, "New testing file found" );
isnt( -s TEST_DATA_NEW_FILE, 0, "New output file is not empty" );

done_testing;
# besiyata d'shmaya


t/10-test_synonyms_exam.t  view on Meta::CPAN

use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";

my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
ok( -s $nerve_file, "Found nerve file to load" );

my $mature_nerve = AI::Perceptron::Simple::load_perceptron( $nerve_file );

# write to original file
stdout_like {
    ok ( $mature_nerve->take_real_exam( {
            stimuli_validate => TEST_DATA,
            predicted_column_index => 4,
        } ), 
        "Testing stage succedded!" );

} qr/book_list_test\.csv/, "Correct output for testing when saving back to original file";


# with new output file
stdout_like {
    ok ( $mature_nerve->take_real_exam( {
            stimuli_validate => TEST_DATA,
            predicted_column_index => 4,
            results_write_to => TEST_DATA_NEW_FILE
        } ), 
        "Testing stage succedded!" );

} qr/book_list_test_exam\-filled\.csv/, "Correct output for testing when saving to NEW file";

ok( -e TEST_DATA_NEW_FILE, "New testing file found" );
isnt( -s TEST_DATA_NEW_FILE, 0, "New output file is not empty" );

done_testing;
# besiyata d'shmaya


t/10-test_synonyms_work.t  view on Meta::CPAN

use constant WANT_STATS => 1;
use constant IDENTIFIER => "book_name";

my $nerve_file = $FindBin::Bin . "/perceptron_1.nerve";
ok( -s $nerve_file, "Found nerve file to load" );

my $mature_nerve = AI::Perceptron::Simple::load_perceptron( $nerve_file );

# write to original file
stdout_like {
    ok ( $mature_nerve->work_in_real_world( {
            stimuli_validate => TEST_DATA,
            predicted_column_index => 4,
        } ), 
        "Testing stage succedded!" );

} qr/book_list_test\.csv/, "Correct output for testing when saving back to original file";


# with new output file
stdout_like {
    ok ( $mature_nerve->work_in_real_world( {
            stimuli_validate => TEST_DATA,
            predicted_column_index => 4,
            results_write_to => TEST_DATA_NEW_FILE
        } ), 
        "Testing stage succedded!" );

} qr/book_list_test_work\-filled\.csv/, "Correct output for testing when saving to NEW file";

ok( -e TEST_DATA_NEW_FILE, "New testing file found" );
isnt( -s TEST_DATA_NEW_FILE, 0, "New output file is not empty" );

done_testing;
# besiyata d'shmaya


t/12-shuffle_data.t  view on Meta::CPAN


{
local $@;
eval { shuffle_data($original_stimuli) };
like( $@, qr/output files/, "Croaked when new file names not present" )
}

shuffle_data( $original_stimuli => $shuffled_data_1, $shuffled_data_2, $shuffled_data_3 );

stdout_like {
    shuffle_data( ORIGINAL_STIMULI, $shuffled_data_1, $shuffled_data_2, $shuffled_data_3 );
} qr/^Saved/, "Correct output after saving file";


ok( -e $shuffled_data_1, "Found the first shuffled file" );
ok( -e $shuffled_data_2, "Found the second shuffled file" );
ok( -e $shuffled_data_3, "Found the third shuffled file" );

done_testing();

# besiyata d'shmaya

t/12-shuffle_data_synonym.t  view on Meta::CPAN


{
local $@;
eval { shuffle_stimuli($original_stimuli) };
like( $@, qr/output files/, "Croaked when new file names not present" )
}

shuffle_stimuli( $original_stimuli => $shuffled_data_1, $shuffled_data_2, $shuffled_data_3 );

stdout_like {
    shuffle_stimuli( ORIGINAL_STIMULI, $shuffled_data_1, $shuffled_data_2, $shuffled_data_3 );
} qr/^Saved/, "Correct output after saving file";


ok( -e $shuffled_data_1, "Found the first shuffled file" );
ok( -e $shuffled_data_2, "Found the second shuffled file" );
ok( -e $shuffled_data_3, "Found the third shuffled file" );

done_testing();

# besiyata d'shmaya

t/portable_nerve.yaml  view on Meta::CPAN

--- !!perl/hash:AI::Perceptron::Simple
attributes_hash_ref:
  has_other_living_things: 0.01
  has_trees: 0.01
  trees_coverage_more_than_half: 0.01
learning_rate: 0.05
threshold: 0.5


