AI-PSO


MPL-1.1.txt

     the Covered Code is available under the terms of this License,
     including a description of how and where You have fulfilled the
     obligations of Section 3.2. The notice must be conspicuously included
     in any notice in an Executable version, related documentation or
     collateral in which You describe recipients' rights relating to the
     Covered Code. You may distribute the Executable version of Covered
     Code or ownership rights under a license of Your choice, which may
     contain terms different from this License, provided that You are in
     compliance with the terms of this License and that the license for the
     Executable version does not attempt to limit or alter the recipient's
     rights in the Source Code version from the rights set forth in this
     License. If You distribute the Executable version under a different
     license You must make it absolutely clear that any terms which differ
     from this License are offered by You alone, not by the Initial
     Developer or any Contributor. You hereby agree to indemnify the
     Initial Developer and every Contributor for any liability incurred by
     the Initial Developer or such Contributor as a result of any such
     terms You offer.

     3.7. Larger Works.
     You may create a Larger Work by combining Covered Code with other code

MPL-1.1.txt

     (b)  any software, hardware, or device, other than such Participant's
     Contributor Version, directly or indirectly infringes any patent, then
     any rights granted to You by such Participant under Sections 2.1(b)
     and 2.2(b) are revoked effective as of the date You first made, used,
     sold, distributed, or had made, Modifications made by that
     Participant.

     8.3.  If You assert a patent infringement claim against Participant
     alleging that such Participant's Contributor Version directly or
     indirectly infringes any patent where such claim is resolved (such as
     by license or settlement) prior to the initiation of patent
     infringement litigation, then the reasonable value of the licenses
     granted by such Participant under Sections 2.1 or 2.2 shall be taken
     into account in determining the amount or value of any payment or
     license.

     8.4.  In the event of termination under Sections 8.1 or 8.2 above,
     all end user license agreements (excluding distributors and resellers)
     which have been validly granted by You or any distributor hereunder
     prior to termination shall survive termination.

MPL-1.1.txt

     THIS EXCLUSION AND LIMITATION MAY NOT APPLY TO YOU.

10. U.S. GOVERNMENT END USERS.

     The Covered Code is a "commercial item," as that term is defined in
     48 C.F.R. 2.101 (Oct. 1995), consisting of "commercial computer
     software" and "commercial computer software documentation," as such
     terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent with 48
     C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 (June 1995),
     all U.S. Government End Users acquire Covered Code with only those
     rights set forth herein.

11. MISCELLANEOUS.

     This License represents the complete agreement concerning subject
     matter hereof. If any provision of this License is held to be
     unenforceable, such provision shall be reformed only to the extent
     necessary to make it enforceable. This License shall be governed by
     California law provisions (except to the extent applicable law, if
     any, provides otherwise), excluding its conflict-of-law provisions.
     With respect to disputes in which at least one party is a citizen of,

examples/NeuralNet/NeuralNet.h

        /// \param neuron a pointer to the connected Neuron
        ///
        void addConnection(Neuron *neuron)
        {
            checkSize();
            m_neurons[m_numConnections++] = neuron;
        }


        ///
        /// \fn void setWeight(int index, double weight)
        /// \brief sets the connection weight of connection at
        ///           index to weight
        /// \param index an int
        /// \param weight a double
        ///
        void setWeight(int index, double weight)
        {
            if(index >= 0 && index < m_numConnections)
                m_weights[index] = weight;
        }


        ///
        /// \fn int numConnections()
        /// \brief returns the number of connections this Neuron has
        /// \return int

examples/NeuralNet/NeuralNet.h

        double    m_value;            /// value of this Neuron
        TransferFunction *xfer;       /// transfer function applied when computing value()
};




/// 
/// \class Input NeuralNet.h NeuralNet
/// \brief Simulates an input neuron in a Neural net.  This class extends Neuron
///            but allows for its value to be set directly and it also overrides 
///            the virtual value function so that it returns its value directly 
///            rather than passing through a transfer function.
///
class NEURALNET_API Input : public Neuron
{
    public:


        ///
        /// \fn Input(double value)

examples/NeuralNet/NeuralNet.h

        ///
        /// \fn ~Input()
        /// \brief destructor
        ///
        virtual ~Input()
        {
        }


        ///
        /// \fn void setValue(double value)
        /// \brief sets the value of this input Neuron to value
        /// \param value a double
        ///
        void setValue(double value)
        {
            m_value = value;
        }


        ///
        /// \fn double value()
        /// \brief override of virtual function.
        /// \return double
        ///

examples/NeuralNet/NeuralNet.h

/// \class Hidden NeuralNet.h NeuralNet
/// \brief simulates a hidden Neuron
///
class NEURALNET_API Hidden : public Neuron
{

    public:

        ///
        /// \fn Hidden()
        /// \brief constructor; the transfer function is inherited from Neuron
        ///           and can be changed via setTransferFunction()
        ///
        Hidden() : Neuron()
        {
//            delete xfer;
//            xfer = new Logistic();
        }


        ///
        /// \fn ~Hidden()
        /// \brief destructor
        ///
        virtual ~Hidden()
        {
        }


        ///
        /// \fn void setTransferFunction(const char *xferFunc)
        /// \brief sets the transfer function for this Neuron
        /// \param xferFunc a C string naming the transfer function ("UnityGain", "Logistic", ...)
        void setTransferFunction(const char *xferFunc)
        {
            string xferName = string(xferFunc);
            if(xferName != "UnityGain")
            {
                if(xferName == "Logistic")
                {
                    delete xfer;
                    xfer = new Logistic();
                }
                // add if statements for each new transfer function object

examples/NeuralNet/NeuralNet.h

        /// \brief constructor
        /// \param numInputs an int
        /// \param numHidden an int
        ///
        NeuralNet(int numInputs = 3, int numHidden = 2, const char *xferFunc = "Logistic") : m_numInputs(numInputs), m_numHidden(numHidden)
        {
            m_inputs = new Input[m_numInputs];
//            m_hidden = new Neuron[m_numHidden];
            m_hidden = new Hidden[m_numHidden];
            for(int i = 0; i < m_numHidden; i++)
                m_hidden[i].setTransferFunction(xferFunc);
            m_xferFunc = string(xferFunc);
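            // connectionize() wires the fully connected topology: every Input
            // neuron feeds every Hidden neuron, and every Hidden neuron feeds
            // the single output neuron (cf. the matching weight loops in main.cpp)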
            connectionize();
        }


        ///
        /// \fn ~NeuralNet()
        /// \brief destructor 
        ///
        ~NeuralNet()
        {
            delete [] m_inputs;
            delete [] m_hidden;
        }


        ///
        /// \fn void setInput(int index, double value)
        /// \brief sets the value of the Input Neuron given by index to value
        /// \param index an int
        /// \param value a double
        ///
        void setInput(int index, double value)
        {
            if(index >= 0 && index < m_numInputs)
                m_inputs[index].setValue(value);
        }


        ///
        /// \fn void setWeightsToOne()
        /// \brief sets all of the connection weights to unity
        /// \note this is really only used for testing/debugging purposes
        ///
        void setWeightsToOne()
        {
            for(int i = 0; i < m_numHidden; i++)
                for(int j = 0; j < m_hidden[i].numConnections(); j++)
                    m_hidden[i].setWeight(j, 1.0);
            for(int k = 0; k < m_output.numConnections(); k++)
                m_output.setWeight(k, 1.0);
        }


        ///
        /// \fn double value()
        /// \brief returns the final network value 
        /// \return double
        ///
        double value()
        {
            return m_output.value();
        }


        ///
        /// \fn void setHiddenWeight(int indexHidden, int indexInput, double weight)
        /// \brief sets the connection weight between a pair of input and hidden neurons
        /// \param indexHidden an int
        /// \param indexInput an int
        /// \param weight a double
        ///
        void setHiddenWeight(int indexHidden, int indexInput, double weight)
        {
            if(indexHidden >= 0 && indexHidden < m_numHidden)
                m_hidden[indexHidden].setWeight(indexInput, weight);
        }


        ///
        /// \fn void setOutputWeight(int index, double weight)
        /// \brief sets the connection weight between a pair of hidden and output neurons
        /// \param index an int
        /// \param weight a double
        ///
        void setOutputWeight(int index, double weight)
        {
            m_output.setWeight(index, weight);
        }

/*
        void read(istream & in)
        {
            in  >> m_numInputs
                >> m_numHidden;
            
            delete [] m_inputs;
            delete [] m_hidden;

examples/NeuralNet/NeuralNet.h

            m_inputs = new Input[m_numInputs];
            m_hidden = new Neuron[m_numHidden];
            connectionize();

            double weight;

            for(int i = 0; i < m_numHidden; i++)
                for(int j = 0; j < m_hidden[i].numConnections(); j++)
                {
                    in >> weight;
                    m_hidden[i].setWeight(j, weight);
                }
            for(int k = 0; k < m_output.numConnections(); k++)
            {
                in >> weight;
                m_output.setWeight(k, weight);
            }
            
        }

        friend istream & operator>>(istream & in, NeuralNet & ann)
        {
            ann.read(in);
            return in;
        }

examples/NeuralNet/main.cpp

    }
    ids.close();


    NeuralNet *m_ann = new NeuralNet(numInputs, numHidden, xferFunc.c_str());

    double weight;
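    // weight-file layout, as read below: the numHidden x numInputs
    // hidden-layer weights come first, followed by the numHidden
    // hidden-to-output weights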
    for(int c = 0; c < numHidden; c++) {
        for(int j = 0; j < numInputs; j++) {
            ifs >> weight;
            m_ann->setHiddenWeight(c, j, weight);
        }
    }
    for(int k = 0; k < numHidden; k++) {
        ifs >> weight;
        m_ann->setOutputWeight(k, weight);
    }
    
    for(int d = 0; d < numInputs; d++) {
        m_ann->setInput(d, dataForNet[d]);
    }

    delete [] dataForNet;

    ifs.close();
    if(ifs.is_open()) {
        cerr << "Error closing neural network configuration file" << endl;
    }

    cout << m_ann->value() << endl;

examples/NeuralNet/pso_ann.pl

my $expectedValue = 3.5;	# this is the value that we want to train the ANN to produce (just like the example in t/PSO.t)


sub test_fitness_function(@) {
    my (@arr) = (@_);
	&writeAnnConfig($annConfig, $numInputs, $numHidden, $xferFunc, @arr);
	my $netValue = &runANN($annConfig, $annInputs);
	print "network value = $netValue\n";

	# the closer the network value gets to our desired value
	# then we want to set the fitness closer to 1.
	#
	# This is a special case of the sigmoid, and looks an awful lot
	# like the hyperbolic tangent ;)
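	# (exactly: 2/(1+e^d) = 1 - tanh(d/2), so a perfect match, d = 0,
	# yields fitness 1, and the fitness decays smoothly toward 0 as the
	# network output drifts from the target)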
	#
	my $magnitudeFromBest = abs($expectedValue - $netValue);
	return 2 / (1 + exp($magnitudeFromBest));
}

pso_set_params(\%test_params);
pso_register_fitness_function('test_fitness_function');
pso_optimize();
#my @solution = pso_get_solution_array();




##### io #########

sub writeAnnConfig() {

lib/AI/PSO.pm

use strict;
use warnings;
use Math::Random;
use Callback;

require Exporter;

our @ISA = qw(Exporter);

our @EXPORT = qw(
    pso_set_params
    pso_register_fitness_function
    pso_optimize
    pso_get_solution_array
);

our $VERSION = '0.86';


######################## BEGIN MODULE CODE #################################

lib/AI/PSO.pm

#
my @particles = ();
my $user_fitness_function;
my @solution = ();
#----------   END GLOBAL DATA STRUCTURES --------


#---------- BEGIN EXPORTED SUBROUTINES ----------

#
# pso_set_params
#  - sets the global module parameters from the hash passed in
#
sub pso_set_params(%) {
    my (%params) = %{$_[0]};
    my $retval = 0;

    #no strict 'refs';
    #foreach my $key (keys(%params)) {
    #    $$key = $params{$key};
    #}
    #use strict 'refs';

    $numParticles   = defined($params{numParticles})   ? $params{numParticles}   : 'null';

lib/AI/PSO.pm

	}
    
    $retval = 1 if($param_string =~ m/null/);
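    # (any 'null' left in the parameter string means a required parameter
    #  was never supplied, hence the non-zero return value)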

    return $retval;
}


#
# pso_register_fitness_function
#  - sets the user-defined callback fitness function
#
sub pso_register_fitness_function($) {
    my ($func) = @_;
    $user_fitness_function = new Callback(\&{"main::$func"});
    return 0;
}


#
# pso_optimize

lib/AI/PSO.pm

	if($psoRandomRange =~ m/null/) {
		$useModifiedAlgorithm = 1;
	} else {
		$useModifiedAlgorithm = 0;
	}
	&initialize_particles();
}

#
# initialize_particles
#    - sets up internal data structures
#    - initializes particle positions and velocities with an element of randomness
#
sub initialize_particles() {
    for(my $p = 0; $p < $numParticles; $p++) {
        $particles[$p]           = {};  # each particle is a hash of arrays with the array sizes being the dimensionality of the problem space
        $particles[$p]{nextPos}  = [];  # nextPos is the array of positions to move to on the next positional update
        $particles[$p]{bestPos}  = [];  # bestPos is the position that has yielded the best fitness for this particle (it gets updated when a better fitness is found)
        $particles[$p]{currPos}  = [];  # currPos is the current position of this particle in the problem space
        $particles[$p]{velocity} = [];  # velocity ... come on ...
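
        # so, after initialization, a particle looks roughly like this
        # (hypothetical 2-dimensional values):
        #   { currPos  => [0.3, 1.7], nextPos  => [0.4, 1.5],
        #     bestPos  => [0.3, 1.7], velocity => [0.1, -0.2] }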

lib/AI/PSO.pm

}



#
# initialize_neighbors
# NOTE: I made this a separate subroutine so that different topologies of neighbors can be created and used instead of this.
# NOTE: This subroutine is currently not used because we access neighbors by index to the particle array rather than storing their references
# 
#  - adds a neighbor array to the particle hash data structure
#  - sets the neighbor based on the default neighbor hash function
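#    (for example, a simple ring topology could map neighbor $n of particle $p
#     to index ($p + $n + 1) % $numParticles; see get_index_of_neighbor for
#     the mapping actually used)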
#
sub initialize_neighbors() {
    for(my $p = 0; $p < $numParticles; $p++) {
        for(my $n = 0; $n < $numNeighbors; $n++) {
            $particles[$p]{neighbor}[$n] = $particles[&get_index_of_neighbor($p, $n)];
        }
    }
}


lib/AI/PSO.pm


            ## update position
            for(my $d = 0; $d < $dimensions; $d++) {
                $particles[$p]{currPos}[$d] = $particles[$p]{nextPos}[$d];
            }

            ## test _current_ fitness of position
            my $fitness = &compute_fitness(@{$particles[$p]{currPos}});
            # if this position in hyperspace is the best so far...
            if($fitness > &compute_fitness(@{$particles[$p]{bestPos}})) {
                # for each dimension, set the best position as the current position
                for(my $d2 = 0; $d2 < $dimensions; $d2++) {
                    $particles[$p]{bestPos}[$d2] = $particles[$p]{currPos}[$d2];
                }
            }

            ## check for exit criteria
            if($fitness >= $exitFitness) {
                #...write solution
                print "Y:$iter:$p:$fitness\n";
                &save_solution(@{$particles[$p]{bestPos}});

lib/AI/PSO.pm

}


#
# compute_fitness
# - computes the fitness of a particle by using the user-specified fitness function
# 
# NOTE: I originally had a 'fitness cache' so that particles that stumbled upon the same
#       position wouldn't have to recalculate their fitness (which is often expensive).
#       However, this may be undesirable behavior for the user: if you come across
#       the same position again, you may be settling in on a local maximum, so you
#       might want to randomize things and keep searching.  For this reason, I'm
#       leaving the cache out.  It would be trivial
#       for users to implement their own cache since they are passed the same array of values.
#
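# A minimal user-side sketch of such a cache (hypothetical wrapper; the
# function names are illustrative, not part of this module):
#
#     my %fitness_cache;
#     sub cached_fitness_function {
#         my $key = join(',', @_);
#         $fitness_cache{$key} = real_fitness_function(@_)
#             unless exists $fitness_cache{$key};
#         return $fitness_cache{$key};
#     }
#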
sub compute_fitness(@) {
    my (@values) = @_;
    my $return_fitness = 0;

#    no strict 'refs';
#    if(defined(&{"main::$user_fitness_function"})) {
#        $return_fitness = &$user_fitness_function(@values);

lib/AI/PSO.pm

      meWeight       => 2.0,   # 'individuality' weighting constant (higher means more individuality)
      meMin          => 0.0,   # 'individuality' minimum random weight
      meMax          => 1.0,   # 'individuality' maximum random weight
      themWeight     => 2.0,   # 'social' weighting constant (higher means trust group more)
      themMin        => 0.0,   # 'social' minimum random weight 
      themMax        => 1.0,   # 'social' maximum random weight
      exitFitness    => 0.9,   # minimum fitness to achieve before exiting
      verbose        => 0,     # 0 prints solution
                               # 1 prints (Y|N):particle:fitness at each iteration
                               # 2 dumps each particle (+1)
      psoRandomRange => 4.0,   # setting this enables the original PSO algorithm and
                               # also subsequently ignores the me*/them* parameters
  );


  sub custom_fitness_function {
        my (@input) = @_;
        # this is a callback function.
        # @input is passed to this for you; you do not need to worry about setting it.
        # ... do something with @input, which is an array of floats
        # return a value in [0,1] with 0 being the worst and 1 being the best
  }

  pso_set_params(\%params);
  pso_register_fitness_function('custom_fitness_function');
  pso_optimize();
  my @solutionArray = pso_get_solution_array();


=head2  General Guidelines

=over 2

=item 1. Sociality versus individuality

    I suggest that meWeight and themWeight add up to 4.0, or that 
    psoRandomRange = 4.0.  Also, you should set meMin and themMin 
    to 0, and meMax and themMax to 1 unless you really know what 
    you are doing.
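
    For example, a balanced configuration under this guideline
    (illustrative values only) might set:

        meWeight   => 2.0,  meMin   => 0.0,  meMax   => 1.0,
        themWeight => 2.0,  themMin => 0.0,  themMax => 1.0,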

=item 2. Search space coverage

    If you have a large search space, increasing deltaMin and deltaMax 
    can help cover more area.  Conversely, if you have a small search 
    space, then decreasing them will fine-tune the search.

=item 3. Swarm Topology

lib/AI/PSO.pm

  The algorithm implemented in this module is taken from the book 
  I<Swarm Intelligence> by Russell Eberhart and James Kennedy.  
  I highly suggest you read the book if you are interested in this 
  sort of thing.  


=head1 EXPORTED FUNCTIONS

=over 4

=item pso_set_params()

  Sets the particle swarm configuration parameters to use for the search.

=item pso_register_fitness_function()

  Sets the user defined fitness function to call.  The fitness function 
  should return a value between 0 and 1.  Users may want to look into 
  the sigmoid function [1 / (1+e^(-x))] and its variants to implement 
  this.  Also, you may want to take a look at either t/PSO.t for the 
  simple test or examples/NeuralNet/pso_ann.pl for an example of 
  how to train a simple 3-layer feed-forward neural network.  (Note 
  that a real training application would have a real dataset with many 
  input-output pairs...pso_ann.pl is a _very_ simple example.  Also note 
  that the neural network example requires g++.  Type 'make run' in the 
  examples/NeuralNet directory to run the example.  Lastly, the 
  neural network C++ code is in a very different coding style.  I did 
  indeed write this, but it was many years ago when I was striving to 
  make my code nicely formatted and good looking :)).
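
  For example, a fitness function in the style of t/PSO.t, which rewards
  parameter sets whose sum lands near a target value (3.5 here) and
  squashes the distance into [0,1] using the sigmoid-like mapping above:

      sub my_fitness_function {
          my (@values) = @_;
          my $sum = 0;
          $sum += $_ foreach @values;
          return 2 / (1 + exp(abs(3.5 - $sum)));  # 1.0 when sum == 3.5
      }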

=item pso_optimize()

  Runs the particle swarm optimization algorithm.  This consists of 

t/PSO.t

        foreach my $val (@arr) {
                $sum += $val;
        }
	# sigmoid-like ==> squash the result to [0,1] and get as close to 3.5 as we can
	return 2 / (1 + exp(abs($testValue - $sum)));
}


ok( pso_set_params(\%test_params) == 0 );
ok( pso_register_fitness_function('test_fitness_function') == 0 );
ok( pso_optimize() == 0 );
my @solution = pso_get_solution_array();
ok( $#solution == $test_params{numParticles} - 1 );

ok( pso_set_params(\%test_params2) == 0 );
ok( pso_register_fitness_function('test_fitness_function') == 0 );
ok( pso_optimize() == 0 );
my @solution2 = pso_get_solution_array();
ok( $#solution2 == $test_params2{numParticles} - 1 );


