AI-PSO
Revision history for Perl extension AI::PSO.
0.86 Tue Nov 21 20:41:23 2006
- updated documentation
- added support for original RE & JK algorithm
- abstracted initialization function

0.85 Wed Nov 15 22:30:47 2006
- corrected the fitness function in the test
- added perceptron c++ code that I wrote a long time ago ;)
- added an example (pso_ann.pl) for training a simple feed-forward neural network
- updated POD

0.82 Sat Nov 11 22:20:31 2006
- fixed POD to correctly 'use AI::PSO'
MPL-1.1.txt view on Meta::CPAN
Electronic Distribution Mechanism is maintained by a third party.
3.3. Description of Modifications.
You must cause all Covered Code to which You contribute to contain a
file documenting the changes You made to create that Covered Code and
the date of any change. You must include a prominent statement that
the Modification is derived, directly or indirectly, from Original
Code provided by the Initial Developer and including the name of the
Initial Developer in (a) the Source Code, and (b) in any notice in an
Executable version or related documentation in which You describe the
origin or ownership of the Covered Code.
3.4. Intellectual Property Matters
(a) Third Party Claims.
If Contributor has knowledge that a license under a third party's
intellectual property rights is required to exercise the rights
granted by such Contributor under Sections 2.1 or 2.2,
Contributor must include a text file with the Source Code
distribution titled "LEGAL" which describes the claim and the
party making the claim in sufficient detail that a recipient will
know whom to contact. If Contributor obtains such knowledge after
(b) Contributor APIs.
If Contributor's Modifications include an application programming
interface and Contributor has knowledge of patent licenses which
are reasonably necessary to implement that API, Contributor must
also include this information in the LEGAL file.
(c) Representations.
Contributor represents that, except as disclosed pursuant to
Section 3.4(a) above, Contributor believes that Contributor's
Modifications are Contributor's original creation(s) and/or
Contributor has sufficient rights to grant the rights conveyed by
this License.
3.5. Required Notices.
You must duplicate the notice in Exhibit A in each file of the Source
Code. If it is not possible to put such notice in a particular Source
Code file due to its structure, then You must include such notice in a
location (such as a relevant directory) where a user would be likely
to look for such a notice. If You created one or more Modification(s)
You may add your name as a Contributor to the notice described in
lib/AI/PSO.pm view on Meta::CPAN
my $deltaMax = 'null'; # This is the maximum scalar position change value when searching
#-#-# my 'how much do I trust myself versus my neighbors' parameters #-#-#
my $meWeight = 'null'; # 'individuality' weighting constant (higher weight (than group) means trust individual more, neighbors less)
my $meMin = 'null'; # 'individuality' minimum random weight (this should really be between 0, 1)
my $meMax = 'null'; # 'individuality' maximum random weight (this should really be between 0, 1)
my $themWeight = 'null'; # 'social' weighting constant (higher weight (than individual) means trust group more, self less)
my $themMin = 'null'; # 'social' minimum random weight (this should really be between 0, 1)
my $themMax = 'null'; # 'social' maximum random weight (this should really be between 0, 1)
my $psoRandomRange = 'null'; # PSO::.86 new variable to support original unmodified algorithm
my $useModifiedAlgorithm = 'null';
#-#-# user/debug parameters #-#-#
my $verbose = 0; # This one defaults for obvious reasons...
#NOTE: $meWeight and $themWeight should really add up to a constant value.
# Swarm Intelligence defines a 'pso random range' constant and then computes two random numbers
# within this range by first getting a random number and then subtracting it from the range.
# e.g.
# $randomRange = 4.0
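The weighting scheme described in the comment above can be sketched as follows. This is an illustrative fragment, not the module's internals; the variable names simply mirror the comment.

```perl
use strict;
use warnings;

# 'pso random range' scheme: two stochastic weights that always sum
# to a fixed constant (illustrative sketch, not AI::PSO's internals)
my $randomRange = 4.0;

# draw the first stochastic weight uniformly from [0, $randomRange) ...
my $random1 = rand($randomRange);

# ... and take the second as the remainder of the range, so the two
# weights always sum to the constant $randomRange
my $random2 = $randomRange - $random1;

printf "me = %.3f, them = %.3f, sum = %.1f\n",
       $random1, $random2, $random1 + $random2;
```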
#
sub save_solution(@) {
@solution = @_;
}
#
# compute_fitness
# - computes the fitness of a particle by using the user-specified fitness function
#
# NOTE: I originally had a 'fitness cache' so that particles that stumbled upon the same
# position wouldn't have to recalculate their fitness (which is often expensive).
# However, this may be undesirable behavior for the user (if you come across the same position
# then you may be settling in on a local maxima so you might want to randomize things and
# keep searching. For this reason, I'm leaving the cache out. It would be trivial
# for users to implement their own cache since they are passed the same array of values.
#
sub compute_fitness(@) {
my (@values) = @_;
my $return_fitness = 0;
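The note above says a user-side fitness cache is trivial to implement since the callback always receives the same array of values. A minimal sketch of such a cache follows; the helper names (`cached_fitness_function`, `expensive_fitness`) are hypothetical and not part of AI::PSO.

```perl
use strict;
use warnings;

# Illustrative user-side fitness cache (hypothetical helpers, not part
# of AI::PSO).  Positions are rounded into a hash key, so revisiting
# the same point returns the memoized fitness instead of recomputing.
my %fitness_cache;

sub expensive_fitness {
    my (@values) = @_;
    my $sum = 0;
    $sum += $_ ** 2 for @values;
    return 1 / (1 + $sum);    # stand-in for a costly evaluation
}

sub cached_fitness_function {
    my (@values) = @_;
    # the key's precision controls when two positions count as "the same"
    my $key = join ',', map { sprintf '%.6f', $_ } @values;
    $fitness_cache{$key} //= expensive_fitness(@values);
    return $fitness_cache{$key};
}
```

As the original note warns, caching like this can mask the hint that the swarm is settling on a local maximum, so whether it is worth the speedup depends on the problem.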
meWeight => 2.0, # 'individuality' weighting constant (higher means more individuality)
meMin => 0.0, # 'individuality' minimum random weight
meMax => 1.0, # 'individuality' maximum random weight
themWeight => 2.0, # 'social' weighting constant (higher means trust group more)
themMin => 0.0, # 'social' minimum random weight
themMax => 1.0, # 'social' maximum random weight
exitFitness => 0.9, # minimum fitness to achieve before exiting
verbose => 0, # 0 prints solution
# 1 prints (Y|N):particle:fitness at each iteration
# 2 dumps each particle (+1)
psoRandomRange => 4.0, # setting this enables the original PSO algorithm and
# also subsequently ignores the me*/them* parameters
);
sub custom_fitness_function {
    my (@input) = @_;
    # this is a callback function.
    # @input will be passed to you; you do not need to worry about setting it...
    # ... do something with @input, which is an array of floats
    # return a value in [0,1], with 0 being the worst and 1 being the best
}
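Wiring the parameter hash and the callback together follows the module's documented calling sequence (`pso_set_params`, `pso_register_fitness_function`, `pso_optimize`, `pso_get_solution_array`). The sketch below assumes AI::PSO is installed; the keys not shown in the excerpt above (`numParticles`, `numNeighbors`, `maxIterations`, `dimensions`, `deltaMin`, `deltaMax`) are taken from the module's synopsis and should be checked against the installed version.

```perl
use strict;
use warnings;
use AI::PSO;

my %params = (
    numParticles  => 4,      # swarm size
    numNeighbors  => 3,      # neighborhood size
    maxIterations => 1000,   # hard stop if exitFitness is never reached
    dimensions    => 4,      # dimensionality of the search space
    deltaMin      => -4.0,   # minimum scalar position change per step
    deltaMax      => 4.0,    # maximum scalar position change per step
    meWeight      => 2.0,    # 'individuality' weighting constant
    meMin         => 0.0,
    meMax         => 1.0,
    themWeight    => 2.0,    # 'social' weighting constant
    themMin       => 0.0,
    themMax       => 1.0,
    exitFitness   => 0.9,    # stop once any particle reaches this fitness
    verbose       => 0,
);

sub custom_fitness_function {
    my (@input) = @_;
    # toy objective: fitness approaches 1 as the coordinates approach zero
    my $sum = 0;
    $sum += $_ ** 2 for @input;
    return 1 / (1 + $sum);
}

pso_set_params(\%params);
pso_register_fitness_function('custom_fitness_function');
pso_optimize();
my @solution = pso_get_solution_array();
```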
fitness. If it does not meet the exit criteria then it gets
information from neighboring particles about how well they are doing.
If a neighboring particle is doing better, then the current particle
tries to move closer to its neighbor by adjusting its position. As
mentioned, the velocity controls how quickly a particle changes
location in the problem hyperspace. There are also some stochastic
weights involved in the positional updates so that each particle is
truly independent and can take its own search path while still
incorporating good information from other particles. In this
particular Perl module, the user is able to choose from two
implementations of the algorithm. One is the original implementation
from I<Swarm Intelligence> which requires the definition of a
'random range' to which the two stochastic weights are required to
sum. The other implementation allows the user to define the weighting
of how much a particle follows its own path versus following its
peers. In both cases there is an element of randomness.
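The paragraph above can be sketched as a single one-dimensional update step. The module itself works on full position vectors and its exact update may differ; the names and values here are illustrative only, using the weighted form where a particle is pulled partly toward its own best position and partly toward its neighborhood's best.

```perl
use strict;
use warnings;

# One-dimensional sketch of the weighted position update described above
# (illustrative only; AI::PSO updates full position vectors)
my ($meWeight, $themWeight) = (2.0, 2.0);
my ($meMin, $meMax)         = (0.0, 1.0);
my ($themMin, $themMax)     = (0.0, 1.0);

my ($x, $v) = (0.0, 0.0);   # particle position and velocity
my $pbest   = 1.5;          # best position this particle has found
my $nbest   = 3.0;          # best position among its neighbors

# stochastic weights give each particle an independent search path
my $r1 = $meMin   + rand($meMax   - $meMin);
my $r2 = $themMin + rand($themMax - $themMin);

# pulled partly toward its own best, partly toward the neighborhood best
$v += $meWeight   * $r1 * ($pbest - $x)
    + $themWeight * $r2 * ($nbest - $x);
$x += $v;
```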
Solution convergence is quite fast once one particle comes close to
a local maximum. Having more particles active means there is a better
chance that you will not get stuck in a local maximum. Oftentimes
different neighborhoods (when not configured in a global neighborhood