SYNOPSIS
use AI::ParticleSwarmOptimization::Pmap;
my $pso = AI::ParticleSwarmOptimization::Pmap->new (
-fitFunc => \&calcFit,
-dimensions => 3,
-iterations => 10,
-numParticles => 1000,
# Used only by the multi-core version; ideally equal to the number
# of CPU cores. A sensible value is chosen if left undefined.
-workers => 4,
);
my $fitValue = $pso->optimize ();
my ($best) = $pso->getBestParticles (1);
my ($fit, @values) = $pso->getParticleBestPos ($best);
printf "Fit %.4f at (%s)\n",
$fit, join ', ', map {sprintf '%.4f', $_} @values;
sub calcFit {
    my @values = @_;
    my $offset = int (-@values / 2);
    my $sum;
    select( undef, undef, undef, 0.01 );    # Simulation of heavy processing...
    $sum += ($_ - $offset++) ** 2 for @values;
    return $sum;
}
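As a self-contained illustration of this style of fitness function (a sketch only: `calc_fit` here is a hypothetical stand-alone version assuming, as in the example above, a sum of squared distances from a shifted index), its minimum lies at a known point:

```perl
# Hypothetical stand-alone fitness function in the style of calcFit:
# each value is compared against an increasing offset starting at int(-N/2).
sub calc_fit {
    my @values = @_;
    my $offset = int(-@values / 2);
    my $sum = 0;
    $sum += ($_ - $offset++) ** 2 for @values;
    return $sum;
}

# For 3 dimensions the offsets are -1, 0, 1, so the global minimum 0
# sits at the point (-1, 0, 1).
printf "fit at minimum: %g\n", calc_fit(-1, 0, 1);   # 0
printf "fit at origin:  %g\n", calc_fit(0, 0, 0);    # 1 + 0 + 1 = 2
```

A good fitness landscape like this lets the optimizer's reported best fit be checked against the known optimum.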
Description
This module is an enhancement of the original
AI::ParticleSwarmOptimization, adding multi-core processing by means of
Pmap. Below you will find the original documentation of that module,
with one difference: there is a new parameter, "-workers", which can be
used to set the number of parallel processes used during computation.
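Because -workers falls back to an automatic choice when undefined, a caller-side helper along these lines can make that choice explicit (a sketch only: default_workers is a hypothetical name, and the POSIX sysconf constant is not available on every platform, hence the eval guard):

```perl
use POSIX ();

# Hypothetical helper: use the requested worker count when given, otherwise
# fall back to the number of online CPUs (or 1 if that cannot be determined).
sub default_workers {
    my ($requested) = @_;
    return $requested if defined $requested;
    my $cores = eval { POSIX::sysconf(POSIX::_SC_NPROCESSORS_ONLN()) };
    return ($cores && $cores > 0) ? int $cores : 1;
}
```

The result would then be passed as, for example, -workers => default_workers(undef).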
The Particle Swarm Optimization technique uses communication of the
current best position found between a number of particles moving over a
hyper surface as a technique for locating the best location on the
surface (where 'best' is the minimum of some fitness function). For a
Wikipedia discussion of PSO see
http://en.wikipedia.org/wiki/Particle_swarm_optimization.
This pure Perl module is an implementation of the Particle Swarm
Optimization technique for finding minima of hyper surfaces. It
presents an object oriented interface that facilitates easy
configuration of the optimization parameters and (in principle) allows
the creation of derived classes to reimplement all aspects of the
optimization engine (a future version will describe the replaceable
engine components).
This implementation allows communication of a local best point between
a selected number of neighbours. It does not support a single global
best position that is known to all particles in the swarm.
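The per-dimension velocity update behind this scheme can be sketched in plain Perl (a simplified illustration: the two weight factors are fixed here so the arithmetic is reproducible, whereas the module draws them at random from [-meWeight, meWeight] and [-themWeight, themWeight] on every step):

```perl
# One-dimensional velocity and position update for a single particle.
my ($inertia, $meFactor, $themFactor) = (0.9, 0.5, 0.5);
my ($vel, $curr) = (1.0, 2.0);   # current velocity and position
my $meBest       = 3.0;          # this particle's best position so far
my $neighBest    = 5.0;          # best position among its neighbours

my $newVel = $vel * $inertia                    # carry forward old velocity
           + $meFactor   * ($meBest    - $curr) # pull toward own best
           + $themFactor * ($neighBest - $curr);# pull toward neighbourhood best
my $newPos = $curr + $newVel;

printf "velocity %.2f, position %.2f\n", $newVel, $newPos;
```

With these fixed factors the particle accelerates toward the neighbourhood best: the new velocity is 0.9 + 0.5 + 1.5 = 2.9, moving the particle from 2.0 to 4.9.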
Methods
AI::ParticleSwarmOptimization provides the following public methods.
The parameter lists shown for the methods denote optional parameters by
showing them in [].
new (%parameters)
Create an optimization object. The following parameters may be used:
-inertia: positive or zero number, optional
Determines what proportion of the previous velocity is carried
forward to the next iteration. Defaults to 0.9.
See also -meWeight and -themWeight.
-iterations: number, optional
Number of optimization iterations to perform. Defaults to 1000.
-meWeight: number, optional
Coefficient determining the influence of the current local best
position on the next iteration's velocity. Defaults to 0.5.
See also -inertia and -themWeight.
-numNeighbors: positive number, optional
Number of local particles considered to be part of the
neighbourhood of the current particle. Defaults to the square root
of the total number of particles.
-stallSpeed: positive number, optional
Speed below which a particle is considered to be stalled and is
repositioned to a new random location with a new initial speed.
By default -stallSpeed is undefined but particles with a speed of 0
will be repositioned.
-themWeight: number, optional
Coefficient determining the influence of the neighbourhood best
position on the next iteration's velocity. Defaults to 0.5.
See also -inertia and -meWeight.
-exitPlateau: boolean, optional
Set true to have the optimization check for plateaus (regions where
the fit hasn't improved much for a while) during the search. The
optimization ends when a suitable plateau is detected following the
burn in period.
-verbose: flags, optional
If set to a non-zero value -verbose determines the level of
diagnostic print reporting that is generated during optimization.
The following constants may be bitwise ored together to set logging
options:
* kLogBetter
prints particle details when its fit becomes better than its
previous best.
* kLogStall
prints particle details when its velocity reaches 0 or falls
below the stall threshold.
* kLogIter
Shows the current iteration number.
setParams (%parameters)
Set or change optimization parameters. See -new above for a
description of the parameters that may be supplied.
init ()
Reinitialize the optimization. init () will be called during the
first call to optimize () if it hasn't already been called.
optimize ()
Runs the minimization optimization. Returns the fit value of the best
fit found. The best possible fit is negative infinity.
optimize () may be called repeatedly to continue the fitting process.
The fit processing on each subsequent call will continue from where
the last call left off.
getParticleState ()
Returns the vector of positions.
getBestParticles ([$n])
Takes an optional count.
Returns a list containing the best $n particle numbers. If $n is not
specified only the best particle number is returned.
getParticleBestPos ($particleNum)
Returns a list containing the best value of the fit and the vector of
its point in hyper space.
my ($fit, @vector) = $pso->getParticleBestPos (3);
getIterationCount ()
Return the number of iterations performed. This may be useful when
the -exitFit criteria has been met or where multiple calls to
optimize have been made.
example/PSOTest-MultiCore.pl view on Meta::CPAN
++$|;
#-----------------------------------------------------------------------
#my $pso = AI::ParticleSwarmOptimization->new( # Single-core
#my $pso = AI::ParticleSwarmOptimization::MCE->new( # Multi-core
my $pso = AI::ParticleSwarmOptimization::Pmap->new( # Multi-core
-fitFunc => \&calcFit,
-dimensions => 10,
-iterations => 10,
-numParticles => 1000,
# Used only by the multi-core version; ideally equal to the number
# of CPU cores. A sensible value is chosen if left undefined.
-workers => 4,
);
my $beg = time;
$pso->init();
my $fitValue = $pso->optimize ();
my ( $best ) = $pso->getBestParticles (1);
my ( $fit, @values ) = $pso->getParticleBestPos ($best);
my $iters = $pso->getIterationCount();
printf "Fit %.4f at (%s) after %d iterations\n", $fit, join (', ', map {sprintf '%.4f', $_} @values), $iters;
warn "\nTime: ", time - $beg, "\n\n";
#=======================================================================
exit 0;
lib/AI/ParticleSwarmOptimization/Pmap.pm view on Meta::CPAN
print "Iter $iter\n" if $self->{verbose} & AI::ParticleSwarmOptimization::kLogIter;
my @lst = grep { defined $_ } parallel_map {
my $prtcl = $_;
#---------------------------------------------------------------
@{$prtcl->{currPos}} = @{$prtcl->{nextPos}};
$prtcl->{currFit} = $prtcl->{nextFit};
my $fit = $prtcl->{currFit};
if ($self->_betterFit ($fit, $prtcl->{bestFit})) {
# Save position - best fit for this particle so far
$self->_saveBest ($prtcl, $fit, $iter);
}
my $ret;
if( defined $self->{exitFit} and $fit < $self->{exitFit} ){
$ret = $fit;
}elsif( !($self->{verbose} & AI::ParticleSwarmOptimization::kLogIterDetail) ){
$ret = undef;
}else{
printf "Part %3d fit %8.2f", $prtcl->{id}, $fit;
print "\n";
$ret = undef;
}
#---------------------------------------------------------------
[ $prtcl, $ret ]
} @{ $self->{ prtcls } };
@{ $self->{ prtcls } } = map { $_->[ 0 ] } @lst;
$self->{ bestBest } = min map { $_->{ bestFit } } @{ $self->{ prtcls } };
my @fit = map { $_->[ 1 ] } grep { defined $_->[ 1 ] } @lst;
return scalar( @fit ) ? ( sort { $a <=> $b } @fit )[ 0 ] : undef;
}
#=======================================================================
sub _updateVelocities {
my ($self, $iter) = @_;
@{ $self->{ prtcls } } = parallel_map {
my $prtcl = $_;
#---------------------------------------------------------------
my $bestN = $self->{prtcls}[$self->_getBestNeighbour ($prtcl)];
my $velSq;
for my $d (0 .. $self->{dimensions} - 1) {
my $meFactor =
$self->_randInRange (-$self->{meWeight}, $self->{meWeight});
my $themFactor =
$self->_randInRange (-$self->{themWeight}, $self->{themWeight});
my $meDelta = $prtcl->{bestPos}[$d] - $prtcl->{currPos}[$d];
my $themDelta = $bestN->{bestPos}[$d] - $prtcl->{currPos}[$d];
$prtcl->{velocity}[$d] =
$prtcl->{velocity}[$d] * $self->{inertia} +
$meFactor * $meDelta +
$themFactor * $themDelta;
$velSq += $prtcl->{velocity}[$d]**2;
}
my $vel = sqrt ($velSq);
if (!$vel or $self->{stallSpeed} and $vel <= $self->{stallSpeed}) {
=head1 SYNOPSIS
use AI::ParticleSwarmOptimization::Pmap;
my $pso = AI::ParticleSwarmOptimization::Pmap->new (
-fitFunc => \&calcFit,
-dimensions => 3,
-iterations => 10,
-numParticles => 1000,
# Used only by the multi-core version; ideally equal to the number
# of CPU cores. A sensible value is chosen if left undefined.
-workers => 4,
);
my $fitValue = $pso->optimize ();
my ($best) = $pso->getBestParticles (1);
my ($fit, @values) = $pso->getParticleBestPos ($best);
printf "Fit %.4f at (%s)\n",
$fit, join ', ', map {sprintf '%.4f', $_} @values;
sub calcFit {
my @values = @_;
my $offset = int (-@values / 2);
my $sum;
select( undef, undef, undef, 0.01 );    # Simulation of heavy processing...
$sum += ($_ - $offset++) ** 2 for @values;
return $sum;
}
=head1 Description
This module is an enhancement of the original AI::ParticleSwarmOptimization,
adding multi-core processing by means of Pmap. Below you will find the original
documentation of that module, with one difference: there is a new parameter,
"-workers", which can be used to set the number of parallel processes used
during computation.
The Particle Swarm Optimization technique uses communication of the current best
position found between a number of particles moving over a hyper surface as a
technique for locating the best location on the surface (where 'best' is the
minimum of some fitness function). For a Wikipedia discussion of PSO see
http://en.wikipedia.org/wiki/Particle_swarm_optimization.
This pure Perl module is an implementation of the Particle Swarm Optimization
technique for finding minima of hyper surfaces. It presents an object oriented
interface that facilitates easy configuration of the optimization parameters and
(in principle) allows the creation of derived classes to reimplement all aspects
of the optimization engine (a future version will describe the replaceable
engine components).
This implementation allows communication of a local best point between a
selected number of neighbours. It does not support a single global best position
that is known to all particles in the swarm.
=head1 Methods
AI::ParticleSwarmOptimization provides the following public methods. The parameter lists shown
for the methods denote optional parameters by showing them in [].
=over 4
=item new (%parameters)
=item I<-inertia>: positive or zero number, optional

Determines what proportion of the previous velocity is carried forward to the
next iteration. Defaults to 0.9.
See also I<-meWeight> and I<-themWeight>.
=item I<-iterations>: number, optional
Number of optimization iterations to perform. Defaults to 1000.
=item I<-meWeight>: number, optional
Coefficient determining the influence of the current local best position on the
next iteration's velocity. Defaults to 0.5.
See also I<-inertia> and I<-themWeight>.
=item I<-numNeighbors>: positive number, optional
Number of local particles considered to be part of the neighbourhood of the
current particle. Defaults to the square root of the total number of particles.
=item I<-numParticles>: positive number, optional
=item I<-stallSpeed>: positive number, optional
Speed below which a particle is considered to be stalled and is repositioned to
a new random location with a new initial speed.
By default I<-stallSpeed> is undefined but particles with a speed of 0 will be
repositioned.
=item I<-themWeight>: number, optional
Coefficient determining the influence of the neighbourhood best position on the
next iteration's velocity. Defaults to 0.5.
See also I<-inertia> and I<-meWeight>.
=item I<-exitPlateau>: boolean, optional
Set true to have the optimization check for plateaus (regions where the fit
hasn't improved much for a while) during the search. The optimization ends when
a suitable plateau is detected following the burn in period.
=item I<-verbose>: flags, optional

If set to a non-zero value I<-verbose> determines the level of diagnostic print
reporting that is generated during optimization.
The following constants may be bitwise ored together to set logging options:
=over 4
=item * kLogBetter
prints particle details when its fit becomes better than its previous best.
=item * kLogStall
prints particle details when its velocity reaches 0 or falls below the stall
threshold.
=item * kLogIter
Shows the current iteration number.
=item B<setParams (%parameters)>

Set or change optimization parameters. See I<-new> above for a description of
the parameters that may be supplied.
=item B<init ()>
Reinitialize the optimization. B<init ()> will be called during the first call
to B<optimize ()> if it hasn't already been called.
=item B<optimize ()>
Runs the minimization optimization. Returns the fit value of the best fit
found. The best possible fit is negative infinity.
B<optimize ()> may be called repeatedly to continue the fitting process. The fit
processing on each subsequent call will continue from where the last call left
off.
=item B<getParticleState ()>
Returns the vector of positions.
=item B<getBestParticles ([$n])>
Takes an optional count.
Returns a list containing the best $n particle numbers. If $n is not specified
only the best particle number is returned.
=item B<getParticleBestPos ($particleNum)>
Returns a list containing the best value of the fit and the vector of its point
in hyper space.
my ($fit, @vector) = $pso->getParticleBestPos (3);
=item B<getIterationCount ()>
Return the number of iterations performed. This may be useful when the
I<-exitFit> criteria has been met or where multiple calls to I<optimize> have
been made.
t/01_pso_multi.t view on Meta::CPAN
plan (tests => 1);
# Calculation tests.
my $pso = AI::ParticleSwarmOptimization::Pmap->new (
-fitFunc => \&calcFit,
-dimensions => 10,
-iterations => 10,
-numParticles => 1000,
# Used only by the multi-core version; ideally equal to the number
# of CPU cores. A sensible value is chosen if left undefined.
-workers => 4,
);
$pso->init();
my $fitValue = $pso->optimize ();
my ( $best ) = $pso->getBestParticles (1);
my ( $fit, @values ) = $pso->getParticleBestPos ($best);
my $iters = $pso->getIterationCount();
ok ( $fit > 0, 'Computations');
sub calcFit {
    my @values = @_;
    my $offset = int (-@values / 2);
    my $sum;
    $sum += ($_ - $offset++) ** 2 for @values;
    return $sum;
}