AI-ParticleSwarmOptimization-Pmap
    my $pso = AI::ParticleSwarmOptimization::Pmap->new (
        -fitFunc      => \&calcFit,
        -dimensions   => 3,
        -iterations   => 10,
        -numParticles => 1000,
        # Only for the multi-core version: works best when equal to the
        # number of cores in your system; a value is selected automatically
        # if left undefined.
        -workers      => 4,
    );

    my $fitValue        = $pso->optimize ();
    my ($best)          = $pso->getBestParticles (1);
    my ($fit, @values)  = $pso->getParticleBestPos ($best);

    printf "Fit %.4f at (%s)\n",
        $fit, join ', ', map { sprintf '%.4f', $_ } @values;

    sub calcFit {
        my @values = @_;
        my $offset = int (-@values / 2);
        my $sum;
        select (undef, undef, undef, 0.01);    # Simulation of heavy processing...
        $sum += ($_ - $offset++) ** 2 for @values;
        return $sum;
    }
Description
This module is an enhancement of the original AI::ParticleSwarmOptimization, adding multi-core processing by means of Pmap. Below you can find the original documentation of that module, with one difference: there is a new parameter, "-workers", which sets the number of parallel workers used during optimization.
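Following the comments in the synopsis code, "-workers" can also be left out entirely so that the module chooses a value itself; a minimal sketch using only parameters shown above (calcFit is the fitness function from the synopsis):

```perl
use AI::ParticleSwarmOptimization::Pmap;

# Sketch: omitting -workers lets the module select a suitable worker
# count on its own (per the comments in the synopsis code above).
my $pso = AI::ParticleSwarmOptimization::Pmap->new (
    -fitFunc      => \&calcFit,
    -dimensions   => 3,
    -iterations   => 10,
    -numParticles => 1000,
    # -workers omitted: a suitable value is selected automatically
);
```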
5354555657585960616263646566676869707172
This pure Perl module is an implementation of the Particle Swarm Optimization technique for finding minima of hyper surfaces. It presents an object oriented interface that facilitates easy configuration of the optimization parameters and (in principle) allows the creation of derived classes to reimplement all aspects of the optimization engine (a future version will describe the replaceable engine components).

This implementation allows communication of a local best point between a selected number of neighbours. It does not support a single global best position that is known to all particles in the swarm.
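In the parent AI::ParticleSwarmOptimization interface the neighbourhood size is controlled by a -numNeighbors parameter; treat that name as an assumption here, since it comes from the parent module's documentation rather than from the code in this document. A sketch:

```perl
use AI::ParticleSwarmOptimization::Pmap;

# Sketch: -numNeighbors (documented by the parent module, assumed to be
# honoured here) sets how many particles share each local best point;
# calcFit is the fitness function from the synopsis.
my $pso = AI::ParticleSwarmOptimization::Pmap->new (
    -fitFunc      => \&calcFit,
    -dimensions   => 3,
    -iterations   => 10,
    -numParticles => 100,
    -numNeighbors => 5,        # each local best is shared by 5 neighbours
    -workers      => 4,
);
```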
Methods
AI::ParticleSwarmOptimization provides the following public methods. The parameter lists shown for the methods denote optional parameters by showing them in [].
new (%parameters)
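The methods exercised throughout the examples in this document (init, optimize, getBestParticles, getParticleBestPos, getIterationCount) combine as follows; a sketch assuming $pso was constructed as in the synopsis:

```perl
# Sketch of the usual call sequence, using only methods that appear
# elsewhere in this document; $pso is a constructed
# AI::ParticleSwarmOptimization::Pmap object.
$pso->init ();                                        # place the particles
my $fitValue     = $pso->optimize ();                 # run the swarm
my ($best)       = $pso->getBestParticles (1);        # best particle's id
my ($fit, @pos)  = $pso->getParticleBestPos ($best);  # its best fit and position
my $iters        = $pso->getIterationCount ();        # iterations actually run
printf "Fit %.4f after %d iterations\n", $fit, $iters;
```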
example/PSOTest-MultiCore.pl
    use strict;
    use warnings;
    #use AI::ParticleSwarmOptimization;
    #use AI::ParticleSwarmOptimization::MCE;
    use AI::ParticleSwarmOptimization::Pmap;    # required for the constructor call below
    use Data::Dumper; $::Data::Dumper::Sortkeys = 1;
    #=======================================================================
    sub calcFit {
        my @values = @_;
        my $offset = int (-@values / 2);
        my $sum;
        select (undef, undef, undef, 0.01);    # Simulation of heavy processing...
        $sum += ($_ - $offset++) ** 2 for @values;
        return $sum;
    }
    #=======================================================================
    ++$|;    # autoflush STDOUT
    #-----------------------------------------------------------------------
    #my $pso = AI::ParticleSwarmOptimization->new(       # Single-core
    #my $pso = AI::ParticleSwarmOptimization::MCE->new(  # Multi-core
    my $pso = AI::ParticleSwarmOptimization::Pmap->new(  # Multi-core
        -fitFunc      => \&calcFit,
        -dimensions   => 10,
        -iterations   => 10,
        -numParticles => 1000,
        # Only for the multi-core version: works best when equal to the
        # number of cores in your system; a value is selected automatically
        # if left undefined.
        -workers      => 4,
    );

    my $beg = time;
    $pso->init();
    my $fitValue = $pso->optimize ();
    my ($best)   = $pso->getBestParticles (1);
t/01_pso_multi.t
    plan (tests => 1);

    # Calculation tests.
    my $pso = AI::ParticleSwarmOptimization::Pmap->new (
        -fitFunc      => \&calcFit,
        -dimensions   => 10,
        -iterations   => 10,
        -numParticles => 1000,
        # Only for the many-core version: works best when equal to the
        # number of cores in your system; a value is selected automatically
        # if left undefined.
        -workers      => 4,
    );

    $pso->init();
    my $fitValue        = $pso->optimize ();
    my ($best)          = $pso->getBestParticles (1);
    my ($fit, @values)  = $pso->getParticleBestPos ($best);
    my $iters           = $pso->getIterationCount();