AI-PSO


Changes


0.82  Sat Nov 11 22:20:31 2006
	- fixed POD to correctly 'use AI::PSO'
	- fixed fitness function in PSO.t
	- added research paper to package
	- moved into a subversion repository
	- removed requirement for perl 5.8.8
	- removed printing of solution array in test

0.80  Sat Nov 11 14:22:27 2006
	- changed namespace to AI::PSO
	- added a pso_get_solution_array function

0.70  Fri Nov 10 23:50:32 2006
	- added user callback fitness function
	- added POD
	- added tests
	- fixed typos
	- changed version to 0.70 because I like 0.7

0.01  Fri Nov 10 18:53:56 2006
	- initial version

MPL-1.1.txt

     Executable version available; and if made available via Electronic
     Distribution Mechanism, must remain available for at least twelve (12)
     months after the date it initially became available, or at least six
     (6) months after a subsequent version of that particular Modification
     has been made available to such recipients. You are responsible for
     ensuring that the Source Code version remains available even if the
     Electronic Distribution Mechanism is maintained by a third party.

     3.3. Description of Modifications.
     You must cause all Covered Code to which You contribute to contain a
     file documenting the changes You made to create that Covered Code and
     the date of any change. You must include a prominent statement that
     the Modification is derived, directly or indirectly, from Original
     Code provided by the Initial Developer and including the name of the
     Initial Developer in (a) the Source Code, and (b) in any notice in an
     Executable version or related documentation in which You describe the
     origin or ownership of the Covered Code.

     3.4. Intellectual Property Matters
          (a) Third Party Claims.
          If Contributor has knowledge that a license under a third party's
          intellectual property rights is required to exercise the rights

lib/AI/PSO.pm

                                         # which must obviously be less than the number of particles and greater than 0.
                                         # TODO: write code to preconstruct different topologies, such as fully connected, ring, star, etc.
                                         #       Currently, neighbors are chosen by a simple hash function.
                                         #       It would be fun (no theoretical benefit that I know of) to play with different topologies.
my $maxIterations = 'null';            # This is the maximum number of optimization iterations before exiting if the fitness goal is never reached.
my $exitFitness   = 'null';            # this is the exit criteria.  It must be a value between 0 and 1.
my $dimensions    = 'null';            # this is the number of variables the user is optimizing


#-#-# pso position parameters #-#-#
my $deltaMin       = 'null';           # This is the minimum scalar position change value when searching
my $deltaMax       = 'null';           # This is the maximum scalar position change value when searching

#-#-# my 'how much do I trust myself versus my neighbors' parameters #-#-#
my $meWeight   = 'null';               # 'individuality' weighting constant (higher weight (than group) means trust individual more, neighbors less)
my $meMin      = 'null';               # 'individuality' minimum random weight (this should really be between 0 and 1)
my $meMax      = 'null';               # 'individuality' maximum random weight (this should really be between 0 and 1)
my $themWeight = 'null';               # 'social' weighting constant (higher weight (than individual) means trust group more, self less)
my $themMin    = 'null';               # 'social' minimum random weight (this should really be between 0 and 1)
my $themMax    = 'null';               # 'social' maximum random weight (this should really be between 0 and 1)

my $psoRandomRange = 'null';           # PSO::.86 new variable to support original unmodified algorithm
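
# NOTE: hypothetical sketch, not module code -- one way the single
# $psoRandomRange of the original algorithm (see the POD description
# below) can be split into the two stochastic weights so that they
# always sum to the configured range:
sub example_split_random_range {
    my $me   = rand($psoRandomRange);     # individual weight drawn from [0, range)
    my $them = $psoRandomRange - $me;     # social weight; $me + $them == $psoRandomRange
    return ($me, $them);
}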

lib/AI/PSO.pm

            $bestNeighborFitness = &compute_fitness(@{$particles[$particleNeighborIndex]{bestPos}});
            $bestNeighborIndex = $particleNeighborIndex;
        }
    }
    # TODO: insert error checking code / defensive programming
    return $bestNeighborIndex;   # index of the fittest neighbor found in the loop above
}

#
# clamp_velocity
# - restricts the change in velocity to be within a certain range (prevents large jumps in problem hyperspace)
#
sub clamp_velocity($) {
    my ($dx) = @_;
    if($dx < $deltaMin) {
        $dx = $deltaMin;
    } elsif($dx > $deltaMax) {
        $dx = $deltaMax;
    }
    return $dx;
}
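
# NOTE: the sub below is NOT part of AI::PSO; it is a simplified,
# hypothetical sketch of where clamp_velocity() typically fits in a PSO
# update step, assuming the parameters declared above hold numeric
# values.  Each velocity component is pulled toward the particle's own
# best position and its best neighbor's best position using the
# stochastic 'me'/'them' weights, and the resulting step is clamped to
# the [$deltaMin, $deltaMax] range.
sub example_velocity_update {
    # all arguments are array references over the problem dimensions (hypothetical signature)
    my ($velRef, $posRef, $myBestRef, $neighborBestRef) = @_;
    my @newVel;
    for my $d (0 .. $#{$velRef}) {
        # random weights drawn from [meMin, meMax] and [themMin, themMax], scaled by the trust constants
        my $me   = $meWeight   * ($meMin   + rand($meMax   - $meMin));
        my $them = $themWeight * ($themMin + rand($themMax - $themMin));
        my $dv   = $velRef->[$d]
                 + $me   * ($myBestRef->[$d]       - $posRef->[$d])
                 + $them * ($neighborBestRef->[$d] - $posRef->[$d]);
        push @newVel, clamp_velocity($dv);   # keep the step size within bounds
    }
    return @newVel;
}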

lib/AI/PSO.pm


=head1 SYNOPSIS

  use AI::PSO;

  my %params = (
      numParticles   => 4,     # total number of particles involved in search 
      numNeighbors   => 3,     # number of particles with which each particle will share its progress
      maxIterations  => 1000,  # maximum number of iterations before exiting with no solution found
      dimensions     => 4,     # number of parameters you want to optimize
      deltaMin       => -4.0,  # minimum change in velocity during PSO update
      deltaMax       =>  4.0,  # maximum change in velocity during PSO update
      meWeight       => 2.0,   # 'individuality' weighting constant (higher means more individuality)
      meMin          => 0.0,   # 'individuality' minimum random weight
      meMax          => 1.0,   # 'individuality' maximum random weight
      themWeight     => 2.0,   # 'social' weighting constant (higher means trust group more)
      themMin        => 0.0,   # 'social' minimum random weight 
      themMax        => 1.0,   # 'social' maximum random weight
      exitFitness    => 0.9,   # minimum fitness to achieve before exiting
      verbose        => 0,     # 0 prints solution
                               # 1 prints (Y|N):particle:fitness at each iteration
                               # 2 dumps each particle (+1)
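
  # The lines below are a usage sketch rather than verbatim SYNOPSIS text:
  # pso_get_solution_array() is listed in the Changes above, while
  # pso_set_params(), pso_register_fitness_function() and pso_optimize()
  # are assumed entry points and may differ from the module's actual API.

  # hypothetical fitness callback: maps a candidate position to [0, 1]
  sub my_fitness {
      my (@position) = @_;
      my $sum = 0;
      $sum += $_ ** 2 for @position;
      return 1 / (1 + $sum);       # approaches 1.0 as all parameters approach 0
  }

  pso_set_params(\%params);                     # hand the configuration above to the module
  pso_register_fitness_function('my_fitness');  # register the callback by name
  pso_optimize();                               # run the swarm until exitFitness or maxIterations
  my @solution = pso_get_solution_array();      # best position found (per the Changes entry)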

lib/AI/PSO.pm

  not doing too well (has a low fitness), then it looks to its neighbors 
  for help and tries to be more like them while still maintaining a 
  sense of individuality.

  A particle is defined by its position and velocity.  The parameters a 
  user wants to optimize define the dimensionality of the problem 
  hyperspace.  So, if you want to optimize three variables, a particle 
  will be three dimensional and will have 3 values that define its 
  position and 3 values that define its velocity.  The position of a 
  particle determines how good it is by a user-defined fitness function.  
  The velocity of a particle determines how quickly it changes location.  
  Larger velocities provide more coverage of hyperspace at the cost of 
  solution precision.  With large velocities, a particle may come close 
  to a maximum but overshoot it because it is moving too quickly.  With 
  smaller velocities, particles can really home in on a local solution 
  and find the best position but they may be missing another, possibly 
  even more optimal, solution because a full search of the hyperspace 
  was not conducted.  Techniques such as simulated annealing can be 
  applied so that the closer a particle gets to a solution, the smaller 
  its velocity becomes: in bad areas of the hyperspace, the particles 
  move quickly, but in good areas, they spend some extra time looking 
  around.
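
  As a concrete (hypothetical) illustration of such a fitness function,
  a three-variable problem could be scored by how close a position is to
  some known target point, mapped into the [0, 1] range so that 1.0
  means the particle sits exactly on the target:

    my @target = (1.0, -2.0, 0.5);        # hypothetical optimum to search for
    sub distance_fitness {
        my (@position) = @_;
        my $dist2 = 0;
        $dist2 += ($position[$_] - $target[$_]) ** 2 for 0 .. $#target;
        return 1 / (1 + sqrt($dist2));    # closer to the target => higher fitness
    }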

  In general, particles fly around the problem hyperspace looking for 
  local/global maxima.  At each position, a particle computes its 
  fitness.  If it does not meet the exit criteria then it gets 
  information from neighboring particles about how well they are doing.  
  If a neighboring particle is doing better, then the current particle 
  tries to move closer to its neighbor by adjusting its position.  As 
  mentioned, the velocity controls how quickly a particle changes 
  location in the problem hyperspace.  There are also some stochastic 
  weights involved in the positional updates so that each particle is 
  truly independent and can take its own search path while still 
  incorporating good information from other particles.  In this 
  particular Perl module, the user is able to choose from two 
  implementations of the algorithm.  One is the original implementation 
  from I<Swarm Intelligence> which requires the definition of a 
  'random range' to which the two stochastic weights are required to 
  sum.  The other implementation allows the user to define the weighting
  of how much a particle follows its own path versus following its 


