AI-PSO


lib/AI/PSO.pm

# The position of a particle in the problem hyperspace is defined by the values in the position array...
# You can think of each array value as being a dimension,
# so in N-dimensional hyperspace, the size of the position vector is N
# 
# A particle updates its position according to the Euler integration equation for physical motion:
#   Xi(t) = Xi(t-1) + Vi(t)
#   The velocity term contains the stochastic elements of PSO and is defined as:
#   Vi(t) = Vi(t-1)  +  P1*[pi - Xi(t-1)]  +  P2*[pg - Xi(t-1)]
#   where P1 and P2 are two random values whose sum equals the PSO random range (4.0),
#   pi is the individual particle's best position,
#   and pg is the global (or neighborhood's) best position
#
#   The velocity vector is obviously updated before the position vector...
#
#
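#
# Illustrative sketch only (not part of the original module): the per-dimension
# update described above, written out in Perl.  It assumes positions, velocities,
# and best positions are array references, and that $w1 and $w2 are the two
# random weights whose sum equals the PSO random range.
#
sub _pso_update_sketch {
    my ($x, $v, $pBest, $gBest, $w1, $w2) = @_;
    for my $d (0 .. $#{$x}) {
        # Vi(t) = Vi(t-1) + P1*[pi - Xi(t-1)] + P2*[pg - Xi(t-1)]
        $v->[$d] = $v->[$d]
                 + $w1 * ($pBest->[$d] - $x->[$d])   # pull toward this particle's best
                 + $w2 * ($gBest->[$d] - $x->[$d]);  # pull toward the neighborhood's best
        # Xi(t) = Xi(t-1) + Vi(t)
        $x->[$d] += $v->[$d];
    }
}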
my @particles = ();
my $user_fitness_function;
my @solution = ();
#----------   END GLOBAL DATA STRUCTURES --------


#---------- BEGIN EXPORTED SUBROUTINES ----------

#
# pso_set_params
#  - sets the global module parameters from the hash passed in
#
sub pso_set_params(%) {
    my (%params) = %{$_[0]};
    my $retval = 0;

    #no strict 'refs';
    #foreach my $key (keys(%params)) {
    #    $$key = $params{$key};
    #}
    #use strict 'refs';

lib/AI/PSO.pm



#----------  END  EXPORTED SUBROUTINES ----------
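#
# Illustrative example only (not part of the original module): based on the
# signature of pso_set_params() above, parameters are passed as a hash
# reference.  The key names below are taken from parameters mentioned in this
# file's documentation (numParticles, numNeighbors, deltaMin, deltaMax,
# psoRandomRange); the values are made up for the example.
#
sub _example_configure_sketch {
    my %params = (
        numParticles   => 8,      # size of the swarm
        numNeighbors   => 3,      # neighborhood size for each particle
        deltaMin       => -1.0,   # see "Search space coverage" below
        deltaMax       =>  1.0,
        psoRandomRange => 4.0,    # sum of the two stochastic weights
    );
    return pso_set_params(\%params);
}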



#--------- BEGIN INTERNAL SUBROUTINES -----------

#
# init
#   - initializes global variables
#   - initializes particle data structures
#
sub init() {
	if($psoRandomRange =~ m/null/) {
		$useModifiedAlgorithm = 1;
	} else {
		$useModifiedAlgorithm = 0;
	}
	&initialize_particles();
}

lib/AI/PSO.pm

	}
	
    }
    &save_solution(@{$particles[$bestPartIndex]{bestPos}});
    &dump_particle($bestPartIndex);
    return 1;
}

#
# save solution
#   - simply copies the given array into the global solution array
#
sub save_solution(@) {
	@solution = @_;
}


#
# compute_fitness
# - computes the fitness of a particle by using the user-specified fitness function
# 
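#
# Illustrative example only (not part of the original module): a
# user-supplied fitness function receives a particle's position values
# (one per dimension) and returns a single fitness value, where higher is
# better.  The name and body below are made up for the example.
#
sub _example_fitness_sketch {
    my (@position) = @_;              # one coordinate per dimension
    my $sum_of_squares = 0;
    $sum_of_squares += $_ * $_ for @position;
    return -$sum_of_squares;          # best fitness at the origin
}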

lib/AI/PSO.pm

    know what you are doing.

=item 2. Search space coverage

    If you have a large search space, increasing deltaMin and deltaMax 
    can help cover more area.  Conversely, if you have a small search 
    space, then decreasing them will fine-tune the search.

=item 3. Swarm Topology

    I've personally found that using a global (fully connected) topology 
    where each particle is neighbors with all other particles 
    (numNeighbors == numParticles - 1) converges more quickly.  However, 
    this will drastically increase the number of calls to your fitness 
    function.  So, if your fitness function is the bottleneck, then you 
    should tune this value for the appropriate time/accuracy trade-off.  
    Also, I highly suggest you implement a simple fitness cache so you 
    don't end up recomputing fitness values (see the sketch below).  
    This can easily be done with a Perl hash keyed on the string 
    concatenation of the array values passed to your fitness function.  
    Note that these are floating point values, so determine how 
    significant the values are 
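
    A minimal cache sketch (illustrative only; the rounding precision 
    and the expensive_fitness() helper are made up for the example):

        my %fitness_cache;
        sub cached_fitness {
            my (@values) = @_;
            # round to a fixed precision so nearly-identical positions share a key
            my $key = join(',', map { sprintf("%.6f", $_) } @values);
            unless (exists $fitness_cache{$key}) {
                $fitness_cache{$key} = expensive_fitness(@values);
            }
            return $fitness_cache{$key};
        }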

lib/AI/PSO.pm

  smaller velocities, particles can really hone in on a local solution 
  and find the best position but they may be missing another, possibly 
  even more optimal, solution because a full search of the hyperspace 
  was not conducted.  Techniques such as simulated annealing can be 
  applied in certain areas so that the closer a particle gets to a 
  solution, the smaller its velocity will be so that in bad areas of 
  the hyperspace, the particles move quickly, but in good areas, they 
  spend some extra time looking around.

  In general, particles fly around the problem hyperspace looking for 
  local/global maxima.  At each position, a particle computes its 
  fitness.  If it does not meet the exit criteria then it gets 
  information from neighboring particles about how well they are doing.  
  If a neighboring particle is doing better, then the current particle 
  tries to move closer to its neighbor by adjusting its position.  As 
  mentioned, the velocity controls how quickly a particle changes 
  location in the problem hyperspace.  There are also some stochastic 
  weights involved in the positional updates so that each particle is 
  truly independent and can take its own search path while still 
  incorporating good information from other particles.  In this 
  particular Perl module, the user is able to choose from two 
  implementations of the algorithm.  One is the original implementation 
  from I<Swarm Intelligence> which requires the definition of a 
  'random range' to which the two stochastic weights are required to 
  sum.  The other implementation allows the user to define the weighting
  of how much a particle follows its own path versus following its 
  peers.  In both cases there is an element of randomness.
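
  As a rough illustration, the choice between the two implementations 
  comes down to which weighting parameters you supply.  The 
  meWeight/themWeight key names below are assumptions based on the 
  module's synopsis, not taken from the text above; psoRandomRange is 
  the parameter checked in init() earlier in this file:

      # original algorithm: the two stochastic weights are drawn so
      # that they sum to this configured range
      my %original_style = ( psoRandomRange => 4.0 );

      # modified algorithm (selected when psoRandomRange is 'null',
      # per the init() check above): explicit weighting of following
      # your own best position versus your neighbors' best position
      my %modified_style = (
          psoRandomRange => 'null',
          meWeight       => 2.0,
          themWeight     => 2.0,
      );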

  Solution convergence is quite fast once one particle becomes close to 
  a local maximum.  Having more particles active means there is more of 
  a chance that you will not get stuck in a local maximum.  Oftentimes, 
  different neighborhoods (when not configured in a global neighborhood 
  fashion) will converge to different maxima.  It is quite interesting 
  to watch graphically.  If the fitness function is expensive to 
  compute, then it is often useful to start out with a small number of
  particles first and get a feel for how the algorithm converges.

  The algorithm implemented in this module is taken from the book 
  I<Swarm Intelligence> by Russell Eberhart and James Kennedy.  
  I highly suggest you read the book if you are interested in this 
  sort of thing.  


