AI-Pathfinding-SMAstar
lib/AI/Pathfinding/SMAstar.pm view on Meta::CPAN
    };
    return bless $self, $class;
}

sub state_eval_func {
    my $self = shift;
    if (@_) { $self->{_state_eval_func} = shift }
    return $self->{_state_eval_func};
}

sub state_goal_p_func {
    my $self = shift;
    if (@_) { $self->{_state_goal_p_func} = shift }
    return $self->{_state_goal_p_func};
}

sub state_num_successors_func {
    my $self = shift;
    if (@_) { $self->{_state_num_successors_func} = shift }
    return $self->{_state_num_successors_func};
}

sub state_successors_iterator {
    my $self = shift;
    if (@_) { $self->{_state_successors_iterator} = shift }
    return $self->{_state_successors_iterator};
}

sub state_get_data_func {
    my $self = shift;
    if (@_) { $self->{_state_get_data_func} = shift }
    return $self->{_state_get_data_func};
}

sub show_prog_func {
    my $self = shift;
    if (@_) { $self->{_show_prog_func} = shift }
    return $self->{_show_prog_func};
}
###################################################################
#
# Add a state from which to begin the search. There can
# be multiple start-states.
#
###################################################################
sub add_start_state
{
    my ($self, $state) = @_;

    my $state_eval_func           = $self->{_state_eval_func};
    my $state_goal_p_func         = $self->{_state_goal_p_func};
    my $state_num_successors_func = $self->{_state_num_successors_func};
    my $state_successors_iterator = $self->{_state_successors_iterator};
    my $state_get_data_func       = $self->{_state_get_data_func};

    # make sure required functions have been defined
    if (!defined($state_eval_func)) {
        croak "SMAstar: evaluation function is not defined\n";
    }
    if (!defined($state_goal_p_func)) {
        croak "SMAstar: goal function is not defined\n";
    }
    if (!defined($state_num_successors_func)) {
        croak "SMAstar: num successors function is not defined\n";
    }
    if (!defined($state_successors_iterator)) {
        croak "SMAstar: successor iterator is not defined\n";
    }

    # create a path object from this state
    my $state_obj = AI::Pathfinding::SMAstar::Path->new(
        _state               => $state,
        _eval_func           => $state_eval_func,
        _goal_p_func         => $state_goal_p_func,
        _num_successors_func => $state_num_successors_func,
        _successors_iterator => $state_successors_iterator,
        _get_data_func       => $state_get_data_func,
    );

    # check that the f-cost of this node is numeric
    my $fcost = AI::Pathfinding::SMAstar::Path::fcost($state_obj);
    unless (Scalar::Util::looks_like_number($fcost)) {
        croak "Error: f-cost of state is not numeric. Cannot add state to queue.\n";
    }
    $state_obj->f_cost($fcost);

    # check that the num_successors function returns a number
    my $num_successors = $state_obj->get_num_successors();
    unless (Scalar::Util::looks_like_number($num_successors)) {
        croak "Error: Number of state successors is not numeric. Cannot add state to queue.\n";
    }

    # exercise the iterator function to make sure it returns
    # an object of the correct type
    my $classname = ref($state);
    my $test_successor_iterator = $state_obj->{_successors_iterator}->($state);
    my $test_successor = $test_successor_iterator->($state);
    my $succ_classname = ref($test_successor);
    unless ($succ_classname eq $classname) {
        croak "Error: Successor iterator method of object $classname does " .
              "not return an object of type $classname.\n";
    }

    # add this node to the queue
    $self->{_priority_queue}->insert($state_obj);
}
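The validation in C<add_start_state()> hinges on C<Scalar::Util::looks_like_number>, which accepts anything Perl would treat as a number. The same checks can be sketched in isolation (the C<validate_start_state> helper and its messages are illustrative, not the module's API):

```perl
use strict;
use warnings;
use Scalar::Util qw(looks_like_number);

# Sketch of the numeric sanity checks add_start_state() performs before
# a state is inserted into the priority queue.
sub validate_start_state {
    my ($fcost, $num_successors) = @_;
    die "f-cost of state is not numeric\n"
        unless looks_like_number($fcost);
    die "number of state successors is not numeric\n"
        unless looks_like_number($num_successors);
    return 1;
}

print validate_start_state(3.5, 2) ? "ok\n" : "not ok\n";   # prints "ok"
```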
###################################################################
#
# start the SMAstar search process
#
###################################################################
NOTE: Monotonicity is ensured in this implementation of SMA*, so even if your
function is not monotonic (which is possible, even given an admissible
heuristic), SMA* will assign the antecedent node's cost to a successor if
that successor's I<g+h> amounts to less than the antecedent's f-cost.
=item *
B<State goal function> (C<_state_goal_p_func> above)
Goal predicate function. This function must return 1 if the object argument is a
goal node, or 0 otherwise.
=item *
B<State number of successors function> (C<_state_num_successors_func> above)
This function must return the number of successors of the argument object/node,
i.e. all nodes that are reachable from this node via a single operation.
=item *
B<State successors iterator> (C<_state_successors_iterator> above)
This function must return a I<handle to a function> that produces the next
successor of the argument object, i.e. it must return an iterator function that
produces the successors of this node I<one> at a time. This is necessary
to maintain the memory-bounded constraint of SMA* search.
=item *
B<State get-data function> (C<_state_get_data_func> above)
This function returns a string representation of this node.
=item *
B<State show-progress function> (C<_show_prog_func> above)
This is a callback function for displaying the progress of the search.
It can be an empty callback if you do not need this output.
=item *
B<log string function> (C<log_function> above)
This function returns an arbitrary string used for logging. Its output
also gets passed to the show-progress function above.
=item *
B<str_function> (C<str_function> above)
This function returns a I<unique> string representation of this node.
Uniqueness is required for SMA* to work properly.
=item *
B<max states allowed in memory> (C<max_states_in_queue> above)
An integer indicating the maximum number of expanded nodes to hold in
memory at any given time.
=item *
B<maximum cost> (C<MAX_COST> above)
An integer indicating the maximum cost, beyond which nodes will not
be expanded.
=back
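The successors-iterator contract described above can be sketched with a plain Perl closure. Here a node is assumed to be a hashref holding its successors in an array (an assumption for this example only; the module imposes no particular node layout, and it passes the state back in as an argument, which the closure is free to ignore):

```perl
use strict;
use warnings;

# Illustrative successors-iterator factory: given a node, return a closure
# that yields one successor per call, then undef when exhausted. Producing
# successors lazily, one at a time, is what keeps SMA* memory-bounded.
sub make_successors_iterator {
    my ($node) = @_;
    my $i = 0;
    return sub {
        my @succ = @{ $node->{successors} };
        return $i < @succ ? $succ[$i++] : undef;
    };
}

my $node = { name => 'a', successors => [ { name => 'b' }, { name => 'c' } ] };
my $iter = make_successors_iterator($node);
while (defined(my $s = $iter->())) {
    print "successor: $s->{name}\n";        # prints b, then c
}
```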
=head1 DESCRIPTION
=head2 Overview
Simplified Memory-bounded A* search (or SMA* search) addresses some of the
limitations of conventional A* search, by bounding the amount of space required
to perform a shortest-path search. This module is an implementation of
SMA*, which was first introduced by Stuart Russell in 1992. SMA* is a simpler,
more efficient variation of the original MA* search introduced by P. Chakrabarti
et al. in 1989 (see references below).
=head2 Motivation and Comparison to A* Search
=head3 A* search
A* Search is an I<optimal> and I<complete> algorithm for computing a sequence of
operations leading from a system's start-state (node) to a specified goal.
In this context, I<optimal> means that A* search will return the shortest
(or cheapest) possible sequence of operations (path) leading to the goal,
and I<complete> means that A* will always find a path to
the goal if such a path exists.
In general, A* search works using a calculated cost function on each node along a
path, in addition to an I<admissible> heuristic estimating the distance from
that node to the goal. The cost is calculated as:
I<f(n) = g(n) + h(n)>
Where:
=over
=item *
I<n> is a state (node) along a path
=item *
I<g(n)> is the total cost of the path leading up to I<n>
=item *
I<h(n)> is the heuristic function, or estimated cost of the path from I<n>
to the goal node.
=back
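The cost function I<f(n) = g(n) + h(n)> is straightforward to compute once a heuristic is chosen. A minimal sketch using Manhattan distance, which is admissible on a 4-connected grid (the grid setting and hashref node layout are assumptions for this example, not part of this module):

```perl
use strict;
use warnings;

# h(n): Manhattan distance never over-estimates moves on a 4-connected
# grid, so it is an admissible heuristic there.
sub manhattan {
    my ($n, $goal) = @_;
    return abs($n->{x} - $goal->{x}) + abs($n->{y} - $goal->{y});
}

# f(n) = g(n) + h(n): cost of the path so far plus estimated cost to goal.
sub fcost {
    my ($node, $goal) = @_;
    return $node->{g} + manhattan($node, $goal);
}

my $goal = { x => 4, y => 0 };
my $node = { x => 1, y => 2, g => 3 };        # 3 steps taken so far
printf "f(n) = %d\n", fcost($node, $goal);    # g=3, h=|4-1|+|0-2|=5, so f=8
```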
For a given admissible heuristic function, it can be shown that A* search
is I<optimally efficient>, meaning that, in its calculation of the shortest
path, it expands fewer nodes in the search space than any other algorithm.
To be admissible, the heuristic I<h(n)> can never over-estimate the distance
from the node to the goal. Note that if the heuristic I<h(n)> is set to
zero, A* search reduces to I<Branch and Bound> search. If the cost-so-far
I<g(n)> is set to zero, A* reduces to I<Greedy Best-first> search (which is
neither complete nor optimal). If both I<g(n)> and I<h(n)> are set to zero,
the search becomes I<Breadth-first>, which is complete and optimal, but not
optimally efficient.
The space complexity of A* search is exponential: it is bounded by the
branching factor of the search-space raised to the length of the longest
path examined.