Changes
Makefile.PL
MANIFEST
README
t/AI-Pathfinding-SMAstar.t
t/test8.lst
lib/AI/Pathfinding/SMAstar.pm
lib/AI/Pathfinding/SMAstar/Path.pm
lib/AI/Pathfinding/SMAstar/AVLQueue.pm
lib/AI/Pathfinding/SMAstar/PairObj.pm
lib/AI/Pathfinding/SMAstar/PriorityQueue.pm
lib/AI/Pathfinding/SMAstar/TreeOfQueues.pm
lib/AI/Pathfinding/SMAstar/Examples/PalUtils.pm
lib/AI/Pathfinding/SMAstar/Examples/WordObj.pm
lib/AI/Pathfinding/SMAstar/Examples/Phrase.pm
META.yml Module meta-data (added by MakeMaker)
# perform a path search in a solution-space populated by
# FrontierObj objects.
#
##################################################################
my $smastar = AI::Pathfinding::SMAstar->new(
# evaluates f(n) = g(n) + h(n), returns a number
_state_eval_func => \&FrontierObj::evaluate,
# when called on a node, returns 1 if it is a goal
_state_goal_p_func => \&FrontierObj::goal_test,
# must return the number of successors of a node
_state_num_successors_func => \&FrontierObj::get_num_successors,
# must return *one* successor at a time
_state_successors_iterator => \&FrontierObj::get_successors_iterator,
# can be any suitable string representation
    _state_get_data_func => \&FrontierObj::string_representation,
    );

my $frontierGoalObj = $smastar->start_search(
\&log_function, # returns a string used for logging progress
\&str_function, # returns a string used to *uniquely* identify a node
    $max_states_in_queue, # the maximum number of states allowed in memory
    $MAX_COST,            # the maximum cost allowed in the search
);
Explanation
In the example above, a hypothetical object, FrontierObj, is used to
represent a node in your search space. To use SMA* search to find a shortest
path from a starting node to a goal, you must first define what a node (or
point, or state) is in your search space.
A common example used for informed search methods, and one that is
used in Russell's original paper, is an N-puzzle, such as an 8-puzzle or
15-puzzle. If trying to solve such a puzzle, a node in the search space
could be defined as a particular configuration of that puzzle. In the
/t directory of this module's distribution, SMA* is applied to the problem
of finding the shortest palindrome containing at least a specified minimum
number of letters, over a given lexicon of words.
Once you have a definition and representation of a node in your search space, SMA*
search requires the following functions to work:
** State evaluation function (_state_eval_func above)
This function must return the cost of this node in the search space. In all
forms of A* search, this means the cost paid to arrive at this node along a path,
plus the estimated cost of going from this node to a goal state. This
function is called as each node is expanded.
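As an illustrative sketch, a state evaluation function simply sums the cost paid so far and the heuristic estimate. The accessor names `g` and `h` below are hypothetical, not part of this module's API:

```perl
package FrontierObj;
use strict;
use warnings;

# Hypothetical node: cost accumulated so far (g) and a heuristic
# estimate of the remaining cost to the goal (h).
sub new {
    my ($class, %args) = @_;
    return bless { g => $args{g} // 0, h => $args{h} // 0 }, $class;
}

# f(n) = g(n) + h(n): cost paid to reach this node along the current
# path, plus the estimated cost from this node to a goal state.
sub evaluate {
    my ($self) = @_;
    return $self->{g} + $self->{h};
}

package main;

my $node = FrontierObj->new(g => 3, h => 4);
print $node->evaluate(), "\n";   # prints 7
```

A real evaluation function would compute `g` and `h` from the node's actual state rather than storing them directly.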
DESCRIPTION
Overview
Memory-bounded A* search (or SMA* search) addresses some of the limitations of
conventional A* search, by bounding the amount of space required to perform a
shortest-path search. This module is an implementation of SMA*, which was first
introduced by Stuart Russell in 1992. SMA* is a more efficient variation of the
original MA* search introduced by Chakrabarti et al. in 1989. See references below.
Motivation and Comparison to A* Search
A* search
A* Search is an optimal and complete algorithm for computing a sequence of
operations leading from a system's start-state (node) to a specified goal. In
this context, optimal means that A* search will return the shortest possible
sequence of operations (path) leading to the goal, and complete means that A* will
always find a path to the goal if such a path exists.
In general, A* search works using a calculated cost function on each node
along a path, in addition to an admissible heuristic estimating the distance
from that node to the goal. The cost is calculated as:
f(n) = g(n) + h(n)
Where:
* g(n) is the actual cost of the path from the start node to n.
* h(n) is the heuristic function, or estimated cost of the path from n
to the goal node.
To be admissible, the heuristic must never over-estimate the distance
from the node to the goal. If the heuristic is set to zero, A* search reduces
to Branch and Bound search.
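A concrete admissible heuristic for the 8-puzzle mentioned above is the Manhattan distance: for each tile, the number of horizontal plus vertical cells between its current position and its goal position. Since each move slides exactly one tile one cell, this sum never over-estimates the number of moves remaining. A minimal sketch (the board representation is an assumption for illustration):

```perl
use strict;
use warnings;

# Manhattan-distance heuristic for the 8-puzzle.
# $board is an arrayref of 9 cells read row by row; 0 is the blank.
# The goal configuration is assumed to be 1..8 followed by the blank.
sub manhattan_h {
    my ($board) = @_;
    my $h = 0;
    for my $i (0 .. 8) {
        my $tile = $board->[$i];
        next if $tile == 0;       # the blank contributes nothing
        my $goal = $tile - 1;     # goal index of tile t is t-1
        $h += abs(int($i / 3) - int($goal / 3))   # row distance
            + abs($i % 3 - $goal % 3);            # column distance
    }
    return $h;
}

# One slide away from the goal configuration:
print manhattan_h([1,2,3,4,5,6,7,0,8]), "\n";   # prints 1
```

The true solution length for the board above is also 1 (slide tile 8 left), so the heuristic is exact here; in general it only promises to be a lower bound.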
For a given heuristic function, it can be shown that A* search is optimally
efficient, meaning that, in its calculation of the shortest path, it expands
fewer nodes in the search space than any other algorithm.
The space complexity of A* search is bounded by an exponential of the
branching factor of the search-space and the length of the longest path
examined during the search. This can be a problem, particularly if the
branching factor is large, as the algorithm may run out of memory.
SMA* Search
The memory bound can make SMA* search significantly slower than a
theoretical unbounded-memory search, due to
the extra bookkeeping it must do, and because nodes may need to be re-expanded
(the overall number of node expansions may increase).
It can be shown that of the memory-bounded variations of A* search, such as
MA*, IDA*, Iterative Expansion, etc., SMA* search expands the fewest nodes
on average. However, for certain classes of problems, guaranteeing optimality
can be costly. This is particularly true in solution spaces where:
* the branching factor of the search space is large
* there are multiple equivalent optimal solutions (or shortest paths)
For solution spaces with these characteristics, stochastic methods or
approximation algorithms such as Simulated Annealing can provide a massive
reduction in time and space requirements, while introducing a tunable
probability of producing a sub-optimal solution.
METHODS
AUTHOR
Matthias Beebe, <mbeebe@cpan.org>
INSTALLATION
To install this module type the following:
perl Makefile.PL
make
make test
make install
DEPENDENCIES
This module requires these other modules and libraries:
Tree::AVL
Test::More
COPYRIGHT AND LICENCE
lib/AI/Pathfinding/SMAstar.pm
croak "Error: f-cost of state is not numeric. Cannot add state to queue.\n";
}
$state_obj->f_cost($fcost);
# check if the num_successors function returns a number
my $num_successors = $state_obj->get_num_successors();
unless(Scalar::Util::looks_like_number($num_successors)){
croak "Error: Number of state successors is not numeric. Cannot add state to queue.\n";
}
# test out the iterator function to make sure it returns
# an object of the correct type
my $classname = ref($state);
my $test_successor_iterator = $state_obj->{_successors_iterator}->($state);
my $test_successor = $test_successor_iterator->($state);
my $succ_classname = ref($test_successor);
unless($succ_classname eq $classname){
croak "Error: Successor iterator method of object $classname does " .
"not return an object of type $classname.\n";
}
# add this node to the queue
$self->{_priority_queue}->insert($state_obj);
lib/AI/Pathfinding/SMAstar.pm
# Use the SMAstar module in
# order to perform a path-search in a solution space populated by
# FrontierObj objects.
#
##################################################################
my $smastar = AI::Pathfinding::SMAstar->new(
# evaluates f(n) = g(n) + h(n), returns a number
_state_eval_func => \&FrontierObj::evaluate,
# when called on a node, returns 1 if it is a goal
_state_goal_p_func => \&FrontierObj::goal_test,
# must return the number of successors of a node
_state_num_successors_func => \&FrontierObj::get_num_successors,
# must return *one* successor at a time
_state_successors_iterator => \&FrontierObj::get_successors_iterator,
# can be any suitable string representation
_state_get_data_func => \&FrontierObj::string_representation,
    );

lib/AI/Pathfinding/SMAstar.pm

my $frontierGoalObj = $smastar->start_search(
\&log_function, # returns a string used for logging progress
\&str_function, # returns a string used to *uniquely* identify a node
$max_states_in_queue, # indicate the maximum states allowed in memory
$MAX_COST, # indicate the maximum cost allowed in search
);
In the example above, a hypothetical object, C<FrontierObj>, is used to
represent a state, or I<node> in your search space. To use SMA* search to
find a shortest path from a starting node to a goal, you must first define
what a I<node> (or I<point>, or I<state>) is in your search space.
A common example used for informed search methods, and one that is used in Russell's
original paper, is optimal puzzle solving, such as solving an 8 or 15-tile puzzle
in the least number of moves. If trying to solve such a puzzle, a I<node> in the
search space could be defined as a configuration of that puzzle (a particular
ordering of the tiles).
There is an example provided in the /t directory of this module's distribution,
where SMA* is applied to the problem of finding the shortest palindrome that
contains a minimum number of letters specified, over a given list of words.
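The two properties the palindrome example cares about can be sketched with small helpers. These are illustrative only; the distribution's PalUtils module has its own versions:

```perl
use strict;
use warnings;

# A phrase is a palindrome if it reads the same forwards and backwards,
# ignoring spaces and punctuation.
sub is_palindrome {
    my ($phrase) = @_;
    (my $s = lc $phrase) =~ s/[^a-z]//g;
    return $s eq reverse($s) ? 1 : 0;
}

# Number of distinct letters used -- the quantity the example search
# pushes above a given minimum.
sub num_distinct_letters {
    my ($phrase) = @_;
    my %seen = map { $_ => 1 } grep { /[a-z]/ } split //, lc $phrase;
    return scalar keys %seen;
}

print is_palindrome("a man a plan a canal panama"), "\n";  # prints 1
print num_distinct_letters("panama"), "\n";                # prints 4
```

In the example search, each node is a partial phrase, and a goal node is a palindrome whose distinct-letter count meets the minimum.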
Once you have a definition and representation of a node in your search space, SMA*
search requires the following functions to work:
=over
=item *
lib/AI/Pathfinding/SMAstar.pm
=head1 DESCRIPTION
=head2 Overview
Simplified Memory-bounded A* search (or SMA* search) addresses some of the
limitations of conventional A* search, by bounding the amount of space required
to perform a shortest-path search. This module is an implementation of
SMA*, which was first introduced by Stuart Russell in 1992. SMA* is a simpler,
more efficient variation of the original MA* search introduced by P. Chakrabarti
et al. in 1989 (see references below).
=head2 Motivation and Comparison to A* Search
=head3 A* search
A* Search is an I<optimal> and I<complete> algorithm for computing a sequence of
operations leading from a system's start-state (node) to a specified goal.
In this context, I<optimal> means that A* search will return the shortest
(or cheapest) possible sequence of operations (path) leading to the goal,
and I<complete> means that A* will always find a path to
the goal if such a path exists.
In general, A* search works using a calculated cost function on each node along a
path, in addition to an I<admissible> heuristic estimating the distance from
that node to the goal. The cost is calculated as:
I<f(n) = g(n) + h(n)>
lib/AI/Pathfinding/SMAstar.pm
=item *
I<h(n)> is the heuristic function, or estimated cost of the path from I<n>
to the goal node.
=back
For a given admissible heuristic function, it can be shown that A* search
is I<optimally efficient>, meaning that, in its calculation of the shortest
path, it expands fewer nodes in the search space than any other algorithm.
To be admissible, the heuristic I<h(n)> must never over-estimate the distance
from the node to the goal. Note that if the heuristic I<h(n)> is set to
zero, A* search reduces to I<Branch and Bound> search. If the cost-so-far
I<g(n)> is set to zero, A* reduces to I<Greedy Best-first> search (which is
neither complete nor optimal). If both I<g(n)> and I<h(n)> are set to zero,
the search becomes I<Breadth-first>, which is complete and optimal, but not
optimally efficient.
The space complexity of A* search is bounded by an exponential of the
branching factor of the search-space and the length of the longest path
examined during the search. This can be a problem, particularly if the
branching factor is large, because the algorithm may run out of memory.
=head3 SMA* Search
Like A* search, SMA* search is an optimal and complete algorithm for finding
a least-cost path. Unlike A*, SMA* will not run out of memory, I<unless the size
of the shortest path exceeds the amount of space in available memory>.
SMA* addresses the possibility of running out of memory
by pruning the portion of the search-space that is being examined. It relies on
the I<pathmax>, or I<monotonicity> constraint on I<f(n)> to remove the shallowest
of the highest-cost nodes from the search queue when there is no memory left to
expand new nodes. It records the best costs of the pruned nodes within their
antecedent nodes to ensure that crucial information about the search space is
not lost. To facilitate this mechanism, the search queue is best maintained
as a search-tree of search-trees ordered by cost and depth, respectively.
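The pruning step can be sketched roughly as follows. This is a simplified illustration using a flat list instead of the module's tree-of-queues, and all names are hypothetical:

```perl
use strict;
use warnings;

# Prune the shallowest of the highest-cost nodes, backing its f-cost up
# into its parent so information about the forgotten subtree survives.
# Each node is a hashref: { f => cost, depth => d, parent => ref }.
sub prune_worst {
    my (@queue) = @_;
    # highest f-cost first; among equal costs, shallowest first
    my ($worst) = sort {
        $b->{f} <=> $a->{f} || $a->{depth} <=> $b->{depth}
    } @queue;
    if (my $parent = $worst->{parent}) {
        # remember the best cost reachable through the pruned child
        $parent->{best_forgotten} = $worst->{f}
            if !defined $parent->{best_forgotten}
               || $worst->{f} < $parent->{best_forgotten};
    }
    return grep { $_ != $worst } @queue;
}

my $root = { f => 1, depth => 0 };
my @q = ($root,
         { f => 9, depth => 2, parent => $root },
         { f => 9, depth => 1, parent => $root },
         { f => 5, depth => 1, parent => $root });
@q = prune_worst(@q);
print scalar(@q), " ", $root->{best_forgotten}, "\n";   # prints "3 9"
```

The real module orders nodes by cost and depth in AVL-tree-backed queues precisely so this "shallowest of the highest-cost" node can be found without a full scan.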
lib/AI/Pathfinding/SMAstar.pm
=over
=item *
the branching factor of the search space is large
=item *
there are many equivalent optimal solutions (or shortest paths)
=back
For solution spaces with these characteristics, stochastic methods or
approximation algorithms such as I<Simulated Annealing> can provide a
massive reduction in time and space requirements, while introducing a
tunable probability of producing a sub-optimal solution.
lib/AI/Pathfinding/SMAstar.pm
=head2 state_eval_func()
$smastar->state_eval_func(\&FrontierObj::evaluate);
Set or get the handle to the function that returns the cost of the object
argument (node) in the search space.
=head2 state_goal_p_func()
$smastar->state_goal_p_func(\&FrontierObj::goal_test);
Set/get the handle to the goal predicate function. This is a function
that returns 1 if the argument object is a goal node, or 0 otherwise.
=head2 state_num_successors_func()
$smastar->state_num_successors_func(\&FrontierObj::get_num_successors);
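The companion successor iterator is expected to yield one successor per call and signal exhaustion by returning undef. A minimal closure-based sketch of that contract (names illustrative, not the module's own code):

```perl
use strict;
use warnings;

# Returns a closure that yields one successor per call, then undef.
# Here the "successors" are precomputed values; in real use they would
# be FrontierObj nodes generated lazily from the parent node.
sub make_successors_iterator {
    my (@successors) = @_;
    my $i = 0;
    return sub {
        return undef if $i >= @successors;
        return $successors[$i++];
    };
}

my $iter = make_successors_iterator('b', 'c');
while (defined(my $succ = $iter->())) {
    print "$succ\n";              # prints "b" then "c"
}
```

Generating successors lazily, one at a time, is what lets SMA* stay within its memory bound: it never needs all children of a node in memory at once.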
lib/AI/Pathfinding/SMAstar/Examples/Phrase.pm
#print "\n\nrepeated_pal_hash_key: $repeated_pal_hash_key\n";
if(my $hash_val = $repeated_pal_hash_ref->{$repeated_pal_hash_key}){
# skip because '$word' <--> '$p' pattern has already appeared in a previous palindrome.
if($hash_val != $depth){
goto LABEL1;
# next; # skip
}
}
else{
#flag this candidate as already having been tested (below).
$repeated_pal_hash_ref->{$repeated_pal_hash_key} = $depth;
}
}
#--------------------------------------------------------------------------
#--------------------------------------------------------------------------
my $len_c = length($c);
my $rev_c = reverse($c);
my $word_remainder;
lib/AI/Pathfinding/SMAstar/Examples/Phrase.pm
$repeated_pal_hash_key = $word . "^" . $c . "^" . $letters_seen_str;
#print "\n\nrepeated_pal_hash_key: $repeated_pal_hash_key\n";
if(my $hash_val = $repeated_pal_hash_ref->{$repeated_pal_hash_key}){
# skip because '$word' <--> '$p' pattern has already appeared in a previous palindrome.
if($hash_val != $depth){
next; #skip
}
}
else{
#flag this candidate as already having been tested (below).
$repeated_pal_hash_ref->{$repeated_pal_hash_key} = $depth;
}
}
#--------------------------------------------------------------------------
#--------------------------------------------------------------------------
my $len_c = length($c);
my $rev_c = reverse($c);
my $word_remainder;
lib/AI/Pathfinding/SMAstar/Examples/Phrase.pm
#print "\n\nrepeated_pal_hash_key: $repeated_pal_hash_key\n";
if(my $hash_val = $repeated_pal_hash_ref->{$repeated_pal_hash_key}){
# skip because '$word' <--> '$p' pattern has already appeared in a previous palindrome.
if($hash_val != $depth){
goto LABEL;
# next; #skip
}
}
else{
#flag this candidate as already having been tested (below).
$repeated_pal_hash_ref->{$repeated_pal_hash_key} = $depth;
}
}
#--------------------------------------------------------------------------
#--------------------------------------------------------------------------
my $len_c = length($c);
my $rev_c = reverse($c);
my $word_remainder;
lib/AI/Pathfinding/SMAstar/PriorityQueue.pm
# depth in the following loop
my $queue_at_largest_depth = $$avltree->largest();
$least_depth = $queue_at_largest_depth->key();
$depth_keys_iterator = $$avltree->get_keys_iterator();
# get lowest non-zero key of tree (smallest non-zero depth)
while (defined(my $key = $depth_keys_iterator->())){
#########################################################################
#
# Does this need to be a non-zero depth element? yes. (example: test68.lst)
#
#########################################################################
if($key != 0){
$least_depth = $key;
last;
}
}
# if no non-zero depths, find the next highest key and loop back
my $next_highest_cost_key;
t/AI-Pathfinding-SMAstar.t
#!/usr/bin/perl
# Before `make install' is performed this script should be runnable with
# `make test'. After `make install' it should work as `perl AI-Pathfinding-SMAstar.t'
#########################
# change 'tests => 1' to 'tests => last_test_to_print';
use Test::More tests => 9;
BEGIN { use_ok('AI::Pathfinding::SMAstar');
use_ok('Tree::AVL');
use_ok('AI::Pathfinding::SMAstar::Examples::PalUtils');
use_ok('AI::Pathfinding::SMAstar::Examples::WordObj');
use_ok('AI::Pathfinding::SMAstar::Examples::Phrase');
};
#########################
# Insert your test code below, the Test::More module is use()ed here so read
# its man page ( perldoc Test::More ) for help writing this test script.
my $dictionary_file;
my $min_letters;
my $caching;
my @words;
my @words_w_cands;
my @word_objs;
my $num_word_objs;
my @rev_word_objs;
my $num_words;
t/AI-Pathfinding-SMAstar.t
my %letter_freq;
my $max_word_length = 0;
my $MAX_COST = 99;
#my $collisions_per_length = PalUtils::collisions_per_length("ocid", "abo gad abalones rot abdicators enol aba dagoba");
#print "collisions: $collisions_per_length\n";
#exit;
$dictionary_file = 't/test8.lst';
$min_letters = 4;
$sparsity = 2;
$max_states_in_queue = 4;
diag("\ncreating AVL trees");
# create trees of WordObj objects, so that we can use
# WordObj::compare_up_to(), the 'relaxed' comparison function
my $avltree = Tree::AVL->new(
fcompare => \&AI::Pathfinding::SMAstar::Examples::WordObj::compare,