AI-DecisionTree
                      OUTLOOK
                      /  |  \
                     /   |   \
               sunny/ overcast\rainy
                   /     |     \
             HUMIDITY    |     WIND
              /   \     *no*   /  \
             /     \          /    \
        high/       \normal  /      \
           /         \ strong/       \weak
        *no*       *yes*    /         \
                          *no*       *yes*
(This example, and the inspiration for the C<AI::DecisionTree> module,
come directly from Tom Mitchell's excellent book "Machine Learning",
available from McGraw Hill.)
A decision tree like this one can be learned from training data, and
then applied to previously unseen data to obtain results that are
consistent with the training data.
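For example, a tree like the one above might be built and queried
like this (the attribute values here are invented for illustration;
see L</"METHODS"> below for details on each call):

  use AI::DecisionTree;

  my $dtree = AI::DecisionTree->new;

  # Add training instances for the "play tennis" concept
  $dtree->add_instance
    (attributes => {outlook => 'sunny', humidity => 'high', wind => 'weak'},
     result => 'no');
  $dtree->add_instance
    (attributes => {outlook => 'overcast', humidity => 'high', wind => 'weak'},
     result => 'yes');
  # ... add the rest of the training data, then:
  $dtree->train;

  # Apply the tree to a previously unseen instance
  my $result = $dtree->get_result
    (attributes => {outlook => 'rainy', humidity => 'normal', wind => 'strong'});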
The usual goal of a decision tree is to somehow encapsulate the
training data in the smallest possible tree. This is motivated by an
"Occam's Razor" philosophy, in which the simplest possible explanation
for a set of phenomena should be preferred over other explanations.
Also, small trees will make decisions faster than large trees, and
they are much easier for a human to look at and understand. One of
the biggest reasons for using a decision tree instead of many other
machine learning techniques is that a decision tree is a much more
scrutable decision maker than, say, a neural network.
The current implementation of this module uses an extremely simple
method for creating the decision tree based on the training instances.
It uses an Information Gain metric (based on expected reduction in
entropy) to select the "most informative" attribute at each node in
the tree. This is essentially the ID3 algorithm, developed by
J. R. Quinlan in 1986. The idea is that the attribute with the
highest Information Gain will (probably) be the best attribute to
split the tree on at each point if we're interested in making small
trees.
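To make the metric concrete, here is a small sketch of entropy and
Information Gain in plain Perl. This is illustrative only, not the
module's internal code, and the subroutine names and instance layout
are invented for the example:

  use List::Util qw(sum);

  # Entropy of a list of result labels: -SUM( p_i * log2(p_i) )
  sub entropy {
      my %count;
      $count{$_}++ for @_;
      my $total = @_;
      return -sum map { my $p = $_ / $total; $p * log($p) / log(2) }
                  values %count;
  }

  # Information Gain of splitting on $attr: the entropy of the whole
  # instance set, minus the weighted entropy of each resulting branch.
  sub information_gain {
      my ($attr, @instances) = @_;  # each: {attributes=>{...}, result=>...}
      my %branch;
      push @{ $branch{ $_->{attributes}{$attr} } }, $_->{result}
          for @instances;
      my $before = entropy(map $_->{result}, @instances);
      my $after  = sum map { @$_ / @instances * entropy(@$_) }
                       values %branch;
      return $before - $after;
  }

The attribute with the highest gain becomes the test at the current
node, and the process repeats on each branch.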
=head1 METHODS
=head2 Building and Querying the Tree
=over 4
=item new(...parameters...)
Creates a new decision tree object and returns it. Accepts the
following parameters:
=over 4
=item noise_mode
Controls the behavior of the
C<train()> method when "noisy" data is encountered. Here "noisy"
means that two or more training instances contradict each other, such
that they have identical attributes but different results.
If C<noise_mode> is set to C<fatal> (the default), the C<train()>
method will throw an exception (die). If C<noise_mode> is set to
C<pick_best>, the most frequent result at each noisy node will be
selected.
=item prune
A boolean parameter specifying whether the tree should be pruned
after training. This is usually a
good idea, so the default is to prune. Currently we prune using a
simple minimum-description-length criterion.
=item verbose
If set to a true value, some status information will be output while
training a decision tree. Default is false.
=item purge
If set to a true value, the C<do_purge()> method will be invoked
during C<train()>. The default is true.
=item max_depth
Controls the maximum depth of the tree that will be created during
C<train()>. The default is 0, which means that trees of unlimited
depth can be constructed.
=back
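For example, all of these parameters together (the values shown are
illustrative):

  my $dtree = AI::DecisionTree->new
    (noise_mode => 'pick_best',  # tolerate contradictory instances
     prune      => 1,            # prune after training (the default)
     verbose    => 0,
     purge      => 1,
     max_depth  => 5);           # never grow deeper than 5 levels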
=item add_instance(attributes => \%hash, result => $string, name => $string)
Adds a training instance to the set of instances which will be used to
form the tree. An C<attributes> parameter specifies a hash of
attribute-value pairs for the instance, and a C<result> parameter
specifies the result.
An optional C<name> parameter lets you give a unique name to each
training instance. This can be used in coordination with the
C<set_results()> method below.
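For example (the instance name C<day1> is invented for illustration):

  $dtree->add_instance
    (attributes => {outlook => 'sunny', humidity => 'high', wind => 'weak'},
     result => 'no',
     name   => 'day1');  # optional; useful with set_results() later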
=item train()
Builds the decision tree from the list of training instances. If a
numeric C<max_depth> parameter is supplied, the maximum tree depth can
be controlled (see also the C<new()> method).
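For example:

  $dtree->train;                  # unlimited depth (the default)
  # or, with a depth limit:
  $dtree->train(max_depth => 3);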
=item get_result(attributes => \%hash)
Returns the most likely result (from the set of all results given to
C<add_instance()>) for the set of attribute values given. An
C<attributes> parameter specifies a hash of attribute-value pairs for
the instance. If the decision tree doesn't have enough information to
find a result, it will return C<undef>.
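For example:

  my $result = $dtree->get_result
    (attributes => {outlook => 'rainy', humidity => 'normal', wind => 'strong'});
  if (defined $result) {
      print "Predicted: $result\n";
  }
  else {
      print "The tree can't classify this instance\n";
  }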
=item do_purge()
Purges training instances and their associated information from the
DecisionTree object. This can save memory after training, and since
the training instances are implemented as C structs, this turns the
DecisionTree object into a pure-Perl data structure that can be more
easily saved with C<Storable.pm>, for instance.
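For example (the filename is arbitrary):

  use Storable qw(nstore);

  $dtree->train;
  $dtree->do_purge;                # drop the C-struct training instances
  nstore($dtree, 'tennis.dtree');  # now serializable as pure Perl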
=item purge()
Returns true or false depending on the value of the tree's C<purge>
property. An optional boolean argument sets the property.
=item copy_instances(from =E<gt> $other_tree)
Allows two trees to share the same set of training instances. More
commonly, this lets you train one tree, then re-use its instances in
another tree (possibly changing the instance C<result> values using
C<set_results()>), which is much faster than re-populating the second
tree's instances from scratch.
=item set_results(\%results)
Given a hash that relates instance names to instance result values,
change the result values as specified.
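Used together, C<copy_instances()> and C<set_results()> look
something like this (the instance names and result values are
invented for illustration):

  # $tree1 has already been populated via add_instance()
  my $tree2 = AI::DecisionTree->new;
  $tree2->copy_instances(from => $tree1);

  # Relabel some named instances, then train the second tree
  $tree2->set_results({ day1 => 'yes', day2 => 'no' });
  $tree2->train;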
=back
=head2 Tree Introspection
=over 4