AI-DecisionTree
lib/AI/DecisionTree.pm
  # i = number of instances in the entire tree
  # e = number of errors below this node
  # Hypothesis description length (MML):
  #   describe tree: number of nodes + number of edges
  #   describe exceptions: num_exceptions * log2(total_num_instances) * log2(total_num_results)

  my $r = keys %{ $self->{results} };
  my $i = $self->{tree}{instances};
  my $exception_cost = log($r) * log($i) / log(2)**2;
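  # $r is the number of result classes. For example, with $r = 2 result
  # classes and $i = 14 training instances, each exception costs
  # log2(2) * log2(14), i.e. about 3.81 bits.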
  # Pruning can turn a branch into a leaf
  my $maybe_prune = sub {
    my ($self, $node) = @_;
    return unless $node->{children};  # Can't prune leaves

    my $nodes_below = $self->nodes_below($node);
    my $tree_cost = 2 * $nodes_below - 1;  # $edges_below == $nodes_below - 1

    my $exceptions = $self->exceptions( $node );
    my $simple_rule_exceptions = $node->{instances} - $node->{distribution}[1];

    my $score = -$nodes_below - ($exceptions - $simple_rule_exceptions) * $exception_cost;
    #warn "Score = $score = -$nodes_below - ($exceptions - $simple_rule_exceptions) * $exception_cost\n";

    if ($score < 0) {
      delete @{$node}{'children', 'split_on', 'exceptions', 'nodes_below'};
      $node->{result} = $node->{distribution}[0];
      # XXX I'm not cleaning up 'exceptions' or 'nodes_below' keys up the tree
    }
  };

  $self->_traverse($maybe_prune);
}
sub exceptions {
  my ($self, $node) = @_;
  return $node->{exceptions} if exists $node->{exceptions};

  my $count = 0;
  if ( exists $node->{result} ) {
    $count = $node->{instances} - $node->{distribution}[1];
  } else {
    foreach my $child ( values %{$node->{children}} ) {
      $count += $self->exceptions($child);
    }
  }

  return $node->{exceptions} = $count;
}
sub nodes_below {
  my ($self, $node) = @_;
  return $node->{nodes_below} if exists $node->{nodes_below};

  my $count = 0;
  $self->_traverse( sub { $count++ }, $node );
  return $node->{nodes_below} = $count - 1;
}
# This is *not* for external use, I may change it.
sub _traverse {
  my ($self, $callback, $node, $parent, $node_name) = @_;
  $node ||= $self->{tree};

  ref($callback) ? $callback->($self, $node, $parent, $node_name) : $self->$callback($node, $parent, $node_name);

  return unless $node->{children};
  foreach my $child ( keys %{$node->{children}} ) {
    $self->_traverse($callback, $node->{children}{$child}, $node, $child);
  }
}
sub get_result {
  my ($self, %args) = @_;
  croak "Missing 'attributes' or 'callback' parameter" unless $args{attributes} or $args{callback};

  $self->train unless $self->{tree};
  my $tree = $self->{tree};

  while (1) {
    if (exists $tree->{result}) {
      my $r = $tree->{result};
      return $r unless wantarray;

      my %dist = @{$tree->{distribution}};
      my $confidence = $tree->{distribution}[1] / $tree->{instances};

      # my $confidence = P(H|D) = [P(D|H)P(H)]/[P(D|H)P(H)+P(D|H')P(H')]
      #                         = [P(D|H)P(H)]/P(D);
      # my $confidence =
      #   $confidence *= $self->{prior_freqs}{$r} / $self->{total_instances};

      return ($r, $confidence, \%dist);
    }

    my $instance_val = (exists $args{callback} ? $args{callback}->($tree->{split_on}) :
                        exists $args{attributes}{$tree->{split_on}} ? $args{attributes}{$tree->{split_on}} :
                        '<undef>');

    ## no critic (ProhibitExplicitReturnUndef)
    $tree = $tree->{children}{ $instance_val }
      or return undef;
  }
}
sub as_graphviz {
  my ($self, %args) = @_;
  my $colors = delete $args{leaf_colors} || {};
  require GraphViz;

  my $g = GraphViz->new(%args);
  my $id = 1;
  my $add_edge = sub {
    my ($self, $node, $parent, $node_name) = @_;
    # We use stringified reference names for node names, as a convenient hack.
    if ($node->{split_on}) {
      $g->add_node( "$node",
                    label => $node->{split_on},
                    shape => 'ellipse',
                  );
lib/AI/DecisionTree.pm
If set to a true value, the C<do_purge()> method will be invoked
during C<train()>. The default is true.
=item max_depth
Controls the maximum depth of the tree that will be created during
C<train()>. The default is 0, which means that trees of unlimited
depth can be constructed.
=back
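
For example, a minimal constructor sketch exercising both options
(the variable name C<$dtree> is used throughout the examples below):

  use AI::DecisionTree;

  my $dtree = AI::DecisionTree->new(
    purge     => 0,  # keep training instances after train()
    max_depth => 3,  # stop splitting after three decisions
  );
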
=item add_instance(attributes => \%hash, result => $string, name => $string)
Adds a training instance to the set of instances which will be used to
form the tree. An C<attributes> parameter specifies a hash of
attribute-value pairs for the instance, and a C<result> parameter
specifies the result.
An optional C<name> parameter lets you give a unique name to each
training instance. This can be used in coordination with the
C<set_results()> method below.
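
For instance, a sketch that adds one training instance using the
weather attributes from the example tree later in this document (the
instance name C<day1> is illustrative):

  $dtree->add_instance(
    attributes => {
      outlook  => 'sunny',
      humidity => 'high',
      wind     => 'weak',
    },
    result => 'no',
    name   => 'day1',  # optional; used by set_results()
  );
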
=item train()
Builds the decision tree from the list of training instances. If a
numeric C<max_depth> parameter is supplied, the maximum tree depth can
be controlled (see also the C<new()> method).
=item get_result(attributes => \%hash)
Returns the most likely result (from the set of all results given to
C<add_instance()>) for the set of attribute values given. An
C<attributes> parameter specifies a hash of attribute-value pairs for
the instance. If the decision tree doesn't have enough information to
find a result, it will return C<undef>.
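
A sketch of both calling styles; in list context, the implementation
shown earlier also returns a confidence value (the fraction of the
leaf's training instances sharing this result) and a reference to the
leaf's result distribution:

  my $result = $dtree->get_result(
    attributes => { outlook => 'rain', wind => 'strong', humidity => 'high' },
  );

  my ($best, $confidence, $distribution) =
    $dtree->get_result( attributes => { outlook => 'overcast' } );
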
=item do_purge()
Purges training instances and their associated information from the
DecisionTree object. This can save memory after training, and since
the training instances are implemented as C structs, this turns the
DecisionTree object into a pure-perl data structure that can be more
easily saved with C<Storable.pm>, for instance.
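
A minimal persistence sketch along those lines (the file name is
illustrative):

  use Storable qw(nstore retrieve);

  $dtree->do_purge;
  nstore($dtree, 'dtree.stor');
  my $restored = retrieve('dtree.stor');
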
=item purge()
Returns true or false depending on the value of the tree's C<purge>
property. An optional boolean argument sets the property.
=item copy_instances(from =E<gt> $other_tree)
Allows two trees to share the same set of training instances. More
commonly, this lets you train one tree, then re-use its instances in
another tree (possibly changing the instance C<result> values using
C<set_results()>), which is much faster than re-populating the second
tree's instances from scratch.
=item set_results(\%results)
Given a hash that relates instance names to instance result values,
change the result values as specified.
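
For example, to re-use one tree's instances under different labels
(the instance names C<day1> and C<day2> are assumed to have been set
via C<add_instance>'s C<name> parameter):

  my $tree2 = AI::DecisionTree->new;
  $tree2->copy_instances(from => $dtree);
  $tree2->set_results({ day1 => 'yes', day2 => 'no' });
  $tree2->train;
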
=back
=head2 Tree Introspection
=over 4
=item instances()
Returns a reference to an array of the training instances used to
build this tree.
=item nodes()
Returns the number of nodes in the trained decision tree.
=item depth()
Returns the depth of the tree. This is the maximum number of
decisions that would need to be made to classify an unseen instance,
i.e. the length of the longest path from the tree's root to a leaf. A
tree with a single node would have a depth of zero.
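
For example:

  printf "%d instances, %d nodes, depth %d\n",
      scalar @{ $dtree->instances }, $dtree->nodes, $dtree->depth;
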
=item rule_tree()
Returns a data structure representing the decision tree. For
instance, for the tree diagram above, the following data structure
is returned:
  [ 'outlook', {
      'rain' => [ 'wind', {
          'strong' => 'no',
          'weak'   => 'yes',
      } ],
      'sunny' => [ 'humidity', {
          'normal' => 'yes',
          'high'   => 'no',
      } ],
      'overcast' => 'yes',
  } ]
This is slightly reminiscent of how XML::Parser returns the parsed
XML tree.
Note that while the ordering in the hashes is unpredictable, the
nesting is in the order in which the criteria will be checked at
decision-making time.
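
As an illustration (not part of the module's API), a short sketch
that classifies an attribute hash by walking this structure:

  sub classify_with_rule_tree {
    my ($subtree, $attributes) = @_;
    return $subtree unless ref $subtree;  # a plain string is a leaf result
    my ($attr, $branches) = @$subtree;    # [ attribute, { value => subtree } ]
    my $next = $branches->{ $attributes->{$attr} // '' };
    return defined $next ? classify_with_rule_tree($next, $attributes) : undef;
  }

  # Yields 'yes' for the example tree above:
  my $answer = classify_with_rule_tree( $dtree->rule_tree,
                                        { outlook => 'sunny', humidity => 'normal' } );
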
=item rule_statements()
Returns a list of strings that describe the tree in rule form. For
instance, for the tree diagram above, the following list would be
returned (though not necessarily in this order; the order is
unpredictable):

  if outlook='rain' and wind='strong' -> 'no'
  if outlook='rain' and wind='weak' -> 'yes'
  if outlook='sunny' and humidity='normal' -> 'yes'
  if outlook='sunny' and humidity='high' -> 'no'
  if outlook='overcast' -> 'yes'