in a very slow demonstration of how a SOM can collapse
a three-dimensional space (RGB colour values) into a
two-dimensional space (the display). See L<SYNOPSIS>.
The only things added are two new fields to supply to the
constructor: set C<display> to C<hex> to display a
unified distance matrix rather than a plain grid; set
C<display_scale> to control the size of the display.
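For example, a minimal sketch of such a constructor call (the class name and the
remaining options are assumed to follow the parent AI::NeuralNet::Kohonen API;
the map dimensions here are illustrative only):

	use AI::NeuralNet::Kohonen::Visual;
	my $net = AI::NeuralNet::Kohonen::Visual->new(
		display       => 'hex', # unified distance matrix, not plain grid
		display_scale => 20,    # size of the display
		map_dim_x     => 14,    # illustrative parent-class options
		map_dim_y     => 10,
	);
	$net->train;
	$net->main_loop;            # see METHOD main_loop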
=cut
use strict;
lib/AI/NeuralNet/Kohonen/Visual.pm
The values of C<bmu_x> and C<bmu_y> represent the I<x> and I<y>
co-ordinates of the unit to highlight, using the value in
C<hicol> as the highlight colour. If no C<hicol> is provided,
it defaults to red.
When called, this method also sets the object field flag C<plotted>:
currently, this prevents C<main_loop> from calling this routine.
See also L<METHOD get_colour_for>.
=cut
lib/AI/NeuralNet/Mesh.pm
}
# See POD for usage
sub learn {
my $self = shift;
my $inputs = shift; # input set
my $outputs = shift; # target outputs
my %args = @_; # get args into hash
my $inc = $args{inc} || 0.002; # learning gradient
my $max = $args{max} || 1024; # max iterations
my $degrade = $args{degrade} || 0; # enable gradient degrading
return $str;
}
# See POD for usage
sub learn_set {
my $self = shift;
my $data = shift;
my %args = @_;
my $len = $#{$data}/2;
my $inc = $args{inc};
my $max = $args{max};
my $error = $args{error};
my $degrade = $args{degrade};
my $p = (defined $args{flag}) ? $args{flag} : 1;
my $row = (defined $args{row}) ? $args{row}+1 : 1;
my $leave = (defined $args{leave}) ? $args{leave} : 0;
for my $x (0..$len-$leave) {
d("Learning set $x...\n",4);
my $str = $self->learn( $data->[$x*2],
$data->[$x*2+1],
inc=>$inc,
max=>$max,
error=>$error,
return $data->[$row]->[0]-$self->run($data->[$row-1])->[0];
}
}
# See POD for usage
sub run_set {
my $self = shift;
my $data = shift;
my $len = $#{$data}/2;
my (@results,$res);
for my $x (0..$len) {
$res = $self->run($data->[$x*2]);
for(0..$#{$res}){$results[$x]->[$_]=$res->[$_]}
d("Running set $x [$res->[0]]...\r",4);
}
return \@results;
}
#
# Loads a CSV-like dataset from disk
#
# Usage:
# my $set = $net->load_set($file, $column, $separator);
#
# Returns a data set of the same format as required by the
# learn_set() method. $file is the disk file to load set from.
# $column is an optional variable specifying the column in the
# data set to use as the class attribute; $column defaults to 0.
# $separator is an optional variable specifying the separator
# character between values. $separator defaults to ',' (a single comma).
# NOTE: This does not handle quoted fields, or any record
# separator other than "\n".
#
sub load_set {
my $self = shift;
my $file = shift;
my $attr = shift || 0;
my $sep = shift || ',';
my $data = [];
# use AI::NeuralNet::Mesh ':acts'
# The ':all' tag also gets these into your namespace.
#
# range() returns a closure limiting the output
# of that node to a specified set of values.
# Good for output layers.
#
# usage example:
# $net->activation(4,range(0..5));
# or:
#
# $net->activation(4,range(@numbers));
# $net->activation(4,range(6,15,26,106,28,3));
#
# Note: when using a range() activator, train the
# net TWICE on the data set, because the first time
# the range() function searches for the top value in
# the inputs, and therefore, results could fluctuate.
# The second learning cycle guarantees more accuracy.
#
sub range {
#
# ramp() performs smooth ramp activation between 0 and 1 if $r is 1,
# or between -1 and 1 if $r is 2. $r defaults to 1, as you can see.
#
# Note: when using a ramp() activator, train the
# net at least TWICE on the data set, because the first
# time the ramp() function searches for the top value in
# the inputs, and therefore, results could fluctuate.
# The second learning cycle guarantees more accuracy.
#
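# usage example (a hedged sketch, mirroring the range() notes above):
#
# $net->activation(4,ramp());  # ramp between 0 and 1
# $net->activation(4,ramp(2)); # ramp between -1 and 1
#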
sub ramp {
This fixed the usage conflict with perl 5.3.3.
With this version I have gone through and tuned up many areas
of this module, including the descent algorithm in learn(),
as well as four custom activation functions, and several export
tag sets. With this release, I have also included a few
new and more practical example scripts. (See ex_wine.pl) This release
also includes a simple example of an ALN (Adaptive Logic Network) made
with this module. See ex_aln.pl. Also in this release is support for
loading data sets from simple CSV-like files. See the load_set() method
for details. This version also fixes a big bug that I never knew about
until writing some demos for this version - that is, when trying to use
more than one output node, the mesh would freeze in learning. But that
is fixed now, and you can have as many outputs as you want (how does 3
inputs and 50 outputs sound? :-)
This network model is very flexible. It will allow for classic binary
operation or any range of integer or floating-point inputs you care
to provide. With this you can change activation types on a per-node or
per-layer basis (you can even include your own anonymous subs as
activation types). You can add sigmoid transfer functions and control
the threshold. You can learn data sets in batch, and load CSV data
set files. You can do almost anything you need to with this module.
This code is designed to be flexible. Any new ideas for this module?
See AUTHOR, below, for contact info.
This module is designed to also be a customizable, extensible
neural network simulation toolkit. Through a combination of setting
the $Connection variable and using custom activation functions, as
well as basic package inheritance, you can simulate many different
types of neural network structures with very little new code written
by you.
In this module I have included a more accurate form of "learning" for the
mesh. This form performs descent toward a local error minimum (0) on a
directional delta, rather than the desired value for that node. This allows
for better and more accurate results with larger datasets. This module also
uses a simpler recursion technique which, surprisingly, is more accurate than
the original technique that I've used in other ANNs.
=head1 EXPORTS
intr
pdiff
See range(), intr() and pdiff() for descriptions of their respective functions.
Also provided are several export tag sets for usage in the form of:
use AI::NeuralNet::Mesh ':tag';
Tag sets are:
:default
- These functions are always exported.
- Exports:
range()
hash reference, or more precisely, each element in the array reference you are passing
to the constructor, represents a layer in the network. Like the constructor above,
the first element is the input layer, and the last is the output layer. The rest are
hidden layers.
Each hash reference is expected to have AT LEAST the "nodes" key set to the number
of nodes (neurons) in that layer. The other two keys are optional. If "activation" is left
out, it defaults to "linear". If "threshold" is left out, it defaults to 0.50.
The "activation" key can be one of four values:
other than the ones listed above.
Three of the activation syntaxes are shown in the first constructor above, the "linear",
"sigmoid" and code ref types.
You can also set the activation and threshold values after network creation with the
activation() and threshold() methods.
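For example (a hedged sketch using the activation() and threshold() signatures documented below):

	$net->activation(1, 'sigmoid');  # layer 1 now uses sigmoid activation
	$net->threshold(1, 0.50);        # with a threshold of 0.50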
=item $net->learn($input_map_ref, $desired_result_ref [, options ]);
NOTE: learn_set() now has increment-degrading turned OFF by default. See note
on the degrade flag, below.
This will 'teach' a network to associate a new input map with a desired
result. It will return a string containing benchmarking information.
$maximum_iterations is the maximum number of iterations the loop should do.
It defaults to 1024. Set it to 0 if you never want the loop to quit before
the pattern is perfectly learned.
$maximum_allowable_percentage_of_error is the maximum allowable error to have. If
this is set, then learn() will return when the percentage difference between the
actual results and desired results falls below $maximum_allowable_percentage_of_error.
If you do not include 'error', or $maximum_allowable_percentage_of_error is set to -1,
then learn() will not return until it gets an exact match for the desired result OR it
reaches $maximum_iterations.
$degrade_increment_flag is a simple flag used to allow/disallow increment degrading
during learning, based on a product of the error difference with several other factors.
It is disabled by default, but some networks do need the degrade flag to learn at a
reasonable speed. See test.pl for an example. If the degrade flag wasn't used in
test.pl, it would take a very long time to learn.
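For example, a hedged sketch of a learn() call using the options described above:

	$net->learn([ 1, 1 ], [ 0 ],
		max     => 1024, # give up after 1024 iterations
		error   => 5,    # or when error falls below 5%
		degrade => 1     # allow increment degrading
	);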
=item $net->learn_set(\@set, [ options ]);
This takes the same options as learn() (learn_set() uses learn() internally)
and allows you to specify a set to learn, rather than individual patterns.
A dataset is an array reference with at least two elements in the array,
each element being another array reference (or now, a scalar string). For
each pattern to learn, you must specify an input array ref, and an output
array ref as the next element. Example:
my @set = (
# inputs outputs
[ 1,2,3,4 ], [ 1,3,5,6 ],
[ 0,2,5,6 ], [ 0,2,1,2 ]
);
Inputs and outputs in the dataset can also be strings.
See the paragraph on measuring forgetfulness, below. There are
two learn_set()-specific option tags available:
flag => $flag
pattern => $row
If "flag" is set to some TRUE value, as in "flag => 1" in the hash of options, or if the option "flag"
is not set, then it will return a percentage representing the amount of forgetfulness. Otherwise,
learn_set() will return an integer specifying the amount of forgetfulness when all the patterns
are learned.
If "pattern" is set, then learn_set() will use that pattern in the data set to measure forgetfulness by.
If "pattern" is omitted, it defaults to the first pattern in the set. Example:
my @set = (
[ 0,1,0,1 ], [ 0 ],
[ 0,0,1,0 ], [ 1 ],
[ 1,1,0,1 ], [ 2 ], # <---
[ 0,1,1,0 ], [ 3 ]
);
Now why the heck would anyone want to measure forgetfulness, you ask? Maybe you wonder how I
even measure that. Well, it is not a vital value that you have to know. I just put in a
"forgetfulness measure" one day because I thought it would be neat to know.
How the module measures forgetfulness is this: First, it learns all the patterns
in the set provided, then it will run the very first pattern (or whatever pattern
is specified by the "row" option) in the set after it has finished learning. It
will compare the run() output with the desired output as specified in the dataset.
In a perfect world, the two should match exactly. What we measure is how much
they don't match, thus the amount of forgetfulness the network has.
Example (from examples/ex_dow.pl):
[ 5, 276, 261, 196, 19.5, 19.6, 18.0, 2630, 2611, 2582], [ 2638 ],
[ 6, 287, 276, 207, 19.5, 19.5, 18.0, 2637, 2630, 2589], [ 2635 ],
[ 7, 296, 287, 212, 19.3, 19.5, 17.8, 2640, 2637, 2592], [ 2641 ]
);
# Learn the set
my $f = $net->learn_set(\@data,
inc => 0.1,
max => 500,
);
# Print it
print "Forgetfullness: $f%";
This is a snippet from the example script examples/ex_dow.pl, which demonstrates DOW average
prediction for the next month. A simpler set definition would be as such:
my @data = (
[ 0,1 ], [ 1 ],
[ 1,0 ], [ 0 ]
);
$net->learn_set(\@data);
Same effect as above, but not the same data (obviously).
=item $net->run($input_map_ref);
You can also do this with a string:
my $outputs = $net->run('cloudy - wind is 5 MPH NW');
See also run_uc() and run_set() below.
=item $net->run_uc($input_map_ref);
This method does the same thing as this code:
run_uc() simply calls uncrunch() on the output automatically, regardless
of whether the input was crunch() -ed or not.
=item $net->run_set($set);
This takes an array ref of the same structure as the learn_set() method, above. It returns
an array ref. Each element in the returned array ref represents the output for the corresponding
element in the dataset passed. Uses run() internally.
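For example (a hedged sketch; C<@set> has the structure shown for learn_set() above):

	my $results = $net->run_set(\@set);
	print "row 0 output: @{ $results->[0] }\n";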
=item $net->get_outs($set);
Simple utility function which takes an array ref of the same structure as the learn_set() method,
above. It returns an array ref of the same type as run_set() wherein each element contains an
output value. The output values are the target values specified in the $set passed. Each element
in the returned array ref represents the output value for the corresponding row in the dataset
passed. (A row is two elements of the dataset together, see learn_set() for dataset structure.)
=item $net->load_set($file,$column,$seperator);
Loads a CSV-like dataset from disk
Returns a data set of the same structure as required by the
learn_set() method. $file is the disk file to load set from.
$column is an optional variable specifying the column in the
data set to use as the class attribute; $column defaults to 0.
$separator is an optional variable specifying the separator
character between values. $separator defaults to ',' (a single comma).
NOTE: This does not handle quoted fields, or any record
separator other than "\n".
The returned array ref is suitable for passing directly to
learn_set() or get_outs().
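For example, a hedged sketch (the file name here is hypothetical):

	my $set = $net->load_set('dow.csv', 0); # class attribute in column 0
	$net->learn_set($set);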
=item $net->range();
See CUSTOM ACTIVATION FUNCTIONS for information on several included activation functions.
=item $net->save($filename);
This will save the complete state of the network to disk, including all weights and any
words crunched with crunch() . Also saves the layer size and activations of the network.
NOTE: The only activation type NOT saved is the CODE ref type, which must be set again
after loading.
This uses a simple flat-file text storage format, and therefore the network files should
be fairly portable.
This method will return undef if there was a problem with writing the file. If there is an
error, it will set the internal error message, which you can retrieve with the error() method,
below.
If there were no errors, it will return a reference to $net.
If the file is of an invalid file type, then load() will
return undef. Use the error() method, below, to print the error message.
If there were no errors, it will return a reference to $net.
UPDATE: $filename can now be a newline-separated set of mesh data. This enables you
to do $net->load(join("\n",<DATA>)) and other fun things. I added this mainly
for a demo I'm writing but am not quite done with yet. So, Cheers!
=item $net->activation($layer,$type);
This sets the activation type for layer C<$layer>.
C<$type> can be one of four values:
linear ( simply use sum of inputs as output )
sigmoid [ sigmoid_1 ] ( only positive sigmoid )
save() cannot store the code that a CODE
ref option points to. Therefore, you must re-apply any code ref activation types after a
load() call.
=item $net->node_activation($layer,$node,$type);
This sets the activation function for a specific node in a layer. The same notes apply
here as to the activation() method above.
=item $net->threshold($layer,$value);
This sets the activation threshold for a specific layer. The threshold is only used
when activation is set to "sigmoid", "sigmoid_1", or "sigmoid_2".
=item $net->node_threshold($layer,$node,$value);
This sets the activation threshold for a specific node in a layer. The threshold is only used
when activation is set to "sigmoid", "sigmoid_1", or "sigmoid_2".
=item $net->join_cols($array_ref,$row_length_in_elements,$high_state_character,$low_state_character);
This is more of a utility function than any real necessary function of the package.
Instead of joining all the elements of the array together in one long string, like join() does,
join_cols() breaks the output into rows of $row_length_in_elements elements each.
You can also specify the extension in a simple array ref like this:
$net->extend([2,3,1]);
Which will simply add more nodes if needed to set the number of nodes in each layer to their
respective elements. This works just like the respective new() constructor, above.
NOTE: Your net will probably require re-training after adding nodes.
You can also specify just the number of nodes for the layer in this form:
$net->extend_layer(0,5);
Which will set the number of nodes in layer 0 to 5 nodes. This is the same as calling:
$net->add_nodes(0,5);
Which does the exact same thing. See add_nodes() below.
use AI::NeuralNet::Mesh;
my $net = AI::NeuralNet::Mesh->new(2,3);
for (0..3) {
$net->learn_set([
$net->crunch("I love chips."), $net->crunch("That's Junk Food!")),
$net->crunch("I love apples."), $net->crunch("Good, Healthy Food.")),
$net->crunch("I love pop."), $net->crunch("That's Junk Food!")),
$net->crunch("I love oranges."),$net->crunch("Good, Healthy Food."))
]);
=item $net->crunched($word);
This will return undef if the word is not in the internal crunch list, or it will return the
index of the word if it exists in the crunch list.
If the word is not in the list, it will set the internal error value with a text message
that you can retrieve with the error() method, below.
=item $net->word($word);
A function alias for crunched().
=item $net->col_width($width);
This is useful for formatting the debugging output of Level 4 if you are learning simple
bitmaps. This will set the debugger to automatically insert a line break after that many
elements in the map output when dumping the currently run map during a learn loop.
It will return the current width when called with a 0 or undef value.
The column width is preserved across load() and save() calls.
=item $net->random($rand);
This will set the randomness factor for the network. Default is 0. When called
with no arguments, or an undef value, it will return current randomness value. When
called with a 0 value, it will disable randomness in the network. The randomness factor
is preserved across load() and save() calls.
=item $net->const($const);
This sets the run const. for the network. The run const. is a value that is added
to every input line when a set of inputs are run() or learn() -ed, to prevent the
network from hanging on a 0 value. When called with no arguments, it returns the current
const. value. It defaults to 0.0001 on a newly-created network. The run const. value
is preserved across load() and save() calls.
=item range(@range);
=item range(A,B,C);
range() returns a closure limiting the output
of that node to a specified set of values.
Good for use in output layers.
Usage example:
$net->activation(4,range(0..5));
or (in the new() hash constructor form):
$net->activation(4,range(@numbers));
$net->activation(4,range(6,15,26,106,28,3));
Note: when using a range() activator, train the
net TWICE on the data set, because the first time
the range() function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.
The actual function that implements the range closure fits in one line of code, but I expanded it a bit
here. Line 1 creates our array of allowed output values. Lines two and
three grab our parameters off the stack which allow us access to the
internals of this node. Line 5 checks to see if the sum output of this
node is higher than any previously encountered, and, if so, it sets
the marker higher. This also shows that you can use the $self reference
to maintain information across activations. This technique is also used
in the ramp() activator. Line 6 computes the index into the allowed
values array by first scaling the $sum to be between 0 and 1 and then
expanding it to fit smoothly inside the number of elements in the array.
These activation functions can be imported into your namespace with the ':acts'
tag as so:
use AI::NeuralNet::Mesh ':acts';
Note: when using a ramp() activator, train the
net at least TWICE on the data set, because the first
time the ramp() function searches for the top value in
the inputs, and therefore, results could fluctuate.
The second learning cycle guarantees more accuracy.
No code to show here, as it is almost exactly the same as range().
See CUSTOM NETWORK CONNECTORS, below, for more information on creating your own custom connector.
=item $AI::NeuralNet::Mesh::DEBUG
This variable controls the verbosity level. It will not hurt anything to set this
directly, yet most people find it easier to set it using the debug() method, or
any of its aliases.
=head1 CUSTOM NETWORK CONNECTORS
Creating custom network connectors is a step up from average use of this module.
However, it can be very useful in creating other styles of neural networks, other
than the default fully-connected feed-forward network.
You create a custom connector by setting the variable $AI::NeuralNet::Mesh::Connector
to the fully qualified name of the function used to make the actual connections
between the nodes in the network. This variable contains '_c' by default, but if you set
this variable, be sure to give the fully qualified name of the method. For example,
in the ALN example, I use a connector in the main package called tree() instead of
the default connector. Before I call the new() constructor, I use this line of code:
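A hedged reconstruction of that line, based purely on the description above:

	$AI::NeuralNet::Mesh::Connector = 'main::tree';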
from http://www.josiah.countystart.com/modules/get.pl?mesh:pod
=head1 MAILING LIST
A mailing list has been set up for AI::NeuralNet::Mesh and AI::NeuralNet::BackProp.
The list is for discussion of AI and neural net related topics as they pertain to
AI::NeuralNet::BackProp and AI::NeuralNet::Mesh. I will also announce in the group
each time a new release of AI::NeuralNet::Mesh is available.
The list address is at:
lib/AI/NeuralNet/SOM.pm
=head2 Scenario
The basic idea is that the neural network consists of a 2-dimensional
array of N-dimensional vectors. When the training is started these
vectors may be completely random, but over time the network learns
from the sample data, which is a set of N-dimensional vectors.
Slowly, the vectors in the network will come to approximate the sample
vectors fed in. If the sample vectors contained clusters, then
these clusters will appear as neighbourhoods within the rectangle (or
whatever topology you are using).
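A minimal training sketch (assuming the constructor and methods of the
AI::NeuralNet::SOM synopsis; the dimensions and sample vectors are illustrative):

	use AI::NeuralNet::SOM::Rect;
	# a 5x6 grid of 3-dimensional vectors
	my $nn = AI::NeuralNet::SOM::Rect->new(output_dim => "5x6",
	                                       input_dim  => 3);
	$nn->initialize;                 # start from random vectors
	$nn->train(30, [3, 2, 4], [-1, -1, -1], [0, 4, -3]);  # sample vectors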
examples/game_ai.pl
use constant YES => 1.0;
use constant NO => 0.0;
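# GOOD (a health level) and the action constants WANDER, HIDE and ATTACK
# used below are defined earlier in the full example script; their values
# are not shown in this excerpt.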
my $net = AI::NeuralNet::Simple->new(4,20,4);
$net->iterations(shift || 100000);
$net->train_set( [
# health knife gun enemy
[GOOD, YES, YES, 0], WANDER,
[GOOD, YES, NO, 2], HIDE,
[GOOD, YES, NO, 1], ATTACK,
[GOOD, YES, NO, 0], WANDER,
lib/AI/Ollama/Client/Impl.pm
=item C<< format >>
The format to return a response in. Currently the only accepted value is json.
Enable JSON mode by setting the format parameter to json. This will structure the response as valid JSON.
Note: it's important to instruct the model to use JSON in the prompt. Otherwise, the model may generate large amounts of whitespace.
=item C<< keep_alive >>
=over
=item -
If set to a positive duration (e.g. 20), the model will stay loaded for the provided duration.
=item -
If set to a negative duration (e.g. -1), the model will stay loaded indefinitely.
=item -
If set to 0, the model will be unloaded immediately once finished.
=item -
If not set, the model will stay loaded for 5 minutes by default
=back
=item C<< messages >>
my $ct = $resp->headers->content_type;
return unless $ct;
$ct =~ s/;\s+.*//;
if( $ct eq 'application/x-ndjson' ) {
# we only handle ndjson currently
my $handled_offset = 0;
$resp->on(progress => sub($msg,@) {
my $fresh = substr( $msg->body, $handled_offset );
my $body = $msg->body;
$body =~ s/[^\r\n]+\z//; # Strip any unfinished line
$handled_offset = length $body;
my @lines = split /\n/, $fresh;
for (@lines) {
my $payload = decode_json( $_ );
$self->validate_response( $payload, $tx );
$queue->push(
Future::Mojo->done( defined $res );
} until => sub($done) { $done->get };
Create a model from a Modelfile.
It is recommended to set C<modelfile> to the content of the Modelfile rather than just set C<path>. This is a requirement for remote create. Remote model creation should also create any file blobs, fields such as C<FROM> and C<ADAPTER>, explicitly wi...
=head3 Options
=over 4
=item C<< images >>
=item C<< model >>
lib/AI/PBDD.pm
BDD_REORDER_SIFT
BDD_REORDER_RANDOM
);
@EXPORT_OK = (
    # setup and cleanup
    qw(
    init
    gc
    verbose
    kill
    ),
    # simple BDD operations
    qw(
    internal_constvalue
    internal_iscomplemented
    internal_then
    internal_else
    ),
    # dynamic variable ordering
    qw(
    reorder_setMethod
    reorder_now
    reorder_enableDynamic
    reorder_createVariableGroup
    ),
);
printDot__II($bdd, $filename);
}
}
sub makeSet {
my ($vars, $size, $offset) = @_;
if (!defined($offset)) {
return makeSetI($vars, $size);
} else {
return makeSetII($vars, $size, $offset);
}
}
sub createPair {
my ($old, $new) = @_;
kill();
=head1 DESCRIPTION
Binary Decision Diagrams (BDDs) are used for efficient computation of many common problems. This is done by giving a compact representation and a set of efficient operations on boolean functions f: {0,1}^n --> {0,1}.
It turns out that this representation is good enough to solve a huge number of problems in Artificial Intelligence and other areas of computing such as hardware verification.
This is a Perl interface to the popular BDD package BuDDy. The interface is largely inspired by JBDD, a Java common interface for the two BDD packages BuDDy and CUDD written by Arash Vahidi, which can be found at L<http://javaddlib.sourceforge.net/jb...
BDD NOT operation. The returned result is already referenced.
=item B<$cube = makeSet($vars,$size)>
=item B<$cube = makeSet($vars,$size,$offset)>
Create a cube (all-true minterm, e.g. C<$v1 AND $v2 AND $v3> where each C<$vi> is a BDD variable) of C<$size> variables from the array referenced by C<$vars>, starting at position 0 (or C<$offset>).
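For example, a hedged sketch (how the BDD variables in C<@vars> were created is omitted here, as it depends on the rest of the API):

	my $cube  = makeSet(\@vars, 3);    # $vars[0] AND $vars[1] AND $vars[2]
	my $cube2 = makeSet(\@vars, 2, 1); # $vars[1] AND $vars[2]
	my $bdd2  = exists($bdd, $cube);   # existentially quantify over the cube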
=item B<$bdd = exists($bdd1,$cube)>
BDD existential quantification. Parameter C<$cube> is an all-true minterm. The returned result is already referenced.
BDD relation-product (quantification and product computation in one pass): C<EXISTS $cube: $bdd_left AND $bdd_right>. The returned result is already referenced.
=item B<$bdd = restrict($bdd1,$minterm)>
Restrict a set of variables to constant values. The returned result is already referenced.
=item B<$bdd = constrain($bdd1,$bdd2)>
Compute the generalized co-factor of C<$bdd1> w.r.t. C<$bdd2>. The returned result is already referenced.
=over 4
=item B<$cube = support($bdd)>
BDD support set as a cube.
=item B<$cnt = nodeCount($bdd)>
Number of nodes in a BDD.
=head2 DYNAMIC VARIABLE ORDERING
=over 4
=item B<reorder_setMethod($method)>
Set dynamic reordering heuristic. The possible values are:
=over 4
examples/NeuralNet/pso_ann.pl
&writeAnnConfig($annConfig, $numInputs, $numHidden, $xferFunc, @arr);
my $netValue = &runANN($annConfig, $annInputs);
print "network value = $netValue\n";
# the closer the network value gets to our desired value,
# the closer we want to set the fitness to 1.
#
# This is a special case of the sigmoid, and looks an awful lot
# like the hyperbolic tangent ;)
#
my $magnitudeFromBest = abs($expectedValue - $netValue);
return 2 / (1 + exp($magnitudeFromBest));
}
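# Note: when $netValue equals $expectedValue this fitness is 2/(1+exp(0)) = 1,
# and it decays toward 0 as the network output diverges from the target.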
pso_set_params(\%test_params);
pso_register_fitness_function('test_fitness_function');
pso_optimize();
#my @solution = pso_get_solution_array();
example/PSOTest-MultiCore.pl
#use AI::ParticleSwarmOptimization::Pmap;
use Data::Dumper; $::Data::Dumper::Sortkeys = 1;
#=======================================================================
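# calcFit computes a shifted paraboloid: for n values the minimum (fitness 0)
# lies at ( int(-n/2), int(-n/2)+1, ... ), e.g. (-1, 0, 1) in three dimensions.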
sub calcFit {
my @values = @_;
my $offset = int (-@values / 2);
my $sum;
select( undef, undef, undef, 0.01 ); # Simulation of heavy processing...
$sum += ($_ - $offset++) ** 2 for @values;
return $sum;
}
#=======================================================================
++$|;
#-----------------------------------------------------------------------
example/PSOTest-MultiCore.pl
use AI::ParticleSwarmOptimization::Pmap;
use Data::Dumper; $::Data::Dumper::Sortkeys = 1;
#=======================================================================
sub calcFit {
my @values = @_;
my $offset = int (-@values / 2);
my $sum;
select( undef, undef, undef, 0.01 ); # Simulation of heavy processing...
$sum += ($_ - $offset++) ** 2 for @values;
return $sum;
}
#=======================================================================
++$|;
#-----------------------------------------------------------------------
Samples/PSOPlatTest.pl
$fit, join (', ', map {sprintf '%.4f', $_} @values), $iters;
sub calcFit {
my @values = @_;
my $offset = int (-@values / 2);
my $sum;
$sum += ($_ - $offset++)**2 for @values;
return $sum;
}
Benchmark/perl-vs-xs.pl
for my $x (0 .. WIDTH_X - 1 )
{
for my $y (0 .. WIDTH_Y - 1 )
{
$m->set_passability($x, $y, $map[$x][$y]) ;
}
}
my ( $x_start, $y_start ) = ( WIDTH_X >> 1, WIDTH_Y >> 1 );
my ( $x_end, $y_end ) = ( 0, 0 );
lib/AI/Pathfinding/OptimizeMultiple.pm
$last_avg = $min_avg;
push @{ $self->chosen_scans() },
$self->_calc_chosen_scan( $selected_scan_idx, $iters_quota );
$flares_num_iters->set( $selected_scan_idx,
$flares_num_iters->at($selected_scan_idx) + $iters_quota );
$self->_selected_scans()->[$selected_scan_idx]->mark_as_used();
$iters_quota = 0;
=encoding UTF-8
=head1 NAME
AI::Pathfinding::OptimizeMultiple - optimize path finding searches for a large
set of initial conditions (for better average performance).
=head1 VERSION
version 0.0.17
quotas => [400, 300, 200],
selected_scans =>
[
AI::Pathfinding::OptimizeMultiple::Scan->new(
id => 'first_search',
cmd_line => "--preset first_search",
),
AI::Pathfinding::OptimizeMultiple::Scan->new(
id => 'second_search',
cmd_line => "--preset second_search",
),
AI::Pathfinding::OptimizeMultiple::Scan->new(
id => 'third_search',
cmd_line => "--preset third_search",
),
],
}
);
=item * L<Freecell Solver|http://fc-solve.shlomifish.org/>
For which this code was first written and used.
=item * L<Alternative Implementation in C#/.NET|https://bitbucket.org/shlomif/fc-solve/src/cc5b428ed9bad0132d7a7bc1a14fc6d3650edf45/fc-solve/presets/soft-threads/meta-moves/auto-gen/optimize-seq?at=master>
An alternative implementation in C#/.NET, which was written because the
performance of the Perl/PDL code was too slow.
=item * L<PDL> - Perl Data Language
lib/AI/Pathfinding/SMAstar.pm
else{
my $successors_iterator = $best->$successors_func();
my $succ = $successors_iterator->();
if($succ){
# if succ is at max depth and is not a goal node, set succ->fcost to infinity
if($succ->depth() >= $max_depth && !$succ->$goal_p() ){
$succ->{_f_cost} = $max_cost;
}
else{
# calling eval for comparison, and maintaining pathmax property
if($was_on_queue{$i} && $antecedent->need_fval_change()){
# the antecedent needed fval change too.
$$priority_queue->insert($antecedent);
}
if($antecedent->need_fval_change()){
# set need_fval_change back to 0, so it will not automatically be seen as
# needing to be changed in the future. This is important, since we do not want
# to remove an element from the queue *unless* we need to change the fcost.
# This is because when we remove it from the queue and re-insert it, it
# loses its seniority in the queue (it becomes the newest node at its cost
# and depth) and will not be removed at the right time when searching for
}
$antecedent = $antecedent->{_antecedent};
$i++;
}
# Again, set need_fval_change back to 0, so it will not automatically be
# seen as needing to be changed in the future.
$best->{_need_fcost_change} = 0;
}
}
$antecedent->remember_forgotten_nodes_fcost($shcl_obj);
$antecedent->{_forgotten_nodes_num} = $antecedent->{_forgotten_nodes_num} + 1;
my $descendant_index = $shcl_obj->{_descendant_index};
# record the index of this descendant in the forgotten_nodes list
$antecedent->{_forgotten_nodes_offsets}->{$descendant_index} = 1;
# flag the antecedent as not having this descendant in the queue
$antecedent->{_descendants_produced}->[$descendant_index] = 0;
$antecedent->{_descendant_fcosts}->[$descendant_index] = -1;
# flag the ancestor node as having deleted a descendant
$antecedent->descendants_deleted(1);
For a given admissible heuristic function, it can be shown that A* search
is I<optimally efficient>, meaning that, in its calculation of the shortest
path, it expands fewer nodes in the search space than any other algorithm.
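A* orders its expansions by the standard evaluation function

	f(n) = g(n) + h(n)

where I<g(n)> is the cost of the path found so far and I<h(n)> is the heuristic
estimate of the remaining distance from node I<n> to the goal.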
To be admissible, the heuristic I<h(n)> can never over-estimate the distance
from the node to the goal. Note that if the heuristic I<h(n)> is set to
zero, A* search reduces to I<Branch and Bound> search. If the cost-so-far
I<g(n)> is set to zero, A* reduces to I<Greedy Best-first> search (which is
neither complete nor optimal). If both I<g(n)> and I<h(n)> are set to zero,
the search becomes I<Breadth-first>, which is complete and optimal, but not
optimally efficient.
The space complexity of A* search is bounded by an exponential of the
branching factor of the search-space, by the length of the longest path
lib/AI/Perceptron/Simple.pm
=head1 DATASET STRUCTURE
I<This module can only process CSV files.>
Any field, i.e. column, that will be used for processing must be binary, i.e. C<0> or C<1> only. Your dataset can contain other columns with non-binary data as long as they are not one of the dendrites.
There are some sample datasets which can be found in the C<t> directory. The original dataset can also be found in C<docs/book_list.csv>. The files can also be found L<here|https://github.com/Ellednera/AI-Perceptron-Simple>.
=head1 PERCEPTRON DATA
The perceptron/neuron data is stored using the C<Storable> module.
=head2 learning_rate ( $value )
=head2 learning_rate
If C<$value> is given, sets the learning rate to C<$value>. If not, then it returns the learning rate.
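For example (C<$nerve> being a hypothetical C<AI::Perceptron::Simple> object):

	$nerve->learning_rate( 0.3 );     # set the learning rate
	my $rate = $nerve->learning_rate; # retrieve it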
=cut
sub learning_rate {
my $self = shift;
=head2 threshold ( $value )
=head2 threshold
If C<$value> is given, sets the threshold / passing rate to C<$value>. If not, then it returns the passing rate.
=cut
sub threshold {
my $self = shift;
=head2 train ( $stimuli_train_csv, $expected_output_header, $save_nerve_to_file, $display_stats, $identifier )
Trains the perceptron.
C<$stimuli_train_csv> is the set of data / input (in CSV format) to train the perceptron while C<$save_nerve_to_file> is
the filename that will be generated each time the perceptron finishes the training process. This data file is the data of the C<AI::Perceptron::Simple>
object and it is used in the C<validate> method.
C<$expected_output_header> is the header name of the columns in the csv file with the actual category or the expected values. This is used to determine whether to tune the nerve up or down. This value should only be 0 or 1 for the sake of simplicity.
The new sum of all C<weightage * input> after fine-tuning the nerve
=back
If C<$display_stats> is specified, i.e. set to C<1>, then you B<MUST> specify the C<$identifier>. C<$identifier> is the column / header name that is used to identify a specific row of data in C<$stimuli_train_csv>.
=cut
sub tame {
train( @_ );
=head2 take_lab_test (...)
=head2 validate ( \%options )
This method validates the perceptron against another set of data after it has undergone the training process.
This method calculates the output of each row of data and writes the result into the predicted column. The data written into the new file, or the original file, will maintain its sequence.
Please take note that this method will load all the data of the validation stimuli, so please split your stimuli into multiple files if possible and call this method a few more times.
The parameters and usage are the same as C<get_confusion_matrix>. See the next method.
=head2 get_confusion_matrix ( \%options )
Returns the confusion matrix in the form of a hash. The hash will contain these keys: C<true_positive>, C<true_negative>, C<false_positive>, C<false_negative>, C<accuracy>, C<sensitivity>. More stats like C<precision>, C<specificity> and C<F1_Score> ...
If you are trying to manipulate the confusion matrix hash or something, take note that all the stats are in percentage (%) in decimal (if any) except the total entries.
For C<%options>, the followings are needed unless mentioned:
lib/AI/Perceptron.pm
a positive or negative output depending on the inputs' weights and a threshold.
=head1 TRAINING
Usually you have to train a perceptron before it will give you the outputs you
expect. This is done by giving the perceptron a set of examples containing the
output you want for some given inputs:
-1 => -1, -1
-1 => 1, -1
-1 => -1, 1
Which is known as a negative feedback loop - it uses the current output as an
input to determine what the next output will be.
Also, note that this means you can get stuck in an infinite loop. It's not a
bad idea to set the maximum number of iterations to prevent that.
=head1 CONSTRUCTOR
=over 4
num_inputs = 1
learning_rate = 0.01
threshold = 0.0
weights = empty list
Ideally you should use the accessors to set the properties, but for backwards
compatibility you can still use the following arguments:
Inputs => $number_of_inputs (positive int)
N => $learning_rate (float)
W => [ @weights ] (floats)
Uses the I<Stochastic Approximation of the Gradient-Descent> model to adjust
the perceptron's weights until all training examples are classified correctly.
@training_examples can be passed for convenience. These are passed to
L<add_examples()>. If you want to re-train the perceptron with an entirely new
set of examples, reset the L<training_examples()>.
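For example, a hedged sketch of training an AND-like gate, using the chained accessors and the C<< [ $output => @inputs ] >> example format shown above (the exact train() argument handling may differ):

	my $p = AI::Perceptron->new
	          ->num_inputs( 2 )
	          ->learning_rate( 0.01 );
	my @examples = ( [  1 =>  1,  1 ],
	                 [ -1 => -1,  1 ],
	                 [ -1 =>  1, -1 ],
	                 [ -1 => -1, -1 ] );
	$p->train( @examples );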
=back
=head1 AUTHOR
lib/AI/PredictionClient.pm
| Classification Results for zzzzz |
'==========================================================================='
=head2 SETTING UP A TEST SERVER
You can set up a server by following the instructions on the TensorFlow Serving site:
https://www.tensorflow.org/deploy/tfserve
https://tensorflow.github.io/serving/setup
https://tensorflow.github.io/serving/docker
I have a prebuilt Docker container available here:
docker pull mountaintom/tensorflow-serving-inception-docker-swarm-demo
Start this container and run the following commands within it to get the server running:
$ cd /serving
$ bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=inception --model_base_path=inception-export &> inception_log &
A longer article on setting up a server is here:
https://www.tomstall.com/content/create-a-globally-distributed-tensorflow-serving-cluster-with-nearly-no-pain/
=head1 ADDITIONAL INFO
examples/data_structures.pl
use Data::Dumper;
$Data::Dumper::Indent = 0;
use AI::Prolog;
# note that the following line sets an experimental interface option
AI::Prolog->raw_results(0);
my $database = <<'END_PROLOG';
append([], X, X).
append([W|X],Y,[W|Z]) :- append(X,Y,Z).
END_PROLOG
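A typical continuation (a hedged sketch following the AI::Prolog synopsis):

	my $prolog = AI::Prolog->new($database);
	$prolog->query('append(X, Y, [a, b]).');
	while (my $result = $prolog->results) {
		print Dumper($result), "\n"; # each X/Y split of [a,b]
	}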
lib/AI/SimulatedAnnealing.htm
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>AI::SimulatedAnnealing – optimize a list of numbers
according to a specified cost function.</title>
<meta http-equiv="content-type" content="text/html; charset=utf-8"/>
<link href="mailto:" rev="made"/>
</head>
<body style="background-color: white">
<ul>
<li><a href="#name">NAME</a></li>
lib/AI/TensorFlow/Libtensorflow/Buffer.pm
NewFromString( $proto )
>>>
=back
Makes a copy of the input and sets an appropriate deallocator. Useful for
passing in read-only input protobufs.
my $data = 'bytes';
my $buffer = Buffer->NewFromString(\$data);
ok $buffer, 'create buffer from string';
lib/AI/Termites.pm
my $wood_ix = delete $termite->{wood_ix} //
croak "termite can not leave wood because it is carrying nothing";
$self->{taken}--;
my $wood = $self->{wood}[$wood_ix];
$wood->{taken} = 0;
$wood->{pos}->set($termite->{pos});
}
1;
__END__
examples/iris.pl
use aliased 'AI::XGBoost::DMatrix';
use AI::XGBoost qw(train);
use Data::Dataset::Classic::Iris;
# We are going to solve a multiple classification problem:
# determining plant species using a set of flower measurements
# XGBoost uses numbers for "class" so we are going to codify classes
my %class = (
setosa => 0,
versicolor => 1,
virginica => 2
);
my $iris = Data::Dataset::Classic::Iris::get();
# Separate label and features (note: for simplicity this example uses
# the full dataset for both train and test)
my $train_dataset = [map {$iris->{$_}} grep {$_ ne 'species'} keys %$iris];
my $test_dataset = [map {$iris->{$_}} grep {$_ ne 'species'} keys %$iris];
sub transpose {
    # Transposing without using PDL, Data::Table, Data::Frame or other modules
    # to keep minimal dependencies
    my $array = shift;
    my @aux = ();
    for my $row (0 .. $#$array) {
        for my $column (0 .. $#{$array->[$row]}) {
            $aux[$column][$row] = $array->[$row][$column];
        }
    }
    return \@aux;
}
$train_dataset = transpose($train_dataset);
$test_dataset = transpose($test_dataset);
my $train_label = [map {$class{$_}} @{$iris->{'species'}}];
my $test_label = [map {$class{$_}} @{$iris->{'species'}}];
my $train_data = DMatrix->From(matrix => $train_dataset, label => $train_label);
my $test_data = DMatrix->From(matrix => $test_dataset, label => $test_label);
# Multiclass problems need a different objective function and the number
# of classes, in this case we are using 'multi:softprob' and
# num_class => 3
my $booster = train(data => $train_data, number_of_rounds => 20, params => {
lib/AIIA/GMT.pm
sub submit {
my @args = (shift);
my $client = Frontier::Client->new(url => $SERVER_URL, debug => 0);
my $ret = $client->call('Annotator.getAnnotation', @args);
my @rep;
map {push @rep, $_->{'offset'} . "\t" . $_->{'mention'};} @{$ret->{'mentions'}};
@rep = sort @rep;
return \@rep;
}
1;
exit;
};
# print STDERR "setting ",caller().'::AIS_IDENTITY', " to $Sessions{$Coo}->{identity}\n";
# $ENV{AIS_IDENTITY} = $Sessions{$Coo}->{identity};
$ENV{AIS_IDENTITY} =
${caller().'::AIS_IDENTITY'} = $Sessions{$Coo}->{identity};
tie %{caller().'::AIS_STASH'}, DirDB => ${tied(%{$Sessions{$Coo}})};
LPP/lpp_name.pm
if (defined $param{NAME}) { $self->{NAME} = $param{NAME}}
return ( $self->{NAME},$self->{TYPE},$self->{FORMAT},$self->{PLATFORM},
keys %{$self->{FILESET}} );
}
sub fileset {
my $self = shift;
my $fsname = shift;
my %param = @_;
if ( $#_ == -1 ) {
return ($self->{FILESET}{$fsname}{NAME},$self->{FILESET}{$fsname}{VRMF},
$self->{FILESET}{$fsname}{COMMENTS});
}
sub sizeinfo {
my $self = shift;
my $fset = shift;
my $size_ref = shift;
$self->{FILESET}{$fset}{SIZEINFO} = $size_ref;
return $self->{FILESET}{$fset}{SIZEINFO};
}
sub requisites {
my $self = shift;
my $fset = shift;
my $ref_req = shift;
$self->{FILESET}{$fset}{REQ} = $ref_req;
return $self->{FILESET}{$fset}{REQ};
}
sub validate {
}
my ($format,$platform,$type,$name,$token) = split / /, $line;
$self->lpp(NAME => $name, FORMAT => $format, TYPE => $type,
PLATFORM => $platform);
chomp ($line = <$fh>);
# add while loop here to process fileset headers
my ($fsn,$vrmf,$disk,$bosboot,$content,$lang,@desc) = split / /, $line;
$self->fileset($fsn, NAME => $fsn,VRMF => $vrmf,DISK => $disk,
BOSBOOT => $bosboot, CONTENT => $content, LANG => $lang,
DESCRIPTION => join ' ', @desc);
chomp ($line = <$fh>) until $line =~ /^\[/;
chomp ($line = <$fh>);
my $self = shift;
my $fh = shift;
print $fh join ' ', $self->{FORMAT}, $self->{PLATFORM}, $self->{TYPE},
$self->{NAME}, "{\n";
foreach my $fileset (keys %{$self->{FILESET}} ) {
print $fh join ' ', $self->{FILESET}{$fileset}{NAME},
$self->{FILESET}{$fileset}{VRMF},
$self->{FILESET}{$fileset}{DISK},
$self->{FILESET}{$fileset}{BOSBOOT},
$self->{FILESET}{$fileset}{CONTENT},
$self->{FILESET}{$fileset}{LANG},
$self->{FILESET}{$fileset}{DESCRIPTION}, "\n[\n";
for my $i ( 0 .. $#{$self->{FILESET}{$fileset}{REQ}} ) {
print $fh join ' ',@{${$self->{FILESET}{$fileset}{REQ}}[$i]},"\n";
}
print $fh "%\n";
foreach my $key (sort keys %{$self->{FILESET}{$fileset}{SIZEINFO}}) {
print $fh join ' ', $key,
$self->{FILESET}{$fileset}{SIZEINFO}{$key}, "\n";
}
print $fh "%\n%\n%\n%\n]\n";
}
use AIX::LPP::lpp_name;
$x = lpp_name->new();
$x->lpp(NAME => 'test.lpp',TYPE => 'I',PLATFORM => 'R',FORMAT => '4');
$x->fileset('test.lpp.rte', VRMF => '1.0.0.0',DISK => '01',BOSBOOT => 'N',
CONTENT => 'I', LANG => 'en_US', DESCRIPTION => 'test.lpp description',
COMMENTS => '');
my @reqs = ( ['*prereq','bos.rte','4.3.3.0'] );
$x->requisites('test.lpp.rte', \@reqs);
my %sizes = ( '/usr' => '5', '/etc' => '1' );
or
$x = lpp_name->read(\*in_fh);
my %lppdata = $x->lpp();
my %fsdata = $x->fileset('test.lpp.rte');
my $req_ref = $x->requisites('test.lpp.rte');
my $size_ref = $x->sizeinfo('test.lpp.rte');
=head1 DESCRIPTION
AIX::LPP::lpp_name is a class module for reading, creating, and modifying
AIX lpp_name files. The lpp_name file is an internal component of AIX
packages (called LPPs). LPPs consist of filesets and information about
installing them. This information can include: prerequisites, filesystem
requirements, copyrights, etc.
=head1 CONSTRUCTOR METHODS
=over 4
=item $x = lpp_name->new();
The simple form of the new constructor method creates an empty lpp_name
object. This object is then modified using lpp() and fileset() object
methods. Basic LPP information can also be passed to new() as follows: ...
=item $x = lpp_name->read(\*in_fh);
Alternatively, a new lpp_name object can be created by reading data from
=over 4
=item lpp()
=item fileset()
=item requisites()
=item sizeinfo()
{
XSRETURN_UNDEF;
}
if ((items == 2) && (!SvREADONLY((SV*)ST(1))))
{
sv_setpv((SV*)ST(1), name);
SvSETMAGIC(ST(1));
}
OUTPUT:
RETVAL
CLEANUP:
lib/ALBD.pm
# knowledge matrix.
#
# The explicit knowledge is read from UMLS::Association N11 matrix. This
# matrix contains the co-occurrence counts for all CUI pairs. The
# UMLS::Association database is completely independent from
# implementation, so any dataset, window size, or anything else may be used.
# Data is read in as a sparse matrix using the Discovery::tableToSparseMatrix
# function. This returns the primary data structures and variables used
# throughout LBD.
#
# Matrix representation:
# startingMatrix <- A matrix containing the explicit matrix rows for all of the
# start terms. This makes it easy to have multiple start terms
# and using this matrix as opposed to the entire explicit
# matrix drastically improves performance.
# explicitMatrix <- A matrix containing explicit connections (known connections)
# for every CUI in the dataset.
# implicitMatrix <- A matrix containing implicit connections (discovered
# connections) for every CUI in the dataset
package ALBD;
# output: none, but a results file is written to disk
sub performLBD {
my $self = shift;
my $start; #used to record run times
#implicit matrix ranking requires a different set of procedures
if ($lbdOptions{'rankingProcedure'} eq 'implicitMatrix') {
$self->performLBD_implicitMatrixRanking();
return;
}
if (exists $lbdOptions{'targetCuis'}) {
return $self;
}
# Initializes everything needed for Literature Based Discovery
# input: $optionsHashRef <- reference to LBD options hash (command line input)
# output: none, but global parameters are set
sub _initialize {
my $self = shift;
my $optionsHashRef = shift;
#initialize UMLS::Interface
# Reads the config file in as an options hash
# input: the name of a configuration file that has key fields in '<>'s,
# The '>' is followed directly by the value for that key, no space.
# Each line of the file contains a new key-value pair (e.g. <key>value)
# If no value is provided, a default value of 1 is set
# output: a hash ref to a hash containing each key value pair
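# For example, a configuration file might contain lines such as these
# (keys and values are illustrative only):
#   <rankingProcedure>implicitMatrix
#   <targetCuis>C0000000
#   <verbose>
# where 'verbose' receives the default value of 1.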
sub _readConfigFile {
my $self = shift;
my $configFileName = shift;
if ($2) {
#add key and value to the optionsHash
$optionsHash{$1} = $2;
}
else {
#add key and set default value to the optionsHash
$optionsHash{$1} = 1;
}
}
}
}
my $self = shift;
my $scoresRef = shift;
my $ranksRef = shift;
my $printTo = shift;
#set printTo
if (!$printTo) {
$printTo = scalar @{$ranksRef};
}
#construct the output string
lib/ALPM/Conf.pm
my($optsref, $sectref) = @_;
my %hooks;
while(my($fld, $opt) = each %CFGOPTS){
$hooks{$fld} = sub {
my $val = shift;
die qq{$fld can only be set in the [options] section\n}
unless($$sectref eq 'options');
$optsref->{$opt} = $val;
};
}
return %hooks;
my $new = { 'name' => $name };
push @$dbs, $new;
return $new;
}
sub _setsiglvl
{
my($dbs, $sect, $siglvl) = @_;
my $db = _getdb($dbs, $sect);
$db->{'siglvl'} = $siglvl;
return;
}
sub _addmirror
{
my($dbs, $url, $sect) = @_;
die "Section has not previously been declared, cannot set URL\n" unless($sect);
my $db = _getdb($dbs, $sect);
push @{$db->{'mirrors'}}, $url;
return;
}
sub _setopt
{
my($alpm, $opt, $valstr) = @_;
no strict 'refs';
my $meth = *{"ALPM::set_$opt"}{'CODE'};
die "The ALPM::set_$opt method is missing" unless($meth);
my @val = ($opt =~ /s$/ ? map { split } $valstr : $valstr);
return $meth->($alpm, @val);
}
sub _setarch
{
my($opts) = @_;
if(!$opts->{'arch'} || $opts->{'arch'} eq 'auto'){
chomp ($opts->{'arch'} = `uname -m`);
}
}
}
my $alpm = ALPM->new($root, $dbpath);
_setarch($opts);
while(my ($opt, $val) = each %$opts){
# The SetOption type in typemap croaks on error, no need to check.
_setopt($alpm, $opt, $val);
}
my $usesl = grep { /signatures/ } $alpm->caps;
for my $db (@$dbs){
my($r, $sl, $mirs) = @{$db}{'name', 'siglvl', 'mirrors'};
},
'SigLevel' => sub {
if($currsect eq 'options'){
$defsiglvl = _parse_siglvl(shift);
}else{
_setsiglvl(\@dbs, $currsect, _parse_siglvl(shift));
}
},
($self->{'cfields'} ? %{$self->{'cfields'}} : ()),
);
examples/amfclient.pl
my $service = 'Twitter';
my $method = 'search';
my $client = AMF::Connection->new( $endpoint );
$client->setEncoding(3);
#$client->setHTTPProxy('http://127.0.0.1:8888');
#$client->addHeader( 'serviceBrowser', 'true' );
$client->setHTTPCookieJar( HTTP::Cookies->new(file => "/tmp/lwpcookies.txt", autosave => 1, ignore_discard => 1 ) );
my $params = [ "italy" ];
my ($response) = $client->call( $service.'.'.$method, $params );
my $json = JSON->new;
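A hedged continuation (assuming the response object exposes its payload via getData(), as in the AMF::Connection synopsis):

	print $json->encode( $response->getData );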
ActionScript for this service:
#include "NetServices.as"
#include "NetDebug.as"
NetServices.setDefaultGatewayURL("http://host/cpu.pl");
connection = NetServices.createGatewayConnection();
remoteService = connection.getService("CpuUsage", this);