Result:
found more than 455 distributions - search limited to the first 2001 files matching your query


AAAA-Crypt-DH

inc/Devel/CheckLib.pm  view on Meta::CPAN


This can also be supplied on the command-line.

=item debug

If true, emit information during processing that can be used for
debugging (see the example after this list).

=back
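
A minimal illustrative call that turns on the debug option described above
(the library and header names are placeholders; Devel::CheckLib exports
check_lib_or_exit by default):

    use Devel::CheckLib;

    check_lib_or_exit(
        lib    => 'jpeg',
        header => 'jpeglib.h',
        debug  => 1,   # emit diagnostic output while probing
    );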

And libraries are no use without header files, so ...



AAC-Pvoice

Changes  view on Meta::CPAN


0.9 AAC::Pvoice::Bitmap
        * We now use Image::Magick to create the bitmaps. It's a change to
          the internals, so the methods and their parameters stay the same.
        * Returned images are cached first (using File::Cache). If an image
          has been processed before, it will be retrieved from the cache.
          The cache never expires, but every combination of parameters
          results in a new cached image (and of course the file modification time
          of the image is also taken into account).
    AAC::Pvoice::Dialog
        * This is a newly added class. It's a subclass of Wx::Dialog and



ABI

ABI.pm  view on Meta::CPAN

		my ($value) = $param{$key};
		delete $param{$key};
		push( @return_array, $value );
	}

#    print "\n_rearrange() after processing:\n";
#    my $i; for ($i=0;$i<@return_array;$i++) { printf "%20s => %s\n", ${$order}[$i], $return_array[$i]; } <STDIN>;
	return (@return_array);
}

sub _is_abi {



AC-DC

lib/AC/Daemon.pm  view on Meta::CPAN

        print PID "$$\n";
        print PID "# @argv\n";
        close PID;
    }

    # run as 2 processes
    while(1){
	$childpid = fork;
	die "cannot fork: $!\n" unless defined $childpid;
        if( $childpid ){
            # parent

lib/AC/Daemon.pm  view on Meta::CPAN

    my $name = shift;

    verbose( "caught signal SIG$_[0] - exiting" );

    if( $childpid > 1 ){
	# kill child process + wait for it to exit
        unlink "/var/run/$name.pid" if $name;
        kill "TERM", $childpid;
        wait;
    }



AC-MrGamoo

lib/AC/MrGamoo/AC/ReadInput.pm  view on Meta::CPAN

    return (undef, 1) unless defined $line;

    my $d;
    eval { $d = parse_dancr_log($line); };
    if( $@ ){
        problem("cannot parse data in (" . $R->config('current_file') . "). cannot process\n");
        return ;
    }

    # filter input on date range. we could just as easily filter
    # in 'map', but doing here, behind the scenes, keeps things



AC-Yenta

lib/AC/Yenta/Kibitz/Store/Client.pm  view on Meta::CPAN

    };
    if(my $e = $@){
        problem("cannot decode reply: $e");
    }

    # process
    $me->{_store_ok} = 1;
    if( $proto->{is_error} || $@ ){
        my $e = $@ || 'remote error';
        $me->run_callback('error', {
            error	=> $e,



ACL-Regex

examples/postifx-policy-server.pl  view on Meta::CPAN

	}

	return( join( " ", @a ) );
}

sub process_client($){
	my ($socket) = @_;

	# Create some stuff
	my $accept_acl = ACL->new->generate_required( 'required.txt' )->parse_acl_from_file( { Filename => "acl.permit.txt" } );
	my $reject_acl = ACL->new->generate_required( 'required.txt' )->parse_acl_from_file( { Filename => "acl.reject.txt" } );

examples/postifx-policy-server.pl  view on Meta::CPAN

  "Couldn't be a tcp server on port $default_config->{serverport} : $@\n";

# Generate a number of listener threads
my @threads = ();
for( 1 .. $TC ){
	my $thread = threads->create( \&process_client, $server );
	push( @threads, $thread );
}

foreach my $thread ( @threads ){
	$thread->join();



ACME-QuoteDB

lib/ACME/QuoteDB.pm  view on Meta::CPAN

    A: At a past company, a team I worked with had a test suite that, on a
    fully successful run (100%), printed a 'wisenheimer' success message
    (a quote, joke, or the like). Interestingly, it added a 'fun' factor to
    testing, not that one is needed of course ;). It was hard to justify
    spending company time to find and add decent content to that hand-rolled
    process; this module would have helped.

    Q: Don't you have anything better to do, like some non-trivial work?
    A: Yup

    Q: Hey Dood! why are u uzing Class::DBI as your ORM!?  Haven't your heard 



ADAMK-Release

inc/Module/Install.pm  view on Meta::CPAN

	}
	close FH or die "close($_[0]): $!";
}
END_OLD

# _version is for processing module versions (eg, 1.03_05) not
# Perl versions (eg, 5.8.1).
sub _version ($) {
	my $s = shift || 0;
	my $d =()= $s =~ /(\.)/g;
	if ( $d >= 2 ) {



AES128

ppport.h  view on Meta::CPAN

pregfree2||5.011000|
pregfree|||
prescan_version||5.011004|
printbuf|||
printf_nocontext|||vn
process_special_blocks|||
ptr_hash|||n
ptr_table_clear||5.009005|
ptr_table_fetch||5.009005|
ptr_table_find|||n
ptr_table_free||5.009005|



AFS-Command

lib/AFS/Command/Base.pm  view on Meta::CPAN

		}

	    }

	    unless ( waitpid($pid,0) ) {
		$self->_Carp("Unable to get status of child process ($pid)");
		return;
	    }

	    if ( $? ) {
		$self->_Carp("Error running @{$self->{command}} help.  Unable to configure $class");

lib/AFS/Command/Base.pm  view on Meta::CPAN

	    $arguments->{aliases}->{f} = 'force';
	}
    }

    unless ( waitpid($pid,0) ) {
	$self->_Carp("Unable to get status of child process ($pid)");
	$errors++;
    }

    if ( $? ) {
	$self->_Carp("Error running @command $operation -help.  Unable to configure @command $operation");



AFS-Monitor

examples/Meltdown.pl  view on Meta::CPAN

#!/usr/bin/perl
#
# Meltdown.pl - Used to collect stats on running AFS process with rxdebug.
#
# Original Implementation:
#	Unknown - Meltdown.csh, Meltdown.awk
#
# Change History:

examples/Meltdown.pl  view on Meta::CPAN


use blib;
use AFS::Monitor;

sub Usage {
	print STDERR "\n\n$progName: collect rxdebug stats on AFS process.\n";
	print STDERR "usage: $progName [options]\n";
	print STDERR "options:\n";
	print STDERR " -s <server>    (required parameter, no default).\n";
	print STDERR " -p <port>      (default: 7000).\n";
	print STDERR " -t <interval>  (default: 1200 seconds).\n";



AFS-PAG

lib/AFS/PAG.pm  view on Meta::CPAN

or libkopenafs libraries.

With the functions provided by this module, a Perl program can detect
whether AFS is available on the local system (hasafs()) and whether it is
currently running inside a PAG (haspag()).  It can also create a new PAG
and put the current process in it (setpag()) and remove any AFS tokens in
the current PAG (unlog()).
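
For example, a program that wants its tokens isolated from the rest of the
system might do something like this (a minimal sketch; it assumes the
functions are importable by name, and the call to the external B<aklog>
program simply follows the advice given below):

    use AFS::PAG qw(hasafs setpag unlog);

    if (hasafs()) {
        setpag();                          # isolate our tokens in a fresh PAG
        system('aklog') == 0
            or die "aklog failed: $?\n";   # obtain tokens via an external program
        # ... do AFS work here ...
        unlog();                           # drop the tokens when finished
    }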

Note that this module doesn't provide a direct way to obtain new AFS
tokens.  Programs that need AFS tokens should normally obtain Kerberos
tickets (via whatever means) and then run the program B<aklog>, which

lib/AFS/PAG.pm  view on Meta::CPAN

Returns true if the local host is running an AFS client and false
otherwise.

=item haspag()

Returns true if the current process is running inside a PAG and false
otherwise.  AFS tokens obtained outside of a PAG are visible to any
process on the system outside of a PAG running as the same UID.  AFS
tokens obtained inside a PAG are visible to any process in the same PAG,
regardless of UID.

=item setpag()

Creates a new, empty PAG and puts the current process in it.  This should
normally be called before obtaining new AFS tokens to isolate those tokens
from other processes on the system.  Returns true on success and throws
an exception on failure.

=item unlog()

Deletes all AFS tokens in the current PAG, similar to the action of



AFS

src/AFS.pm  view on Meta::CPAN

}

bootstrap AFS;

# Preloaded methods go here.  Autoload methods go after __END__, and are
# processed by the autosplit program.

1;
__END__



AI-ANN

lib/AI/ANN.pm  view on Meta::CPAN

The first neuron with iamanoutput=1 is output 0. The second is output 1.
I hope you're seeing the pattern...
minvalue is the minimum value a neuron can pass. Default 0.
maxvalue is the maximum value a neuron can pass. Default 1.
afunc is a reference to the activation function. It should be simple and fast.
    The activation function is processed /after/ minvalue and maxvalue.
dafunc is the derivative of the activation function.
We strongly advise that you memoize your afunc and dafunc if they are at all
    complicated. We will do our best to behave.
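
As an illustration of those last two parameters, here is one way to prepare
a memoized activation function and its derivative (a sketch only; it does
not show the full constructor call):

    use Memoize;

    # A sigmoid squashed into the default 0..1 range, and its derivative.
    sub sigmoid  { 1 / (1 + exp(-$_[0])) }
    sub dsigmoid { my $s = sigmoid($_[0]); $s * (1 - $s) }

    # Memoize both, as advised above, then pass \&sigmoid as afunc and
    # \&dsigmoid as dafunc.
    memoize('sigmoid');
    memoize('dsigmoid');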

=head2 execute



AI-Calibrate

lib/AI/Calibrate.pm  view on Meta::CPAN

probability of one to each positive instance and a probability of zero to each
negative instance, and puts each instance in its own group.  It then looks, at
each iteration, for adjacent violators: adjacent groups whose probabilities
locally increase rather than decrease.  When it finds such groups, it pools
them and replaces their probability estimates with the average of the group's
values.  It continues this process of averaging and replacement until the
entire sequence is monotonically decreasing.  The result is a sequence of
instances, each of which has a score and an associated probability estimate,
which can then be used to map scores into probability estimates.
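
The pooling step itself is small enough to sketch in a few lines (this is
illustrative code only, not the module's internal implementation):

    # @inst holds [score, label] pairs, already sorted by descending score.
    my @inst   = ( [0.9, 1], [0.8, 0], [0.7, 1], [0.4, 0] );
    my @groups = map { +{ sum => $_->[1], n => 1 } } @inst;

    my $i = 0;
    while ($i < $#groups) {
        my ($left, $right) = @groups[ $i, $i + 1 ];
        if ( $left->{sum} / $left->{n} < $right->{sum} / $right->{n} ) {
            # adjacent violators: pool the two groups and re-average
            splice @groups, $i, 2,
                { sum => $left->{sum} + $right->{sum}, n => $left->{n} + $right->{n} };
            $i-- if $i > 0;   # the merged group may now violate its left neighbour
        }
        else {
            $i++;
        }
    }
    # Each group's probability estimate is sum/n, applied to every
    # instance the group covers.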

For further information on the PAV algorithm, you can read the section in my



AI-Categorizer

lib/AI/Categorizer.pm  view on Meta::CPAN

modules.  The various details are flexible - for example, you can
choose what categorization algorithm to use, what features (words or
otherwise) of the documents should be used (or how to automatically
choose these features), what format the documents are in, and so on.

The basic process of using this module will typically involve
obtaining a collection of B<pre-categorized> documents, creating a
"knowledge set" representation of those documents, training a
categorizer on that knowledge set, and saving the trained categorizer
for later use.  There are several ways to carry out this process.  The
top-level C<AI::Categorizer> module provides an umbrella class for
high-level operations, or you may use the interfaces of the individual
classes in the framework.
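
In outline, the umbrella-class route looks something like this (a sketch
only; the constructor parameters are elided here, and the method names are
taken from the module's own high-level interface):

    use AI::Categorizer;

    my %params = ();   # corpus location, learner class, etc. (see the docs)
    my $c = AI::Categorizer->new(%params);

    $c->scan_features;        # decide which features to keep
    $c->read_training_set;    # build the knowledge set
    $c->train;                # train the categorizer
    $c->evaluate_test_set;    # test against held-out documents
    print $c->stats_table;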

A simple sample script that reads a training corpus, trains a

lib/AI/Categorizer.pm  view on Meta::CPAN

particular domain of knowledge, and there are many things a human
would consider that none of these algorithms consider.  These are only
statistical tests - at best they are neat tricks or helpful
assistants, and at worst they are totally unreliable.  If you plan to
use this module for anything really important, human supervision is
essential, both of the categorization process and the final results.

For the usage details, please see the documentation of each individual
module.

=head1 FRAMEWORK COMPONENTS

lib/AI/Categorizer.pm  view on Meta::CPAN


Deciding which features are the most important is a very large part of
the categorization task - you cannot simply consider all the words in
all the documents when training, and all the words in the document
being categorized.  There are two main reasons for this - first, it
would mean that your training and categorizing processes would take
forever and use tons of memory, and second, the significant stuff of
the documents would get lost in the "noise" of the insignificant stuff.

The process of selecting the most important features in the training
set is called "feature selection".  It is managed by the
C<AI::Categorizer::KnowledgeSet> class, and you will find the details
of feature selection processes in that class's documentation.

=head2 Collections

Because documents may be stored in lots of different formats, a
"collection" class has been created as an abstraction of a stored set



AI-DecisionTree

lib/AI/DecisionTree.pm  view on Meta::CPAN


=head1 DESCRIPTION

The C<AI::DecisionTree> module automatically creates so-called
"decision trees" to explain a set of training data.  A decision tree
is a kind of categorizer that uses a flowchart-like process for
categorizing new instances.  For instance, a learned decision tree
might look like the following, which classifies for the concept "play
tennis":

                   OUTLOOK

lib/AI/DecisionTree.pm  view on Meta::CPAN


In the current implementation, only discrete-valued attributes are
supported.  This means that an attribute like "temperature" can have
values like "cool", "medium", and "hot", but using actual temperatures
like 87 or 62.3 is not going to work.  This is because the values
would split the data too finely - the tree-building process would
probably think that it could make all its decisions based on the exact
temperature value alone, ignoring all other attributes, because each
temperature would have only been seen once in the training data.

The usual way to deal with this problem is for the tree-building
process to figure out how to place the continuous attribute values
into a set of bins (like "cool", "medium", and "hot") and then build
the tree based on these bin values.  Future versions of
C<AI::DecisionTree> may provide support for this.  For now, you have
to do it yourself.
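
A simple hand-rolled binning step might look like this (the cutoff values
are arbitrary and purely illustrative):

    use AI::DecisionTree;

    sub temperature_bin {
        my $t = shift;
        return $t < 65 ? 'cool' : $t < 80 ? 'medium' : 'hot';
    }

    my $dtree = AI::DecisionTree->new;
    $dtree->add_instance(
        attributes => { outlook => 'sunny', temperature => temperature_bin(87) },
        result     => 'no_play',
    );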



AI-Evolve-Befunge

lib/AI/Evolve/Befunge.pm  view on Meta::CPAN

it exists mainly to provide a version number to keep Build.PL happy.
The rest of this file acts as a quick-start guide to the rest of the
codebase.

The important bits from a user's standpoint are the Population object
(which drives the main process of evolving AI), and the Physics plugin
(which implements the rules of the universe those AI live in).  There
are sections below containing more detail on what these two things
are, and how they work.


lib/AI/Evolve/Befunge.pm  view on Meta::CPAN

The Population object is the main user interface to this project.
Basically you just keep running it over and over, and presumably, the
result gets better and better.

The important thing to remember here is that the results take time -
it will probably take several weeks of solid processing time before you
begin to see any promising results at all.  It takes a lot of random
code generation before it starts to generate code that does what you
want it to do.

If you don't know anything about Befunge, I recommend you read up on

lib/AI/Evolve/Befunge.pm  view on Meta::CPAN


After a population fights it out for a while, the winners are chosen
(who continue to live) and everyone else dies.  Then a new population
is generated from the winners, some random mutation (random
generation of code, as well as potentially resizing the codebase)
occurs, and the whole process starts over for the next generation.


=head1 PHYSICS

At the other end of all of this is the Physics plugin.  The Physics

lib/AI/Evolve/Befunge.pm  view on Meta::CPAN


=head1 MIGRATION

Further performance may be improved through the use of migration.

Migration is a very simple form of parallel processing.  It should scale
nearly linearly, and is a very effective means of increasing performance.

The idea is, you run multiple populations on multiple machines (one per
machine).  The only requirement is that each Population has a different
"hostname" setting.  And that's not really a requirement, it's just useful
for tracking down which host a critter came from.

When a Population object has finished processing a generation, there is
a chance that one or more (up to 3) of the surviving critters will be
written out to a special directory (which acts as an "outbox").

A separate networking program (implemented by Migrator.pm and spawned
automatically when creating a Population object) may pick up these
critters and broadcast them to some or all of the other nodes in a cluster
(deleting them from the "outbox" folder at the same time).  The instances
of this networking program on the other nodes will receive them, and write
them out to another special directory (which acts as an "inbox").

When a Population object has finished processing a generation, it also
checks the "inbox" directory for any new incoming
critters.  If they are detected, they are imported into the population
(and deleted from the "inbox").

Imported critters will compete in the next generation.  If they win,
they will be involved in the reproduction process and may contribute to
the local gene pool.

On the server end, a script called "migrationd" is provided to accept
connections and distribute critters between nodes.  The config file
specifies which server to connect to.  See the CONFIG FILE section,

lib/AI/Evolve/Befunge.pm  view on Meta::CPAN

=head1 CONFIG FILE

You can find an example config file ("example.conf") in the source
tarball.  It contains all of the variables with their default values,
and descriptions of each.  It lets you configure many important
parameters about how the evolutionary process works, so you probably
want to copy and edit it.

This file can be copied to ".ai-evolve-befunge" in your home
directory, or "/etc/ai-evolve-befunge.conf" for sitewide use.  If both
files exist, they are both loaded, in such a way that the homedir



AI-ExpertSystem-Advanced

inc/Module/Install.pm  view on Meta::CPAN

		print FH $_[$_] or die "print($_[0]): $!";
	}
	close FH or die "close($_[0]): $!";
}

# _version is for processing module versions (eg, 1.03_05) not
# Perl versions (eg, 5.8.1).
sub _version ($) {
	my $s = shift || 0;
	my $d =()= $s =~ /(\.)/g;
	if ( $d >= 2 ) {



AI-ExpertSystem-Simple

lib/AI/ExpertSystem/Simple.pm  view on Meta::CPAN

	$self->{_knowledge}->{$attribute}->set_question($text, @responses);

	eval { $t->purge(); }
}

sub process {
	my ($self) = @_;

	die "Simple->process() takes no arguments" if scalar(@_) != 1;

	my $n = $self->{_goal}->name();

	if($self->{_knowledge}->{$n}->is_value_set()) {
		return 'finished';

lib/AI/ExpertSystem/Simple.pm  view on Meta::CPAN

	my $value = $self->{_knowledge}->{$name}->get_value();

	my $x = "The goal '$name' was set to '$value' by " . ($rule ? "rule '$rule'" : 'asking a question' );
	$self->_add_to_log( $x );

	my @processed_rules;
	push( @processed_rules, $rule ) if $rule;

	$self->_explain_this( $rule, '', @processed_rules );
}

sub _explain_this {
	my ($self, $rule, $depth, @processed_rules) = @_;

	$self->_add_to_log( "${depth}Explaining rule '$rule'" );

	my %dont_do_these = map{ $_ => 1 } @processed_rules;

	my @check_these_rules = ();

	my %conditions = $self->{_rules}->{$rule}->conditions();
	foreach my $name (sort keys %conditions) {

lib/AI/ExpertSystem/Simple.pm  view on Meta::CPAN


		my $x = "$depth Action set '$name' to '$value'";
		$self->_add_to_log( $x );
	}

	@processed_rules = keys %dont_do_these;

	foreach my $x ( @check_these_rules ) {
		push( @processed_rules, $self->_explain_this( $x, "$depth ", keys %dont_do_these ) );
	}

	return @processed_rules;
}

1;

=head1 NAME

lib/AI/ExpertSystem/Simple.pm  view on Meta::CPAN

=item load( FILENAME )

This method takes the FILENAME of an XML knowledgebase and attempts to parse it to set up the data structures 
required for a consultation.

=item process( )

Once the knowledgebase is loaded, the consultation is run by repeatedly calling this method.

It returns one of four results:

lib/AI/ExpertSystem/Simple.pm  view on Meta::CPAN


The system has a question to ask of the user.

The question and list of valid responses are available from the get_question( ) method and the user's response should be returned via the answer( ) method. 

Then simply call the process( ) method again.

=item "continue"

The system has calculated some data but has nothing to ask the user and has still not finished.

This response will be removed in future versions.

Simply call the process( ) method again.

=item "finished"

The consultation has finished and the system has an answer for the user, which is available from the get_answer( ) method. A short driver loop illustrating these results follows.
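
This is a sketch only, where $simple is an AI::ExpertSystem::Simple object
whose load( ) method has already been called; it assumes get_question( )
returns the question text followed by the list of valid responses, as
described below:

    my $state = $simple->process();
    while ($state ne 'finished') {
        if ($state eq 'question') {
            my ($question, @responses) = $simple->get_question();
            print "$question [@responses] ";
            chomp(my $reply = <STDIN>);
            $simple->answer($reply);
        }
        # a "continue" result needs no action beyond calling process() again
        $state = $simple->process();
    }
    print "Answer: ", $simple->get_answer(), "\n";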

lib/AI/ExpertSystem/Simple.pm  view on Meta::CPAN


=back

=item get_question( )

If the process( ) method has returned "question" then this method will return the question to ask the user 
and a list of valid responses.

=item answer( VALUE )

The user has been presented with the question from the get_question( ) method along with a set of 
valid responses, and the user's selection is passed back to the system via this method.

=item get_answer( )

If the process( ) method has returned "finished" then the answer to the user's query will be 
returned by this method.

=item log( )

Returns a list of the actions undertaken so far and clears the log.

lib/AI/ExpertSystem/Simple.pm  view on Meta::CPAN

=item Simple->load() unable to use file

The file supplied to the load( ) method could not be used as it was either not a file 
or not readable.

=item Simple->process() takes no arguments

When the method is called it requires no arguments. This message is given if 
some arguments were supplied.

=item Simple->get_question() takes no arguments



AI-FANN-Evolving

lib/AI/FANN/Evolving/Experiment.pm  view on Meta::CPAN

=cut

sub error_func {
	my $self = shift;
	
	# process the argument
	if ( @_ ) {
		my $arg = shift;
		if ( ref $arg eq 'CODE' ) {
			$self->{'error_func'} = $arg;
			$log->info("using custom error function");



AI-FANN

ppport.h  view on Meta::CPAN

pregfree|||
prepend_elem|||
prepend_madprops|||
printbuf|||
printf_nocontext|||vn
process_special_blocks|||
ptr_table_clear||5.009005|
ptr_table_fetch||5.009005|
ptr_table_find|||n
ptr_table_free||5.009005|
ptr_table_new||5.009005|



AI-Gene-Sequence

AI/Gene/Sequence.pm  view on Meta::CPAN

algorithms in perl, for most purposes, it will be much slower
than if they were implemented in another more suitable language.
There are some problems which do lend themselves to an approach
in perl and these are the ones where the time between mutations
will be large, for instance, when composing music where the
selection process is driven by human whims.

=cut



AI-Genetic-Pro

lib/AI/Genetic/Pro.pm  view on Meta::CPAN


__END__

=head1 NAME

AI::Genetic::Pro - Efficient genetic algorithms for professional purposes with support for multiprocessing.

=head1 SYNOPSIS

    use AI::Genetic::Pro;
    

lib/AI/Genetic/Pro.pm  view on Meta::CPAN

    $ga->load('genetic.sga');

=head1 DESCRIPTION

This module provides an efficient implementation of a genetic algorithm for
professional purposes, with support for multiprocessing. It was designed to operate as fast as possible
even on very large populations and big individuals/chromosomes. C<AI::Genetic::Pro> 
was inspired by C<AI::Genetic>, so it is in most cases compatible 
(there are some changes). Additionally, C<AI::Genetic::Pro> isn't a pure Perl solution, so it 
doesn't have the limitations of its ancestor (such as a slow-down in the
case of big populations ( >10000 ) or vectors with more than 33 fields).

lib/AI/Genetic/Pro.pm  view on Meta::CPAN

(and should work on any other).

Multicore support is available through the Many-Core Engine (C<MCE>). 
You gain the most speed-up with big populations or time/CPU-consuming 
fitness functions; for small populations and/or a simple fitness 
function, the single-process version is the better choice.

You can get an even bigger speed-up if you turn on the use of native arrays 
(parameter: C<native>) instead of packing chromosomes into a single scalar. 
However, bear in mind the extra memory use in that case.
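
For instance (a sketch only; the constructor takes many more options than
shown here, and the fitness callback is just a placeholder):

    use AI::Genetic::Pro;

    sub fitness { my ($ga, $chromosome) = @_; return 0 }   # placeholder

    my $ga = AI::Genetic::Pro->new(
        -fitness    => \&fitness,   # your own fitness function
        -type       => 'bitvector',
        -population => 10_000,
        -native     => 1,           # native arrays instead of packed scalars
        -mce        => 1,           # use Many-Core Engine
        -workers    => 4,           # worker processes (default: one per core)
    );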

lib/AI/Genetic/Pro.pm  view on Meta::CPAN


This module was designed to use as little memory as possible. A population
of size 10000 consisting of 92-bit vectors uses only ~24MB (C<AI::Genetic> 
would use about 78MB). However, if you use MCE, memory consumption will be 
higher; this is a consequence of the synchronization needed 
between the many worker processes.

=item Advanced options

To provide more flexibility C<AI::Genetic::Pro> supports many 
statistical distributions, such as C<uniform>, C<natural>, C<chi_square>

lib/AI/Genetic/Pro.pm  view on Meta::CPAN

This defines whether native arrays should be used instead of packing each chromosome into a single scalar. 
Turning this option on can give you a speed-up, but much more memory will be used. Allowed values are 1 or 0 (default: I<0>).

=item -mce

This defines whether Many-Core Engine (MCE) should be used during processing. 
This can give you a significant speed-up on many-core/multi-CPU systems, but it'll 
increase memory consumption. Allowed values are 1 or 0 (default: I<0>).

=item -workers

This option is meaningful only if MCE is turned on. It defines how 
many worker processes will be used during processing. By default one process per core is used (most efficient).

=item -strict

This defines whether the check for modifying chromosomes in a user-defined fitness
function is active. Directly modifying chromosomes is not allowed and it is 



AI-LibNeural

LibNeural.pm  view on Meta::CPAN


bootstrap AI::LibNeural $VERSION;

# Preloaded methods go here.

# Autoload methods go after =cut, and are processed by the autosplit program.

1;
__END__

=head1 NAME



AI-Logic-AnswerSet

Makefile  view on Meta::CPAN

	  lib/AI/Logic/AnswerSet.pm $(INST_MAN3DIR)/AI::Logic::AnswerSet.$(MAN3EXT) 




# --- MakeMaker processPL section:


# --- MakeMaker installbin section:




AI-ML

inc/MyBuilder.pm  view on Meta::CPAN


    my $xs = path("XS", "ML.xs");
    my $xs_c = path("XS", "ML.c");

    if (!$self->up_to_date($xs, $xs_c)) {
        ExtUtils::ParseXS::process_file(
            filename => $xs->stringify, 
            prototypes => 0,
            output => $xs_c->stringify
        );
    }



AI-MXNet-Gluon-ModelZoo

examples/image_classification.pl  view on Meta::CPAN

    eval { require IO::Socket::SSL; };
    die "Need to have IO::Socket::SSL installed for https images" if $@;
}
$image = $image =~ /^https?/ ? download($image) : $image;

# Following the conventional way of preprocessing ImageNet data:
# Resize the short edge to 256 pixels,
# And then perform a center crop to obtain a 224-by-224 image.
# The following code uses the image processing functions provided 
# in the AI::MXNet::Image module.

$image = mx->image->imread($image);
$image = mx->image->resize_short($image, $model =~ /inception/ ? 330 : 256);
($image) = mx->image->center_crop($image, [($model =~ /inception/ ? 299 : 224)x2]);



AI-MXNet

lib/AI/MXNet/KVStore.pm  view on Meta::CPAN


=head2  set_optimizer

    Register an optimizer to the store

    If there are multiple machines, this process (which should be a worker node)
    will pack this optimizer and send it to all servers. It returns after
    this action is done.

    Parameters
    ----------

lib/AI/MXNet/KVStore.pm  view on Meta::CPAN


    Parameters
    ----------
    name : {'local'}
        The type of KVStore
        - local works for multiple devices on a single machine (single process)
        - dist works for multiple machines (multiple processes)
    Returns
    -------
    kv : KVStore
        The created AI::MXNet::KVStore
=cut
