Result:
found more than 674 distributions - search limited to the first 2001 files matching your query ( run in 1.785 )


Algorithm-GDiffDelta

 view release on metacpan or  search on metacpan

GDiffDelta.pm  view on Meta::CPAN


=head1 DESCRIPTION

This module can be used to generate binary deltas describing the
differences between two files.  Given the first file and the
delta the second file can be reconstructed.

A delta is equivalent to the output of the unix C<diff> program,
except that it can efficiently represent the differences between
similar binary files, containing any sequences of bytes.  These
deltas can be used for updating files over a network (as C<rsync>
does) or for efficiently storing a revision history of changes to
a file (as Subversion does).

There are several formats used for binary deltas.  The one supported
by this module is the GDIFF format, which is fairly simple and is
documented as a W3C note (See the SEE ALSO section below).

This module generates and processes deltas using file handles.
It supports both native Perl file handles (created with the built-in
C<open> function) and objects that support the right methods.
For an object to work it must support at least the C<read>, C<seek>,

GDiffDelta.pm  view on Meta::CPAN

and output, by wrapping them in an L<IO::Scalar|IO::Scalar> object
or similar.  A future version of this module might support reading
and writing directly through references to scalars, because that
should be much more efficient.

See the section ALGORITHM AND DELTA FORMAT below for some notes on
the algorithm used by this module and how the GDIFF delta format works.

=head1 FUNCTIONS

No functions are exported by default.  Pass the function names to

GDiffDelta.pm  view on Meta::CPAN

large file to be checksummed in separate chunks.

Another implementation of Adler-32 is provided in the
L<Digest::Adler32|Digest::Adler32> module.

The Adler-32 checksum algorithm is defined in RFC 1950, section 8.2.
Sample code in C is also provided there.
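As a hedged illustration of that RFC 1950 definition (a from-scratch sketch, not this module's own code), Adler-32 fits in a few lines of Perl:

```perl
use strict;
use warnings;

# Adler-32 per RFC 1950, section 8.2: two running sums modulo 65521
# (the largest prime below 2**16), combined as (b << 16) | a.
sub adler32 {
    my ($data) = @_;
    my ($a, $b) = (1, 0);
    for my $byte (unpack 'C*', $data) {
        $a = ($a + $byte) % 65521;
        $b = ($b + $a)    % 65521;
    }
    return ($b << 16) | $a;
}

printf "%08x\n", adler32('Wikipedia');    # 11e60398
```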

=item gdiff_apply(I<$file1>, I<$file2>, I<$delta_file>)

Takes three file handles.  The first two are read from, and it must

 view all matches for this distribution


Algorithm-GoldenSection


lib/Algorithm/GoldenSection.pm  view on Meta::CPAN

To isolate a minimum of a univariate function, the minimum must first be bracketed.
Consequently the program first bounds a minimum - i.e., it initially creates a triplet of points:
x_low < x_int < x_high, such that f(x_int) is lower than both f(x_low) and f(x_high). This ensures that there
is a local minimum within the interval x_low-x_high. The program then uses the Golden Section Search algorithm
to successively narrow the bounded region until the minimum is isolated.
See http://en.wikipedia.org/wiki/Golden_section_search and
http://www.gnu.org/software/gsl/manual/html_node/One-dimensional-Minimization.html.

The module provides a Perl5 OO interface. Simply construct an Algorithm::GoldenSection object with appropriate parameters
- see L</SYNOPSIS>. Then call the C<minimise> method. This returns a LIST of the value of x at the minimum, the value of
f(x) at the minimum, and the number of iterations used to isolate the minimum.
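The narrowing step described above can be sketched as a stand-alone Perl routine (illustrative only, assuming an already-bracketed interval; this is not the module's implementation):

```perl
use strict;
use warnings;

# Golden Section Search over an interval [$a, $b] known to bracket a minimum.
sub golden_minimise {
    my ($f, $a, $b, $tol) = @_;
    my $invphi = (sqrt(5) - 1) / 2;    # ~0.618, the inverse golden ratio
    my $iterations = 0;
    while ($b - $a > $tol) {
        my $c = $b - $invphi * ($b - $a);
        my $d = $a + $invphi * ($b - $a);
        # keep the sub-interval whose interior point has the lower f value
        if ($f->($c) < $f->($d)) { $b = $d } else { $a = $c }
        $iterations++;
    }
    my $x_min = ($a + $b) / 2;
    return ($x_min, $f->($x_min), $iterations);    # the LIST described above
}

my ($x, $fx, $n) = golden_minimise(sub { ($_[0] - 2)**2 }, 0, 5, 1e-6);
printf "minimum near x=%.4f after %d iterations\n", $x, $n;
```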



Algorithm-Graphs-TransitiveClosure-Tiny


lib/Algorithm/Graphs/TransitiveClosure/Tiny.pm  view on Meta::CPAN


   {
    this => {that => undef}
   }

This behavior can be changed by setting the optional second argument of
C<floyd_warshall> to a true value, i.e., calling C<floyd_warshall($graph, 1)>
with the above example hash will not remove C<that =E<gt> {}>.


=back
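The hash-of-hashes closure being described can be sketched in a few lines (C<closure> is a hypothetical helper written for illustration; the module's real entry point is C<floyd_warshall>, and this sketch does not model the empty-hash cleanup discussed above):

```perl
use strict;
use warnings;

# Transitive closure of a graph given as a hash of hashes:
# $graph->{$from}{$to} means there is an edge from $from to $to.
sub closure {
    my ($graph) = @_;
    my @nodes = keys %$graph;
    for my $k (@nodes) {        # intermediate node, outermost as in Floyd-Warshall
        for my $i (@nodes) {
            next unless exists $graph->{$i}{$k};
            # everything reachable from $k becomes reachable from $i
            $graph->{$i}{$_} = undef for keys %{ $graph->{$k} };
        }
    }
    return $graph;
}

my $g = { this => { that => undef }, that => { other => undef } };
closure($g);
# 'this' now reaches 'other' via 'that'
```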



Algorithm-Gutter


lib/Algorithm/Gutter.pm  view on Meta::CPAN

# -*- Perl -*-
#
# Algorithm::Gutter - cellular automata to simulate rain in a gutter,
# or, "the hundred and forty-second worst drum machine in the West".

package Algorithm::Gutter;
use 5.26.0;
use Object::Pad 0.66;
our $VERSION = '0.02';



Algorithm-HITS-Lite


inc/Module/Install/Metadata.pm  view on Meta::CPAN

    }

    $self->version_from($file)      unless $self->version;
    $self->perl_version_from($file) unless $self->perl_version;

    # The remaining probes read from POD sections; if the file
    # has an accompanying .pod, use that instead
    my $pod = $file;
    if ( $pod =~ s/\.pm$/.pod/i and -e $pod ) {
        $file = $pod;
    }

inc/Module/Install/Metadata.pm  view on Meta::CPAN

    my $features = ( $self->{values}{features} ||= [] );

    my $mods;

    if ( @_ == 1 and ref( $_[0] ) ) {
        # The user used ->feature like ->features by passing in the second
        # argument as a reference.  Accommodate for that.
        $mods = $_[0];
    } else {
        $mods = \@_;
    }



Algorithm-Hamming-Perl


Perl.pm  view on Meta::CPAN

# hamming_faster - this subroutine builds two large hashes of,
#		%Hamming8by2	  2 byte data -> 3 byte Hamming code
#		%Hamming8by2rev	  3 byte Hamming code -> 2 byte data
#	for faster encodings and decodings. Running this subroutine is 
#	optional. If it is used then conversions are faster, however more 
#	memory is used to store the hashes, and a couple of seconds is added 
#	to the startup time. If it is not used, conversions are slower -
#	taking up to 5 times the usual time. A good measure is the amount of data you
#	wish to encode - more than 1 MB would benefit from this subroutine.
#
sub hamming_faster {

Perl.pm  view on Meta::CPAN

#
sub hamming {
	my $data = shift;	# input data
	my $pos;		# counter to step through data string
	my $char_in1;		# first input byte
	my $char_in2;		# second input byte
	my $chars_in;		# both input bytes
	my $ham_text;		# hamming code in binary text "0101.."
	my $char_out;		# hamming code as bytes
	my $output = "";	# full output hamming code as bytes

Perl.pm  view on Meta::CPAN

	my $pos;		# counter to step through data string
	my $err;		# corrected bit error
	my $chars_in;		# input bytes
	my $ham_text;		# hamming code in binary text "0101..", 2 bytes
	my $ham_text1;		# hamming code for first byte
	my $ham_text2;		# hamming code for second byte
	my $char_out1;		# output data byte 1
	my $char_out2;		# output data byte 2
	my $output = "";	# full output data as bytes
	my $err_all = 0;	# count of corrected bit errors

Perl.pm  view on Meta::CPAN

=item Algorithm::Hamming::Perl::hamming_faster ()

This is an optional subroutine that will speed up Hamming encoding if it is
run once at the start of the program. It does this by using a larger (hash)
cache of preprocessed results. The disadvantage is that it uses more memory,
and can add several seconds to invocation time. Only use this if you are
encoding more than 1 MB of data.

=back
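The encode/correct cycle being documented can be illustrated with the classic Hamming(7,4) code (4 data bits, 3 parity bits). This shows the single-bit-correction principle only, not the module's 2-byte-to-3-byte format:

```perl
use strict;
use warnings;

# Hamming(7,4): parity bits sit at positions 1, 2 and 4, so the XOR of the
# positions of all set bits is 0 for a valid codeword, and names the flipped
# position after a single-bit error.
sub encode74 {
    my @d = @_;    # four data bits
    my $p1 = $d[0] ^ $d[1] ^ $d[3];
    my $p2 = $d[0] ^ $d[2] ^ $d[3];
    my $p3 = $d[1] ^ $d[2] ^ $d[3];
    return ($p1, $p2, $d[0], $p3, $d[1], $d[2], $d[3]);    # positions 1..7
}

sub correct74 {
    my @c = @_;    # seven received bits
    my $syndrome = 0;
    $c[$_ - 1] and $syndrome ^= $_ for 1 .. 7;
    $c[$syndrome - 1] ^= 1 if $syndrome;    # repair the named position
    return @c;
}

my @sent     = encode74(1, 0, 1, 1);
my @received = @sent;
$received[2] ^= 1;                          # inject a single-bit channel error
my @fixed    = correct74(@received);        # identical to @sent again
```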

=head1 INSTALLATION



Algorithm-Heapify-XS


ppport.h  view on Meta::CPAN

The default namespace is C<DPPP_>.

=back

The good thing is that most of the above can be checked by running
F<ppport.h> on your source code. See the next section for
details.

=head1 EXAMPLES

To verify whether F<ppport.h> is needed for your module, whether you

ppport.h  view on Meta::CPAN

intuit_more|||
invert|||
invlist_array|||
invlist_destroy|||
invlist_extend|||
invlist_intersection|||
invlist_len|||
invlist_max|||
invlist_set_array|||
invlist_set_len|||
invlist_set_max|||

ppport.h  view on Meta::CPAN

 * 1. #define MY_CXT_KEY to a unique string, e.g. "DynaLoader_guts"
 * 2. Declare a typedef named my_cxt_t that is a structure that contains
 *    all the data that needs to be interpreter-local.
 * 3. Use the START_MY_CXT macro after the declaration of my_cxt_t.
 * 4. Use the MY_CXT_INIT macro such that it is called exactly once
 *    (typically put in the BOOT: section).
 * 5. Use the members of the my_cxt_t structure everywhere as
 *    MY_CXT.member.
 * 6. Use the dMY_CXT macro (a declaration) in all the functions that
 *    access MY_CXT.
 */



Algorithm-History-Levels


lib/Algorithm/History/Levels.pm  view on Meta::CPAN

         'mydb.2017-06-10.sql.gz',
         'mydb.2017-06-09.sql.gz',
         'mydb.2017-06-08.sql.gz',
         'mydb.2017-06-07.sql.gz'],

        # histories for the second level
        ['mydb.2017-06-06.sql.gz',
         'mydb.2017-05-30.sql.gz',
         'mydb.2017-05-23.sql.gz',
         'mydb.2017-05-16.sql.gz'],

lib/Algorithm/History/Levels.pm  view on Meta::CPAN

      'mydb.2017-06-10.sql.gz',
      'mydb.2017-06-09.sql.gz',
      'mydb.2017-06-08.sql.gz',
      'mydb.2017-06-07.sql.gz'],
 
     # histories for the second level
     ['mydb.2017-06-06.sql.gz',
      'mydb.2017-05-30.sql.gz',
      'mydb.2017-05-23.sql.gz',
      'mydb.2017-05-16.sql.gz'],
 



Algorithm-HowSimilar


HowSimilar.pm  view on Meta::CPAN

    @res = compare( 'this is a string',
                    'this is a string ',
                    sub { s/\s+//g; [split //] }
                  );

You already get the intersection of the strings or arrays. You can get the
union like this:

    @res = compare( $str1, $str2 );
    $intersection = $res[3];
    $union = $res[3].$res[4].$res[5];
    @res = compare( \@ary1, \@ary2 );
    @intersection = @{$res[3]};
    @union = ( @{$res[3]}, @{$res[4]}, @{$res[5]} );

=head2 EXPORT

None by default.



Algorithm-IRCSRP2


lib/Algorithm/IRCSRP2.pm  view on Meta::CPAN

L<http://www.bjrn.se/ircsrp/ircsrp.2.0.txt>.

From the specification:

   IRCSRP is based on the SRP-6 protocol [3] for password-authenticated key
   agreement. While SRP was originally designed for establishing a secure,
   authenticated channel between a user and a host, it can be adapted for group
   communications, as described in this document.

See L<https://gitorious.org/ircsrp/ircsrp> for a working version used in Pidgin.



Algorithm-IncludeExclude


inc/Module/Install/AutoInstall.pm  view on Meta::CPAN


    $self->makemaker_args( Module::AutoInstall::_make_args() );

    my $class = ref($self);
    $self->postamble(
        "# --- $class section:\n" .
        Module::AutoInstall::postamble()
    );
}

sub auto_install_now {



Algorithm-KMeans


examples/cluster_and_visualize.pl  view on Meta::CPAN

# The mask tells the module which columns of the data file are to be used for
# clustering, which columns are to be ignored, and which column contains a symbolic
# ID tag for a data point.  If the ID tag is in the first column and you are
# clustering 3D data in a file that has just four columns, the mask would be "N111".
# Note the first character in the mask in this case is `N' for "Name".  If, on the
# other hand, you wanted to ignore the first data coordinate (which is in the second
# column of the data file) for clustering, the mask would be "N011".  The symbolic ID
# can be in any column --- you just have to place the character `N' at the right
# place:
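As a hedged sketch of what such a mask means (C<apply_mask> is a hypothetical helper written for illustration, not part of the module's API; the module interprets the mask internally):

```perl
use strict;
use warnings;

# Interpret a mask such as "N011" against one record of a data file:
# 'N' marks the symbolic ID column, '1' a coordinate to cluster on,
# '0' a column to ignore.
sub apply_mask {
    my ($mask, @columns) = @_;
    my ($id, @coords);
    my @flags = split //, $mask;
    for my $i (0 .. $#flags) {
        if    ($flags[$i] eq 'N') { $id = $columns[$i] }
        elsif ($flags[$i] eq '1') { push @coords, $columns[$i] }
    }
    return ($id, \@coords);
}

my ($id, $coords) = apply_mask('N011', 'point_23', 4.7, 1.2, 9.5);
# $id is 'point_23'; @$coords is (1.2, 9.5) -- column 2 (4.7) is ignored
```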




Algorithm-KNN-XS


LibANNInterface.xs  view on Meta::CPAN

#endif

/* include your class headers here */
#include "lib_ann_interface.h"

/* We need one MODULE... line to start the actual XS section of the file.
 * The XS++ preprocessor will output its own MODULE and PACKAGE lines */
MODULE = Algorithm::KNN::XS		PACKAGE = Algorithm::KNN::XS

## The include line executes xspp with the supplied typemap and the
## xsp interface code for our class.



Algorithm-Kademlia


CODE_OF_CONDUCT.md  view on Meta::CPAN

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
https://github.com/sanko/Algorithm-Kademlia/discussions.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining



Algorithm-KernelKMeans


inc/Module/Install/Metadata.pm  view on Meta::CPAN

	my $name     = shift;
	my $features = ( $self->{values}->{features} ||= [] );
	my $mods;

	if ( @_ == 1 and ref( $_[0] ) ) {
		# The user used ->feature like ->features by passing in the second
		# argument as a reference.  Accommodate for that.
		$mods = $_[0];
	} else {
		$mods = \@_;
	}



Algorithm-Kmeanspp


inc/Module/Install/Metadata.pm  view on Meta::CPAN

	my $name     = shift;
	my $features = ( $self->{values}->{features} ||= [] );
	my $mods;

	if ( @_ == 1 and ref( $_[0] ) ) {
		# The user used ->feature like ->features by passing in the second
		# argument as a reference.  Accommodate for that.
		$mods = $_[0];
	} else {
		$mods = \@_;
	}



Algorithm-Knap01DP


lib/Algorithm/Knap01DP.pm  view on Meta::CPAN

    3
    8
    8
    
This corresponds to a problem with N=6, M=30 (objects, capacity);
the following consecutive pairs of lines then hold the weight and
profit (in that order) of the different objects.
A program illustrating the use of the module follows:

    $ cat -n example.pl
    1  use strict;



Algorithm-Knapsack


t/test.t  view on Meta::CPAN

$knapsack->compute();

my @solutions = $knapsack->solutions();
ok($#solutions == 2, 'found 3 solutions');
ok(join(',', @{ $solutions[0] }) eq '0,1,3',   'first solution is correct');
ok(join(',', @{ $solutions[1] }) eq '0,1,4,5', 'second solution is correct');
ok(join(',', @{ $solutions[2] }) eq '0,2,3,4', 'third solution is correct');

ok($knapsack->{emptiness} == 0, 'emptiness is 0');



Algorithm-LBFGS


inc/Module/Install/AutoInstall.pm  view on Meta::CPAN


    $self->makemaker_args( Module::AutoInstall::_make_args() );

    my $class = ref($self);
    $self->postamble(
        "# --- $class section:\n" .
        Module::AutoInstall::postamble()
    );
}

sub auto_install_now {



Algorithm-LUHN


lib/Algorithm/LUHN.pm  view on Meta::CPAN


=head1 DESCRIPTION

This module calculates the Modulus 10 Double Add Double checksum, also known as
the LUHN Formula. This algorithm is used to verify credit card numbers and
Standard & Poor's security identifiers such as CUSIP's and CSIN's.

You can find plenty of information about the algorithm by searching the web for
"modulus 10 double add double".
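A from-scratch sketch of that "modulus 10 double add double" computation (illustrative only; see the FUNCTION section for the module's actual interface):

```perl
use strict;
use warnings;

# Walking right to left, double every second digit, fold two-digit results
# back to a single digit, and choose the check digit that brings the total
# to a multiple of 10.
sub luhn_check_digit {
    my ($number) = @_;
    my ($sum, $double) = (0, 1);    # rightmost payload digit gets doubled
    for my $d (reverse split //, $number) {
        if ($double) {
            $d *= 2;
            $d -= 9 if $d > 9;      # same as summing the two digits
        }
        $sum += $d;
        $double = !$double;
    }
    return (10 - $sum % 10) % 10;
}

print luhn_check_digit('7992739871'), "\n";    # 3, giving 79927398713
```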

=head1 FUNCTION

lib/Algorithm/LUHN.pm  view on Meta::CPAN


L<Math::CheckDigits> implements yet another approach to check digits.

I have also written a
L<review of LUHN modules|http://neilb.org/reviews/luhn.html>,
which covers them in more detail than this section.

=head1 REPOSITORY

L<https://github.com/neilb/Algorithm-LUHN>



Algorithm-LUHN_XS


lib/Algorithm/LUHN_XS.pm  view on Meta::CPAN


How much faster? Here's a benchmark, running on a 3.4GHz i7-2600:

C<Benchmark: timing 100 iterations>

C<Algorithm::LUHN: 69 secs (69.37 usr 0.00 sys)  1.44/s>

C<check_digit:      2 secs ( 1.98 usr 0.00 sys) 50.51/s>

C<check_digit_fast: 2 secs ( 1.68 usr 0.00 sys) 59.52/s>

C<check_digit_rff:  1 secs ( 1.29 usr 0.00 sys) 77.52/s>

So, it's 35x to 53x faster than the original pure Perl module, depending on
how much compatibility with the original module you need.  

The rest of the documentation is mostly a copy of the original docs, with some
additions for functions that are new.

This module calculates the Modulus 10 Double Add Double checksum, also known as
the LUHN Formula. This algorithm is used to verify credit card numbers and
Standard & Poor's security identifiers such as CUSIP's and CSIN's.

You can find plenty of information about the algorithm by searching the web for
"modulus 10 double add double".

=head1 FUNCTION

lib/Algorithm/LUHN_XS.pm  view on Meta::CPAN

faster than the check_digit() that comes in the original pure Perl 
Algorithm::LUHN module.  Here's a benchmark of 1M total calls to is_valid():

C<Benchmark: timing 100 iterations>

C<Algorithm::LUHN: 100 secs (100.29 usr 0.01 sys)  1.00/s>

C<is_valid:          3 secs (  2.46 usr 0.11 sys) 38.91/s>

C<is_valid_fast:     2 secs (  2.38 usr 0.05 sys) 41.15/s> 

C<is_valid_rff:      2 secs (  1.97 usr 0.08 sys) 48.78/s>

Algorithm::LUHN_XS is 38x to 48x faster than the original
pure perl Algorithm::LUHN module. The is_valid() routine is 100% compatible
with the original, returning either '1' for success or the empty string ''
for failure.   The is_valid_fast() routine returns 1 for success and 0 for 

lib/Algorithm/LUHN_XS.pm  view on Meta::CPAN


L<Math::CheckDigits> implements yet another approach to check digits.

Neil Bowers has also written a
L<review of LUHN modules|http://neilb.org/reviews/luhn.html>,
which covers them in more detail than this section.

=head1 REPOSITORY

L<https://github.com/krschwab/Algorithm-LUHN_XS>



Algorithm-LeakyBucket


lib/Algorithm/LeakyBucket.pm  view on Meta::CPAN

Algorithm::LeakyBucket - Perl implementation of leaky bucket rate limiting

=head1 SYNOPSIS

 use Algorithm::LeakyBucket;
 my $bucket = Algorithm::LeakyBucket->new( ticks => 1, seconds => 1 ); # one per second

 while($something_happening)
 {
     if ($bucket->tick)
     {
         # allowed
         do_something();
         # maybe decide to change limits?
         $bucket->ticks(2);
         $bucket->seconds(5);
     }
 }


=head1 CONSTRUCTOR

There are two required options to get the module to do anything useful.  C<ticks> and C<seconds> set the number of 
ticks allowed per time period.  If C<ticks> is 3 and C<seconds> is 14, you will be able to run 3 ticks every 14 
seconds.  Optionally you can pass C<memcached_servers> and C<memcached_key> to distribute the limiting across multiple
processes.


 my $bucket = Algorithm::LeakyBucket->new( ticks => $ticks, seconds => $every_x_seconds,
                                  memcached_key => 'some_key',
                                  memcached_servers => [ { address => 'localhost:11211' } ] );

=head1 DESCRIPTION

Implements leaky bucket as a rate limiter.  While the code will do rate limiting for a single process, it was intended
as a limiter for multiple processes. (But see the BUGS section)

The syntax of the C<memcached_servers> argument should be the syntax expected by the local memcache module.  If
Cache::Memcached::Fast is installed, use its syntax, otherwise you can use the syntax for Cache::Memcached.  If 
neither module is found it will use a locally defined set of vars internally to track rate limiting.  Obviously
this keeps the code from being used across processes. 
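The single-process behaviour described above amounts to a token-refill loop. A stand-alone sketch (C<TinyBucket> is a hypothetical class written for illustration; it omits all of the module's memcached handling):

```perl
use strict;
use warnings;

package TinyBucket {
    sub new {
        my ($class, %args) = @_;
        return bless {
            ticks   => $args{ticks},      # ticks allowed ...
            seconds => $args{seconds},    # ... per this many seconds
            allowed => $args{ticks},      # bucket starts full
            last    => $args{now} // time(),
        }, $class;
    }

    sub tick {
        my ($self, $now) = @_;
        $now //= time();
        my $passed = $now - $self->{last};
        $self->{last} = $now;
        # refill at ticks/seconds tokens per second, capped at the maximum
        $self->{allowed} += $passed * $self->{ticks} / $self->{seconds};
        $self->{allowed} = $self->{ticks} if $self->{allowed} > $self->{ticks};
        return 0 if $self->{allowed} < 1;
        $self->{allowed} -= 1;            # spend one token
        return 1;
    }
}

my $bucket = TinyBucket->new(ticks => 3, seconds => 14, now => 0);
# the first three calls to $bucket->tick(0) pass; the fourth is limited
# until enough simulated time passes for the bucket to refill
```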

lib/Algorithm/LeakyBucket.pm  view on Meta::CPAN

		$self->{__ticks} = $value;
	}
	return $self->{__ticks};
}

sub seconds
{
        my ($self, $value) = @_;
        if (defined($value))
        {
                $self->{__seconds} = $value;
        }
        return $self->{__seconds};
}

sub current_allowed
{
        my ($self, $value) = @_;

lib/Algorithm/LeakyBucket.pm  view on Meta::CPAN

	{
		# init from mc
		$self->mc_sync;
	}
	
	# seconds since last tick
	my $now = time();
	my $seconds_passed = $now - $self->last_tick;
	$self->last_tick( time() );

	# add tokens to bucket
	my $current_ticks_allowed = $self->current_allowed + ( $seconds_passed * ( $self->ticks / $self->seconds ));
	$self->current_allowed( $current_ticks_allowed );

	if ($current_ticks_allowed > $self->ticks)
	{
		$self->current_allowed($self->ticks);



Algorithm-Line-Bresenham


lib/Algorithm/Line/Bresenham.pm  view on Meta::CPAN

     }
    $t = ($x0-$x1)/$t;
    $r = (1-$t)*((1-$t)*$y0+2.0*$t*$y1)+$t*$t*$y2;# /* By(t=P4) */
    $t = ($x0*$x2-$x1*$x1)*$t/($x0-$x1); #/* gradient dP4/dx=0 */
    $x = int($t+0.5); $y = int($r+0.5);
    $r = ($y1-$y0)*($t-$x0)/($x1-$x0)+$y0; #/* intersect P3 | P0 P1 */
    push @points, basic_bezier($x0,$y0, $x,int($r+0.5), $x,$y);
    $r = ($y1-$y2)*($t-$x2)/($x1-$x2)+$y2; #/* intersect P4 | P1 P2 */
    $x0 = $x1 = $x; $y0 = $y; $y1 = int($r+0.5);# /* P0 = P4, P1 = P8 */
  }
  if (($y0-$y1)*($y2-$y1) > 0) { #/* vertical cut at P6? */
    $t = $y0-2*$y1+$y2; $t = ($y0-$y1)/$t;
    $r = (1-$t)*((1-$t)*$x0+2.0*$t*$x1)+$t*$t*$x2; # /* Bx(t=P6) */
    $t = ($y0*$y2-$y1*$y1)*$t/($y0-$y1); #/* gradient dP6/dy=0 */
    $x = int($r+0.5); $y = int($t+0.5);
    $r = ($x1-$x0)*($t-$y0)/($y1-$y0)+$x0; #/* intersect P6 | P0 P1 */
     push @points, basic_bezier($x0,$y0, int($r+0.5),$y, $x,$y);
    $r = ($x1-$x2)*($t-$y2)/($y1-$y2)+$x2; #/* intersect P7 | P1 P2 */
    $x0 = $x; $x1 = int($r+0.5); $y0 = $y1 = $y;# /* P0 = P6, P1 = P7 */
  }
   push @points, basic_bezier($x0,$y0, $x1,$y1, $x2,$y2); #/* remaining part */
   return @points;
}



Algorithm-Line-Lerp


ppport.h  view on Meta::CPAN

The default namespace is C<DPPP_>.

=back

The good thing is that most of the above can be checked by running
F<ppport.h> on your source code. See the next section for
details.

=head1 EXAMPLES

To verify whether F<ppport.h> is needed for your module, whether you

ppport.h  view on Meta::CPAN

invlist_contents|5.023008||Viu
_invlist_dump|5.019003||cViu
_invlistEQ|5.023006||cViu
invlist_extend|5.013010||Viu
invlist_highest|5.017002||Vniu
_invlist_intersection|5.015001||Viu
_invlist_intersection_maybe_complement_2nd|5.015008||cViu
_invlist_invert|5.015001||cViu
invlist_is_iterating|5.017008||Vniu
invlist_iterfinish|5.017008||Vniu
invlist_iterinit|5.015001||Vniu
invlist_iternext|5.015001||Vniu

ppport.h  view on Meta::CPAN

PL_scopestack|5.005000||Viu
PL_scopestack_ix|5.005000||Viu
PL_scopestack_max|5.005000||Viu
PL_scopestack_name|5.011002||Viu
PL_SCX_invlist|5.027008||Viu
PL_secondgv|5.005000||Viu
PL_setlocale_buf|5.027009||Viu
PL_setlocale_bufsize|5.027009||Viu
PL_sharehook|5.007003||Viu
PL_sighandler1p|5.031007||Viu
PL_sighandler3p|5.031007||Viu

ppport.h  view on Meta::CPAN

ssc_clear_locale|5.019005||Vniu
ssc_cp_and|5.019005||Viu
ssc_finalize|5.019005||Viu
SSCHECK|5.003007||Viu
ssc_init|5.019005||Viu
ssc_intersection|5.019005||Viu
ssc_is_anything|5.019005||Vniu
ssc_is_cp_posixl_init|5.019005||Vniu
SSC_MATCHES_EMPTY_STRING|5.021004||Viu
ssc_or|5.019005||Viu
ssc_union|5.019005||Viu

ppport.h  view on Meta::CPAN

  my $f;
  my $count = 0;
  my $match = $opt{'api-info'} =~ m!^/(.*)/$! ? $1 : "^\Q$opt{'api-info'}\E\$";

  # Sort the names, and split into two classes; one for things that are part of
  # the API; a second for things that aren't.
  my @ok_to_use;
  my @shouldnt_use;
  for $f (sort dictionary_order keys %API) {
    next unless $f =~ /$match/;
    my $base = int_parse_version($API{$f}{base}) if $API{$f}{base};

ppport.h  view on Meta::CPAN

 * 1. #define MY_CXT_KEY to a unique string, e.g. "DynaLoader_guts"
 * 2. Declare a typedef named my_cxt_t that is a structure that contains
 *    all the data that needs to be interpreter-local.
 * 3. Use the START_MY_CXT macro after the declaration of my_cxt_t.
 * 4. Use the MY_CXT_INIT macro such that it is called exactly once
 *    (typically put in the BOOT: section).
 * 5. Use the members of the my_cxt_t structure everywhere as
 *    MY_CXT.member.
 * 6. Use the dMY_CXT macro (a declaration) in all the functions that
 *    access MY_CXT.
 */

ppport.h  view on Meta::CPAN

         * Versions before 5.35.10 dereferenced empty input without checking */
#  undef utf8_to_uvchr_buf
#endif

/* This implementation brings modern, generally more restricted standards to
 * utf8_to_uvchr_buf.  Some of these are security related, and clearly must
 * be done.  But it's arguable that the others need not, and hence should not.
 * The reason they're here is that a module that intends to play with the
 * latest perls should be able to work the same in all releases.  An example is
 * that perl no longer accepts any UV for a code point, but limits them to
 * IV_MAX or below.  This is for future internal use of the larger code points.



Algorithm-LineSegments


lib/Algorithm/LineSegments.pm  view on Meta::CPAN

  $heap->add($_, $o{cost}->($_, $_+1)) for 0 .. $#q - 1;

  ###################################################################
  # I haven't found a good solution to maintain the heap and modify
  # the list, so as a workaround the heap identifies a mergeable pair
  # with the key and when merging elements of a pair, the second
  # element is replaced by `undef` to maintain the size of the list,
  # so the heap keys, indices into the list, remain valid. This has
  # the consequence of producing gaps in the list, and the variables
  # below maintain how the gaps can be skipped.
  ###################################################################



Algorithm-LinearManifoldDataClusterer


lib/Algorithm/LinearManifoldDataClusterer.pm  view on Meta::CPAN

                push @newclusters, $clusters[$sorted_cluster_indexes[$cluster_index]];
            }
            @clusters = @newclusters;
            my $worst_cluster = pop @clusters;
            print "\nthe worst cluster: @$worst_cluster\n" if $self->{_terminal_output};
            my $second_worst_cluster = pop @clusters;
            print "\nthe second worst cluster: @$second_worst_cluster\n" if $self->{_terminal_output};
            push @$worst_cluster, @$second_worst_cluster;
            fisher_yates_shuffle($worst_cluster);
            my @first_half = @$worst_cluster[0 .. int(scalar(@$worst_cluster)/2) - 1];
            my @second_half = @$worst_cluster[int(scalar(@$worst_cluster)/2) .. @$worst_cluster - 1];
            push @clusters, \@first_half;
            push @clusters, \@second_half;
            if ($self->{_terminal_output}) {
                print "\n\nShowing the clusters obtained after applying the unimodality correction:\n";
                display_clusters(\@clusters);      
            }
        }

lib/Algorithm/LinearManifoldDataClusterer.pm  view on Meta::CPAN

                            ($self->{_data_hash}->{$_}->[$dimen] < $mean_along_this_dimen + $delta * $range) ) 
                          ? $_ : undef }
                    @$cluster; 
        push @data_tags_for_range_tests, \@tags;
    }
    # Now find the intersection of the tag sets for each of the dimensions
    my %intersection_hash;
    foreach my $dimen (0..$self->{_data_dimensions}-1) {
        my %tag_hash_for_this_dimen  = map {$_ => 1} @{$data_tags_for_range_tests[$dimen]};
        if ($dimen == 0) {
            %intersection_hash = %tag_hash_for_this_dimen;
        } else {
            %intersection_hash = map {$_ => 1} grep {$tag_hash_for_this_dimen{$_}} 
                                 keys %intersection_hash;
        }
    }
    my @intersection_set = keys %intersection_hash;
    my $cluster_unimodality_index = scalar(@intersection_set) / scalar(@$cluster);
    return $cluster_unimodality_index;
}

sub find_best_ref_vector {
    my $self = shift;

lib/Algorithm/LinearManifoldDataClusterer.pm  view on Meta::CPAN


##  We first generate a set of points randomly on the unit sphere --- the number of
##  points being equal to the number of clusters desired.  These points will serve as
##  cluster means (or, as cluster centroids) subsequently when we ask
##  Math::Random::random_multivariate_normal($N, @m, @covar) to return $N number of
##  points on the sphere.  The second argument is the cluster mean and the third
##  argument the cluster covariance.  For the synthetic data, we set the cluster
##  covariance to a 2x2 diagonal matrix, with the (0,0) element corresponding to the
##  variance along the azimuth direction and the (1,1) element corresponding to the
##  variance along the elevation direction.
##

lib/Algorithm/LinearManifoldDataClusterer.pm  view on Meta::CPAN

##  (azimuth,elevation), you also need `wrap-around' logic for those points yielded by
##  the multivariate-normal function whose azimuth angle is outside the interval
##  (0,360) and/or whose elevation angle is outside the interval (-90,90).
##
##  Note that the first of the two dimensions for which the multivariate-normal
##  function returns the points is for the azimuth angle and the second for the
##  elevation angle.
##
##  With regard to the relationship of the Cartesian coordinates to the spherical
##  (azimuth, elevation) coordinates, we assume that (x,y) is the horizontal plane
##  and z the vertical axis.  The elevation angle theta is measured with respect to

lib/Algorithm/LinearManifoldDataClusterer.pm  view on Meta::CPAN

  #  is more or less spherical, you can visualize the clusters by calling

  $clusterer->visualize_clusters_on_sphere("final clustering", $clusters);

  #  where the first argument is a label to be displayed in the 3D plot and the
  #  second argument the value returned by calling linear_manifold_clusterer().

  #  SYNTHETIC DATA GENERATION:

  #  The module includes an embedded class, DataGenerator, for generating synthetic
  #  three-dimensional data that can be used to experiment with the clustering code.

lib/Algorithm/LinearManifoldDataClusterer.pm  view on Meta::CPAN

data element in.

Such problems, when they do possess a solution, are best tackled through iterative
algorithms in which you start with a guess for the final solution, you rearrange the
measured data on the basis of the guess, and you then use the new arrangement of the
data to refine the guess.  Subsequently, you iterate through the second and the third
steps until you do not see any discernible changes in the new arrangements of the
data.  This forms the basis of the clustering algorithm that is described under
B<Phase 1> in the section that follows.  This algorithm was first proposed in the
article "Dimension Reduction by Local Principal Component Analysis" by Kambhatla and
Leen that appeared in the journal Neural Computation in 1997.

Unfortunately, experiments show that the algorithm as proposed by Kambhatla and Leen
is much too sensitive to how the clusters are seeded initially.  To get around this

lib/Algorithm/LinearManifoldDataClusterer.pm  view on Meta::CPAN


Taking into account the mean associated with each cluster, re-cluster the entire data
set on the basis of the least reconstruction error.  That is, assign each data
element to that subspace for which it has the smallest reconstruction error.
Calculate the total reconstruction error associated with all the data elements.  (See
the definition of the reconstruction error in the C<Description> section.)

=item Step 5:

Stop iterating if the change in the total reconstruction error from the previous 
iteration to the current iteration is less than the value specified by the constructor

to denote the symmetric normalized Laplacian.

=item Step 4:

Carry out an eigendecomposition of the C<A> matrix and choose the eigenvector
corresponding to the second smallest eigenvalue for bipartitioning the graph on the
basis of the sign of the values in the eigenvector.
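The final move of this step amounts to splitting the node indices by the sign of their entries in the chosen eigenvector.  A sketch, assuming the eigendecomposition has already been carried out by an eigensolver (for example C<eigens_sym> from PDL::MatrixOps):

```perl
use strict;
use warnings;

# Bipartition graph nodes by the sign of the entries in the eigenvector
# corresponding to the second smallest eigenvalue (assumed precomputed).
sub bipartition_by_sign {
    my @eigvec = @_;
    my (@non_negative, @negative);
    for my $i (0 .. $#eigvec) {
        $eigvec[$i] >= 0 ? push @non_negative, $i : push @negative, $i;
    }
    return (\@non_negative, \@negative);
}

my ($part1, $part2) = bipartition_by_sign(0.4, -0.1, 0.3, -0.5);
print "@$part1 | @$part2\n";    # 0 2 | 1 3
```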

=item Step 5:

If the bipartition of the previous step yields one-versus-the-rest kind of a

This parameter specifies the dimensionality of the manifold on which the data resides.

=item C<cluster_search_multiplier>:

As should be clear from the C<Summary of the Algorithm> section, this parameter plays
a very important role in the successful clustering of your data.  As explained in
C<Description>, the basic algorithm used for clustering in Phase 1 --- clustering by
the minimization of the reconstruction error --- is sensitive to the choice of the
cluster seeds that are selected randomly at the beginning of the algorithm.  Should
it happen that the seeds miss one or more of the clusters, the clustering produced is

    $clusterer->linear_manifold_clusterer();

    my $clusters = $clusterer->linear_manifold_clusterer();

This is the main call to the linear-manifold based clusterer.  The first call works
by side effect, meaning that the clusters are displayed in your terminal window and
written out to disk files (depending on the constructor options you have set).  The
second call additionally returns the clusters as a reference to an array of
anonymous arrays, each holding the symbolic tags for a cluster.

=item B<display_reconstruction_errors_as_a_function_of_iterations()>:

    $clusterer->display_reconstruction_errors_as_a_function_of_iterations();

If your data is 3-dimensional and resides on the surface of a sphere (or in the
vicinity of such a surface), you may be able to use these methods to visualize the
clusters produced by the algorithm.  The first invocation produces a Gnuplot display
in a terminal window that you can rotate with your mouse.  The second invocation
produces a C<.png> image of the plot.

=item B<auto_retry_clusterer()>:

    $clusterer->auto_retry_clusterer();


Algorithm-Loops

lib/Algorithm/Loops.pm  view on Meta::CPAN

    my @list= NestedLoops(
        [  ( [ 1..$len ] ) x $len  ],
        sub { "@_" },
    );

If you want working sample code to try, see below in the section specific
to the function(s) you want to try.  The above samples only give a
I<feel> for how the functions are typically used.
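To see concretely what the C<NestedLoops> sample above computes, here is a plain-Perl stand-in (an illustration only, not the module's implementation) that produces the same list of space-joined tuples:

```perl
use strict;
use warnings;

# Plain-Perl stand-in for the NestedLoops sample: recurse over the list of
# lists, calling the callback with one element picked from each list.
sub nested {
    my ($lists, $cb, @prefix) = @_;
    return $cb->(@prefix) unless @$lists;
    my ($first, @rest) = @$lists;
    return map { nested(\@rest, $cb, @prefix, $_) } @$first;
}

my $len = 2;
my @list = nested([ ( [ 1 .. $len ] ) x $len ], sub { "@_" });
print join(", ", @list), "\n";    # 1 1, 1 2, 2 1, 2 2
```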

=head1 FUNCTIONS

Your subroutine is called (in a list context) and is passed the first
element of each of the arrays whose references you passed in (in the
corresponding order).  Any value(s) returned by your subroutine are
pushed onto an array that will eventually be returned by MapCar*.

Next your subroutine is called and is passed the B<second> element of
each of the arrays and any value(s) returned are pushed onto the results
array.  Then the process is repeated with the B<third> elements.
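That calling pattern can be sketched in plain Perl.  This is an assumption-laden stand-in rather than the module's implementation, shown here only for arrays of equal length:

```perl
use strict;
use warnings;

# Plain-Perl sketch of the MapCar calling pattern: on the i-th call the
# callback receives the i-th element of every array, up to the longest one.
sub my_mapcar {
    my ($cb, @arrays) = @_;
    my $max = 0;
    for my $arr (@arrays) { $max = @$arr if @$arr > $max }
    my @results;
    for my $i (0 .. $max - 1) {
        push @results, $cb->(map { $_->[$i] } @arrays);
    }
    return @results;
}

my @sums = my_mapcar(sub { $_[0] + $_[1] }, [1, 2, 3], [10, 20, 30]);
print "@sums\n";    # 11 22 33
```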

This continues until your subroutine has been passed all elements [except
for some cases with MapCarMin()].  If the longest array whose reference

            $part %= $mult;
            $ran= int( $ran / $mult );
        }
        $unit .= 's'   if  1 != $part;
        $part ? "$part $unit" : ();
    } [ qw( sec min hour day week year ) ],
      [     60, 60, 24,   7,  52 ];
    $desc ||= '< 1 sec';
    print "Script ran for $desc.\n";

=head2 NextPermute*

=over 4


Algorithm-LossyCount

lib/Algorithm/LossyCount.pm  view on Meta::CPAN

  say $frequencies->{b};  # Approximate freq. of 'b'.
  ...

=head1 DESCRIPTION

Lossy-Counting is an approximate frequency counting algorithm proposed by Manku and Motwani in 2002 (see the L<SEE ALSO> section below).

The main advantage of the algorithm is memory efficiency: you can obtain approximate counts of item occurrences with a very low memory footprint compared with exact counting.
Furthermore, Lossy-Counting is an online algorithm: it is applicable to data sets whose size is unknown in advance, and you can retrieve intermediate results at any time.
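The scheme can be illustrated with a few lines of pure Perl.  This is a sketch of the Manku-Motwani algorithm for illustration only; C<lossy_count> and its interface are hypothetical and not this module's API:

```perl
use strict;
use warnings;
use POSIX qw(ceil);

# Sketch of Manku-Motwani lossy counting: process the stream in buckets of
# width ceil(1/eps); at each bucket boundary, prune entries whose count plus
# maximum possible undercount no longer exceeds the bucket number.  Surviving
# counts under-estimate the true counts by at most eps * N.
sub lossy_count {
    my ($eps, @stream) = @_;
    my $width = ceil(1 / $eps);           # bucket width
    my (%count, %delta);
    my ($n, $bucket) = (0, 1);
    for my $item (@stream) {
        $n++;
        if (exists $count{$item}) {
            $count{$item}++;
        } else {
            $count{$item} = 1;
            $delta{$item} = $bucket - 1;  # max possible undercount so far
        }
        if ($n % $width == 0) {           # bucket boundary: prune rare items
            for my $k (keys %count) {
                if ($count{$k} + $delta{$k} <= $bucket) {
                    delete $count{$k};
                    delete $delta{$k};
                }
            }
            $bucket++;
        }
    }
    return \%count;
}

my $freq = lossy_count(0.5, qw(a b a c a b a));
print "a: ", ($freq->{a} // 0), "\n";     # surviving (under-)count for 'a'
```

With such a coarse epsilon the counts are heavily truncated; in practice epsilon is small, so the bucket width is large and frequent items retain counts close to their true frequencies.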

=head1 METHODS


Algorithm-MasterMind

app/run_experiment.pl  view on Meta::CPAN

for my $i (1..$conf->{'tests'}) {

  my $solver;
  eval "\$solver = new Algorithm::MasterMind::$method \$method_options";
  die "Can't instantiate Algorithm::MasterMind::$method: $@\n" if !$solver;
  my $secret_code = $solver->random_combination();
  my $game = { code => $secret_code,
	       combinations => []};
  my $first_string = $solver->issue_first;
  my $response =  check_combination( $secret_code, $first_string);
  push @{$game->{'combinations'}}, [$first_string,$response] ;

  $solver->feedback( $response );
    
  my $played_string = $solver->issue_next;
  while ( $played_string ne $secret_code ) {
    $response = check_combination( $secret_code, $played_string);
    push @{$game->{'combinations'}}, [$played_string, $response] ;
    $solver->feedback( $response );
    $played_string = $solver->issue_next;      
  }  
  $game->{'evaluations'} = $solver->evaluated();


Algorithm-MedianSelect-XS

Changes  view on Meta::CPAN


 - Converted Algorithm::MedianSelect entirely over to XS.

0.08 2005/02/15

 - Added missing export section in documentation.

0.07 2004/02/29

 - median() takes an array instead of an arrayref.
