bin/algorithm-networksort-chooser view on Meta::CPAN
}
#### Selection network processing
if ($opt->{median}) {
die "--selection and --median are incompatible" if defined $opt->{selection};
$opt->{selection} = int($network_size / 2);
sub ACTION_test {
my $self = shift;
#
# Some test files take a long time to run. To save
# the testers some processing time, skip those tests
# by default (this is determined within the individual
# test files). Use the --Testlong option to set the
# AUTHOR_TESTING environment variable, which the
# longer-running test files will check for.
#
lib/Algorithm/Numerical/Sample.pm view on Meta::CPAN
=head1 DESCRIPTION
This package gives two methods to draw fair, random samples from a set.
There is a procedural interface for the case where the entire set is known,
and an object-oriented interface for when a set of unknown size has
to be processed.
=head2 B<A>: C<sample (set =E<gt> ARRAYREF [,sample_size =E<gt> EXPR])>
The C<sample> function takes a set and a sample size as arguments.
If the sample size is omitted, a sample of size C<1> is taken. The keywords
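The unknown-size case described above is classically handled with reservoir sampling, which keeps every item's inclusion probability equal without knowing the stream length in advance. A minimal Python sketch of the technique (illustrative only, not this module's code):

```python
import random

def reservoir_sample(iterable, k, rng=random):
    """Draw a fair sample of k items from a stream of unknown length.

    Reservoir sampling: the first k items fill the reservoir; item i
    (1-based) then replaces a random slot with probability k/i, so
    every item ends up in the sample with equal probability k/n.
    """
    reservoir = []
    for i, item in enumerate(iterable, start=1):
        if i <= k:
            reservoir.append(item)
        else:
            j = rng.randrange(i)  # uniform in [0, i-1]
            if j < k:
                reservoir[j] = item
    return reservoir
```

If the stream is shorter than C<k>, the whole stream is returned, which mirrors the "take what you can get" behaviour one would expect from such an interface.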
pregfree|||
prepend_elem|||
prepend_madprops|||
printbuf|||
printf_nocontext|||vn
process_special_blocks|||
ptr_table_clear||5.009005|
ptr_table_fetch||5.009005|
ptr_table_find|||n
ptr_table_free||5.009005|
ptr_table_new||5.009005|
lib/Algorithm/Paxos.pm view on Meta::CPAN
it because I think it'll be useful and I don't want it lost on github.
From L<Wikipedia|http://en.wikipedia.org/wiki/Paxos_algorithm>
Paxos is a family of protocols for solving consensus in a network of
unreliable processors. Consensus is the process of agreeing on one result
among a group of participants. This problem becomes difficult when the
participants or their communication medium may experience failures.
This package implements a basic version of the Basic Paxos protocol and
provides an API (and hooks) for extending into a more complicated solution as
bench/benchmark.pl view on Meta::CPAN
use blib;
use Algorithm::Permute 'permute';
use Benchmark ':all';
use Getopt::Std;
# process options
my %opts;
getopts('yrhl:n:', \%opts) or usage();
$opts{h} and usage();
$opts{n} ||= 9;
$opts{l} ||= 5;
PL_exitlistlen|5.005000||Viu
PL_expect||5.003007|ponu
PL_fdpid|5.005000||Viu
PL_filemode|5.005000||Viu
PL_firstgv|5.005000||Viu
PL_forkprocess|5.005000||Viu
PL_formtarget|5.005000||Viu
PL_GCB_invlist|5.021009||Viu
PL_generation|5.005000||Viu
PL_gensym|5.005000||Viu
PL_globalstash|5.005000||Viu
PRINTF_FORMAT_NULL_OK|5.009005|5.009005|Vn
printf_nocontext|5.007001||vdVnu
PRIVLIB|5.003007|5.003007|Vn
PRIVLIB_EXP|5.003007|5.003007|Vn
PRIVSHIFT|5.003007||Viu
process_special_blocks|5.009005||Viu
PROCSELFEXE_PATH|5.007003|5.007003|Vn
PRUNE|5.009005||Viu
PRUNE_t8|5.035004||Viu
PRUNE_t8_p8|5.033003||Viu
PRUNE_t8_pb|5.033003||Viu
# Here, we are in the middle of accumulating a hint or warning.
my $end_of_hint = 0;
# A line containing a comment end marker closes the hint. Remove that
# marker for processing below.
if (s/\s*$rcce(.*?)\s*$//) {
die "Nothing can follow the end of comment in '$_'\n" if length $1 > 0;
$end_of_hint = 1;
}
- Update Build.PL with Logic::Minimizer requirement.
- And updated the version number everywhere.
0.19
2019-07-31
- There was more processing than needed in least_covered() to do
what was basically a find-the-minimum loop, plus it was
throwing away information that had to be re-created by the
next line of code.
Consolidated all of that, resulting in fewer hash-of-array
manipulations, and changed the return value from the single
- Make remels() remove the hash key if the array ref is empty.
- Change columns() to not auto-create empty keys.
2015-04-18
- Made the primes attribute "lazy", so that one can look
up prime implicants without going through the solving
process.
2015-04-15
- Replace row_dom() and col_dom() with row_dominance()
in Util.pm. When they were changed to returning keys
instead of deleting from the hash immediately, they
became essentially the same function, just called
in their bitstring form: min_bits, max_bits, and dc_bits.
2014-04-30
- Moosified ("has" declarations) the attributes.
- Achieved a compile-error-free version using Moose instead
of Alias. Now to make it runtime-error-free.
- As part of the compilation process, moved from a Makefile.PL
base (which was creating errors of its own) to Build.PL,
which Just Works.
- Turned attributes boolean, imp, and bits into local
variables as they were only used in single functions.
- Defined and made use of predicate functions for attributes
lib/Algorithm/RabinKarp.pm view on Meta::CPAN
English document, you should probably remove all white space, as well
as all capitalization.
=head1 INTENT
Preprocessing your document with the Rabin-Karp hashing algorithm
makes it possible to create a "fingerprint" of your document (or documents)
and then perform multiple searches for fragments contained within your
document database.
Schleimer, Wilkerson, and Aiken suggest preprocessing to remove
unnecessary information (like whitespace), as well as known redundant
information (like, say, copyright notices or other boilerplate that is 'acceptable').
They also suggest a post-processing pass to reduce data volume, using a technique
called winnowing (see the link at the end of this documentation).
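The fingerprinting idea rests on a rolling hash over every k-character window, so each window's hash is derived from the previous one in constant time. A self-contained Python sketch of the classic Rabin-Karp rolling hash (an illustration of the technique, not this module's API; the base and modulus are arbitrary choices):

```python
def rabin_karp_fingerprints(text, k, base=256, mod=(1 << 31) - 1):
    """Hash every k-gram of `text` in O(len(text)) total time.

    The window hash treats characters as base-`base` digits mod `mod`;
    sliding the window removes the outgoing character's contribution
    and appends the incoming one.
    """
    if len(text) < k:
        return []
    h = 0
    for ch in text[:k]:
        h = (h * base + ord(ch)) % mod
    fingerprints = [h]
    top = pow(base, k - 1, mod)  # weight of the outgoing character
    for i in range(k, len(text)):
        h = (h - ord(text[i - k]) * top) % mod  # drop leading char
        h = (h * base + ord(text[i])) % mod     # append new char
        fingerprints.append(h)
    return fingerprints
```

Identical k-grams always hash identically, which is what lets fingerprints from different documents be matched against each other; winnowing then keeps only a representative subset of these hashes.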
=head1 METHODS
=over
lib/Algorithm/RandomPointGenerator.pm view on Meta::CPAN
You can feed it different 2D histograms --- even made-up 2D histograms --- and look
at the histogram of the generated random points to see how well the module is
working. Keep in mind, though, if your made-up input histogram has disconnected
blobs in it, the random-points that are generated may correspond to just one of the
blobs. Since the process of random-point generation involves a random walk, the
algorithm may not be able to hop from one blob to another in the input histogram if
they are too far apart. What exactly you'll get by way of the output histogram
will depend on your choice of the width of the proposal density.
The C<examples> directory contains the following histogram and bounding-box files
* than the original mergesort implementation (only runs 1 and 2 are copied)
* and the "balancing" of merges is better (merged runs comprise more nearly
* equal numbers of original runs).
*
* The actual cache-friendly implementation will use a pseudo-stack
* to avoid recursion, and will unroll processing of runs of length 2,
* but it is otherwise similar to the recursive implementation.
*/
typedef struct {
IV offset; /* offset of 1st of 2 runs at this level */
stackp->runs = 0; /* current run will finish level */
/* While there are more than 2 runs remaining,
* turn them into exactly 2 runs (at the "other" level),
* each made up of approximately half the runs.
* Stack the second half for later processing,
* and set about producing the first half now.
*/
while (runs > 2) {
++level;
++stackp;
through the CPAN shell.
0.13 Sat Mar 01 02:00 GMT 2003
- Moved SISort.pm into the root of the package to
work around a bug where Inline::MakeMaker does not specify
the path up to the .pm file to process in the Makefile.
- Updated my email address in the documentation
0.12 Jun 01 2001
- ran the documentation files through a spell checker
- updated package to use Inline 0.40's Inline::MakeMaker
lib/Algorithm/SVM.pm view on Meta::CPAN
=head1 ACKNOWLEDGEMENTS
Thanks go out to Fiona Brinkman and the other members of the Simon Fraser
University Brinkman Laboratory for providing me the opportunity to develop
this module. Additional thanks go to Chih-Jen Lin, one of the libsvm authors,
for being particularly helpful during the development process.
Thanks as well to Dr. Alexander K. Seewald of Seewald Solutions for many
bug fixes, new test cases, and lowering the memory footprint by a factor
of 20. Thank you very much!
lib/Algorithm/SVMLight.pm view on Meta::CPAN
=back
Support Vector Machines in general, and SVMLight specifically,
represent some of the best-performing Machine Learning approaches in
domains such as text categorization, image recognition, bioinformatics,
string processing, and others.
For efficiency reasons, the underlying SVMLight engine indexes features by integers, not
strings. Since features are commonly thought of by name (e.g. the
words in a document, or mnemonic representations of engineered
features), we provide in C<Algorithm::SVMLight> a simple mechanism for
looking at the F<svm_common.h> file in the SVMLight distribution.
It would be a good idea if you only set these parameters via arguments
to C<new()> (see above) or right after calling C<new()>, since I don't
think the underlying C code expects them to change in the middle of a
process.
=item add_instance(label => $x, attributes => \%y)
Adds a training instance to the set of instances which will be used to
train the model. An C<attributes> parameter specifies a hash of
inc/Module/Install.pm view on Meta::CPAN
print FH $_[$_] or die "print($_[0]): $!";
}
close FH or die "close($_[0]): $!";
}
# _version is for processing module versions (eg, 1.03_05) not
# Perl versions (eg, 5.8.1).
sub _version ($) {
my $s = shift || 0;
my $d =()= $s =~ /(\.)/g;
if ( $d >= 2 ) {
lib/Algorithm/SkipList.pm view on Meta::CPAN
require Algorithm::SkipList::Node;
require Algorithm::SkipList::Header;
# Future versions should check Config module to determine if it is
# being run on a 64-bit processor, and set MAX_LEVEL to 64.
use constant MIN_LEVEL => 2;
use constant MAX_LEVEL => 32;
use constant DEF_P => 0.25;
use constant DEF_K => 0;
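The C<DEF_P> and C<MAX_LEVEL> constants govern how skip-list node heights are drawn: each additional level is kept with probability p, capped at the maximum. A Python sketch of that standard level-drawing step (illustrative, not this module's code):

```python
import random

def random_level(p=0.25, max_level=32, rng=random):
    """Pick a node height for a skip list.

    Start at level 1 and promote the node one level with probability p,
    stopping at max_level. With p = 0.25 the expected height is
    1 / (1 - p) ~= 1.33, so towers stay short on average.
    """
    level = 1
    while level < max_level and rng.random() < p:
        level += 1
    return level
```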
inc/Module/Install/Win32.pm view on Meta::CPAN
Please download the file manually, save it to a directory in %PATH% (e.g.
C:\WINDOWS\COMMAND\), then launch the MS-DOS command line shell, "cd" to
that directory, and run "Nmake15.exe" from there; that will create the
'nmake.exe' file needed by this module.
You may then resume the installation process described in README.
-------------------------------------------------------------------------------
.
}
}
examples/retrieve_similar_tickets.pl view on Meta::CPAN
#!/usr/bin/perl -w
### retrieve_similar_tickets.pl
### After the tickets stored in an Excel spreadsheet have been subject to the
### preprocessing steps listed in the script `ticket_preprocessor_doc_modeler.pl',
### you use the script shown here to retrieve the tickets that are most similar
### to a given query ticket.
### For obvious reasons, you would want the names of the database files
### mentioned in this script to match the names in the ticket
### preprocessing script.
### IMPORTANT IMPORTANT IMPORTANT IMPORTANT IMPORTANT:
###
### The parameter
###
use Algorithm::TicketClusterer;
my $fieldname_for_clustering = "Description";
my $unique_id_fieldname = "Request No";
my $raw_tickets_db = "raw_tickets.db";
my $processed_tickets_db = "processed_tickets.db";
my $stemmed_tickets_db = "stemmed_tickets.db";
my $inverted_index_db = "inverted_index.db";
my $tickets_vocab_db = "tickets_vocab.db";
my $idf_db = "idf.db";
my $tkt_doc_vecs_db = "tkt_doc_vecs.db";
my $clusterer = Algorithm::TicketClusterer->new(
clustering_fieldname => $fieldname_for_clustering,
unique_id_fieldname => $unique_id_fieldname,
raw_tickets_db => $raw_tickets_db,
processed_tickets_db => $processed_tickets_db,
stemmed_tickets_db => $stemmed_tickets_db,
inverted_index_db => $inverted_index_db,
tickets_vocab_db => $tickets_vocab_db,
idf_db => $idf_db,
tkt_doc_vecs_db => $tkt_doc_vecs_db,
my $rank = 1;
foreach my $ticket_id (sort { $retrieved_hash{$b} <=> $retrieved_hash{$a} }
keys %retrieved_hash) {
my $similarity_score = $retrieved_hash{$ticket_id};
print "\n\n\n --------- Retrieved ticket at similarity rank $rank (similarity score: $similarity_score) ---------\n";
$clusterer->show_processed_ticket_clustering_data_for_given_id( $ticket_id );
$clusterer->show_original_ticket_for_given_id( $ticket_id );
$rank++;
}
lib/Algorithm/TokenBucket.pm view on Meta::CPAN
# configure a bucket to limit a stream up to 100 items per hour
# with bursts of 5 items max
my $bucket = Algorithm::TokenBucket->new(100 / 3600, 5);
# wait until we are allowed to process 3 items
until ($bucket->conform(3)) {
sleep 0.1;
# do things
}
# process 3 items because we now can
process(3);
# leak (flush) bucket
$bucket->count(3); # same as $bucket->count(1) for 1..3;
if ($bucket->conform(10)) {
my $time = Time::HiRes::time;
while (Time::HiRes::time - $time < 7200) { # two hours
# be bursty
if ($bucket->conform(5)) {
process(5);
$bucket->count(5);
}
}
# we're likely to have processed 200 items (and hogged CPU)
Storable::store $bucket, 'bucket.stored';
my $bucket1 =
Algorithm::TokenBucket->new(@{Storable::retrieve('bucket.stored')});
=item conform($)
This method returns true if the bucket contains at least I<N> tokens and
false otherwise. When it returns true, you may transmit or process I<N>
items (loosely speaking, since I<N> can be fractional) from the stream.
A bucket never conforms to an I<N> greater than C<burst size>.
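The refill-and-check behaviour described above can be modelled compactly. This Python sketch uses an injectable clock for testability; it illustrates the token-bucket technique and is not Algorithm::TokenBucket's actual implementation:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: tokens refill at `rate` per second,
    up to a capacity of `burst`. Illustrative sketch only."""

    def __init__(self, rate, burst, now=time.monotonic):
        self.rate = rate
        self.burst = burst
        self.now = now
        self.tokens = burst      # start full
        self.last = now()

    def _refill(self):
        t = self.now()
        self.tokens = min(self.burst,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t

    def conform(self, n):
        """True if at least n tokens are available (n may be fractional)."""
        self._refill()
        return self.tokens >= n

    def count(self, n):
        """Consume n tokens (record that n items were processed)."""
        self._refill()
        self.tokens -= n
```

Because the capacity is clamped to C<burst>, the bucket can never conform to an I<N> larger than the burst size, exactly as the description above states.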
=cut
sub conform {
my $time = Time::HiRes::time;
while (Time::HiRes::time - $time < 7200) { # two hours
# be bursty
Time::HiRes::sleep $bucket->until(5);
if ($bucket->conform(5)) { # should always be true
process(5);
$bucket->count(5);
}
}
# we're likely to have processed 200 items (without hogging the CPU)
=head1 BUGS
Documentation lacks the actual algorithm description. See links or read
the source (there are about 20 lines of sparse Perl in several subs).
$self->{_get} = $o{'-get'}; # Get method to use
$self->{_set} = $o{'-set'}; # Set method to use
$self->{_data} = []; # Array of node data
# Preprocess the tree if there is one supplied
$self->preprocess($o{-tree}) if exists $o{-tree};
return $self;
}
sub _get ($$) {
sub _data ($$) {
my($self,$node) = @_;
return $self->{_data}->[$self->_get($node)];
}
sub preprocess ($$) {
my($self,$root) = @_;
# Enumeration phase
$self->_enumerate($root, 1);
} else {
return $nyd->{_node};
}
}
# Autoload methods go after =cut, and are processed by the autosplit program.
1;
__END__
=head1 NAME
=head1 DESCRIPTION
This package provides constant-time retrieval of the Nearest Common
Ancestor (NCA) of nodes in a tree. The implementation is based on the
algorithm of Harel and Tarjan, which can, after linear-time preprocessing,
retrieve the nearest common ancestor of two nodes in constant time.
To implement the algorithm it is necessary to store some data for each
node in the tree.
=head1 EXAMPLES
=head2 ALGORITHM
This section describes the algorithm used for preprocessing and for
nearest common ancestor retrieval. It does not provide any intuition
as to I<why> the algorithm works, just a description of how it works. For
the algorithm description, it is assumed that the nodes themselves
contain all necessary information. The algorithm is described in a
Pascal-like fashion. For detailed information about the algorithm,
E<lt> height(j)> if and only if I<LSSB(i) E<lt> LSSB(j)>, which means
that we can replace a test of I<height(i) E<lt> height(j)> with
I<LSSB(i) E<lt> LSSB(j)>. Since I<LSSB(i)> is easier to compute, this
will speed up the computation.
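Here I<LSSB(i)> denotes the least-significant set bit of I<i>, which on a two's-complement machine is a single AND. A one-line Python illustration of the trick (an aside, not part of the module):

```python
def lssb(i):
    """Least-significant set bit of i (i > 0).

    In two's complement, -i flips all bits above the lowest set bit
    and keeps that bit itself, so i & -i isolates it.
    """
    return i & -i
```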
=head2 Preprocessing the tree
Preprocessing the tree requires the computation of three numbers: the
I<node number>, the I<run>, and a I<magic> number. It also requires
computation of the I<leader> of each run. These computations are done
in two recursive descents and ascents of the tree.
Procedure Preprocess(root:Node)
Var x,y : Integer; (* Dummy variables *)
Begin
(x,y) := Enumerate(root,nil,1);
ComputeMagic(root,root,0);
End;
lib/Algorithm/TrunkClassifier.pm view on Meta::CPAN
#Description: Wrapper function for running the decision trunk classifier
#Parameters: Command line arguments
#Return value: None
sub runClassifier{
#Handle commands line arguments
my $processor = Algorithm::TrunkClassifier::CommandProcessor->new(\$CLASSIFY, \$SPLITPERCENT, \$TESTSET, \$CLASSNAME, \$OUTPUT, \$LEVELS, \$PROSPECT, \$SUPPFILE, \$VERBOSE, \$USEALL, \$DATAFILE);
$processor->processCmd(@_);
#Read input data
if($VERBOSE){
print("Trunk classifier: Reading input data\n");
}
lib/Algorithm/VSM.pm view on Meta::CPAN
} else {
my @brokenup = split /\"|\'|\.|\(|\)|\[|\]|\\|\/|\s+/, "@$query";
@clean_words = grep $_, map { /([a-z0-9_]{$min,})/i;$1 } @brokenup;
}
$query = \@clean_words;
print "\nYour processed query words are: @$query\n" if $self->{_debug};
die "Your vocabulary histogram is empty"
unless scalar(keys %{$self->{_vocab_hist}});
die "You must first construct an LSA model"
unless scalar(keys %{$self->{_doc_vecs_trunc_lsa}});
foreach ( keys %{$self->{_vocab_hist}} ) {
my $avg_precision;
$avg_precision += $_ for @Precision_values;
$self->{_avg_precision_for_queries}->{$query} += $avg_precision / (1.0 * @Precision_values);
$self->{_recall_for_queries}->{$query} = \@Recall_values;
}
print "\n\n========= query by query processing for Precision vs. Recall calculations finished ========\n\n"
if $self->{_debug};
my @avg_precisions;
foreach (keys %{$self->{_avg_precision_for_queries}}) {
push @avg_precisions, $self->{_avg_precision_for_queries}->{$_};
}
decomposition (SVD). By retaining only a subset of the singular values (usually the
N largest for some value of N), you can construct reduced-dimensionality vectors for
the documents and the queries. In VSM, as mentioned above, the size of the document
and the query vectors is equal to the size of the vocabulary. For large corpora,
this size may involve tens of thousands of words --- this can slow down the VSM
modeling and retrieval process. So you are very likely to get faster performance
with retrieval based on LSA modeling, especially if you store the model once
constructed in a database file on the disk and carry out retrievals using the
disk-based model.
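The truncation step described above can be sketched with NumPy. This assumes a small dense term-by-document matrix for illustration (real corpora would be sparse); it shows the technique, not Algorithm::VSM's actual code:

```python
import numpy as np

def lsa_reduce(term_doc, n):
    """Reduced-dimensionality document vectors via truncated SVD.

    Keep only the n largest singular values of the term-by-document
    matrix; each document is then represented by an n-dimensional
    vector (column of the returned n x num_docs matrix) instead of a
    vocabulary-sized one.
    """
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    return np.diag(s[:n]) @ vt[:n, :]
```

Documents that were identical in the full vocabulary space remain identical in the reduced space, while the vector length drops from vocabulary size to C<n>, which is where the retrieval speedup comes from.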
=item I<relevancy_threshold:>
The constructor parameter B<relevancy_threshold> is used for automatic determination
of document relevancies to queries on the basis of the number of occurrences of query
words in a document. You can exercise control over the process of determining
relevancy of a document to a query by giving a suitable value to the constructor
parameter B<relevancy_threshold>. A document is considered relevant to a query only
when the document contains at least B<relevancy_threshold> number of query words.
=item I<save_model_on_disk:>
lib/Algorithm/VectorClocks.pm view on Meta::CPAN
Description, shamelessly stolen from Wikipedia:
Vector Clocks is an algorithm for generating a partial ordering of
events in a distributed system. Just as in Lamport timestamps,
interprocess messages contain the state of the sending process's
logical clock. Vector clock of a system of N processes is an array
of N logical clocks, one per process, a local copy of which is kept
in each process with the following rules for clock updates:
* initially all clocks are zero
* each time a process experiences an internal event, it increments
its own logical clock in the vector by one
* each time a process prepares to send a message, it increments its
own logical clock in the vector by one and then sends its entire
vector along with the message being sent
* each time a process receives a message, it increments its own
logical clock in the vector by one and updates each element in its
vector by taking the maximum of the value in its own vector clock
and the value in the vector in the received message (for every
element).
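The four update rules above can be sketched directly in Python (an illustrative model, not Algorithm::VectorClocks' API; the use of process ids as dict keys is an assumption for the sketch):

```python
def vc_new(procs):
    """Rule 1: initially all clocks are zero."""
    return {p: 0 for p in procs}

def vc_event(clock, p):
    """Rule 2: an internal event at p increments p's own counter."""
    clock[p] += 1

def vc_send(clock, p):
    """Rule 3: before sending, bump own counter and ship a copy of
    the whole vector with the message."""
    clock[p] += 1
    return dict(clock)

def vc_receive(clock, p, msg):
    """Rule 4: on receipt, bump own counter, then take the
    element-wise maximum of the local vector and the received one."""
    clock[p] += 1
    for q in clock:
        clock[q] = max(clock[q], msg.get(q, 0))
```

Comparing two vectors element-wise then yields the partial order: event I<a> happened-before I<b> exactly when I<a>'s vector is less than or equal to I<b>'s in every component and strictly less in at least one.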
t/Relativity.test view on Meta::CPAN
Geometry sets out from certain conceptions such as "plane," "point,"
and "straight line," with which we are able to associate more or less
definite ideas, and from certain simple propositions (axioms) which,
in virtue of these ideas, we are inclined to accept as "true." Then,
on the basis of a logical process, the justification of which we feel
ourselves compelled to admit, all remaining propositions are shown to
follow from those axioms, i.e. they are proven. A proposition is then
correct ("true") when it has been derived in the recognised manner
from the axioms. The question of "truth" of the individual geometrical
propositions is thus reduced to one of the "truth" of the axioms. Now
Let us suppose our old friend the railway carriage to be travelling
along the rails with a constant velocity v, and that a man traverses
the length of the carriage in the direction of travel with a velocity
w. How quickly or, in other words, with what velocity W does the man
advance relative to the embankment during the process? The only
possible answer seems to result from the following consideration: If
the man were to stand still for a second, he would advance relative to
the embankment through a distance v equal numerically to the velocity
of the carriage. As a consequence of his walking, however, he
traverses an additional distance w relative to the carriage, and hence
velocity of light c (in vacuum) is justifiably believed by the child
at school. Who would imagine that this simple law has plunged the
conscientiously thoughtful physicist into the greatest intellectual
difficulties? Let us consider how these difficulties arise.
Of course we must refer the process of the propagation of light (and
indeed every other process) to a rigid reference-body (co-ordinate
system). As such a system let us again choose our embankment. We shall
imagine the air above it to have been removed. If a ray of light be
sent along the embankment, we see from the above that the tip of the
ray will be transmitted with the velocity c relative to the
embankment. Now let us suppose that our railway carriage is again
following manner. Experience has led to the conviction that, on the
one hand, the principle of relativity holds true and that on the other
hand the velocity of transmission of light in vacuo has to be
considered equal to a constant c. By uniting these two postulates we
obtained the law of transformation for the rectangular co-ordinates x,
y, z and the time t of the events which constitute the processes of
nature. In this connection we did not obtain the Galilei
transformation, but, differing from classical mechanics, the Lorentz
transformation.
The law of transmission of light, the acceptance of which is justified
by our actual knowledge, played an important part in this process of
thought. Once in possession of the Lorentz transformation, however, we
can combine this with the principle of relativity, and sum up the
theory thus:
Every general law of nature must be so constituted that it is
By means of comparatively simple considerations we are led to draw the
following conclusion from these premises, in conjunction with the
fundamental equations of the electrodynamics of Maxwell: A body moving
with the velocity v, which absorbs * an amount of energy E[0] in
the form of radiation without suffering an alteration in velocity in
the process, has, as a consequence, its energy increased by an amount
eq. 19: file eq19.gif
In consideration of the expression given above for the kinetic energy
of the body, the required energy of the body comes out to be
mirrors so arranged on a rigid body that the reflecting surfaces face
each other. A ray of light requires a perfectly definite time T to
pass from one mirror to the other and back again, if the whole system
be at rest with respect to the æther. It is found by calculation,
however, that a slightly different time T1 is required for this
process, if the body, together with the mirrors, be moving relatively
to the æther. And yet another point: it is shown by calculation that
for a given velocity v with reference to the æther, this time T1 is
different when the body is moving perpendicularly to the planes of the
mirrors from that resulting when the motion is parallel to these
planes. Although the estimated difference between these two times is
(b) the railway carriage as reference-body,
then these general laws of nature (e.g. the laws of mechanics or the
law of the propagation of light in vacuo) have exactly the same form
in both cases. This can also be expressed as follows : For the
physical description of natural processes, neither of the reference
bodies K, K1 is unique (lit. " specially marked out ") as compared
with the other. Unlike the first, this latter statement need not of
necessity hold a priori; it is not contained in the conceptions of "
motion" and " reference-body " and derivable from them; only
experience can decide as to its correctness or incorrectness.
"If we pick up a stone and then let it go, why does it fall to the
ground ?" The usual answer to this question is: "Because it is
attracted by the earth." Modern physics formulates the answer rather
differently for the following reason. As a result of the more careful
study of electromagnetic phenomena, we have come to regard action at a
distance as a process impossible without the intervention of some
intermediary medium. If, for instance, a magnet attracts a piece of
iron, we cannot be content to regard this as meaning that the magnet
acts directly on the iron through the intermediate empty space, but we
are constrained to imagine -- after the manner of Faraday -- that the
magnet always calls into being something physically real in the space
The considerations of Section 20 show that the general principle of
relativity puts us in a position to derive properties of the
gravitational field in a purely theoretical manner. Let us suppose,
for instance, that we know the space-time " course " for any natural
process whatsoever, as regards the manner in which it takes place in
the Galileian domain relative to a Galileian body of reference K. By
means of purely theoretical operations (i.e. simply by calculation) we
are then able to find how this known natural process appears, as seen
from a reference-body K1 which is accelerated relatively to K. But
since a gravitational field exists with respect to this new body of
reference K1, our consideration also teaches us how the gravitational
field influences the process studied.
For example, we learn that a body which is in a state of uniform
rectilinear motion with respect to K (in accordance with the law of
Galilei) is executing an accelerated and in general curvilinear motion
with respect to the accelerated reference-body K1 (chest). This
comprehensive theory, in which it lives on as a limiting case.
In the example of the transmission of light just dealt with, we have
seen that the general theory of relativity enables us to derive
theoretically the influence of a gravitational field on the course of
natural processes, the Iaws of which are already known when a
gravitational field is absent. But the most attractive problem, to the
solution of which the general theory of relativity supplies the key,
concerns the investigation of the laws satisfied by the gravitational
field itself. Let us consider this for a moment.
The surface of a marble table is spread out in front of me. I can get
from any one point on this table to any other point by passing
continuously from one point to a " neighbouring " one, and repeating
this process a (large) number of times, or, in other words, by going
from point to point without executing "jumps." I am sure the reader
will appreciate with sufficient clearness what I mean here by "
neighbouring " and by " jumps " (if he is not too pedantic). We
express this property of the surface by describing the latter as a
continuum.
(c) Gravitational field and matter together must satisfy the law of
the conservation of energy (and of impulse).
Finally, the general principle of relativity permits us to determine
the influence of the gravitational field on the course of all those
processes which take place according to known laws when a
gravitational field is absent i.e. which have already been fitted into
the frame of the special theory of relativity. In this connection we
proceed in principle according to the method which has already been
explained for measuring-rods, clocks and freely moving material
points.
universe " to which they have access is in both cases practically
plane, or Euclidean. It follows directly from this discussion, that
for our sphere-beings the circumference of a circle first increases
with the radius until the " circumference of the universe " is
reached, and that it thenceforward gradually decreases to zero for
still further increasing values of the radius. During this process the
area of the circle continues to increase more and more, until finally
it becomes equal to the total area of the whole " world-sphere."
Perhaps the reader will wonder why we have placed our " beings " on a
sphere rather than on another closed surface. But this choice has its
THE EXPERIMENTAL CONFIRMATION OF THE GENERAL THEORY OF RELATIVITY
From a systematic theoretical point of view, we may imagine the
process of evolution of an empirical science to be a continuous
process of induction. Theories are evolved and are expressed in short
compass as statements of a large number of individual observations in
the form of empirical laws, from which the general laws can be
ascertained by comparison. Regarded in this way, the development of a
science bears some resemblance to the compilation of a classified
catalogue. It is, as it were, a purely empirical enterprise.
But this point of view by no means embraces the whole of the actual
process ; for it slurs over the important part played by intuition and
deductive thought in the development of an exact science. As soon as a
science has emerged from its initial stages, theoretical advances are
no longer achieved merely by a process of arrangement. Guided by
empirical data, the investigator rather develops a system of thought
which, in general, is built up logically from a small number of
fundamental assumptions, the so-called axioms. We call such a system
of thought a theory. The theory finds the justification for its
existence in the fact that it correlates a large number of single
lib/Algorithm/X/DLX.pm view on Meta::CPAN
There are only two notable deviations from the original:
=over
=item * The backtracking in Algorithm::X::DLX is done iteratively, without recursion.
So it's possible to process huge matrices without worrying about memory.
=item * It's still possible to compare performance between selecting random columns with the lowest node count and just picking the first (left-most) of these by providing the option C<choose_random_column>, but the ability to further differentiate...
=back
#define PL_stack_sp stack_sp
#endif
static void process_flag _((char *varname, SV **svp, char **strp, STRLEN *lenp));
static void
process_flag(varname, svp, strp, lenp)
char *varname;
SV **svp;
char **strp;
STRLEN *lenp;
{
I32 klen;
SV *keypfx, *attrpfx, *deref;
char *keypfx_c, *attrpfx_c, *deref_c;
STRLEN keypfx_l, attrpfx_l, deref_l;
process_flag("Alias::KeyFilter", &keypfx, &keypfx_c, &keypfx_l);
process_flag("Alias::AttrPrefix", &attrpfx, &attrpfx_c, &attrpfx_l);
process_flag("Alias::Deref", &deref, &deref_c, &deref_l);
deref_call = (deref && !deref_c);
LEAVE; /* operate at a higher level */
(void)hv_iterinit(hv);
inc/Module/Install.pm view on Meta::CPAN
}
close FH or die "close($_[0]): $!";
}
END_OLD
# _version is for processing module versions (eg, 1.03_05) not
# Perl versions (eg, 5.8.1).
sub _version ($) {
my $s = shift || 0;
my $d =()= $s =~ /(\.)/g;
if ( $d >= 2 ) {
inc/Module/AutoInstall.pm view on Meta::CPAN
.
if (
eval '$>' and lc(`sudo -V`) =~ /version/ and _prompt(
qq(
==> Should we try to re-execute the autoinstall process with 'sudo'?),
((-t STDIN) ? 'y' : 'n')
) =~ /^[Yy]/
)
{
# try to bootstrap ourselves from sudo
print << ".";
*** Trying to re-execute the autoinstall process with 'sudo'...
.
my $missing = join( ',', @Missing );
my $config = join( ',',
UNIVERSAL::isa( $Config, 'HASH' ) ? %{$Config} : @{$Config} )
if $Config;