view release on metacpan or search on metacpan
lib/Crypt/XkcdPassword/Words/EN.pm view on Meta::CPAN
victims
transfer
stanley
response
channel
backup
identity
differently
campus
spy
ninety
view all matches for this distribution
include/yescrypt.h view on Meta::CPAN
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* This file was originally written by Colin Percival as part of the Tarsnap
* online backup system.
*/
#ifndef _YESCRYPT_H_
#define _YESCRYPT_H_
#include <stdint.h>
view all matches for this distribution
lib/Crypt/ZCert.pm view on Meta::CPAN
$self
}
print
qq[<OvrLrdQ> only copy of keys to decrypt inside encrypted duplicity backup\n],
qq[<Schroedingers_hat> Yo dawg, I herd you liked encryption so I put yo keys],
qq[ in yo encrypted file so you can decrypt while....damnit.\n]
unless caller; 1;
=pod
view all matches for this distribution
src/ppport.h view on Meta::CPAN
av_tindex|5.017009|5.017009|p
av_top_index|5.017009|5.017009|p
av_undef|||
av_unshift|||
ax|||n
backup_one_GCB|||
backup_one_LB|||
backup_one_SB|||
backup_one_WB|||
bad_type_gv|||
bad_type_pv|||
bind_match|||
block_end||5.004000|
block_gimme||5.004000|
view all matches for this distribution
av_top_index|5.017009|5.003007|p
av_top_index_skip_len_mg|5.025010||Viu
av_undef|5.003007|5.003007|
av_unshift|5.003007|5.003007|
ax|5.003007|5.003007|
backup_one_GCB|5.025003||Viu
backup_one_LB|5.023007||Viu
backup_one_SB|5.021009||Viu
backup_one_WB|5.021009||Viu
bad_type_gv|5.019002||Viu
bad_type_pv|5.016000||Viu
BADVERSION|5.011004||Viu
BASEOP|5.003007||Viu
BhkDISABLE|5.013003||xV
view all matches for this distribution
lib/Csistck/Test/FileBase.pm view on Meta::CPAN
use strict;
use warnings;
use base 'Csistck::Test';
use Csistck::Oper qw/debug/;
use Csistck::Util qw/backup_file hash_file hash_string/;
use Digest::MD5;
use File::Basename;
use File::Copy;
use FindBin;
lib/Csistck/Test/FileBase.pm view on Meta::CPAN
if (-e $self->dest) {
die("Destination ${\$self->dest} is not a file")
if (-d $self->dest);
die("Destination ${\$self->dest} exists and is not writable")
if (-f $self->dest and ! -w $self->dest);
backup_file($self->dest);
}
$ret &= $self->file_repair;
}
$ret &= $self->mode_process(\&mode_repair);
view all matches for this distribution
bin/cstocs.PL view on Meta::CPAN
=over 4
=item -i, -i.ext, --inplace.ext
Files specified will be converted in-place, using the Perl C<-i> facility.
Optionally, an extension for backup copies may be specified after the dot.
This parameter B<has> to be the first one, if specified.
=item --dir directory
Encoding files are taken from F<directory> instead of the default,
view all matches for this distribution
lib/CtrlO/Crypt/XkcdPassword/Wordlist/eff_large.pm view on Meta::CPAN
backspin
backstab
backstage
backtalk
backtrack
backup
backward
backwash
backwater
backyard
bacon
view all matches for this distribution
bin/qbix/qbix view on Meta::CPAN
# can probably be accomplished if you assume that the client has
# chosen a front side before doing a move lookup. The client would
# likely pursue many incorrect paths but that would be acceptable
# if one right one could ultimately be determined. The steps the
# client software would have to take are as follows:
# 0) save a backup of the entire state of the initial cube
# 1) lookup based on state
# 1b) if lookup found result, follow path for each possible front (6)
# 2) else try each possible turn of each of the 6 sides && try to
# find a lookup with a result from any of those
# 3) make random turns && return to step 1
view all matches for this distribution
lib/Curses/UI/TextEditor.pm view on Meta::CPAN
} else {
$this->do_new_pastebuffer(0);
}
# Backup, in case illegal input is done.
my %backup = %{$this};
# Process bindings.
my $ret = $this->process_bindings($key);
# Did the widget lose focus due to the keypress?
lib/Curses/UI/TextEditor.pm view on Meta::CPAN
eval $e;
}
if ($is_illegal) # Illegal input? Then restore and bail out.
{
while (my ($k,$v) = each %backup) {
$this->{$k} = $v;
}
$this->dobeep();
} else { # Legal input? Redraw the text.
$this->run_event('-onchange');
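The backup-and-restore idiom in this fragment can be sketched in isolation. This is a minimal plain-Perl illustration of the pattern, not Curses::UI itself; the widget fields are made up:

```perl
use strict;
use warnings;

# Snapshot the object's top-level fields before a tentative change,
# then roll the change back when the input turns out to be illegal.
my $widget = { text => 'hello', pos => 5 };
my %backup = %{$widget};

$widget->{text} .= '!';    # tentative mutation
$widget->{pos}++;

my $is_illegal = 1;        # pretend the keypress was rejected
if ($is_illegal) {
    while (my ($k, $v) = each %backup) {
        $widget->{$k} = $v;
    }
}
print "$widget->{text} $widget->{pos}\n";    # hello 5
```

As in the Curses::UI code, this only works because the snapshot is a shallow copy of top-level values; nested references would still be shared with the live object.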
view all matches for this distribution
Another subroutine will copy that information after object instantiation
in order to support the reset method. Also note that everything stored
in this should *not* be more than one additional level deep (in other
words, values can be hash or array refs, but none of the values in *that*
structure should be refs), otherwise those refs will be copied over, instead
of the data inside the structure. This essentially destroys your backup.
If you have special requirements, override the _copy method as well.
=cut
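The one-additional-level restriction can be demonstrated with a short sketch. This is not the module's actual `_copy`; it is a hypothetical helper showing why references nested below the first level end up shared rather than backed up:

```perl
use strict;
use warnings;

# A one-level-deep copy duplicates top-level hash/array ref values,
# but anything nested deeper is still shared with the original.
sub shallow_copy_one_level {
    my ($orig) = @_;
    my %copy;
    for my $k (keys %$orig) {
        my $v = $orig->{$k};
        if    (ref $v eq 'HASH')  { $copy{$k} = { %$v } }
        elsif (ref $v eq 'ARRAY') { $copy{$k} = [ @$v ] }
        else                      { $copy{$k} = $v }
    }
    return \%copy;
}

my $obj    = { list => [1, 2], nested => { inner => [3] } };
my $backup = shallow_copy_one_level($obj);

push @{ $obj->{list} }, 99;          # safe: the copy has its own array
push @{ $obj->{nested}{inner} }, 99; # unsafe: the inner ref is shared

print scalar @{ $backup->{list} }, "\n";          # 2 (own copy, intact)
print scalar @{ $backup->{nested}{inner} }, "\n"; # 2 (shared ref, mutated)
```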
view all matches for this distribution
MANIFEST.SKIP view on Meta::CPAN
\bBuild.bat$
\bBuild.COM$
\bBUILD.COM$
\bbuild.com$
# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$
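The backup-file patterns above are ordinary Perl regexes matched against each candidate path. A quick sketch using a subset of them (the filenames are illustrative):

```perl
use strict;
use warnings;

# Each MANIFEST.SKIP line is a regex; a file is excluded from the
# manifest when any pattern matches its path.
my @skip = ( qr/~$/, qr/\.old$/, qr/\#$/, qr/\.bak$/ );

for my $file (qw( lib/Foo.pm lib/Foo.pm~ Foo.old notes.bak )) {
    my $skipped = grep { $file =~ $_ } @skip;
    printf "%-12s %s\n", $file, $skipped ? 'skipped' : 'kept';
}
```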
view all matches for this distribution
MANIFEST.SKIP view on Meta::CPAN
\bbuild.com$
# and Module::Build::Tiny generated files
\b_build_params$
# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$
view all matches for this distribution
lib/DB/CouchDB/Schema.pm view on Meta::CPAN
endkey => '"_design/ZZZZZ"'});
}
=head2 dump_db
dumps the entire db to a file for backup
=cut
#TODO(jwall) tool to dump the whole db to a backup file
sub dump_whole_db {
my $self = shift;
my $pretty = shift;
my $db = $self->server;
#load our schema
view all matches for this distribution
MANIFEST.SKIP view on Meta::CPAN
CPAN.SKIP
t/000_standard__*
Debian_CPANTS.txt
nytprof.out
# Temp, old, emacs, vim, backup files.
~$
\.old$
\.swp$
\.tar$
\.tar\.gz$
view all matches for this distribution
lib/DB2/Admin.pm view on Meta::CPAN
unless (ref $params{Target}) {
$params{Target} = [ $params{Target} ];
}
#
# A full backup is indicated by an empty list of tablespaces.
#
$params{Tablespaces} ||= [];
#
# Handle default options
lib/DB2/Admin.pm view on Meta::CPAN
'Schema' => $schema_name,
'Package' => $pkg_name);
# Backup a database (or database partition)
DB2::Admin->Backup('Database' => $db_name,
'Target' => $backup_dir,
'Options' => { 'Online' => 1, 'Compress' => 1, });
# Backup all nodes of a DPF database (V9.5 only)
DB2::Admin->Backup('Database' => $db_name,
'Target' => $backup_dir,
'Options' => { 'Online' => 1, 'Nodes' => 'All', });
=head1 DESCRIPTION
This module provides perl language support for the DB2 administrative
lib/DB2/Admin.pm view on Meta::CPAN
=head2 Import
This method is used to import a file into a table. Existing data can
be added to (insert mode), replaced (replace mode), or overwritten on
duplicate keys (insert_update mode). The import functions go through
the transaction log; no tablespace backup is required once the
operation succeeds.
Importing data is less efficient than the C<Load> method. IBM
recommends load over import for more than 50,000 rows or 50MB of data.
lib/DB2/Admin.pm view on Meta::CPAN
=item *
If the load is marked as non-recoverable, it is not subject to use
restrictions once the load completes. However, the table will be
unavailable if the database is restarted before a backup is taken.
This is different from Sybase, where the table will be available in
the pre-load state.
=item *
If the load is marked as recoverable (the default), either the loaded
data must be copied by the server (see the C<CopyDirectory> argument),
or a database or tablespace backup must be performed by the DBAs. If
this is not done, the table may be put in a mode where data can be
read but not updated.
=item *
lib/DB2/Admin.pm view on Meta::CPAN
=back
=head2 ListHistory
This method is used to query the history of backups, roll forwards,
loads, tablespace actions, etc. It applies to a database, but doesn't
require a database connection (just an instance attachment) - IBM is
not very consistent here. This method can be quite slow if selection
criteria are not specified. The selection criteria (action, object
name and start time) are logically ANDed.
lib/DB2/Admin.pm view on Meta::CPAN
The return value from this method is a hash with the same four fields,
each of which will be present only if its value is non-empty.
=head2 Backup
This method performs a database backup. For a DPF database, it backs
up the node specified in the C<DB2NODE> environment variable. In DB2
V9.5, it can back up all nodes of a DPF database.
This method takes four named parameters and returns a hash reference,
described in more detail after the parameters.
lib/DB2/Admin.pm view on Meta::CPAN
required.
=item Tablespaces
An optional array reference with a list of tablespace names to back
up. Specifying this parameter switches from a database backup to a
tablespace backup.
=item Options
A required hash reference with backup options.
=over 4
=item Type
The type of backup. This can be C<Full>, C<Incremental> or C<Delta>.
=item Action
The backup action. Technically, the backup can either be fully
automated (the default), or it can go through multiple phases:
parameter check, start, prompt, continue, etc. This parameter allows
the user to specify the backup type/stage. Supported values are
C<NoInterrupt> (the default), C<Start>, C<Continue>, C<Terminate>,
C<DeviceTerminate>, C<ParamCheck> and C<ParamCheckOnly>.
=item Nodes
This parameter is only valid on DB2 V9.5 and only for DPF databases.
It can be C<All> for a system-wide backup of all DPF nodes, or a
reference to an array of node numbers to back up. Use of this
parameter triggers the creation of the C<NodeInfo> field in the return
value. It is mutually exclusive with the C<ExceptNodes> parameter.
=item ExceptNodes
lib/DB2/Admin.pm view on Meta::CPAN
this parameter triggers the creation of the C<NodeInfo> field in the
return value. It is mutually exclusive with the C<Nodes> parameter.
=item Online
A boolean option specifying an online or offline backup. The default
is an offline backup.
=item Compress
A boolean option specifying whether to compress the backup. The
default is a non-compressed backup.
=item IncludeLogs
A boolean option specifying that database logs must be included. This
parameter is mutually exclusive with the C<ExcludeLogs> option.
Omitting both C<IncludeLogs> and C<ExcludeLogs> selects the default
for the backup type, which is to include logs for snapshot backups and
to exclude logs in all other cases.
=item ExcludeLogs
A boolean option specifying that database logs must be excluded. This
parameter is mutually exclusive with the C<IncludeLogs> option.
Omitting both C<IncludeLogs> and C<ExcludeLogs> selects the default
for the backup type, which is to include logs for snapshot backups and
to exclude logs in all other cases.
=item ImpactPriority
An integer specifying the impact priority. When omitted, the backup
runs unthrottled.
=item Parallelism
An integer specifying the degree of parallelism (number of buffer
manipulators).
=item NumBuffers
An integer specifying the number of backup buffers to be used.
=item BufferSize
An integer specifying the size of the backup buffer in 4K pages.
=item TargetType
The backup target type. The default is C<Local>, i.e. a backup to a
filesystem. Other options are C<XBSA>, C<TSM>, C<Snapshot> and
C<Other>.
=item Userid
lib/DB2/Admin.pm view on Meta::CPAN
=item Timestamp
=item BackupSize
The size of the backup in megabytes.
=item SQLCode
=item Message
view all matches for this distribution
Added tracing into SQLBindParameter (help diagnose oracle odbc bug)
Fixed/worked around bug/result from Latest Oracle ODBC driver where in
SQLColAttribute cbInfoValue was changed to 0 to indicate fDesc had a value
Added work around for compiling w/ActiveState PRK (PERL_OBJECT)
Updated tests to include date insert and type
Added more "backup" SQL_xxx types for tests
Updated bind test to test binding select
NOTE: bind insert fails on Paradox driver (don't know why)
Added support for: (see notes below)
view all matches for this distribution
This command gets the latest version of DBI (and its prerequisite
modules) and the latest version of DBD::Informix, and compiles, tests,
and install them all completely automatically. Before doing this, you
need to be confident that things will work correctly (or that you've
got good backups of your Perl installation). On the other hand, it is
an extremely convenient method of updating your Perl software.
When you first use the CPAN module, it will ask you many questions,
including the name of the CPAN site from which to download the
material, but the CPAN module saves this information for the next time
view all matches for this distribution
# - change ld to ilink32.exe
# - change libc to cw32.lib
# - change make to dmake
# - add new value: bcc_dir
#
# = convert perl56.lib from coff to omf format (make backup of course)
################################################################################
use strict;
use Config;
use ExtUtils::MakeMaker qw(prompt);
view all matches for this distribution
#define ODBC_EXEC_DIRECT 0x8339
/* ODBC_DEFAULT_BIND_TYPE_VALUE is now set to 0, which means that
* DBD::ODBC will call SQLDescribeParam to find out what type of
* binding should be set. If, for some reason, SQLDescribeParam
* fails, then the bind type will be set to SQL_VARCHAR as a backup.
* Hopefully -- we won't have to do that...
* */
#define ODBC_DEFAULT_BIND_TYPE_VALUE 0
#define ODBC_BACKUP_BIND_TYPE_VALUE SQL_VARCHAR
view all matches for this distribution
$dbh->prepare($sql)->execute()
It does this to avoid a round-trip to the server so it is faster.
Normally this is good but some people fall foul of this with MS SQL
Server if they call a procedure which outputs print statements (e.g.,
backup) as the procedure may not complete. See the DBD::ODBC FAQ;
in general you are better off using prepare/execute when calling
procedures.
In addition, you should realise that since DBD::ODBC does not create a
DBI statement for do calls, if you set up an error handler the handle
view all matches for this distribution
lib/DBD/Oracle/Troubleshooting/Macos.pod view on Meta::CPAN
perl IO modules. I could not successfully repeat the report for the
former, but I did succeed by doing the latter. Instructions for both
follow nonetheless.
2a) SKIP IF YOU WANT TO OR HAVE SUCCESSFULLY TRIED 2b). Make a
backup copy of the $ORACLE_HOME/lib/libclntsh.dylib.9.0 file, or
the file this name points to, since we're about to modify that
library. Note that the ".9.0" suffix of the file name is version
dependent, and that you want to work with the file pointed to
through one or a series of symbolic links rather than any of the
symbolic links (e.g., one will be called libclntsh.dylib).
view all matches for this distribution
lib/DBD/PgLite/MirrorPgToSQLite.pm view on Meta::CPAN
$opt{pg_dbh}->disconnect if $disconnect;
$opt{sl_dbh}->disconnect;
if (-f $fn) {
copy $fn, "$fn.bak" or warn "WARNING: Could not make backup copy of $fn: $!\n";
}
move "$fn.tmp", $fn or die "ERROR: Could not move temporary SQLite file $fn.tmp to $fn";
lockfile('clear',$lockfile);
}
view all matches for this distribution
lib/DBD/SQLAnywhere/GetInfo.pm view on Meta::CPAN
and
any
as
asc
attach
backup
begin
between
bigint
binary
bit
view all matches for this distribution
lib/DBD/SQLcipher.pm view on Meta::CPAN
DBD::SQLcipher::db->install_method('sqlite_progress_handler');
DBD::SQLcipher::db->install_method('sqlite_commit_hook');
DBD::SQLcipher::db->install_method('sqlite_rollback_hook');
DBD::SQLcipher::db->install_method('sqlite_update_hook');
DBD::SQLcipher::db->install_method('sqlite_set_authorizer');
DBD::SQLcipher::db->install_method('sqlite_backup_from_file');
DBD::SQLcipher::db->install_method('sqlite_backup_to_file');
DBD::SQLcipher::db->install_method('sqlite_enable_load_extension');
DBD::SQLcipher::db->install_method('sqlite_load_extension');
DBD::SQLcipher::db->install_method('sqlite_register_fts3_perl_tokenizer');
DBD::SQLcipher::db->install_method('sqlite_trace', { O => 0x0004 });
DBD::SQLcipher::db->install_method('sqlite_profile', { O => 0x0004 });
lib/DBD/SQLcipher.pm view on Meta::CPAN
the access attempt, or C<undef> if this access attempt is directly from
top-level SQL code.
=back
=head2 $dbh->sqlite_backup_from_file( $filename )
This method accesses the SQLcipher Online Backup API, and will take a backup of
the named database file, copying it to, and overwriting, your current database
connection. This can be particularly handy if your current connection is to the
special :memory: database, and you wish to populate it from an existing DB.
=head2 $dbh->sqlite_backup_to_file( $filename )
This method accesses the SQLcipher Online Backup API, and will take a backup of
the currently connected database, and write it out to the named file.
=head2 $dbh->sqlite_enable_load_extension( $bool )
Calling this method with a true value enables loading (external)
view all matches for this distribution
lib/DBD/SQLeet.pm view on Meta::CPAN
DBD::SQLeet::db->install_method('sqlite_progress_handler');
DBD::SQLeet::db->install_method('sqlite_commit_hook');
DBD::SQLeet::db->install_method('sqlite_rollback_hook');
DBD::SQLeet::db->install_method('sqlite_update_hook');
DBD::SQLeet::db->install_method('sqlite_set_authorizer');
DBD::SQLeet::db->install_method('sqlite_backup_from_file');
DBD::SQLeet::db->install_method('sqlite_backup_to_file');
DBD::SQLeet::db->install_method('sqlite_enable_load_extension');
DBD::SQLeet::db->install_method('sqlite_load_extension');
DBD::SQLeet::db->install_method('sqlite_register_fts3_perl_tokenizer');
DBD::SQLeet::db->install_method('sqlite_trace', { O => 0x0004 });
DBD::SQLeet::db->install_method('sqlite_profile', { O => 0x0004 });
view all matches for this distribution
sqlite-amalgamation.c view on Meta::CPAN
** if the pointer can possibly be shared with
** another database connection.
**
** The pointers are kept in sorted order by pBtree->pBt. That
** way when we go to enter all the mutexes, we can enter them
** in order without ever having to back up and retry and without
** worrying about deadlock.
**
** The number of shared btrees will always be small (usually 0 or 1)
** so an insertion sort is an adequate algorithm here.
*/
sqlite-amalgamation.c view on Meta::CPAN
*/
/* Opcode: Prev P1 P2 * * *
**
** Back up cursor P1 so that it points to the previous key/data pair in its
** table or index. If there are no previous key/value pairs then fall through
** to the following instruction. But if the cursor backup was successful,
** jump immediately to P2.
**
** The P1 cursor must be for a real table, not a pseudo-table.
*/
case OP_Prev: /* jump */
view all matches for this distribution
lib/DBD/SQLite.pm view on Meta::CPAN
DBD::SQLite::db->install_method('sqlite_progress_handler');
DBD::SQLite::db->install_method('sqlite_commit_hook');
DBD::SQLite::db->install_method('sqlite_rollback_hook');
DBD::SQLite::db->install_method('sqlite_update_hook');
DBD::SQLite::db->install_method('sqlite_set_authorizer');
DBD::SQLite::db->install_method('sqlite_backup_from_file');
DBD::SQLite::db->install_method('sqlite_backup_to_file');
DBD::SQLite::db->install_method('sqlite_backup_from_dbh');
DBD::SQLite::db->install_method('sqlite_backup_to_dbh');
DBD::SQLite::db->install_method('sqlite_enable_load_extension');
DBD::SQLite::db->install_method('sqlite_load_extension');
DBD::SQLite::db->install_method('sqlite_register_fts3_perl_tokenizer');
DBD::SQLite::db->install_method('sqlite_trace', { O => 0x0004 });
DBD::SQLite::db->install_method('sqlite_profile', { O => 0x0004 });
lib/DBD/SQLite.pm view on Meta::CPAN
the access attempt, or C<undef> if this access attempt is directly from
top-level SQL code.
=back
=head2 $dbh->sqlite_backup_from_file( $filename )
This method accesses the SQLite Online Backup API, and will take a backup of
the named database file, copying it to, and overwriting, your current database
connection. This can be particularly handy if your current connection is to the
special :memory: database, and you wish to populate it from an existing DB.
=head2 $dbh->sqlite_backup_to_file( $filename )
This method accesses the SQLite Online Backup API, and will take a backup of
the currently connected database, and write it out to the named file.
=head2 $dbh->sqlite_backup_from_dbh( $another_dbh )
This method accesses the SQLite Online Backup API, and will take a backup of
the database for the passed handle, copying it to, and overwriting, your current database
connection. This can be particularly handy if your current connection is to the
special :memory: database, and you wish to populate it from an existing DB.
You can use this to backup from an in-memory database to another in-memory database.
=head2 $dbh->sqlite_backup_to_dbh( $another_dbh )
This method accesses the SQLite Online Backup API, and will take a backup of
the currently connected database, and write it out to the passed database handle.
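A minimal usage sketch of the two file-based methods above, assuming DBD::SQLite is installed; the table name and backup filename are illustrative:

```perl
use strict;
use warnings;
use DBI;

# Populate a :memory: database, back it up to a file, then restore
# the file into a fresh in-memory handle.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, PrintError => 0 });
$dbh->do('CREATE TABLE words (w TEXT)');
$dbh->do(q{INSERT INTO words VALUES ('backup')});
$dbh->sqlite_backup_to_file('snapshot.db');

my $restored = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                            { RaiseError => 1, PrintError => 0 });
$restored->sqlite_backup_from_file('snapshot.db');
my ($w) = $restored->selectrow_array('SELECT w FROM words');
print "$w\n";    # backup
unlink 'snapshot.db';
```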
=head2 $dbh->sqlite_enable_load_extension( $bool )
Calling this method with a true value enables loading (external)
view all matches for this distribution
*/
/* Opcode: Prev P1 P2 *
**
** Back up cursor P1 so that it points to the previous key/data pair in its
** table or index. If there are no previous key/value pairs then fall through
** to the following instruction. But if the cursor backup was successful,
** jump immediately to P2.
*/
case OP_Prev:
case OP_Next: {
Cursor *pC;
view all matches for this distribution
lib/DBD/Sys/Roadmap.pod view on Meta::CPAN
... like routing information, firewall states, ...
=item Health Information
like last backup time, backup sizes, database roles, login failures (system,
databases, ...)
=back
And there may be many, many more which can be collected via an SQL interface,
view all matches for this distribution