

App-MtAws

README.md

Perl Multithreaded multipart sync to Amazon Glacier service.

## Intro

Amazon Glacier is an archive/backup service with a very low storage price, but with some caveats in usage and in archive retrieval pricing.
[Read more about Amazon Glacier][amazon glacier]

*mt-aws-glacier* is a client application for Amazon Glacier, written in the Perl programming language, for *nix.

[amazon glacier]:http://aws.amazon.com/glacier/

README.md

* Read the Amazon Glacier pricing [FAQ][Amazon Glacier faq] again, really. Beware of the retrieval fee.

* Before using this program, you should read the Amazon Glacier documentation and understand, in general, Amazon Glacier workflows and entities. This documentation
does not define any new layer of abstraction over Amazon Glacier entities.

* In general, all Amazon Glacier clients store metadata (filenames, file metadata) in their own formats, incompatible with each other. To restore a backup made with `mt-aws-glacier` you'll
need `mt-aws-glacier`; other software will most likely restore your data but lose the filenames.

* With a low `partsize` option you pay a bit more (Amazon charges for each upload request)

* For backups created with older versions (0.7x) of mt-aws-glacier, the Journal file is **required to restore the backup**.

* Use a **Journal file** only with the **same vault** (more info [here](#what-is-journal), [here](#how-to-maintain-a-relation-between-my-journal-files-and-my-vaults) and [here](https://github.com/vsespb/mt-aws-glacier/issues/50))

* When working with CD-ROM/CIFS/other non-Unix/non-POSIX filesystems, you might need to set `leaf-optimization` to `0`

README.md

* Please report any bugs or issues (using GitHub issues); any feedback is welcome.
* If you want to contribute to the source code, please contact me first and describe what you want to do.

## Usage

1. Create a directory containing the files to back up, for example `/data/backup`
2. Create a config file, say, `glacier.cfg`

		key=YOURKEY
		secret=YOURSECRET
		# region: eu-west-1, us-east-1 etc

README.md

	(note that Amazon Glacier does not return an error if the vault already exists, etc.)

4. Choose a filename for the Journal, for example, `journal.log`
5. Sync your files

		./mtglacier sync --config glacier.cfg --dir /data/backup --vault myvault --journal journal.log --concurrency 3

6. Add more files and sync again
7. Check that your local files have not been modified since the last sync

		./mtglacier check-local-hash --config glacier.cfg --dir /data/backup --journal journal.log

8. Delete some files from your backup location
9. Initiate an archive restore job on the Amazon side

		./mtglacier restore --config glacier.cfg --dir /data/backup --vault myvault --journal journal.log --max-number-of-files 10

10. Wait 4+ hours for Amazon Glacier to complete the archive retrieval
11. Download the restored files back to the backup location

		./mtglacier restore-completed --config glacier.cfg --dir /data/backup --vault myvault --journal journal.log

12. Delete all your files from the vault

		./mtglacier purge-vault --config glacier.cfg --vault myvault --journal journal.log

README.md

* Each text line in the file represents one record

* It's an append-only file. The file is opened in append-only mode, and new records are only added to the end. This guarantees that
you can recover the Journal file to a previous state in case of a bug in the program, a crash, or power/filesystem issues. You can even use `chattr +a` to set append-only protection on the Journal (see the example after this list).

* As the Journal file is append-only, it's easy to perform incremental backups of it
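
For example, a minimal sketch of the `chattr` protection mentioned above (requires root and an ext2/3/4 filesystem):

    sudo chattr +a journal.log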

#### Why is the Journal a file in the local filesystem, and not in online cloud storage (like Amazon S3 or Amazon DynamoDB)?

The Journal is needed to restore a backup, and we can expect that if you need to restore a backup, you have lost your filesystem, together with the Journal.

However, the Journal is also needed to perform *new backups* (`sync` command), to determine which files are already in Glacier and which are not, and to check local file integrity (`check-local-hash` command).
In practice you perform new backups every day, but you restore backups (and lose your filesystem) very rarely.

So a fast (local) Journal is essential to make new backups fast and cheap (important for users who back up thousands or millions of files).

And if you lose your Journal, you can restore it from Amazon Glacier (see the `retrieve-inventory` command). It's also recommended to back up your Journal
to another backup system (Amazon S3? Dropbox?) with another tool, because retrieving the inventory from Amazon Glacier is pretty slow.

Also, some users might want to back up the *same* files from *multiple* different locations. They would need a *synchronization* solution for their Journal files.

Anyway, I think the problem of putting Journals into the cloud can be automated and solved with a three-line bash script, as sketched below.
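
A minimal sketch of such a script (assumes the AWS CLI is installed and configured; the bucket name is hypothetical):

    #!/bin/sh
    # copy the Journal to a hypothetical S3 bucket after each sync
    aws s3 cp journal.log s3://my-journal-backups/journal.log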

#### How to maintain a relation between my journal files and my vaults?

README.md


9. It's better to keep the relation between a *vault* and its transfer root (`--dir` option) in one place, such as the config file.

#### Why doesn't the Journal (and the metadata stored in Amazon Glacier) contain file metadata (like permissions)?

If you want to store permissions, put your files into archives before backing them up to Amazon Glacier. There are lots of different things that could be stored as file metadata,
and most of them are not portable. Take a look at archive file formats: different formats allow storing different metadata.

It's possible that in the future `mtglacier` will support some other kinds of metadata.

## Specification for some commands

README.md


	_Uploads what_: a file, pointed to by `filename`.

	_Filename in Journal and Amazon Glacier metadata_: A relative path from `dir` to `filename`

		./mtglacier upload-file --config glacier.cfg --vault myvault --journal journal.log --dir /data/backup --filename /data/backup/dir1/myfile

	(this will upload the content of `/data/backup/dir1/myfile` to Amazon Glacier and use `dir1/myfile` as the filename for the Journal)

		./mtglacier upload-file --config glacier.cfg --vault myvault --journal journal.log --dir data/backup --filename data/backup/dir1/myfile

	(Let's assume the current directory is `/home`. Then this will upload the content of `/home/data/backup/dir1/myfile` to Amazon Glacier and use `dir1/myfile` as the filename for the Journal)

	NOTE: the file `filename` should be inside the directory `dir`

	NOTE: both `--filename` and `--dir` are resolved to full paths before the relative path from `--dir` to `--filename` is determined. Thus you'll get an error
	if parent directories are unreadable. Also, if you have `/dir/ds` as a symlink to the `/dir/d3` directory, then `--dir /dir` `--filename /dir/ds/file` will result in relative



App-Mxpress-PDF

public/javascripts/ace/mode-pgsql.js

        "path_mul_pt|path_n_eq|path_n_ge|path_n_gt|path_n_le|path_n_lt|path_npoints|path_out|" +
        "path_recv|path_send|path_sub_pt|pclose|percent_rank|pg_advisory_lock|" +
        "pg_advisory_lock_shared|pg_advisory_unlock|pg_advisory_unlock_all|" +
        "pg_advisory_unlock_shared|pg_advisory_xact_lock|pg_advisory_xact_lock_shared|" +
        "pg_available_extension_versions|pg_available_extensions|pg_backend_pid|" +
        "pg_backup_start_time|pg_cancel_backend|pg_char_to_encoding|pg_client_encoding|" +
        "pg_collation_for|pg_collation_is_visible|pg_column_is_updatable|pg_column_size|" +
        "pg_conf_load_time|pg_conversion_is_visible|pg_create_restore_point|" +
        "pg_current_xlog_insert_location|pg_current_xlog_location|pg_cursor|pg_database_size|" +
        "pg_describe_object|pg_encoding_max_length|pg_encoding_to_char|" +
        "pg_event_trigger_dropped_objects|pg_export_snapshot|pg_extension_config_dump|" +
        "pg_extension_update_paths|pg_function_is_visible|pg_get_constraintdef|pg_get_expr|" +
        "pg_get_function_arguments|pg_get_function_identity_arguments|" +
        "pg_get_function_result|pg_get_functiondef|pg_get_indexdef|pg_get_keywords|" +
        "pg_get_multixact_members|pg_get_ruledef|pg_get_serial_sequence|pg_get_triggerdef|" +
        "pg_get_userbyid|pg_get_viewdef|pg_has_role|pg_identify_object|pg_indexes_size|" +
        "pg_is_in_backup|pg_is_in_recovery|pg_is_other_temp_schema|pg_is_xlog_replay_paused|" +
        "pg_last_xact_replay_timestamp|pg_last_xlog_receive_location|" +
        "pg_last_xlog_replay_location|pg_listening_channels|pg_lock_status|pg_ls_dir|" +
        "pg_my_temp_schema|pg_node_tree_in|pg_node_tree_out|pg_node_tree_recv|" +
        "pg_node_tree_send|pg_notify|pg_opclass_is_visible|pg_operator_is_visible|" +
        "pg_opfamily_is_visible|pg_options_to_table|pg_postmaster_start_time|" +
        "pg_prepared_statement|pg_prepared_xact|pg_read_binary_file|pg_read_file|" +
        "pg_relation_filenode|pg_relation_filepath|pg_relation_is_updatable|pg_relation_size|" +
        "pg_reload_conf|pg_rotate_logfile|pg_sequence_parameters|pg_show_all_settings|" +
        "pg_size_pretty|pg_sleep|pg_start_backup|pg_stat_clear_snapshot|pg_stat_file|" +
        "pg_stat_get_activity|pg_stat_get_analyze_count|pg_stat_get_autoanalyze_count|" +
        "pg_stat_get_autovacuum_count|pg_stat_get_backend_activity|" +
        "pg_stat_get_backend_activity_start|pg_stat_get_backend_client_addr|" +
        "pg_stat_get_backend_client_port|pg_stat_get_backend_dbid|pg_stat_get_backend_idset|" +
        "pg_stat_get_backend_pid|pg_stat_get_backend_start|pg_stat_get_backend_userid|" +

public/javascripts/ace/mode-pgsql.js

        "pg_stat_get_xact_numscans|pg_stat_get_xact_tuples_deleted|" +
        "pg_stat_get_xact_tuples_fetched|pg_stat_get_xact_tuples_hot_updated|" +
        "pg_stat_get_xact_tuples_inserted|pg_stat_get_xact_tuples_returned|" +
        "pg_stat_get_xact_tuples_updated|pg_stat_reset|pg_stat_reset_shared|" +
        "pg_stat_reset_single_function_counters|pg_stat_reset_single_table_counters|" +
        "pg_stop_backup|pg_switch_xlog|pg_table_is_visible|pg_table_size|" +
        "pg_tablespace_databases|pg_tablespace_location|pg_tablespace_size|" +
        "pg_terminate_backend|pg_timezone_abbrevs|pg_timezone_names|pg_total_relation_size|" +
        "pg_trigger_depth|pg_try_advisory_lock|pg_try_advisory_lock_shared|" +
        "pg_try_advisory_xact_lock|pg_try_advisory_xact_lock_shared|pg_ts_config_is_visible|" +
        "pg_ts_dict_is_visible|pg_ts_parser_is_visible|pg_ts_template_is_visible|" +



App-Netdisco

lib/App/Netdisco/Worker/Plugin/Discover/Properties.pm

  vars->{'hook_data'}->{'ports'} = [values %deviceports];

  schema('netdisco')->resultset('DevicePort')->txn_do_locked(ACCESS_EXCLUSIVE, sub {
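    # JSON decoder for the custom_fields column; allow_nonref(1) also permits bare scalar values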
    my $coder = JSON::PP->new->utf8(0)->allow_nonref(1)->allow_unknown(1);

    # backup the custom_fields
    my %fields = map  {($_->{port} => $coder->decode(Encode::encode('UTF-8',$_->{custom_fields} || '{}')))}
                 grep {exists $deviceports{$_->{port}}}
                      $device->ports
                             ->search(undef, {columns => [qw/port custom_fields/]})
                             ->hri->all;



App-NetdiscoX-Web-Plugin-RANCID

lib/App/NetdiscoX/Web/Plugin/RANCID.pm


=cut

=head1 NAME

App::NetdiscoX::Web::Plugin::RANCID - Link to device backups in RANCID/WebSVN

=head1 DEPRECATED

This plugin is deprecated and no longer maintained!



App-Netsync

share/mib/CISCO-STACK-MIB.my

        DESCRIPTION
                 "Modified chassisComponentType to include:
                 'fanMod4Hs'.

                 Modified syslogMessageFacility to include:
                 'eou', 'backup', 'eoam', 'webauth'.

                 Modified sysErrDisableTimeoutEnable to include:
                 'ethernetOam', 'gl2ptEoamThresholdExceed'.

                 Updated chassisPs1Type and chassisPs2Type to include:

share/mib/CISCO-STACK-MIB.my

        SYNTAX        INTEGER { true(1), false(2) }
        MAX-ACCESS    read-only
        STATUS        current
        DESCRIPTION   "This object reflects whether or not the TR-CRF
                      VLAN associated with this entry is configured
                      as a backup TR-CRF. A value of true(1) indicates
                      the TR-CRF is a configured as a backup. A value
                      of false(2) indicates the TR-CRF is not configured
                      as a backup."
        ::= { tokenRingDripLocalVlanStatusEntry 6 }

tokenRingDripOwnerNodeID OBJECT-TYPE
        SYNTAX        OCTET STRING (SIZE(6))
        MAX-ACCESS    read-only

share/mib/CISCO-STACK-MIB.my

                                gl2pt(36),
                                callhome(37),
                                dhcpsnooping(38),
                                diags(40),
                                eou(42),
                                backup(43),
                                eoam(44),
                                webauth(45),
                                dom(46),
                                mvrp(47)
                              }



App-NoodlePay

bin/noodlepay.pl

desktop or laptop computer, and a USB cable to connect it to the
hardware wallet device.

Compared to other hardware wallet solutions, Noodle Pay also greatly
simplifies physically securing your private keys, and keeping
backups. You can simply pop the MicroSD card out of the Noodle Air,
and keep it physically secure. For backups, you can just duplicate the
MicroSD card, and keep multiple copies in safe locations.

=head1 CONFIGURATION

The $electrum variable at the top of noodlepay.pl should be set to the



App-Office-CMS

lib/App/Office/CMS.pm


=item o How do I back up the database?

See the config file .htoffice.cms.conf:

	backup_command = pg_dump -U cms cms
	backup_file = /tmp/pg.cms.backup.dat

When backup_command has a value, the Edit Contents tab gets a [Backup] button, and when this button
is clicked:

=over 4

=item o The command is run

lib/App/Office/CMS.pm


=item o Otherwise, STDOUT is written to the output file

=back

So, why are there 2 lines, and not something like 'pg_dump -U cms cms > /tmp/pg.cms.backup.dat'?

Because I use L<Capture::Tiny>, which does not want you to use redirection.

Lastly, the output is written using L<File::Slurper>.
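
A minimal sketch (not the module's actual code) of how those two settings might be wired together with those modules:

	use Capture::Tiny 'capture';
	use File::Slurper 'write_text';

	# run the backup command without shell redirection, capturing its output
	my ($stdout, $stderr, $exit) = capture { system 'pg_dump', '-U', 'cms', 'cms' };

	# on success, write STDOUT to the output file
	write_text('/tmp/pg.cms.backup.dat', $stdout) if $exit == 0;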

lib/App/Office/CMS.pm


=head1 TODO

=over 4

=item o Adopt Git::Repository for versioned backup

=item o Clean up error handling

For example, when build_error_result is called, rather than build_success_result, the
data sent to Javascript must be handled slightly differently.

lib/App/Office/CMS.pm


=item o Add an option, perhaps, to escape entities when inputting HTML

=item o Adopt DBIx::Connector

=item o Implement user-initiated backup and restore

=item o Change class hierarchy

This is so View does not have to pass so many parameters to its 'has-a' attributes



App-PDFUtils

lib/App/PDFUtils.pm

This program is a wrapper for <prog:qpdf> to password-protect PDF files
(in-place). It is the counterpart of <prog:remove-pdf-password>. Why use this
wrapper instead of **qpdf** directly? This wrapper offers configuration file
support, where you can put the password(s) you want to use. The wrapper
also offers multiple file support and additional options, e.g. whether to create
a backup.

MARKDOWN
    args => {
        %argspec0_files,
        password => {
            schema => ['str*', min_len=>1],
            req => 1,
        },
        backup => {
            summary => 'Whether to back up the original file to ORIG~',
            schema => 'bool*',
            default => 1,
        },
        # XXX key_length (see qpdf, but when 256 can't be opened by evince)
        # XXX other options (see qpdf)

lib/App/PDFUtils.pm

            next FILE;
        }

      BACKUP:
        {
            last unless $args{backup};
            unless (rename $f, "$f~") {
                warn "Can't backup original '$f' to '$f~': $!, skipped backup\n";
                last;
            };
        }
        unless (rename $tempf, $f) {
            $envres->add_result(500, "Can't rename $tempf to $f: $!", {item_id=>$f});

lib/App/PDFUtils.pm

use the file. (The banks could've sent the PDF in a password-protected .zip, or
used PGP-encrypted email, but I digress.)

Compared to using **qpdf** directly, this wrapper offers some additional
features/options and convenience, for example: multiple file support, multiple
password matching attempts, a configuration file, an option for whether you want
a backup, etc.

You can provide the passwords to be tried in a configuration file,
`~/remove-pdf-password.conf`, e.g.:
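
A hypothetical sketch of such a file (invented passwords; assumes one
`passwords` line per candidate, the repeated-key convention for list values):

    passwords = hunter2
    passwords = statement2023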

lib/App/PDFUtils.pm

    args => {
        %argspec0_files,
        passwords => {
            schema => ['array*', of=>['str*', min_len=>1], min_len=>1],
        },
        backup => {
            summary => 'Whether to back up the original file to ORIG~',
            schema => 'bool*',
            default => 1,
        },
    },
    deps => {

lib/App/PDFUtils.pm

            next FILE;
        }

      BACKUP:
        {
            last unless $args{backup};
            unless (rename $f, "$f~") {
                warn "Can't backup original '$f' to '$f~': $!, skipped backup\n";
                last;
            };
        }
        unless (rename $tempf, $f) {
            $envres->add_result(500, "Can't rename $tempf to $f: $!", {item_id=>$f});

lib/App/PDFUtils.pm

This program is a wrapper for L<qpdf> to password-protect PDF files
(in-place). It is the counterpart of L<remove-pdf-password>. Why use this
wrapper instead of B<qpdf> directly? This wrapper offers configuration file
support, where you can put the password(s) you want to use. The wrapper
also offers multiple file support and additional options, e.g. whether to create
a backup.

This function is not exported.

Arguments ('*' denotes required arguments):

=over 4

=item * B<backup> => I<bool> (default: 1)

Whether to back up the original file to ORIG~.

=item * B<files>* => I<array[filename]>

(No description)

lib/App/PDFUtils.pm

use the file. (The banks could've sent the PDF in a password-protected .zip, or
used PGP-encrypted email, but I digress.)

Compared to using B<qpdf> directly, this wrapper offers some additional
features/options and convenience, for example: multiple file support, multiple
password matching attempts, a configuration file, an option for whether you want
a backup, etc.

You can provide the passwords to be tried in a configuration file,
C<~/remove-pdf-password.conf>, e.g.:

lib/App/PDFUtils.pm


Arguments ('*' denotes required arguments):

=over 4

=item * B<backup> => I<bool> (default: 1)

Whether to back up the original file to ORIG~.

=item * B<files>* => I<array[filename]>

(No description)



App-PLab

bin/ManCen

      # big rename from .bak to .cen
      $statwin-> text("Restoring .cen files...");
      while ( $curr != $to + $incr) {
         my $cenname = $w-> win_extname( $w-> win_formfilename( $curr));
         if ( -f "$cenname.bak") {
              Prima::MsgBox::message("Cannot rename backup file. Please note that it $cenname.bak file contains actual information.")
                 if !unlink($cenname) || !rename( "$cenname.bak", $cenname);
         } else {
            Prima::MsgBox::message("Cannot delete $cenname. Note that it contains non actual information.")
               if !unlink($cenname);
         }



App-PPI-Dumper

MANIFEST.SKIP

\bbuild.com$

# and Module::Build::Tiny generated files
\b_build_params$

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$



App-PTP

script/ptp

regular expression language (B<grep> has a B<-P> flag but B<sed> has nothing of
the kind).

=item * Provide powerful input/output file support that is lacking when
using vanilla Perl one-liners (recursion into directories, in-place output with
optional backups, etc.).

=item * Pipelining of multiple operations on multiple files (a pipeline
made of several standard tools usually makes it difficult to process several
input files at once).



App-Pastebin-sprunge

MANIFEST.SKIP

\bBuild.bat$
\bBuild.COM$
\bBUILD.COM$
\bbuild.com$

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$



App-PersistentSSH

MANIFEST.SKIP


# Avoid Module::Build generated and utility files.
\bBuild$
\b_build/

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$



App-Phoebe

lib/App/Phoebe/Capsules.pm

from other devices or friends. The file format is line oriented, each line
containing two fingerprints, C<FROM> and C<TO>.

🔥 The capsule name I<login> is reserved.

🔥 The file names I<archive>, I<backup>, and I<upload> are reserved.

=head1 NO WIKI, ONLY CAPSULES

Here's how to disable all wiki functions of Phoebe and just use capsules. The
C<nothing_else> function comes right after C<capsules> as an extension and

lib/App/Phoebe/Capsules.pm

    return result($stream, "30", "gemini://$host:$port/$capsule_space/$capsule/$id");
  } elsif (($host) = $url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/login$!) {
    return serve_capsule_login($stream, $host);
  } elsif (($host, $capsule) = $url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/([^/]+)/archive$!) {
    return serve_capsule_archive($stream, $host, decode_utf8(uri_unescape($capsule)));
  } elsif (($host, $capsule, $id) = $url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/([^/]+)/backup(?:/([^/]+))?$!) {
    return serve_capsule_backup($stream, $host, map { decode_utf8(uri_unescape($_)) } $capsule, $id||"");
  } elsif (($host, $capsule, $id) = $url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/([^/]+)/delete(?:/([^/]+))?$!) {
    return serve_capsule_delete($stream, $host, map { decode_utf8(uri_unescape($_)) } $capsule, $id||"");
  } elsif ($url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/([^/]+)/access$!) {
    return result($stream, "10", "Password");
  } elsif (($host, $capsule, $token) = $url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/([^/]+)/access\?(.+)$!) {

lib/App/Phoebe/Capsules.pm

  my ($stream, $host, $capsule) = @_;
  my $name = capsule_name($stream);
  return 1 unless is_my_capsule($stream, $name, $capsule, 'archive');
  # use /bin/tar instead of Archive::Tar to save memory
  my $dir = wiki_dir($host, $capsule_space) . "/" . encode_utf8($capsule);
  my $file = "$dir/backup/data.tar.gz";
  if (-e $file and time() - modified($file) <= 300) { # data is valid for 5 minutes
    $log->info("Serving cached data archive for $capsule");
    success($stream, "application/tar");
    $stream->write(read_binary($file));
  } else {
    write_binary($file, ""); # truncate in order to avoid "file changed as we read it" warning
    my @command = ('/bin/tar', '--create', '--gzip',
		   '--file', $file,
		   '--exclude', "backup",
		   '--directory', "$dir/..",
		   encode_utf8($capsule));
    $log->debug("@command");
    if (system(@command) == 0) {
      $log->info("Serving new data archive for $capsule");

lib/App/Phoebe/Capsules.pm

    }
  }
  return 1;
}

sub serve_capsule_backup {
  my ($stream, $host, $capsule, $id) = @_;
  my $name = capsule_name($stream);
  return 1 unless is_my_capsule($stream, $name, $capsule, 'view the backup of');
  my $dir = capsule_dir($host, $capsule) . "/backup";
  if ($id) {
    $log->info("Serving $capsule backup $id");
    # this works for text files, too!
    success($stream, mime_type($id));
    my $file = $dir . "/" . encode_utf8($id);
    $stream->write(read_binary($file));
  } else {
    $log->info("Backup for $capsule");
    success($stream);
    $stream->write("# " . ucfirst($capsule) . " backup\n");
    $stream->write("When editing a page, a backup is saved here as long as at least 10 minutes have passed.\n");
    my @files;
    @files = read_dir($dir) if -d $dir;
    if (not @files) {
      $stream->write("There are no backup files, yet.\n") unless @files;
    } else {
      $stream->write("Files:\n");
      for my $file (sort @files) {
	print_link($stream, $host, $capsule_space, $file, "$capsule/backup/$file");
      };
    }
  }
  return 1;
}

lib/App/Phoebe/Capsules.pm

  return 1 unless is_my_capsule($stream, $name, $capsule, 'delete a file in');
  my $dir = capsule_dir($host, $capsule);
  if ($id) {
    $log->info("Delete $id from $capsule");
    my $file = $dir . "/" . encode_utf8($id);
    my $backup_dir = "$dir/backup";
    my $backup_file = $backup_dir . "/" . encode_utf8($id);
    mkdir($backup_dir) unless -d $backup_dir;
    rename $file, $backup_file if -f $file;
    result($stream, "30", to_url($stream, $host, $capsule_space, $capsule));
  } else {
    $log->info("Delete for $capsule");
    success($stream);
    $stream->write("# Delete a file in " . ucfirst($capsule) . "\n");
    $stream->write("Deleting a file moves it to the backup.\n");
    my @files;
    @files = grep { $_ ne "backup" } read_dir($dir) if -d $dir;
    if (not @files) {
      $stream->write("There are no files to delete.\n") unless @files;
    } else {
      $stream->write("Files:\n");
      for my $file (sort @files) {

lib/App/Phoebe/Capsules.pm

  my ($stream, $host, $capsule) = @_;
  my $name = capsule_name($stream);
  my $dir = capsule_dir($host, $capsule);
  my @files;
  @files = read_dir($dir) if -d $dir;
  my $has_backup = first { $_ eq "backup" } @files;
  @files = grep { $_ ne "backup" } @files if $has_backup;
  success($stream);
  $log->info("Serving $capsule");
  $stream->write("# " . ucfirst($capsule) . "\n");
  if ($name) {
    if ($name eq $capsule) {
      print_link($stream, $host, $capsule_space, "Specify file for upload", "$capsule/upload");
      print_link($stream, $host, $capsule_space, "Delete file", "$capsule/delete") if @files;
      print_link($stream, $host, $capsule_space, "Share access with other people or other devices", "$capsule/share");
      print_link($stream, $host, $capsule_space, "Access backup", "$capsule/backup") if $has_backup;
      print_link($stream, $host, $capsule_space, "Download archive", "$capsule/archive") if @files;
    } elsif (@capsule_tokens) {
      print_link($stream, $host, $capsule_space, "Access this capsule", "$capsule/access");
    }
  }

lib/App/Phoebe/Capsules.pm

    return result($stream, "51", "This file does not exist") unless -f $file;
    return result($stream, "40", "Cannot delete this file") unless unlink $file;
    $log->info("Deleted $file");
  } else {
    mkdir($dir) unless -d $dir;
    backup($dir, $id);
    write_binary($file, $buffer);
    $log->info("Wrote $file");
    return result($stream, "30", to_url($stream, $host, $capsule_space, $capsule));
  }
}

sub backup {
  my ($dir, $id) = @_;
  my $file = $dir . "/" . encode_utf8($id);
  my $backup_dir = "$dir/backup";
  my $backup_file = $backup_dir . "/" . encode_utf8($id);
  # make a backup only if the last edit was more than 10 minutes ago
  return unless -f $file and (time - (stat($file))[9]) > 600;
  mkdir($backup_dir) unless -d $backup_dir;
  write_binary($backup_file, read_binary($file));
}

1;



App-PhotoDB

lib/App/PhotoDB/commands.pm


=head2 db

The C<db> command provides a set of subcommands for managing the database backend.

=head3 db backup

Back up the contents of the database

=head3 db logs

lib/App/PhotoDB/commands.pm


Upgrade database to the latest schema

=cut
	$handlers{db} = {
		'backup' => { 'handler' => \&notimplemented, 'desc' => 'Back up the contents of the database' },
		'logs'   => { 'handler' => \&db_logs,        'desc' => 'Show activity logs from the database' },
		'stats'  => { 'handler' => \&db_stats,       'desc' => 'Show statistics about database usage' },
		'test'   => { 'handler' => \&db_test,        'desc' => 'Test database connectivity' },
	};



App-Pod

t/cpan/Mojo2/File.pm

Change file permissions.

=head2 copy_to

  my $destination = $path->copy_to('/home/sri');
  my $destination = $path->copy_to('/home/sri/.vimrc.backup');

Copy file with L<File::Copy> and return the destination as a L<Mojo::File> object.

=head2 dirname

t/cpan/Mojo2/File.pm

Create the directories if they don't already exist; any additional arguments are passed through to L<File::Path>.

=head2 move_to

  my $destination = $path->move_to('/home/sri');
  my $destination = $path->move_to('/home/sri/.vimrc.backup');

Move file with L<File::Copy> and return the destination as a L<Mojo::File> object.

=head2 new



App-PureProxy

MANIFEST.SKIP

\btmon.out$

# Avoid Devel::NYTProf generated files
\bnytprof

# Avoid temp and backup files.
~$
\.tmp$
\.old$
\.bak$
\#$



App-Rad

lib/App/Rad/Include.pm

sub _get_oneliner_code {
    return _sanitize( _deparse($_[0]) );
}


#TODO: option to do it saving a backup file
# (behavior probably set via 'setup')
# inserts the string received
# (hopefully code) inside the
# user's program file as a 'sub'
sub _insert_code_in_file {



App-Rcsync

bin/rcsync


    <vim>
        filename /home/supermario/.vimrc
        template vimrc.tt
        <params>
            backupdir /tmp/vim
        </params>
    </vim>

In C<$HOME/rcsync/vimrc.tt>:

    set nocompatible
    set backup
    set backupdir=[% backupdir %]
    ...

From the command line:

    rcsync vim



App-RecordStream

MANIFEST.SKIP

\bBuild.bat$
\bBuild.COM$
\bBUILD.COM$
\bbuild.com$

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$



App-Relate-Complex

relate_complex

Essentially this script is the equivalent of

      locate primary_term | egrep term2 | egrep term3 [....| termN]

Though by default relate_complex also automatically screens out
some "uninteresting" files (emacs backups, CVS/RCS
repositories, etc.).

An important performance hint: you can speed up relate_complex
tremendously by using a relatively unique first term.
For example, if you're on a unix box, you don't want to

relate_complex

omitting uninteresting files, but it's guaranteed that you'll
be confused by this some day, so don't say I didn't warn you.
As of this writing, files matching these patterns are ignored by
default:

      '~$';      # emacs backups
      '/\#';     # emacs autosaves
      ',v$';     # cvs/rcs repository files
      '/CVS$'
      '/CVS/'
      '/RCS$'

relate_complex

wouldn't be very useful (consider that with full paths *all*
listings match "^/"). So instead we DWIM and turn "^" into a
beginning-of-name match.  An embedded or trailing "^" is left
alone.  Ditto for a "^" in front of a slash: if you ask for
'^/home/bin', maybe that's really what you want (as opposed to,
say, '/tmp/backup/home/bin').

=head2 alternate filters

You can always determine which filters are available with the
"-L" option:



App-Relate

bin/relate


  locate primary_term | egrep term2 | egrep term3 [....| termN]

Though it also has a few other features, B<relate>:

  o  screens out "uninteresting" files by default (emacs backups, CVS/RCS files)
  o  has options to restrict reports to files, directories or symlinks.

An important performance hint: you can speed up relate
tremendously by using a relatively unique first term.  For
example, if you're on a unix box, you don't want to start

bin/relate

which overrides the default filter and returns all matches.

As of this writing, files matching these patterns are ignored by
default:

      '~$'       # emacs backups
      '/\#'      # emacs autosaves
      ',v$'      # cvs/rcs repository files
      '/CVS$'
      '/CVS/'
      '/RCS$'



App-Requirement-Arch

scripts/ra_edit.pl

  --master_template_file     file containing the master template
  --master_categories_file   file containing the categories template
  --free_form_template       user defined template matching the master template
  --no_check_categories      do not check the requirement categories
  --no_spellcheck            perform no spellchecking
  --no_backup                do not save a backup file

FILES
	~/.ra/templates/master_template.pl
	~/.ra/templates/master_categories.pl
	~/.ra/templates/free_form_template.rat

scripts/ra_edit.pl

}

#------------------------------------------------------------------------------------------------------------------

my ($master_template_file, $master_categories_file, $free_form_template) ;
my ($no_spellcheck, $raw, $no_backup, $no_check_categories) ;

die 'Error parsing options!' unless
	GetOptions
		(
		'master_template_file=s' => \$master_template_file,
		'master_categories_file=s' => \$master_categories_file,
		'free_form_template=s' => \$free_form_template,
		'no_spellcheck' => \$no_spellcheck,
		'raw=s' => \$raw,
		'no_backup' => \$no_backup,
		'no_check_categories' => \$no_check_categories,
		'h|help' => \&display_help, 
		
		'dump_options' => 
			sub 

scripts/ra_edit.pl

					master_template_file
					master_categories_file
					free_form_template
					no_spellcheck
					raw
					no_backup
					no_check_categories
					help
					) ;
					
				exit(0) ;

scripts/ra_edit.pl

	
eval
	{
	my $edited_requirement_text = Proc::InvokeEditor->edit($requirement_text, '.pl') ;

	# save backup
	write_file("$requirement_file.bak", $requirement_text) unless $no_backup ;

	# remove violation message
	$edited_requirement_text =~ s/\Q$violations_text// ;
	
	# save edited requirement



App-RoboBot

doc/upgrade/index.rst

data, follow the steps outlined here before running any newer version.

#. Stop your current |RB| instance and ensure nothing else is accessing or
   modifying its database.

#. Perform a *data-only* database backup of the entire |RB| database. Assuming
   the PostgreSQL database is running on the default port of localhost and is
   named ``robobot``, the command will look like this::

       pg_dump -a -Fp -h localhost -d robobot -U robobot > robobot-dataonly.pgdump

doc/upgrade/index.rst


#. Stop the |RB| instance once migrations have been applied.

#. Use ``psql`` to connect to your |RB| database, as whatever user you have
   configured for access, and issue the following SQL statement to remove the
   redundant skill level names (your data-only backup from earlier is going to
   try to recreate these, which will be a problem if they already exist)::

       DELETE FROM skills_levels;

#. Restore the data-only backup you created earlier (again, assuming your |RB|
   database is on the default port of localhost; adjust the connection settings
   as necessary to match your actual environment)::

       psql -h localhost -U robobot -d robobot < robobot-dataonly.pgdump



App-SCM-Digest

lib/App/SCM/Digest.pm

            $method->($repo_path, $db_path, $repository);
        };
        if (my $error = $@) {
            chdir $repo_path;
            my ($name, $impl) = _load_repository($repository);
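            # on failure: move the existing checkout aside, re-clone from
            # scratch, and retry; the moved copy is restored if that fails too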
            my $backup_dir = tempdir(CLEANUP => 1);
            my $backup_path = $backup_dir.'/temporary';
            my $do_backup = (-e $name);
            if ($do_backup) {
                my $res = move($name, $backup_path);
                if (not $res) {
                    warn "Unable to backup repository for re-clone: $!";
                }
            }
            eval {
                $impl->clone($repository->{'url'}, $name);
                $method->($repo_path, $db_path, $repository);
            };
            if (my $sub_error = $@) {
                if ($do_backup) {
                    my $rm_error;
                    rmtree($name, { error => \$rm_error });
                    if ($rm_error and @{$rm_error}) {
                        my $info =
                            join ', ',
                            map { join ':', %{$_} }
                                @{$rm_error};
                        warn "Unable to restore repository: ".$info;
                    } else {
                        my $res = move($backup_path, $name);
                        if (not $res) {
                            warn "Unable to restore repository on ".
                                 "failed rerun: $!";
                        }
                    }



App-SD

lib/App/SD/Config.pm

                }, {
                    key   => "replica.'$name'.uuid",
                    value => $uuid,
                };
            }
            # move the old config file to a backup
            my $backup_file = $file;
            unless ( $self->_deprecated_repo_config_names->{$file} ) {
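                # not a deprecated config name: move the file aside as *.bak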
                $backup_file = "$file.bak";
                rename $file, $backup_file;
            }

            # we want to write the new file to a supported filename if
            # it's from a deprecated config name (replica/prophetrc)
            $file = File::Spec->catfile( $self->app_handle->handle->fs_root, 'config' )

lib/App/SD/Config.pm


            # write the new config file (with group_set)
            $self->group_set( $file, \@config_to_set, 1);

            # tell the user that we're done
            print "done.\nOld config can be found at $backup_file; "
                  ,"new config is $file.\n\n";

}

sub _deprecated_repo_config_names {



App-Sandy

ppport.h

av_top_index|5.017009|5.003007|p
av_top_index_skip_len_mg|5.025010||Viu
av_undef|5.003007|5.003007|
av_unshift|5.003007|5.003007|
ax|5.003007|5.003007|
backup_one_GCB|5.025003||Viu
backup_one_LB|5.023007||Viu
backup_one_SB|5.021009||Viu
backup_one_WB|5.021009||Viu
bad_type_gv|5.019002||Viu
bad_type_pv|5.016000||Viu
BADVERSION|5.011004||Viu
BASEOP|5.003007||Viu
BhkDISABLE|5.013003||xV



App-ScanPrereqs

lib/App/ScanPrereqs.pm

        sub {
            no warnings 'once';

            return unless -f;
            my $path = "$File::Find::dir/$_";
            if (Filename::Backup::check_backup_filename(filename=>$_)) {
                log_debug("Skipping backup file %s ...", $path);
                return;
            }
            if (/\A(\.git)\z/) {
                log_debug("Skipping %s ...", $path);
                return;



App-SeismicUnixGui

lib/App/SeismicUnixGui/big_streams/BackupProject.pl

my $tar_input      = $HOME . '/'. $project_directory;

=head2 Verify project is a true SUG project

Collect the project names and
compare the backup project against them.

=cut

my @PROJECT_HOME_aref = $L_SU_local_user_constants->get_PROJECT_HOMES_aref();
my @project_name_aref = $L_SU_local_user_constants->get_project_names();

lib/App/SeismicUnixGui/big_streams/BackupProject.pl


=pod
 
 Check that the project directory contains Project.config.
 If Project.config exists, then
 copy this file with the Project during the backup.
 
=cut

if ( $project_exists ) {



App-SimpleBackuper

lib/App/SimpleBackuper/Backup.pm

}

sub Backup {
	my($options, $state) = @_;
	
	my($backups, $files, $parts, $blocks) = @{ $state->{db} }{qw(backups files parts blocks)};
	
	die "Backup '$options->{\"backup-name\"}' already exists" if grep { $backups->unpack($_)->{name} eq $options->{'backup-name'} } @$backups;
	
	$state->{$_} = 0 foreach qw(last_backup_id last_file_id last_block_id bytes_processed bytes_in_last_backup total_weight);
	
	print "Preparing to backup: " if $options->{verbose};
	$state->{profile}->{init_ids} = - time();
	foreach (@$backups) {
		my $id = $backups->unpack($_)->{id};
		$state->{last_backup_id} = $id if ! $state->{last_backup_id} or $state->{last_backup_id} < $id;
	}
	#print "last backup id $state->{last_backup_id}, ";
	foreach (@$files) {
		my $file = $files->unpack($_);
		$state->{last_file_id} = $file->{id} if ! $state->{last_file_id} or $state->{last_file_id} < $file->{id};
		if($file->{versions} and @{ $file->{versions} } and $file->{versions}->[-1]->{backup_id_max} == $state->{last_backup_id}) {
			$state->{bytes_in_last_backup} += $file->{versions}->[-1]->{size};
		}
	}
	#print "last file id $state->{last_file_id}, ";
	foreach (@$blocks) {
		my $id = $blocks->unpack($_)->{id};

lib/App/SimpleBackuper/Backup.pm

	for(my $q = 0; $q <= $#$parts; $q++) {
		$state->{total_weight} += $parts->unpack($parts->[ $q ])->{size};
	}
	print fmt_weight($state->{total_weight}).", " if $options->{verbose};
	
	my $cur_backup = {name => $options->{'backup-name'}, id => ++$state->{last_backup_id}, files_cnt => 0, max_files_cnt => 0};
	$backups->upsert({ id => $cur_backup->{id} }, $cur_backup);
	
	{
		$state->{blocks_info} = _BlocksInfo($options, $state);
		
		$state->{blocks2delete_prio2size2chunks} = {};

lib/App/SimpleBackuper/Backup.pm

					$file //= {
						parent_id	=> $file_id,
						id			=> ++$state->{last_file_id},
						name		=> $path_node,
						versions	=> [ {
							backup_id_min	=> $state->{last_backup_id},
							backup_id_max	=> 0,
							uid				=> 0,
							gid				=> 0,
							size			=> 0,
							mode			=> 0,
							mtime			=> 0,

lib/App/SimpleBackuper/Backup.pm

	
	while(my($full_path, $dir2upd) = each %dirs2upd) {
		print "Updating dir $full_path..." if $options->{verbose};
		my $file = $files->find_by_parent_id_name($dir2upd->{parent_id}, $dir2upd->{filename});
		my @stat = lstat($full_path);
		if(@stat and $file->{versions}->[-1]->{backup_id_max} != $state->{last_backup_id}) {
			my($uid, $gid) =_proc_uid_gid($stat[4], $stat[5], $state->{db}->{uids_gids});
			if($file->{versions}->[-1]->{backup_id_max} == $state->{last_backup_id} - 1) {
				$file->{versions}->[-1] = {
					%{ $file->{versions}->[-1] },
					backup_id_max	=> $state->{last_backup_id},
					uid				=> $uid,
					gid				=> $gid,
					size			=> $stat[7],
					mode			=> $stat[2],
					mtime			=> $stat[9],

lib/App/SimpleBackuper/Backup.pm

					symlink_to		=> undef,
					parts			=> [],
				};
			} else {
				push @{ $file->{versions} }, {
					backup_id_min	=> $state->{last_backup_id},
					backup_id_max	=> $state->{last_backup_id},
					uid				=> $uid,
					gid				=> $gid,
					size			=> $stat[7],
					mode			=> $stat[2],
					mtime			=> $stat[9],

lib/App/SimpleBackuper/Backup.pm

					parts			=> [],
				}
			}
			$files->upsert({ id => $file->{id}, parent_id => $file->{parent_id} }, $file);
			
			my $backup = $backups->find_row({ id => $state->{last_backup_id} });
			$backup->{files_cnt}++;
			$backup->{max_files_cnt}++;
			$backups->upsert({ id => $backup->{id} }, $backup );
		}
		
		print "OK\n" if $options->{verbose};
	}
	
	my $backup = $backups->find_row({ id => $state->{last_backup_id} });
	$backup->{is_done} = 1;
	$backups->upsert({ id => $backup->{id} }, $backup );
	
	App::SimpleBackuper::BackupDB($options, $state);
	
	_print_progress($state) if ! $options->{quiet};
}

sub _print_progress {
	print "Progress: ";
	if($_[0]->{bytes_in_last_backup}) {
		printf "processed %s of %s in last backup, ", fmt_weight($_[0]->{bytes_processed}), fmt_weight($_[0]->{bytes_in_last_backup});
	}
	printf "total backups weight %s.\n", fmt_weight($_[0]->{total_weight});
}

use Text::Glob qw(match_glob);
use Fcntl ':mode'; # For S_ISDIR & same
use App::SimpleBackuper::RegularFile;

lib/App/SimpleBackuper/Backup.pm

	else {
		printf ", stat: %s:%s %o %s modified at %s", scalar getpwuid($stat[4]), scalar getgrgid($stat[5]), $stat[2], fmt_weight($stat[7]), fmt_datetime($stat[9]) if $options->{verbose};
	}
	
	
	my($backups, $blocks, $files, $parts, $uids_gids) = @{ $state->{db} }{qw(backups blocks files parts uids_gids)};
	
	
	my($uid, $gid) = _proc_uid_gid($stat[4], $stat[5], $uids_gids);
	
	
	my($file); {
		my($filename) = $task->[0] =~ /([^\/]+)\/?$/;
		$file = $files->find_by_parent_id_name($task->[2], $filename);
		if($file) {
			print ", is old file #$file->{id}" if $options->{verbose};
			if($file->{versions}->[-1]->{backup_id_max} == $state->{last_backup_id}) {
				print ", is already backuped.\n" if $options->{verbose};
				return;
			}
		} else {
			$file = {
				parent_id	=> $task->[2],

lib/App/SimpleBackuper/Backup.pm

	}
	
	$state->{bytes_processed} += $file->{versions}->[-1]->{size} if @{ $file->{versions} };
	
	my %version = (
		backup_id_min	=> $state->{last_backup_id},
		backup_id_max	=> $state->{last_backup_id},
		uid				=> $uid,
		gid				=> $gid,
		size			=> $stat[7],
		mode			=> $stat[2],
		mtime			=> $stat[9],

lib/App/SimpleBackuper/Backup.pm

			$version{parts} = $file->{versions}->[-1]->{parts}; # If mtime not changed then file not changed
			
			$version{block_id} = $file->{versions}->[-1]->{block_id};
			
			my $block = $blocks->find_row({ id => $version{block_id} });
			confess "File has lost block #$version{block_id} in backup "
				.$backups->find_row({ id => $version{backup_id_min} })->{name}
				."..".$backups->find_row({ id => $version{backup_id_max} })->{name}
				if ! $block;
			$block->{last_backup_id} = $state->{last_backup_id};
			$blocks->upsert({ id => $block->{id} }, $block);
			
			print ", mtime is not changed.\n" if $options->{verbose};
		} else {
			print @{ $file->{versions} } ? ", mtime changed.\n" : "\n" if $options->{verbose};

lib/App/SimpleBackuper/Backup.pm

						$block_ids{ $part->{block_id} }++;
					}
					$part{size} = $part->{size};
					$part{aes_key} = $part->{aes_key};
					$part{aes_iv} = $part->{aes_iv};
					print "backuped earlier (".fmt_weight($read)." -> ".fmt_weight($part->{size}).");\n" if $options->{verbose};
				} else {
					
					print fmt_weight($read) if $options->{verbose};
					
					$state->{profile}->{math} -= time;

lib/App/SimpleBackuper/Backup.pm

				$part->{block_id} //= $block->{id};
				$parts->upsert({ hash => $part->{hash} }, $part);
			}
				
			
			$block->{last_backup_id} = $state->{last_backup_id};
			$blocks->upsert({ id => $block->{id} }, $block);
			
			$version{block_id} = $block->{id};
		}
	}

lib/App/SimpleBackuper/Backup.pm

		print ", skip not supported file type\n" if $options->{verbose};
		return;
	}
	
	
	# If the file version hasn't changed, reuse the old version with a wider backup id range
	if(	@{ $file->{versions} }
		and (
			$file->{versions}->[-1]->{backup_id_max} + 1 == $state->{last_backup_id}
			or $file->{versions}->[-1]->{backup_id_max} == $state->{last_backup_id}
		)
		and $file->{versions}->[-1]->{uid}	== $version{uid}
		and $file->{versions}->[-1]->{gid}	== $version{gid}
		and $file->{versions}->[-1]->{size}	== $version{size}
		and $file->{versions}->[-1]->{mode}	== $version{mode}

lib/App/SimpleBackuper/Backup.pm

				or $file->{versions}->[-1]->{symlink_to} eq $version{symlink_to}
			)
		)
		and join(' ', map { $_->{hash} } @{ $file->{versions}->[-1]->{parts} }) eq join(' ', map { $_->{hash} } @{ $version{parts} })
	) {
		$file->{versions}->[-1]->{backup_id_max} = $state->{last_backup_id};
	} else {
		push @{ $file->{versions} }, \%version;
	}
	
	$files->upsert({ parent_id => $file->{parent_id}, id => $file->{id} }, $file );
	
	my $backup = $backups->find_row({ id => $state->{last_backup_id} });
	$backup->{files_cnt}++;
	$backup->{max_files_cnt}++;
	$backups->upsert({ id => $backup->{id} }, $backup );

	
	$state->{longest_files} ||= [];
	if(	@{ $state->{longest_files} } < $SIZE_OF_TOP_FILES
		or $state->{longest_files}->[-1]->{time} < $file_time_spent

lib/App/SimpleBackuper/Backup.pm

}

sub _free_up_space {
	my($options, $state, $protected_block_ids) = @_;
	
	my($backups, $files, $blocks, $parts) = @{ $state->{db} }{qw(backups files blocks parts)};
	
	my $deleted = 0;
	while(1) {
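		# walk candidate blocks, skipping protected or still-referenced ones, until one is deleted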
		my($block_id, @files) = _get_block_to_delete($state);
		last if ! $block_id;
		next if exists $protected_block_ids->{ $block_id };
		my $block = $blocks->find_row({ id => $block_id });
		next if ! $block;
		next if $block->{last_backup_id} == $state->{last_backup_id};
		
		$deleted += App::SimpleBackuper::_BlockDelete($options, $state, $block, $state->{blocks_info}->{ $block_id }->[2]);
		last if $deleted;
	}
	


