

App-LintPrereqs


lib/App/LintPrereqs.pm

use Config::IOD;
use Exporter 'import';
use Fcntl qw(:DEFAULT);
use File::Find;
use File::Which;
use Filename::Type::Backup qw(check_backup_filename);
use IPC::System::Options 'system', -log=>1;
use Module::CoreList::More;
use Proc::ChildError qw(explain_child_error);
use Scalar::Util 'looks_like_number';
use Sort::Sub qw(prereq_ala_perlancar);

lib/App/LintPrereqs.pm

        ));
        last unless @dirs;
        find(
            sub {
                return unless -f;
                return if check_backup_filename(filename=>$_);
                push @{$files{Runtime}}, "$File::Find::dir/$_";
            },
            @dirs
        );
    }

lib/App/LintPrereqs.pm

        ));
        last unless @dirs;
        find(
            sub {
                return unless -f;
                return if check_backup_filename(filename=>$_);
                return unless /\.(t|pl|pm)$/;
                push @{$files{Test}}, "$File::Find::dir/$_";
            },
            @dirs
        );

lib/App/LintPrereqs.pm

            cmdline_aliases => {F=>{}},
            description => <<'MARKDOWN',

`lint-prereqs` can attempt to automatically fix the errors by
adding/removing/moving prereqs in `dist.ini`. Not all errors can be
automatically fixed. When modifying `dist.ini`, a backup in `dist.ini~` will be
created.

MARKDOWN
        },
    },

lib/App/LintPrereqs.pm

                    for my $cmd (@{ $e->{remedy_cmds} }) {
                        system @$cmd;
                        if ($?) {
                            $e->{remedy} = "(FIX FAILED: ".explain_child_error().") $e->{remedy}";
                            $resmeta->{'cmdline.exit_code'} = 1;
                            # restore dist.ini from backup
                            rename "dist.ini~", "dist.ini";
                            last FIX;
                        }
                    }
                }

lib/App/LintPrereqs.pm


Attempt to automatically fix the errors.

C<lint-prereqs> can attempt to automatically fix the errors by
adding/removing/moving prereqs in C<dist.ini>. Not all errors can be
automatically fixed. When modifying C<dist.ini>, a backup in C<dist.ini~> will be
created.

=item * B<perl_version> => I<str>

Perl version to use (overrides scan_prereqsE<sol>dist.ini).



App-MBUtiny


lib/App/MBUtiny.pm


=encoding utf-8

=head1 NAME

App::MBUtiny - Websites and any file system elements backup tool

=head1 VERSION

Version 1.13

=head1 SYNOPSIS

    # mbutiny test

    # mbutiny backup

    # mbutiny restore

    # mbutiny report

=head1 DESCRIPTION

Websites and any file system elements backup tool

=head2 FEATURES

=over 4

lib/App/MBUtiny.pm


=item Backup small databases

=item Run external utilities for object preparation

=item Supports storing backups on local drives

=item Supports storing backups on remote SFTP storages

=item Supports storing backups on remote FTP storages

=item Supports storing backups on remote HTTP storages

=item Easy configuration

=item Monitoring feature enabled

lib/App/MBUtiny.pm

=head2 CONFIGURATION

By default, the configuration file is located in the C</etc/mbutiny> directory.

Every configuration directive is described in detail in the C<mbutiny.conf> file; see also
the C<hosts/foo.conf.sample> file for MBUtiny backup host configuration.

=head2 CRONTAB

To automatically launch the program, we recommend using standard scheduling tools, such as crontab

    0 2 * * * mbutiny -l backup >/dev/null 2>>/var/log/mbutiny-error.log

Or for selected hosts only:

    0 2 * * * mbutiny -l backup foo bar >/dev/null 2>>/var/log/mbutiny-error.log
    15 2 * * * mbutiny -l backup baz >/dev/null 2>>/var/log/mbutiny-error.log

For daily reporting:

    0 9 * * * mbutiny -l report >/dev/null 2>>/var/log/mbutiny-error.log

=head2 COLLECTOR

The collector is a monitoring server that gathers data on the status of performed backups.
It allows you to build reports on the data collected from various servers.

How does it work?

    +------------+

lib/App/MBUtiny.pm

To install the collector you need an Apache 2.2/2.4 web server and a CGI/FastCGI script.
See C<collector.cgi.sample> in the C</etc/mbutiny> directory.

=head2 HTTP SERVER

If you want to use an HTTP server as storage for backups, you need to install the CGI/FastCGI
script on an Apache 2.2/2.4 web server.

See C<server.cgi>

=head1 INTERNAL METHODS

lib/App/MBUtiny.pm


=item B<rstdir>

    my $rstdir = $app->rstdir;

Returns path to restored backups

=back

=head1 HISTORY

lib/App/MBUtiny.pm

        my $name = _getName($pair); # Backup name
        my $host = node($pair, $name); # Config section
        my $hostskip = (!@arguments || grep {lc($name) eq lc($_)} @arguments) ? 0 : 1;
        my $enabled = value($host, 'enable') ? 1 : 0;
        if ($hostskip || !$enabled) {
            $self->log_info("Skip testing for \"%s\" backup host section", $name);
            next;
        }
        my $tbl = Text::SimpleTable->new(@{(TEST_HEADERS)});
        $self->log_info("Start testing for \"%s\" backup host section", $name);
        push @header, ["Backup name", $name];
        push @errors, $self->getdbi->dsn, $self->getdbi->error, "" if $self->getdbi->error;


        #
        # Loading backup data
        #
        my $buday   = (value($host, 'buday')   // $self->config('buday'))   || 0;
        my $buweek  = (value($host, 'buweek')  // $self->config('buweek'))  || 0;
        my $bumonth = (value($host, 'bumonth') // $self->config('bumonth')) || 0;
        push @header, (
                ["Daily backups", $buday],
                ["Weekly backups", $buweek],
                ["Monthly backups", $bumonth],
            );

        # Get mask vars
        my $arc = $self->_getArc($host);
        my $arcmask = value($host, 'arcmask') || ARC_MASK;

lib/App/MBUtiny.pm

        my @dates = $self->_getDates($buday, $buweek, $bumonth);

        # Get paths
        push @header, (
                ["Work directory", $self->datadir],
                ["Directory for backups", $self->objdir],
                ["Directory for restores", $self->rstdir],
            );

        # Regular objects
        my $objects = array($host, 'object');

lib/App/MBUtiny.pm

            push @errors, $storage->error, "";
            $ostat = 0;
        };
        my $last_file = (sort {$b cmp $a} @filelist)[0];
        if ($files_number && $last_file) {
            push @header, ["Last backup file", $last_file];
            my $list = hash($storage->{list});
            foreach my $k (keys %$list) {
                my $l = array($list, $k);
                my $st = (grep {$_ eq $last_file} @$l) ? 1 : 0;
                $tbl->row(sprintf("%s storage", $k),

lib/App/MBUtiny.pm

        push @report, $self->_report_summary($ostat ? "All tests successful" : "Errors occurred while testing"); # Summary table
        push @report, $tbl->draw() || ''; # Table
        push @report, $self->_report_errors(@errors); # List of occurred errors
        if ($TTY || $self->verbosemode) { # Draw to TTY
            printf("%s\n\n", "~" x 94);
            printf("The %s for %s backup host\n\n", $report_name, $name);
            print join("\n", @report, "");
        }


        #

lib/App/MBUtiny.pm

            if ($sent) { $self->debug(sprintf("Mail has been sent to: %s", $to)) }
            else { $self->error(sprintf("Mail was not sent to: %s", $to)) }
        }

        # Finish testing
        $self->log_info("Finish testing for \"%s\" backup host section", $name);

        # General status
        $status = 0 unless $ostat;
    }

    return $status;
});

__PACKAGE__->register_handler(
    handler     => "backup",
    description => "Backup hosts",
    code => sub {
### CODE:
    my ($self, $meta, @arguments) = @_;
    $self->configure or return 0;

lib/App/MBUtiny.pm

        my $name = _getName($pair); # Backup name
        my $host = node($pair, $name); # Config section
        my $hostskip = (!@arguments || grep {lc($name) eq lc($_)} @arguments) ? 0 : 1;
        my $enabled = value($host, 'enable') ? 1 : 0;
        if ($hostskip || !$enabled) {
            $self->log_info("Skip backup process for \"%s\" backup host section", $name);
            next;
        }
        my $tbl = Text::SimpleTable->new(@{(TABLE_HEADERS)});
        $self->log_info("Start backup process for \"%s\" backup host section", $name);
        push @header, ["Backup name", $name];
        push @errors, $self->getdbi->dsn, $self->getdbi->error, "" if $self->getdbi->error;


        #
        # Loading backup data
        #
        my $buday   = (value($host, 'buday')   // $self->config('buday'))   || 0;
        my $buweek  = (value($host, 'buweek')  // $self->config('buweek'))  || 0;
        my $bumonth = (value($host, 'bumonth') // $self->config('bumonth')) || 0;
        push @header, (
                ["Daily backups", $buday],
                ["Weekly backups", $buweek],
                ["Monthly backups", $bumonth],
            );

        # Get mask vars
        my $arc = $self->_getArc($host);
        my $arcmask = value($host, 'arcmask') || ARC_MASK;

lib/App/MBUtiny.pm

        $step = "Storages testing";
        $self->debug($step);
        my $storage = new App::MBUtiny::Storage(
                name => $name, # Backup name
                host => $host, # Host config section
                path => $self->objdir, # Where is located backup archive
                fixup => sub {
                    my $strg = shift; # Storage object
                    my $oper = shift // 'noop'; # Operation name
                    my $colret;
                    if ($oper =~ /^(del)|(rem)/i) {

lib/App/MBUtiny.pm

            $ostat ? 'All processes successful' : 'Errors have occurred!',
            $ostat ? 'PASS' : 'FAIL'
        );
        push @header, ["Summary status", $ostat ? 'PASS' : 'FAIL'];
        my @report;
        my $report_name = $ostat ? "backup report" : "backup error report";
        push @report, $self->_report_common(@header); # Common information
        push @report, $self->_report_summary($ostat ? "Backup is done" : "Errors occurred while performing backup"); # Summary table
        push @report, $tbl->draw() || ''; # Table
        push @report, $self->_report_errors(@errors); # List of occurred errors
        if ($TTY || $self->verbosemode) { # Draw to TTY
            printf("%s\n\n", "~" x 114);
            printf("The %s for %s backup host\n\n", $report_name, $name);
            print join("\n", @report, "");
        }


        #

lib/App/MBUtiny.pm

            my $sent = sendmail(%ma);
            if ($sent) { $self->debug(sprintf("Mail has been sent to: %s", $to)) }
            else { $self->error(sprintf("Mail was not sent to: %s", $to)) }
        }

        # Finish backup
        $self->log_info("Finish backup process for \"%s\" backup host section", $name);

        # General status
        $status = 0 unless $ostat;
    }

lib/App/MBUtiny.pm

        my $name = _getName($pair); # Backup name
        my $host = node($pair, $name); # Config section
        my $hostskip = (!@arguments || grep {lc($name) eq lc($_)} @arguments) ? 0 : 1;
        my $enabled = value($host, 'enable') ? 1 : 0;
        if ($hostskip || !$enabled) {
            $self->log_info("Skip restore process for \"%s\" backup host section", $name);
            next;
        }
        my $tbl = Text::SimpleTable->new(@{(TABLE_HEADERS)});
        $self->log_info("Start restore process for \"%s\" backup host section", $name);
        push @header, ["Backup name", $name];
        push @errors, $self->getdbi->dsn, $self->getdbi->error, "" if $self->getdbi->error;

        # Get mask vars
        my $arc = $self->_getArc($host);

lib/App/MBUtiny.pm

        $step = "Storages testing";
        $self->debug($step);
        my $storage = new App::MBUtiny::Storage(
                name => $name, # Backup name
                host => $host, # Host config section
                path => $self->rstdir, # Where is located restored backup archive
                validate => sub {
                    my $strg = shift; # storage object
                    my $file = shift; # fetched file
                    if ($info{size}) { # Valid sizes
                        my $size = filesize($file) // 0;

lib/App/MBUtiny.pm

                arcdef => $arc,
                archive=> $archive_file,
                dirdst => $restore_dir,
            );
            if ($st) {
                push @header, ["Location of restored backup", $restore_dir];
                $self->log_info("Downloaded backup archive: %s", $archive_file);
                $self->log_info("Location of restored backup: %s", $restore_dir);
            } else {
                my $msg = sprintf("Extracting archive \"%s\" failed: %s", $archive_file, $self->error);
                $self->log_error($msg);
                push @errors, $msg, "";
                $ostat = 0;

lib/App/MBUtiny.pm

        push @report, $self->_report_summary($ostat ? "Restore is done" : "Errors occurred while performing restore"); # Summary table
        push @report, $tbl->draw() || ''; # Table
        push @report, $self->_report_errors(@errors); # List of occurred errors
        if ($TTY || $self->verbosemode) { # Draw to TTY
            printf("%s\n\n", "~" x 114);
            printf("The %s for %s backup host\n\n", $report_name, $name);
            print join("\n", @report, "");
        }


        # Finish restore
        $self->log_info("Finish restore process for \"%s\" backup host section", $name);

        # General status
        $status = 0 unless $ostat;
    }

lib/App/MBUtiny.pm

        my $host = node($pair, $name); # Config section
        my $hostskip = (!@arguments || grep {lc($name) eq lc($_)} @arguments) ? 0 : 1;
        my $enabled = value($host, 'enable') ? 1 : 0;
        $tbl_hosts->row($name, ($hostskip || !$enabled) ? 'SKIP' : 'PASS');
        if ($hostskip || !$enabled) {
            $self->log_info("Skip reporting for \"%s\" backup host section", $name);
            next;
        }
        my $lcols = $self->_getCollector($host);
        push @collectors, @$lcols;
    }

lib/App/MBUtiny.pm

    }

    #
    # Collectors processing
    #
    my @backups;
    if (@ok_collectors) {
        my $collector = new App::MBUtiny::Collector(
                collector_config => [@ok_collectors],
                dbi => $self->getdbi, # For local storage only
            );
        @backups = $collector->report(); # start => 1561799600;
        if ($collector->error) {
            $self->log_error(sprintf("Collector error: %s", $collector->error));
            push @errors, $collector->error, "";
        }
    }

    #
    # Get report data about LAST backups on collector for each available host
    #
    my %requires;
    foreach (@req_hosts) {$requires{$_} = 0};
    foreach my $rec (@backups) {
        push @comments, sprintf("%s: %s", uv2null($rec->{file}), $rec->{comment}), "" if $rec->{comment};
        push @errors, uv2null($rec->{file}), $rec->{error}, "" if $rec->{error};
        my $nm = $rec->{name} || 'virtual';
        $tbl_report->row(
            sprintf("%s\n%s", $nm, uv2null($rec->{addr})),

lib/App/MBUtiny.pm

    push @report, $tbl_report->draw(); # Report table
    push @report, "Comments:", "", @comments, "" if @comments;
    push @report, $self->_report_errors(@errors); # List of occurred errors
    if ($TTY || $self->verbosemode) { # Draw to TTY
        printf("%s\n\n", "~" x 106);
        printf("The %s for all backup hosts on %s\n\n", $report_name, $hostname);
        print join("\n", @report, "");
    }


    #
    # SendMail (Send report)
    #
    if ($send_report) {
        unshift @report, $self->_report_title($report_name, "last backups");
        push @report, $self->_report_footer();
        my %ma = (); foreach my $k (keys %$sm) { $ma{"-".$k} = $sm->{$k} };
        $ma{"-subject"} = sprintf("%s %s (%s on %s)", PROJECTNAME, $report_name, "last backups", $hostname);
        $ma{"-message"} = join("\n", @report);

        # Send!
        my $sent = sendmail(%ma);
        if ($sent) { $self->debug(sprintf("Mail has been sent to: %s", $to)) }

lib/App/MBUtiny.pm

    for (my $i=0; $i<$period; $i++) {
        my ( $y, $m, $d, $wd ) = (localtime( time - $i * 86400 ))[5,4,3,6];
        my $date = sprintf( "%04d%02d%02d", ($y+1900), ($m+1), $d );

        if (($i < $buday)
                || (($i < $buweek * 7) && $wd == 0) # do weekly backups on sunday
                || (($i < $bumonth * 30) && $d == 1)) # do monthly backups on 1-st day of month
        {
            $dates{ $date } = 1;
        } else {
            $dates{ $date } = 0;
        }

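The C<_getDates> loop above keeps a date if it is within C<buday> days, falls on a Sunday within C<buweek> weeks, or falls on the 1st of a month within C<bumonth> months. A minimal Python re-implementation of that rule, for illustration only (the period bound, assumed here to be the widest of the three windows, is computed elsewhere in the truncated Perl code):

```python
from datetime import date, timedelta

def retention_dates(buday, buweek, bumonth, today=None):
    # Map YYYYMMDD strings to 1 (keep) or 0 (drop), mirroring the
    # daily/weekly/monthly rule of _getDates above.
    today = today or date.today()
    period = max(buday, buweek * 7, bumonth * 30)
    dates = {}
    for i in range(period):
        d = today - timedelta(days=i)
        keep = (i < buday
                or (i < buweek * 7 and d.weekday() == 6)   # weekly backups on Sunday
                or (i < bumonth * 30 and d.day == 1))      # monthly backups on the 1st
        dates[d.strftime("%Y%m%d")] = 1 if keep else 0
    return dates
```

For example, with C<buday=3> only the three most recent days are marked for keeping.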
lib/App/MBUtiny.pm

    my $self = shift;
    my $title = shift || "report";
    my $name = shift || "virtual";
    return (
        sprintf("Dear %s user,", PROJECTNAME),"",
        sprintf("This is an automatically generated %s for %s backup\non %s, created by %s/%s",
            $title, $name, $hostname, __PACKAGE__, $VERSION),"",
        "Sections of this report:","",
        " * Common information",
        " * Summary",
        " * List of occurred errors","",



App-MHFS


share/public_html/static/music_worklet_inprogress/decoder/src/mhfs_cl_track.h

    blockvf_error code;
    uint32_t extradata;
} mhfs_cl_track_blockvf_data;

typedef struct {
    // for backup and restore
    ma_decoder backupDecoder;
    unsigned backupFileOffset;
    mhfs_cl_track_allocs allocs;

    ma_decoder_config decoderConfig;
    ma_decoder decoder;
    bool dec_initialized;

share/public_html/static/music_worklet_inprogress/decoder/src/mhfs_cl_track.h

        if(pAllocs->allocptrs[i] == p)
        {
            const size_t osz = pAllocs->allocsizes[i];
            const size_t orsz = ceil8(pAllocs->allocsizes[i]);
            const size_t rsz = ceil8(sz);
            // avoid losing the start of backup by moving it down
            if(rsz < orsz)
            {
                uint8_t *ogalloc = p;
                memmove(ogalloc+rsz, ogalloc+orsz, sz);
            }

share/public_html/static/music_worklet_inprogress/decoder/src/mhfs_cl_track.h

                    return NULL;
                }
                // we moved the data down so we can't fail
                newalloc = p;
            }
            // move the backup data forward
            else if(rsz > orsz)
            {
                memmove(newalloc+rsz, newalloc+orsz, osz);
            }

share/public_html/static/music_worklet_inprogress/decoder/src/mhfs_cl_track.h

    }
    MHFSCLTR_PRINT("%s: %zu failed to find\n", __func__, sz);
    return NULL;
}

static inline void mhfs_cl_track_allocs_backup_or_restore(mhfs_cl_track *pTrack, const bool backup)
{
    // copy ma_decoder and blockvf fileoffset
    if(backup)
    {
        pTrack->backupDecoder    = pTrack->decoder;
        pTrack->backupFileOffset = pTrack->vf.fileoffset;
    }
    else
    {
        pTrack->decoder       = pTrack->backupDecoder;
        pTrack->vf.fileoffset = pTrack->backupFileOffset;
    }

    // copy the allocations
    mhfs_cl_track_allocs *pAllocs = &pTrack->allocs;
    for(unsigned i = 0; i < MHFS_CL_TRACK_MAX_ALLOCS; i++)

share/public_html/static/music_worklet_inprogress/decoder/src/mhfs_cl_track.h

        {
            const size_t offset = ceil8(pAllocs->allocsizes[i]);
            uint8_t *allocBuf = pAllocs->allocptrs[i];
            const uint8_t *srcBuf;
            uint8_t *destBuf;
            if(backup)
            {
                srcBuf = allocBuf;
                destBuf = allocBuf + offset;
            }
            else

share/public_html/static/music_worklet_inprogress/decoder/src/mhfs_cl_track.h

            memcpy(destBuf, srcBuf, pAllocs->allocsizes[i]);
        }
    }
}

static inline void mhfs_cl_track_allocs_backup(mhfs_cl_track *pTrack)
{
    return mhfs_cl_track_allocs_backup_or_restore(pTrack, true);
}

static inline void mhfs_cl_track_allocs_restore(mhfs_cl_track *pTrack)
{
    return mhfs_cl_track_allocs_backup_or_restore(pTrack, false);
}

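The pair of helpers above save and restore a shadow copy that lives at offset `ceil8(size)` inside each over-sized allocation. The round trip can be modelled in Python on a `bytearray` (an illustrative sketch, not the mhfs code):

```python
def ceil8(n):
    # Round n up to a multiple of 8, as the C ceil8 does.
    return (n + 7) & ~7

def backup(buf, size):
    # Save the live bytes into the shadow region at offset ceil8(size).
    off = ceil8(size)
    buf[off:off + size] = buf[:size]

def restore(buf, size):
    # Copy the shadow region back over the live bytes.
    off = ceil8(size)
    buf[:size] = buf[off:off + size]
```

Each buffer must therefore be at least `ceil8(size) + size` bytes, which is why the allocator above reserves roughly double the requested size.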
void mhfs_cl_track_init(mhfs_cl_track *pTrack, const unsigned blocksize)
{
    for(unsigned i = 0; i < MHFS_CL_TRACK_MAX_ALLOCS; i++)

share/public_html/static/music_worklet_inprogress/decoder/src/mhfs_cl_track.h

static inline void mhfs_cl_track_blockvf_ma_decoder_call_before(mhfs_cl_track *pTrack, const bool bSaveDecoder)
{
    pTrack->vfData.code = BLOCKVF_SUCCESS;
    if(bSaveDecoder)
    {
        mhfs_cl_track_allocs_backup(pTrack);
    }
}

static inline mhfs_cl_error mhfs_cl_track_blockvf_ma_decoder_call_after(mhfs_cl_track *pTrack, const bool bRestoreDecoder, uint32_t *pNeededOffset)
{



App-MatrixTool


lib/App/MatrixTool/Command/resolve.pm

   my $self = shift;
   my ( $opts, $server_name ) = @_;

   $self->http_client->resolve_matrix( $server_name )->then( sub {
      my @res = @_;
      # SRV records yield a 'weight' field, A/AAAA-based backup does not
      defined $res[0]->{weight}
         ? $self->output_info( "Resolved $server_name by SRV" )
         : $self->output_info( "Using legacy IP address fallback" );

      try_repeat {



App-MediaWiki2Git


dist.ini

repository.type = git

[GatherDir]
exclude_match = ^\.git(ignore|/.*)$
exclude_match = (^|/)#[^/]+#$ ; emacs autosave
exclude_match = ~$ ; emacs backup

[ExecDir]

[PruneCruft]
[MinimumPerl] ; with Perl::MinimumVersion



App-Module-Lister


MANIFEST.SKIP

\bbuild.com$

# and Module::Build::Tiny generated files
\b_build_params$

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$



App-ModuleBuildTiny


MANIFEST.SKIP

\bbuild.com$

# and Module::Build::Tiny generated files
\b_build_params$

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$



App-MtAws


README.md

==============
Perl Multithreaded multipart sync to Amazon Glacier service.

## Intro

Amazon Glacier is an archive/backup service with a very low storage price, though with some caveats in usage and in archive retrieval pricing.
[Read more about Amazon Glacier][amazon glacier]

*mt-aws-glacier* is a client application for Amazon Glacier, written in the Perl programming language, for *nix.

[amazon glacier]:http://aws.amazon.com/glacier/

README.md

* Read Amazon Glacier pricing [FAQ][Amazon Glacier faq] again, really. Beware of retrieval fee.

* Before using this program, you should read Amazon Glacier documentation and understand, in general, Amazon Glacier workflows and entities. This documentation
does not define any new layer of abstraction over Amazon Glacier entities.

* In general, all Amazon Glacier clients store metadata (filenames, file metadata) in their own formats, incompatible with each other. To restore a backup made with `mt-aws-glacier` you'll
need `mt-aws-glacier`; other software will most likely restore your data but lose the filenames.

* With a low `partsize` option you pay a bit more (Amazon charges for each upload request)

* For backups created with older versions (0.7x) of mt-aws-glacier, the Journal file is **required to restore the backup**.

* Use a **Journal file** only with **same vault** ( more info [here](#what-is-journal) and [here](#how-to-maintain-a-relation-between-my-journal-files-and-my-vaults) and [here](https://github.com/vsespb/mt-aws-glacier/issues/50))

* When working with CD-ROM/CIFS/other non-Unix/non-POSIX filesystems, you might need to set `leaf-optimization` to `0`

README.md

* Please report any bugs or issues (using GitHub issues); any feedback is welcome.
* If you want to contribute to the source code, please contact me first and describe what you want to do

## Usage

1. Create a directory containing the files to back up, for example `/data/backup`
2. Create config file, say, glacier.cfg

		key=YOURKEY
		secret=YOURSECRET
		# region: eu-west-1, us-east-1 etc

README.md

	(note that Amazon Glacier does not return an error if the vault already exists, etc.)

4. Choose a filename for the Journal, for example, `journal.log`
5. Sync your files

		./mtglacier sync --config glacier.cfg --dir /data/backup --vault myvault --journal journal.log --concurrency 3

6. Add more files and sync again
7. Check that your local files have not been modified since the last sync

		./mtglacier check-local-hash --config glacier.cfg --dir /data/backup --journal journal.log

8. Delete some files from your backup location
9. Initiate an archive restore job on the Amazon side

		./mtglacier restore --config glacier.cfg --dir /data/backup --vault myvault --journal journal.log --max-number-of-files 10

10. Wait 4+ hours for Amazon Glacier to complete archive retrieval
11. Download the restored files back to the backup location

		./mtglacier restore-completed --config glacier.cfg --dir /data/backup --vault myvault --journal journal.log

12. Delete all your files from vault

		./mtglacier purge-vault --config glacier.cfg --vault myvault --journal journal.log

README.md

* Each text line in the file represents one record

* It's an append-only file: the file is opened in append-only mode, and new records are only added at the end. This guarantees that
you can recover the Journal file to a previous state in case of a program bug, a crash, or power/filesystem issues. You can even use `chattr +a` to set append-only protection on the Journal.

* As the Journal file is append-only, it's easy to perform incremental backups of it

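Because the file only ever grows, an incremental backup need only copy the bytes written past the previously saved length. A minimal sketch, with a hypothetical helper name (not part of mt-aws-glacier):

```python
def incremental_copy(journal_path, backup_path, last_size):
    # Append only the journal bytes written since last_size to the
    # backup copy, and return the new journal size.
    with open(journal_path, "rb") as src, open(backup_path, "ab") as dst:
        src.seek(last_size)
        chunk = src.read()
        dst.write(chunk)
    return last_size + len(chunk)
```

The returned size is what you would persist and pass back in on the next incremental run.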
#### Why is the Journal a file in the local filesystem, and not in online cloud storage (like Amazon S3 or Amazon DynamoDB)?

The Journal is needed to restore a backup, and we can expect that if you need to restore a backup, you have lost your filesystem, together with the Journal.

However, the Journal is also needed to perform *new backups* (`sync` command), to determine which files are already in Glacier and which are not, and to check local file integrity (`check-local-hash` command).
In practice you usually perform new backups every day, and restore backups (and lose your filesystem) very rarely.

So a fast (local) journal is essential to performing new backups quickly and cheaply (important for users who back up thousands or millions of files).

And if you lose your journal, you can restore it from Amazon Glacier (see the `retrieve-inventory` command). It's also recommended to back up your journal
to another backup system (Amazon S3? Dropbox?) with another tool, because retrieving the inventory from Amazon Glacier is pretty slow.

Also, some users might want to back up the *same* files from *multiple* locations; they will need a *synchronization* solution for their journal files.

In any case, putting Journals into the cloud can be automated and solved with a three-line bash script.

#### How to maintain a relation between my journal files and my vaults?

README.md


9. It's better to keep the relation between the *vault* and the transfer root (`--dir` option) in one place, such as the config file.

#### Why Journal (and metadata stored in Amazon Glacier) does not contain file's metadata (like permissions)?

If you want to store permissions, put your files into archives before backing them up to Amazon Glacier. There are lots of different things that could be stored as file metadata,
most of them not portable. Take a look at archive file formats: different formats allow storing different metadata.

It's possible that in the future `mtglacier` will support some other metadata things.

## Specification for some commands

README.md


	_Uploads what_: a file, pointed to by `filename`.

	_Filename in Journal and Amazon Glacier metadata_: A relative path from `dir` to `filename`

		./mtglacier upload-file --config glacier.cfg --vault myvault --journal journal.log --dir /data/backup --filename /data/backup/dir1/myfile

	(this will upload the content of `/data/backup/dir1/myfile` to Amazon Glacier and use `dir1/myfile` as the filename for the Journal)

		./mtglacier upload-file --config glacier.cfg --vault myvault --journal journal.log --dir data/backup --filename data/backup/dir1/myfile

	(Let's assume the current directory is `/home`. Then this will upload the content of `/home/data/backup/dir1/myfile` to Amazon Glacier and use `dir1/myfile` as the filename for the Journal)

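The filename rule in the two examples above (resolve both options to full paths, then take `filename` relative to `dir`) can be sketched as follows; `journal_filename` is a hypothetical helper for illustration, not an mtglacier function:

```python
import os.path

def journal_filename(dir_opt, filename_opt):
    # Resolve both options to full paths first, then compute the
    # relative path used as the Journal / Glacier metadata filename.
    d = os.path.realpath(dir_opt)
    f = os.path.realpath(filename_opt)
    rel = os.path.relpath(f, d)
    if rel == os.pardir or rel.startswith(os.pardir + os.sep):
        raise ValueError("--filename must be inside --dir")
    return rel
```

Because symlinks are resolved before the relative path is taken, the result follows the real directory layout, matching the symlink behaviour described in the NOTE below.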
	NOTE: file `filename` should be inside directory `dir`

	NOTE: both `--filename` and `--dir` are resolved to full paths before determining the relative path from `--dir` to `--filename`. Thus you'll get an error
	if parent directories are unreadable. Also, if you have `/dir/ds` as a symlink to the `/dir/d3` directory, then `--dir /dir` `--filename /dir/ds/file` will result in relative



App-Mxpress-PDF


public/javascripts/ace/mode-pgsql.js

        "path_mul_pt|path_n_eq|path_n_ge|path_n_gt|path_n_le|path_n_lt|path_npoints|path_out|" +
        "path_recv|path_send|path_sub_pt|pclose|percent_rank|pg_advisory_lock|" +
        "pg_advisory_lock_shared|pg_advisory_unlock|pg_advisory_unlock_all|" +
        "pg_advisory_unlock_shared|pg_advisory_xact_lock|pg_advisory_xact_lock_shared|" +
        "pg_available_extension_versions|pg_available_extensions|pg_backend_pid|" +
        "pg_backup_start_time|pg_cancel_backend|pg_char_to_encoding|pg_client_encoding|" +
        "pg_collation_for|pg_collation_is_visible|pg_column_is_updatable|pg_column_size|" +
        "pg_conf_load_time|pg_conversion_is_visible|pg_create_restore_point|" +
        "pg_current_xlog_insert_location|pg_current_xlog_location|pg_cursor|pg_database_size|" +
        "pg_describe_object|pg_encoding_max_length|pg_encoding_to_char|" +
        "pg_event_trigger_dropped_objects|pg_export_snapshot|pg_extension_config_dump|" +
        "pg_extension_update_paths|pg_function_is_visible|pg_get_constraintdef|pg_get_expr|" +
        "pg_get_function_arguments|pg_get_function_identity_arguments|" +
        "pg_get_function_result|pg_get_functiondef|pg_get_indexdef|pg_get_keywords|" +
        "pg_get_multixact_members|pg_get_ruledef|pg_get_serial_sequence|pg_get_triggerdef|" +
        "pg_get_userbyid|pg_get_viewdef|pg_has_role|pg_identify_object|pg_indexes_size|" +
        "pg_is_in_backup|pg_is_in_recovery|pg_is_other_temp_schema|pg_is_xlog_replay_paused|" +
        "pg_last_xact_replay_timestamp|pg_last_xlog_receive_location|" +
        "pg_last_xlog_replay_location|pg_listening_channels|pg_lock_status|pg_ls_dir|" +
        "pg_my_temp_schema|pg_node_tree_in|pg_node_tree_out|pg_node_tree_recv|" +
        "pg_node_tree_send|pg_notify|pg_opclass_is_visible|pg_operator_is_visible|" +
        "pg_opfamily_is_visible|pg_options_to_table|pg_postmaster_start_time|" +
        "pg_prepared_statement|pg_prepared_xact|pg_read_binary_file|pg_read_file|" +
        "pg_relation_filenode|pg_relation_filepath|pg_relation_is_updatable|pg_relation_size|" +
        "pg_reload_conf|pg_rotate_logfile|pg_sequence_parameters|pg_show_all_settings|" +
        "pg_size_pretty|pg_sleep|pg_start_backup|pg_stat_clear_snapshot|pg_stat_file|" +
        "pg_stat_get_activity|pg_stat_get_analyze_count|pg_stat_get_autoanalyze_count|" +
        "pg_stat_get_autovacuum_count|pg_stat_get_backend_activity|" +
        "pg_stat_get_backend_activity_start|pg_stat_get_backend_client_addr|" +
        "pg_stat_get_backend_client_port|pg_stat_get_backend_dbid|pg_stat_get_backend_idset|" +
        "pg_stat_get_backend_pid|pg_stat_get_backend_start|pg_stat_get_backend_userid|" +

public/javascripts/ace/mode-pgsql.js

        "pg_stat_get_xact_numscans|pg_stat_get_xact_tuples_deleted|" +
        "pg_stat_get_xact_tuples_fetched|pg_stat_get_xact_tuples_hot_updated|" +
        "pg_stat_get_xact_tuples_inserted|pg_stat_get_xact_tuples_returned|" +
        "pg_stat_get_xact_tuples_updated|pg_stat_reset|pg_stat_reset_shared|" +
        "pg_stat_reset_single_function_counters|pg_stat_reset_single_table_counters|" +
        "pg_stop_backup|pg_switch_xlog|pg_table_is_visible|pg_table_size|" +
        "pg_tablespace_databases|pg_tablespace_location|pg_tablespace_size|" +
        "pg_terminate_backend|pg_timezone_abbrevs|pg_timezone_names|pg_total_relation_size|" +
        "pg_trigger_depth|pg_try_advisory_lock|pg_try_advisory_lock_shared|" +
        "pg_try_advisory_xact_lock|pg_try_advisory_xact_lock_shared|pg_ts_config_is_visible|" +
        "pg_ts_dict_is_visible|pg_ts_parser_is_visible|pg_ts_template_is_visible|" +

App-Netdisco

lib/App/Netdisco/Util/Port.pm

C<portctl_role> config. This should only be done lazily, close to the
time of use, both for efficiency and to pick up the latest ACL settings.

If an entry in the C<portctl_role> config from C<deployment.yml> has the
same name as a database role, the database role overwrites it. If such a
role is later removed, a backup of the original entry is restored.

=cut

sub sync_portctl_roles {
  my @db_roles = schema(vars->{'tenant'})

App-NetdiscoX-Web-Plugin-RANCID

lib/App/NetdiscoX/Web/Plugin/RANCID.pm


=cut

=head1 NAME

App::NetdiscoX::Web::Plugin::RANCID - Link to device backups in RANCID/WebSVN

=head1 DEPRECATED

This plugin is deprecated and no longer maintained!

App-Netsync

share/mib/CISCO-STACK-MIB.my

        DESCRIPTION
                 "Modified chassisComponentType to include:
                 'fanMod4Hs'.

                 Modified syslogMessageFacility to include:
                 'eou', 'backup', 'eoam', 'webauth'.

                 Modified sysErrDisableTimeoutEnable to include:
                 'ethernetOam', 'gl2ptEoamThresholdExceed'.

                 Updated chassisPs1Type and chassisPs2Type to include:

share/mib/CISCO-STACK-MIB.my

        SYNTAX        INTEGER { true(1), false(2) }
        MAX-ACCESS    read-only
        STATUS        current
        DESCRIPTION   "This object reflects whether or not the TR-CRF
                      VLAN associated with this entry is configured
                      as a backup TR-CRF. A value of true(1) indicates
                      the TR-CRF is configured as a backup. A value
                      of false(2) indicates the TR-CRF is not configured
                      as a backup."
        ::= { tokenRingDripLocalVlanStatusEntry 6 }

tokenRingDripOwnerNodeID OBJECT-TYPE
        SYNTAX        OCTET STRING (SIZE(6))
        MAX-ACCESS    read-only

share/mib/CISCO-STACK-MIB.my

                                gl2pt(36),
                                callhome(37),
                                dhcpsnooping(38),
                                diags(40),
                                eou(42),
                                backup(43),
                                eoam(44),
                                webauth(45),
                                dom(46),
                                mvrp(47)
                              }

App-NoodlePay

bin/noodlepay.pl

desktop or laptop computer, and a USB cable to connect it to the
hardware wallet device.

Compared to other hardware wallet solutions, Noodle Pay also greatly
simplifies physically securing your private keys, and keeping
backups. You can simply pop the MicroSD card out of the Noodle Air,
and keep it physically secure. For backups, you can just duplicate the
MicroSD card, and keep multiple copies in safe locations.

=head1 CONFIGURATION

The $electrum variable at the top of noodlepay.pl should be set to the

App-Office-CMS

lib/App/Office/CMS.pm


=item o How do I back up the database?

See the config file .htoffice.cms.conf:

	backup_command = pg_dump -U cms cms
	backup_file = /tmp/pg.cms.backup.dat

When backup_command has a value, the Edit Contents tab gets a [Backup] button, and when this button
is clicked:

=over 4

=item o The command is run

lib/App/Office/CMS.pm


=item o Otherwise, STDOUT is written to the output file

=back

So why are there two lines, rather than a single 'pg_dump -U cms cms > /tmp/pg.cms.backup.dat'?

Because I use L<Capture::Tiny>, which does not want you to use redirection.

Lastly, the output is written using L<File::Slurper>.
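
The backup flow described above can be sketched as follows (a minimal sketch: the config values are those shown earlier, and the surrounding application plumbing is omitted):

```perl
use strict;
use warnings;
use Capture::Tiny 'capture';
use File::Slurper 'write_text';

# Values as they would come from .htoffice.cms.conf.
my $backup_command = 'pg_dump -U cms cms';
my $backup_file    = '/tmp/pg.cms.backup.dat';

# Capture::Tiny captures STDOUT/STDERR itself, so no shell redirection is needed.
my ($stdout, $stderr, $exit) = capture { system $backup_command };

if ($exit == 0) {
    write_text($backup_file, $stdout);   # on success, STDOUT goes to the output file
}
else {
    warn "Backup failed: $stderr";
}
```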

lib/App/Office/CMS.pm


=head1 TODO

=over 4

=item o Adopt Git::Repository for versioned backup

=item o Clean up error handling

For example, when build_error_result is called, rather than build_success_result, the
data sent to Javascript must be handled slightly differently.

lib/App/Office/CMS.pm


=item o Add an option, perhaps, to escape entities when inputting HTML

=item o Adopt DBIx::Connector

=item o Implement user-initiated backup and restore

=item o Change class hierarchy

This is so View does not have to pass so many parameters to its 'has-a' attributes

App-PDFUtils

lib/App/PDFUtils.pm

This program is a wrapper for <prog:qpdf> to password-protect PDF files
(in-place). This is the counterpart for <prog:remove-pdf-password>. Why use this
wrapper instead of **qpdf** directly? This wrapper offers configuration file
support, where you can put the password(s) you want to use there. The wrapper
also offers multiple file support and additional options, e.g. whether to create
a backup.

MARKDOWN
    args => {
        %argspec0_files,
        password => {
            schema => ['str*', min_len=>1],
            req => 1,
        },
        backup => {
            summary => 'Whether to back up the original file to ORIG~',
            schema => 'bool*',
            default => 1,
        },
        # XXX key_length (see qpdf, but when 256 can't be opened by evince)
        # XXX other options (see qpdf)

lib/App/PDFUtils.pm

            next FILE;
        }

      BACKUP:
        {
            last unless $args{backup};
            unless (rename $f, "$f~") {
                warn "Can't backup original '$f' to '$f~': $!, skipped backup\n";
                last;
            };
        }
        unless (rename $tempf, $f) {
            $envres->add_result(500, "Can't rename $tempf to $f: $!", {item_id=>$f});

lib/App/PDFUtils.pm

use the file. (The banks could've sent the PDF in a password-protected .zip, or
use PGP-encrypted email, but I digress.)

Compared to using **qpdf** directly, this wrapper offers some additional
features/options and convenience, for example: multiple file support, multiple
password-matching attempts, a configuration file, an option for whether you want a backup,
etc.

You can provide the passwords to be tried in a configuration file,
`~/remove-pdf-password.conf`, e.g.:
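
A minimal sketch of what such a file might contain (the key name follows the script's `passwords` option; the exact syntax accepted by the config reader is an assumption here, with one candidate password per line):

```
passwords = password1
passwords = password2
```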

lib/App/PDFUtils.pm

    args => {
        %argspec0_files,
        passwords => {
            schema => ['array*', of=>['str*', min_len=>1], min_len=>1],
        },
        backup => {
            summary => 'Whether to back up the original file to ORIG~',
            schema => 'bool*',
            default => 1,
        },
    },
    deps => {

lib/App/PDFUtils.pm

            next FILE;
        }

      BACKUP:
        {
            last unless $args{backup};
            unless (rename $f, "$f~") {
                warn "Can't backup original '$f' to '$f~': $!, skipped backup\n";
                last;
            };
        }
        unless (rename $tempf, $f) {
            $envres->add_result(500, "Can't rename $tempf to $f: $!", {item_id=>$f});

lib/App/PDFUtils.pm

This program is a wrapper for L<qpdf> to password-protect PDF files
(in-place). This is the counterpart for L<remove-pdf-password>. Why use this
wrapper instead of B<qpdf> directly? This wrapper offers configuration file
support, where you can put the password(s) you want to use there. The wrapper
also offers multiple file support and additional options, e.g. whether to create
a backup.

This function is not exported.

Arguments ('*' denotes required arguments):

=over 4

=item * B<backup> => I<bool> (default: 1)

Whether to back up the original file to ORIG~.

=item * B<files>* => I<array[filename]>

(No description)

lib/App/PDFUtils.pm

use the file. (The banks could've sent the PDF in a password-protected .zip, or
use PGP-encrypted email, but I digress.)

Compared to using B<qpdf> directly, this wrapper offers some additional
features/options and convenience, for example: multiple file support, multiple
password-matching attempts, a configuration file, an option for whether you want a backup,
etc.

You can provide the passwords to be tried in a configuration file,
C<~/remove-pdf-password.conf>, e.g.:

lib/App/PDFUtils.pm


Arguments ('*' denotes required arguments):

=over 4

=item * B<backup> => I<bool> (default: 1)

Whether to back up the original file to ORIG~.

=item * B<files>* => I<array[filename]>

(No description)

App-PLab

bin/ManCen

      # big rename from .bak to .cen
      $statwin-> text("Restoring .cen files...");
      while ( $curr != $to + $incr) {
         my $cenname = $w-> win_extname( $w-> win_formfilename( $curr));
         if ( -f "$cenname.bak") {
              Prima::MsgBox::message("Cannot rename backup file. Please note that the $cenname.bak file contains the actual information.")
                 if !unlink($cenname) || !rename( "$cenname.bak", $cenname);
         } else {
            Prima::MsgBox::message("Cannot delete $cenname. Note that it contains outdated information.")
               if !unlink($cenname);
    	 }

App-PPI-Dumper

MANIFEST.SKIP

\bbuild.com$

# and Module::Build::Tiny generated files
\b_build_params$

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$

App-PTP

script/ptp

regular expression language (B<grep> has a B<-P> flag but B<sed> has nothing of
the like).

=item * Provide a powerful input/output files support, that is lacking when
using vanilla-Perl one-liner (recursion in directories, output in-place with
optional backups, etc.).

=item * Pipelining of multiple operations on multiple files (using a pipeline
made of several standard tool usually makes it difficult to process several
input files at once).

App-Pastebin-sprunge

MANIFEST.SKIP

\bBuild.bat$
\bBuild.COM$
\bBUILD.COM$
\bbuild.com$

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$

App-PersistentSSH

MANIFEST.SKIP


# Avoid Module::Build generated and utility files.
\bBuild$
\b_build/

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$

App-Phoebe

lib/App/Phoebe/Capsules.pm

from other devices or friends. The file format is line oriented, each line
containing two fingerprints, C<FROM> and C<TO>.

🔥 The capsule name I<login> is reserved.

🔥 The file names I<archive>, I<backup>, and I<upload> are reserved.

=head1 NO WIKI, ONLY CAPSULES

Here's how to disable all wiki functions of Phoebe and just use capsules. The
C<nothing_else> function comes right after C<capsules> as an extension and

lib/App/Phoebe/Capsules.pm

    return result($stream, "30", "gemini://$host:$port/$capsule_space/$capsule/$id");
  } elsif (($host) = $url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/login$!) {
    return serve_capsule_login($stream, $host);
  } elsif (($host, $capsule) = $url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/([^/]+)/archive$!) {
    return serve_capsule_archive($stream, $host, decode_utf8(uri_unescape($capsule)));
  } elsif (($host, $capsule, $id) = $url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/([^/]+)/backup(?:/([^/]+))?$!) {
    return serve_capsule_backup($stream, $host, map { decode_utf8(uri_unescape($_)) } $capsule, $id||"");
  } elsif (($host, $capsule, $id) = $url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/([^/]+)/delete(?:/([^/]+))?$!) {
    return serve_capsule_delete($stream, $host, map { decode_utf8(uri_unescape($_)) } $capsule, $id||"");
  } elsif ($url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/([^/]+)/access$!) {
    return result($stream, "10", "Password");
  } elsif (($host, $capsule, $token) = $url =~ m!^gemini://($hosts)(?::$port)?/$capsule_space/([^/]+)/access\?(.+)$!) {

lib/App/Phoebe/Capsules.pm

  my ($stream, $host, $capsule) = @_;
  my $name = capsule_name($stream);
  return 1 unless is_my_capsule($stream, $name, $capsule, 'archive');
  # use /bin/tar instead of Archive::Tar to save memory
  my $dir = wiki_dir($host, $capsule_space) . "/" . encode_utf8($capsule);
  my $file = "$dir/backup/data.tar.gz";
  if (-e $file and time() - modified($file) <= 300) { # data is valid for 5 minutes
    $log->info("Serving cached data archive for $capsule");
    success($stream, "application/tar");
    $stream->write(read_binary($file));
  } else {
    write_binary($file, ""); # truncate in order to avoid "file changed as we read it" warning
    my @command = ('/bin/tar', '--create', '--gzip',
		   '--file', $file,
		   '--exclude', "backup",
		   '--directory', "$dir/..",
		   encode_utf8($capsule));
    $log->debug("@command");
    if (system(@command) == 0) {
      $log->info("Serving new data archive for $capsule");

lib/App/Phoebe/Capsules.pm

    }
  }
  return 1;
}

sub serve_capsule_backup {
  my ($stream, $host, $capsule, $id) = @_;
  my $name = capsule_name($stream);
  return 1 unless is_my_capsule($stream, $name, $capsule, 'view the backup of');
  my $dir = capsule_dir($host, $capsule) . "/backup";
  if ($id) {
    $log->info("Serving $capsule backup $id");
    # this works for text files, too!
    success($stream, mime_type($id));
    my $file = $dir . "/" . encode_utf8($id);
    $stream->write(read_binary($file));
  } else {
    $log->info("Backup for $capsule");
    success($stream);
    $stream->write("# " . ucfirst($capsule) . " backup\n");
    $stream->write("When editing a page, a backup is saved here, provided at least 10 minutes have passed since the last edit.\n");
    my @files;
    @files = read_dir($dir) if -d $dir;
    if (not @files) {
      $stream->write("There are no backup files, yet.\n");
    } else {
      $stream->write("Files:\n");
      for my $file (sort @files) {
	print_link($stream, $host, $capsule_space, $file, "$capsule/backup/$file");
      };
    }
  }
  return 1;
}

lib/App/Phoebe/Capsules.pm

  return 1 unless is_my_capsule($stream, $name, $capsule, 'delete a file in');
  my $dir = capsule_dir($host, $capsule);
  if ($id) {
    $log->info("Delete $id from $capsule");
    my $file = $dir . "/" . encode_utf8($id);
    my $backup_dir = "$dir/backup";
    my $backup_file = $backup_dir . "/" . encode_utf8($id);
    mkdir($backup_dir) unless -d $backup_dir;
    rename $file, $backup_file if -f $file;
    result($stream, "30", to_url($stream, $host, $capsule_space, $capsule));
  } else {
    $log->info("Delete for $capsule");
    success($stream);
    $stream->write("# Delete a file in " . ucfirst($capsule) . "\n");
    $stream->write("Deleting a file moves it to the backup.\n");
    my @files;
    @files = grep { $_ ne "backup" } read_dir($dir) if -d $dir;
    if (not @files) {
      $stream->write("There are no files to delete.\n");
    } else {
      $stream->write("Files:\n");
      for my $file (sort @files) {

lib/App/Phoebe/Capsules.pm

  my ($stream, $host, $capsule) = @_;
  my $name = capsule_name($stream);
  my $dir = capsule_dir($host, $capsule);
  my @files;
  @files = read_dir($dir) if -d $dir;
  my $has_backup = first { $_ eq "backup" } @files;
  @files = grep { $_ ne "backup" } @files if $has_backup;
  success($stream);
  $log->info("Serving $capsule");
  $stream->write("# " . ucfirst($capsule) . "\n");
  if ($name) {
    if ($name eq $capsule) {
      print_link($stream, $host, $capsule_space, "Specify file for upload", "$capsule/upload");
      print_link($stream, $host, $capsule_space, "Delete file", "$capsule/delete") if @files;
      print_link($stream, $host, $capsule_space, "Share access with other people or other devices", "$capsule/share");
      print_link($stream, $host, $capsule_space, "Access backup", "$capsule/backup") if $has_backup;
      print_link($stream, $host, $capsule_space, "Download archive", "$capsule/archive") if @files;
    } elsif (@capsule_tokens) {
      print_link($stream, $host, $capsule_space, "Access this capsule", "$capsule/access");
    }
  }

lib/App/Phoebe/Capsules.pm

    return result($stream, "51", "This file does not exist") unless -f $file;
    return result($stream, "40", "Cannot delete this file") unless unlink $file;
    $log->info("Deleted $file");
  } else {
    mkdir($dir) unless -d $dir;
    backup($dir, $id);
    write_binary($file, $buffer);
    $log->info("Wrote $file");
    return result($stream, "30", to_url($stream, $host, $capsule_space, $capsule));
  }
}

sub backup {
  my ($dir, $id) = @_;
  my $file = $dir . "/" . encode_utf8($id);
  my $backup_dir = "$dir/backup";
  my $backup_file = $backup_dir . "/" . encode_utf8($id);
  return unless -f $file and (time - (stat($file))[9]) > 600;
  # make a backup if the last edit was more than 10 minutes ago
  mkdir($backup_dir) unless -d $backup_dir;
  write_binary($backup_file, read_binary($file));
}

1;

App-PhotoDB

lib/App/PhotoDB/commands.pm


=head2 db

The C<db> command provides a set of subcommands for managing the database backend.

=head3 db backup

Back up the contents of the database

=head3 db logs

lib/App/PhotoDB/commands.pm


Upgrade database to the latest schema

=cut
	$handlers{db} = {
		'backup' => { 'handler' => \&notimplemented, 'desc' => 'Back up the contents of the database' },
		'logs'   => { 'handler' => \&db_logs,        'desc' => 'Show activity logs from the database' },
		'stats'  => { 'handler' => \&db_stats,       'desc' => 'Show statistics about database usage' },
		'test'   => { 'handler' => \&db_test,        'desc' => 'Test database connectivity' },
	};
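
The dispatch table above can be used along these lines (a hypothetical sketch: the stub handlers and the lookup code here are illustrative, not App::PhotoDB's actual command parser):

```perl
use strict;
use warnings;

# Stub handlers standing in for the real \&db_logs, \&db_stats, etc.
my %handlers;
$handlers{db} = {
    'logs'  => { 'handler' => sub { print "showing logs\n" },  'desc' => 'Show activity logs from the database' },
    'stats' => { 'handler' => sub { print "showing stats\n" }, 'desc' => 'Show statistics about database usage' },
};

# Resolve a "db logs" command against the table and run it.
my ($cmd, $subcmd) = ('db', 'logs');
if (my $entry = $handlers{$cmd}{$subcmd}) {
    print "$entry->{desc}\n";
    $entry->{handler}->();
}
```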

App-Plex-Archiver

dist.ini

[GithubMeta]
; Enable issue tracking using Github
issues = 1
 
[PruneFiles]
; Git rid of backup files
match = ~$
match = \.bak$
 
[NextRelease]
format = %-7v %{yyyy-MM-dd}d

App-Pod

t/cpan/Mojo2/File.pm

Change file permissions.

=head2 copy_to

  my $destination = $path->copy_to('/home/sri');
  my $destination = $path->copy_to('/home/sri/.vimrc.backup');

Copy file with L<File::Copy> and return the destination as a L<Mojo::File> object.

=head2 dirname

t/cpan/Mojo2/File.pm

Create the directories if they don't already exist, any additional arguments are passed through to L<File::Path>.

=head2 move_to

  my $destination = $path->move_to('/home/sri');
  my $destination = $path->move_to('/home/sri/.vimrc.backup');

Move file with L<File::Copy> and return the destination as a L<Mojo::File> object.

=head2 new

App-Prove-Plugin-Metrics

t/app/prove/plugin/metrics.t

use strict;
use warnings;
use App::Prove;
use Test::More tests=>6;

my $sbackup = {};

# Redirect STDERR into the scalar referenced by $sref, saving a duplicate
# of the real STDERR handle so it can be restored later.
sub steal_stderr {
    my ($sref) = @_;
    if (!defined($$sbackup{stderr})) {
        open($$sbackup{stderr}, '>&STDERR');   # duplicate the real STDERR
        close(STDERR);
    }
    $$sref = undef;
    open(STDERR, '>', $sref);                  # STDERR now writes into $$sref
}

# Restore the real STDERR from the saved duplicate.
sub return_stderr {
    if (defined($$sbackup{stderr})) {
        close(STDERR);
        open(STDERR, '>&', $$sbackup{stderr});
        delete($$sbackup{stderr});
    }
}

subtest 'stderr, all data'=>sub {
	plan tests=>16;

App-PureProxy

MANIFEST.SKIP

\btmon.out$

# Avoid Devel::NYTProf generated files
\bnytprof

# Avoid temp and backup files.
~$
\.tmp$
\.old$
\.bak$
\#$

App-Rad

lib/App/Rad/Include.pm

sub _get_oneliner_code {
    return _sanitize( _deparse($_[0]) );
}


# TODO: option to do this while saving a backup file
# (behavior probably set via 'setup')
# inserts the string received
# (hopefully code) inside the
# user's program file as a 'sub'
sub _insert_code_in_file {

App-Rcsync

bin/rcsync


    <vim>
        filename /home/supermario/.vimrc
        template vimrc.tt
        <params>
            backupdir /tmp/vim
        </params>
    </vim>

In C<$HOME/rcsync/vimrc.tt>:

    set nocompatible
    set backup
    set backupdir=[% backupdir %]
    ...

From the command line:

    rcsync vim

App-RecordStream

MANIFEST.SKIP

\bBuild.bat$
\bBuild.COM$
\bBUILD.COM$
\bbuild.com$

# Avoid temp and backup files.
~$
\.old$
\#$
\b\.#
\.bak$

App-Relate

bin/relate


  locate primary_term | egrep term2 | egrep term3 [....| termN]

Though it also has a few other features, B<relate>:

  o  screens out "uninteresting" files by default (emacs backups, CVS/RCS files)
  o  has options to restrict reports to files, directories or symlinks.

An important performance hint: you can speed up relate
tremendously by using a relatively unique first term.  For
example, if you're on a unix box, you don't want to start

bin/relate

which overrides the default filter and returns all matches.

As of this writing, by default files are ignored that match
these patterns:

      '~$'       # emacs backups
      '/\#'      # emacs autosaves
      ',v$'      # cvs/rcs repository files
      '/CVS$'
      '/CVS/'
      '/RCS$'
