mt-aws-glacier
==============
Perl Multithreaded multipart sync to Amazon Glacier service.
## Intro
Amazon Glacier is an archive/backup service with a very low storage price, though with some caveats in usage and in archive retrieval pricing.
[Read more about Amazon Glacier][amazon glacier]
*mt-aws-glacier* is a client application for Amazon Glacier, written in the Perl programming language, for *nix.
[amazon glacier]:http://aws.amazon.com/glacier/
## Version
* Version 1.120 (see the [ChangeLog][mt-aws glacier changelog] or follow [@mtglacier](https://twitter.com/mtglacier) for updates)
## Warnings ( *MUST READ* )
* When experimenting with Glacier, make sure you will be able to delete all your archives; it is currently impossible to delete an archive
or a non-empty vault in the Amazon Console. Also make sure you have read _all_ of the Amazon Glacier pricing/FAQ.
* Read the Amazon Glacier pricing [FAQ][Amazon Glacier faq] again, really. Beware of the retrieval fee.
* Before using this program, you should read the Amazon Glacier documentation and understand, in general, Amazon Glacier workflows and entities. This documentation
does not define any new layer of abstraction over Amazon Glacier entities.
* In general, all Amazon Glacier clients store metadata (filenames, file metadata) in their own formats, incompatible with each other. To restore a backup made with `mt-aws-glacier` you'll
need `mt-aws-glacier`; other software will most likely restore your data but lose the filenames.
* With a low `partsize` option you pay a bit more, as Amazon charges for each upload request. For example, uploading a 1 GiB file with `partsize=1` (MiB) takes 1024 upload requests, while the default `partsize=16` takes only 64.
* For backups created with older versions (0.7x) of mt-aws-glacier, the Journal file is **required to restore the backup**.
* Use a **Journal file** only with the **same vault** (more info [here](#what-is-journal), [here](#how-to-maintain-a-relation-between-my-journal-files-and-my-vaults) and [here](https://github.com/vsespb/mt-aws-glacier/issues/50))
* When working with CD-ROM/CIFS/other non-Unix/non-POSIX filesystems, you might need to set `leaf-optimization` to `0`
* Please read the [ChangeLog][mt-aws glacier changelog] when upgrading to a new version, and especially when downgrading
(see the "Compatibility" sections when downgrading)
* Zero-length files and empty directories are ignored (as Amazon Glacier does not support them)
[mt-aws glacier changelog]:https://github.com/vsespb/mt-aws-glacier/blob/master/ChangeLog
## Help/contribute to this project
* If you like *mt-aws-glacier* and are registered on GitHub, please **Star** it on GitHub; this way you'll help promote the project.
* Please report any bugs or issues (using GitHub issues). Really, any feedback is welcome.
* If you want to contribute to the source code, please contact me first and describe what you want to do.
## Usage
1. Create a directory containing the files to back up. Example: `/data/backup`
2. Create a config file, say, glacier.cfg

        key=YOURKEY
        secret=YOURSECRET
        # region: eu-west-1, us-east-1 etc
        region=us-east-1
        # protocol=http (default) or https
        protocol=http

    (you can omit any config option and specify it directly on the command line; command-line options override the same options in the config)
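    For example, a sketch of forcing HTTPS for a single run with the config above (a command-line option overrides the config's `protocol=http`):

        ./mtglacier sync --config glacier.cfg --protocol https --dir /data/backup --vault myvault --journal journal.log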
3. Create a vault in the specified region, using the Amazon Console (`myvault`) or using mtglacier

        ./mtglacier create-vault myvault --config glacier.cfg

    (note that Amazon Glacier does not return an error if the vault already exists)
4. Choose a filename for the Journal, for example, `journal.log`
5. Sync your files

        ./mtglacier sync --config glacier.cfg --dir /data/backup --vault myvault --journal journal.log --concurrency 3

6. Add more files and sync again
7. Check that your local files have not been modified since the last sync

        ./mtglacier check-local-hash --config glacier.cfg --dir /data/backup --journal journal.log

8. Delete some files from your backup location
9. Initiate an archive restore job on the Amazon side

        ./mtglacier restore --config glacier.cfg --dir /data/backup --vault myvault --journal journal.log --max-number-of-files 10

10. Wait 4+ hours for Amazon Glacier to complete the archive retrieval
11. Download the restored files back to the backup location

        ./mtglacier restore-completed --config glacier.cfg --dir /data/backup --vault myvault --journal journal.log

12. Delete all your files from the vault

        ./mtglacier purge-vault --config glacier.cfg --vault myvault --journal journal.log

13. Wait ~24-48 hours, then you can try deleting your vault

        ./mtglacier delete-vault myvault --config glacier.cfg

    (note: currently Amazon Glacier does not return an error if the vault does not exist)
To compare two Journal files (for example, an original one against one recreated from the Amazon Glacier inventory), you can diff their significant fields:

    cut journal -f 4,5,6,7,8 | sort > journal.cut
    cut new-journal -f 4,5,6,7,8 | sort > new-journal.cut
    diff journal.cut new-journal.cut
* Each text line in the file represents one record
* It's an append-only file. The file is opened in append-only mode, and new records are only added to the end. This guarantees that
you can recover the Journal file to a previous state in case of a bug in the program, a crash, or power/filesystem issues. You can even use `chattr +a` to set append-only protection on the Journal (see the sketch after this list).
* As the Journal file is append-only, it's easy to perform incremental backups of it
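For example, a minimal sketch of protecting the Journal with the filesystem append-only attribute (assumes root access and a filesystem that supports `chattr`, e.g. ext4):

    # set the append-only attribute (root required)
    sudo chattr +a journal.log
    # new records can still be appended, but truncation and rewrites will now fail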
#### Why is the Journal a file in the local filesystem, and not in online cloud storage (like Amazon S3 or Amazon DynamoDB)?
The Journal is needed to restore a backup, and we can expect that if you need to restore a backup, it means you lost your filesystem, together with the Journal.
However, the Journal is also needed to perform *new backups* (the `sync` command), to determine which files are already in Glacier and which are not, and also to check local file integrity (the `check-local-hash` command).
In practice, you usually perform new backups every day, while you restore backups (and lose your filesystem) very rarely.
So a fast (local) Journal is essential to perform new backups quickly and cheaply (important for users who back up thousands or millions of files).
And if you lose your Journal, you can restore it from Amazon Glacier (see the `retrieve-inventory` command). It's also recommended to back up your Journal
to another backup system (Amazon S3? Dropbox?) with another tool, because retrieving the inventory from Amazon Glacier is pretty slow.
Also, some users might want to back up the *same* files from *multiple* different locations; they would need a *synchronization* solution for their Journal files.
Anyway, I think the problem of putting Journals into the cloud can be automated and solved with a three-line bash script (a sketch follows).
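For instance, a minimal sketch of such a script, assuming the AWS CLI is installed and configured, and `my-journal-bucket` is a hypothetical S3 bucket name:

    #!/bin/sh
    # copy the local Journal to S3 after each sync (bucket name is hypothetical)
    aws s3 cp journal.log s3://my-journal-bucket/journal.log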
#### How to maintain a relation between my journal files and my vaults?
1. You can name the journal with the same name as your vault. Example: the vault name is `Photos`, the journal file name is `Photos.journal` (or `eu-west-1-Photos.journal`).
2. (Almost) any command line option can be used in the config file, so you can create `myphotos.cfg` with the following content:

        key=YOURKEY
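        # (sketch: the remaining values are illustrative placeholders)
        secret=YOURSECRET
        region=eu-west-1
        vault=Photos
        journal=/home/me/journals/eu-west-1-Photos.journal
        dir=/data/photos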
7. In the future, other features and options can be added, such as compression/encryption, which might require deciding again where to put the new attributes for them.
8. Usually there are different policies for backing up config files and journal files (which are modifiable). So if you lose your journal file, you won't be sure which config corresponds to which *vault* (while the journal file
can be restored from a *vault*)
9. It's better to keep the relation between a *vault* and the transfer root (the `--dir` option) in one place, such as the config file.
#### Why doesn't the Journal (and the metadata stored in Amazon Glacier) contain file metadata (like permissions)?
If you want to store permissions, put your files into archives before backing up to Amazon Glacier. There are lots of different possible things to store as file metadata,
and most of them are not portable. Take a look at archive file formats: different formats allow storing different metadata.
It's possible that in the future `mtglacier` will support some other metadata things.
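For example, a sketch of archiving a directory first so that permissions and ownership survive (tar stores them inside the archive; the file and directory names here are illustrative):

    # pack the directory, preserving permissions/ownership inside the tarball
    tar czf photos.tar.gz -C /data photos
    # upload the tarball with mtglacier (see `upload-file` below)
    ./mtglacier upload-file --config glacier.cfg --vault myvault --journal journal.log --dir . --filename photos.tar.gz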
## Specification for some commands
### `sync`
Propagates current local filesystem state to Amazon Glacier server.
### `upload-file`
Uploads a single file into Amazon Glacier. The file will be tracked in the Journal (just like when using the `sync` command).
There are several possible combinations of options for `upload-file`:
1. **--filename** and **--dir**

    _Uploads what_: the file pointed to by `filename`.

    _Filename in Journal and Amazon Glacier metadata_: the relative path from `dir` to `filename`

        ./mtglacier upload-file --config glacier.cfg --vault myvault --journal journal.log --dir /data/backup --filename /data/backup/dir1/myfile

    (this will upload the content of `/data/backup/dir1/myfile` to Amazon Glacier and use `dir1/myfile` as the filename for the Journal)

        ./mtglacier upload-file --config glacier.cfg --vault myvault --journal journal.log --dir data/backup --filename data/backup/dir1/myfile

    (let's assume the current directory is `/home`; then this will upload the content of `/home/data/backup/dir1/myfile` to Amazon Glacier and use `dir1/myfile` as the filename for the Journal)

    NOTE: the file `filename` should be inside the directory `dir`

    NOTE: both `--filename` and `--dir` are resolved to full paths before the relative path from `--dir` to `--filename` is determined. Thus you'll get an error
    if parent directories are unreadable. Also, if you have a symlink `/dir/ds` pointing to the directory `/dir/d3`, then `--dir /dir --filename /dir/ds/file` will result in the relative
    filename `d3/file`, not `ds/file`
2. **--filename** and **--set-rel-filename**

    _Uploads what_: the file pointed to by `filename`.
t/integration/config_engine_v078.t:
# print Dumper({errors => $errors, warnings => $warnings, result => $result});
# v0.78 regressions test
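# (Excerpt: `ok`/`is_deeply` come from Test::More, `cmp_deeply`/`set` from Test::Deep;
# `fake_config` and `config_create_and_parse` are helpers defined elsewhere in this test suite.)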
my ($default_concurrency, $default_partsize) = (4, 16);
my %misc_opts = ('journal-encoding' => 'UTF-8', 'filenames-encoding' => 'UTF-8', 'terminal-encoding' => 'UTF-8', 'config-encoding' => 'UTF-8', timeout => 180);
# SYNC
for (
qq!sync --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency!,
qq!sync --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log!,
qq!sync --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --partsize=$default_partsize!,
qq!sync --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency --partsize=$default_partsize!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors && $warnings, "$_ error/warnings");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
vault=>'myvault',
config=>'glacier.cfg',
dir => '/data/backup',
concurrency => $default_concurrency,
partsize => $default_partsize,
journal => 'journal.log',
new => 1,
detect => 'mtime-and-treehash',
'leaf-optimization' => '1',
}, "$_ result");
is_deeply($warnings, ['from-dir deprecated, use dir instead', 'to-vault deprecated, use vault instead'], "$_ warnings text");
};
}
for (
qq!sync --key=mykey --secret=mysecret --region myregion --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency!,
qq!sync --key=mykey --secret=mysecret --region myregion --from-dir /data/backup --to-vault=myvault --journal=journal.log!,
qq!sync --key=mykey --secret=mysecret --region myregion --from-dir /data/backup --to-vault=myvault --journal=journal.log --partsize=$default_partsize!,
qq!sync --key=mykey --secret=mysecret --region myregion --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency --partsize=$default_partsize!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors, "should understand line without config $_");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
vault=>'myvault',
dir => '/data/backup',
new => 1,
detect => 'mtime-and-treehash',
concurrency => $default_concurrency,
partsize => $default_partsize,
journal => 'journal.log',
'leaf-optimization' => '1',
}, "$_ result");
};
}
for (
qq!sync --config=glacier.cfg --key=mykey --region myregion --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency!,
qq!sync --config=glacier.cfg --key=mykey --region myregion --from-dir /data/backup --to-vault=myvault --journal=journal.log!,
qq!sync --config=glacier.cfg --key=mykey --region myregion --from-dir /data/backup --to-vault=myvault --journal=journal.log --partsize=$default_partsize!,
qq!sync --config=glacier.cfg --key=mykey --region myregion --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency --partsize=$default_partsize!,
){
fake_config secret => 'mysecret', sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors, "should understand part of config $_");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
config => 'glacier.cfg',
vault=>'myvault',
dir => '/data/backup',
new => 1,
detect => 'mtime-and-treehash',
concurrency => $default_concurrency,
partsize => $default_partsize,
journal => 'journal.log',
'leaf-optimization' => '1',
}, "$_ result");
}
}
for (
qq!sync --config=glacier.cfg --key=mykey --secret=newsecret --region myregion --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency!,
){
fake_config secret => 'mysecret', sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors, "command line should override config $_");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'newsecret',
region => 'myregion',
protocol => 'http',
config => 'glacier.cfg',
vault=>'myvault',
dir => '/data/backup',
new => 1,
detect => 'mtime-and-treehash',
concurrency => $default_concurrency,
partsize => $default_partsize,
journal => 'journal.log',
'leaf-optimization' => '1',
}, "$_ result");
};
}
for (
qq!sync --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=8 --partsize=2!,
qq!sync --partsize=2 --from-dir /data/backup --config=glacier.cfg --to-vault=myvault --journal=journal.log --concurrency=8 !,
qq!sync -partsize=2 -from-dir /data/backup -config=glacier.cfg -to-vault=myvault -journal=journal.log -concurrency=8 !,
qq!sync -partsize 2 -from-dir /data/backup -config glacier.cfg -to-vault=myvault -journal=journal.log -concurrency 8 !,
# TODO: this one will not work
# qq! -partsize 2 -from-dir /data/backup -config glacier.cfg -to-vault=myvault -journal=journal.log -concurrency 8 sync !,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors && $warnings, "$_ error/warnings");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
vault=>'myvault',
config=>'glacier.cfg',
dir => '/data/backup',
concurrency => 8,
partsize => 2,
new => 1,
detect => 'mtime-and-treehash',
journal => 'journal.log',
'leaf-optimization' => '1',
}, "$_ result");
is_deeply($warnings, ['from-dir deprecated, use dir instead', 'to-vault deprecated, use vault instead'], "$_ warnings text");
}
}
for (
qq!sync --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=8 --partsize=2!,
qq!sync --config=glacier.cfg --to-vault=myvault --journal=journal.log --concurrency=8 --partsize=2!,
qq!sync --config=glacier.cfg --from-dir /data/backup --journal=journal.log --concurrency=8 --partsize=2!,
qq!sync --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --concurrency=8 --partsize=2!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( $errors && !$result, "$_ - should catch missed options");
ok( $errors->[0] =~ /Please specify/, "$_ - should catch missed options and give error");
}
}
for (
qq!sync --dir x --config y -journal z -to-va va -conc 9 --partsize=2 extra!,
qq!sync sync --dir x --config y -journal z -to-va va -conc 9 --partsize=2!,
qq!sync --dir x --config y -journal z -to-va va extra -conc 9 --partsize=2!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( $errors && !$result, "$_ - should catch non option");
ok( $errors->[0] =~ /Extra argument/, "$_ - should catch non option");
}
}
for (
qq!sync --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency --max-number-of-files=42!,
qq!sync --max-number-of-files=42 --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log!,
qq!sync --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --max-number-of-files=42 --partsize=$default_partsize!,
qq!sync --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency --max-number-of-files=42 --partsize=$default_partsize!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors && $warnings, "$_ error/warnings");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
vault=>'myvault',
config=>'glacier.cfg',
dir => '/data/backup',
concurrency => $default_concurrency,
'max-number-of-files' => 42,
new => 1,
detect => 'mtime-and-treehash',
partsize => $default_partsize,
journal => 'journal.log',
'leaf-optimization' => '1',
}, "$_ result");
is_deeply($warnings, ['from-dir deprecated, use dir instead', 'to-vault deprecated, use vault instead'], "$_ warnings text");
}
}
#
# CHECK-LOCAL-HASH
for (
qq!check-local-hash --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log!,
qq!check-local-hash --from-dir /data/backup --to-vault=myvault --journal=journal.log --config=glacier.cfg!,
qq!check-local-hash --from-dir=/data/backup --to-vault=myvault -journal journal.log --config=glacier.cfg!,
# TODO: this one will not work
# qq! -partsize 2 -from-dir /data/backup -config glacier.cfg -to-vault=myvault -journal=journal.log -concurrency 8 sync !,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors && $warnings, "$_ error/warnings");
# print $errors->[0];
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
config=>'glacier.cfg',
dir => '/data/backup',
journal => 'journal.log',
}, "$_ result");
cmp_deeply($warnings, set('Option "--to-vault" deprecated for this command','from-dir deprecated, use dir instead', 'to-vault deprecated, use vault instead'),
"$_ warnings text");
}
}
for (qw/vault to-vault/){
fake_config key=>'mykey', secret => 'mysecret', region => 'myregion', $_ => 'myvault', sub {
my ($errors, $warnings, $command, $result) =
config_create_and_parse(split(' ', qq!check-local-hash --config=glacier.cfg --from-dir /data/backup --journal=journal.log!));
ok( !$errors && $warnings, "error/warnings when $_ is in config");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
config=>'glacier.cfg',
dir => '/data/backup',
journal => 'journal.log',
}, "result when $_ option is in config");
cmp_deeply($warnings, set('from-dir deprecated, use dir instead'),
"warnings text when $_ is in config");
};
}
for (
qq!check-local-hash --from-dir /data/backup --to-vault=myvault --journal=journal.log!,
qq!check-local-hash --config=glacier.cfg --to-vault=myvault --journal=journal.log!,
qq!check-local-hash --config=glacier.cfg --from-dir /data/backup --to-vault=myvault!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( $errors && !$result, "$_ - should catch missed options");
ok( $errors->[0] =~ /Please specify/, "$_ - should catch missed options and give error");
};
}
# RESTORE
for (
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --max-number-of-files=21 --concurrency=$default_concurrency!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --max-number-of-files=21!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors && $warnings, "$_ error/warnings");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
vault=>'myvault',
config=>'glacier.cfg',
dir => '/data/backup',
concurrency => $default_concurrency,
'max-number-of-files' => 21,
journal => 'journal.log',
}, "$_ result");
cmp_deeply($warnings, set('to-vault deprecated, use vault instead','from-dir deprecated, use dir instead'), "$_ warnings text");
}
}
for (
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --max-number-of-files=21 --concurrency=9!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=9 --max-number-of-files=21!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal journal.log --max-number-of-files=21 --concurrency=9!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors && $warnings, "$_ error/warnings");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
vault=>'myvault',
config=>'glacier.cfg',
dir => '/data/backup',
concurrency => 9,
'max-number-of-files' => 21,
journal => 'journal.log',
}, "$_ result");
cmp_deeply($warnings, set('to-vault deprecated, use vault instead','from-dir deprecated, use dir instead'), "$_ warnings text");
};
}
for (
qq!restore --from-dir /data/backup --to-vault=myvault --journal=journal.log --max-number-of-files=21!,
qq!restore --config=glacier.cfg --to-vault=myvault --journal=journal.log --max-number-of-files=21!,
qq!restore --config=glacier.cfg --from-dir /data/backup --journal=journal.log --max-number-of-files=21!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --max-number-of-files=21!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log !,
qq!restore --from-dir /data/backup --to-vault=myvault --journal=journal.log --max-number-of-files=21 --concurrency=9!,
qq!restore --config=glacier.cfg --to-vault=myvault --journal=journal.log --max-number-of-files=21 --concurrency=9!,
qq!restore --config=glacier.cfg --from-dir /data/backup --journal=journal.log --max-number-of-files=21 --concurrency=9!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --max-number-of-files=21 --concurrency=9!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=9!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( $errors && !$result, "$_ - should catch missed options");
ok( $errors->[0] =~ /Please specify/, "$_ - should catch missed options and give error");
};
}
# RESTORE-COMPLETED
for (
qq!restore-completed --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency!,
qq!restore-completed --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log !,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors && $warnings, "$_ error/warnings");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
vault=>'myvault',
config=>'glacier.cfg',
dir => '/data/backup',
concurrency => $default_concurrency,
journal => 'journal.log',
}, "$_ result");
cmp_deeply($warnings, set('to-vault deprecated, use vault instead','from-dir deprecated, use dir instead'), "$_ warnings text");
};
}
for (
qq!restore-completed --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=9!,
qq!restore-completed --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=9 !,
qq!restore-completed --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal journal.log --concurrency=9!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors && $warnings, "$_ error/warnings");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
vault=>'myvault',
config=>'glacier.cfg',
dir => '/data/backup',
concurrency => 9,
journal => 'journal.log',
}, "$_ result");
cmp_deeply($warnings, set('to-vault deprecated, use vault instead','from-dir deprecated, use dir instead'), "$_ warnings text");
};
}
for (
qq!restore --from-dir /data/backup --to-vault=myvault --journal=journal.log!,
qq!restore --config=glacier.cfg --to-vault=myvault --journal=journal.log!,
qq!restore --config=glacier.cfg --from-dir /data/backup --journal=journal.log!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log !,
qq!restore --from-dir /data/backup --to-vault=myvault --journal=journal.log--concurrency=9!,
qq!restore --config=glacier.cfg --to-vault=myvault --journal=journal.log --concurrency=9!,
qq!restore --config=glacier.cfg --from-dir /data/backup --journal=journal.log --concurrency=9!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --concurrency=9!,
qq!restore --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=9!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( $errors && !$result, "$_ - should catch missed options");
ok( $errors->[0] =~ /Please specify/, "$_ - should catch missed options and give error");
};
}
# PURGE-VAULT
for (
qq!purge-vault --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=$default_concurrency!,
qq!purge-vault --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log !,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors && $warnings, "$_ error/warnings");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
vault=>'myvault',
config=>'glacier.cfg',
concurrency => $default_concurrency,
journal => 'journal.log',
}, "$_ result");
cmp_deeply($warnings, set('to-vault deprecated, use vault instead','from-dir deprecated, use dir instead', 'Option "--from-dir" deprecated for this command'), "$_ warnings text");
};
}
for (
qq!purge-vault --config=glacier.cfg --from-dir /data/backup --to-vault=myvault --journal=journal.log --concurrency=9!,
qq!purge-vault --config=glacier.cfg --from-dir /data/backup --journal=journal.log --concurrency=9 --to-vault=myvault!,
qq!purge-vault --config glacier.cfg --from-dir=/data/backup --journal=journal.log --concurrency=9 --to-vault=myvault!,
){
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_));
ok( !$errors && $warnings, "$_ error/warnings");
is_deeply($result, {
%misc_opts,
key=>'mykey',
secret => 'mysecret',
region => 'myregion',
protocol => 'http',
t/unit/config_engine_parse.t:
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ',
'sync --dir x --config y -journal z -to-va va -conc 9 --partsize=3 '
));
ok( $errors && $errors->[0] =~ /must be power of two/, 'check partsize');
}
}
{
fake_config sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ',
'purge-vault --config=glacier.cfg --dir /data/backup --to-vault=myvault -journal=journal.log'
));
ok( !$errors && $warnings && $result, "should accept dir just like from-dir" );
}
}
{
fake_config key=>'mykey', secret => 'mysecret', region => 'myregion', vault => 'newvault', sub {
my ($errors, $warnings, $command, $result)= config_create_and_parse(split(' ',
'purge-vault --config=glacier.cfg --vault=myvault -journal=journal.log'
));
t/unit/filter.t:
cmp_deeply [$F->check_filenames(@$list)], $expected, $msg;
}
test_check_filenames '+*.gz -/data/ +', [qw{1.gz 1.txt data/1.txt data/z/1.txt data/2.gz f data/p/33.gz}],
[qw{1.gz 1.txt data/2.gz f data/p/33.gz}], "should work";
test_check_filenames '-/data/ +*.gz -', [qw{1.gz p/1.gz data/ data/1.gz data/a/1.gz}], [qw{1.gz p/1.gz}], "should work again";
test_check_filenames '+*.gz -/data/', [qw{1.gz 1.txt data/1.txt data/z/1.txt data/2.gz f data/p/33.gz}],
[qw{1.gz 1.txt data/2.gz f data/p/33.gz}], "default action - include";
test_check_filenames '+*.gz +/data/ -', [qw{x/y x/y/z.gz /data/1 /data/d/2 abc}], [qw{x/y/z.gz /data/1 /data/d/2}], "default action - exclude";
test_check_filenames '-!/data/ +*.gz +/data/backup/ -',
[qw{data/1 dir/1.gz data/2 data/3.gz data/x/4.gz data/backup/5.gz data/backup/6/7.gz data/backup/z/1.txt}],
[qw{data/3.gz data/x/4.gz data/backup/5.gz data/backup/6/7.gz data/backup/z/1.txt}], "exclamation mark should work";
test_check_filenames '-0.* -Ñexclude/a/ +*.gz -', [qw{fexclude/b Ñexclude/b.gz}], [qw{Ñexclude/b.gz}], "exclamation mark should work";
#
# check_dir
#
sub test_check_dir
{
my ($filters, $dir, $res, $subdirs) = @_;
my $F = App::MtAws::Filter->new();