App-MtAws
mtglacier installed without CPAN). Now reverting the check at the configuration stage and leaving only the check for when we
really get an error that SSL is broken. So it will advise installing Mozilla::CA only on systems where HTTPS is indeed
broken.
* Dropping Ubuntu Saucy and Quantal PPA builds, as they are EOL and Ubuntu PPA refuses to build packages.
### 2014-07-27 v1.116
* Fixed - there can be an issue on MacOSX where HTTPS is not working: all requests end up with the error "HTTP connection
problem (timeout?)". Found that Apple ships LWP::Protocol::https without the Mozilla::CA module (and they have no rights to
do so). The README install instructions are now updated, and a runtime error is thrown if Mozilla::CA is missing and you're trying
to use HTTPS. More technical info: http://blogs.perl.org/users/vsespb/2014/07/broken-lwp-in-the-wild.html
https://github.com/vsespb/mt-aws-glacier/issues/87
* Fixed - typo in error message.
### 2014-05-26 v1.115
* Fixed - crash/error when uploading large files with partsize=1024, when an "old" Digest::SHA (< 5.63; shipped with most
current Linux distros) is installed. Old Digest::SHA has a bug; there was a workaround for it (i.e. a message asking
to upgrade module) when it's used with large files on 32bit machines, but apparently seems 64bit machines
CPAN modules installed) worked around.
* Fixed: upload-file with --filename and --dir was not working correctly on most perl installations
if the dir started with "..". This is due to bug https://rt.perl.org/Public/Bug/Display.html?id=111510 in the File::Spec module.
Currently the upload-file behaviour is changed (see above in ChangeLog) so mtglacier is not affected. In previous versions this
would result in wrong relative filenames in the journal and Amazon Glacier metadata
(precisely, those filenames are missing the path prefix, as if they were in the current directory; otherwise the filename
part is correct).
* Workaround: the Digest::SHA perl module prior to version 5.62 calculates SHA256 incorrectly on 32bit machines when the data
size is more than 2^29 bytes. Now mtglacier throws an error if --partsize >= 512Mb, the machine is 32bit, and the Digest::SHA
version is below 5.62. Commands which don't use --partsize are unaffected.
* Fixed: Amazon CSV format parsing: Amazon escapes doublequotes with a backslash but does not escape the backslash itself.
https://forums.aws.amazon.com/thread.jspa?threadID=141807
This format is undocumented and broken by design. The parser is now fixed to parse it.
This bug did not affect any real use of mtglacier, as mtglacier does not use backslashes in metadata and ignores
foreign metadata.
* Cosmetic changes to process manager code
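The 32bit Digest::SHA workaround above (v1.115) can be sketched roughly as follows. This is a hypothetical illustration, not mtglacier's actual code; `partsize_allowed` is an invented name, and the pointer size would come from `$Config{ptrsize}` in practice:

```perl
use strict;
use warnings;

# Refuse part sizes >= 512 MiB on a 32bit perl with Digest::SHA older
# than 5.62 (which miscomputes SHA256 for data larger than 2^29 bytes).
sub partsize_allowed {
    my ($partsize_mb, $sha_version, $ptrsize) = @_;
    my $is_32bit  = $ptrsize < 8;        # $Config{ptrsize} in real code
    my $buggy_sha = $sha_version < 5.62;
    return !($partsize_mb >= 512 && $is_32bit && $buggy_sha);
}

print partsize_allowed(1024, 5.61, 4) ? "ok\n" : "refuse\n";  # refuse
print partsize_allowed(1024, 5.63, 4) ? "ok\n" : "refuse\n";  # ok
```

Commands which don't take --partsize never hit this check, which matches the changelog entry above.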
### 2013-06-01 v0.961 beta
* Enhancement: segment-size option added for restore-completed command for multi-segment downloads
* Documentation: restore-completed documentation updated
* Enhancement: Now all downloads are performed to temp files. A temp file is renamed to the real file only when the download succeeds.
* Fixed: if the server closes the connection after sending headers (when downloading files), this was not detected and no
error was thrown (it's not reported by the underlying HTTP library in the non-chunked-transfer case)
* Fixed: some other errors from the underlying HTTP library are now detected
* Fixed: if there were more than 100 retries for a request (due to timeout/etc), the program now terminates with an error,
instead of reporting success.
* Fixed: on systems with non-UTF-8 filesystems, the file modification time sometimes was not adjusted after files were
downloaded with the restore-completed command
* Fixed: SIGHUP added to the list of signals handled gracefully (i.e. terminate fast when you close the terminal)
mean journal is truncated/damaged.
* Possibility to work with Journal files which use CRLF as the line separator
* Error message (instead of an unexpected error) added for the case when filenames are too big (the limit
is 700 ASCII or 350 2-byte UTF-8 characters). The Limitations section in README was updated for this case.
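The CRLF tolerance mentioned above amounts to accepting either line ending when journal lines are read. A minimal sketch (`chomp_any` is an invented helper name, not part of mtglacier):

```perl
use strict;
use warnings;

# Strip a trailing CRLF or a bare LF from a journal line.
sub chomp_any {
    my ($line) = @_;
    $line =~ s/\r?\n\z//;
    return $line;
}

print chomp_any("record\r\n") eq 'record' ? "stripped\n" : "kept\n";  # stripped
print chomp_any("record\n")   eq 'record' ? "stripped\n" : "kept\n";  # stripped
```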
### 2013-05-19 v0.952 beta
* Some error handling rework. Errors for file opens (permission problems), invalid characters in filenames, and some misc
errors are now thrown with a sane error message and helpful information
* Some more internal changes related to internal IPC protocol
* Documentation: 'Limitations' section added.
### 2013-05-09 v0.951 beta
* Bug fixed: in the previous version (v0.944) a bug was introduced when downloading inventory (download_inventory_command).
If the inventory size (JSON encoded) was more than 999999 bytes (~1Mb), the download failed with the message:
"unexpected end of string while parsing JSON string, at character offset NNNNN (before "(end of string)")"
* Fix crash in 'help' command, bug was introduced a day ago.
* New version numbering for beta versions - 0.9XY, where 'X' is the month number of year 2013 and 'Y' is the number of
the release in that month. So 0.933 beta is the 3rd release in March 2013
### 2013-03-12: v0.89 beta
* Single-byte character encoding support for *BSD systems added (see "Configuring Character Encoding" in README).
* When uploading from STDIN and the file is empty, don't create an Amazon Glacier upload id before throwing the error
### 2013-03-04: v0.88 beta
* upload-file command implemented (upload from STDIN or single upload from file). See README.
* Internal: New Config Engine (config/command line options processing) - will help implement
advanced functionality in the future
* Fix possible crash when the read(2) system call returns a partial result,
see http://www.perlmonks.org/?node_id=435814
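The partial-read fix referenced above boils down to retrying the read until the requested byte count arrives, EOF is hit, or a real error occurs. A self-contained sketch (not the actual mtglacier code; `read_exactly` is an invented name):

```perl
use strict;
use warnings;

# read(2)/sysread may legitimately return fewer bytes than requested,
# so loop until we have $len bytes, hit EOF, or see a real I/O error.
sub read_exactly {
    my ($fh, $len) = @_;
    my $buf = '';
    while (length($buf) < $len) {
        my $n = sysread($fh, $buf, $len - length($buf), length($buf));
        die "read error: $!" unless defined $n;  # real I/O error
        last if $n == 0;                         # EOF
    }
    return $buf;
}

open my $fh, '<', \"hello world" or die $!;
print read_exactly($fh, 5), "\n";  # hello
```

Treating a single short `sysread` as an error (or as EOF) is exactly the kind of bug the changelog entry describes.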
download for this file is performed in multiple segments, i.e. using the HTTP `Range:` header (each segment of size `segment-size` MiB, except the last,
which can be smaller). Segments are downloaded in parallel (and different segments from different files can
be downloaded at the same time).
Only values that are powers of two are supported for `segment-size` now.
Currently, if a download breaks due to a network problem, no resumption is performed; the download of the file or of the current segment
starts from the beginning.
In case of multi-segment downloads, the TreeHash reported by Amazon Glacier for each segment is compared with the actual TreeHash calculated for that segment at runtime.
In case of a mismatch an error is thrown and the process is stopped. The final TreeHash for the whole file is not checked yet.
In case of full-file downloads, the TreeHash reported by Amazon Glacier for the whole file is compared with one calculated at runtime and with one found in the Journal file;
in case of a mismatch, an error is thrown and the process is stopped.
Unlike the `partsize` option, `segment-size` does not allocate in-memory buffers of the size specified, so you can use a large `segment-size`.
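The segmentation described above can be illustrated by computing the `Range:` header values for a file. A sketch under the assumption that `segment-size` is given in MiB (`segment_ranges` is an invented name, not mtglacier's API):

```perl
use strict;
use warnings;

# Split a file of $size bytes into HTTP Range values of $segment_mb MiB
# each; the last segment may be smaller.
sub segment_ranges {
    my ($size, $segment_mb) = @_;
    my $seg = $segment_mb * 1024 * 1024;
    my @ranges;
    for (my $off = 0; $off < $size; $off += $seg) {
        my $end = $off + $seg - 1;
        $end = $size - 1 if $end > $size - 1;
        push @ranges, "bytes=$off-$end";   # value for the Range: header
    }
    return @ranges;
}

my @r = segment_ranges(5 * 1024 * 1024, 2);  # 5 MiB file, 2 MiB segments
print scalar(@r), "\n";  # 3
print $r[-1], "\n";      # bytes=4194304-5242879
```

Each such range can then be fetched by an independent worker, which is what makes parallel segment downloads possible.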
### `upload-file`
Uploads a single file into Amazon Glacier. The file will be tracked in the Journal (just like when using the `sync` command).
There are several possible combinations of options for `upload-file`:
1. **--filename** and **--dir**
(NOTE: `set-rel-filename` should be a _relative_ filename i.e. must not start with `/`)
3. **--stdin**, **--set-rel-filename** and **--check-max-file-size**
_Uploads what_: a file, read from STDIN
_Filename in Journal and Amazon Glacier metadata_: as specified in `set-rel-filename`
Also, as the file size is not known until the very end of the upload, you need to be sure that the file will not exceed the 10 000 parts limit, so you must
specify `check-max-file-size` -- the maximum possible size of file (in megabytes) that you can expect. All this option does is throw an error
if `check-max-file-size`/`partsize` > 10 000 parts (in that case it's recommended to adjust `partsize`). That's all. Remember that you can put this (and
any other option) in the config file:
./mtglacier upload-file --config glacier.cfg --vault myvault --journal journal.log --stdin --set-rel-filename path/to/file --check-max-file-size 131
(this will upload the content of the file read from STDIN to Amazon Glacier and use `path/to/file` as the filename for the Journal.)
(NOTE: `set-rel-filename` should be a _relative_ filename i.e. must not start with `/`)
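The `check-max-file-size` arithmetic described above can be sketched as follows (hypothetical code, not mtglacier's implementation; `check_part_limit` is an invented name):

```perl
use strict;
use warnings;
use POSIX qw/ceil/;

# With a streamed (STDIN) upload the size is unknown up front, so all we
# can verify is that the declared worst case still fits in 10 000 parts.
sub check_part_limit {
    my ($check_max_file_size_mb, $partsize_mb) = @_;
    my $parts = ceil($check_max_file_size_mb / $partsize_mb);
    die "increase partsize: $parts parts needed, limit is 10000\n"
        if $parts > 10_000;
    return $parts;
}

print check_part_limit(131, 1), "\n";  # 131 -- fits easily
print eval { check_part_limit(20_000, 1); 1 } ? "ok\n" : "too many parts\n";
```

So with `--check-max-file-size 131` and the default 1 MiB `partsize`, at most 131 parts are ever needed, well under the limit.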
* Only filenames which consist of octets that can be mapped to a valid character sequence in the desired encoding are supported (i.e. filenames
made of random bytes/garbage are not supported; usually this is not a problem).
* Filenames containing CR (carriage return, code 0x0D), LF (line feed, code 0x0A) or TAB (0x09) are not supported (usually not a problem either).
* Length of relative filenames: currently the limit is about 700 ASCII characters or 350 2-byte UTF-8 characters (or 230 3-byte characters).
* File modification time should be in the range from year 1000 to year 9999.
(NOTE: if the above requirements are not met, an error will be thrown)
* If you uploaded files with file modification dates past Y2038 on a system which supports them, and then restored them on a system
which does not (like 32bit Linux), the resulting file timestamp will (of course) be wrong and also
unpredictable (undefined behaviour). The only thing guaranteed is that if you restore the journal from Amazon servers on an affected (i.e. 32bit)
machine, the journal will contain the correct timestamp (same as on 64bit).
* Memory usage (for 'sync') formula is ~ min(NUMBER_OF_FILES_TO_SYNC, max-number-of-files) + partsize*concurrency
* With a high partsize*concurrency there is a risk of getting network timeouts or HTTP 408/500 errors.
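As a rough illustration of the formula above (the ~300 bytes of journal metadata per file is an assumption, order of magnitude only, and `estimate_mb` is an invented name):

```perl
use strict;
use warnings;

# Estimate 'sync' memory in MiB: journal metadata for the tracked files
# plus one partsize-sized upload buffer per concurrent worker.
sub estimate_mb {
    my (%a) = @_;
    my $files = $a{files_to_sync} < $a{max_number_of_files}
        ? $a{files_to_sync} : $a{max_number_of_files};
    my $per_file_bytes = 300;   # assumption: rough per-file overhead
    return $files * $per_file_bytes / 1024 / 1024
         + $a{partsize_mb} * $a{concurrency};
}

printf "%.0f MiB\n", estimate_mb(
    files_to_sync => 100_000, max_number_of_files => 50_000,
    partsize_mb => 16, concurrency => 4,
);  # ~78 MiB: ~14 MiB of metadata plus 64 MiB of upload buffers
```

This also shows why raising partsize*concurrency is the dominant memory cost, per the warning above.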
lib/App/MtAws/Glacier/ListJobs.pm
}
sub _parse
{
	my ($self) = @_;
	return if $self->{data};
	$self->{data} = JSON::XS->new->allow_nonref->decode(${ delete $self->{rawdata} || confess });

	# get rid of JSON::XS boolean objects, just in case.
	# also, JSON::XS between versions 1.0 and 2.1 (inclusive) does not allow modifying this field
	# (a "Modification of a read-only value" error is thrown)
	$_->{Completed} = !!(delete $_->{Completed}) for @{$self->{data}{JobList}};
}

sub _completed
{
	$_->{Completed} && $_->{StatusCode} eq 'Succeeded'
}
sub _full_inventory
lib/App/MtAws/HttpSegmentWriter.pm
use strict;
use warnings;
use utf8;
use App::MtAws::Utils;
use Fcntl qw/SEEK_SET LOCK_EX/;
use Carp;
use base qw/App::MtAws::HttpWriter/;
# when a "file not found"/etc error happens, it can mean the temp file was deleted by another process, so we
# don't need to throw an error; most likely a signal will arrive in a few milliseconds
sub delayed_confess(@)
{
	sleep 2;
	confess @_;
}
sub new
{
my ($class, %args) = @_;
lib/App/MtAws/Journal.pm
File::Find::find({ wanted => sub {
	if ($self->_listing_exceeed_max_number_of_files($max_number_of_files)) {
		$File::Find::prune = 1;
		return;
	}
	if (++$i % 1000 == 0) {
		print "Found $i local files\n";
	}
	# note that this exception is probably thrown even if a directory below transfer root contains invalid chars
	die exception(invalid_chars_filename => "Not allowed characters in filename: %filename%", filename => hex_dump_string($_))
		if /[\r\n\t]/;
	if (-d) {
		my $dir = character_filename($_);
		$dir =~ s!/$!!; # make sure there is no trailing slash. just in case.
		my $reldir = abs2rel($dir, $self->{root_dir}, allow_rel_base => 1);
		if ($self->{filter} && $reldir ne '.') {
			my ($match, $matchsubdirs) = $self->{filter}->check_dir($reldir."/");
			if (!$match && $matchsubdirs) {
t/integration/journal_parselines.t
print F for (@_);
close F;
}
sub assert_last_line_exception
{
	my ($line) = @_;
	my $err = $@;
	cmp_deeply $err, superhashof(exception 'journal_format_error' => "Invalid format of journal, line %lineno% not fully written", lineno => $line),
		"should throw exception if last line broken";
}
unlink $journal;
{
create_journal "A\t$data->{time}\tCREATED\t$data->{archive_id}\t$data->{size}\t$data->{mtime}\t$data->{treehash}\t$data->{relfilename}\n";
my $J = App::MtAws::Journal->new(output_version => 'A', journal_file=> $journal, root_dir => $rootdir);
$J->read_journal(should_exist => 1);
ok $J->{journal_h}->{$data->{relfilename}}, "should work";
}
t/unit/config_engine_new.t
};
};
it "should decode UTF-8" => sub {
localize sub {
positional 'o1';
Context->{positional_tail} = [encode("UTF-8", 'тест')];
App::MtAws::ConfigEngine::seen('o1');
cmp_deeply Context->{options}->{o1}, { name => 'o1', seen => 1, positional => 1, source => 'positional', value => 'тест'};
};
};
it "should throw error if broken UTF found" => sub {
localize sub {
positional 'o1';
message 'options_encoding_error', 'bad coding';
Context->{positional_tail} = ["\xA0"];
App::MtAws::ConfigEngine::seen('o1');
cmp_deeply Context->{errors}, [{format => 'options_encoding_error', encoding => 'UTF-8'}];
cmp_deeply Context->{options}->{o1}, { name => 'o1', seen => 1, positional => 1 };
};
};
};
t/unit/config_engine_parse.t
#
# This time we test both current config and current code together
#
my $max_concurrency = 30;
my $too_big_concurrency = $max_concurrency+1;
sub assert_config_throw_error($$$)
{
my ($config, $errorre, $text) = @_;
fake_config %$config => sub {
disable_validations 'journal' => sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ', $_ ));
ok( $errors && $errors->[0] =~ $errorre, $text);
}
}
}
t/unit/config_engine_parse.t
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ',
'restore --from-dir x --config y -journal z -to-va va -conc 9 --max-n 1'
));
ok( $errors && $errors->[0] =~ /Journal file not found/i, "should catch non existing journal" );
}
}
}
for ('restore --from-dir x --config y -journal z -to-va va -conc 9 --max-n 1') {
assert_config_throw_error { key=>'!'x20, secret => 's'x40, region => 'myregion', vault => 'newvault' }, qr/Invalid format of "key"/, "should catch bad key" ;
assert_config_throw_error { key=>'a'x21, secret => 's'x40, region => 'myregion', vault => 'newvault' }, qr/Invalid format of "key"/, "should catch bad key" ;
assert_config_throw_error { key=>'a'x20, secret => 's'x41, region => 'myregion', vault => 'newvault' }, qr/Invalid format of "secret"/, "should catch bad key" ;
assert_config_throw_error { key=>'a'x20, secret => ' 'x40, region => 'myregion', vault => 'newvault' }, qr/Invalid format of "secret"/, "should catch bad key" ;
assert_config_throw_error { key=>'a'x20, secret => 'a'x40, region => 'my_region', vault => 'newvault' }, qr/Invalid format of "region"/, "should catch bad key" ;
assert_config_throw_error { key=>'a'x20, secret => 'a'x40, region => 'x'x80, vault => 'newvault' }, qr/Invalid format of "region"/, "should catch bad key" ;
}
{
fake_config key=>'mykey', secret => 'mysecret', region => 'myregion', vault => 'newvault', sub {
my $file = "$mtroot/journal_t_1";
unlink $file || confess if -e $file;
t/unit/config_engine_parse.t
ok( $result->{'vault-name'} eq 'myvault', "should parse positional arguments after options");
}
}
{
fake_config key=>'mykey', secret => 'mysecret', region => 'myregion', vault => 'newvault', sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ',
'create-vault --config=glacier.cfg'
));
ok( $errors && $errors->[0] eq 'Positional argument #1 (vault-name) is mandatory', "should throw error if positional argument is missing" );
}
}
{
fake_config key =>'mykey', secret => 'mysecret', region => 'myregion', vault => 'newvault', sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ',
'create-vault --config=glacier.cfg arg1 arg2'
));
ok( $errors && $errors->[0] eq 'Extra argument in command line: arg2', "should throw error if there is an extra positional argument" );
}
}
{
fake_config key=>'mykey', secret => 'mysecret', region => 'myregion', vault => 'newvault', sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ',
'create-vault --config=glacier.cfg arg1 arg2'
));
ok( $errors && $errors->[0] eq 'Extra argument in command line: arg2', "should throw error if there is an extra positional argument" );
}
}
{
fake_config key=>'mykey', secret => 'mysecret', region => 'myregion', vault => 'newvault', sub {
my ($errors, $warnings, $command, $result) = config_create_and_parse(split(' ',
'create-vault --config=glacier.cfg my#vault'
));
ok( $errors && $errors->[0] eq 'Vault name should be 255 characters or less and consisting of a-z, A-Z, 0-9, ".", "-", and "_"', "should validate positional arguments" );
}
t/unit/glacier_request.t
ok ! eval { App::MtAws::GlacierRequest->new({key=>'key', region=>'region', secret=>'secret', protocol => 'xyz', timeout => 180}) };
};
it "should not die with https" => sub {
ok eval { App::MtAws::GlacierRequest->new({key=>'key', region=>'region', secret=>'secret', protocol => 'https', timeout => 180}) };
};
};
describe "create_multipart_upload" => sub {
it "should throw exception if filename too long" => sub {
my $g = App::MtAws::GlacierRequest->new({%common_options});
my $filename = 'x' x 2000;
my $t = time();
ok ! defined eval { $g->create_multipart_upload(2, $filename, $t); 1 };
ok is_exception('file_name_too_big');
is get_exception->{filename}, $filename;
is exception_message(get_exception),
"Either relative filename \"$filename\" is too big to store in Amazon Glacier metadata. ".
"(Limit is about 700 ASCII characters or 350 2-byte UTF-8 characters)".
" or file modification time \"$t\" out of range".
t/unit/intermediate_file.t
}, "permanent file not discarded";
}
SKIP: {
skip "Cannot run under root", 5 if is_posix_root;
my $dir = "$rootdir/denied1";
ok mkpath($dir), "path is created";
ok -d $dir, "path is created";
chmod 0444, $dir;
ok ! defined eval { App::MtAws::IntermediateFile->new(target_file => "$dir/somefile"); 1 }, "File::Temp should throw exception";
is get_exception->{code}, 'cannot_create_tempfile', "File::Temp correct code for exception";
is get_exception->{dir}, $dir, "File::Temp correct dir for exception";
}
SKIP: {
skip "Cannot run under root", 5 if is_posix_root;
my $dir = "$rootdir/denied2";
ok mkpath($dir), "path is created";
ok -d $dir, "path is created";
chmod 0444, $dir;
ok ! defined eval { App::MtAws::IntermediateFile->new(target_file => "$dir/b/c/somefile"); 1 }, "mkpath() should throw exception";
is get_exception->{code}, 'cannot_create_directory', "mkpath correct code for exception";
is get_exception->{dir}, "$dir/b/c", "mkpath correct dir for exception";
}
SKIP: {
skip "Cannot run under root", 7 if is_posix_root;
my $dir = "$rootdir/testpermanent";
ok ! -e $dir, "not yet exists";
ok mkpath($dir), "path is created";
ok -d $dir, "path is created";
my $dest = "$dir/dest";
mkdir "$dir/dest";
my $I = App::MtAws::IntermediateFile->new(target_file => $dest);
my $tmpfile = $I->tempfilename;
ok ! defined eval { $I->make_permanent; 1 }, "should throw exception if cant rename files";
is get_exception->{code}, 'cannot_rename_file', "correct exception code";
is get_exception->{from}, $tmpfile, "correct exception 'from'";
is get_exception->{to}, $dest, "correct exception 'to'";
}
{
is get_filename_encoding, 'UTF-8', "assume utf8 encoding is set";
my $dir = "$rootdir/тест2";
my $I = App::MtAws::IntermediateFile->new(target_file => "$dir/somefile");
like $I->tempfilename, qr/\Q$dir\E/, "filename should contain directory name, thus be in UTF8";