Changelog.ini
- Switch from DBI's table() method to table_info(). This is for Oracle.
- Discard table names which match /^BIN\$.+\$./. This is for Oracle.
- Change the handling of quotes around schema and table names. Previously,
the first and last quote were removed, so a name like `public.t` became
public.t. But with Oracle, quotes are used around the schema and table
names separately, so "X"."T" became X"."T :-(.
Now, all quotes are removed, so `public.t` still becomes public.t, but also
"X"."T" becomes X.T.
EOT
[V 1.08]
Date=2005-07-19T10:45:00
Comments= <<EOT
- The previous patches did not include sub backup() calling sub process_table()
to split the schema name off from the table name, so table names with schema
names still attached were not being skipped. Big mistake. My apologies.
- Add XML::Parser, XML::Records and XML::TokeParser to Build.PL and Makefile.PL.
They should have been included since V 1.06.
EOT
[V 1.07]
Date=2005-06-28T13:41:11
Comments= <<EOT
- The default behaviour of this version is the same as for the previous version,
so there is no need to upgrade unless you need the new features.
- Fix bug whereby sub backup() could output <row></row> because all columns
in a row were null, and sub restore() couldn't cope.
- Document that all spaces are converted to underscores in table and column names.
- New options:
o The value of the database handle attribute FetchHashKeyName governs
how table names are handled. Values are:
'NAME': The default - use the value returned by the database server
'NAME_uc': Convert table names to upper case
'NAME_lc': Convert to lower case. This is the recommended value
This possible conversion of the case of table names affects how you
specify table names in the constructor options:
- rename_columns
- rename_tables
- skip_tables
The following new options can be passed to the constructor of this module.
o croak_on_error => 0 or 1. 1 is the default, for backwards compatibility.
During backup(), the $sth -> execute() is now wrapped in eval{}, and if
an error occurs, and croak_on_error is 1, we Carp::croak.
If croak_on_error is 0, we continue. Not only that, but if verbose is 1,
the error is printed to STDERR.
o odbc => 0 or 1. 0 is the default.
During backup, if odbc is 1 we use the simplified call $dbh -> tables()
to get the list of table names. This list includes what MS Access calls
Queries, which are possibly equivalent to views. MS Access does not
support the syntax used in the non-ODBC situation:
$dbh -> tables('%', '%', '%', 'table').
o rename_columns => {}. You can specify a hash ref in the form:
rename_columns => {'old name' => 'new name', ...}.
For example, 'order' is a reserved word under MySQL, so you would use:
rename_columns => {order => 'orders'} (or whatever you want).
The option affects all tables.
The database handle attribute FetchHashKeyName affects this option.
o rename_tables => {}. You can specify a hash ref in the form:
rename_tables => {'old name' => 'new name', ...}.
The database handle attribute FetchHashKeyName affects this option.
Renaming takes place after the effect of FetchHashKeyName.
- Add method get_column_names(). This returns a hash ref, where the keys are
table names, possibly transformed according to the database handle attribute
FetchHashKeyName, and the values are array refs of column names, also converted
according to that same FetchHashKeyName. Further, these
column names are sorted, and all spaces in column names are converted to
underscores.
This hashref is acceptable to the module DBIx::Admin::CreateTrivialSchema :-).
- The demo examples/backup-db.pl contains a list of MS Access tables which you
almost certainly want to supply to the skip_tables option if exporting
from MS Access.
EOT
[V 1.06]
Date=2005-05-20T15:45:00
Comments= <<EOT
- Correct docs discussing the value 2 for the fiddle_timestamp option, which said
timestamp and should have said datetime.
- Add an FAQ to the docs
- Add method restore_in_order(), which lets you specify the order in which tables
are restored. This allows you to define a column with a clause such as
'references foreign_table (foreign_column)', and to populate the foreign_table
before the dependent table.
But mutually-dependent and self-referential tables are still not catered for.
- Add method split(), which reads an XML file output by backup() and writes
each table you are not skipping to a separate file. The file names are the tables'
names, including schema if any, and with an extension of 'xml'. The output files
have headers and footers so they are identical in structure to the file output
by backup(). Hence they can be fed back in to restore() and restore_in_order().
This method helps circumvent the drawback of restore_in_order(), which reads its
input file once per table.
Since this is a file-to-file operation, the dbh parameter to new() is no longer
mandatory.
See examples/split-xml.pl and all-tables.xml for a demo.
- Change methods backup(), restore() and the new restore_in_order() and split(),
to use lower case XML tags 'dbi', 'resultset', and 'row', as they should have
been in the first place.
- Methods restore() and split() will read a file containing upper or lower case
tags.
- Warning: restore_in_order() only handles lower case tags, due to the way
XML::Records works.
- This module now requires these modules, installed in this order:
o XML::Parser
o XML::TokeParser
o XML::Records
Changelog.ini
You would use this option when transferring data from MySQL's 'datetime'
type to Postgres' 'datetime' type, and some MySQL output values match
/0000-00-00 00:00:00/ and some values are real dates, such as
2005-04-15 09:34:00.
- Expand the docs referring to fiddle_timestamp
EOT
[V 1.04]
Date=2005-03-02T13:20:00
Comments= <<EOT
- A new option has been added to the constructor: skip_schema.
Here, 'schema' is defined to be the prefix on a table name,
and to be separated from the table name by a '.'.
Eg: The backup phase, with Postgres as the input database, will output tables
with names like 'information_schema.sql_features' and 'pg_catalog.pg_aggregate'.
If new is called as new(skip_schema => ['some_schema_name']), the restore phase
does not restore any tables in the named schema.
This parameter is optional. It defaults to [], so the module behaves as it did
before.
- A new option has been added to the constructor: transform_tablenames.
Now, new(transform_tablenames => 1) chops the schema, up to and including the
first '.', off table names. Thus a table exported from Postgres as
'public.service' can be renamed 'service' when being imported into another
database, eg MySQL.
EOT
MANIFEST
Changelog.ini
Changes
examples/all-tables.xml
examples/backup-db.pl
examples/restore-db.pl
examples/split-xml.pl
lib/DBIx/Admin/BackupRestore.pm
LICENSE
Makefile.PL
MANIFEST This list of files
MANIFEST.SKIP
README
t/00.versions.t
t/00.versions.tx
MANIFEST.SKIP
\bMakeMaker-\d
# Avoid Module::Build generated and utility files.
\b_build
\bBuild$
\bBuild.bat$
# Avoid Devel::Cover generated files
\bcover_db
# Avoid temp and backup files.
~$
\#$
\.#
\.bak$
\.old$
\.rej$
\.tmp$
# Avoid OS-specific files/dirs
# Mac OSX metadata
examples/backup-db.pl
#!/usr/bin/perl
#
# Name:
# backup-db.pl.
use strict;
use warnings;
use DBI;
use DBIx::Admin::BackupRestore;
# -----------------------------------------------
my($file_name) = shift || die("Usage: perl backup-db.pl backup-db.xml");
my($driver) = ($^O eq 'MSWin32') ? 'mysql' : 'Pg';
my($dsn) = ($driver eq 'mysql') ? 'dbi:mysql:db_name' : 'dbi:Pg:dbname=db_name';
my($username) = ($driver eq 'mysql') ? 'root' : 'postgres';
my($password) = ($driver eq 'mysql') ? 'pass' : '';
my($dbh) = DBI -> connect
(
$dsn, $username, $password,
{
AutoCommit => 1,
FetchHashKeyName => 'NAME_lc',
examples/backup-db.pl
open(OUT, "> $file_name") || die("Can't open(> $file_name): $!");
print OUT DBIx::Admin::BackupRestore -> new
(
clean => 1,
dbh => $dbh,
skip_tables =>
[ # For exporting from MS Access only.
qw/msysaces msysaccessobjects msyscolumns msysimexcolumns msysimexspecs msysindexes msysmacros msysmodules2 msysmodules msysobjects msysqueries msysrelationships/
],
verbose => 1,
) -> backup('db_name');
close OUT;
lib/DBIx/Admin/BackupRestore.pm
sub adjust_case
{
my($self, $s) = @_;
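	# Fold the case of $s according to the dbh attribute FetchHashKeyName:
	# upper case for 'NAME_uc', lower case for 'NAME_lc', unchanged otherwise.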
$$self{'_dbh'}{'FetchHashKeyName'} eq 'NAME_uc' ? uc $s : $$self{'_dbh'}{'FetchHashKeyName'} eq 'NAME_lc' ? lc $s : $s;
} # End of adjust_case.
# -----------------------------------------------
sub backup
{
my($self, $database) = @_;
Carp::croak('Missing parameter to new(): dbh') if (! $$self{'_dbh'});
$$self{'_quote'} = $$self{'_dbh'} ? $$self{'_dbh'} -> get_info(29) : ''; # SQL_IDENTIFIER_QUOTE_CHAR.
$$self{'_tables'} = $$self{'_odbc'} ? $self -> odbc_tables() : $self -> tables();
$$self{'_xml'} = qq|<?xml version = "1.0"?>\n|;
$$self{'_xml'} .= qq|<dbi database = "|. $self -> encode_xml($database) . qq|">\n|;
lib/DBIx/Admin/BackupRestore.pm
my($data, $display_sql, $display_table);
my($field);
my($i);
my($output_column_name);
my($sql, $sth);
my($table_name);
my($xml);
for $table_name (@{$$self{'_tables'} })
{
$self -> process_table('backup', $table_name);
next if ($$self{'_skipping'});
$display_table = $self -> adjust_case($$self{'_current_table'});
$sql = "select * from $$self{'_current_table'}";
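	# Apply any rename_tables mapping, so the XML's statement attribute uses the new name.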
$display_table = $$self{'_rename_tables'}{$display_table} ? $$self{'_rename_tables'}{$display_table} : $display_table;
$display_sql = "select * from $display_table";
$display_sql = $self -> adjust_case($display_sql);
$display_sql = $self -> encode_xml($display_sql);
$$self{'_xml'} .= qq|\t<resultset statement = "$display_sql">\n|;
lib/DBIx/Admin/BackupRestore.pm
$$self{'_xml'} .= "\t\t<row>\n$xml\t\t</row>\n" if ($xml);
}
Carp::croak("Can't fetchrow_hashref($sql): $DBI::errstr") if ($DBI::errstr);
$$self{'_xml'} .= "\t</resultset>\n";
}
$$self{'_xml'} .= "</dbi>\n";
} # End of backup.
# -----------------------------------------------
sub decode_xml
{
my($self, $s) = @_;
for my $key (keys %_decode_xml)
{
$s =~ s/$key/$_decode_xml{$key}/eg;
lib/DBIx/Admin/BackupRestore.pm
{
next if ($line =~ m!^(<\?xml|<dbi|</dbi)!i);
if ($line =~ m!<resultset .+? from (.+)">!i)
{
$self -> process_table('Restore', $1);
}
elsif ( (! $$self{'_skipping'}) && ($line =~ m!<row>!i) )
{
# There may be a different number of fields from one row to the next.
# Remember, only non-null fields are output by method backup().
$$self{'_key'} = [];
$$self{'_value'} = [];
while ( ($line = <INX>) !~ m!</row>!i)
{
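		# Capture one <column_name>value</column_name> line: the name in $1, the value in $2.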
if ($line =~ m!^\s*<(.+?)>(.*?)</\1>!i)
{
push @{$$self{'_key'} }, $1;
lib/DBIx/Admin/BackupRestore.pm
# Warning. If the XML file contains 1 'record', XML::Records
# returns text or a hash ref, not an array ref containing one element.
# Due to the nature of our data, we can ignore the case of textual data.
$$record{'row'} = [$$record{'row'}] if (ref $$record{'row'} ne 'ARRAY');
for $row (@{$$record{'row'} })
{
# There may be a different number of fields from one row to the next.
# Remember, only non-null fields are output by method backup().
@{$$self{'_key'} } = keys %$row;
$$self{'_value'} = [];
$self -> transform($_, $$row{$_}) for @{$$self{'_key'} };
$self -> write_row();
}
# Exit if table restored.
lib/DBIx/Admin/BackupRestore.pm
open(OUT, "> $output_file_name") || Carp::croak("Can't open($output_file_name): $!");
print OUT qq|<?xml version = "1.0"?>\n|;
print OUT qq|<dbi database = "$$self{'_database'}">\n|;
print OUT qq|\t<resultset statement = "select * from $table_name">\n|;
}
}
elsif ( (! $$self{'_skipping'}) && ($line =~ m!<row>!i) )
{
# There may be a different number of fields from one row to the next.
# Remember, only non-null fields are output by method backup().
print OUT qq|\t\t<row>\n|;
while ( ($line = <INX>) !~ m!</row>!i)
{
print OUT $line;
}
print OUT qq|\t\t</row>\n|;
}
lib/DBIx/Admin/BackupRestore.pm
C<DBIx::Admin::BackupRestore> - Backup all tables in a database to XML, and restore them
=head1 Synopsis
use DBIx::Admin::BackupRestore;
# Backup.
open(OUT, "> $file_name") || die("Can't open(> $file_name): $!");
print OUT DBIx::Admin::BackupRestore -> new(dbh => $dbh) -> backup('db_name');
close OUT;
# Restore.
DBIx::Admin::BackupRestore -> new(dbh => $dbh) -> restore($file_name);
=head1 Description
C<DBIx::Admin::BackupRestore> is a pure Perl module.
lib/DBIx/Admin/BackupRestore.pm
This method takes a set of parameters. Only the dbh parameter is mandatory.
For each parameter you wish to use, call new as new(param_1 => value_1, ...).
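For example, a call combining several of the options documented below might
look like this (a sketch; the option values are purely illustrative):
	my($backup) = DBIx::Admin::BackupRestore -> new
	(
		clean       => 1,
		dbh         => $dbh,
		skip_tables => ['sessions'],
		verbose     => 1,
	);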
=over 4
=item clean
The default value is 0.
If new is called as new(clean => 1), the backup phase deletes any characters outside
the range 20 .. 7E (hex).
The restore phase ignores this parameter.
This parameter is optional.
=item croak_on_error
This parameter takes one of these values: 0 or 1.
The default value is 1, for backwards compatibility.
During backup(), the call to $sth -> execute() is wrapped in eval{}, and if
an error occurs and croak_on_error is 1, we call Carp::croak.
If croak_on_error is 0, we continue. Further, if verbose is 1,
the error is printed to STDERR.
This parameter is optional.
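For example, a minimal sketch of a backup run which logs errors rather than
dying (the database name is illustrative):
	my($xml) = DBIx::Admin::BackupRestore -> new
	(
		croak_on_error => 0,
		dbh            => $dbh,
		verbose        => 1, # Failed executes are then reported to STDERR.
	) -> backup('db_name');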
=item dbh
This is a database handle.
This parameter is mandatory when calling methods C<backup()> and C<restore*()>,
but is not required when calling method C<split()>, since the latter is just a
file-to-file operation.
=item dbi_catalog, dbi_schema, dbi_table, dbi_type
These 4 parameters are passed to DBI's C<table_info()> method, to get a list of table
names.
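For example, to restrict the table list to a single Postgres schema, you
might pass (a sketch; the value is illustrative, and is handed to DBI's
table_info() unchanged):
	dbi_schema => 'public'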
The default values suit MySQL:
lib/DBIx/Admin/BackupRestore.pm
under the strict option for MySQL V 5, whereas 1970-01-01 00:00:00 is invalid.
This parameter is optional.
=item odbc
This parameter takes one of these values: 0 or 1.
The default value is 0.
During backup, if odbc is 1 we use the simplified call $dbh -> tables()
to get the list of table names. This list includes what MS Access calls
Queries, which are possibly equivalent to views. MS Access does not
support the syntax used in the non-ODBC situation:
$dbh -> tables('%', '%', '%', 'table').
This parameter is optional.
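A sketch of backing up an MS Access database via ODBC (the DSN name
'access_db' is hypothetical):
	my($dbh) = DBI -> connect('dbi:ODBC:access_db', '', '');
	my($xml) = DBIx::Admin::BackupRestore -> new(dbh => $dbh, odbc => 1) -> backup('access_db');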
=item rename_columns
This parameter takes a hash ref.
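For example (from the changelog), 'order' is a reserved word under MySQL, so
when restoring into MySQL you might use:
	rename_columns => {order => 'orders'}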
lib/DBIx/Admin/BackupRestore.pm
The default value is [].
If new is called as new(skip_tables => ['some_table_name', ...]), the restore phase
does not restore the tables named in the call to C<new()>.
This option is designed to work with CGI scripts using the module CGI::Session.
Now, the CGI script can run with the current CGI::Session data, and stale CGI::Session
data is not restored from the XML file.
See examples/backup-db.pl for a list of MS Access table names which you are unlikely
to want to transfer to an RDBMS.
This parameter is optional.
=item transform_tablenames
The default value is 0.
The only other value currently recognized by this option is 1.
lib/DBIx/Admin/BackupRestore.pm
Note: You would normally use these options to port data from Postgres to MySQL:
new(skip_schema => ['information_schema', 'pg_catalog'], transform_tablenames => 1).
This parameter is optional.
=item verbose
The default value is 0.
If new is called as new(verbose => 1), the backup and restore phases both print the
names of the tables to STDERR.
When beginning to use this module, you are strongly encouraged to use the verbose option
as a progress monitor.
This parameter is optional.
=back
=head1 Method: C<backup($database_name)>
Returns a potentially-huge string of XML.
You would normally write this straight to disk.
The database name is passed in here to help decorate the XML.
As of version 1.06, the XML tags are in lower case.
Method restore() will read a file containing upper or lower case tags.
lib/DBIx/Admin/BackupRestore.pm
Further, these column names are sorted, and all spaces in column names are converted
to underscores.
This hashref is acceptable to the module DBIx::Admin::CreateTrivialSchema :-).
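A sketch of typical use; it is an assumption here that backup() has already
been called, so that the list of tables is populated:
	my($column_names) = $backup -> get_column_names();
	for my $table_name (sort keys %$column_names)
	{
		print "$table_name: @{$$column_names{$table_name} }\n";
	}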
=head1 Method: C<restore($file_name)>
Returns an array ref of imported table names. They are sorted by name.
Opens and reads the given file, presumably one output by a previous call to backup().
The data read in is used to populate database tables. Use method C<split()>
to output to disk files.
=head1 Method: C<restore_in_order($file_name, [array ref of table names])>
Returns nothing.
Opens and reads the given file, presumably one output by a previous call to backup().
The data read in is used to populate database tables. Use method C<split()>
to output to disk files.
Restores the tables in the order given in the array ref parameter.
This allows you to define a column with a clause such as 'references foreign_table
(foreign_column)', and to populate the foreign_table before the dependent table.
And no, mutually-dependent and self-referential tables are still not catered for.
And yes, it does read the file once per table. Luckily, XML::Records is fast.
But if this seems like too much overhead, see method C<split()>.
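A sketch, assuming a table 'customers' must be populated before a dependent
table 'orders' (both table names are illustrative):
	DBIx::Admin::BackupRestore -> new(dbh => $dbh) -> restore_in_order
	(
		$file_name, ['customers', 'orders']
	);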
=head1 Method: C<split($file_name)>
Returns an array ref of the table names processed. They are sorted by name.
Opens and reads the given file, presumably one output by a previous call to backup().
Each table not being skipped is output to a separate disk file, with headers and footers
the same as output by method C<backup()>.
This means each file can be input to methods C<restore()> and C<restore_in_order()>.
The tables' schema names and table names are used to construct the file names, together
with an extension of '.xml'.
See examples/split-xml.pl and all-tables.xml for a demo.
Lastly, method C<split()> uses lower-case XML tags.
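A minimal sketch; no dbh parameter is needed, since split() is a
file-to-file operation:
	DBIx::Admin::BackupRestore -> new() -> split('all-tables.xml');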
=head1 Example code
See the examples/ directory in the distro.
There are 3 demo programs:
=over 4
=item backup-db.pl
=item restore-db.pl
=item split-xml.pl
=back
=head1 FAQ
=over 4
=item Are there any known problems with this module?
my(@opts) = ($ENV{DBI_DSN}, $ENV{DBI_USER}, $ENV{DBI_PASS});
try
{
populate(\@opts, 't0');
populate(\@opts, 't1');
# Backup phase.
open(OUT, "> $xml_file") || die("Can't open(> $xml_file): $!");
print OUT DBIx::Admin::BackupRestore -> new(dbh => generate_dbh(\@opts) ) -> backup($db_file);
close OUT;
ok(-r $db_file, "$db_file is readable");
ok(-r $xml_file, "$xml_file is readable");
# Restore phase.
$db_file = File::Spec -> catfile($out_dir, 'restore.sqlite');
unlink $db_file;
t/03.empty.table.t
my(@opts) = ($ENV{DBI_DSN}, $ENV{DBI_USER}, $ENV{DBI_PASS});
try
{
populate(\@opts, 't0', false);
populate(\@opts, 't1', true);
populate(\@opts, 't2', false);
# Backup phase.
open(OUT, "> $xml_file") || die("Can't open(> $xml_file): $!");
print OUT DBIx::Admin::BackupRestore -> new(dbh => generate_dbh(\@opts) ) -> backup($db_file);
close OUT;
ok(-r $db_file, "$db_file is readable");
ok(-r $xml_file, "$xml_file is readable");
# Restore phase.
$db_file = File::Spec -> catfile($out_dir, 'restore.sqlite');
unlink $db_file;