- Remove warning about potential side effects of RT#79576 (scheduled)
- Various doc improvements (GH#35, GH#62, GH#66, GH#70, GH#71, GH#72)
- Depend on newer Moo, to benefit from a safer runtime (RT#93004)
- Fix intermittent failures in the LeakTracer on 5.18+
- Fix failures of t/54taint.t on Windows with spaces in the $^X
executable path (RT#101615)
0.082810 2014-10-25 13:58 (UTC)
* Fixes
- Fix incorrect collapsing-parser source being generated in the
presence of unicode data among the collapse-points
- Fix endless loop on BareSourcelessResultClass->throw_exception(...)
* Misc
- Depend on newer SQL::Abstract (fixing overly-aggressive parenthesis
opener: RT#99503)
- Depend on newer Moo, fixing some interoperability issues:
http://lists.scsys.co.uk/pipermail/dbix-class/2014-October/011787.html
0.082801 2014-10-05 23:55 (UTC)
* Known Issues
- Fix inability to handle multiple consecutive transactions with
savepoints on DBD::SQLite < 1.39
- Fix CDBICompat to match Class::DBI behavior handling non-result
blessed has_a (implicit deflate via stringification and inflate via
blind new) (GH#51)
* Misc
- Ensure source metadata calls always take place on the result source
instance registered with the caller
- IFF DBIC_TRACE output defaults to STDERR we now silence the possible
wide-char warnings if the trace happens to contain unicode
0.08270 2014-01-30 21:54 (PST)
* Fixes
- Fix 0.08260 regression in DBD::SQLite bound int handling. Inserted
data was not affected, but any function <=> integer comparison would
have failed (originally fixed way back in 0e773352)
- Fix failure to load DateTime formatter when connecting to Firebird
over ODBC
* Misc
Makefile.PL
my ($rtype, $ver) = @{$final_req{$mod}};
no strict 'refs';
$rtype->($mod, $ver);
}
# author-mode or not - this is where we show a list of missing deps
# IFF we are running interactively
auto_install();
{
# M::I understands unicode in meta but does not write with the right
# layers - fhtagn!!!
local $SIG{__WARN__} = sub { warn $_[0] unless $_[0] =~ /Wide character in print/ };
WriteAll();
}
exit 0;
###
### Nothing user-serviceable beyond this point
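The C<$SIG{__WARN__}> filter used around C<WriteAll()> above can be demonstrated standalone: warnings are intercepted and only those not matching the noise pattern are kept (a minimal sketch of the idiom, not the Makefile.PL itself).

```perl
use strict;
use warnings;

# Collect warnings, discarding the known-harmless "Wide character" noise,
# exactly as the localized handler around WriteAll() does.
my @seen;
{
  local $SIG{__WARN__} = sub {
    push @seen, $_[0] unless $_[0] =~ /Wide character in print/;
  };
  warn "Wide character in print at somewhere\n";  # filtered out
  warn "something else went wrong\n";             # kept
}
print scalar @seen, "\n";   # 1
print $seen[0];             # something else went wrong
```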
lib/DBIx/Class.pm
1;
__END__
# This is the only file where an explicit =encoding is needed,
# as the distbuild-time injected author list is utf8 encoded
# Without this pod2text output is less than ideal
#
# A bit regarding selection/compatibility:
# Before 5.8.7 UTF-8 was == utf8, both behaving like the (lax) utf8 we know today
# Then https://www.nntp.perl.org/group/perl.unicode/2004/12/msg2705.html happened
# Encode way way before 5.8.0 supported UTF-8: https://metacpan.org/source/DANKOGAI/Encode-1.00/lib/Encode/Supported.pod#L44
# so it is safe for the oldest toolchains.
# Additionally we inject all the utf8 programmatically and test its well-formedness
# so all is well
#
=encoding UTF-8
=head1 NAME
DBIx::Class - Extensible and flexible object <-> relational mapper.
lib/DBIx/Class/Componentised.pm
# the author does not respond, and the Catalyst wiki used to recommend it
for (qw/DBIx::Class::UTF8Columns DBIx::Class::ForceUTF8/) {
if ($comp->isa ($_) ) {
$keep_checking = 0; # no use to check from this point on
carp_once "Use of $_ is strongly discouraged. See documentation of DBIx::Class::UTF8Columns for more info\n"
unless $ENV{DBIC_UTF8COLUMNS_OK};
last;
}
}
# something unset $keep_checking - we got a unicode mangler
if (! $keep_checking) {
my $base_store_column = do { require DBIx::Class::Row; DBIx::Class::Row->can ('store_column') };
my @broken;
for my $existing_comp (@target_isa) {
my $sc = $existing_comp->can ('store_column')
or next;
if ($sc ne $base_store_column) {
lib/DBIx/Class/Manual/Cookbook.pod
This kludge is necessary only for conditions passed to
L<search|DBIx::Class::ResultSet/search> and L<DBIx::Class::ResultSet/find>,
whereas L<create|DBIx::Class::ResultSet/create> and
L<DBIx::Class::Row/update> (but not L<DBIx::Class::ResultSet/update>) are
L<DBIx::Class::InflateColumn>-aware and will do the right thing when supplied
an inflated L<DateTime> object.
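The distinction can be sketched as follows (a hedged, non-runnable illustration: C<My::Schema>, the C<Event> source, and its DateTime-inflated C<created_on> column are all hypothetical names, not part of the distribution):

```perl
use strict;
use warnings;
use DateTime;

# Hypothetical schema with an Event source whose created_on column
# is declared as a datetime InflateColumn.
my $schema = My::Schema->connect('dbi:SQLite:example.db');
my $rs     = $schema->resultset('Event');
my $dt     = DateTime->new( year => 2014, month => 10, day => 25 );

# create() is InflateColumn-aware: the DateTime object is deflated for you
my $event = $rs->create({ name => 'release', created_on => $dt });

# search()/find() conditions are NOT inflated - format the value yourself
my $found = $rs->search({
  created_on => $schema->storage->datetime_parser->format_datetime($dt),
})->first;
```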
=head2 Using Unicode
When using unicode character data there are two alternatives -
either your database supports unicode characters (including setting
the utf8 flag on the returned string), or you need to encode/decode
data appropriately each time a string field is inserted into or
retrieved from the database. It is better to avoid
encoding/decoding data and to use your database's own unicode
capabilities if at all possible.
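When manual encoding/decoding is unavoidable, the round trip looks like this (a standalone sketch using only the core L<Encode> module; the "database" is elided):

```perl
use strict;
use warnings;
use Encode qw(encode decode);

# Manual round-trip for a database without native unicode support:
# encode character data to UTF-8 bytes before INSERT, decode after SELECT.
my $name  = "caf\x{e9}";                # character string: "café", 4 chars
my $bytes = encode('UTF-8', $name);     # what you would send to the DB (5 bytes)
my $back  = decode('UTF-8', $bytes);    # what you do with the fetched value

print length($name), " ", length($bytes), "\n";        # 4 5
print $back eq $name ? "round-trip ok\n" : "mismatch\n";
```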
The L<DBIx::Class::UTF8Columns> component handles storing selected
unicode columns in a database that does not directly support
unicode. If used with a database that does correctly handle unicode
then strange and unexpected data corruption B<will> occur.
The Catalyst Wiki Unicode page at
L<http://wiki.catalystframework.org/wiki/tutorialsandhowtos/using_unicode>
has additional information on the use of Unicode with Catalyst and
DBIx::Class.
The following databases correctly handle unicode data:
=head3 MySQL
MySQL supports unicode, and will correctly flag utf8 data from the
database if C<mysql_enable_utf8> is set in the connect options.
my $schema = My::Schema->connection('dbi:mysql:dbname=test',
$user, $pass,
{ mysql_enable_utf8 => 1} );
When set, data retrieved from textual column types (char, varchar,
etc.) will have the UTF-8 flag turned on if necessary. This enables
character semantics on those strings. You will also need to ensure
that your database / table / column is configured to use UTF8. See
Chapter 10 of the MySQL manual for details.
See L<DBD::mysql> for further details.
=head3 Oracle
Information about Oracle support for unicode can be found in
L<DBD::Oracle/UNICODE>.
=head3 PostgreSQL
PostgreSQL supports unicode if the character set is correctly set
at database creation time. Additionally, the C<pg_enable_utf8>
attribute should be set to ensure unicode data is correctly marked.
my $schema = My::Schema->connection('dbi:Pg:dbname=test',
$user, $pass,
{ pg_enable_utf8 => 1} );
Further information can be found in L<DBD::Pg>.
=head3 SQLite
SQLite version 3 and above natively use unicode internally. To
correctly mark unicode strings taken from the database, the
C<sqlite_unicode> flag should be set at connect time (in versions
of L<DBD::SQLite> prior to 1.27 this attribute was named
C<unicode>).
my $schema = My::Schema->connection('dbi:SQLite:/tmp/test.db',
'', '',
{ sqlite_unicode => 1} );
=head1 BOOTSTRAPPING/MIGRATING
=head2 Easy migration from class-based to schema-based setup
You want to start using the schema-based approach to L<DBIx::Class>
(see L<DBIx::Class::Manual::Intro/Setting it up manually>), but have an
established class-based setup with lots of existing classes that you don't
want to move by hand. Try this nifty script instead:
lib/DBIx/Class/Storage/DBI.pm
) {
$self->_populate_dbh;
$drv = $self->_dbh->{Driver}{Name};
}
else {
# try to use dsn to not require being connected, the driver may still
# force a connection later in _rebless to determine version
# (dsn may not be supplied at all if all we do is make a mock-schema)
#
# Use the same regex as the one used by DBI itself (even if the use of
# \w is odd given unicode):
# https://metacpan.org/source/TIMB/DBI-1.634/DBI.pm#L621
#
# DO NOT use https://metacpan.org/source/TIMB/DBI-1.634/DBI.pm#L559-566
# as there is a long-standing precedent of not loading DBI.pm until the
# very moment we are actually connecting
#
($drv) = ($self->_dbi_connect_info->[0] || '') =~ /^dbi:(\w*)/i;
$drv ||= $ENV{DBI_DRIVER};
}
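The driver-name extraction above can be exercised on its own (a standalone demonstration of the same regex against a few sample DSNs, not the actual storage object):

```perl
use strict;
use warnings;

# Same pattern as DBI itself: grab the word after the "dbi:" prefix,
# case-insensitively; a non-DSN string simply fails to match.
for my $dsn ( 'dbi:SQLite:/tmp/test.db',
              'dbi:mysql:dbname=test',
              'DBI:Pg:dbname=test',
              'not-a-dsn' ) {
  my ($drv) = ($dsn || '') =~ /^dbi:(\w*)/i;
  printf "%-28s => %s\n", $dsn, defined $drv ? $drv : '(no match)';
}
```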
lib/DBIx/Class/Storage/DBI.pm
if ($data_type =~ /^(?:
l? (?:var)? char(?:acter)? (?:\s*varying)?
|
(?:var)? binary (?:\s*varying)?
|
raw
)\b/x
) {
$max_size = $attr->{sqlt_size};
}
# Other charset/unicode types, assume scale of 4
elsif ($data_type =~ /^(?:
national \s* character (?:\s*varying)?
|
nchar
|
univarchar
|
nvarchar
)\b/x
) {
lib/DBIx/Class/Storage/DBI/ADO/Microsoft_SQL_Server.pm
# SQL Anywhere types
'long varbit' => $lob_max,
'long bit varying' => $lob_max,
uniqueidentifierstr => 100,
'long binary' => $lob_max,
'long varchar' => $lob_max,
'long nvarchar' => $lob_max,
# Firebird types
'char(x) character set unicode_fss' => 16000,
'varchar(x) character set unicode_fss' => 16000,
'blob sub_type text' => $lob_max,
'blob sub_type text character set unicode_fss' => $lob_max,
# Informix types
smallfloat => 100,
byte => $lob_max,
lvarchar => 8000,
'datetime year to fraction(5)' => 100,
# FIXME add other datetime types
# MS Access types
autoincrement => 100,
lib/DBIx/Class/UTF8Columns.pm
__PACKAGE__->load_components(qw/UTF8Columns/);
__PACKAGE__->utf8_columns(qw/name description/);
# the following then return strings with the utf8 flag
$artist->name;
$artist->get_column('description');
=head1 DESCRIPTION
This module allows you to get and store utf8 (unicode) column data
in a database that does not natively support unicode. It ensures
that column data is correctly serialised as a byte stream when
stored and de-serialised to unicode strings on retrieval.
THE USE OF THIS MODULE (AND ITS COUSIN DBIx::Class::ForceUTF8) IS VERY
STRONGLY DISCOURAGED, PLEASE READ THE WARNINGS BELOW FOR AN EXPLANATION.
If you want to continue using this module and do not want to receive
further warnings set the environment variable C<DBIC_UTF8COLUMNS_OK>
to a true value.
=head2 Warning - Module does not function properly on create/insert
lib/DBIx/Class/UTF8Columns.pm
database engines, as explained below.
If you have specific questions about the integrity of your data in light
of this development - please
L<join us on IRC or the mailing list|DBIx::Class/GETTING HELP/SUPPORT>
to further discuss your concerns with the team.
=head2 Warning - Native Database Unicode Support
If your database natively supports Unicode (as does SQLite with the
C<sqlite_unicode> connect flag, MySQL with the C<mysql_enable_utf8>
connect flag, or Postgres with the C<pg_enable_utf8> connect flag),
then this component should B<not> be used, and will corrupt unicode
data in a subtle and unexpected manner.
It is far better to do Unicode support within the database if
possible rather than converting data to and from raw bytes on every
database round trip.
=head2 Warning - Component Overloading
Note that this module overloads L<DBIx::Class::Row/store_column> in a way
that may prevent other components overloading the same method from working
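Why overloading C<store_column> is fragile can be shown with plain packages: under Perl's C3 method resolution, a component that mangles a value must call C<next::method> to keep the rest of the chain working (a minimal standalone sketch with hypothetical package names, not the real DBIC classes).

```perl
use strict;
use warnings;
use mro 'c3';

# Stand-in for the base row class: store_column just records the value.
package My::Row;
sub store_column { my ($self, $col, $val) = @_; $self->{$col} = $val }

# A component that rewrites the value, then continues the method chain.
package My::Component::Encoder;
our @ISA = ('My::Row');
use mro 'c3';
sub store_column {
  my ($self, $col, $val) = @_;
  $self->next::method($col, "enc($val)");  # mangle, then pass along
}

package My::Result;
our @ISA = ('My::Component::Encoder');

package main;
my $row = bless {}, 'My::Result';
$row->store_column(title => 'abc');
print $row->{title}, "\n";   # enc(abc)
```

A second component overriding C<store_column> without calling C<next::method> would silently cut C<My::Row::store_column> (and the Encoder) out of this chain.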
maint/Makefile.PL.inc/12_authordeps.pl
push @{Meta->{values}{build_requires}}, $_;
}
else {
$removed_build_requires{$_->[0]} = $_->[1]
unless $_->[0] eq 'ExtUtils::MakeMaker';
}
}
if (keys %removed_build_requires) {
print "Regenerating META with author requires excluded\n";
# M::I understands unicode in meta but does not write with the right
# layers - fhtagn!!!
local $SIG{__WARN__} = sub { warn $_[0] unless $_[0] =~ /Wide character in print/ };
Meta->write;
}
# strip possible crlf from META
if ($^O eq 'MSWin32' or $^O eq 'cygwin') {
local $ENV{PERLIO} = 'unix';
system( $^X, qw( -MExtUtils::Command -e dos2unix -- META.yml), );
}
maint/Makefile.PL.inc/21_set_meta.pl
# <ribasushi> it basically allows you to first consider any "high level intermediate dist" advertising "all my stuff works" so that larger swaths of CPAN get installed first under parallel
# <ribasushi> note - this is not "spur of the moment" - I first started testing my depchain in parallel 3 years ago
# <ribasushi> and have had it stable ( religiously tested on travis on any commit ) for about 2 years now
#
Meta->{values}{x_parallel_test_certified} = 1;
Meta->{values}{x_dependencies_parallel_test_certified} = 1;
# populate x_contributors
# a direct dump of the sort is ok - xt/authors.t guarantees source sanity
Meta->{values}{x_contributors} = [ do {
# according to #p5p this is how one safely reads random unicode
# this set of boilerplate is insane... wasn't perl unicode-king...?
no warnings 'once';
require Encode;
require PerlIO::encoding;
local $PerlIO::encoding::fallback = Encode::FB_CROAK();
open (my $fh, '<:encoding(UTF-8)', 'AUTHORS') or die "Unable to open AUTHORS - can't happen: $!\n";
map { chomp; ( (! $_ or $_ =~ /^\s*\#/) ? () : $_ ) } <$fh>;
}];
t/52leaks.t
my $obj = CORE::bless(
$_[0], (@_ > 1) ? $_[1] : do {
my ($class, $fn, $line) = caller();
fail ("bless() of $_[0] into $class without explicit class specification at $fn line $line")
if $class =~ /^ (?: DBIx\:\:Class | DBICTest ) /x;
$class;
}
);
# unicode is tricky, and now we happen to invoke it early via a
# regex in connection()
return $obj if (ref $obj) =~ /^utf8/;
# Test Builder is now making a new object for every pass/fail (que bloat?)
# and as such we can't really store any of its objects (since it will
# re-populate the registry while checking it, ewwww!)
return $obj if (ref $obj) =~ /^TB2::|^Test::Stream/;
# populate immediately to avoid weird side effects
return populate_weakregistry ($weak_registry, $obj );
my $schema = DBICTest->init_schema();
DBICTest::Schema::CD->load_components('UTF8Columns');
DBICTest::Schema::CD->utf8_columns('title');
Class::C3->reinitialize() if DBIx::Class::_ENV_::OLD_MRO;
# as per http://search.cpan.org/dist/Test-Simple/lib/Test/More.pm#utf8
binmode (Test::More->builder->$_, ':utf8') for qw/output failure_output todo_output/;
my $bytestream_title = my $utf8_title = "weird \x{466} stuff";
utf8::encode($bytestream_title);
cmp_ok ($bytestream_title, 'ne', $utf8_title, 'unicode/raw differ (sanity check)');
my $cd;
{
local $TODO = "This has been broken since rev 1191, Mar 2006";
$schema->is_executed_sql_bind( sub {
$cd = $schema->resultset('CD')->create( { artist => 1, title => $utf8_title, year => '2048' } )
}, [[
'INSERT INTO cd ( artist, title, year) VALUES ( ?, ?, ? )',
[ { dbic_colname => "artist", sqlt_datatype => "integer" }
my $test = $reloaded ? 'reloaded' : 'stored';
$cd->discard_changes if $reloaded;
ok( utf8::is_utf8( $cd->title ), "got $test title with utf8 flag" );
ok(! utf8::is_utf8( $cd->{_column_data}{title} ), "in-object $test title without utf8" );
ok(! utf8::is_utf8( $cd->year ), "got $test year without utf8 flag" );
ok(! utf8::is_utf8( $cd->{_column_data}{year} ), "in-object $test year without utf8" );
}
$cd->title('nonunicode');
ok(! utf8::is_utf8( $cd->title ), 'update title without utf8 flag' );
ok(! utf8::is_utf8( $cd->{_column_data}{title} ), 'store utf8-less title' );
$cd->update;
$cd->discard_changes;
ok(! utf8::is_utf8( $cd->title ), 'reloaded title without utf8 flag' );
ok(! utf8::is_utf8( $cd->{_column_data}{title} ), 'reloaded utf8-less title' );
$bytestream_title = $utf8_title = "something \x{219} else";
utf8::encode($bytestream_title);
($raw_db_title) = $schema->resultset('CD')
->search ($cd->ident_condition)
->get_column('title')
->_resultset
->cursor
->next;
is ($raw_db_title, $bytestream_title, 'UPDATE: raw bytes retrieved from database');
$cd->discard_changes;
$cd->title($utf8_title);
ok( !$cd->is_column_changed('title'), 'column is not dirty after setting the same unicode value' );
$cd->update ({ title => $utf8_title });
$cd->title('something_else');
ok( $cd->is_column_changed('title'), 'column is dirty after setting to something completely different');
{
local $TODO = 'There is currently no way to propagate aliases to inflate_result()';
$cd = $schema->resultset('CD')->find ({ title => $utf8_title }, { select => 'title', as => 'name' });
ok (utf8::is_utf8( $cd->get_column ('name') ), 'utf8 flag propagates via as');
}
t/debug/core.t
$_;
} finally {
# restore STDERR
close STDERR;
open(STDERR, '>&STDERRCOPY');
};
die "How did that fail... $exception"
if $exception;
is_deeply(\@warnings, [], 'No warnings with unicode on STDERR');
# test debugcb and debugobj protocol
{
my $rs = $schema->resultset('CD')->search( {
artist => 1,
cdid => { -between => [ 1, 3 ] },
title => { '!=' => \[ '?', undef ] }
});
my $sql_trace = 'SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track FROM cd me WHERE ( ( artist = ? AND ( cdid BETWEEN ? AND ? ) AND title != ? ) )';
t/resultset/rowparser_internals.t
$#{$_[0]} = $result_pos - 1;
',
'Multiple has_many on multiple branches with underdefined root, HRI-direct torture test',
);
done_testing;
my $deparser;
sub is_same_src { SKIP: {
skip "Skipping comparison of unicode-poisoned source", 1
if DBIx::Class::_ENV_::STRESSTEST_UTF8_UPGRADE_GENERATED_COLLAPSER_SOURCE;
$deparser ||= B::Deparse->new;
local $Test::Builder::Level = $Test::Builder::Level + 1;
my ($got, $expect) = @_;
skip "Not testing equality of source containing defined-or operator on this perl $]", 1
if ($] < 5.010 and $expect =~ m!\Q//=!);
xt/authors.t
use warnings;
use strict;
use Test::More;
use Config;
use File::Spec;
my @known_authors = do {
# according to #p5p this is how one safely reads random unicode
# this set of boilerplate is insane... wasn't perl unicode-king...?
no warnings 'once';
require Encode;
require PerlIO::encoding;
local $PerlIO::encoding::fallback = Encode::FB_CROAK();
open (my $fh, '<:encoding(UTF-8)', 'AUTHORS') or die "Unable to open AUTHORS - can't happen: $!\n";
map { chomp; ( ( ! $_ or $_ =~ /^\s*\#/ ) ? () : $_ ) } <$fh>;
} or die "Known AUTHORS file seems empty... can't happen...";