Result:
found more than 809 distributions - search limited to the first 2001 files matching your query ( run in 2.230 )


DBD-KB

 view release on metacpan or  search on metacpan

Changes  view on Meta::CPAN


 - Prevent core dump when checking $dbh->{standard_conforming_strings}
     on older servers.
   [Greg Sabino Mullane]

 - Skip unicode tests if server is set to 'LATIN1'
   [Greg Sabino Mullane]


Version 2.10.5  (released September 16, 2008)

 view all matches for this distribution


DBD-MariaDB


t/lib.pl  view on Meta::CPAN

    my $err;
    my $dbh = eval { DBI->connect(@_) };
    if ( $dbh ) {
        my ($current_charset, $current_collation) = $dbh->selectrow_array('SELECT @@character_set_database, @@collation_database');
        my $expected_charset = $dbh->selectrow_array("SHOW CHARSET LIKE 'utf8mb4'") ? 'utf8mb4' : 'utf8';
        my $expected_collation = "${expected_charset}_unicode_ci";
        if ($current_charset ne $expected_charset) {
            $err = "Database charset is not $expected_charset, but $current_charset";
        } elsif ($current_collation ne $expected_collation) {
            $err = "Database collation is not $expected_collation, but $current_collation";
        }



DBD-MaxDB


MaxDB.pm  view on Meta::CPAN

        $sth;
    }

    sub type_info_all {
        my ($dbh) = @_;
        my $res = DBD::MaxDB::db::_isunicode($dbh);
        if ($res) {
            require DBD::MaxDB::TypeInfoUnicode;
            return $DBD::MaxDB::TypeInfoUnicode::type_info_all;
        } else {
            require DBD::MaxDB::TypeInfoAscii;

MaxDB.pm  view on Meta::CPAN

=item C<maxdb_sqlmode (string)>

Gets/Sets the SQL mode of the current connection. Possible values are
C<ORACLE | INTERNAL>.

=item C<maxdb_unicode (boolean, read-only)>

Indicates whether the current connection supports unicode (true) or not (false).

=back

=head2 Statement Handle Attributes

MaxDB.pm  view on Meta::CPAN

           or die "Can't execute statement $DBI::err $DBI::errstr\n";
  ...

=head1 UNICODE

DBD::MaxDB supports Unicode. Perl's internal unicode format is UTF-8
but MaxDB uses UCS-2. Therefore support is limited to characters
that can also be represented in UCS-2.

=head2 Perl and Unicode

Perl began implementing Unicode with version 5.6. But if you plan to use Unicode
it is strongly recommended to use perl 5.8.2 or later. Details about using
unicode in perl can be found in the perl documentation:

   perldoc perluniintro
   perldoc perlunicode

=head2 MaxDB and Unicode

MaxDB supports the code attribute Unicode for the data type CHAR and is able to
display various presentation codes in Unicode format. As well as storing data in Unicode, 



DBD-Mimer


sqlext.h  view on Meta::CPAN

#define SQL_INTERVAL_MINUTE_TO_SECOND		(-92)
#endif	/* ODBCVER >= 0x0300 */


/*
 *   SQL unicode data types
 */
#if (ODBCVER <= 0x0300)
/* These definitions are historical and obsolete */
#define SQL_UNICODE				(-95)
#define SQL_UNICODE_VARCHAR			(-96)



DBD-ODBC


ODBC.pm  view on Meta::CPAN

        diags => \@EXPORT_DIAGS,
        taf => \@EXPORT_TAF);

    sub parse_trace_flag {
        my ($class, $name) = @_;
        return 0x02_00_00_00 if $name eq 'odbcunicode';
        return 0x04_00_00_00 if $name eq 'odbcconnection';
        return DBI::parse_trace_flag($class, $name);
    }

    sub parse_trace_flags {

ODBC.pm  view on Meta::CPAN

            odbc_describe_parameters       => undef,
            odbc_SQL_ROWSET_SIZE           => undef,
            odbc_SQL_DRIVER_ODBC_VER       => undef,
            odbc_cursortype                => undef,
            odbc_query_timeout             => undef, # sth and dbh
            odbc_has_unicode               => undef,
            odbc_out_connect_string        => undef,
            odbc_version                   => undef,
            odbc_err_handler               => undef,
            odbc_putdata_start             => undef, # sth and dbh
            odbc_column_display_size       => undef, # sth and dbh

ODBC.pm  view on Meta::CPAN

This documentation refers to DBD::ODBC version 1.61.


=head1 WARNING

This version of DBD::ODBC contains a significant fix to unicode when
inserting into CHAR/VARCHAR columns and it is a change in behaviour
from 1.45. The change B<only> applies to unicode builds of DBD::ODBC
(the default on Windows but you can build it for unicode on unix too)
and char/varchar columns and not nchar/nvarchar columns.

Prior to this release, when you used the unicode build of DBD::ODBC
and inserted data into CHAR/VARCHAR columns using parameters,
DBD::ODBC did this:

1 if you set odbc_describe_parameters to 0 (thus preventing DBD::ODBC
  from calling SQLDescribeParam) parameters for CHAR/VARCHAR columns

ODBC.pm  view on Meta::CPAN

  type. This usually returns SQL_CHAR or SQL_VARCHAR for CHAR/VARCHAR
  columns unsurprisingly. The parameter was then bound as SQL_VARCHAR.

Items 1 to 4 still apply. 5 now has a different behaviour. In this
release, DBD::ODBC now looks at your bound data first before using the
type returned by SQLDescribeParam. If your data looks like unicode
(i.e., SvUTF8() is true) it now binds the parameter as SQL_WVARCHAR.
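
The SvUTF8() test described above can be observed from pure Perl via
utf8::is_utf8() (a minimal illustration; DBD::ODBC performs the
equivalent check at the C level):

```perl
# utf8::is_utf8() reports whether a scalar's internal UTF-8 flag
# (SvUTF8 at the C level) is set.
my $plain = "euro";        # ASCII literal: flag off
my $wide  = "\x{20ac}";    # code point above 0xFF: flag on
my $plain_flag = utf8::is_utf8($plain) ? 1 : 0;   # 0
my $wide_flag  = utf8::is_utf8($wide)  ? 1 : 0;   # 1
```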

What might this mean to you?

If you had Perl scalars that were bound to CHAR/VARCHAR columns in an
insert/update/delete and those scalars contained unicode, DBD::ODBC
would actually pass the individual octets in your scalar, not
characters.  For instance, if you had the Perl scalar "\x{20ac}" (the
Euro unicode character) and you bound it to a CHAR/VARCHAR, DBD::ODBC
would pass 0xe2, 0x82, 0xac as separate characters because those bytes
were Perl's UTF-8 encoding of a euro. These would probably be
interpreted by your database engine as 3 characters in its current
codepage. If you queried your database to find the length of the data
inserted you'd probably get back 3, not 1.
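
A quick way to see those octets (a standalone sketch using the core
Encode module, independent of DBD::ODBC):

```perl
use Encode qw(encode);

my $euro   = "\x{20ac}";              # one character
my $octets = encode('UTF-8', $euro);  # three bytes: 0xE2 0x82 0xAC
printf "chars=%d bytes=%d octets=%s\n",
    length($euro), length($octets), sprintf('%vX', $octets);
```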

ODBC.pm  view on Meta::CPAN

statement, it would bind the column as SQL_WCHAR and you'd get back 3
characters with the utf8 flag on (what those characters were depends
on how your database or driver translates code page characters to wide
characters).

What should happen now is that if your bound parameters are unicode,
DBD::ODBC will bind them as wide characters (unicode) and your driver
or database will attempt to convert them into the code page it is
using. This means so long as your database can store the data you are
inserting, when you read it back you should get what you inserted.

=head1 SYNOPSIS

ODBC.pm  view on Meta::CPAN

Older versions of DBD::ODBC assumed that the parameter binding type
was 12 (C<SQL_VARCHAR>).  Newer versions always attempt to call
C<SQLDescribeParam> to find the parameter types but if
C<SQLDescribeParam> is unavailable DBD::ODBC falls back to a default
bind type. The internal default bind type is C<SQL_VARCHAR> (for a
non-unicode build) and C<SQL_WVARCHAR> or C<SQL_VARCHAR> (for a
unicode build, depending on whether the parameter is unicode or
not). If you set C<odbc_default_bind_type> to a value other than 0 you
override the internal default.

B<N.B.> If you call the C<bind_param> method with a SQL type, this
overrides everything else above.

ODBC.pm  view on Meta::CPAN

describe the parameters accurately (MS SQL Server sometimes does this
with some SQL like I<select myfunc(?)  where 1 = 1>). Setting
C<odbc_force_bind_type> to C<SQL_VARCHAR> will force DBD::ODBC to bind
all the parameters as C<SQL_VARCHAR> and ignore SQLDescribeParam.

Bear in mind that if you are inserting unicode data you probably want
to use C<SQL_WVARCHAR>/C<SQL_WCHAR>/C<SQL_WLONGVARCHAR> and not
C<SQL_VARCHAR>.

As this attribute was created to work around buggy ODBC Drivers which
support SQLDescribeParam but describe the parameters incorrectly you

ODBC.pm  view on Meta::CPAN


Set this flag to treat all strings returned from the ODBC driver
(except columns described as SQL_BINARY or SQL_TIMESTAMP and its
variations) as UTF-8 encoded.  Some ODBC drivers (like Aster and maybe
PostgreSQL) return UTF-8 encoded data but do not support the SQLxxxW
unicode API. Enabling this flag will cause DBD::ODBC to treat driver
returned data as UTF-8 encoded and it will be marked as such in Perl.

Do not confuse this with DBD::ODBC's unicode support. The
C<odbc_utf8_on> attribute only applies to non-unicode enabled builds
of DBD::ODBC.

=head3 odbc_describe_parameters

Defaults to on. When set this allows DBD::ODBC to call SQLDescribeParam
(if the driver supports it) to retrieve information about any
parameters.

When off/false DBD::ODBC will not call SQLDescribeParam and defaults
to binding parameters as SQL_CHAR/SQL_WCHAR depending on the build
type and whether your data is unicode or not.

You do not have to disable odbc_describe_parameters just because your
driver does not support SQLDescribeParam as DBD::ODBC will work this
out at the start via SQLGetFunctions.

ODBC.pm  view on Meta::CPAN


and

    $dbh->{odbc_exec_direct} = 1;

B<NOTE:> Even if you build DBD::ODBC with unicode support, you still
cannot pass unicode strings to the prepare method if you also set
odbc_exec_direct. This is an unavoidable restriction of this
attribute.

=head3 odbc_SQL_DRIVER_ODBC_VER

ODBC.pm  view on Meta::CPAN


See F<t/20SqlServer.t> for an example.

In versions of SQL Server 2005 and later see "Multiple Active Statements (MAS)" in the DBD::ODBC::FAQ instead of using this attribute.

=head3 odbc_has_unicode

A read-only attribute signifying whether DBD::ODBC was built with the
C macro WITH_UNICODE. A value of 1 indicates DBD::ODBC was built
with WITH_UNICODE; otherwise the value returned is 0.

Building WITH_UNICODE affects columns and parameters which are
SQL_C_WCHAR, SQL_WCHAR, SQL_WVARCHAR, and SQL_WLONGVARCHAR, the SQL
passed to prepare and do, the connect method and a lot more. See L</Unicode>.

When odbc_has_unicode is 1, DBD::ODBC will:

=over

=item bind all string columns as wide characters (SQL_Wxxx)

This means that UNICODE data stored in these columns will be returned
to Perl correctly as unicode (i.e., encoded in UTF-8 and the UTF-8 flag set).

=item bind parameters the database declares as wide characters or unicode parameters as SQL_Wxxx

Parameters bound where the database declares the parameter as being a
wide character, or where the parameter data is unicode, or where the
parameter type is explicitly set to a wide type (e.g., SQL_Wxxx) are bound
as wide characters in the ODBC API and DBD::ODBC encodes the perl parameters
as UTF-16 before passing them to the driver.

=item SQL

ODBC.pm  view on Meta::CPAN


=back

NOTE: You will need at least Perl 5.8.1 to use UNICODE with DBD::ODBC.

NOTE: Binding of unicode output parameters is coded but untested.

NOTE: When building DBD::ODBC on Windows ($^O eq 'MSWin32') the
WITH_UNICODE macro is automatically added. To disable specify -nou as
an argument to Makefile.PL (e.g. C<perl Makefile.PL -nou>). On non-Windows
platforms the WITH_UNICODE macro is B<not> enabled by default and to enable

ODBC.pm  view on Meta::CPAN


  export DBD_ODBC_UNICODE=1
  cpanm DBD::ODBC

UNICODE support in ODBC Drivers differs considerably. Please read the
README.unicode file for further details.

=head3 odbc_out_connect_string

After calling the connect method this will be the ODBC driver's
out connection string - see documentation on SQLDriverConnect.

ODBC.pm  view on Meta::CPAN


The type the lob is retrieved as may be overridden in C<%attr> using
C<TYPE =E<gt> sql_type>. C<%attr> is optional and if omitted defaults
to SQL_C_BINARY for binary columns and SQL_C_CHAR/SQL_C_WCHAR for
other column types depending on whether DBD::ODBC is built with
unicode support. C<$chrs_or_bytes_read> will be the number of bytes read
when the column type is SQL_C_CHAR or SQL_C_BINARY, and the number of
characters read if the column type is SQL_C_WCHAR.

When built with unicode support, C<$length> specifies the amount of
buffer space to be used when retrieving the lob data, but as it is
returned as SQLWCHAR characters this means you retrieve at most
C<$length/2> characters. When those retrieved characters are encoded
in UTF-8 for Perl, the C<$lob> scalar may need to be larger than
C<$length>, so DBD::ODBC grows it appropriately.
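
The C<$length/2> figure follows from UTF-16 using two bytes per SQLWCHAR
code unit; a standalone illustration with the core Encode module:

```perl
use Encode qw(encode);

# Three BMP characters occupy six bytes as UTF-16, so a buffer of
# $length bytes holds at most $length/2 characters.
my $buf   = encode('UTF-16LE', 'abc');
my $bytes = length($buf);   # 6
```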

ODBC.pm  view on Meta::CPAN

DBD::ODBC is 'SQL' which DBD::ODBC supports by outputting the SQL
strings (after modification) passed to the prepare and do methods.

From DBI 1.617 DBI also defines ENC (encoding), CON (connection), TXN
(transaction) and DBD (DBD only) trace flags. DBI's ENC and CON trace
flags are synonymous with DBD::ODBC's odbcunicode and odbcconnection
trace flags though I may remove the DBD::ODBC ones in the
future. DBI's DBD trace flag allows output of only DBD::ODBC trace
messages without DBI's trace messages.

Currently DBD::ODBC supports two private trace flags. The
'odbcunicode' flag traces some unicode operations and the
odbcconnection traces the connect process.

To enable tracing of particular flags you use:

  $h->trace($h->parse_trace_flags('SQL|odbcconnection'));
  $h->trace($h->parse_trace_flags('1|odbcunicode'));

In the first case 'SQL' and 'odbcconnection' tracing is enabled on
$h. In the second case trace level 1 is set and 'odbcunicode' tracing
is enabled.

If you want to enable a DBD::ODBC private trace flag before connecting
you need to do something like:

ODBC.pm  view on Meta::CPAN

  DBI->trace(DBD::ODBC->parse_trace_flag('odbcconnection'));

or

  use DBD::ODBC;
  DBI->trace(DBD::ODBC->parse_trace_flags('odbcconnection|odbcunicode'));

or

  DBI_TRACE=odbcconnection|odbcunicode perl myscript.pl

From DBI 1.617 you can output only DBD::ODBC trace messages using

  DBI_TRACE=DBD perl myscript.pl

ODBC.pm  view on Meta::CPAN


unixODBC will happily recognise ODBC drivers which only have the ANSI
versions of the ODBC API and those that have the wide versions
too.

unixODBC will allow an ANSI application to work with a unicode
ODBC driver and vice versa (although in the latter case you obviously
cannot actually use unicode).

unixODBC does not prevent you sending UTF-8 in the ANSI versions of
the ODBC APIs but whether that is understood by your ODBC driver is
another matter.

unixODBC differs from the Microsoft ODBC driver manager in only one
way with respect to unicode support: it avoids unnecessary translations
between single byte and double byte characters when an ANSI
application is using a unicode-aware ODBC driver, by requiring unicode
applications to signal their intent by calling SQLDriverConnectW
first. On Windows, the ODBC driver manager always uses the wide
versions of the ODBC API in ODBC drivers which provide them,
regardless of what the application really needs, and this results
in a lot of unnecessary character translations when you have an
ANSI application and a unicode ODBC driver.

=item iODBC

The wide character versions expect and return wchar_t types.

=back

DBD::ODBC has gone with unixODBC so you cannot use iODBC with a
unicode build of DBD::ODBC. However, some ODBC drivers support UTF-8
(although how they do this with SQLGetData reliably I don't know)
and so you should be able to use those with DBD::ODBC not built for
unicode.

=head3 Enabling and Disabling Unicode support

On Windows Unicode support is enabled by default and to disable it
you will need to specify C<-nou> to F<Makefile.PL> to get back to the

ODBC.pm  view on Meta::CPAN


  perl Makefile.PL -u

=head3 Unicode - What is supported?

As of version 1.17 DBD::ODBC has the following unicode support:

=over

=item SQL (introduced in 1.16_2)

Unicode strings in calls to the C<prepare> and C<do> methods are
supported so long as the C<odbc_execdirect> attribute is not used.

=item unicode connection strings (introduced in 1.16_2)

Unicode connection strings are supported but you will need a DBI
post 1.607 for that.

=item column names

ODBC.pm  view on Meta::CPAN

As of DBD::ODBC 1.32_3 meta data calls accept Unicode strings.

=back

Since version 1.16_4, the default parameter bind type is SQL_WVARCHAR
for unicode builds of DBD::ODBC. This only affects ODBC drivers which
do not support SQLDescribeParam and only then if you do not
specifically set a SQL type on the bind_param method call.

The above Unicode support has been tested with the SQL Server, Oracle
9.2+ and Postgres drivers on Windows and various Easysoft ODBC drivers
on UNIX.

=head3 Unicode - What is not supported?

You cannot use unicode parameter names e.g.,

  select * from table where column = :unicode_param_name

You cannot use unicode strings in calls to prepare if you set the
odbc_execdirect attribute.

You cannot use the iODBC driver manager with DBD::ODBC built for
unicode.

=head3 Unicode - Caveats

For Unicode support on any platform in Perl you will need at least
Perl 5.8.1 - sorry but this is the way it is with Perl.

ODBC.pm  view on Meta::CPAN

(http://www.unixodbc.org) with Unicode support and it was built with
defaults which set WCHAR as 2 bytes.

I believe that the iODBC driver manager expects wide characters to be
wchar_t types (which are usually 4 bytes) and hence DBD::ODBC will not
work with iODBC when built for unicode.

The ODBC Driver must expect Unicode data specified in SQLBindParameter
and SQLBindCol to be UTF-16 in local endianness. Similarly, in calls to
SQLPrepareW, SQLDescribeColW and SQLDriverConnectW.
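
For example (a standalone sketch with the core Encode module; the byte
order shown assumes a little-endian machine):

```perl
use Encode qw(encode);

# U+20AC as UTF-16 in little-endian byte order: 0xAC 0x20.
my $wide = encode('UTF-16LE', "\x{20ac}");
my $hex  = sprintf '%vX', $wide;   # "AC.20"
```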

ODBC.pm  view on Meta::CPAN

patches welcome.

=head3 Unicode implementation in DBD::ODBC

DBD::ODBC uses the wide character versions of the ODBC API and the
SQL_WCHAR ODBC type to support unicode in Perl.

Wide characters returned from the ODBC driver will be converted to
UTF-8 and the perl scalars will have the utf8 flag set (by using
sv_utf8_decode).

ODBC.pm  view on Meta::CPAN

from table" with a single Unicode character above 0xFFFF may
return 2 and not 1 so you cannot use database functions on that
data like upper/lower/length etc but you can at least save the data in
your database and get it back.

When built for unicode, DBD::ODBC will always call SQLDriverConnectW
(and not SQLDriverConnect) even if a) your connection string is not
unicode or b) you do not have a DBI later than 1.607, because unixODBC
requires SQLDriverConnectW to be called if you want to call other
unicode ODBC APIs later. As a result, if you build for unicode and
pass ASCII strings to the connect method they will be converted to
UTF-16 and passed to SQLDriverConnectW. This should make no real
difference to applications that do not use unicode connection strings.

You will need a DBI later than 1.607 to support unicode connection
strings because until after 1.607 there was no way for DBI to pass
unicode strings to the DBD.

=head3 Unicode and Oracle

You have to set the environment variables C<NLS_NCHAR=AL32UTF8> and
C<NLS_LANG=AMERICAN_AMERICA.AL32UTF8> (or any other language setting

ODBC.pm  view on Meta::CPAN

modification except for the Oracle driver, for which you will need to
set your NLS_LANG as mentioned above.

=head3 Unicode and other ODBC drivers

If you have a unicode-enabled ODBC driver and it works with DBD::ODBC
let me know and I will include it here.

=head2 ODBC Support in ODBC Drivers

=head3 Drivers without SQLDescribeParam

ODBC.pm  view on Meta::CPAN


DBD::ODBC uses the C<SQLDescribeParam> API when parameters are bound
to your SQL to find the types of the parameters. If the ODBC driver
does not support C<SQLDescribeParam>, DBD::ODBC assumes the parameters
are C<SQL_VARCHAR> or C<SQL_WVARCHAR> types (depending on whether
DBD::ODBC is built for unicode or not and whether your parameter is
unicode data). In any case, if you bind a parameter and specify a SQL
type this overrides any type DBD::ODBC would choose.

For ODBC drivers which do not support C<SQLDescribeParam> the default
behavior in DBD::ODBC may not be what you want. To change the default
parameter bind type set L</odbc_default_bind_type>. If, after that you

ODBC.pm  view on Meta::CPAN


L<http://www.easysoft.com/support/kb/kb01043.html>

Some Common Unicode Problems and Solutions using Perl DBD::ODBC and MS SQL Server

L<http://www.easysoft.com/developer/languages/perl/sql-server-unicode.html>

and a version possibly kept more up to date:

L<https://github.com/mjegh/dbd_odbc_sql_server_unicode/blob/master/common_problems.pod>

How do I use SQL Server Query Notifications from Linux and UNIX?

L<http://www.easysoft.com/support/kb/kb01069.html>



DBD-Oracle


lib/DBD/Oracle.pm  view on Meta::CPAN

            ora_lob_trim
            ora_lob_length
            ora_lob_chunk_size
            ora_lob_is_init
            ora_nls_parameters
            ora_can_unicode
            ora_can_taf
            ora_db_startup
            ora_db_shutdown
        /;

lib/DBD/Oracle.pm  view on Meta::CPAN

        # return copy of params to protect against accidental editing
        my %nls = %{$dbh->{ora_nls_parameters}};
        return \%nls;
    }

    sub ora_can_unicode {
        my $dbh = shift;
        my $refresh = shift;
        # 0 = No Unicode support.
        # 1 = National character set is Unicode-based.
        # 2 = Database character set is Unicode-based.
        # 3 = Both character sets are Unicode-based.

        return $dbh->{ora_can_unicode}
            if defined $dbh->{ora_can_unicode} && !$refresh;

        my $nls = $dbh->ora_nls_parameters($refresh);

        $dbh->{ora_can_unicode}  = 0;
        $dbh->{ora_can_unicode} += 1 if $nls->{NLS_NCHAR_CHARACTERSET} =~ m/UTF/;
        $dbh->{ora_can_unicode} += 2 if $nls->{NLS_CHARACTERSET}       =~ m/UTF/;

        return $dbh->{ora_can_unicode};
    }

}   # end of package DBD::Oracle::db


lib/DBD/Oracle.pm  view on Meta::CPAN


It also has the effect of disabling the 'quick FETCH' of attribute values from the handle's attribute cache, so all attribute values are handled by the driver's own FETCH method. This makes them slightly slower, but it is useful for special-purpose drivers...

=head1 ORACLE-SPECIFIC DATABASE HANDLE METHODS

=head2 B<ora_can_unicode ( [ $refresh ] )>

Returns a number indicating whether either of the database character sets
is a Unicode encoding. Calls ora_nls_parameters() and passes the optional
$refresh parameter to it.
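
The returned number is a bitmask of the two regex checks shown in the
code above; here is a minimal standalone sketch of the same logic, fed a
hypothetical NLS parameter hash instead of a live connection:

```perl
# Bit 1: national character set is Unicode-based.
# Bit 2: database character set is Unicode-based.
sub can_unicode_from_nls {
    my ($nls) = @_;
    my $v = 0;
    $v += 1 if $nls->{NLS_NCHAR_CHARACTERSET} =~ m/UTF/;
    $v += 2 if $nls->{NLS_CHARACTERSET}       =~ m/UTF/;
    return $v;
}

my $mask = can_unicode_from_nls({
    NLS_NCHAR_CHARACTERSET => 'AL16UTF16',
    NLS_CHARACTERSET       => 'AL32UTF8',
});   # 3: both character sets are Unicode-based
```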

lib/DBD/Oracle.pm  view on Meta::CPAN


In this section we'll discuss "Perl and Unicode", then "Oracle and
Unicode", and finally "DBD::Oracle and Unicode".

Information about Unicode in general can be found at:
L<http://www.unicode.org/>. It is well worth reading because there are
many misconceptions about Unicode and you may be holding some of them.

=head2 Perl and Unicode

Perl began implementing Unicode with version 5.6, but the implementation
did not mature until version 5.8 and later. If you plan to use Unicode
you are I<strongly> urged to use Perl 5.8.2 or later and to I<carefully> read
the Perl documentation on Unicode:

   perldoc perluniintro    # in Perl 5.8 or later
   perldoc perlunicode

And then read it again.

Perl's internal Unicode format is UTF-8
which corresponds to the Oracle character set called AL32UTF8.

lib/DBD/Oracle.pm  view on Meta::CPAN

words characters beyond the Unicode BMP (Basic Multilingual Plane).

That's because the character set that Oracle calls "UTF8" doesn't
conform to the UTF-8 standard in its handling of surrogate characters.
Technically the encoding that Oracle calls "UTF8" is known as "CESU-8".
Here are a couple of extracts from L<http://www.unicode.org/reports/tr26/>:

  CESU-8 is useful in 8-bit processing environments where binary
  collation with UTF-16 is required. It is designed and recommended
  for use only within products requiring this UTF-16 binary collation
  equivalence. It is not intended nor recommended for open interchange.
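
The practical difference shows up above the BMP: CESU-8 encodes each
UTF-16 surrogate separately (three bytes each, six in total) where
standard UTF-8 uses a single four-byte sequence. A standalone sketch
(the CESU-8 side is hand-rolled for one code point; the core Encode
module supplies the standard UTF-8 side):

```perl
use Encode qw(encode);

my $cp   = 0x10400;                    # a code point above the BMP
my $utf8 = encode('UTF-8', chr $cp);   # 4 bytes: F0 90 90 80

# CESU-8: split into UTF-16 surrogates, then UTF-8-encode each
# surrogate as if it were an ordinary BMP code point.
my $v  = $cp - 0x10000;
my @ss = (0xD800 + ($v >> 10), 0xDC00 + ($v & 0x3FF));
my $cesu = join '', map {
    pack 'C3', 0xE0 | ($_ >> 12), 0x80 | (($_ >> 6) & 0x3F), 0x80 | ($_ & 0x3F);
} @ss;   # 6 bytes: ED A0 81 ED B0 80
```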



DBD-PO


example/11_read_using_Locale-Maketext.pl  view on Meta::CPAN

    # and has 1 utf-8 po file.
    Locale::Maketext::Lexicon->import({
        de      => [
            Gettext => "$path/$table",
        ],
        _decode => 1, # unicode mode
    });
}

use Carp qw(croak);
use Tie::Sub (); # allows writing a subroutine call as a hash fetch



DBD-PassThrough


t/01_simple.t  view on Meta::CPAN

    my $dbh = create_dbh();
    {
        no utf8;
        $dbh->do(q{INSERT INTO member (id, name) VALUES (?, ?)}, {}, 3, 'さいきろん');
    }
    $dbh->{sqlite_unicode} = 1;
    my ($name) = $dbh->selectrow_array(q{SELECT name FROM member WHERE id=3});
    is($name, 'さいきろん');
};
subtest 'can_ok' => sub {
    my $dbh = create_dbh();



DBD-Pg


Changes  view on Meta::CPAN


 - Prevent core dump when checking $dbh->{standard_conforming_strings}
     on older servers.
   [Greg Sabino Mullane]

 - Skip unicode tests if server is set to 'LATIN1'
   [Greg Sabino Mullane]


Version 2.10.5  (released September 16, 2008)



DBD-PgAsync


Changes  view on Meta::CPAN


 - Prevent core dump when checking $dbh->{standard_conforming_strings}
     on older servers.
   [Greg Sabino Mullane]

 - Skip unicode tests if server is set to 'LATIN1'
   [Greg Sabino Mullane]


Version 2.10.5  (released September 16, 2008)



DBD-Redbase


Redbase/DataStream.pm  view on Meta::CPAN

# This function returns a string compatible with Java's Input/OutputStream
# readUTF method
###############################################################################
sub _writeUTF($)
{
	my $unicode_string = utf8(shift());
	while ((my $pos = index($unicode_string, "\000")) > -1)
	{
		$unicode_string = substr($unicode_string, 0, $pos) . chr(192) . chr(128) . substr($unicode_string, $pos + 1);
	}
	return $unicode_string;
}

###############################################################################
# This method writes binary char compatible with Java
###############################################################################

Redbase/DataStream.pm  view on Meta::CPAN

###############################################################################
# This method converts a Java UTF-8 string into the current encoding
###############################################################################
sub _readUTF($)
{
	my $unicode_string = utf8(shift());
	while ((my $pos = index($unicode_string, chr(192) . chr(128))) > -1)
	{
		$unicode_string = substr($unicode_string, 0, $pos) . chr(0) . substr($unicode_string, $pos + 2);
	}
	return $unicode_string->latin1();
}

###############################################################################
# This method reads binary char compatible with Java
###############################################################################
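
The helpers above implement Java's "modified UTF-8" convention, in which
U+0000 is written as the two bytes 0xC0 0x80 so that no embedded NUL
appears in the stream. A standalone sketch of the round trip using plain
substitutions (equivalent in effect to the index/substr loops above,
minus the string-object plumbing):

```perl
# Encode: replace each NUL with the modified-UTF-8 pair 0xC0 0x80.
sub to_java_utf   { (my $s = shift) =~ s/\000/\xC0\x80/g; return $s }

# Decode: reverse the substitution.
sub from_java_utf { (my $s = shift) =~ s/\xC0\x80/\000/g; return $s }

my $raw  = "a\000b";
my $wire = to_java_utf($raw);      # "a\xC0\x80b"
my $back = from_java_utf($wire);   # "a\000b" again
```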



DBD-SQLcipher


lib/DBD/SQLcipher.pm  view on Meta::CPAN

        unless ($flags & (DBD::SQLcipher::OPEN_READONLY() | DBD::SQLcipher::OPEN_READWRITE())) {
            $attr->{sqlite_open_flags} |= DBD::SQLcipher::OPEN_READWRITE() | DBD::SQLcipher::OPEN_CREATE();
        }
    }

    # To avoid unicode and long file name problems on Windows,
    # convert to the shortname if the file (or parent directory) exists.
    if ( $^O =~ /MSWin32/ and $real ne ':memory:' and $real ne '' and $real !~ /^file:/ and !-f $real ) {
        require File::Basename;
        my ($file, $dir, $suffix) = File::Basename::fileparse($real);
        # We are creating a new file.

lib/DBD/SQLcipher.pm  view on Meta::CPAN

=item sqlite_version

Returns the version of the SQLcipher library which B<DBD::SQLcipher> is using,
e.g., "2.8.0". Can only be read.

=item sqlite_unicode

If set to a true value, B<DBD::SQLcipher> will turn the UTF-8 flag on for all
text strings coming out of the database (this feature is currently disabled
for perl < 5.8.5). For more details on the UTF-8 flag see
L<perlunicode>. The default is for the UTF-8 flag to be turned off.

Also note that due to some bizarreness in SQLcipher's type system (see
L<http://www.sqlite.org/datatype3.html>), if you want to retain
blob-style behavior for B<some> columns under C<< $dbh->{sqlite_unicode} = 1
>> (say, to store images in the database), you have to state so
explicitly using the 3-argument form of L<DBI/bind_param> when doing
updates:

  use DBI qw(:sql_types);
  $dbh->{sqlite_unicode} = 1;
  my $sth = $dbh->prepare("INSERT INTO mytable (blobcolumn) VALUES (?)");
  
  # Binary_data will be stored as is.
  $sth->bind_param(1, $binary_data, SQL_BLOB);

Defining the column type as C<BLOB> in the DDL is B<not> sufficient.

This attribute was originally named C<unicode>, and was renamed to
C<sqlite_unicode> for consistency in version 1.26_06. The old C<unicode>
attribute is still accessible but will be deprecated in the near future.

=item sqlite_allow_multiple_statements

If you set this to true, the C<do> method will process multiple

lib/DBD/SQLcipher.pm  view on Meta::CPAN


  SELECT * FROM foo ORDER BY name COLLATE perllocale

=head2 Unicode handling

If the attribute C<< $dbh->{sqlite_unicode} >> is set, strings coming from
the database and passed to the collation function will be properly
tagged with the utf8 flag; but this only works if the
C<sqlite_unicode> attribute is set B<before> the first call to
a perl collation sequence. The recommended way to activate unicode
is to set the parameter at connection time:

  my $dbh = DBI->connect(
      "dbi:SQLcipher:dbname=foo", "", "",
      {
          RaiseError     => 1,
          sqlite_unicode => 1,
      }
  );

=head2 Adding user-defined collations



DBD-SQLeet


lib/DBD/SQLeet.pm  view on Meta::CPAN

    unless ($flags & (DBD::SQLeet::OPEN_READONLY() | DBD::SQLeet::OPEN_READWRITE())) {
      $attr->{sqlite_open_flags} |= DBD::SQLeet::OPEN_READWRITE() | DBD::SQLeet::OPEN_CREATE();
    }
  }

  # To avoid unicode and long file name problems on Windows,
  # convert to the shortname if the file (or parent directory) exists.
  if ($^O =~ /MSWin32/ and $real ne ':memory:' and $real ne '' and $real !~ /^file:/ and !-f $real) {
    require File::Basename;
    my ($file, $dir, $suffix) = File::Basename::fileparse($real);
    # We are creating a new file.



DBD-SQLite-Amalgamation


lib/DBD/SQLite.pm  view on Meta::CPAN

=item sqlite_version

Returns the version of the SQLite library which DBD::SQLite is using,
e.g., "2.8.0". Can only be read.

=item unicode

If set to a true value, DBD::SQLite will turn the UTF-8 flag on for all text
strings coming out of the database. For more details on the UTF-8 flag see
L<perlunicode>. The default is for the UTF-8 flag to be turned off.

Also note that due to some bizarreness in SQLite's type system (see
http://www.sqlite.org/datatype3.html), if you want to retain
blob-style behavior for B<some> columns under C<< $dbh->{unicode} = 1
>> (say, to store images in the database), you have to state so
explicitly using the 3-argument form of L<DBI/bind_param> when doing
updates:

    use DBI qw(:sql_types);
    $dbh->{unicode} = 1;
    my $sth = $dbh->prepare
         ("INSERT INTO mytable (blobcolumn) VALUES (?)");
    $sth->bind_param(1, $binary_data, SQL_BLOB); # binary_data will
    # be stored as-is.

lib/DBD/SQLite.pm  view on Meta::CPAN


  CREATE TABLE foo(txt1 COLLATE perl,
                   txt2 COLLATE perllocale,
                   txt3 COLLATE nocase)

If the attribute C<< $dbh->{unicode} >> is set, strings coming from
the database and passed to the collation function will be properly
tagged with the utf8 flag; but this only works if the
C<unicode> attribute is set B<before> the call to
C<create_collation>. The recommended way to activate unicode
is to set the parameter at connection time:

  my $dbh = DBI->connect("dbi:SQLite:dbname=foo", "", "", 
                          { RaiseError => 1,
                            unicode    => 1} );

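As a concrete sketch, a case-insensitive collation like the C<nocase> one above can be written as a plain Perl comparator. The registration call is guarded with C<eval> because the exact API (C<< func(..., 'create_collation') >> here) varies between DBD::SQLite versions, so treat that part as an assumption:

```perl
#!/usr/bin/env perl
# Minimal sketch of a user-defined collation: a case-insensitive
# comparator in plain Perl. Registration with DBD::SQLite is guarded,
# since the registration API differs across DBD::SQLite versions.
use strict;
use warnings;

my $nocase_cmp = sub { lc($_[0]) cmp lc($_[1]) };

print $nocase_cmp->('ABC', 'abc'), "\n";   # 0  (equal, ignoring case)
print $nocase_cmp->('abc', 'ABD'), "\n";   # -1 (abc sorts before abd)

if ( eval { require DBI; require DBD::SQLite; 1 } ) {
    eval {
        my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                               { RaiseError => 1, PrintError => 0 });
        $dbh->func('perl_nocase', $nocase_cmp, 'create_collation');
        # now usable in SQL: CREATE TABLE t (txt COLLATE perl_nocase)
    };
}
```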

=head2 $dbh->func( $n_opcodes, $handler, 'progress_handler' )

This method registers a handler to be invoked 



DBD-SQLite


lib/DBD/SQLite.pm  view on Meta::CPAN

        unless ($flags & (DBD::SQLite::OPEN_READONLY() | DBD::SQLite::OPEN_READWRITE())) {
            $attr->{sqlite_open_flags} |= DBD::SQLite::OPEN_READWRITE() | DBD::SQLite::OPEN_CREATE();
        }
    }

    # To avoid unicode and long file name problems on Windows,
    # convert to the shortname if the file (or parent directory) exists.
    if ( $^O =~ /MSWin32/ and $real ne ':memory:' and $real ne '' and $real !~ /^file:/ and !-f $real ) {
        require File::Basename;
        my ($file, $dir, $suffix) = File::Basename::fileparse($real);
        # We are creating a new file.

lib/DBD/SQLite.pm  view on Meta::CPAN

B<bad>, but it's been the default for many years, and changing that would
break existing applications.

=back

=item C<sqlite_unicode> or C<unicode> (deprecated)

If truthy, equivalent to setting C<sqlite_string_mode> to
DBD_SQLITE_STRING_MODE_UNICODE_NAIVE; if falsy, equivalent to
DBD_SQLITE_STRING_MODE_PV.

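The truthy/falsy mapping can be made explicit with a small helper (a hypothetical function for illustration only, not part of DBD::SQLite):

```perl
#!/usr/bin/env perl
# Hypothetical helper (not part of DBD::SQLite) spelling out how the
# deprecated sqlite_unicode/unicode flag maps to sqlite_string_mode.
use strict;
use warnings;

sub string_mode_for_unicode_flag {
    my ($flag) = @_;
    return $flag ? 'DBD_SQLITE_STRING_MODE_UNICODE_NAIVE'
                 : 'DBD_SQLITE_STRING_MODE_PV';
}

print string_mode_for_unicode_flag(1), "\n";  # DBD_SQLITE_STRING_MODE_UNICODE_NAIVE
print string_mode_for_unicode_flag(0), "\n";  # DBD_SQLITE_STRING_MODE_PV
```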
lib/DBD/SQLite.pm  view on Meta::CPAN


Depending on the C<< $dbh->{sqlite_string_mode} >> value, strings coming
from the database and passed to the collation function may be decoded as
UTF-8. This only works, though, if the C<sqlite_string_mode> attribute is
set B<before> the first call to a perl collation sequence. The recommended
way to activate unicode is to set C<sqlite_string_mode> at connection time:

  my $dbh = DBI->connect(
      "dbi:SQLite:dbname=foo", "", "",
      {
          RaiseError         => 1,



DBD-SQLite2


Changes  view on Meta::CPAN

    - Fixed major crash bug affecting Mac OS X
    - Removed test.pl from distribution
    - Upgraded to sqlite 2.7.6

0.23
    - Fixed unicode tests

0.22
    - Merge with sqlite 2.7.4

0.21

Changes  view on Meta::CPAN

0.15
    - Upgraded to SQLite 2.4.5

0.14
    - Added NoUTF8Flag option, so that returned strings don't get flagged
      with SvUTF8_on() - needed when you're storing non-unicode in the database

0.13
    - Upgraded to SQLite 2.4.3
    - Added script to download sqlite core library when it's upgraded



DBD-Solid


Const/Const.pm  view on Meta::CPAN

# Items to export into caller's namespace by default. Note: do not export
# names by default without a very good reason. Use EXPORT_OK instead.
# Do not simply export all your public functions/methods/constants.
@DBD::Solid::Const::EXPORT = ();

# I added the three unicode types. --mms
%DBD::Solid::Const::EXPORT_TAGS = 
    ( 
    sql_types => [ qw(SQL_CHAR
		       SQL_NUMERIC
		       SQL_DECIMAL



DBD-Sybase


Sybase.pm  view on Meta::CPAN


     $dbh = DBI->connect("dbi:Sybase:charset=iso_1",
			 $user, $passwd);

The default charset used depends on the locale that the application runs
in. If you wish to interact with unicode variables (see syb_enable_utf8, below) then
you should set charset=utf8. Note however that this means that Sybase will expect all
data sent to it for char/varchar columns to be encoded in utf8 (e.g. sending iso8859-1 characters
like e-grave, etc).

=item language

Sybase.pm  view on Meta::CPAN


=item syb_enable_utf8 (bool)

If this attribute is set then DBD::Sybase will convert UNIVARCHAR, UNICHAR,
and UNITEXT data to Perl's internal utf-8 encoding when they are
retrieved. Updating a unicode column will cause Sybase to convert any incoming
data from utf-8 to its internal utf-16 encoding.

This feature requires OpenClient 15.x to work.

Default: off

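The utf-8 to utf-16 conversion described above can be illustrated with core C<Encode>, independent of any Sybase connection (an illustration only; DBD::Sybase and the server perform this on the wire):

```perl
#!/usr/bin/env perl
# Core-Perl illustration of the utf-8 <-> utf-16 representations that
# Sybase converts between for UNIVARCHAR/UNICHAR/UNITEXT columns.
use strict;
use warnings;
use Encode qw(encode);

my $char = "\x{00E8}";   # e-grave, as in the charset example above
printf "utf-8:  %s\n", unpack 'H*', encode('UTF-8',    $char);  # c3a8
printf "utf-16: %s\n", unpack 'H*', encode('UTF-16LE', $char);  # e800
```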


DBD-Teradata


t/test.pl  view on Meta::CPAN


my $ctsth = $dbh->prepare( 'CREATE TABLE alltypetst, NO FALLBACK (
col1 integer,
col2 smallint,
col3 byteint,
col4 char(20) character set unicode,
col5 varchar(100) character set unicode,
col6 float,
col7 decimal(2,1),
col8 decimal(4,2),
col9 decimal(8,4),
col10 decimal(14,5),

t/test.pl  view on Meta::CPAN


my $cmsth = $dbh->prepare(
'CREATE MACRO dbitest(col1 integer,
col2 smallint,
col3 byteint,
col4 char(20) character set unicode,
col5 varchar(100) character set unicode,
col6 float,
col7 decimal(2,1),
col8 decimal(4,2),
col9 decimal(8,4),
col10 decimal(14,5),



DBD-TimesTen


ChangeLog  view on Meta::CPAN


	* Added RPM spec file.

2006-11-26 [r542]  Chad Wagner <chad.wagner@gmail.com>

	* Added 50unicode.t to MANIFEST.

2006-11-26 [r541]  Chad Wagner <chad.wagner@gmail.com>

	* Changed SQL_WVARCHAR and SQL_WCHAR to use SQL_C_BINARY data type.
	* Added unicode test cases.

2006-11-26 [r540]  Chad Wagner <chad.wagner@gmail.com>

	* Added a few more test cases.



DBD-Unify


lib/DBD/Unify.pm  view on Meta::CPAN

 # man DBI for explanation of each method (there's more than listed here)

 $dbh = DBI->connect ("DBI:Unify:[\$dbname]", "", $schema, {
                         AutoCommit    => 0,
                         ChopBlanks    => 1,
                         uni_unicode   => 0,
                         uni_verbose   => 0,
                         uni_scanlevel => 2,
                         });
 $dbh = DBI->connect_cached (...);                   # NYT
 $dbh->do ($statement);

lib/DBD/Unify.pm  view on Meta::CPAN

sub private_attribute_info {
    return {
	dbd_verbose	=> undef,

	uni_verbose	=> undef,
	uni_unicode	=> undef,
	};
    } # private_attribute_info

sub ping {
    my $dbh = shift;

lib/DBD/Unify.pm  view on Meta::CPAN


By default, this driver is completely Unicode unaware: what you put into
the database will be returned to you without the encoding applied.

To enable automatic decoding of UTF-8 when fetching from the database,
set the C<uni_unicode> attribute to a true value for the database handle
(statement handles will inherit) or to the statement handle.

  $dbh->{uni_unicode} = 1;

When CHAR or TEXT fields are retrieved and the content fetched is valid
UTF-8, the value will be marked as such.

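The "valid UTF-8 gets marked" behaviour can be sketched with core C<utf8::decode>, which succeeds (and sets the flag) only when the octets form valid UTF-8 (a standalone illustration, not DBD::Unify internals):

```perl
#!/usr/bin/env perl
# Standalone sketch: utf8::decode() flags a scalar as UTF-8 only when
# its octets are valid UTF-8, mirroring the uni_unicode fetch behaviour.
use strict;
use warnings;

my $good = "caf\xc3\xa9";                       # valid UTF-8 octets
print utf8::decode($good) ? "ok\n" : "raw\n";   # ok -- now 4 chars, flagged

my $bad = "caf\xff";                            # 0xFF is never valid UTF-8
print utf8::decode($bad) ? "ok\n" : "raw\n";    # raw -- left untouched
```
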
=item re-connect



DBD-cego


dbdimp.c  view on Meta::CPAN

        DBIc_set(imp_dbh, DBIcf_AutoCommit, SvTRUE(valuesv));
        return TRUE;
    }
    else if (strncmp(key, "NoUTF8Flag", 10) == 0) 
    {
        warn("NoUTF8Flag is deprecated due to perl unicode weirdness\n");
        if (SvTRUE(valuesv)) {
            imp_dbh->no_utf8_flag = TRUE;
        }
        else {
            imp_dbh->no_utf8_flag = FALSE;



DBD-mysql


Changes  view on Meta::CPAN

* just remove unnecessary "my" - https://github.com/CaptTofu/DBD-mysql/pull/34 - Shoichi Kaji
* eval $ExtUtils::MakeMaker::VERSION requires for old ExtUtils::MakeMaker - https://github.com/CaptTofu/DBD-mysql/pull/32 - Daisuke Murase
* Updated documentation to reflect that bugs will be reported at rt.cpan.org
* Updated version
* Chased tail finding issue with -1 being converted to max unsigned int in PS mode
* Various typos and other unicode fixes dsteinbrunner <dsteinbrunner@gmail.com>
* Fixed permissions on files.
* Clarified documentation and bumped version for next release

2012-08-28 Patrick Galbraith et open source community <patg at patg dot net> (4.022)
* Fixes for Win32 from Rom Hoelz (https://github.com/hoelzro)



DBD-mysqlx


dbdimp.c  view on Meta::CPAN

/* Check if a collation is using UTF-8
 *
 * To get the IDs:
 * SELECT ID FROM information_schema.COLLATIONS WHERE CHARACTER_SET_NAME LIKE
 * 'utf8%' ORDER BY IS_DEFAULT DESC, COLLATION_NAME LIKE '%\_general\_%' DESC,
 * COLLATION_NAME LIKE '%\_bin%' DESC, COLLATION_NAME LIKE '%\_unicode\_%' DESC,
 * COLLATION_NAME LIKE 'utf8mb4_0900\_%' DESC
 *
 * Note that default and generic collations are moved to the front of the list
 */
bool dbd_mysqlx_is_utf8_collation(uint16_t collation) {



DBI


DBI.pm  view on Meta::CPAN


  ALL - turn on all DBI and driver flags (not recommended)
  SQL - trace SQL statements executed
        (not yet implemented in DBI but implemented in some DBDs)
  CON - trace connection process
  ENC - trace encoding (unicode translations etc)
        (not yet implemented in DBI but implemented in some DBDs)
  DBD - trace only DBD messages
        (not implemented by all DBDs yet)
  TXN - trace transactions
        (not implemented in all DBDs yet)



DBICx-TestDatabase


lib/DBICx/TestDatabase.pm  view on Meta::CPAN

        (undef, $filename) = tempfile;
        push @TMPFILES, $filename;
    }

    my $schema = $schema_class->connect( "DBI:SQLite:$filename", '', '',
        { sqlite_unicode => 1 } )
        or die "failed to connect to DBI:SQLite:$filename ($schema_class)";

    $schema->deploy unless $opts->{nodeploy};
    return $schema;
}



DBIx-Admin-TableInfo


lib/DBIx/Admin/TableInfo.pm  view on Meta::CPAN

	use Text::Table::Manifold ':constants';

	# ---------------------

	my($attr)              = {};
	$$attr{sqlite_unicode} = 1 if ($ENV{DBI_DSN} =~ /SQLite/i);
	my($dbh)               = DBI -> connect($ENV{DBI_DSN}, $ENV{DBI_USER}, $ENV{DBI_PASS}, $attr);
	my($vendor_name)       = uc $dbh -> get_info(17);
	my($info)              = DBIx::Admin::TableInfo -> new(dbh => $dbh) -> info;

	$dbh -> do('pragma foreign_keys = on') if ($ENV{DBI_DSN} =~ /SQLite/i);

lib/DBIx/Admin/TableInfo.pm  view on Meta::CPAN

			align_left,
			align_left,
			align_left,
			align_left,
		],
		format => format_text_unicodebox_table,
		headers => \@header,
		join   => "\n",
	);
	my(%type) =
	(



DBIx-CSVDumper


t/01_csv.t  view on Meta::CPAN

my $dir = tempdir(CLEANUP => 1);
my (undef, $db) = tempfile(DIR => $dir, SUFFIX => '.db');

my $dbh = DBI->connect(
    "dbi:SQLite:dbname=$db" , '', '', {
         sqlite_unicode => 1,
     },
);

$dbh->do('CREATE TABLE item (
    id INTEGER PRIMARY KEY AUTOINCREMENT,



DBIx-Class-CustomPrefetch


Debian_CPANTS.txt  view on Meta::CPAN

"libtree-dagnode-perl", "Tree-DAG_Node", "1.06", "0", "0"
"libtree-redblack-perl", "Tree-RedBlack", "0.5", "0", "0"
"libtree-simple-perl", "Tree-Simple", "1.18", "0", "0"
"libtree-simple-visitorfactory-perl", "Tree-Simple-VisitorFactory", "0.10", "0", "0"
"libtry-tiny-perl", "Try-Tiny", "0.02", "0", "0"
"libunicode-map-perl", "Unicode-Map", "0.112", "1", "0"
"libunicode-map8-perl-dfsg", "Unicode-Map8", "0.12", "0", "0"
"libunicode-maputf8-perl", "Unicode-MapUTF8", "1.11", "0", "0"
"libunicode-string-perl", "Unicode-String", "2.09", "0", "0"
"libuniversal-can-perl", "UNIVERSAL-can", "1.15", "0", "0"
"libuniversal-isa-perl", "UNIVERSAL-isa", "1.02", "0", "0"
"libuniversal-moniker-perl", "UNIVERSAL-moniker", "0.08", "0", "0"
"libuniversal-require-perl", "UNIVERSAL-require", "0.13", "0", "0"
"libunix-syslog-perl", "Unix-Syslog", "1.1", "0", "0"



DBIx-Class


lib/DBIx/Class.pm  view on Meta::CPAN

# as the distbuild-time injected author list is utf8 encoded
# Without this pod2text output is less than ideal
#
# A bit regarding selection/compatibility:
# Before 5.8.7 UTF-8 was == utf8, both behaving like the (lax) utf8 we know today
# Then https://www.nntp.perl.org/group/perl.unicode/2004/12/msg2705.html happened
# Encode way way before 5.8.0 supported UTF-8: https://metacpan.org/source/DANKOGAI/Encode-1.00/lib/Encode/Supported.pod#L44
# so it is safe for the oldest toolchains.
# Additionally we inject all the utf8 programmatically and test its well-formedness
# so all is well
#
