[DOCUMENTATION]
Revised the query notification example and documentation.
Added a link to a better Query Notification article.
1.49_3 2014-05-01
[CHANGE IN BEHAVIOUR]
As warned years ago, this release removes the odbc_old_unicode attribute.
If you have a good reason to use it speak up now before the next non-development
release.
[BUG FIXES]
Fix rt89255: Fails to create test table for tests using PostgreSQL odbc driver.
Change test suite to fallback on PRECISION if COLUMN_SIZE is not found.
[ENHANCEMENTS]
Added FAQ entry of maximum number of allowed parameters.
1.48 2014-03-03
[MISCELLANEOUS]
Manifest has wrong filename for 90_trace_flags.t
Forgot to remove the warning from ODBC.pm saying this is a development
release with unicode changes when I released 1.47.
1.47 2014-02-19
Full release of the 1.46 development releases.
[MISCELLANEOUS]
Just some tidying up of dbdimp.c - shouldn't make a difference to anyone.
Further changes to this Changes file to conform to the CPAN::Changes spec.
NOTE: the changes.cpanhq.com site does not yet support "unknown" for
dates.
1.46_2 2013-12-17
[BUG FIXES]
When built with unicode support and odbc_old_unicode is not enabled
columns reported as SQL_LONGVARCHAR were not by default bound as
SQL_WCHAR and hence were not returned correctly unless the bind was
overridden.
[MISCELLANEOUS]
Added test 90_trace_flag.t
1.46_1 2013-11-16
[CHANGE IN BEHAVIOUR]
As warned in release 1.45, the binding of unicode parameters to
char/varchar columns has changed significantly. If you don't attempt
to insert unicode into char/varchar columns or if you only inserted
unicode into nchar/nvarchar columns you should see no difference.
From this release, unicode data inserted into
char/varchar/longvarchar columns is bound as SQL_WCHAR and not
whatever the driver reports the parameter as (which is mostly
SQL_CHAR).
Previously if DBD::ODBC received an error or (SQL_SUCCESS_WITH_INFO)
from an ODBC API call and then the driver refused to return the
error state/text DBD::ODBC would issue its own error saying "Unable
to fetch information about the error" and state IM008. That state
was wrong and has been changed to HY000.
Some drivers cannot support catalogs and/or schema names in
SQLTables. Recent changes set the schema/catalog name to the empty
string (good reasons below) which causes "optional feature not
implemented" from MS Access (which does not support schemas - even
for a simple ping (which uses SQLTables)). Now we call SQLGetInfo for
SQL_CATALOG_NAME and SQL_SCHEMA_USAGE on connect to ascertain support,
and this modifies the SQLTables call.
[MISCELLANEOUS]
Added test 45_unicode_varchar.t for MS SQL Server only so far.
1.45 2013-10-28
[CHANGE IN BEHAVIOUR]
There is no intentional change in behaviour in this release but I'm
adding a warning that the next development release is highly likely
to contain some significant unicode changes in behaviour to fix some
bugs which have been around for quite a long time now.
[BUG FIXES]
If an SQLExecute ODBC API call returned SQL_NO_DATA DBD::ODBC was
still calling SQLError (which was a waste of time).
Since 1.44_1 odbc_out_connect_string stopped returning anything.
[MISCELLANEOUS]
[CHANGE IN BEHAVIOUR]
As I warned literally years ago DBD::ODBC's private function
DescribeCol has been removed. You can use DBI's statement attributes
like NAME, PRECISION etc, instead. All test code has been changed to
remove calls to DescribeCol and GetTypeInfo.
[MISCELLANEOUS]
New example sqlserver_supplementary_chrs.pl added which shows that
in MS SQL Server 2012 you can now store unicode characters
over 0xFFFF (ones which are surrogate pairs).
More documentation for odbc_out_connect_string.
1.40_2 2012-09-06
[BUG FIXES]
Fixed rt 78838 - bind_param does not correctly stringify blessed
objects when connected to MS SQL Server
[OTHER]
* Reduced usage of D_imp_xxx to avoid calls to dbih_getcom2. See
thread on dbi-dev at
http://www.mail-archive.com/dbi-dev@perl.org/msg06675.html
* Changed the 70execute_array.t test to run it twice, once using
DBI's methods and once using the native one in DBD::ODBC.
* Made the 2 unicode tests work with DB2 ODBC driver.
1.35 2012-03-06
Full release of the 1.34 development releases
1.34_7 2012-03-02
[BUG FIXES]
* Fixed more compiler errors highlighted by a smoker using MS Visual
[BUG FIXES]
* remove debugging printf which output "HERE" in some rare cases.
rt 72534 - thanks John Deighan for spotting this.
* The test 70execute_array.t could fail due to warning being output
if the driver does not support Multiple Active Statements.
[ENHANCEMENTS]
* Use SQLGetTypeInfoW on unicode builds.
1.32_3 2011-11-15
[BUG FIXES]
* Fix bug in utf16_copy which was not adding a trailing NUL but I'm
not sure this affected anyone until I changed table_info this
release.
[ENHANCEMENTS]
* DBD::ODBC now allows unicode catalog/schema/table parameters to be
passed to table_info. Of course they will only reliably work with
a supporting Unicode ODBC driver.
1.32_2 2011-10-22
[ENHANCEMENTS]
* Added new odbc_driver_complete attribute allowing the ODBC Driver
Manager and ODBC Driver to throw dialogues for incomplete
connection strings or expired passwords etc.
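As a hedged sketch (the DSN name here is hypothetical), odbc_driver_complete
is simply passed in the connect attributes; the driver manager may then pop
up a dialogue for any missing connection details:

```perl
use strict;
use warnings;
use DBI;

# Hypothetical DSN; with odbc_driver_complete set, the ODBC Driver
# Manager/Driver may prompt for missing details or expired passwords.
my $dbh = DBI->connect(
    'dbi:ODBC:DSN=my_incomplete_dsn', undef, undef,
    { odbc_driver_complete => 1, RaiseError => 1 },
) or die $DBI::errstr;
```

This requires a driver manager that supports dialogues (mostly Windows).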
[OTHER]
* added more examples
[DOCUMENTATION]
* new FAQ entries
* added note saying you cannot pass unicode schema/table/column
names to metadata calls like table_info/column_info currently.
1.32_1 2011-06-24
[BUG FIXES]
* I omitted rt_68720.t from the 1.31 distribution which leads
to a warning as it is mentioned in the MANIFEST.
[OTHER]
* Changed line endings in README.af and README.unicode to be unix
line endings and native eol-style in subversion.
* Minor changes to Makefile.PL to save the opensuse guys patching.
* Added unicode_sql.pl and unicode_params.pl examples
1.31 2011-06-21
[BUG FIXES]
Recently introduced test sql_type_cast.t cannot work with DBI less
than 1.611.
Minor change to Makefile.PL to avoid an uninitialised value warning on
$ENV{LD_LIBRARY_PATH} when it is not set.
[CHANGE IN BEHAVIOUR]
* DBD::ODBC used to quietly rollback any transactions when
disconnect was called and AutoCommit was off. This can mask a
problem and leads to different behaviour when disconnect is called
vs not calling disconnect (where you get a warning). This release
issues a warning if you call disconnect while a transaction is in
progress; the transaction is still rolled back.
* DBD::ODBC used to bind char/varchar/longvarchar columns as SQL_CHAR
meaning that in the unicode build of DBD::ODBC the bound column
data would be returned 8bit in whatever character-set (codepage) the
data was in, in the database. This was inconvenient and arguably a
mistake. Columns like nchar/nvarchar etc were bound as SQL_WCHAR and
returned as Unicode. This release changes the behaviour in a unicode
build of DBD::ODBC to bind all char columns as SQL_WCHAR. This may
inconvenience a few people who expected 8bit chars back, knew the
char set and decoded them (sorry). See odbc_old_unicode to return
to old behaviour.
[ENHANCEMENTS]
* added -w option to Makefile.PL to add "-Wall" to CCFLAGS and
-fno-strict-aliasing so I can find warnings.
* Cope with broken ODBC drivers that describe a parameter as SQL
type 0.
* Fixed missing SQL_MAX_TABLE_NAME_LEN definition from test.
* Fixed problem with some drivers which batch "insert;select" where
SQLMoreResults is not required and an extra describe is done.
* Fixed "select 1" in 02simple.t for Firebird ODBC Driver.
* disconnect call added to 70execute_array.t was in the wrong place.
* In non-unicode mode we bind strings as SQL_CHAR but the driver may
have described them as SQL_WCHAR and we were not doing ChopBlanks
processing in that case.
[REQUIREMENTS]
* Now needs Test::Simple 0.90.
[OTHER]
* Added dml_counts.pl example
you've not set LD_LIBRARY_PATH correctly.
[ENHANCEMENTS]
* Added Perl and ExtUtils::MakeMaker version output to build process.
* Added support for DBI's new trace flags ENC, CON, TXN and
DBD. From DBI 1.617 you should be able to use: DBI_TRACE=DBD to
ONLY get DBD::ODBC tracing without DBI tracing ENC and CON DBI
flags are synonymous with DBD::ODBC's odbcconnection and
odbcunicode trace flags which you can still use for now.
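A short sketch of both ways to enable the flags described above (assumes
DBI 1.617+ for the DBD flag; handle name hypothetical):

```perl
# From the shell, trace only DBD::ODBC without general DBI tracing:
#   DBI_TRACE=DBD perl myscript.pl
#
# Or at runtime on a handle; 'SQL' is a standard DBI trace flag and
# 'odbcconnection' is DBD::ODBC's own flag:
$dbh->trace($dbh->parse_trace_flags('SQL|odbcconnection'));
```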
[OTHER]
* From now on I'm changing the way the Changes file is written as
per article at
http://blog.urth.org/2011/01/changes-file-how-and-how-not-to.html
* Some broken drivers (freeTDS in this case) can return SQL_ERROR
  from an ODBC API function and then SQLError does not return the
  error details. In this case a generic error is now set saying an
  error occurred but the driver provided no details.
Fixed panic: sv_setpvn called with negative strlen at
blib/lib/DBD/ODBC.pm line 107.
Added rt_61370.t for rt 61370.
Removed last remaining sprintf calls and replaced with snprintf.
Changed the point at which DBD::ODBC switches from VARCHAR to
LONGVARCHAR or WVARCHAR to WLONGVARCHAR when SQLDescribeParam fails. It
was 4000 and is now 2000 for unicode builds. Works around a daft issue
in the MS SQL Server driver which will not allow 'x' x 2001 converted to
wide characters to be inserted into a varchar(8000).
Minor change to Makefile.PL to print out found libs for iODBC and
unixODBC.
Added some FAQs for problems with iODBC and a recent bug in DBI.
Added FAQ on my_snprintf problem.
Localised more variable declarations
1.25 2010-09-22
Official release of 1.25 combining all the changes in the 1.24_x
development releases.
1.24_6 2010-09-16
rt 61370 - default XML type parameters in SQL Server to SQL_WCHAR so
they accept unicode strings.
1.24_5 2010-09-15
Fixed missing SvSETMAGIC on a bound scalar which was causing length() to
return the wrong result - see http://www.perlmonks.org/?node_id=860211
and a big thank you to Perl Monks and in particular ikegami.
Changed bind_col so it actually pays attention to the TYPE attribute as
you could not override the bind type of a bound column before.
recommendations for changes to dbdimp.c.
Added change to Makefile.PL provided by Shawn Zong to make
Windows/Cygwin work again.
Minor change to Makefile.PL to output env vars to help in debugging
peoples build failures.
Added odbc_utf8_on attribute to dbh and sth handles to mark all strings
coming from the database as utf8. This is for Aster (based on
PostgreSQL) which returns all strings as UTF-8 encoded unicode. Thanks
to Noel Burton-Krahn.
1.23 2009-09-11
Only a readme change and version bumped to 1.23. This is a full release
of all the 1.22_x development releases.
1.22_3 2009-08-19
Fix skip count in rt_38977.t and typo in ok call.
Fixed bug in 02simple.t test which is supposed to check you have at
least one data source defined. Unfortunately, it was checking you had
more than 1 data source defined.
rt_null_varchar had wrong skip count meaning non-sql-server drivers or
sql server drivers too old skipped 2 tests more than were planned.
1.22 2009-06-10
Fixed bug which led to "Use of uninitialized value in subroutine entry"
warnings when writing a NULL into a NVARCHAR with a unicode-enabled
DBD::ODBC. Thanks to Jirka Novak and Pavel Richter who found, reported
and patched a fix.
Fixed serious bug in unicode_helper.c for utf16_len which I'm ashamed
to say was using an unsigned short to return the length. This meant you
could never have UTF16 strings of more than ~64K without risking
serious problems. The DBD::ODBC test code actually got a
*** glibc detected *** /usr/bin/perl: double free or corruption
(out): 0x406dd008 ***
If you use a UNICODE enabled DBD::ODBC (the default on Windows) and
unicode strings larger than 64K you should definitely upgrade now.
1.21_1 2009-06-02
Fixed bug referred to in rt 46597 reported by taioba and identified by
Tim Bunce. In calls to bind_param for a given statement handle, if you
specify a SQL type to bind as, this should be "sticky" for that
parameter. That means if you do:
$sth->bind_param(1, $param, DBI::SQL_LONGVARCHAR)
the type should persist for later executes, but DBD::ODBC was still
using the size returned by SQLDescribeParam. Thanks to Brian
Becker for finding, diagnosing and fixing this issue.
Added FAQ entry about SQL Server and calling procedures with named
parameters out of order.
Added test_results.txt containing some supplied make test results.
1.20 2009-04-20
Fix bug in handling of SQL_WLONGVARCHAR when not built with unicode
support. The column was not identified as a long column and hence the
size of the column was not restricted to LongReadLen. Can cause
DBD::ODBC to attempt to allocate a huge amount of memory.
Minor changes to Makefile.PL to help diagnose how it decided which
driver manager to use and where it was found.
Offer suggestion to debian-based systems when some of unixODBC is found
(the bin part) but the development part is missing.
begin and end, and handle failures from prepare where there were two ENDs.
In ODBCTEST.pm when no acceptable test column type is found output all
the found types and BAIL_OUT the entire test.
Skip rt_39841.t unless actually using the SQL Server ODBC driver or
native client.
Handle drivers which return 0 for SQL_MAX_COLUMN_NAME_LEN.
Double the buffer size used for column names if built with unicode.
1.19 2009-04-02
Some minor diagnostic output during tests when running against freeTDS
to show we know of issues in freeTDS.
Fixed issue in 20SqlServer.t where the connection string got set with
two consecutive semi-colons. Most drivers don't mind this but freeTDS
ignores everything after that point in the connection string.
Change some if tests to Test::More->is tests in 02simple.t.
Fix "invalid precision" error during tests with the new ACEODBC.DLL MS
Access driver. Same workaround applied for the old MS Access driver
(ODBCJT32.DLL) some time ago.
Fix out of memory error during tests against the new MS Access driver
(ACEODBC.DLL). The problem appears to be that the new Access driver
reports ridiculously large parameter sizes for "select ?" queries and
there are some of these in the unicode round trip test.
Fixed minor typo in Makefile.PL - diagnostic message mentioned "ODBC
HOME" instead of ODBCHOME.
12blob.t test somehow got lost from MANIFEST - replaced. Also changed
algorithm to get a long char type column as some MS Access drivers only
show SQL_WLONGVARCHAR type in unicode.
Added diagnostic output to 02simple.t to show the state of
odbc_has_unicode.
1.18_4 2009-03-13
A mistake in the MANIFEST led to the rt_43384.t test being omitted.
Brian Becker reported the tables PERL_DBD_39897 and PERL_DBD_TEST are
left behind after testing. I've fixed the former but not the latter yet.
Yet another variation on the changes for rt 43384. If the parameter is
bound specifically as SQL_VARCHAR, you got invalid precision
which leads to HY104, "Invalid precision value" in the rt_39841.t test.
1.18_1 2009-03-06
Fixed bug reported by Toni Salomäki leading to a describe failed error
when calling procedures with no results. Test cases added to
20SqlServer.t.
Fixed bug rt 43384 reported by Øystein Torget where you cannot insert
more than 127 characters into a Microsoft Access text(255) column when
DBD::ODBC is built in unicode mode.
1.18 2009-01-16
Major release of all the 1.17 development releases below.
1.17_3 2008-12-19
Reinstated the answer in the FAQ for "Why do I get invalid value for
cast specification" which had got lost - thanks to EvanCarroll in
rt41663.
Added support for ParamTypes (see DBI spec) and notes in DBD::ODBC pod.
1.16_4 2008-09-12
Small change to Makefile.PL to work around problem in darwin 8 with
iODBC which leads to "Symbol not found: _SQLGetPrivateProfileString"
errors.
Added new [n]varXXX(max) column type tests to 20SqlServer.t.
Fixed support for SQL_WCHAR and SQL_WVARCHAR support in non-unicode
build. These types had ended up only being included for unicode builds.
More changes to ODBC pod to 1) encourage people to use CPAN::Reporter,
2) rework contributing section, 3) mention DBIx::Log4perl 4) add a BUGS
section 5) add a "ODBC Support in ODBC Drivers" section etc.
Changed default fallback parameter bind type to SQL_WVARCHAR for unicode
builds. This affects ODBC drivers which don't have
SQLDescribeParam. Problem reported by Vasili Galka with MS Access when
reading unicode data from one table and inserting it into another table.
The read data was unicode but it was inserted as SQL_CHARs because
SQLDescribeParam does not exist in MS Access so we fallback to either a
default bind type (which was SQL_VARCHAR) or whatever was specified in
the bind_param call.
Fixed bug in 20SqlServer.t when DBI_DSN is defined including "DSN=".
1.16_3 2008-09-03
Changed Makefile.PL to add "-framework CoreFoundation" to linker line on
OSX/darwin.
Disallow building with iODBC if it is a unicode build.
More tracing for odbcconnect flag.
Fix bug in out connection string handling that attempted to use an out
connection string when SQLDriverConnect[W] fails.
Fixed yet more test count problems due to Test::NoWarnings not being
installed.
Skip private_attribute_info tests if DBI < 1.54
support SQLDescribeParam or do support SQLDescribeParam but cannot
describe all parameters e.g., MS SQL Server ODBC driver cannot describe
"select ?, LEN(?)". If you specify the bound parameter type in your
calls to bind_param and run them to an ODBC driver which supports
SQLDescribeParam you may want to check carefully and probably remove the
parameter type from the bind_param method call.
Added rt_38977.t test to test suite to test varchar(max) and
varbinary(max) columns in SQL Server.
Moved most of README.unicode to ODBC.pm pod.
Added workaround for problem with the Microsoft SQL Server driver when
attempting to insert more than 400K into a varbinary(max) or
varchar(max) column. Thanks to Julian Lishev for finding the problem and
identifying 2 possible solutions.
1.16_2 2008-09-02
Removed szDummyBuffer field from imp_fbh_st and code in dbd_describe
which clears it. It was never used so this was a waste of time.
Changed ODBC 2.0 calls such as SQLAllocStmt and their respective free
calls to the ODBC 3.0 SQLAllocHandle and SQLFreeHandle equivalents.
Rewrote ColAttributes code to understand string and numeric attributes
rather than trying to guess by what the driver returns. If you see any
change in behaviour in ColAttributes calls you'll have to let me know as
there were a number of undocumented workarounds for drivers.
Unicode build of DBD::ODBC now supports:
column names
The retrieval of unicode column names
SQL strings
Unicode in prepare strings (but not unicode parameter names) e.g.,
select unicode_column from unicode_table
is fine but
select * from table where column = :unicode_param_name
is not so stick to ascii parameter names if you use named
parameters.
Unicode SQL strings passed to the do method are supported.
SQL strings passed to DBD::ODBC when the odbc_exec_direct attribute
is set will not be passed as unicode strings - this is a
limitation of the odbc_exec_direct attribute.
connection strings
True unicode connection string support will require a new version of
DBI (post 1.607).
Note that even though unicode connection strings are not
supported currently DBD::ODBC has had to be changed to call
SQLDriverConnectW/SQLConnectW to indicate to the driver manager its
intention to use some of the ODBC wide APIs. This only affects
DBD::ODBC when built for unicode.
odbcunicode trace flag
There is a new odbcunicode trace flag to enable unicode-specific
tracing.
Skipped 40Unicode.t test if the ODBC driver is Oracle's ODBC as I cannot
make it work.
Changes internally to use sv_utf8_decode (where defined) instead of
setting utf8 flag.
Fix problems in the test when Test::NoWarnings is not installed.
and tracing is enabled.
Fixed issue with TRACESTATUS change in 20SqlServer.t tests 28, 31, 32
and 33 leading to those tests failing when testing with SQL Server 2005
or Express.
Many compiler warnings fixed - especially for incompatible types.
Add provisional Unicode support - thanks to Alexander Foken. This change
is very experimental (especially on UNIX). Please see ODBC.pm
documentation. Also see README.unicode and README.af. New database
attribute odbc_has_unicode to test if DBD::ODBC was built with UNICODE
support. New tests for Unicode. New requirement for Perl 5.8.1 if
Unicode support required. New -[no]u argument to Makefile.PL. New
warning in Makefile.PL if Unicode support built for UNIX.
Fix use of uninitialised var in Makefile.PL.
Fix use of scalar with no effect in Makefile.PL.
Added warning to Makefile.PL about building/running with LANG using
UTF8.
ConvertUTF.c
* Limitations on Rights to Redistribute This Code
*
* Unicode, Inc. hereby grants the right to freely use the information
* supplied in this file in the creation of products supporting the
* Unicode Standard, and to make copies of this file in any form
* for internal or external distribution as long as this notice
* remains attached.
*/
/*
* NOTE: The original version of this code can be found at
* http://www.unicode.org/Public/PROGRAMS/CVTUTF/
* This version was slightly modified to allow ConvertUTF8toUTF16 and
* ConvertUTF16toUTF8 to calculate the bytes required without writing to
* the target buffer.
*
*/
/* ---------------------------------------------------------------------
Conversions between UTF32, UTF-16, and UTF-8. Source code file.
Author: Mark E. Davis, 1994.
Rev History: Rick McGowan, fixes & updates May 2001.
ConvertUTF.h
* Limitations on Rights to Redistribute This Code
*
* Unicode, Inc. hereby grants the right to freely use the information
* supplied in this file in the creation of products supporting the
* Unicode Standard, and to make copies of this file in any form
* for internal or external distribution as long as this notice
* remains attached.
*/
/*
* NOTE: The original version of this code can be found at
* http://www.unicode.org/Public/PROGRAMS/CVTUTF/
* This version was slightly modified to allow ConvertUTF8toUTF16 and
* ConvertUTF16toUTF8 to calculate the bytes required without writing to
* the target buffer.
*
*/
/* ---------------------------------------------------------------------
Conversions between UTF32, UTF-16, and UTF-8. Header file.
size of 10 it takes 2 batches to execute all the parameters and
freeTDS will return 1 row affected for each batch hence returns 2
instead of 15.
See rt 75687.
=head2 Why are my pound signs (£), dashes and quotes (and other characters) returned garbled
The first question in response is why do you think what you got back was incorrect? Did you print the data to a terminal and it looks wrong, or perhaps sent it to a browser in a piece of CGI or even wrote it to a file? The mantra you need to stick to...
The classic case I keep seeing I've repeated here because it illustrates the most common problem. Database is MS SQL Server, data is viewed in the management console and looks good but when retrieved via DBD::ODBC it looks wrong. The most common caus...
Bear in mind that in a unicode build of DBD::ODBC (the default on Windows) all string data is retrieved as unicode. When you output your unicode data anywhere you need to encode it with Encode::encode e.g.,
binmode(STDOUT, ":encoding(cp1252)");
Just because you think you are working in a single codepage does not mean the data you retrieve will be returned as single byte characters in that codepage. DBD::ODBC (in a unicode build) retrieves all string data as wide (unicode) characters and mos...
If you are absolutely sure you are using a single code page and don't want to be bothered with unicode, look up the odbc_old_unicode attribute but better still, rebuild DBD::ODBC without unicode support using:
perl Makefile.PL -nou
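A minimal sketch of the output advice above (table and column names are
hypothetical; assumes $dbh is a connected handle and the column comes back
as a Perl unicode string in a unicode build):

```perl
use strict;
use warnings;
use DBI;
use Encode qw(encode);

my $dbh = DBI->connect('dbi:ODBC:DSN=mydsn', 'user', 'password',
                       { RaiseError => 1 });
my ($name) = $dbh->selectrow_array(q/select name from mytable/);

# Either set an encoding layer on the filehandle once:
binmode STDOUT, ':encoding(UTF-8)';
print $name, "\n";

# or encode explicitly at each output point instead:
# print encode('cp1252', $name), "\n";
```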
=head2 Does DBD::ODBC support the new table valued parameters?
Not yet. Patches welcome.
=head2 Why do I get "COUNT field incorrect or syntax error (SQL-07002)"?
In general this error is telling you the number of parameters bound or
examples/testproc3.pl
examples/testproc4.pl
examples/testspmulti.pl
examples/testundef.pl
examples/testundef2.pl
examples/testundef3.pl
examples/testver.pl
examples/testxml.pl
examples/thrtest.pl
examples/timetest.pl
examples/unicode_params.pl
examples/unicode_sql.pl
examples/sqlserver_supplementary_chrs.pl
examples/perl-DBD-ODBC.spec
ODBC.h
ODBC.pm
ODBC.xs
README
README.adabas
README.af
README.hpux
README.informix
README.RH9
README.unicode
README.windows
README.osx
README.sqlserver
t/01base.t
t/02simple.t
t/03dbatt.t
t/05meth.t
t/07bind.t
t/08bind2.t
t/09multi.t
t/10handler.t
t/12blob.t
t/20SqlServer.t
t/30Oracle.t
t/40UnicodeRoundTrip.t
t/41Unicode.t
t/45_unicode_varchar.t
t/50_odbc_utf8_on.t
t/70execute_array_dbi.t
t/70execute_array_native.t
t/80_odbc_diags.t
t/82_table_info.t
t/87_odbc_lob_read.t
t/90_trace_flags.t
t/ODBCTEST.pm
t/ExecuteArray.pm
t/UChelp.pm
t/rt_61370.t
t/rt_62033.t
t/rt_63550.t
t/rt_78838.t
t/rt_79190.t
t/rt_79397.t
t/rt_81911.t
t/rt_101579.t
t/sql_type_cast.t
t/odbc_describe_parameter.t
unicode_helper.c
unicode_helper.h
if_you_are_taking_over_this_code.txt
Makefile.PL
}
}
};
}
if ($eumm >= 5.48) {
$opts{PREREQ_PRINT} = 1;
}
my $opt_g = 0; # build debug
my $opt_o = q{}; # odbc home overrides ODBCHOME
my $opt_u = undef; # build unicode version
my $opt_e = undef; # easysoft
my $opt_x = undef; # prefer unixODBC over iODBC
my $opt_w = undef; # enable -Wall (gcc only)
$opt_u = 1 if $ENV{DBD_ODBC_UNICODE};
my @options = ("g!" => \$opt_g,
"o=s" => \$opt_o,
"u!" => \$opt_u,
"e!" => \$opt_e,
Makefile.PL
# some specific checks for incompatibilities
if (defined($opt_u) && $opt_u) {
if (-e File::Spec->catfile($odbcincdir, 'sql.h')) {
my $fh;
open($fh, q/</, "$odbchome/include/sql.h") or
die "Failed to open $odbchome/include/sql.h - $!";
my @lines = <$fh>;
my @found = grep {/iODBC driver manager/i} @lines;
if (scalar(@found)) {
die "\n\nDBD::ODBC does not support unicode with iODBC and this looks like iODBC. The iODBC driver manager expects wide characters to be 4 bytes long and DBD::ODBC wants wide characters to be UTF16.\nEither\no) Rerun without the -u sw...
}
close $fh or warn "Failed to close sql.h - $!";
}
}
if ($myodbc eq 'Microsoft ODBC') {
print "\nBuilding for Microsoft under Cygwin\n";
$opts{LIBS} = "-L/usr/lib/w32api -lodbc32";
print {$sqlhfh} "#include <windows.h>\n";
print {$sqlhfh} "#include <sql.h>\n";
};
our @EXPORT_DIAGS = qw(SQL_DIAG_CURSOR_ROW_COUNT SQL_DIAG_DYNAMIC_FUNCTION SQL_DIAG_DYNAMIC_FUNCTION_CODE SQL_DIAG_NUMBER SQL_DIAG_RETURNCODE SQL_DIAG_ROW_COUNT SQL_DIAG_CLASS_ORIGIN SQL_DIAG_COLUMN_NUMBER SQL_DIAG_CONNECTION_NAME SQL_DIAG_MESSAG...
our @EXPORT_TAF = qw(OCI_FO_END OCI_FO_ABORT OCI_FO_REAUTH OCI_FO_BEGIN OCI_FO_ERROR OCI_FO_RETRY OCI_FO_NONE OCI_FO_SESSION OCI_FO_SELECT OCI_FO_TXNAL);
our @EXPORT_OK = (@EXPORT_DIAGS, @EXPORT_TAF);
our %EXPORT_TAGS = (
diags => \@EXPORT_DIAGS,
taf => \@EXPORT_TAF);
sub parse_trace_flag {
my ($class, $name) = @_;
return 0x02_00_00_00 if $name eq 'odbcunicode';
return 0x04_00_00_00 if $name eq 'odbcconnection';
return DBI::parse_trace_flag($class, $name);
}
sub parse_trace_flags {
my ($class, $flags) = @_;
return DBI::parse_trace_flags($class, $flags);
}
my $methods_are_installed = 0;
odbc_default_bind_type => undef, # sth and dbh
odbc_force_bind_type => undef, # sth and dbh
odbc_force_rebind => undef, # sth and dbh
odbc_async_exec => undef, # sth and dbh
odbc_exec_direct => undef,
odbc_describe_parameters => undef,
odbc_SQL_ROWSET_SIZE => undef,
odbc_SQL_DRIVER_ODBC_VER => undef,
odbc_cursortype => undef,
odbc_query_timeout => undef, # sth and dbh
odbc_has_unicode => undef,
odbc_out_connect_string => undef,
odbc_version => undef,
odbc_err_handler => undef,
odbc_putdata_start => undef, # sth and dbh
odbc_column_display_size => undef, # sth and dbh
odbc_utf8_on => undef, # sth and dbh
odbc_driver_complete => undef,
odbc_batch_size => undef,
odbc_array_operations => undef, # sth and dbh
odbc_taf_callback => undef,
<a href="https://travis-ci.org/perl5-dbi/DBD-ODBC"><img src="https://travis-ci.org/perl5-dbi/DBD-ODBC.svg?branch=master"></a>
<a href="http://badge.fury.io/pl/DBD-ODBC"><img src="https://badge.fury.io/pl/DBD-ODBC.svg" alt="CPAN version" height="18"></a>
=head1 VERSION
This documentation refers to DBD::ODBC version 1.61.
=head1 WARNING
This version of DBD::ODBC contains a significant fix to unicode when
inserting into CHAR/VARCHAR columns and it is a change in behaviour
from 1.45. The change B<only> applies to unicode builds of DBD::ODBC
(the default on Windows but you can build it for unicode on unix too)
and char/varchar columns and not nchar/nvarchar columns.
Prior to this release of DBD::ODBC when you are using the unicode
build of DBD::ODBC and inserted data into a CHAR/VARCHAR columns using
parameters DBD::ODBC did this:
1 if you set odbc_describe_parameters to 0, (thus preventing DBD::ODBC
from calling SQLDescribeParam) parameters for CHAR/VARCHAR columns
were bound as SQL_WVARCHAR or SQL_WLONGVARCHAR (depending on the
length of the parameter).
2 if you set odbc_force_bind_type then all parameters are bound as you
specified.
4 if the driver does not support SQLDescribeParam or SQLDescribeParam
was called and failed then the bind type defaulted as in 1.
5 if none of the above (and I'd guess that is the normal case for most
people) then DBD::ODBC calls SQLDescribeParam to find the parameter
type. This usually returns SQL_CHAR or SQL_VARCHAR for CHAR/VARCHAR
columns unsurprisingly. The parameter was then bound as SQL_VARCHAR.
Items 1 to 4 still apply. 5 now has a different behaviour. In this
release, DBD::ODBC now looks at your bound data first before using the
type returned by SQLDescribeParam. If your data looks like unicode
(i.e., SvUTF8() is true) it now binds the parameter as SQL_WVARCHAR.
What might this mean to you?
If you had Perl scalars that were bound to CHAR/VARCHAR columns in an
insert/update/delete and those scalars contained unicode, DBD::ODBC
would actually pass the individual octets in your scalar not
characters. For instance, if you had the Perl scalar "\x{20ac}" (the
Euro unicode character) and you bound it to a CHAR/VARCHAR, DBD::ODBC
would pass 0xe2, 0x82, 0xac as separate characters because those bytes
were Perl's UTF-8 encoding of a euro. These would probably be
interpreted by your database engine as 3 characters in its current
codepage. If you queried your database to find the length of the data
inserted you'd probably get back 3, not 1.
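The octet arithmetic above can be checked in plain Perl: the Euro sign is
one character, but its UTF-8 encoding is the three octets 0xe2 0x82 0xac:

```perl
use strict;
use warnings;

my $euro   = "\x{20ac}";   # one character, the Euro sign
my $octets = $euro;
utf8::encode($octets);     # in-place: now holds Perl's UTF-8 octets

printf "chars=%d octets=%d bytes=%s\n",
    length($euro),
    length($octets),
    join(' ', map { sprintf '0x%02x', ord } split //, $octets);
# prints: chars=1 octets=3 bytes=0xe2 0x82 0xac
```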
However, when DBD::ODBC read that column back in a select
statement, it would bind the column as SQL_WCHAR and you'd get back 3
characters with the utf8 flag on (what those characters were depends
on how your database or driver translates code page characters to wide
characters).
What should happen now is that if your bound parameters are unicode,
DBD::ODBC will bind them as wide characters (unicode) and your driver
or database will attempt to convert them into the code page it is
using. This means so long as your database can store the data you are
inserting, when you read it back you should get what you inserted.
=head1 SYNOPSIS
use DBI;
$dbh = DBI->connect('dbi:ODBC:DSN=mydsn', 'user', 'password');
=head3 odbc_default_bind_type
This value defaults to 0.
Older versions of DBD::ODBC assumed that the parameter binding type
was 12 (C<SQL_VARCHAR>). Newer versions always attempt to call
C<SQLDescribeParam> to find the parameter types but if
C<SQLDescribeParam> is unavailable DBD::ODBC falls back to a default
bind type. The internal default bind type is C<SQL_VARCHAR> (for
non-unicode build) and C<SQL_WVARCHAR> or C<SQL_VARCHAR> (for a
unicode build depending on whether the parameter is unicode or
not). If you set C<odbc_default_bind_type> to a value other than 0 you
override the internal default.
B<N.B> If you call the C<bind_param> method with a SQL type this
overrides everything else above.
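For example (a sketch; table and column names are hypothetical), an
explicit type passed to bind_param wins over both the internal default
and odbc_default_bind_type:

```perl
use DBI qw(:sql_types);   # imports SQL_WVARCHAR, SQL_VARCHAR, etc.

my $sth = $dbh->prepare('insert into mytable (mycol) values (?)');
# Bind explicitly as a wide type; this overrides any default bind type.
$sth->bind_param(1, "\x{20ac}100", SQL_WVARCHAR);
$sth->execute;
```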
=head3 odbc_force_bind_type
This value defaults to 0.
Older versions of DBD::ODBC assumed the parameter binding type was 12
(C<SQL_VARCHAR>) and newer versions always attempt to call
C<SQLDescribeParam> to find the parameter types. If your driver
supports C<SQLDescribeParam> and it succeeds it may still fail to
describe the parameters accurately (MS SQL Server sometimes does this
with some SQL like I<select myfunc(?) where 1 = 1>). Setting
C<odbc_force_bind_type> to C<SQL_VARCHAR> will force DBD::ODBC to bind
all the parameters as C<SQL_VARCHAR> and ignore SQLDescribeParam.
Bear in mind that if you are inserting unicode data you probably want
to use C<SQL_WVARCHAR>/C<SQL_WCHAR>/C<SQL_WLONGVARCHAR> and not
C<SQL_VARCHAR>.
As this attribute was created to work around buggy ODBC Drivers which
support SQLDescribeParam but describe the parameters incorrectly you
are probably better specifying the bind type on the C<bind_param> call
on a per statement level rather than blindly setting
C<odbc_force_bind_type> across a whole connection.
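For example, rather than setting C<odbc_force_bind_type> for the whole connection, you can override the type for one parameter in one statement (a sketch assuming C<$dbh> is a connected handle and the SQL suits your driver):

  use DBI qw(:sql_types);
  my $sth = $dbh->prepare('select myfunc(?) where 1 = 1');
  $sth->bind_param(1, $value, SQL_VARCHAR);
  $sth->execute;
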
B<N.B> If you call the C<bind_param> method with a SQL type this
overrides everything else above.
=head3 odbc_column_display_size
The default for odbc_column_display_size is 2001 because this value was
hard-coded in DBD::ODBC until 1.17_3.
=head3 odbc_utf8_on
Set this flag to treat all strings returned from the ODBC driver
(except columns described as SQL_BINARY or SQL_TIMESTAMP and its
variations) as UTF-8 encoded. Some ODBC drivers (like Aster and maybe
PostgreSQL) return UTF-8 encoded data but do not support the SQLxxxW
unicode API. Enabling this flag will cause DBD::ODBC to treat driver
returned data as UTF-8 encoded and it will be marked as such in Perl.
Do not confuse this with DBD::ODBC's unicode support. The
C<odbc_utf8_on> attribute only applies to non-unicode enabled builds
of DBD::ODBC.
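For example, a sketch of enabling it at connect time on a non-unicode build (the DSN is a placeholder):

  my $dbh = DBI->connect('dbi:ODBC:DSN=mydsn', $user, $pass,
                         { odbc_utf8_on => 1 });
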
=head3 odbc_describe_parameters
Defaults to on. When set this allows DBD::ODBC to call SQLDescribeParam
(if the driver supports it) to retrieve information about any
parameters.
When off/false DBD::ODBC will not call SQLDescribeParam and defaults
to binding parameters as SQL_CHAR/SQL_WCHAR depending on the build
type and whether your data is unicode or not.
You do not have to disable odbc_describe_parameters just because your
driver does not support SQLDescribeParam as DBD::ODBC will work this
out at the start via SQLGetFunctions.
B<Note>: disabling odbc_describe_parameters when your driver does support
SQLDescribeParam may prevent DBD::ODBC binding parameters for some
column types properly.
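For example, a sketch of disabling parameter description for a whole connection (the DSN is a placeholder):

  my $dbh = DBI->connect('dbi:ODBC:DSN=mydsn', $user, $pass,
                         { odbc_describe_parameters => 0 });
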
You can also set this attribute in the attributes passed to the
prepare method.
object will disappear before you can use it.
There are currently two ways to get this:
$dbh->prepare($sql, { odbc_exec_direct => 1});
and
$dbh->{odbc_exec_direct} = 1;
B<NOTE:> Even if you build DBD::ODBC with unicode support you can
still not pass unicode strings to the prepare method if you also set
odbc_exec_direct. This is a restriction in this attribute which is
unavoidable.
=head3 odbc_SQL_DRIVER_ODBC_VER
This value is also available via get_info() but is captured here as
well. It may be removed in the future as it was only used for
debugging purposes.
=head3 odbc_cursortype
$sth->execute;
my @row;
while (@row = $sth->fetchrow_array) {
$sth2->execute($row[0]);
}
See F<t/20SqlServer.t> for an example.
For SQL Server 2005 and later see "Multiple Active Statements (MAS)" in the DBD::ODBC::FAQ instead of using this attribute.
=head3 odbc_has_unicode
A read-only attribute signifying whether DBD::ODBC was built with the
C macro WITH_UNICODE or not. A value of 1 indicates DBD::ODBC was built
with WITH_UNICODE else the value returned is 0.
Building WITH_UNICODE affects columns and parameters which are
SQL_C_WCHAR, SQL_WCHAR, SQL_WVARCHAR and SQL_WLONGVARCHAR, SQL
strings, the connect method and a lot more. See L</Unicode>.
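For example, you can check at runtime how DBD::ODBC was built:

  if ($dbh->{odbc_has_unicode}) {
      print "DBD::ODBC was built with unicode support\n";
  }
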
When odbc_has_unicode is 1, DBD::ODBC will:
=over
=item bind all string columns as wide characters (SQL_Wxxx)
This means that UNICODE data stored in these columns will be returned
to Perl correctly as unicode (i.e., encoded in UTF-8 and the UTF-8 flag set).
=item bind parameters the database declares as wide characters or unicode parameters as SQL_Wxxx
Parameters bound where the database declares the parameter as being a
wide character, or where the parameter data is unicode, or where the
parameter type is explicitly set to a wide type (e.g., SQL_Wxxx) are bound
as wide characters in the ODBC API and DBD::ODBC encodes the perl parameters
as UTF-16 before passing them to the driver.
=item SQL
SQL passed to the C<prepare> or C<do> methods which has the UTF-8 flag
set will be converted to UTF-16 before being passed to the ODBC APIs
C<SQLPrepare> or C<SQLExecDirect>.
=item connection strings
Connection strings are converted to UTF-16 before being passed to the ODBC API
C<SQLDriverConnectW>. This happens irrespective of whether the UTF-8
flag is set on the perl connect strings because unixODBC requires an
application to call SQLDriverConnectW to indicate it will be calling
the wide ODBC APIs.
=back
NOTE: You will need at least Perl 5.8.1 to use UNICODE with DBD::ODBC.
NOTE: Binding of unicode output parameters is coded but untested.
NOTE: When building DBD::ODBC on Windows ($^O eq 'MSWin32') the
WITH_UNICODE macro is automatically added. To disable specify -nou as
an argument to Makefile.PL (e.g. C<perl Makefile.PL -nou>). On non-Windows
platforms the WITH_UNICODE macro is B<not> enabled by default and to enable
you need to specify the -u argument to Makefile.PL. Please bear in mind
that some ODBC drivers do not support SQL_Wxxx columns or parameters.
You can also specify that you want UNICODE support by setting the
C<DBD_ODBC_UNICODE> environment variable prior to install:
export DBD_ODBC_UNICODE=1
cpanm DBD::ODBC
UNICODE support in ODBC Drivers differs considerably. Please read the
README.unicode file for further details.
=head3 odbc_out_connect_string
After calling the connect method this will be the ODBC driver's
out connection string - see documentation on SQLDriverConnect.
B<NOTE>: this value is only set if DBD::ODBC calls the
SQLDriverConnect ODBC API (and not SQLConnect) which only happens if a) DSN or
DRIVER is specified in the connection string or b) SQLConnect fails.
=head3 odbc_lob_read
  $chrs_or_bytes_read = $sth->odbc_lob_read($column_no, \$lob, $length, \%attr);
NOTE: This is currently an experimental method and may change in the
future e.g., it may support automatic concatenation of the lob
parts onto the end of the C<$lob> with the addition of an extra flag
or destination offset as in DBI's undocumented blob_read.
The type the lob is retrieved as may be overridden in C<%attr> using
C<TYPE =E<gt> sql_type>. C<%attr> is optional and if omitted defaults
to SQL_C_BINARY for binary columns and SQL_C_CHAR/SQL_C_WCHAR for
other column types depending on whether DBD::ODBC is built with
unicode support. C<$chrs_or_bytes_read> will be the bytes read when
the column types SQL_C_CHAR or SQL_C_BINARY are used and characters
read if the column type is SQL_C_WCHAR.
When built with unicode support C<$length> specifies the amount of
buffer space to be used when retrieving the lob data but as it is
returned as SQLWCHAR characters this means you at most retrieve
C<$length/2> characters. When those retrieved characters are encoded
in UTF-8 for Perl, the C<$lob> scalar may need to be larger than
C<$length> so DBD::ODBC grows it appropriately.
You can retrieve a lob in chunks like this:
$sth->bind_col($column, undef, {TreatAsLOB=>1});
while(my $retrieved = $sth->odbc_lob_read($column, \my $data, $length)) {
    print "retrieved=$retrieved lob_data=$data\n";
}
=head2 Tracing
DBD::ODBC now supports the parse_trace_flag and parse_trace_flags
methods introduced in DBI 1.42 (see DBI for a full description). As
of DBI 1.604, the only trace flag defined which is relevant to
DBD::ODBC is 'SQL' which DBD::ODBC supports by outputting the SQL
strings (after modification) passed to the prepare and do methods.
From DBI 1.617 DBI also defines ENC (encoding), CON (connection), TXN
(transaction) and DBD (DBD only) trace flags. DBI's ENC and CON trace
flags are synonymous with DBD::ODBC's odbcunicode and odbcconnection
trace flags though I may remove the DBD::ODBC ones in the
future. DBI's DBD trace flag allows output of only DBD::ODBC trace
messages without DBI's trace messages.
Currently DBD::ODBC supports two private trace flags: the
'odbcunicode' flag traces some unicode operations and the
'odbcconnection' flag traces the connect process.
To enable tracing of particular flags you use:
$h->trace($h->parse_trace_flags('SQL|odbcconnection'));
$h->trace($h->parse_trace_flags('1|odbcunicode'));
In the first case 'SQL' and 'odbcconnection' tracing is enabled on
$h. In the second case trace level 1 is set and 'odbcunicode' tracing
is enabled.
If you want to enable a DBD::ODBC private trace flag before connecting
you need to do something like:
use DBD::ODBC;
DBI->trace(DBD::ODBC->parse_trace_flag('odbcconnection'));
or
use DBD::ODBC;
DBI->trace(DBD::ODBC->parse_trace_flags('odbcconnection|odbcunicode'));
or
DBI_TRACE=odbcconnection|odbcunicode perl myscript.pl
From DBI 1.617 you can output only DBD::ODBC trace messages using
DBI_TRACE=DBD perl myscript.pl
DBD::ODBC outputs tracing at levels 3 and above (as levels 1 and 2 are
reserved for DBI).
For comprehensive tracing of DBI method calls without all the DBI
internals see L<DBIx::Log4perl>.
=item unixODBC
unixODBC mimics the Windows ODBC API precisely, meaning the wide
character versions expect and return 2-byte characters in
UCS-2 or UTF-16.
unixODBC will happily recognise ODBC drivers which only have the ANSI
versions of the ODBC API and those that have the wide versions
too.
unixODBC will allow an ANSI application to work with a unicode
ODBC driver and vice versa (although in the latter case you obviously
cannot actually use unicode).
unixODBC does not prevent you sending UTF-8 in the ANSI versions of
the ODBC APIs but whether that is understood by your ODBC driver is
another matter.
unixODBC differs from the Microsoft ODBC driver manager in only one
way with regard to unicode support: it avoids unnecessary translations
between single-byte and double-byte characters when an ANSI
application is using a unicode-aware ODBC driver, by requiring unicode
applications to signal their intent by calling SQLDriverConnectW
first. On Windows, the ODBC driver manager always uses the wide
versions of the ODBC API in ODBC drivers which provide them,
regardless of what the application really needs, and this results in a
lot of unnecessary character translations when you have an ANSI
application and a unicode ODBC driver.
=item iODBC
The wide character versions expect and return wchar_t types.
=back
DBD::ODBC has gone with unixODBC so you cannot use iODBC with a
unicode build of DBD::ODBC. However, some ODBC drivers support UTF-8
(although how they do this with SQLGetData reliably I don't know)
and so you should be able to use those with DBD::ODBC not built for
unicode.
=head3 Enabling and Disabling Unicode support
On Windows Unicode support is enabled by default and to disable it
you will need to specify C<-nou> to F<Makefile.PL> to get back to the
original behavior of DBD::ODBC before any Unicode support was added.
e.g.,
perl Makefile.PL -nou
On non-Windows platforms Unicode support is disabled by default. To
enable it specify C<-u> to F<Makefile.PL> when you configure DBD::ODBC.
e.g.,
perl Makefile.PL -u
=head3 Unicode - What is supported?
As of version 1.17 DBD::ODBC has the following unicode support:
=over
=item SQL (introduced in 1.16_2)
Unicode strings in calls to the C<prepare> and C<do> methods are
supported so long as the C<odbc_execdirect> attribute is not used.
=item unicode connection strings (introduced in 1.16_2)
Unicode connection strings are supported but you will need a DBI
post 1.607 for that.
=item column names
Unicode column names are returned.
=item bound columns (introduced in 1.15)
Columns the driver describes as a wide type will be converted to wide
characters and bound as such.
=item metadata calls like table_info, column_info
As of DBD::ODBC 1.32_3 metadata calls accept Unicode strings.
=back
Since version 1.16_4, the default parameter bind type is SQL_WVARCHAR
for unicode builds of DBD::ODBC. This only affects ODBC drivers which
do not support SQLDescribeParam and only then if you do not
specifically set a SQL type on the bind_param method call.
The above Unicode support has been tested with the SQL Server, Oracle
9.2+ and Postgres drivers on Windows and various Easysoft ODBC drivers
on UNIX.
=head3 Unicode - What is not supported?
You cannot use unicode parameter names e.g.,
select * from table where column = :unicode_param_name
You cannot use unicode strings in calls to prepare if you set the
odbc_execdirect attribute.
You cannot use the iODBC driver manager with DBD::ODBC built for
unicode.
=head3 Unicode - Caveats
For Unicode support on any platform in Perl you will need at least
Perl 5.8.1 - sorry but this is the way it is with Perl.
The Unicode support in DBD::ODBC expects a WCHAR to be 2 bytes (as it
is on Windows and as the ODBC specification suggests it is). Until
ODBC specifies any other Unicode support it is not envisioned this
will change. On UNIX there are a few different ODBC driver
managers. I have only tested the unixODBC driver manager
(http://www.unixodbc.org) with Unicode support and it was built with
defaults which set WCHAR as 2 bytes.
I believe that the iODBC driver manager expects wide characters to be
wchar_t types (which are usually 4 bytes) and hence DBD::ODBC will not
work with iODBC when built for unicode.
The ODBC Driver must expect Unicode data specified in SQLBindParameter
and SQLBindCol to be UTF-16 in local endianness. Similarly, in calls to
SQLPrepareW, SQLDescribeColW and SQLDriverConnectW.
You should be aware that once Unicode support is enabled it affects a
number of DBI methods (some of which you might not expect). For
instance, when listing tables, columns etc some drivers
(e.g. Microsoft SQL Server) will report the column types as wide types
even if the strings actually fit in 7-bit ASCII. As a result, there is
some additional overhead which can only be avoided by
disabling Unicode support.
I am at present unsure if ChopBlanks processing on Unicode strings is
working correctly on UNIX. If nothing else the construct L' ' in
dbdimp.c might not work with all UNIX compilers. Reports of issues and
patches welcome.
=head3 Unicode implementation in DBD::ODBC
DBD::ODBC uses the wide character versions of the ODBC API and the
SQL_WCHAR ODBC type to support unicode in Perl.
Wide characters returned from the ODBC driver will be converted to
UTF-8 and the perl scalars will have the utf8 flag set (by using
sv_utf8_decode).
B<IMPORTANT>
Perl scalars which are UTF-8 and are sent through the ODBC API will be
converted to UTF-16 and passed to the ODBC wide APIs or signalled as
SQL_WCHARs (e.g., in the case of bound columns). Retrieved data which
the driver indicates is wide will be converted from UTF-16 to
UTF-8. Note that ODBC itself does not really support
code points above 0xFFFF (if you know better I'd like to hear from
you). However, because DBD::ODBC uses UTF-16 encoding you can still
insert Unicode characters above 0xFFFF into your database and retrieve
them back correctly but they may not be treated as a single
Unicode character in your database e.g., a "select length(a_column)
from table" with a single Unicode character above 0xFFFF may
return 2 and not 1 so you cannot use database functions on that
data like upper/lower/length etc but you can at least save the data in
your database and get it back.
When built for unicode, DBD::ODBC will always call SQLDriverConnectW
(and not SQLDriverConnect) even if a) your connection string is not
unicode b) you have not got a DBI later than 1.607, because unixODBC
requires SQLDriverConnectW to be called if you want to call other
unicode ODBC APIs later. As a result, if you build for unicode and
pass ASCII strings to the connect method they will be converted to
UTF-16 and passed to SQLDriverConnectW. This should make no real
difference to perl applications not using unicode connection strings.
You will need a DBI later than 1.607 to support unicode connection
strings because until post 1.607 there was no way for DBI to pass
unicode strings to the DBD.
=head3 Unicode and Oracle
You have to set the environment variables C<NLS_NCHAR=AL32UTF8> and
C<NLS_LANG=AMERICAN_AMERICA.AL32UTF8> (or any other language setting
ending with C<.AL32UTF8>) before loading DBD::ODBC to make Oracle
return Unicode data. (See also "Oracle and Unicode" in the POD of
DBD::Oracle.)
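For example, a sketch of setting these in the shell before running your script:

  export NLS_NCHAR=AL32UTF8
  export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
  perl myscript.pl
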
On Windows, using the Oracle ODBC Driver you have to enable the
B<Force SQL_WCHAR Support> driver option in your DSN configuration to
make it return Unicode data.
=head3 Unicode and Easysoft ODBC Drivers
We have tested the Easysoft SQL Server, Oracle and ODBC Bridge drivers
with DBD::ODBC built for Unicode. All work as described without
modification, except that for the Oracle driver you will need to set
your NLS_LANG as mentioned above.
=head3 Unicode and other ODBC drivers
If you have a unicode-enabled ODBC driver and it works with DBD::ODBC
let me know and I will include it here.
=head2 ODBC Support in ODBC Drivers
=head3 Drivers without SQLDescribeParam
Some drivers do not support the C<SQLDescribeParam> ODBC API (e.g.,
Microsoft Access, FreeTDS).
DBD::ODBC uses the C<SQLDescribeParam> API when parameters are bound
to your SQL to find the types of the parameters. If the ODBC driver
does not support C<SQLDescribeParam>, DBD::ODBC assumes the parameters
are C<SQL_VARCHAR> or C<SQL_WVARCHAR> types (depending on whether
DBD::ODBC is built for unicode or not and whether your parameter is
unicode data). In any case, if you bind a parameter and specify a SQL
type this overrides any type DBD::ODBC would choose.
For ODBC drivers which do not support C<SQLDescribeParam> the default
behavior in DBD::ODBC may not be what you want. To change the default
parameter bind type set L</odbc_default_bind_type>. If, after that you
have some SQL where you need to vary the parameter types used add the
SQL type to the end of the C<bind_param> method.
use DBI qw(:sql_types);
$h = DBI->connect;
$s = $h->prepare(q/insert into mytable (a_column) values(?)/);
$s->bind_param(1, $value, SQL_VARCHAR);
$s->execute;
64-bit ODBC
L<http://www.easysoft.com/developer/interfaces/odbc/64-bit.html>
How do I insert Unicode supplementary characters into SQL Server from Perl?
L<http://www.easysoft.com/support/kb/kb01043.html>
Some Common Unicode Problems and Solutions using Perl DBD::ODBC and MS SQL Server
L<http://www.easysoft.com/developer/languages/perl/sql-server-unicode.html>
and a version possibly kept more up to date:
L<https://github.com/mjegh/dbd_odbc_sql_server_unicode/blob/master/common_problems.pod>
How do I use SQL Server Query Notifications from Linux and UNIX?
L<http://www.easysoft.com/support/kb/kb01069.html>
=head2 Frequently Asked Questions
Frequently asked questions are now in L<DBD::ODBC::FAQ>. Run
C<perldoc DBD::ODBC::FAQ> to view them.
(Yes, this list should be longer.)
=head2 Known Problems
Perl 5.8.1 or newer is required. Perl before 5.8.0 lacked proper Unicode
support and Perl 5.8.0 lacks some auxiliary functions for Unicode.
Unicode is supported only for SQL statement parameters and data returned by the
fetch methods; SQL statements themselves are still treated as native encoding.
If you need a unicode constant in an SQL statement, you have to pass it as a
parameter or use SQL functions to convert your constant from native encoding to
Unicode.
All data passed to the patched DBD::ODBC for C<SQL_C_WCHAR>, C<SQL_WCHAR>,
C<SQL_WVARCHAR>, and C<SQL_WLONGVARCHAR> is treated as Unicode, even if it is
not Unicode. F<unicode_helper.c> should check the UTF8 flag of the scalar and
pass a value different from C<CP_UTF8> as first argument to
C<MultiByteToWideChar()>. The problem is to know what encoding is used for the
data in the scalar.
Binding of unicode output parameters is untested (I don't need them) and likely
to fail.
The patched DBD::ODBC may fail to compile on non-Win32 platforms. It needs a
header file named F<wchar.h> defining at least the following:
=over 4
=item A C<WCHAR> data type capable of storing a single Unicode character.
Microsoft uses C<typedef wchar_t WCHAR> in F<wchar.h>, and C<typedef unsigned
short WCHAR> should work equally well. Perl strings with the UTF8 flag set are
converted to 16 bit Unicode using the Windows API function
C<MultiByteToWideChar()>, return values reported as C<SQL_C_WCHAR>,
C<SQL_WCHAR>, C<SQL_WVARCHAR>, or C<SQL_WLONGVARCHAR> are converted back to 8
bit Unicode (UTF-8) using the Windows API function C<WideCharToMultiByte()> and
have Perl's UTF8 flag set except for empty strings.
=head2 Tests
This patch adds two new tests, F<t/40UnicodeRoundTrip.t> and F<t/41Unicode.t>.
Test 40 checks that Unicode strings can be entered as bound parameters and are
returned unmodified. Test 41 creates a table, writes and reads unicode data
using various bind variants.
When using Oracle, the empty string test in F<t/40UnicodeRoundTrip.t> is skipped
because Oracle converts empty strings to NULL in this situation.
I had to add C<SQL_WCHAR>, C<SQL_WVARCHAR>, C<SQL_WLONGVARCHAR> to
F<t/ODBCTEST.pm>, because Oracle in the setup described above returns Unicode
more often than expected.
I added F<t/UChelp.pm>, that exports two utility functions for unicode string
tests:
=over 4
=item C<dumpstr($)>
Dumps a string, indicating its Unicode flag, length and all characters in ASCII
notation.
=item C<utf_eq_ok($$$)>
=back
=head2 See also
=over 4
=item * Microsoft ODBC documentation
=item * Microsoft API documentation
=item * http://www.unicode.org/
=item * DBI
=item * DBD::ODBC
=item * DBD::Oracle
=item * DBD::Pg
=back
README.unicode
This document is historical now. If all you want to know is what Unicode
is supported by DBD::ODBC read the DBD::ODBC pod.
The original Unicode support was written by Alexander Foken and posted
as a patch. You can find Alexander's original README document in
README.af (some of this is directly taken from that document). Around
DBD::ODBC 1.14 Alexander's original patch was adapted to include
optional Unicode support for UNIX and introduced into DBD::ODBC.
In DBD::ODBC 1.17 the Unicode support in DBD::ODBC on both Windows and
Unix was enhanced considerably to support unicode SQL strings, column
names and connection strings (although see caveat below).
Most of the unicode documentation that was here has now moved to the main
DBD::ODBC pod.
Attributions
============
A substantial part of the code to translate between UTF8 and UTF16
came from the unicode web site (Unicode Inc) and was authored by Mark
E. Davis and updated by various (see ConvertUTF.c). This code was
modified in minor ways to allow ConvertUTF16toUTF8 and
ConvertUTF8toUTF16 to accept NULL target pointers and return the
number of bytes required in the target string for the conversion.
A substantial part (most in fact) of the original Unicode support in
DBD::ODBC for wide bound columns and parameters was written by
Alexander Foken and simply changed to support UNIX as well as Windows
by me.
README.windows
C1083: Cannot open include file: 'excpt.h': No such file or directory
This was solved by:
set PASTHRU_INC=-I"c:\Program Files\Microsoft SDK\include" -I"c:\Program Files\Microsoft Visual Studio\VC98\include"
which led to:
ODBC.obj : error LNK2001: unresolved external symbol __fltused
dbdimp.obj : error LNK2001: unresolved external symbol __fltused
unicode_helper.obj : error LNK2001: unresolved external symbol __fltused
ODBC.obj : error LNK2001: unresolved external symbol __imp__sprintf
dbdimp.obj : error LNK2001: unresolved external symbol __imp__sprintf
ODBC.obj : error LNK2001: unresolved external symbol _strcpy
dbdimp.obj : error LNK2001: unresolved external symbol _strcpy
dbdimp.obj : error LNK2001: unresolved external symbol __imp__strncmp
dbdimp.obj : error LNK2001: unresolved external symbol __imp__toupper
dbdimp.obj : error LNK2001: unresolved external symbol __imp__strncpy
dbdimp.obj : error LNK2001: unresolved external symbol __imp__strstr
dbdimp.obj : error LNK2001: unresolved external symbol _memcpy
dbdimp.obj : error LNK2001: unresolved external symbol _memset
dbdimp.obj : error LNK2001: unresolved external symbol _strlen
unicode_helper.obj : error LNK2001: unresolved external symbol _strlen
dbdimp.obj : error LNK2001: unresolved external symbol _strcmp
dbdimp.obj : error LNK2001: unresolved external symbol _strcat
dbdimp.obj : error LNK2001: unresolved external symbol __imp__atoi
dbdimp.obj : error LNK2001: unresolved external symbol _abs
dbdimp.obj : error LNK2001: unresolved external symbol __imp___pctype
dbdimp.obj : error LNK2001: unresolved external symbol __imp___isctype
dbdimp.obj : error LNK2001: unresolved external symbol __imp____mb_cur_max
dbdimp.obj : error LNK2001: unresolved external symbol __imp__strchr
unicode_helper.obj : error LNK2001: unresolved external symbol __imp__wcslen
LINK : error LNK2001: unresolved external symbol __DllMainCRTStartup@12
blib\arch\auto\DBD\ODBC\ODBC.dll : fatal error LNK1120: 20 unresolved externals
NMAKE : fatal error U1077: 'link' : return code '0x460'
which was resolved by adding
"c:\Program Files\Microsoft Visual Studio\VC98\Lib\MSVCRTD.LIB"
to end of LDLOADLIBS in the Makefile.
!!DBD::ODBC unsupported attribute passed (PrintError)
!!DBD::ODBC unsupported attribute passed (Username)
!!DBD::ODBC unsupported attribute passed (dbi_connect_closure)
!!DBD::ODBC unsupported attribute passed (LongReadLen)
Add a perlcritic test - see DBD::Pg
Anywhere we are storing a value in an SV that we didn't create
(and thus might have magic) should probably set magic.
Add a test for ChopBlanks and unicode data
Add some private SQLGetInfo values for whether SQL_ROWSET_SIZE hack
works etc. How can you tell if a driver supports MARS_CONNECTION?
Might be able to detect MARS capable with SS_COPT_MARS_ENABLED
Bump requirement to Test::Simple 0.96 so we can use subtest which
is really cool and reorganise tests to use it. 0.96, because it seems
to be the first really stable version of subtest.
* http://msdn.microsoft.com/en-us/library/ms716287%28v=vs.85%29.aspx
*/
#include <limits.h>
#define NEED_newRV_noinc
#define NEED_sv_2pv_flags
#define NEED_my_snprintf
#include "ODBC.h"
#if defined(WITH_UNICODE)
# include "unicode_helper.h"
#endif
/* trap iODBC on Unicode builds */
#if defined(WITH_UNICODE) && (defined(_IODBCUNIX_H) || defined(_IODBCEXT_H))
#error DBD::ODBC will not run properly with iODBC in unicode mode as iODBC defines wide characters as being 4 bytes in size
#endif
/* DBI defines the following but not until 1.617 so we replicate here for now */
/* will remove when DBD::ODBC requires 1.617 or above */
#ifndef DBIf_TRACE_SQL
# define DBIf_TRACE_SQL 0x00000100
#endif
#ifndef DBIf_TRACE_CON
# define DBIf_TRACE_CON 0x00000200
#endif
static const char *cSqlGetTypeInfo = "SQLGetTypeInfo(%d)";
static SQLRETURN bind_columns(SV *h, imp_sth_t *imp_sth);
static void AllODBCErrors(HENV henv, HDBC hdbc, HSTMT hstmt, int output,
PerlIO *logfp);
static int check_connection_active(pTHX_ SV *h);
static int build_results(pTHX_ SV *sth, imp_sth_t *imp_sth,
SV *dbh, imp_dbh_t *imp_dbh,
RETCODE orc);
static int rebind_param(pTHX_ SV *sth, imp_sth_t *imp_sth, imp_dbh_t *imp_dbh, phs_t *phs);
static void get_param_type(SV *sth, imp_sth_t *imp_sth, imp_dbh_t *imp_dbh, phs_t *phs);
static void check_for_unicode_param(imp_sth_t *imp_sth, phs_t *phs);
/* Function to get the console window handle which we may use in SQLDriverConnect on Windows */
#ifdef WIN32
static HWND GetConsoleHwnd(void);
#endif
int dbd_describe(SV *sth, imp_sth_t *imp_sth, int more);
int dbd_db_login6_sv(SV *dbh, imp_dbh_t *imp_dbh, SV *dbname,
SV *uid, SV *pwd, SV *attr);
int dbd_db_login6(SV *dbh, imp_dbh_t *imp_dbh, char *dbname,
if (DBIc_TRACE(imp_dbh, SQL_TRACING, 0, 3)) {
TRACE1(imp_dbh, " SQLExecDirect %s\n", SvPV_nolen(statement));
}
#ifdef WITH_UNICODE
if (SvOK(statement) && DO_UTF8(statement)) {
SQLWCHAR *wsql;
STRLEN wsql_len;
SV *sql_copy;
if (DBIc_TRACE(imp_dbh, UNICODE_TRACING, 0, 0)) /* odbcunicode */
TRACE0(imp_dbh, " Processing utf8 sql in unicode mode\n");
sql_copy = sv_mortalcopy(statement);
SV_toWCHAR(aTHX_ sql_copy);
wsql = (SQLWCHAR *)SvPV(sql_copy, wsql_len);
ret = SQLExecDirectW(stmt, wsql, wsql_len / sizeof(SQLWCHAR));
} else {
if (DBIc_TRACE(imp_dbh, UNICODE_TRACING, 0, 0)) /* odbcunicode */
TRACE0(imp_dbh, " Processing non utf8 sql in unicode mode\n");
ret = SQLExecDirect(stmt, (SQLCHAR *)SvPV_nolen(statement), SQL_NTS);
}
#else
if (DBIc_TRACE(imp_dbh, UNICODE_TRACING, 0, 0)) /* odbcunicode */
TRACE0(imp_dbh, " Processing sql in non-unicode mode\n");
ret = SQLExecDirect(stmt, (SQLCHAR *)SvPV_nolen(statement), SQL_NTS);
#endif
if (DBIc_TRACE(imp_dbh, DBD_TRACING, 0, 3))
TRACE1(imp_dbh, " SQLExecDirect = %d\n", ret);
if (!SQL_SUCCEEDED(ret) && ret != SQL_NO_DATA) {
dbd_error2(dbh, ret, "Execute immediate failed",
imp_dbh->henv, imp_dbh->hdbc, stmt );
rows = -2; /* error */
} else {
if (ret == SQL_NO_DATA) {
/************************************************************************/
/* */
/* dbd_db_login6_sv */
/* ================ */
/* */
/* This API was introduced in DBI after 1.607 (subversion revision */
/* 11723) and is the same as dbd_db_login6 except the connection */
/* strings are SVs so we can detect unicode strings and call */
/* SQLDriveConnectW. */
/* */
/************************************************************************/
int dbd_db_login6_sv(
SV *dbh,
imp_dbh_t *imp_dbh,
SV *dbname,
SV *uid,
SV *pwd,
SV *attr)
/************************************************************************/
/* */
/* odbc_st_prepare_sv */
/* ================== */
/* */
/* dbd_st_prepare_sv is the newer version of dbd_st_prepare taking a */
/* a perl scalar for the sql statement instead of a char* so it may be */
/* unicode */
/* */
/************************************************************************/
int odbc_st_prepare_sv(
SV *sth,
imp_sth_t *imp_sth,
SV *statement,
SV *attribs)
{
dTHX;
D_imp_dbh_from_sth;
if (!imp_sth->odbc_exec_direct) {
if (DBIc_TRACE(imp_dbh, SQL_TRACING, 0, 3)) {
TRACE1(imp_dbh, " SQLPrepare %s\n", imp_sth->statement);
}
#ifdef WITH_UNICODE
if (SvOK(statement) && DO_UTF8(statement)) {
SQLWCHAR *wsql;
STRLEN wsql_len;
SV *sql_copy;
if (DBIc_TRACE(imp_dbh, UNICODE_TRACING, 0, 0)) /* odbcunicode */
TRACE0(imp_dbh, " Processing utf8 sql in unicode mode for SQLPrepareW\n");
sql_copy = sv_newmortal();
sv_setpv(sql_copy, imp_sth->statement);
#ifdef sv_utf8_decode
sv_utf8_decode(sql_copy);
#else
SvUTF8_on(sql_copy);
#endif
SV_toWCHAR(aTHX_ sql_copy);
wsql = (SQLWCHAR *)SvPV(sql_copy, wsql_len);
rc = SQLPrepareW(imp_sth->hstmt, wsql, wsql_len / sizeof(SQLWCHAR));
} else {
if (DBIc_TRACE(imp_dbh, UNICODE_TRACING, 0, 0)) /* odbcunicode */
TRACE0(imp_dbh, " Processing non-utf8 sql in unicode mode\n");
rc = SQLPrepare(imp_sth->hstmt, imp_sth->statement, SQL_NTS);
}
#else /* !WITH_UNICODE */
if (DBIc_TRACE(imp_dbh, UNICODE_TRACING, 0, 0)) /* odbcunicode */
TRACE0(imp_dbh, " Processing sql in non-unicode mode for SQLPrepare\n");
rc = SQLPrepare(imp_sth->hstmt, imp_sth->statement, SQL_NTS);
#endif
if (DBIc_TRACE(imp_dbh, DBD_TRACING, 0, 3))
TRACE1(imp_dbh, " SQLPrepare = %d\n", rc);
if (!SQL_SUCCEEDED(rc)) {
dbd_error(sth, rc, "st_prepare/SQLPrepare");
SQLFreeHandle(SQL_HANDLE_STMT, imp_sth->hstmt);
imp_sth->hstmt = SQL_NULL_HSTMT;
if (ChopBlanks && fbh->ColSqlType == SQL_WCHAR &&
fbh->datalen > 0)
{
SQLWCHAR *p = (SQLWCHAR*)fbh->data;
SQLWCHAR blank = 0x20;
SQLLEN orig_len = fbh->datalen;
while(fbh->datalen && p[fbh->datalen/sizeof(SQLWCHAR)-1] == blank) {
--fbh->datalen;
}
if (DBIc_TRACE(imp_sth, UNICODE_TRACING, 0, 0)) /* odbcunicode */
TRACE2(imp_sth, " Unicode ChopBlanks orig len=%ld, new len=%ld\n",
orig_len, fbh->datalen);
}
sv_setwvn(aTHX_ sv, (SQLWCHAR*)fbh->data,
fbh->datalen/sizeof(SQLWCHAR));
if (DBIc_TRACE(imp_sth, UNICODE_TRACING, 0, 0)) { /* odbcunicode */
/* unsigned char dlog[256]; */
/* unsigned char *src; */
/* char *dst = dlog; */
/* unsigned int n; */
/* STRLEN len; */
/* src = SvPV(sv, len); */
/* dst += sprintf(dst, "0x"); */
/* for (n = 0; (n < 126) && (n < len); n++, src++) { */
/* dst += sprintf(dst, "%2.2x", *src); */
char *p = (char*)fbh->data;
if (DBIc_TRACE(imp_sth, DBD_TRACING, 0, 5))
TRACE0(imp_sth, " chopping blanks\n");
while(fbh->datalen && p[fbh->datalen - 1]==' ')
--fbh->datalen;
}
sv_setpvn(sv, (char*)fbh->data, fbh->datalen);
if (imp_sth->odbc_utf8_on && fbh->ftype != SQL_C_BINARY ) {
if (DBIc_TRACE(imp_sth, UNICODE_TRACING, 0, 0)) /* odbcunicode */
TRACE0(imp_sth, " odbc_utf8 - decoding UTF-8");
#ifdef sv_utf8_decode
sv_utf8_decode(sv);
#else
SvUTF8_on(sv);
#endif
}
if (DBIc_TRACE(imp_sth, DBD_TRACING, 0, 4))
TRACE2(imp_sth, " %s(%ld)\n", neatsvpv(sv, fbh->datalen+5),
fbh->datalen);
TRACE2(imp_sth, " +get_param_type(%p,%s)\n", sth, phs->name);
if (imp_sth->odbc_force_bind_type != 0) {
phs->sql_type = imp_sth->odbc_force_bind_type;
if (DBIc_TRACE(imp_sth, DBD_TRACING, 0, 4))
TRACE1(imp_dbh, " forced param type to %d\n", phs->sql_type);
} else if (imp_dbh->odbc_sqldescribeparam_supported != 1) {
/* As SQLDescribeParam is not supported by the ODBC driver we need to
   pick a default SQL type to bind the parameter as. The default is
   either the value set with odbc_default_bind_type or a fallback of
   SQL_VARCHAR/SQL_WVARCHAR depending on the data and whether this is a
   unicode build. */
phs->sql_type = default_parameter_type(
"SQLDescribeParam not supported", imp_sth, phs);
} else if (!imp_sth->odbc_describe_parameters) {
phs->sql_type = default_parameter_type(
"SQLDescribeParam disabled", imp_sth, phs);
} else if (!phs->describe_param_called) {
/* If we haven't had a go at calling SQLDescribeParam before for this
   parameter, have a go now. If it fails we'll default the sql type
   as above, just as when the driver does not support SQLDescribeParam */
if (DBIc_TRACE(imp_sth, DBD_TRACING, 0, 5))
TRACE3(imp_dbh,
" Param %s is numeric SQL type %s "
"(param size:%lu) changed to SQL_VARCHAR\n",
phs->name,
S_SqlTypeToString(phs->described_sql_type),
(unsigned long)phs->param_size);
phs->sql_type = SQL_VARCHAR;
break;
default: {
check_for_unicode_param(imp_sth, phs);
break;
}
}
}
} else if (phs->describe_param_called) {
if (DBIc_TRACE(imp_sth, DBD_TRACING, 0, 5))
TRACE1(imp_dbh,
" SQLDescribeParam already run and returned rc=%d\n",
phs->describe_param_status);
check_for_unicode_param(imp_sth, phs);
}
if (phs->requested_type != 0) {
phs->sql_type = phs->requested_type;
if (DBIc_TRACE(imp_sth, DBD_TRACING, 0, 5))
TRACE1(imp_dbh, " Overriding sql type with requested type %d\n",
phs->requested_type);
}
#if defined(WITH_UNICODE)
/*(column_size == 2147483647) && (strlen_or_ind < 0) &&*/
((-strlen_or_ind + SQL_LEN_DATA_AT_EXEC_OFFSET) >= 409600)) {
strlen_or_ind = SQL_LEN_DATA_AT_EXEC(0);
buffer_length = 0;
}
#if defined(WITH_UNICODE)
/*
* rt43384 - MS Access does not seem to like us binding parameters as
* wide characters and then SQLBindParameter column_size to byte length.
* e.g., if you have a text(255) column and try and insert 190 ascii chrs
* then the unicode enabled version of DBD::ODBC will convert those 190
* ascii chrs to wide chrs and hence double the size to 380. If you pass
* 380 to Access for column_size it just returns an invalid precision
value. This changes column_size to chrs instead of bytes, but
only if column_size is not reduced to 0 - which also produces
an Access error, e.g., in the empty string '' case.
*/
else if (((imp_dbh->driver_type == DT_MS_ACCESS_JET) ||
(imp_dbh->driver_type == DT_MS_ACCESS_ACE)) &&
(value_type == SQL_C_WCHAR) && (column_size > 1)) {
column_size = column_size / 2;
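The bytes-to-characters adjustment described in the rt43384 comment can be isolated into a small helper. A minimal sketch; `access_column_size_chars` is a hypothetical name, and the guard against reducing the size to 0 mirrors the empty-string caveat above:

```c
#include <stddef.h>

typedef unsigned short SQLWCHAR_T;  /* stand-in for SQLWCHAR */

/* Convert a column_size expressed in bytes of wide data into a count
   of characters, as the MS Access workaround requires, but never let
   it drop to 0 (which Access also rejects, e.g. for ''). */
size_t access_column_size_chars(size_t column_size_bytes)
{
    size_t chars = column_size_bytes / sizeof(SQLWCHAR_T);
    return chars > 0 ? chars : 1;
}
```

So the 190-character text(255) example above, doubled to 380 bytes by the wide conversion, comes back to 190 characters before being passed to SQLBindParameter.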
{ "odbc_force_rebind", ODBC_FORCE_REBIND, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_async_exec", ODBC_ASYNC_EXEC, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_err_handler", ODBC_ERR_HANDLER, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_exec_direct", ODBC_EXEC_DIRECT, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_version", ODBC_VERSION, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_cursortype", ODBC_CURSORTYPE, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_query_timeout", ODBC_QUERY_TIMEOUT, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_putdata_start", ODBC_PUTDATA_START, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_column_display_size", ODBC_COLUMN_DISPLAY_SIZE, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_utf8_on", ODBC_UTF8_ON, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_has_unicode", ODBC_HAS_UNICODE, PARAM_READ, PARAM_TYPE_CUSTOM },
{ "odbc_out_connect_string", ODBC_OUTCON_STR, PARAM_READ, PARAM_TYPE_CUSTOM},
{ "odbc_describe_parameters", ODBC_DESCRIBE_PARAMETERS, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_batch_size", ODBC_BATCH_SIZE, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_array_operations", ODBC_ARRAY_OPERATIONS, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_taf_callback", ODBC_TAF_CALLBACK, PARAM_READWRITE, PARAM_TYPE_CUSTOM },
{ "odbc_trace", SQL_ATTR_TRACE, PARAM_READWRITE, PARAM_TYPE_BOOL, SQL_OPT_TRACE_ON, SQL_OPT_TRACE_OFF },
{ "odbc_trace_file", SQL_ATTR_TRACEFILE, PARAM_READWRITE, PARAM_TYPE_STR },
{ NULL },
};
case ODBC_COLUMN_DISPLAY_SIZE:
retsv = newSViv(imp_dbh->odbc_column_display_size);
break;
case ODBC_UTF8_ON:
retsv = newSViv(imp_dbh->odbc_utf8_on);
break;
case ODBC_HAS_UNICODE:
retsv = newSViv(imp_dbh->odbc_has_unicode);
break;
case ODBC_DEFAULT_BIND_TYPE:
retsv = newSViv(imp_dbh->odbc_default_bind_type);
break;
case ODBC_FORCE_BIND_TYPE:
retsv = newSViv(imp_dbh->odbc_force_bind_type);
break;
* (e.g. SQL_COLUMN_COUNT, since it doesn't depend on the colcount)
*/
if (colno == 0) {
dbd_error(sth, DBDODBC_INTERNAL_ERROR,
"cannot obtain SQLColAttributes for column 0");
return Nullsv;
}
/*
* workaround a problem in unixODBC 2.2.11 which can write off the
* end of the str_attr buffer when built with unicode - lie about
* buffer size - we've got more than we admit to.
*/
rc = SQLColAttributes(imp_sth->hstmt, (SQLUSMALLINT)colno,
(SQLUSMALLINT)desctype,
str_attr, sizeof(str_attr)/2,
&str_attr_len, &num_attr);
if (!SQL_SUCCEEDED(rc)) {
dbd_error(sth, rc, "odbc_col_attributes/SQLColAttributes");
return Nullsv;
/* Disable array operations by default for some drivers as no version
   I've ever seen works and it annoys the DBIx::Class guys */
if (imp_dbh->driver_type == DT_FREETDS ||
imp_dbh->driver_type == DT_MS_ACCESS_JET ||
imp_dbh->driver_type == DT_MS_ACCESS_ACE) {
imp_dbh->odbc_array_operations = 0;
}
#endif
#ifdef WITH_UNICODE
imp_dbh->odbc_has_unicode = 1;
#else
imp_dbh->odbc_has_unicode = 0;
#endif
if (DBIc_TRACE(imp_dbh, CONNECTION_TRACING, 0, 0))
TRACE1(imp_dbh, "DBD::ODBC is unicode built : %s\n",
imp_dbh->odbc_has_unicode ? "YES" : "NO");
imp_dbh->odbc_default_bind_type = 0;
imp_dbh->odbc_force_bind_type = 0;
#ifdef SQL_ROWSET_SIZE_DEFAULT
imp_dbh->rowset_size = SQL_ROWSET_SIZE_DEFAULT;
#else
/* it should be 1 anyway so above should be redundant but included
here partly to remind me what it is */
imp_dbh->rowset_size = 1;
#endif
/*
* Called when we don't know what to bind a parameter as. This can happen for all sorts
* of reasons like:
*
* o SQLDescribeParam is not supported
* o odbc_describe_parameters is set to 0 (in other words telling us not to describe)
* o SQLDescribeParam was called and failed
* o SQLDescribeParam was called but returned an unrecognised parameter type
*
* If the data to bind is unicode (SvUTF8 is true) it is bound as SQL_WCHAR
* or SQL_WLONGVARCHAR depending on its size. Otherwise it is bound as
* SQL_VARCHAR/SQL_LONGVARCHAR.
*/
static SQLSMALLINT default_parameter_type(
char *why, imp_sth_t *imp_sth, phs_t *phs)
{
SQLSMALLINT sql_type;
struct imp_dbh_st *imp_dbh = NULL;
imp_dbh = (struct imp_dbh_st *)(DBIc_PARENT_COM(imp_sth));
if (imp_sth->odbc_default_bind_type != 0) {
sql_type = imp_sth->odbc_default_bind_type;
} else {
/* MS Access can return an invalid precision error in the 12blob
test unless the large value is bound as an SQL_LONGVARCHAR
or SQL_WLONGVARCHAR. Who knows what large is, but for now it is
4000 */
/*
Changed to 2000 for the varchar max switch as in a unicode build we
can change a string of 'x' x 2001 into 4002 wide chrs and SQL Server
will also return invalid precision in this case on a varchar(4000).
Of course, being SQL Server, it also has this problem with the
newer varchar(8000)! */
if (!SvOK(phs->sv)) {
sql_type = ODBC_BACKUP_BIND_TYPE_VALUE;
if (DBIc_TRACE(imp_sth, DBD_TRACING, 0, 3))
TRACE2(imp_sth, "%s, sv is not OK, defaulting to %d\n",
why, sql_type);
} else if (SvCUR(phs->sv) > imp_dbh->switch_to_longvarchar) {
if (return_count != 1)
croak("Expected one scalar back from taf handler");
ret = POPi;
PUTBACK;
return ret;
}
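The fallback rules documented above for `default_parameter_type` (unicode data binds wide, long data switches to the LONGVARCHAR variants at the `switch_to_longvarchar` threshold) reduce to a small decision function. A sketch under stated assumptions: `pick_default_bind_type` is a hypothetical name, and the numeric type codes are the standard values from sql.h/sqlext.h:

```c
#include <stdbool.h>
#include <stddef.h>

/* Standard ODBC type codes from sql.h / sqlext.h */
#define MY_SQL_VARCHAR      12
#define MY_SQL_LONGVARCHAR  (-1)
#define MY_SQL_WVARCHAR     (-9)
#define MY_SQL_WLONGVARCHAR (-10)

/* Fallback bind type when SQLDescribeParam is unavailable, disabled,
   or failed: wide types for unicode (SvUTF8) data, and the long
   variants once the data exceeds the switch_to_longvarchar threshold. */
int pick_default_bind_type(bool is_unicode, size_t len,
                           size_t switch_to_longvarchar)
{
    if (is_unicode)
        return len > switch_to_longvarchar ? MY_SQL_WLONGVARCHAR
                                           : MY_SQL_WVARCHAR;
    return len > switch_to_longvarchar ? MY_SQL_LONGVARCHAR
                                       : MY_SQL_VARCHAR;
}
```

With the 2000-byte threshold discussed below, a 2001-character unicode string (4002 bytes wide) would bind as SQL_WLONGVARCHAR rather than tripping SQL Server's invalid precision error on a varchar(4000).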
static void check_for_unicode_param(
imp_sth_t *imp_sth,
phs_t *phs) {
if (DBIc_TRACE(imp_sth, DBD_TRACING, 0, 5)) {
TRACE2(imp_sth, "check_for_unicode_param - sql_type=%s, described=%s\n",
S_SqlTypeToString(phs->sql_type), S_SqlTypeToString(phs->described_sql_type));
}
/* If we didn't call SQLDescribeParam successfully, we've defaulted/guessed,
   so just return as sql_type will already be set */
if (!phs->described_sql_type) return;
if (SvUTF8(phs->sv)) {
if (phs->described_sql_type == SQL_CHAR) {
phs->sql_type = SQL_WCHAR;
2000 drivers on varchars and is still happening with date */
int odbc_defer_binding;
/* force rebinding the output columns after each execute to
resolve some issues where certain stored procs can return
multiple result sets */
int odbc_force_rebind;
SQLINTEGER odbc_query_timeout;
/* point at which start using SQLPutData */
IV odbc_putdata_start;
/* whether built WITH_UNICODE */
int odbc_has_unicode;
/* flag to set asynchronous execution */
int odbc_async_exec;
/* flag for executing SQLExecDirect instead of SQLPrepare and SQLExecute.
Magic happens at SQLExecute() */
int odbc_exec_direct;
/* flag indicating if we should pass SQL_DRIVER_COMPLETE to
SQLDriverConnect */
int odbc_driver_complete;
/* used to disable describing parameters with SQLDescribeParam */
int odbc_describe_parameters;
examples/sqlserver_supplementary_chrs.pl
use DBI;
use Unicode::UCD 'charinfo';
use Data::Dumper;
#use charnames ':full';
use Test::More;
use Test::More::UTF8;
binmode(STDOUT, ":encoding(UTF-8)");
binmode(STDERR, ":encoding(UTF-8)");
# unicode chr above U+FFFF, meaning it needs a surrogate pair
my $char = "\x{2317F}";
my $charinfo = charinfo(0x2317F);
print Dumper($charinfo);
#print "0x2317F is : ", charnames::viacode(0x2317F), "\n";
my $h = DBI->connect() or BAIL_OUT("Failed to connect");
BAIL_OUT("Not a unicode build of DBD::ODBC") if !$h->{odbc_has_unicode};
$h->{RaiseError} = 1;
$h->{ChopBlanks} = 1;
eval {
$h->do('drop table mje');
};
# create table ensuring the collation specifies _SC
# for supplementary characters.
$h->do(q/create table mje (a nchar(20) collate Latin1_General_100_CI_AI_SC)/);
my $s = $h->prepare(q/insert into mje values(?)/);
my $inserted = $s->execute("\x{2317F}");
is($inserted, 1, "inserted one row");
my $r = $h->selectall_arrayref(q/select a, len(a), unicode(a), datalength(a) from mje/);
print Dumper($r);
print "Ordinals of received/sent: ", ord($r->[0][0]), ", ", ord($char), "\n";
print DBI::data_diff($r->[0][0], $char);
is($r->[0][0], $char);
is($r->[0][1], 1);
is($r->[0][2], 143743);
done_testing;
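The surrogate-pair arithmetic this example relies on (U+2317F is above U+FFFF, so it occupies two UTF-16 code units in an nchar column) can be shown directly. A minimal sketch; `surrogate_pair` is an illustrative helper, not part of DBD::ODBC, packing the high unit into the upper 16 bits of the result:

```c
#include <stdint.h>

/* Encode a supplementary-plane code point (> U+FFFF) as its UTF-16
   surrogate pair, returned as (high_unit << 16) | low_unit. */
uint32_t surrogate_pair(uint32_t cp)
{
    cp -= 0x10000;                       /* offset into the 20-bit range */
    return ((0xD800u + (cp >> 10)) << 16)   /* high surrogate */
         | (0xDC00u + (cp & 0x3FF));        /* low surrogate */
}
```

For U+2317F this yields the pair D84C/DD7F, which is why the test above expects len(a) of 1 character but a datalength of 4 bytes, and unicode(a) of 143743 (0x2317F).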
examples/unicode_params.pl
# $Id$
# Quick demo of inserting and retrieving unicode strings
# NOTE: your DBD::ODBC really needs to be built with unicode
# and this script will warn if not. With some drivers it will still
# work without being built for unicode but you'll get slightly
# different output:
#
# with unicode:
# $VAR1 = [
# [
# "\x{20ac}" # note, is a unicode Perl string
# ]
# ];
# is utf8 1
#
# without unicode:
#
# $VAR1 = [
# [
# 'â¬' # note, not a unicode Perl string
# ]
# ];
# is utf8
#
use DBI;
use strict;
use Data::Dumper;
use utf8;
my $h = DBI->connect();
warn "Warning DBD::ODBC not built for unicode - you probably don't want to do this" if !$h->{'odbc_has_unicode'};
eval {
$h->do(q/drop table mje/);
};
$h->do(q/create table mje (a nvarchar(20))/);
$h->do(q/insert into mje values(?)/, undef, "\x{20ac}");
my $s = $h->prepare(q/select * from mje/);
examples/unicode_sql.pl
# $Id$
#
# Small example showing how you can insert unicode inline in the SQL
#
# expected output:
#Has unicode: 1
#$VAR1 = [
# [
# "\x{20ac}"
# ]
# ];
#$VAR1 = [
# [
# "\x{20ac}"
# ],
# [
# ];
#
use DBI;
use strict;
use warnings;
use Data::Dumper;
my $h = DBI->connect();
#$h->{odbc_default_bind_type} = 12;
warn "Warning DBD::ODBC not built for unicode - this will not work as expected" if !$h->{'odbc_has_unicode'};
eval {$h->do(q/drop table martin/);};
print "Has unicode: " . $h->{odbc_has_unicode} . "\n";
$h->do(q/create table martin (a nvarchar(100))/);
my $s = $h->prepare(q/insert into martin values(?)/);
$s->execute("\x{20ac}");
my $r = $h->selectall_arrayref(q/select * from martin/);
print Dumper($r);
my $sql = 'insert into martin values(' . $h->quote("\x{20ac}") . ')';
if_you_are_taking_over_this_code.txt
things down only to find I broke older drivers and driver managers.
o dbi-dev mailing list is your friend - use it.
o Microsoft wrote the ODBC spec then handed it to X/Open but continues
to change it without reference to X/Open. This will happen again.
Be aware of it and live with it (it has just happened again with
ODBC 3.8!). It hit me hard when 32bit moved to 64bit and the spec
changed overnight.
o at this time, unicode in ODBC NEEDS a unicode aware ODBC Driver
i.e., it must have the wide SQLxxxW functions. People will tell you
that the driver manager can "translate" between ANSI and WIDE APIs
(or the driver can do UTF-8 etc) but it is nonsense, e.g., unixODBC
does an OK job of this but it does not work with bound columns, and
ODBC does not do UTF-8!
t/02simple.t
diag("\n");
diag("Perl $Config{PERL_REVISION}.$Config{PERL_VERSION}.$Config{PERL_SUBVERSION}\n");
diag("osname=$Config{osname}, osvers=$Config{osvers}, archname=$Config{archname}\n");
diag("Using DBI $DBI::VERSION\n");
diag("Using DBD::ODBC $DBD::ODBC::VERSION\n");
diag("Using DBMS_NAME " . DBI::neat($dbh->get_info(17)) . "\n");
diag("Using DBMS_VER " . DBI::neat($dbh->get_info(18)) . "\n");
$driver_name = DBI::neat($dbh->get_info(6));
diag("Using DRIVER_NAME $driver_name\n");
diag("Using DRIVER_VER " . DBI::neat($dbh->get_info(7)) . "\n");
diag("odbc_has_unicode " . ($dbh->{odbc_has_unicode} || '') . "\n");
}
# ReadOnly
{
# NOTE: the catching of warnings here needs a DBI > 1.628
local $dbh->{AutoCommit} = 0;
my $warning;
local $SIG{__WARN__} = sub {diag @_; $warning = 1};
$dbh->{ReadOnly} = 1;
if ($warning) {
return;
}
sub test_value
{
my ($dbh, $value) = @_;
local $dbh->{RaiseError} = 1;
my $max = 60001;
$max = 120001 if ($type == SQL_WLONGVARCHAR || $dbh->{odbc_has_unicode});
local $dbh->{LongReadLen} = $max;
my $row = $dbh->selectall_arrayref(q/select a from DBD_ODBC_drop_me/);
$ev = $@;
diag($ev) if $ev;
ok(!$ev, 'select test data back');
my $rc = is(length($row->[0]->[0]), length($value),
"sizes of insert/select compare");
SKIP: {
t/40UnicodeRoundTrip.t
$|=1;
my $WAIT=0;
my @data;
my $tests;
my $data_tests;
BEGIN {
if ($] < 5.008001) {
plan skip_all => "Old Perl lacking unicode support";
} elsif (!defined $ENV{DBI_DSN}) {
plan skip_all => "DBI_DSN is undefined";
}
@data=(
"hello ASCII: the quick brown fox jumps over the yellow dog",
"Hello Unicode: german umlauts (\x{00C4}\x{00D6}\x{00DC}\x{00E4}\x{00F6}\x{00FC}\x{00DF}) smile (\x{263A}) hebrew shalom (\x{05E9}\x{05DC}\x{05D5}\x{05DD})",
);
push @data,map { "again $_" } @data;
utf8::is_utf8($data[0]) and die "Perl set UTF8 flag on non-unicode string constant";
utf8::is_utf8($data[1]) or die "Perl did not set UTF8 flag on unicode string constant";
utf8::is_utf8($data[2]) and die "Perl set UTF8 flag on non-unicode string constant";
utf8::is_utf8($data[3]) or die "Perl did not set UTF8 flag on unicode string constant";
unshift @data,'';
push @data,42;
my @plaindata=grep { !utf8::is_utf8($_) } @data;
@plaindata or die "OOPS";
$data_tests = 6*@data+6*@plaindata;
#diag("Data Tests : $data_tests");
$tests=1+$data_tests;
END {
Test::NoWarnings::had_no_warnings()
if ($has_test_nowarnings);
}
my $dbh=DBI->connect();
ok(defined($dbh),"DBI connect");
SKIP: {
skip "Unicode-specific tests disabled - not a unicode build",
$data_tests if (!$dbh->{odbc_has_unicode});
if (DBI::neat($dbh->get_info(6)) =~ 'SQORA32') {
skip "Oracle ODBC driver does not work with these tests",
$data_tests;
}
my $dbname=$dbh->get_info(17); # DBI::SQL_DBMS_NAME
SKIP: {
my ($len,$fromdual,$skipempty);
if ($dbname=~/Microsoft SQL Server/i) {
pass("bind VARCHAR");
$sth->execute();
pass("execute");
my ($t,$tlen)=$sth->fetchrow_array();
pass('fetch');
cmp_ok($tlen,'==',length($txt),'length equal');
utf_eq_ok($t,$txt,'text equal');
}
my $sth=$dbh->prepare("SELECT ? as roundtrip, $len(?) as roundtriplen $fromdual");
ok(defined($sth),"prepare round-trip select statement unicode");
$sth->bind_param (1,$txt,SQL_WVARCHAR);
$sth->bind_param (2,$txt,SQL_WVARCHAR);
pass("bind WVARCHAR");
$sth->execute();
pass("execute");
my ($t,$tlen)=$sth->fetchrow_array();
pass('fetch');
cmp_ok($tlen,'==',length($txt),'length equal');
utf_eq_ok($t,$txt,'text equal');
t/41Unicode.t
$|=1;
my $WAIT=0;
my @data;
my $tests;
my $data_tests;
my $other_tests;
BEGIN {
if ($] < 5.008001) {
plan skip_all => "Old Perl lacking unicode support";
} elsif (!defined $ENV{DBI_DSN}) {
plan skip_all => "DBI_DSN is undefined";
}
@data=(
"hello ASCII: the quick brown fox jumps over the yellow dog",
"Hello Unicode: german umlauts (\x{00C4}\x{00D6}\x{00DC}\x{00E4}\x{00F6}\x{00FC}\x{00DF}) smile (\x{263A}) hebrew shalom (\x{05E9}\x{05DC}\x{05D5}\x{05DD})",
);
push @data,map { "again $_" } @data;
utf8::is_utf8($data[0]) and die "Perl set UTF8 flag on non-unicode string constant";
utf8::is_utf8($data[1]) or die "Perl did not set UTF8 flag on unicode string constant";
utf8::is_utf8($data[2]) and die "Perl set UTF8 flag on non-unicode string constant";
utf8::is_utf8($data[3]) or die "Perl did not set UTF8 flag on unicode string constant";
$data_tests=12*@data;
$other_tests = 7;
$tests = $other_tests + $data_tests;
eval "require Test::NoWarnings";
if (!$@) {
$has_test_nowarnings = 1;
}
$tests += 1 if $has_test_nowarnings;
plan tests => $tests,
END {
Test::NoWarnings::had_no_warnings()
if ($has_test_nowarnings);
}
my $dbh=DBI->connect();
ok(defined($dbh),"DBI connect");
SKIP: {
if (!$dbh->{odbc_has_unicode}) {
skip "Unicode-specific tests disabled - not a unicode build",
$data_tests + $other_tests - 1;
}
my $dbname = $dbh->get_info(17); # DBI::SQL_DBMS_NAME
SKIP: {
my ($sth,$NVARCHAR);
if ($dbname=~/Microsoft SQL Server/i) {
($NVARCHAR)=('NVARCHAR(1000)');
} elsif ($dbname=~/Oracle/i) {
utf_eq_ok($nva,$data[$i],"value matches $info col1");
cmp_ok(utf8::is_utf8($nvb),'>=',utf8::is_utf8($data[$i]),"utf8 flag $info col2");
utf_eq_ok($nva,$data[$i],"value matches $info col2");
cmp_ok(utf8::is_utf8($nvc),'>=',utf8::is_utf8($data[$i]),"utf8 flag $info col3");
utf_eq_ok($nva,$data[$i],"value matches $info col3");
}
$WAIT && eval {
print "you may want to look at the table now, the unicode data is damaged!\nHit Enter to continue\n";
$_=<STDIN>;
};
# eval {
# local $dbh->{RaiseError} = 0;
# $dbh->do("DROP TABLE PERL_DBD_TABLE1");
# };
$dbh->disconnect;
t/45_unicode_varchar.t
#!/usr/bin/perl -w -I./t
#
# Test insertion into varchar columns using unicode and codepage chrs
# Must be a unicode build of DBD::ODBC
# Currently needs MS SQL Server
#
use open ':std', ':encoding(utf8)';
use Test::More;
use strict;
use Data::Dumper;
$| = 1;
use DBI qw(:utils);
use DBI::Const::GetInfoType;
my $has_test_nowarnings = 1;
eval "require Test::NoWarnings";
$has_test_nowarnings = undef if $@;
my $dbh;
BEGIN {
if ($] < 5.008001) {
plan skip_all => "Old Perl lacking unicode support";
} elsif (!defined $ENV{DBI_DSN}) {
plan skip_all => "DBI_DSN is undefined";
}
}
END {
# tidy up
if ($dbh) {
local $dbh->{PrintError} = 0;
local $dbh->{PrintWarn} = 0;
$dbh->{RaiseError} = 1;
eval {local $dbh->{PrintWarn} =0; $dbh->{PrintError} = 0;$dbh->do(q/drop table PERL_DBD_TABLE1/)};
my $dbname = $dbh->get_info($GetInfoType{SQL_DBMS_NAME});
if ($dbname !~ /Microsoft SQL Server/i) {
note "Not MS SQL Server";
plan skip_all => "Not MS SQL Server";
exit;
}
if (!$dbh->{odbc_has_unicode}) {
note "Not a unicode build of DBD::ODBC";
plan skip_all => "Not a unicode build of DBD::ODBC";
exit 0;
}
if ($^O eq 'MSWin32') {
if (!code_page()) {
note "Win32::API not found";
}
}
eval {
fail("Cannot create table with collation - $@");
done_testing();
exit 0;
}
collations($dbh, 'PERL_DBD_TABLE1');
my $sql = q/insert into PERL_DBD_TABLE1 (b, a) values(?, ?)/;
my $s;
# a simple unicode string
my $unicode = "\x{20ac}\x{a3}";
diag "Inserting a unicode euro, utf8 flag on:\n";
$s = $dbh->prepare($sql); # redo to ensure no sticky params
execute($s, 1, $unicode);
show_it($dbh, [2], [2], ['0x80a3']);
my $codepage;
# a simple codepage string
{
use bytes;
$codepage = chr(0xa3) . chr(0x80); # it is important this is different to $unicode
}
diag "Inserting a codepage/bytes string:\n";
$s = $dbh->prepare($sql); # redo to ensure no sticky params
execute($s, 1, $codepage);
show_it($dbh, [2], [2], ['0xa380']);
# inserting a mixture of unicode chrs and codepage chrs per row in same insert
# unicode first - checks we rebind the 2nd parameter as SQL_CHAR
diag "Inserting a unicode followed by codepage chrs:\n";
$s = $dbh->prepare($sql); # redo to ensure no sticky params
execute($s, 1, $unicode);
execute($s, 2, $codepage);
show_it($dbh, [2,2], [2,2], ['0x80a3', '0x80a3']);
# inserting a mixture of unicode chrs and codepage chrs per row in same insert
# codepage first - checks we rebind the 2nd parameter SQL_WCHAR
diag "Inserting codepage chrs followed by unicode:\n";
$s = $dbh->prepare($sql); # redo to ensure no sticky params
execute($s, 1, $codepage);
execute($s, 2, $unicode);
show_it($dbh, [2,2], [2,2], ['0xa380', '0x80a3']);
Test::NoWarnings::had_no_warnings() if ($has_test_nowarnings);
done_testing();
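The rebinding this test exercises (unicode then codepage data through the same statement, and vice versa) hinges on promoting a described narrow type to its wide twin whenever the Perl scalar carries the UTF8 flag. A minimal sketch of that decision, in the spirit of check_for_unicode_param; `promote_if_unicode` is a hypothetical name, and the type codes are the standard sql.h/sqlext.h values:

```c
#include <stdbool.h>

/* Standard ODBC type codes from sql.h / sqlext.h */
#define MY_SQL_CHAR         1
#define MY_SQL_VARCHAR      12
#define MY_SQL_LONGVARCHAR  (-1)
#define MY_SQL_WCHAR        (-8)
#define MY_SQL_WVARCHAR     (-9)
#define MY_SQL_WLONGVARCHAR (-10)

/* Promote a described narrow character type to its wide equivalent
   when the scalar is unicode (SvUTF8 true); otherwise keep it. */
int promote_if_unicode(int described_type, bool sv_is_utf8)
{
    if (!sv_is_utf8)
        return described_type;
    switch (described_type) {
    case MY_SQL_CHAR:        return MY_SQL_WCHAR;
    case MY_SQL_VARCHAR:     return MY_SQL_WVARCHAR;
    case MY_SQL_LONGVARCHAR: return MY_SQL_WLONGVARCHAR;
    default:                 return described_type;
    }
}
```

Because the decision depends on the value bound at each execute, the same placeholder can legitimately flip between SQL_CHAR and SQL_WCHAR across executes, which is exactly what the mixed-row inserts above verify.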
t/70execute_array_native.t
diag("\n");
diag("Perl $Config{PERL_REVISION}.$Config{PERL_VERSION}.$Config{PERL_SUBVERSION}\n");
diag("osname=$Config{osname}, osvers=$Config{osvers}, archname=$Config{archname}\n");
diag("Using DBI $DBI::VERSION\n");
diag("Using DBD::ODBC $DBD::ODBC::VERSION\n");
diag("Using DBMS_NAME " . DBI::neat($dbh->get_info(17)) . "\n");
diag("Using DBMS_VER " . DBI::neat($dbh->get_info(18)) . "\n");
diag("Using DRIVER_NAME $driver_name\n");
diag("Using DRIVER_VER " . DBI::neat($dbh->get_info(7)) . "\n");
diag("odbc_has_unicode " . $dbh->{odbc_has_unicode} . "\n");
}
note("Using driver $dbh->{Driver}->{Name}");
$ENV{ODBC_DISABLE_ARRAY_OPERATIONS} = 0; # force array ops
$ea = ExecuteArray->new($dbh, 0); # don't set odbc_disable_array_operations
$dbh = $ea->dbh;
$ea->drop_table($dbh);
t/90_trace_flags.t
}
}
my $h = DBI->connect();
unless($h) {
BAIL_OUT("Unable to connect to the database ($DBI::errstr)\nTests skipped.\n");
exit 0;
}
my $bit;
$bit = $h->parse_trace_flag('odbcunicode');
is($bit, 0x02_00_00_00, 'odbcunicode');
$bit = $h->parse_trace_flag('odbcconnection');
is($bit, 0x04_00_00_00, 'odbcconnection');
my $val;
$val = $h->parse_trace_flags('odbcunicode|odbcconnection');
is($val, 0x06_00_00_00, "parse_trace_flags");
$h->disconnect;
Test::NoWarnings::had_no_warnings()
if ($has_test_nowarnings);
done_testing;
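The arithmetic behind the parse_trace_flags check is just a bitwise OR of the two private flag bits. A trivial sketch with the bit values the test asserts; `combine_flags` is an illustrative name, not a DBD::ODBC function:

```c
#include <stdint.h>

/* DBD::ODBC private trace flag bits, as asserted by 90_trace_flags.t */
#define ODBC_TRACE_UNICODE    0x02000000u
#define ODBC_TRACE_CONNECTION 0x04000000u

/* parse_trace_flags ORs the individual parse_trace_flag bits together */
uint32_t combine_flags(uint32_t a, uint32_t b)
{
    return a | b;
}
```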
t/rt_101579.t
#!/usr/bin/perl -w -I./t
#
# rt 101579
#
# Between 1.43 and 1.50 DBD::ODBC changed to add check_for_unicode_param
# function which changes bound types of SQL_VARCHAR etc to their unicode
# equivalent if the perl scalar is unicode. Unfortunately, if the scalar was not unicode
# or the described type was not VARCHAR it returned the SQLDescribeParam
# described type ignoring the fact we map SQL_NUMERIC etc to SQL_VARCHAR.
# The result was that the first call to execute worked and subsequent calls
# often returned "string data, right truncated" errors for numeric parameters.
#
use Test::More;
use strict;
use DBI;
use_ok('ODBCTEST');
t/rt_43384.t
SKIP: {
skip "Microsoft Access tests not supported using $dbname", 7
unless ($dbname =~ /Access/i);
eval {
local $dbh->{PrintWarn} = 0;
local $dbh->{PrintError} = 0;
$dbh->do(q/drop table PERL_DBD_rt_43384/);
};
pass('dropped test table');
eval {$dbh->do(q/create table PERL_DBD_rt_43384 (unicode_varchar text(200), unicode_text memo)/);};
my $ev = $@;
ok(!$ev, 'created test table PERL_DBD_rt_43384');
SKIP: {
skip 'failed to create test table', 2 if $ev;
my $sth = $dbh->prepare(q/insert into PERL_DBD_rt_43384 values(?,?)/);
ok($sth, 'insert prepared');
SKIP: {
skip 'failed to prepare', 1 if !$sth;
my $data = 'a' x 190;
t/rt_61370.t
# this needs to be MS SQL Server and not the OOB driver
if ($dbms_name !~ /Microsoft SQL Server/) {
note('Not Microsoft SQL Server');
exit 0;
}
if ($driver_name =~ /esoobclient/) {
note("Easysoft OOB");
exit 0;
}
if (!$dbh->{odbc_has_unicode}) {
note('DBD::ODBC not built with unicode support');
exit 0;
}
eval {
local $dbh->{PrintWarn} = 0;
local $dbh->{PrintError} = 0;
$dbh->do('drop table PERL_DBD_RT_61370');
};
# try and create a table with an XML column
# if we cannot, we'll have to assume your SQL Server is too old
test_results.txt
======================================================================
t/01base................ok
t/02simple..............ok 1/65#
# Perl v5.9.5 built for MSWin32-x64-multi-thread
# Using DBI 1.59
# Using DBD::ODBC 1.20
# Using DBMS_NAME 'Microsoft SQL Server'
# Using DBMS_VER '09.00.3042'
# Using DRIVER_NAME 'sqlncli10.dll'
# Using DRIVER_VER '10.00.1049'
# odbc_has_unicode 1
t/02simple..............ok
t/03dbatt...............ok 1/29#
# N.B. Some drivers (postgres/cache) may return ODBC 2.0 column names for the SQLTables result-set e.g. TABLE_QUALIFIER instead of TABLE_CAT
t/03dbatt...............ok
t/05meth................ok
t/07bind................ok
t/08bind2...............ok
t/09multi...............ok
t/10handler.............ok
t/01base................ok
t/02simple..............ok 1/65#
# Perl 5.7.8
# osname=linux, osvers=2.6.9-22.0.2.elsmp, archname=i686-linux
# Using DBI 1.607
# Using DBD::ODBC 1.21
# Using DBMS_NAME 'Microsoft SQL Server'
# Using DBMS_VER '09.00.4035'
# Using DRIVER_NAME 'esoobclient'
# Using DRIVER_VER '02.00.0000'
# odbc_has_unicode 0
t/02simple..............ok
t/03dbatt...............ok 1/29#
# N.B. Some drivers (postgres/cache) may return ODBC 2.0 column names
for the SQLTables result-set e.g. TABLE_QUALIFIER instead of TABLE_CAT
t/03dbatt...............ok
t/05meth................ok
t/07bind................ok
t/08bind2...............ok
t/09multi...............ok
t/10handler.............ok
# NOTE: You failed this test because your SQL Server driver
# is too old to handle the MARS_Connection attribute. This test cannot
# easily skip this test for old drivers as there is no definite SQL Server
# driver version it can check.
#
t/20SqlServer...........ok
1/65 skipped: WARNING: driver does NOT support MARS_Connection
t/30Oracle..............ok
3/5 skipped: Oracle tests not supported using Microsoft SQL Server
t/40UnicodeRoundTrip....ok
61/62 skipped: Unicode-specific tests disabled - not a unicode build
t/41Unicode.............ok
54/55 skipped: Unicode-specific tests disabled - not a unicode build
t/pod-coverage..........ok 1/1# Test::Pod::Coverage 1.04 required for
testing POD coverage
t/pod-coverage..........ok
t/pod...................ok
3/3 skipped: Test::Pod 1.00 required for testing POD
t/rt_38977..............ok
6/11 skipped: Easysoft OOB
t/rt_39841..............ok
25/28 skipped: not SQL Server ODBC or native client driver
t/rt_39897..............ok
unicode_helper.h
#ifdef WITH_UNICODE
#ifndef unicode_helper_h
#define unicode_helper_h
#include "ConvertUTF.h"
UTF16 * WValloc(char * s);
void WVfree(UTF16 * wp);
void sv_setwvn(pTHX_ SV * sv, UTF16 * wp, STRLEN len);
SV *sv_newwvn(pTHX_ UTF16 * wp, STRLEN len);
char * PVallocW(UTF16 * wp);
void PVfreeW(char * s);
void SV_toWCHAR(pTHX_ SV * sv);
void utf8sv_to_wcharsv(pTHX_ SV *sv);
#endif /* defined unicode_helper_h */
#endif /* WITH_UNICODE */
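The conversions these helpers declare (WValloc, utf8sv_to_wcharsv) boil down to decoding UTF-8 into UTF-16 code units. A minimal sketch restricted to BMP code points (1- to 3-byte sequences, no validation) to show the bit manipulation involved; `utf8_to_utf16_bmp` is an illustrative stand-in, not the ConvertUTF implementation the header wraps:

```c
#include <stddef.h>
#include <stdint.h>

/* Decode well-formed UTF-8 covering the BMP into UTF-16 code units.
   Returns the number of UTF-16 units written to out. */
size_t utf8_to_utf16_bmp(const unsigned char *s, size_t n, uint16_t *out)
{
    size_t w = 0, i = 0;
    while (i < n) {
        uint32_t cp;
        if (s[i] < 0x80) {                  /* 1-byte (ASCII) */
            cp = s[i];
            i += 1;
        } else if ((s[i] & 0xE0) == 0xC0) { /* 2-byte sequence */
            cp = ((uint32_t)(s[i] & 0x1F) << 6) | (s[i+1] & 0x3F);
            i += 2;
        } else {                            /* 3-byte sequence */
            cp = ((uint32_t)(s[i] & 0x0F) << 12)
               | ((uint32_t)(s[i+1] & 0x3F) << 6)
               | (s[i+2] & 0x3F);
            i += 3;
        }
        out[w++] = (uint16_t)cp;            /* BMP: one unit per cp */
    }
    return w;
}
```

The euro sign used throughout the examples (U+20AC, bytes E2 82 AC) decodes to the single UTF-16 unit 0x20AC; supplementary-plane characters would additionally need the surrogate-pair step, which this BMP-only sketch omits.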