DBD-ASAny
If fetch was called when there were no more rows, the driver set the err
value to (integer) zero, but DBI could interpret that as a warning because
it converted it to a string ("0") and saw that it was non-empty. We now set
err to the empty string.
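A minimal sketch of how a caller can rely on this behaviour: after a normal end-of-fetch, err is false (the empty string), while a genuine error leaves a true err and an errstr. The DSN and table name below are placeholders, assuming the usual DBI:ASAny: DSN form.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect( 'DBI:ASAny:ENG=asademo;UID=dba;PWD=sql',   # hypothetical DSN
                          undef, undef, { RaiseError => 0, PrintError => 0 } )
      or die "connect failed: $DBI::errstr\n";

  my $sth = $dbh->prepare( 'select id from some_table' )             # hypothetical table
      or die "prepare failed: ", $dbh->errstr, "\n";
  $sth->execute() or die "execute failed: ", $sth->errstr, "\n";

  while ( my @row = $sth->fetchrow_array() ) {
      print "@row\n";
  }

  # err is '' (false) after a normal end-of-fetch; a real fetch error
  # leaves a true err value and an errstr.
  die "fetch failed: ", $sth->errstr, "\n" if $sth->err;

  $dbh->disconnect();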
Changes in DBD::ASAny 1.13 7th Mar 2002, 3rd Sept 2003, 29th Oct 2003
Support 64bit UNIX platforms
Avoid compile-time warning about assign_blob()
Added /opt/sybase/SYBS* as default installation directories to search for ASA
An error was reported if execute() was called twice on the same statement
handle. Changed execute() to close a previously opened cursor using that
statement. Also, the prepared statement was being dropped during
finish() rather than during destroy, which prevented prepare_cached() from
working.
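A short sketch of the pattern this fix enables: one prepared handle executed repeatedly without an explicit finish() between executions. The DSN and table are hypothetical.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect( 'DBI:ASAny:ENG=asademo;UID=dba;PWD=sql',   # hypothetical DSN
                          undef, undef, { RaiseError => 1 } );

  # One prepared handle, executed repeatedly; with this fix the driver
  # closes the cursor left open by the previous execute() automatically.
  my $sth = $dbh->prepare( 'select id, name from employees where dept = ?' );   # hypothetical table
  for my $dept ( 100, 200, 300 ) {
      $sth->execute( $dept );
      my $row = $sth->fetchrow_arrayref();     # stop fetching early on purpose
      print defined $row ? "dept $dept first id: $row->[0]\n" : "dept $dept is empty\n";
  }
  $sth->finish();
  $dbh->disconnect();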
Updated README to warn about resource governor errors now that statement
handles are not dropped until destroy time.
Set the "state" value on the handle when an error occurs.
Fetching 64-bit integers failed when they contained more than 8 digits
because we fetch them as strings but left the length as the size of the
bigint datatype (8 bytes).
Fixed additional compiler warnings from VC7.
Added CLONE method.
Added better handling for detecting new ASA versions -- should handle
up to version 15 now (version 9 is the highest shipping version).
Implemented the blob_read function which is an undocumented DBI API.
Changed the long string/binary fetching code so that long character strings
are fetched as varchar rather than binary so that character set translation
will be applied to the string.
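A hedged usage sketch for blob_read (mentioned above) together with the standard DBI LongTruncOk attribute; the DSN, table and column are hypothetical, and the exact end-of-data behaviour of blob_read is driver-specific.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect( 'DBI:ASAny:ENG=asademo;UID=dba;PWD=sql',   # hypothetical DSN
                          undef, undef, { RaiseError => 1 } );

  # Allow an ordinary fetch to truncate the long column (LongTruncOk is the
  # standard DBI attribute; LongReadLen is its companion) and then pull the
  # complete value back in pieces with blob_read.
  $dbh->{LongTruncOk} = 1;

  my $sth = $dbh->prepare( 'select doc from documents where id = 1' );   # hypothetical table
  $sth->execute();
  $sth->fetchrow_arrayref();            # position the cursor on the row first

  my ( $offset, $piece_len, $doc ) = ( 0, 32_768, '' );
  # Field numbers are 0-based here; stop on an undefined or empty piece.
  while ( defined( my $piece = $sth->blob_read( 0, $offset, $piece_len ) ) ) {
      last if length( $piece ) == 0;
      $doc    .= $piece;
      $offset += length( $piece );
  }
  print 'read ', length( $doc ), " bytes\n";
  $sth->finish();
  $dbh->disconnect();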
Added a commit operation after a cursor is closed if AutoCommit is enabled.
Previously, a commit was only performed if a non-cursor statement was
executed. DML executed in procedures returning result sets were not
committed.
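A sketch of the scenario this entry describes, assuming a hypothetical procedure log_and_list() that both performs DML and returns a result set; with AutoCommit enabled, the commit now happens when the cursor is closed.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect( 'DBI:ASAny:ENG=asademo;UID=dba;PWD=sql',   # hypothetical DSN
                          undef, undef, { RaiseError => 1, AutoCommit => 1 } );

  # log_and_list() is a hypothetical stored procedure that inserts a row
  # and returns a result set.  The insert is committed when the cursor
  # opened for the result set is closed.
  my $sth = $dbh->prepare( 'call log_and_list()' );
  $sth->execute();
  while ( my @row = $sth->fetchrow_array() ) {
      print "@row\n";
  }
  $sth->finish();     # closes the cursor; AutoCommit issues the commit here
  $dbh->disconnect();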
Added "-r" switch to sqlpp to generate thread-safe code.
Added dbtasks library to UNIX link lines.
Link against "_r" reentrant versions of ASA libraries when using ithreads
on UNIX platforms.
Avoid warning when building with VS.NET 2003 which has deprecated the
/Gf switch (automatically added by MakeMaker). Changed it to /GF.
Allow ASANY installation directory to be specified by the "ASANY{ver}"
environment variable (ASANY9, ASANY8, etc).
output SQLDA.
Correctly handle NULL output parameters from procedures.
Removed some debug printfs that snuck into v1.10.
Corrected README to show that bind_param_inout is a supported function.
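A hedged sketch of bind_param_inout, assuming a hypothetical procedure next_order_id() with a single INOUT parameter; a NULL output value comes back as undef (see the entry above).
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect( 'DBI:ASAny:ENG=asademo;UID=dba;PWD=sql',   # hypothetical DSN
                          undef, undef, { RaiseError => 1 } );

  # next_order_id( INOUT id ) is a hypothetical procedure that rewrites its
  # argument; bind_param_inout takes a reference plus a maximum length.
  my $id  = 0;
  my $sth = $dbh->prepare( 'call next_order_id( ? )' );
  $sth->bind_param_inout( 1, \$id, 64 );
  $sth->execute();
  print 'new id: ', defined $id ? $id : 'NULL', "\n";   # NULL output arrives as undef
  $dbh->disconnect();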
Do not treat warnings as errors (err/errstr are still set but the calls
to execute/fetch will not report a failure).
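A sketch of how a caller can notice such a warning: execute() still returns true, but err/errstr may be set. The DSN and table are placeholders.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect( 'DBI:ASAny:ENG=asademo;UID=dba;PWD=sql',   # hypothetical DSN
                          undef, undef, { RaiseError => 0, PrintError => 0 } )
      or die "connect failed: $DBI::errstr\n";

  my $sth = $dbh->prepare( 'select name from employees' );   # hypothetical table
  if ( $sth->execute() ) {
      # execute() succeeded, but a warning from the engine may still have
      # populated err/errstr; report it without treating it as a failure.
      if ( defined $sth->errstr && $sth->errstr ne '' ) {
          warn 'execute warning: ', $sth->errstr, "\n";
      }
      while ( my @row = $sth->fetchrow_array() ) {
          print "@row\n";
      }
  }
  $dbh->disconnect();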
Changes in DBD::ASAny 1.11 27th March 2001
When calling a procedure or batch that returned a result set for which
the engine could not infer the data types, errors were reported. Fixed
by describing the cursor after it was opened.
Changes in DBD::ASAny 1.10 13th November 2000
Added support for AutoCommit.
Changes in DBD::ASAny 1.09 21st September 2000
Added support for type_info_all and (indirectly) quote()
Added support for the following statement attributes:
TYPE, ASATYPE, SCALE, PRECISION, NULLABLE, CursorName,
Statement, and RowsInCache.
RowsInCache always returns undef.
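A sketch reading these attributes after execute(); the DSN and table are hypothetical, and ASATYPE is the driver-specific native type code named above.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect( 'DBI:ASAny:ENG=asademo;UID=dba;PWD=sql',   # hypothetical DSN
                          undef, undef, { RaiseError => 1 } );

  my $sth = $dbh->prepare( 'select id, name from employees' );       # hypothetical table
  $sth->execute();

  # Per-column metadata; TYPE/PRECISION/SCALE/NULLABLE are array refs
  # (ASATYPE is the driver-specific native type code).
  for my $i ( 0 .. $sth->{NUM_OF_FIELDS} - 1 ) {
      printf( "%-10s type=%s precision=%s nullable=%s\n",
              $sth->{NAME}[$i], $sth->{TYPE}[$i],
              $sth->{PRECISION}[$i], $sth->{NULLABLE}[$i] );
  }
  print 'cursor name: ',   $sth->{CursorName}, "\n";
  print 'rows in cache: ', defined $sth->{RowsInCache} ? $sth->{RowsInCache} : 'undef', "\n";
  $sth->finish();
  $dbh->disconnect();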
Changes in DBD::ASAny 1.04 19th November 1999
Fixed inout parameter support.
Moved changes info from Makefile.PL to this file.
Changes in DBD::ASAny 1.03 9th November 1999
Driver now builds on HPUX, AIX, Linux with native and GNU compilers.
Dynamically resize SQLDA for bind variables or output values if the
default size is not large enough.
Avoid "memory leak" caused by dblib holding onto cursor information
so that the cursor can be reopend without being redeclared.
Changes in DBD::ASAny 1.02 21st October 1999
Driver should now be thread-safe and it builds with the
threaded version of perl; however, perl threads and/or DBI
in a threaded environment are very unstable. Simple multithreaded
tests easily crash perl.
Changes in DBD::ASAny 1.01 5th August 1999
2. The ASA DBD driver is thread-safe when using Perl ithreads.
3. Prepared statements are not dropped from the ASA server until the
statement handle is destroyed in the perl script. Calling finish()
is not sufficient to drop the handle that the server is holding
onto -- use "undef" instead or reuse the same perl variable for
another handle. To help detect handle leaks in client
applications, ASA by default limits the number of prepared
statements and cursors that any connection can hold at one time to
50 of each. If that limit is exceeded, a "Resource governor
... exceeded" error is reported. If you encounter this error, make
sure you are dropping all of your statement handles and, if so,
consult the ASA documentation for the MAX_CURSOR_COUNT and
MAX_STATEMENT_COUNT options. Be careful when using
prepare_cached() since the cache will hold onto statement handles.
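A minimal sketch of the recommended handle hygiene, assuming a DBI:ASAny: DSN: keep each statement handle in a lexical scope so it is destroyed (and the server-side statement dropped) promptly; finish() alone does not release the server resources.
  use strict;
  use warnings;
  use DBI;

  my $dbh = DBI->connect( 'DBI:ASAny:ENG=asademo;UID=dba;PWD=sql',   # hypothetical DSN
                          undef, undef, { RaiseError => 1 } );

  for my $i ( 1 .. 200 ) {
      # The handle lives only inside this block, so it is destroyed (and the
      # prepared statement dropped on the server) at the end of each
      # iteration, keeping well under the resource governor limits.
      my $sth = $dbh->prepare( "select $i from dummy" );   # hypothetical query
      $sth->execute();
      $sth->fetchrow_array();
      $sth->finish();
  }   # $sth goes out of scope here and the server-side statement is dropped

  $dbh->disconnect();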
---------------------------------------------------------------
Examples can be found in the 'eg' directory:
/* these are (almost) random values ! */
#define MAX_COLS 1025
#define BIND_VARIABLES_INITIAL_SQLDA_SIZE 100
#define OUTPUT_VARIABLES_INITIAL_SQLDA_SIZE 100
typedef char a_cursor_name[32];
#define NO_CURSOR_ID (~0UL)
#define AVAILABLE_CURSORS_GROWTH_AMOUNT 10
// SQLDA var field requires 2 bytes of length information in addition
// to the data. We must prevent sqlvar->sqllen field from overflowing
// (sqlvar->sqllen is a 16-bit signed integer)
#define MAX_DT_VARCHAR_LENGTH 32765
// When transferring a DT_STRING string, this is the max size (we
// leave space for a NULL byte).
typedef struct imp_fbh_st imp_fbh_t;
/* Define dbh implementor data structure */
// Note: only one thread may use a connection at one time
struct imp_dbh_st {
dbih_dbc_t com; /* MUST be first element in structure */
SQLCA sqlca;
SQLCA *sqlcap; // needed so server-side perl can use an
// existing connection
// We want to reuse cursor names because dblib
// holds onto cursor information in case it is reopened
// without being redeclared
unsigned long available_cursors_top;
unsigned long available_cursors_size;
unsigned long *available_cursors;
unsigned long next_cursor_id;
unsigned long next_tempvar_id;
};
struct imp_drh_st {
dbih_drc_t com; /* MUST be first element in structure */
};
struct sql_type_info {
    short int sqltype;
    short int sqllen;
};
/* Define sth implementor data structure */
struct imp_sth_st {
dbih_stc_t com; /* MUST be first element in structure */
a_sql_statement_number statement_number;
SQLDA *input_sqlda; /* Bind variables */
SQLDA *output_sqlda;
short *original_input_indicators;
struct sql_type_info *original_output_type_info;
int cursor_open;
int row_count;
char *statement; /* sql (see sth_scan) */
HV *bind_names;
int done_prepare; /* have we prepared this sth yet ? */
int done_desc; /* have we described this sth yet ? */
int long_trunc_ok; /* is truncating a long an error */
unsigned long cursor_id;
a_cursor_name cursor_name;
int has_output_params;
int statement_type;
};
#define IMP_STH_EXECUTING 0x0001
#define SQLPRES_STMT_OTHER 0
#define SQLPRES_STMT_DELETE 1
#define SQLPRES_STMT_INSERT 2
#define SQLPRES_STMT_SELECT 3
#define SQLPRES_STMT_UPDATE 4
if( uid != NULL && *uid != '\0' ) {
sqlca_provided = TRUE;
sscanf( uid, "%p", &imp_dbh->sqlcap );
} else {
imp_dbh->sqlcap = &imp_dbh->sqlca;
}
if( !sqlca_provided && !db_init( SQLCAPTR ) ) {
ssa_error( dbh, SQLCAPTR, SQLE_ERROR, "db_init failed" );
}
imp_dbh->available_cursors = NULL;
imp_dbh->available_cursors_size = 0;
imp_dbh->available_cursors_top = 0;
imp_dbh->next_cursor_id = 1;
imp_dbh->next_tempvar_id = 1;
#if defined( SQLPP_DBLIB_VERSION_SA10 )
if( !sqlca_provided ) {
db_set_property( SQLCAPTR, DB_SET_PROP_CLIENT_API, "PerlDBD" );
}
#endif
// printf( "Connect string: %s\n", conn_str );
if( !sqlca_provided ) {
/*******************************************/
{
// don't close the connection if it was opened externally
if( !EXTERNAL_CONNECTION( imp_dbh ) ) {
if( DBIc_ACTIVE( imp_dbh ) ) {
dbd_db_disconnect( dbh, imp_dbh );
}
db_fini( SQLCAPTR );
}
if( imp_dbh->available_cursors != NULL ) {
safefree( imp_dbh->available_cursors );
imp_dbh->available_cursors= NULL;
}
imp_dbh->available_cursors_size = 0;
imp_dbh->available_cursors_top = 0;
imp_dbh->next_cursor_id = NO_CURSOR_ID;
/* Nothing in imp_dbh to be freed */
DBIc_IMPSET_off( imp_dbh );
}
int
dbd_db_STORE_attrib( SV *dbh, imp_dbh_t *imp_dbh, SV *keysv, SV *valuesv )
/************************************************************************/
{
/*************************************************************************/
{
D_imp_dbh_from_sth;
EXEC SQL BEGIN DECLARE SECTION;
char *_statement;
a_sql_statement_number stmt_number;
EXEC SQL END DECLARE SECTION;
imp_sth->done_prepare = 0;
imp_sth->done_desc = 0;
imp_sth->cursor_open = 0;
imp_sth->input_sqlda = NULL;
imp_sth->output_sqlda = NULL;
imp_sth->original_input_indicators = NULL;
imp_sth->original_output_type_info = NULL;
/* scan statement for '?', ':1' and/or ':foo' style placeholders */
dbd_preparse( imp_sth, statement );
_statement = (char *)imp_sth->statement;
//PerlIO_printf( PerlIO_stderr(), "\n\nPrepare: '%s'\n\n", _statement ); fflush(stdout);
name_len = strlen(name);
} else {
name = SvPV( ph_namesv, name_len );
}
if( SvTYPE(newvalue) > SVt_PVLV ) { /* hook for later array logic */
croak( "Can't bind a non-scalar value (%s)", neatsvpv(newvalue,0) );
}
if( SvROK(newvalue) && !IS_DBI_HANDLE(newvalue) ) {
/* dbi handle allowed for cursor variables */
croak( "Can't bind a reference (%s)", neatsvpv(newvalue,0) );
}
if( SvTYPE(newvalue) == SVt_PVLV && is_inout ) { /* may allow later */
croak( "Can't bind ``lvalue'' mode scalar as inout parameter (currently)" );
}
if( DBIS->debug >= 2 ) {
PerlIO_printf( DBILOGFP, " bind %s <== %s (type %ld",
name, neatsvpv(newvalue,0), (long)sql_type );
(sqlda->sqlvar[i].sqltype&DT_TYPES) == DT_VARCHAR ) {
safefree( sqlda->sqlvar[i].sqldata );
}
sqlda->sqlvar[i].sqldata = NULL;
}
}
free_sqlda( sqlda );
}
static int
describe_cursor( SV *sth, imp_sth_t *imp_sth )
/********************************************/
{
D_imp_dbh_from_sth;
int sqlda_size;
EXEC SQL BEGIN DECLARE SECTION;
char *crsr_name;
EXEC SQL END DECLARE SECTION;
my_free_sqlda( imp_sth->output_sqlda, TRUE );
crsr_name = imp_sth->cursor_name;
sqlda_size = OUTPUT_VARIABLES_INITIAL_SQLDA_SIZE;
for(;;) {
imp_sth->output_sqlda = alloc_sqlda( sqlda_size );
EXEC SQL DESCRIBE OUTPUT FOR CURSOR :crsr_name USING DESCRIPTOR "imp_sth->output_sqlda";
if( SQLCODE ) {
ssa_error( sth, SQLCAPTR, SQLE_ERROR, "describe cursor failed" );
return( 0 );
}
if( imp_sth->output_sqlda->sqld <= imp_sth->output_sqlda->sqln ) {
break;
}
// Try again with a larger SQLDA
sqlda_size = imp_sth->output_sqlda->sqld;
free_sqlda( imp_sth->output_sqlda );
}
setup_output_sqlda( imp_sth );
return( 1 );
}
static void
build_cursor_name( a_cursor_name name, a_sql_statement_number id )
/****************************************************************/
{
sprintf( name, "CURS_%d", id );
}
static unsigned long
alloc_cursor_id( imp_dbh_t *imp_dbh )
/***********************************/
{
unsigned long cursor_id;
if( (imp_dbh->available_cursors == NULL) ||
(imp_dbh->available_cursors_top == 0) ) {
cursor_id = imp_dbh->next_cursor_id++;
} else {
cursor_id = imp_dbh->available_cursors[--imp_dbh->available_cursors_top];
}
return( cursor_id );
}
static void
free_cursor_id( imp_dbh_t *imp_dbh, unsigned long cursor_id )
/***********************************************************/
{
if( imp_dbh->available_cursors == NULL ) {
imp_dbh->available_cursors_size = AVAILABLE_CURSORS_GROWTH_AMOUNT;
imp_dbh->available_cursors = (unsigned long *)safemalloc( imp_dbh->available_cursors_size*sizeof(unsigned long) );
imp_dbh->available_cursors_top = 0;
} else if( imp_dbh->available_cursors_top >= imp_dbh->available_cursors_size ) {
imp_dbh->available_cursors_size += AVAILABLE_CURSORS_GROWTH_AMOUNT;
imp_dbh->available_cursors = (unsigned long *)saferealloc( imp_dbh->available_cursors, imp_dbh->available_cursors_size*sizeof(unsigned long) );
}
imp_dbh->available_cursors[imp_dbh->available_cursors_top++] = cursor_id;
}
static int
assign_blob( imp_dbh_t *imp_dbh, imp_sth_t *imp_sth, SV *sv, int col )
/********************************************************************/
{
struct sqlvar *var = &imp_sth->output_sqlda->sqlvar[col];
short fetched_len;
I32 max_len;
I32 len;
//printf( "col=%d, fetched_len=%d, max_len=%d\n", col, fetched_len, max_len );
//printf( "Value: %s\n", (char *)(((short *)var->sqldata)+1) );
return( FALSE );
}
if( fetched_len >= max_len ) {
sv_setpvn( sv, (char *)var->sqldata + sizeof(short), max_len );
return( TRUE );
}
crsr_name = imp_sth->cursor_name;
crsr_col = (unsigned short)(col+1);
sv_setpvn( sv, (char *)var->sqldata + sizeof(short), fetched_len );
memcpy( piece_sqlda.sqldaid, "SQLDA ", 8 );
piece_sqlda.sqldabc = sizeof(SQLDA);
piece_sqlda.sqln = 1;
piece_sqlda.sqld = 1;
// We want to make sure that we fetch non-binary blobs as long varchar so that character
// set translation will occur
piece_sqlda.sqlvar[0].sqltype = var->sqltype;
char *crsr_name;
int do_commit = FALSE;
if( !imp_sth->done_desc ) {
/* describe and allocate storage for results */
if( !dbd_describe( sth, imp_sth ) ) {
return( -2 ); /* dbd_describe already called ora_error() */
}
}
// If a cursor is still open, it must be closed before we open another
// one on the same handle.
dbd_st_finish( sth, imp_sth );
imp_sth->cursor_open = 0;
if( DBIc_NUM_FIELDS(imp_sth) == 0 || imp_sth->has_output_params ) {
// Nothing coming back -- use execute
//PerlIO_printf( PerlIO_stderr(), "Executing stmt\n" );
imp_sth->cursor_id = NO_CURSOR_ID;
stmt_number = imp_sth->statement_number;
EXEC SQL EXECUTE :stmt_number USING DESCRIPTOR "imp_sth->input_sqlda" INTO DESCRIPTOR "imp_sth->output_sqlda";
if( SQLCODE ) {
if( SQLCODE == SQLE_NOTFOUND ) {
sv_setpv( DBIc_ERR(imp_sth), "" );
return( 0 ); // No rows affected
} else {
// This error case for SQLE_TRUNCATED as well because there is no
// way to call GET DATA without a cursor.
if( SQLCODE > 0 ) {
// Just a warning
ssa_error( sth, SQLCAPTR, SQLE_ERROR, "warning during execute" );
if( DBIS->debug >= 3 ) {
PerlIO_printf( DBILOGFP, " dbd_st_execute warning, rc=%d", SQLCODE );
}
} else {
ssa_error( sth, SQLCAPTR, SQLE_ERROR, "execute failed" );
if( DBIS->debug >= 3 ) {
PerlIO_printf( DBILOGFP, " dbd_st_execute failed, rc=%d", SQLCODE );
if( imp_sth->has_output_params ) {
// Assign the new values back to the variable
assign_inout_parameters( sth, imp_sth );
}
if( DBIc_has(imp_dbh,DBIcf_AutoCommit) ) {
// Don't do the commit here -- we don't want to lose the rowcount
do_commit = TRUE;
}
imp_sth->row_count = SQLCOUNT;
} else {
//PerlIO_printf( PerlIO_stderr(), "Opening cursor\n" );
stmt_number = imp_sth->statement_number;
imp_sth->cursor_id = alloc_cursor_id( imp_dbh );
crsr_name = imp_sth->cursor_name;
build_cursor_name( crsr_name, (a_sql_statement_number)imp_sth->cursor_id );
EXEC SQL DECLARE :crsr_name CURSOR FOR :stmt_number;
if( DBIS->debug >= 2 ) {
PerlIO_printf( DBILOGFP, "Open %s (%x)\n", crsr_name, imp_sth );
}
EXEC SQL OPEN :crsr_name USING DESCRIPTOR "imp_sth->input_sqlda";
if( SQLCODE > 0 ) {
ssa_error( sth, SQLCAPTR, SQLE_ERROR, "warning during open cursor" );
} else if( SQLCODE < 0 ) {
ssa_error( sth, SQLCAPTR, SQLE_ERROR, "open cursor failed" );
return( -2 );
}
imp_sth->cursor_open = 1;
imp_sth->row_count = SQLCOUNT;
if( imp_sth->statement_type == SQLPRES_STMT_CALL ||
imp_sth->statement_type == SQLPRES_STMT_BATCH ) {
if( !describe_cursor( sth, imp_sth ) ) {
return( -2 );
}
}
}
DBIc_ACTIVE_on(imp_sth);
if( do_commit ) {
EXEC SQL COMMIT;
}
// Negative row-counts are estimates but dbperl wants a positive
char *crsr_name;
/* Check that execute() was executed successfully. This also implies */
/* that dbd_describe() executed successfully so the memory buffers */
/* are allocated and bound. */
if( !DBIc_ACTIVE(imp_sth) ) {
ssa_error( sth, NULL, SQLE_CURSOR_NOT_OPEN, "no statement executing" );
return( Nullav );
}
if( !imp_sth->cursor_open ) {
return( Nullav ); // we figured it was just an EXECUTE
}
stmt_number = imp_sth->statement_number;
crsr_name = imp_sth->cursor_name;
//printf( "Fetch %s (%x)\n", crsr_name, imp_sth ); fflush( stdout );
// EXEC SQL DECLARE :crsr_name CURSOR FOR :stmt_number;
EXEC SQL FETCH :crsr_name USING DESCRIPTOR "imp_sth->output_sqlda";
if( SQLCODE ) {
if( SQLCODE == SQLE_NOTFOUND ) {
sv_setpv( DBIc_ERR(imp_sth), "" ); /* just end-of-fetch */
return( Nullav );
} else if( SQLCODE != SQLE_TRUNCATED ) {
if( SQLCODE > 0 ) {
/* that dbd_describe() executed successfully so the memory buffers */
/* are allocated and bound. */
if( !DBIc_ACTIVE(imp_sth) ) {
if( DBIS->debug >= 3 ) {
PerlIO_printf( DBILOGFP, "blob_read on inactive handle\n" );
}
ssa_error( sth, NULL, SQLE_CURSOR_NOT_OPEN, "no statement executing" );
return( 0 );
}
if( !imp_sth->cursor_open ) {
if( DBIS->debug >= 3 ) {
PerlIO_printf( DBILOGFP, "blob_read on cursor that is not open\n" );
}
return( 0 ); // we figured it was just an EXECUTE
}
if( field >= imp_sth->output_sqlda->sqld ) {
if( DBIS->debug >= 3 ) {
PerlIO_printf( DBILOGFP, "blob_read: field number too large\n" );
}
return( 0 );
}
(piece_sqlda.sqlvar[0].sqltype&DT_TYPES) != DT_BINARY ) {
if( DBIS->debug >= 3 ) {
PerlIO_printf( DBILOGFP, "blob_read: field is neither string nor binary\n" );
}
return( 0 );
}
piece_sqlda.sqlvar[0].sqlind = &ind;
piece_sqlda.sqlvar[0].sqldata = &piece;
piece_sqlda.sqlvar[0].sqllen = (short)((sizeof(piece) - sizeof(piece.array)) + len);
crsr_name = imp_sth->cursor_name;
crsr_col = (unsigned short)(field+1);
bufsv = SvRV( destrv );
sv_setpvn( bufsv, "", 0 ); /* ensure it's writable string */
SvGROW( bufsv, (STRLEN)destoffset+len+1 ); /* SvGROW doesn't do +1 */
EXEC SQL GET DATA :crsr_name COLUMN :crsr_col OFFSET :offset USING DESCRIPTOR "&piece_sqlda";
if( SQLCODE < 0 || (SQLCODE != 0 && SQLCODE != SQLE_TRUNCATED) ) {
if( DBIS->debug >= 3 ) {
dbd_st_finish( SV *sth, imp_sth_t *imp_sth )
/******************************************/
{
dTHR;
D_imp_dbh_from_sth;
char *crsr_name;
/* Check if an explicit disconnect() or global destruction has */
/* disconnected us from the database before attempting to close. */
if( DBIc_ACTIVE(imp_dbh) ) {
if( imp_sth->cursor_open ) {
crsr_name = imp_sth->cursor_name;
//printf( "Closing %s (%x)\n", crsr_name, imp_sth ); fflush( stdout );
// EXEC SQL DECLARE :crsr_name CURSOR FOR :stmt_number;
free_cursor_id( imp_dbh, imp_sth->cursor_id );
imp_sth->cursor_id = NO_CURSOR_ID;
EXEC SQL CLOSE :crsr_name;
if( SQLCODE ) {
ssa_error( sth, SQLCAPTR, SQLE_ERROR, "close cursor failed" );
return( 0 );
}
imp_sth->cursor_open = 0;
if( DBIc_has(imp_dbh,DBIcf_AutoCommit) ) {
EXEC SQL COMMIT;
}
}
}
DBIc_ACTIVE_off(imp_sth);
return( 1 );
}
void
break;
}
}
} else if( kl == 8 && strEQ( key, "NULLABLE" ) ) {
AV *av = newAV();
retsv = newRV( sv_2mortal( (SV*)av ) );
while( --i >= 0 ) {
av_store( av, i, boolSV((imp_sth->original_output_type_info[i].sqltype&DT_NULLS_ALLOWED)?1:0) );
}
} else if( kl == 10 && strEQ( key, "CursorName" ) ) {
retsv = newSVpv( (char *)imp_sth->cursor_name, 0 );
} else if( kl == 9 && strEQ( key, "Statement" ) ) {
retsv = newSVpv( (char *)imp_sth->statement, 0 );
} else if( kl == 11 && strEQ( key, "RowsInCache" ) ) {
retsv = &sv_undef;
} else {
return( Nullsv );
}
if( cacheit ) { /* cache for next time (via DBI quick_FETCH) */
SV **svp = hv_fetch( (HV*)SvRV(sth), key, (I32)kl, 1 );
sv_free( *svp );
eg/blobs.pl
#!/usr/local/bin/perl -w
use DBI;
use strict;
my $dbh;
my $ins;
my $nrows;
my $upd;
my $cursor;
my $sth;
my $blob1;
my $blob2;
my $bloblen = 1000000;
my $i;
$| = 1;
my $connstr = 'ENG=asademo;DBN=asademo;DBF=asademo.db;UID=dba;PWD=sql';
print "Connecting to Database\n";
} else {
printf( "Inserts complete\n" );
}
$ins->finish;
undef $ins;
#
# Check the inserted values by fetching them back
#
printf( "Checking inserts\n" );
$cursor = $dbh->prepare( "select a, b from blobs" );
$cursor->execute();
$nrows = 0;
while( ($a,$b) = $cursor->fetchrow() ) {
$nrows++;
if( $a ne $blob1 && $a ne $blob2 ) {
die( "******ERROR: Fetched value for column a is incorrect: %s\n", $a );
}
if( $b ne $blob1 && $b ne $blob2 && $b ne "jcs" ) {
die( "******ERROR: Fetched value for column b is incorrect: %s\n", $b );
}
}
if( defined( $cursor->err ) && defined( $cursor->errstr ) ) {
die( "******ERROR: err %d, errstr %s\n", $cursor->err, $cursor->errstr );
} elsif( $nrows != 3 ) {
die( "******ERROR: Incorrect number of rows fetched: %d\n", $nrows );
} else {
printf( "Inserts OK\n" );
}
$cursor->finish();
#
# Do some updates
#
printf( "Doing updates\n" );
$upd = $dbh->prepare( 'update blobs set b=? where a=?' );
$upd->execute( $blob1, $blob1 ) || die( "update failed\n" );
$dbh->commit();
$upd->finish();
#
# Check updates
#
printf( "Checking updates\n" );
$cursor = $dbh->prepare( "select a, b from blobs" );
$cursor->execute();
$nrows = 0;
while( ($a,$b) = $cursor->fetchrow() ) {
$nrows++;
if( $a eq $blob1 && $b ne $blob1 ) {
die( "******ERROR: Update didn't work correctly\n" );
}
if( $a ne $blob1 && $a ne $blob2 ) {
die( "******ERROR: Fetched value for column a is incorrect\n" );
}
if( $b ne $blob1 && $b ne $blob2 && $b ne "jcs" ) {
die( "******ERROR: Fetched value for column b is incorrect\n" );
}
}
if( defined( $cursor->err ) && defined( $cursor->errstr ) ) {
die( "******ERROR: err %d, errstr %s\n", $cursor->err, $cursor->errstr );
} elsif( $nrows != 3 ) {
die( "******ERROR: Incorrect number of rows fetched: %d\n", $nrows );
} else {
printf( "Updates OK\n" );
}
$cursor->finish();
$dbh->commit();
$dbh->do( 'drop table blobs' );
$dbh->disconnect();
undef $dbh;