Revision history for Perl extension AI::NeuralNet::FastSOM.
0.19 Sat Dec 3 14:52:39 EST 2016
- fix some errant sprintf's
0.18 Sat Dec 3 14:36:03 EST 2016
- force all tests serially, not just test_dynamic
0.17 Sat Dec 3 02:43:38 EST 2016
- force test harness to test serially
- update copyright notice
- clean up tests
0.16 Sat Jan 3 05:53:12 EST 2015
- version bump - hasn't been tested in a while...
- added auto-README generation to Makefile.PL
- update copyright notice
0.15 Wed Jul 11 00:13:02 2012
- tidy up build a bit
- fixed warnings from CODE blocks using RETVAL without OUTPUT
blocks in newer perls
- yet another typemap workaround. This time we have a 5.6.2 with
a new ParseXS and an old xsubpp. I won't even mention the problem
I found in an old Test::More while tracking this down. I hope it
never becomes an issue. (Note: since this is an almost three-year-old
issue and we haven't seen any more cases, we'll assume it was
isolated to a single user to start with and the whole mess is
fixed now.)
0.14 Fri Aug 21 12:52:32 2009
- work around some sort of ExtUtils::ParseXS bug in 5.6.2,
not picking up typemap files unless specifically named "typemap"
0.13 Mon Aug 17 08:42:37 2009
- fixed perl version check in Makefile.PL
0.12 Sat Aug 15 14:24:50 2009
- will now pass -Wall -Wextra -ansi -Wdeclaration-after-statement
(locally anyway)
- wrapped newSVpvs in INT2PTR to hopefully satisfy some platforms
- bumped perl require back up to 5.8.0 for now
- defined PERL_MAGIC_tied for older perls
- changed hv_fetchs() to hv_fetch() for older perls
- hacked in defines for Newx() and friends for older perls
- changed newSVpvs() to newSVpvn() for older perls
- created separate typemap for older perls, along with Makefile.PL
modification to use it before 5.8.0
- added requirement for Storable which is non-core in older perls
- moved perl require back down to 5.6.2
0.11 Sun Aug 9 10:04:19 2009
- casting newSVpvs() to SV* to satisfy at least one platform
- added 'const char *' to typemap for older perls
- removed a few unneeded casts to internal types
- moved DESTROY methods to superclass, thus fixing missing
Hexa::DESTROY and consolidating common code
- consolidated neighbors code
- general housekeeping
0.10 Fri Aug 7 09:11:39 2009
- no longer relying on sizeof(void)
- removed a bit of old test code
- one more PTR2INT conversion
- experimentally dropped perl require to 5.6.2
- hopefully fixed a few casting problems for some platforms
0.09 Wed Aug 5 20:26:17 2009
- removed several temporary AVs in train(), fixing massive
memory leak
- removed another temp AV in _bmu_guts(), fixing another
memory leak
- added pointer <-> IV conversions, hopefully fixing tons of
warnings on platforms where ivsize != ptrsize
- added macros to speed up pointer conversions
- consolidated bmu code in ::FastSOM
0.08 Fri Jul 31 16:17:32 2009
- removed leading underscore from struct member names
- removed all // comments, just in case...
- changed all native type (int,double) to perl types (IV,NV)
- fixed a couple of instances of calling back to perl to get
stuff from c structs
- reworked Storable support
0.07 Sat Jul 25 14:18:03 2009
- clean up things a bit
- now using Atol() instead of atoi()
- now using Drand01() instead of rand()
- now using seedDrand01() instead of srand()
- fixed problem where some training vectors were skipped and others used twice
- removed non-core Data::Dumper from tests
- added tests for store/retrieve via Storable
- first public release
0.06 Wed Jul 22 12:07:25 2009
- removed AI::NN::FSOM::ARRAY, ::MAP, and ::VECTOR modules
- removed Inline::C code from remaining modules
- removed dependence on non-core parent.pm
- removed remaining Inline::C macros and INLINE.h
- moved train() into C
- now parsing input_ and output_dim parameters (finally!)
0.05 Mon Jul 20 13:20:06 2009
- re-added support for labels, originally in AI::NN::SOM
- added original AI::NN::SOM test suite (and it works!)
0.04 Sat Jul 18 16:45:27 2009
- removed dependence on Inline::C
- minor refactor
0.03 Sat Jul 18 09:30:08 2009
- created wrappers for most c-level stuff
0.02 Wed Jul 15 18:56:13 2009
- moved data structures into C structs
0.01 Thu Jul 2 09:07:01 2009
- original version; created by h2xs 1.23 with options
-AXn AI::NeuralNet::FastSOM
/*
*
* Example 2x3x2 structure
* =======================
*
* Rect---+---Map---+---Array---+---Vector---+---NV
*                  |           |            \--NV
*                  |           +---Vector---+---NV
*                  |           |            \--NV
*                  |           \--Vector---+---NV
*                  |                       \--NV
*                  \--Array---+---Vector---+---NV
*                             |            \--NV
*                             +---Vector---+---NV
*                             |            \--NV
*                             \--Vector---+---NV
*                                         \--NV
*
* References
* ==========
*
* Each of Rect, Map, Array, and Vector contains a member 'ref' which is
* an SV* pointing to an RV. The RV can be returned directly to perl-land
* after being blessed into its respective class.
*
* The RV references an SV containing an IV. The IV is set to the base
* address of its component structure. This is so the class code can know
* which instance of the class is being referred to on callback.
*
* The SV's reference count is initially set to one, representing its
* parent's ownership. If a parent dies, or a perl-land reference is
* taken of any component, the reference count should be adjusted
* accordingly.
*
* When the count reaches zero perl will call the class's DESTROY method,
* at which point we can decrease the reference count on each child and
* free the component structure.
*
* The intent of all this reference count tom-foolery is to keep the
* component structures from disappearing from underneath perl-land
* references to them. As a bonus, we get a neat destruction mechanism
* without having to reimplement OOP in C.
*/
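/*
 * To make the intended ownership rules concrete, a minimal perl-side
 * sketch (variable names hypothetical; assumes the refcounting behaves
 * as described above):
 *
 *   use AI::NeuralNet::FastSOM::Rect;
 *
 *   my $m;
 *   {
 *       my $som = AI::NeuralNet::FastSOM::Rect->new(
 *           output_dim => '2x3',
 *           input_dim  => 2,
 *       );
 *       $m = $som->map;    # perl-land reference to a component
 *   }                      # $som goes out of scope here...
 *   print $m->[0][0][0];   # ...but $m keeps the SOM_Map struct alive
 */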
/*
* SOM_Vector : holds Z NVs
*
* should be allocated:
* sizeof(SOM_Vector) + sizeof(NV)*(Z-1)
*
* this is enough space to use the 'element' member as the base of an array
* of Z NVs.
*
* the 'ref' element is a pointer to a perl RV referencing a tied array.
* a copy of 'ref' will be returned to the perl side on request, and the
* tied array interface can be used to access the members of this struct.
*
* 'Z' is of course the number of NVs in the 'element' array.
*/
typedef struct {
SV *ref;
IV Z;
NV element;
} SOM_Vector;
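/*
 * Worked example of the allocation arithmetic, using the 2x3x2 structure
 * above: Z = 2, so the formula gives sizeof(SOM_Vector) + sizeof(NV)*(2-1)
 * bytes -- 'element' plus one more NV, addressed as (&vector->element)[0]
 * and (&vector->element)[1]. Note that _make_vector() below actually
 * allocates sizeof(SOM_Vector) + Z*sizeof(NV), one NV more than this
 * formula, and zeroes the extra slot at index Z as a sentinel.
 */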
/*
* SOM_Array : holds Y ptrs to SOM_Vector thingys
*
* should be allocated:
* sizeof(SOM_Array) + sizeof(SOM_Vector*)*(Y-1)
*
* 'ref' and 'vector' elements similar in functionality to the 'ref' and
* 'element' members, respectively, of the SOM_Vector struct.
*
* 'Y' is the number of SOM_Vector pointers in the 'vector' array.
*
* 'Z' is provided here only for propagation down the line in creating
* the SOM_Vectors.
*/
typedef struct {
SV *ref;
IV Y;
IV Z;
SOM_Vector *vector;
} SOM_Array;
/*
* SOM_Map : holds X ptrs to SOM_Array thingys
*
* should be allocated:
* sizeof(SOM_Map) + sizeof(SOM_Array*)*(X-1)
*
* 'ref' and 'array' are similar in functionality to the 'ref' and 'element'
* members, respectively, of the SOM_Vector struct.
*
* 'X' is the number of SOM_Array pointers in the 'array' array.
*
* 'Y' and 'Z' are provided here only for propagation down the line in
* creation of SOM_Array and SOM_Vector structs.
*/
typedef struct {
SV *ref;
IV X;
IV Y;
IV Z;
SOM_Array *array;
} SOM_Map;
/*
* SOM_Rect : holds a ptr to a single SOM_Map thingy
*
* should be allocated:
* sizeof(SOM_Rect)
*
* this struct is the main object.
*
* 'X', 'Y', and 'Z' are held here for propagation down to the structs
* that make up our grid map.
*
* '_R' = initial SOM radius
* '_Sigma0' = ???
* '_L0' = initial SOM learning rate
*
* 'output_dim' is kept from instantiation simply because the perl interface
* already provides access to it.
*/
typedef struct {
SV *ref;
IV X;
IV Y;
IV Z;
NV R;
NV Sigma0;
NV L0;
NV LAMBDA;
NV T;
int type;
SV *output_dim;
AV *labels;
SOM_Map *map;
} SOM_GENERIC;
typedef SOM_GENERIC SOM_Rect;
typedef SOM_GENERIC SOM_Torus;
typedef SOM_GENERIC SOM_Hexa;
enum SOMType {
SOMType_Hexa,
SOMType_Rect,
SOMType_Torus
};
typedef AV AV_SPECIAL;
#ifndef PERL_MAGIC_tied
#define PERL_MAGIC_tied 'P'
#endif
#ifndef Newx
#define Newx(ptr,nitems,type) New(0,ptr,nitems,type)
#define Newxc(ptr,nitems,type,cast) Newc(0,ptr,nitems,type,cast)
#define Newxz(ptr,nitems,type) Newz(0,ptr,nitems,type)
#endif
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include <math.h>
#include "FastSOM.h"
#include "proto.h"
/*
* generic functions
*
* NOT superclass functions, but usable by all
*/
NV _vector_distance(AV* V1, AV* V2) {
NV diff,sum;
I32 w_ptr;
sum = 0;
for ( w_ptr=av_len(V2) ; w_ptr>=0 ; w_ptr-- ) {
diff = SvNV(*av_fetch(V1, w_ptr, FALSE))
- SvNV(*av_fetch(V2, w_ptr, FALSE));
sum += diff * diff;
}
return sqrt(sum);
}
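/*
 * This is plain Euclidean distance, sqrt( sum_i (V1[i]-V2[i])^2 ),
 * exposed to perl as AI::NeuralNet::FastSOM::Utils::vector_distance()
 * (see the XS section below). A usage sketch:
 *
 *   use AI::NeuralNet::FastSOM::Utils;
 *   # sqrt((3-0)^2 + (2-4)^2 + (4-(-3))^2) = sqrt(62)
 *   my $d = AI::NeuralNet::FastSOM::Utils::vector_distance([3,2,4], [0,4,-3]);
 */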
void _bmuguts(SOM_GENERIC *som,AV *sample,IV *bx,IV *by,NV *bd) {
IV x,y,z,X,Y,Z;
NV sum,diff,distance;
SOM_Map *map;
SOM_Array *array;
SOM_Vector *vector;
map = som->map;
X = som->X;
Y = som->Y;
Z = som->Z;
*bx = -1;
*by = 0;
*bd = 0.0;
for ( x=0 ; x<X ; x++ ) {
array = (&map->array)[x];
for ( y=0 ; y<Y ; y++ ) {
vector = (&array->vector)[y];
sum = 0;
for ( z=0 ; z<Z ; z++ ) {
diff = SvNV(*av_fetch(sample,z,0))
- (&vector->element)[z];
sum += diff * diff;
}
distance = sqrt(sum);
if ( *bx < 0 )
{ *bx = 0; *by = 0; *bd = distance; }
if ( distance < *bd )
{ *bx = x; *by = y; *bd = distance; }
}
}
}
/* http://www.ai-junkie.com/ann/som/som4.html */
void _adjust(SV* self,NV l,NV sigma,AV* unit,AV* v) {
IV x,y;
I32 z,Z;
NV d,theta,vold,wold;
MAGIC *mg;
SOM_Map *map;
SOM_Array *array;
SOM_Vector *vector;
SOM_GENERIC *som;
x = SvIV(*av_fetch(unit, 0, FALSE));
y = SvIV(*av_fetch(unit, 1, FALSE));
d = SvNV(*av_fetch(unit, 2, FALSE));
theta = exp( -d*d/2/sigma/sigma );
if ( !(mg = selfmagic(self)) )
croak("self has no magic!\n");
som = self2somptr(self,mg);
map = som->map;
array = (&map->array)[x];
vector = (&array->vector)[y];
/* hmm.. casting IV to I32.. is that sane? */
Z = (I32)som->Z;
for ( z=0 ; z<Z ; z++ ) {
wold = (&vector->element)[z];
vold = SvNV(*av_fetch(v,z,FALSE));
(&vector->element)[z] = (vold - wold) * l * theta + wold;
}
}
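/*
 * For reference, the per-element update above is the standard Kohonen
 * rule: w_new = w_old + l * theta * (v - w_old), where
 * theta = exp( -d^2 / (2*sigma^2) ) is the neighborhood falloff for a
 * unit at distance d from the BMU, l is the current learning rate, and
 * v is the training vector (cf. the ai-junkie page cited above).
 */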
void _adjustn(SOM_GENERIC* som,NV l,NV sigma,NV* n,AV* v) {
IV x,y,X,Y;
I32 z,Z;
NV d,theta,vold,wold;
SOM_Map *map;
SOM_Array *array;
SOM_Vector *vector;
map = som->map;
X = som->X;
Y = som->Y;
for ( x=0 ; x<X ; x++ ) {
array = (&map->array)[x];
for ( y=0 ; y<Y ; y++ ) {
d = n[x*Y+y];
if (d < 0) continue;
theta = exp( -d*d/2/sigma/sigma );
vector = (&array->vector)[y];
/* hmm.. casting IV to I32.. is that sane? */
Z = (I32)som->Z;
for ( z=0 ; z<Z ; z++ ) {
wold = (&vector->element)[z];
vold = SvNV(*av_fetch(v,z,FALSE));
(&vector->element)[z] =
(vold - wold) * l * theta + wold;
}
}
}
}
AV* _neighbors(SV* self,NV sigma,IV X0,IV Y0,...) {
IV i,x,y,X,Y;
NV distance,*n;
AV *tmp,*neighbors;
MAGIC *mg;
SOM_GENERIC *som;
void (*neiguts)(SOM_GENERIC* som,NV sigma,IV X0,IV Y0,NV *n);
if ( !(mg = selfmagic(self)) )
croak("self has no magic!\n");
som = self2somptr(self,mg);
X = som->X;
Y = som->Y;
i = X*Y;
Newx(n,i,NV);
for ( i-=1 ; i>=0 ; i-- )
n[i] = -1;
if ( som->type == SOMType_Torus )
neiguts = _torus_neiguts;
else if ( som->type == SOMType_Hexa )
neiguts = _hexa_neiguts;
else if ( som->type == SOMType_Rect )
neiguts = _rect_neiguts;
else
croak("unknown type");
neiguts(som,sigma,X0,Y0,n);
neighbors = newAV();
for ( x=0 ; x<X ; x++ ) {
for ( y=0 ; y<Y ; y++ ) {
distance = n[x*Y+y];
if ( distance >= 0 ) {
tmp = newAV();
av_push(tmp,newSViv(x));
av_push(tmp,newSViv(y));
av_push(tmp,newSVnv(distance));
av_push(neighbors,newRV_noinc((SV*)tmp));
}
}
}
Safefree(n);
return neighbors;
}
SOM_Vector* _make_vector(SOM_Array* array) {
IV z,len;
AV *thingy;
SV *tie;
HV *stash;
SOM_Vector *vector;
z = array->Z;
len = sizeof(SOM_Vector)+z*sizeof(NV);
Newxc(vector, len, char, SOM_Vector);
Zero(vector, len, char);
vector->Z = z;
thingy = newAV();
tie = newRV_noinc(newSViv(PTR2IV(vector)));
stash = gv_stashpv("AI::NeuralNet::FastSOM::VECTOR", GV_ADD);
sv_bless(tie, stash);
hv_magic((HV*)thingy, (GV*)tie, 'P');
vector->ref = newRV_noinc((SV*)thingy);
(&vector->element)[z] = 0.0;
for ( z-=1 ; z>=0 ; z-- ) {
(&vector->element)[z] = 0.0;
}
return vector;
}
SOM_Array* _make_array(SOM_Map* map) {
IV y,len;
AV *thingy;
SV *tie;
HV *stash;
SOM_Array *array;
y = map->Y;
len = sizeof(SOM_Array)+y*sizeof(SOM_Vector*);
Newxc(array, len, char, SOM_Array);
Zero(array, len, char);
array->Y = y;
array->Z = map->Z;
thingy = newAV();
tie = newRV_noinc(newSViv(PTR2IV(array)));
stash = gv_stashpv("AI::NeuralNet::FastSOM::ARRAY", GV_ADD);
sv_bless(tie, stash);
hv_magic((HV*)thingy, (GV*)tie, PERL_MAGIC_tied);
array->ref = newRV_noinc((SV*)thingy);
(&array->vector)[y] = NULL;
for ( y-=1 ; y>=0 ; y-- )
(&array->vector)[y] = _make_vector( array );
return array;
}
SOM_Map* _make_map(SOM_GENERIC *som) {
IV x,len;
AV *thingy;
SV *tie;
HV *stash;
SOM_Map *map;
x = som->X;
len = sizeof(SOM_Map)+x*sizeof(SOM_Array*);
Newxc(map, len, char, SOM_Map);
Zero(map, len, char);
map->X = x;
map->Y = som->Y;
map->Z = som->Z;
thingy = newAV();
tie = newRV_noinc(newSViv(PTR2IV(map)));
stash = gv_stashpv("AI::NeuralNet::FastSOM::MAP", GV_ADD);
sv_bless(tie, stash);
hv_magic((HV*)thingy, (GV*)tie, PERL_MAGIC_tied);
map->ref = newRV_noinc((SV*)thingy);
(&map->array)[x] = NULL;
for ( x-=1 ; x>=0 ; x-- )
(&map->array)[x] = _make_array( map );
return map;
}
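/*
 * From the perl side, the three tied layers built by _make_map(),
 * _make_array(), and _make_vector() read like nested array refs; a
 * minimal sketch (assuming a constructed $som):
 *
 *   my $m   = $som->map;   # ::MAP object, via _som_map()
 *   my $row = $m->[0];     # ::ARRAY object, via _map_FETCH()
 *   my $vec = $row->[1];   # ::VECTOR object, via _array_FETCH()
 *   my $nv  = $vec->[2];   # plain NV, via _vector_FETCH()
 */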
/*
* som functions
*/
void _som_bmu(SV* self, AV* sample) {
IV cx,cy;
NV cd;
MAGIC *mg;
SOM_GENERIC *som;
dXSARGS;
if ( !(mg = selfmagic(self)) )
croak("self has no magic!\n");
som = self2somptr(self,mg);
_bmuguts(som,sample,&cx,&cy,&cd);
PERL_UNUSED_VAR(items); /* -W */
sp = mark;
XPUSHs(sv_2mortal(newSViv(cx)));
XPUSHs(sv_2mortal(newSViv(cy)));
XPUSHs(sv_2mortal(newSVnv(cd)));
PUTBACK;
}
SV* _som_map(SV* self) {
MAGIC *mg;
SOM_GENERIC *som;
if ( !(mg = selfmagic(self)) )
croak("self has no magic!\n");
som = self2somptr(self,mg);
SvREFCNT_inc(som->map->ref);
return som->map->ref;
}
SV* _som_output_dim(SV* self) {
MAGIC *mg;
SOM_GENERIC *som;
if ( !(mg = selfmagic(self)) )
croak("self has no magic!\n");
som = self2somptr(self,mg);
SvREFCNT_inc(som->output_dim);
return som->output_dim;
}
void _som_train(SV* self,IV epochs) {
IV i,X,Y,bx,by,epoch;
NV bd,l,sigma,*n;
AV **org,**veg,*sample;
I32 p,pick,nitems,oitems,vitems;
MAGIC *mg;
SOM_GENERIC *som;
bool wantarray;
void (*neiguts)(SOM_GENERIC* som,NV sigma,IV X0,IV Y0,NV* n);
dXSARGS;
if ( !(mg = selfmagic(self)) )
croak("self has no magic!");
som = self2somptr(self,mg);
if ( epochs < 1 )
epochs = 1;
if ( items < 3 )
croak("no data to learn");
oitems = items - 2;
Newx(org,oitems,AV*);
Newx(veg,oitems,AV*);
for ( i=2 ; i<items ; i++ )
if ( SvTYPE(SvRV(ST(i))) != SVt_PVAV )
croak("training item %i is not an array ref", (int)i);
else
org[i-2] = (AV*)SvRV(ST(i));
som->LAMBDA = epochs / log( som->Sigma0 );
X = som->X;
Y = som->Y;
nitems = X*Y;
Newx(n,nitems,NV);
if ( som->type == SOMType_Torus )
neiguts = _torus_neiguts;
else if ( som->type == SOMType_Hexa )
neiguts = _hexa_neiguts;
else if ( som->type == SOMType_Rect )
neiguts = _rect_neiguts;
else
croak("unknown type");
wantarray = GIMME_V == G_ARRAY ? TRUE : FALSE;
/* should this be moved somewhere more global? */
if ( !PL_srand_called ) {
seedDrand01((Rand_seed_t)(time(NULL)+PerlProc_getpid()));
PL_srand_called = TRUE;
}
sp = mark;
for ( epoch=1 ; epoch<=epochs ; epoch++ ) {
som->T = epoch;
sigma = som->Sigma0 * exp(-epoch / som->LAMBDA);
l = som->L0 * exp(-epoch / epochs);
Copy(org,veg,oitems,AV*);
vitems = oitems;
while ( vitems > 0 ) {
pick = (I32)(Drand01() * vitems);
sample = (AV*)veg[pick];
/* optimize me! */
for ( p=pick+1 ; p<vitems ; p++ ) veg[p-1] = veg[p];
vitems--;
_bmuguts(som,sample,&bx,&by,&bd);
if ( wantarray ) XPUSHs(newSVnv(bd));
for ( i=0 ; i<nitems ; i++ ) n[i] = -1;
neiguts(som,sigma,bx,by,n);
_adjustn(som,l,sigma,n,sample);
}
}
Safefree(n);
Safefree(org);
Safefree(veg);
PUTBACK;
}
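/*
 * The schedules coded above are sigma(t) = Sigma0 * exp(-t/LAMBDA), with
 * LAMBDA = epochs/log(Sigma0), and l(t) = L0 * exp(-t/epochs). From perl
 * this is all driven by a plain train() call; a minimal sketch (training
 * vectors made up):
 *
 *   my $som = AI::NeuralNet::FastSOM::Rect->new(
 *       output_dim => '5x6',
 *       input_dim  => 3,
 *   );
 *   $som->initialize;
 *   # in list context, train() returns the BMU distance for each sample
 *   my @errors = $som->train(400, [3,2,4], [-1,-1,-1], [0,4,-3]);
 */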
void _som_FREEZE(SV* self,SV* cloning) {
IV x,y,z;
MAGIC *mg;
SOM_Map *m;
SOM_Array *a;
SOM_Vector *v;
SOM_GENERIC *som;
dXSARGS;
PERL_UNUSED_VAR(items); /* -W */
sp = mark;
if ( !SvTRUE(cloning) ) {
if ( (mg = selfmagic(self)) != NULL) {
/*
* we should get here on the first pass. this is where we
* serialize the hash seen from perl.
*/
XPUSHs(INT2PTR(SV*,newSVpvn("i wanna be a cowboy",19)));
}
else if ( SvTYPE(SvRV(self)) == SVt_PVMG ) {
/*
* this should be the second pass. here we need to serialize
* the tied part not seen from the perl side.
*/
som = INT2PTR(SOM_GENERIC*,self2iv(self));
XPUSHs( INT2PTR(SV*,newSVpvn(
"beat me whip me make me code badly",34)) );
XPUSHs( newRV_noinc(newSViv(som->type)) );
XPUSHs( newRV_noinc(newSViv(som->X)) );
XPUSHs( newRV_noinc(newSViv(som->Y)) );
XPUSHs( newRV_noinc(newSViv(som->Z)) );
XPUSHs( newRV_noinc(newSVnv(som->R)) );
XPUSHs( newRV_noinc(newSVnv(som->Sigma0)) );
XPUSHs( newRV_noinc(newSVnv(som->L0)) );
XPUSHs( newRV_noinc(newSVnv(som->LAMBDA)) );
XPUSHs( newRV_noinc(newSVnv(som->T)) );
XPUSHs( newRV_noinc(som->output_dim) );
XPUSHs( newRV_noinc((SV*)som->labels) );
m = som->map;
for ( x=som->X-1 ; x>=0 ; x-- ) {
a = (&m->array)[x];
for ( y=som->Y-1 ; y>=0 ; y-- ) {
v = (&a->vector)[y];
for ( z=som->Z-1 ; z>=0 ; z-- ) {
XPUSHs(newRV_noinc(newSVnv(
(&v->element)[z])));
}
}
}
}
else {
croak("i wanna run with scissors!");
}
} /* cloning */
PUTBACK;
}
void _som_THAW(SV* self,SV* cloning,SV* serialized) {
IV x,y,z,i;
SV *rrr;
HV *stash;
SOM_Map *m;
SOM_Array *a;
SOM_Vector *v;
SOM_GENERIC *som;
dXSARGS;
PERL_UNUSED_VAR(serialized); /* -W */
if (!SvTRUE(cloning)) {
if ( SvTYPE(SvRV(self)) == SVt_PVMG ) {
Newxz(som,1,SOM_GENERIC);
som->type = SvIV(SvRV(ST(3)));
som->X = SvIV(SvRV(ST(4)));
som->Y = SvIV(SvRV(ST(5)));
som->Z = SvIV(SvRV(ST(6)));
som->R = SvNV(SvRV(ST(7)));
som->Sigma0 = SvNV(SvRV(ST(8)));
som->L0 = SvNV(SvRV(ST(9)));
som->LAMBDA = SvNV(SvRV(ST(10)));
som->T = SvNV(SvRV(ST(11)));
som->output_dim = newSVsv(SvRV(ST(12)));
som->labels = (AV*)SvRV(ST(13));
som->map = _make_map( som );
i = 14;
m = som->map;
for ( x=som->X-1 ; x>=0 ; x-- ) {
a = (&m->array)[x];
for ( y=som->Y-1 ; y>=0 ; y-- ) {
v = (&a->vector)[y];
for ( z=som->Z-1 ; z>=0 ; z-- ) {
/*
(&v->element)[z] =
SvNV(SvRV(ST(i++)));
*/
rrr = SvRV(ST(i++));
(&v->element)[z] = SvNV(rrr);
}
}
}
SvSetSV( SvRV(self), sv_2mortal(newSViv((IV)PTR2IV(som))) );
stash = SvSTASH(SvRV(self));
som->ref = sv_bless(newRV_inc((SV*)self),stash);
}
else if ( SvTYPE(SvRV(self)) != SVt_PVHV )
croak("you'll put an eye out!");
} /* cloning */
PERL_UNUSED_VAR(items); /* -W */
sp = mark;
PUTBACK;
}
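/*
 * These two hooks are what make Storable round-trips work; a usage
 * sketch (file name hypothetical, cf. examples/load_save.pl below):
 *
 *   use Storable qw(store retrieve);
 *   store($som, '/tmp/som.bin');         # STORABLE_freeze: hash pass, then tied pass
 *   my $som2 = retrieve('/tmp/som.bin'); # STORABLE_thaw rebuilds the C structs
 */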
SV* _som_FETCH(SV* self,SV* key) {
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("map",3) ) ) ) {
SOM_GENERIC *som = INT2PTR(SOM_Rect*,self2iv(self));
SvREFCNT_inc(som->map->ref);
return som->map->ref;
}
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_X",2) ) ) )
return newSViv(tied2ptr(self)->X);
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_Y",2) ) ) )
return newSViv(tied2ptr(self)->Y);
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_Z",2) ) ) )
return newSViv(tied2ptr(self)->Z);
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_R",2) ) ) )
return newSVnv(tied2ptr(self)->R);
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_L0",3) ) ) )
return newSVnv(tied2ptr(self)->L0);
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_Sigma0",7) ) ) )
return newSVnv(tied2ptr(self)->Sigma0);
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("output_dim",10) ) ) )
return newSVsv(tied2ptr(self)->output_dim);
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("LAMBDA",6) ) ) )
return newSVnv(tied2ptr(self)->LAMBDA);
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("T",1) ) ) )
return newSVnv(tied2ptr(self)->T);
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("labels",6) ) ) )
return newRV_inc((SV*)(tied2ptr(self)->labels));
croak("%s not accessible for read", SvPV_nolen(key));
}
SV* _som_STORE(SV* self,SV* key,SV* val) {
if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_X",2) ) ) )
tied2ptr(self)->X = SvIV(val);
else if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_Y",2) ) ) )
tied2ptr(self)->Y = SvIV(val);
else if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_Z",2) ) ) )
tied2ptr(self)->Z = SvIV(val);
else if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_R",2) ) ) )
tied2ptr(self)->R = SvNV(val);
else if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_L0",3) ) ) )
tied2ptr(self)->L0 = SvNV(val);
else if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("_Sigma0",7) ) ) )
tied2ptr(self)->Sigma0 = SvNV(val);
else if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("output_dim",10) ) ) )
tied2ptr(self)->output_dim = newSVsv(val);
else if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("LAMBDA",6) ) ) )
tied2ptr(self)->LAMBDA = SvNV(val);
else if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("T",1) ) ) )
tied2ptr(self)->T = SvNV(val);
else if ( !sv_cmp( key, INT2PTR(SV*,newSVpvn("map",3) ) ) )
croak("cant assign to map");
else
croak("%s not accessible for write", SvPV_nolen(key));
return val;
}
SV* _som_FIRSTKEY() {
return INT2PTR(SV*,newSVpvn("_X",2));
}
SV* _som_NEXTKEY(SV* prev) {
if ( strEQ( SvPVX(prev), "_X" ) )
return INT2PTR(SV*,newSVpvn("_Y",2));
else if ( strEQ( SvPVX(prev), "_Y" ) )
return INT2PTR(SV*,newSVpvn("_Z",2));
else if ( strEQ( SvPVX(prev), "_Z" ) )
return INT2PTR(SV*,newSVpvn("_R",2));
else if ( strEQ( SvPVX(prev), "_R" ) )
return INT2PTR(SV*,newSVpvn("_Sigma0",7));
else if ( strEQ( SvPVX(prev), "_Sigma0" ) )
return INT2PTR(SV*,newSVpvn("_L0",3));
else if ( strEQ( SvPVX(prev), "_L0" ) )
return INT2PTR(SV*,newSVpvn("LAMBDA",6));
else if ( strEQ( SvPVX(prev), "LAMBDA" ) )
return INT2PTR(SV*,newSVpvn("T",1));
else if ( strEQ( SvPVX(prev), "T" ) )
return INT2PTR(SV*,newSVpvn("labels",6));
else if ( strEQ( SvPVX(prev), "labels" ) )
return INT2PTR(SV*,newSVpvn("map",3));
return &PL_sv_undef;
}
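/*
 * Together, FETCH/STORE/FIRSTKEY/NEXTKEY back the object's tied hash, so
 * ordinary hash syntax works from perl; a sketch:
 *
 *   print $som->{_X};        # via _som_FETCH
 *   $som->{_Sigma0} = 2.0;   # via _som_STORE
 *   my @keys = keys %$som;   # walks _som_FIRSTKEY/_som_NEXTKEY
 *   # $som->{map} = [...] would croak: the map key is read-only
 */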
void _som_DESTROY(SV* self) {
IV iv;
SV *ref;
SOM_Map *map;
SOM_GENERIC *som;
if ( !SvROK(self) )
return;
ref = SvRV(self);
if ( !SvIOK(ref) )
return;
iv = SvIV(ref);
som = INT2PTR(SOM_GENERIC*,iv);
if ( !som )
return;
map = som->map;
/* more to do here ? */
}
/*
* rect functions
*/
void _rect_neiguts(SOM_Rect* som,NV sigma,IV X0,IV Y0,NV* n) {
IV x,y,X,Y;
NV d2,s2;
X = som->X;
Y = som->Y;
s2 = sigma * sigma;
for ( x=0 ; x<X ; x++ ) {
for ( y=0 ; y<Y ; y++ ) {
d2 = (x-X0)*(x-X0)+(y-Y0)*(y-Y0);
if (d2 <= s2) n[x*Y+y] = sqrt(d2);
}
}
}
void _rect_new(const char* class,...) {
IV i;
NV sigma0,rate;
SV *tie,*rv,*key,*val,*od,*sclass;
HV *options,*hash,*stash;
char *begptr,*endptr,*xstart,*ystart,*yend;
STRLEN len;
SOM_GENERIC *som;
dXSARGS;
if ( (items & 1) ^ 1 )
croak( "Weird number of arguments\n" );
options = newHV();
for ( i=1 ; i<items ; i+=2 ) {
key = ST(i);
val = ST(i+1);
len = sv_len(key);
hv_store( options, SvPV_nolen(key), len, val, 0 );
}
if ( !hv_exists( options, "input_dim", 9 ) )
croak( "no input_dim argument" );
if ( !hv_exists( options, "output_dim", 10 ) )
croak( "no output_dim argument" );
Newxz(som,1,SOM_GENERIC);
od = newSVsv(*hv_fetch(options,"output_dim",10,FALSE));
som->output_dim = od;
begptr = SvPV_force(od,SvLEN(od));
endptr = SvEND(od) - 1; /* allow for terminating character */
if ( endptr < begptr )
croak("brain damage!!!");
xstart = begptr;
if ( !isDIGIT((char)*xstart) )
croak("no x-dimension found");
som->X = Atol(xstart);
ystart = yend = endptr;
if ( !isDIGIT((char)*ystart) )
croak("no y-dimension found");
while (--ystart >= begptr)
if ( !isDIGIT((char)*ystart) )
break;
som->Y = Atol(++ystart);
som->Z = SvIV(*hv_fetch(options,"input_dim",9,FALSE));
som->R = som->X > som->Y
? som->Y / 2.0
: som->X / 2.0;
if ( hv_exists( options, "sigma0", 6 ) ) {
sigma0 = SvNV(*hv_fetch(options,"sigma0",6,0));
if ( sigma0 )
som->Sigma0 = sigma0;
else
som->Sigma0 = som->R;
}
else
som->Sigma0 = som->R;
if ( hv_exists( options, "learning_rate", 13 ) ) {
rate = SvNV(*hv_fetch(options,"learning_rate",13,0));
if ( rate )
som->L0 = rate;
else
som->L0 = 0.1;
}
else
som->L0 = 0.1;
som->map = _make_map(som);
som->labels = newAV();
sclass = sv_2mortal(newSVpvf("%s",class));
if (!sv_cmp(sclass,INT2PTR(
SV*,newSVpvn("AI::NeuralNet::FastSOM::Rect",28))))
som->type = SOMType_Rect;
/*
else if (!sv_cmp(sclass,INT2PTR(
SV*,newSVpvn("AI::NeuralNet::FastSOM::Hexa",28))))
som->type = SOMType_Hexa;
*/
else if (!sv_cmp(sclass,INT2PTR(
SV*,newSVpvn("AI::NeuralNet::FastSOM::Torus",29))))
som->type = SOMType_Torus;
else
croak("unknown type");
hash = (HV*)sv_2mortal((SV*)newHV());
tie = newRV_noinc(newSViv(PTR2IV(som)));
stash = gv_stashpv(class, GV_ADD);
sv_bless(tie, stash);
hv_magic(hash, (GV*)tie, PERL_MAGIC_tied);
rv = sv_bless(newRV_noinc((SV*)hash),stash);
som->ref = rv;
/*
* here 'hash' is the object seen from the perl side.
* 'tie' is what we see from the c side when accessing the tied
* functionality.
*/
sp = mark;
XPUSHs(rv);
PUTBACK;
}
SV* _rect_radius(SV* self) {
MAGIC *mg;
SOM_GENERIC *som;
if ( !(mg = selfmagic(self)) )
croak("self has no magic!\n");
som = self2somptr(self,mg);
return newSVnv(som->R);
}
/*
* torus functions
*/
void _torus_neiguts(SOM_Torus* som,NV sigma,IV X0,IV Y0,NV* n) {
IV x,y,X,Y;
NV d2,s2;
X = som->X;
Y = som->Y;
s2 = sigma * sigma;
for ( x=0 ; x<X ; x++ ) {
for ( y=0 ; y<Y ; y++ ) {
/*
* which one of these should "win"?
*/
d2 = (x-X0)*(x-X0) + (y-Y0)*(y-Y0);
if (d2 <= s2) n[x*Y+y] = sqrt(d2);
d2 = (x-X-X0)*(x-X-X0) + (y-Y0)*(y-Y0);
if (d2 <= s2) n[x*Y+y] = sqrt(d2);
d2 = (x+X-X0)*(x+X-X0) + (y-Y0)*(y-Y0);
if (d2 <= s2) n[x*Y+y] = sqrt(d2);
d2 = (x-X0)*(x-X0) + (y-Y-Y0)*(y-Y-Y0);
if (d2 <= s2) n[x*Y+y] = sqrt(d2);
d2 = (x-X0)*(x-X0) + (y+Y-Y0)*(y+Y-Y0);
if (d2 <= s2) n[x*Y+y] = sqrt(d2);
}
}
}
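/*
 * On the "which should win" question above: the intent is the toroidal
 * metric, where the per-axis separation is min(|x-x0|, X-|x-x0|), so the
 * five candidates cover the unwrapped position and its copies shifted by
 * +/-X and +/-Y. As written, a later candidate overwrites an earlier one
 * rather than taking the minimum; with sigma at most half the grid size
 * (the constructor's default radius) at most one candidate is normally
 * in range, so this rarely matters.
 */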
/* http://www.ai-junkie.com/ann/som/som3.html */
/*
* hexa functions
*/
void _hexa_new(const char* class) {
IV i;
SV *key,*val,*od,*tie,*rv;
NV sigma0,rate;
HV *options,*hash,*stash;
STRLEN len;
SOM_Hexa *hexa;
dXSARGS;
if ( (items & 1) ^ 1 )
croak( "Weird number of arguments\n" );
options = newHV();
for ( i=1 ; i<items ; i+=2 ) {
key = ST(i);
val = ST(i+1);
len = sv_len(key);
hv_store( options, SvPV_nolen(key), len, val, 0 );
}
if ( !hv_exists( options, "input_dim", 9 ) )
croak( "no input_dim argument" );
if ( !hv_exists( options, "output_dim", 10 ) )
croak( "no ouput_dim argument" );
Newxz(hexa,1,SOM_Hexa);
od = newSVsv(*hv_fetch(options,"output_dim",10,FALSE));
hexa->output_dim = od;
hexa->X=SvIV(*hv_fetch(options,"output_dim",10,FALSE));
hexa->Z=SvIV(*hv_fetch(options,"input_dim",9,FALSE));
hexa->Y = hexa->X;
hexa->R = hexa->X / 2.0;
if ( hv_exists( options, "sigma0", 6 ) ) {
sigma0 = SvNV(*hv_fetch(options,"sigma0",6,0));
if ( sigma0 )
hexa->Sigma0 = sigma0;
else
hexa->Sigma0 = hexa->R;
}
else
hexa->Sigma0 = hexa->R;
if ( hv_exists( options, "learning_rate", 13 ) ) {
rate = SvNV(*hv_fetch(options,"learning_rate",13,0));
if ( rate )
hexa->L0 = rate;
else
hexa->L0 = 0.1;
}
else
hexa->L0 = 0.1;
hexa->map = _make_map( hexa );
hexa->labels = newAV();
hexa->type = SOMType_Hexa;
hash = (HV*)sv_2mortal((SV*)newHV());
tie = newRV_noinc(newSViv(PTR2IV(hexa)));
stash = gv_stashpv(class, GV_ADD);
sv_bless(tie, stash);
hv_magic(hash, (GV*)tie, PERL_MAGIC_tied);
rv = sv_bless(newRV_noinc((SV*)hash),stash);
hexa->ref = rv;
sp = mark;
XPUSHs(rv);
PUTBACK;
}
NV _hexa_distance(NV x1,NV y1,NV x2,NV y2) {
NV tmp,dx,dy;
if ( x1+y1 > x2+y2 ) {
tmp=x1; x1=x2; x2=tmp;
tmp=y1; y1=y2; y2=tmp;
}
dx = x2 - x1;
dy = y2 - y1;
if ( dx<0 || dy<0 )
return fabs(dx) + fabs(dy); /* fabs, not abs: dx and dy are NVs (doubles) */
else
return dx<dy ? dy : dx;
}
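/*
 * Distance here follows this grid's hex convention (see the neighbors
 * tests below): the six neighbors of (x,y) are (x+/-1,y), (x,y+/-1), and
 * (x+/-1,y+/-1) with matching signs. So when dx and dy share a sign, a
 * diagonal step advances both at once and the distance is max(dx,dy);
 * when the signs differ, each step helps only one axis, giving |dx|+|dy|.
 */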
void _hexa_neiguts(SOM_Hexa* som,NV sigma,IV X0,IV Y0,NV* n) {
IV x,y,X,Y;
NV d;
X = som->X;
Y = som->Y;
for ( x=0 ; x<X ; x++ ) {
for ( y=0 ; y<Y ; y++ ) {
d = _hexa_distance(X0,Y0,x,y);
if (d <= sigma) n[x*Y+y] = d;
}
}
}
/*
* map functions
*/
SV* _map_FETCH(SV* self,I32 x) {
SOM_Map *map;
SOM_Array *array;
map = INT2PTR(SOM_Map*,self2iv(self));
array = (&map->array)[x];
SvREFCNT_inc(array->ref);
return array->ref;
}
void _map_DESTROY(SV* self) {
SOM_Map *map;
map = INT2PTR(SOM_Map*,self2iv(self));
/* need more done here ? */
Safefree( map );
}
/*
* array functions
*/
void _array_STORE(SV* self,IV y,SV* aref) {
I32 len;
NV tmp;
AV *src;
SV **ptr;
SOM_Array *array;
SOM_Vector *dst;
if ( SvTYPE( SvRV(aref) ) != SVt_PVAV )
croak("value to store is not a reference to an array\n");
src = (AV*)SvRV( aref );
array = INT2PTR(SOM_Array*,self2iv(self));
dst = (&array->vector)[y];
if ( y < 0 )
croak("storing y-index < 0 not supported\n");
if ( y >= array->Y )
croak("storing y-index > y-dimension of SOM\n");
len = av_len( src );
if ( len < 0 )
croak("cant store empty vector\n");
if ( len+1 > array->Z )
croak("vector too long\n");
if ( len+1 < array->Z )
croak("vector too short\n");
for ( ; len >= 0 ; len-- ) {
ptr = av_fetch( src, len, 0 );
if ( ptr == NULL )
croak("NULL ptr!\n");
tmp = SvNV(*ptr);
(&dst->element)[len] = tmp;
}
}
SV* _array_FETCH(SV* self,I32 y) {
SOM_Array *array;
SOM_Vector *vector;
array = INT2PTR(SOM_Array*,self2iv(self));
vector = (&array->vector)[y];
SvREFCNT_inc(vector->ref);
return vector->ref;
}
void _array_DESTROY(SV* self) {
SOM_Array *array;
array = INT2PTR(SOM_Array*,self2iv(self));
/* need more done here ? */
Safefree( array );
}
/*
* vector functions
*/
void _vector_STORE(SV* self,I32 z,NV val) {
SOM_Vector *vector;
vector = INT2PTR(SOM_Vector*,self2iv(self));
if ( z < 0 )
croak("negative z-index not supported\n");
if ( z >= vector->Z )
croak("z-index larger than vector dimension\n");
(&vector->element)[z] = val;
}
SV* _vector_FETCH(SV* self,I32 z) {
SOM_Vector *vector;
vector = INT2PTR(SOM_Vector*,self2iv(self));
return newSVnv((&vector->element)[z]);
}
void _vector_DESTROY(SV* self) {
SOM_Vector *vector;
vector = INT2PTR(SOM_Vector*,self2iv(self));
/* need more done here ? */
Safefree( vector );
}
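/*
 * Writes from perl route through these tie methods; a sketch (values
 * made up):
 *
 *   $som->map->[0][1] = [0.5, -0.5, 0.1];  # whole vector: _array_STORE
 *                                          # (length must equal input_dim)
 *   $som->map->[0][1][2] = 0.25;           # one component: _vector_STORE
 */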
/*
*
* End of C code. Begin XS.
*
*/
MODULE = AI::NeuralNet::FastSOM PACKAGE = AI::NeuralNet::FastSOM
PROTOTYPES: DISABLE
void
train (self, epochs, ...)
SV * self
IV epochs
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_som_train(self,epochs);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
void
bmu (self, sample)
SV * self
AV * sample
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_som_bmu(self,sample);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
SV *
map (self)
SV * self
PREINIT:
SV* rv;
CODE:
rv = _som_map(self);
ST(0) = rv;
sv_2mortal(ST(0));
SV *
output_dim (self)
SV * self
PREINIT:
SV* rv;
CODE:
rv = _som_output_dim(self);
ST(0) = rv;
sv_2mortal(ST(0));
void
_adjust (self, l, sigma, unit, v)
SV * self
NV l
NV sigma
AV * unit
AV * v
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_adjust(self, l, sigma, unit, v);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
void
STORABLE_freeze (self, cloning)
SV * self
SV * cloning
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_som_FREEZE(self,cloning);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
void
STORABLE_thaw (self, cloning, serialized, ...)
SV * self
SV * cloning
SV * serialized
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_som_THAW(self,cloning,serialized);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_UNDEF;
}
return;
SV *
FETCH (self, key)
SV * self
SV * key
PREINIT:
SV* rv;
CODE:
rv = _som_FETCH(self, key);
ST(0) = rv;
sv_2mortal(ST(0));
void
STORE (self, key, val)
SV * self
SV * key
SV * val
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_som_STORE(self, key, val);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
SV *
FIRSTKEY (self)
SV * self
PREINIT:
SV* rv;
CODE:
if (!self) croak("avoiding -Wextra");
rv = _som_FIRSTKEY();
ST(0) = rv;
sv_2mortal(ST(0));
SV *
NEXTKEY (self,prev)
SV * self
SV * prev
PREINIT:
SV* rv;
CODE:
if (!self) croak("avoiding -Wextra");
rv = _som_NEXTKEY(prev);
ST(0) = rv;
sv_2mortal(ST(0));
void
DESTROY (obj)
SV * obj
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_som_DESTROY(obj);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
MODULE = AI::NeuralNet::FastSOM PACKAGE = AI::NeuralNet::FastSOM::Rect
PROTOTYPES: DISABLE
void
new (class, ...)
const char * class
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_rect_new(class);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
AV *
neighbors (self, sigma, X, Y, ...)
SV * self
NV sigma
IV X
IV Y
PREINIT:
I32* temp;
CODE:
temp = PL_markstack_ptr++;
RETVAL = _neighbors(self, sigma, X, Y);
PL_markstack_ptr = temp;
OUTPUT:
RETVAL
SV *
radius (self)
SV * self
PREINIT:
SV* rv;
CODE:
rv = _rect_radius(self);
ST(0) = rv;
sv_2mortal(ST(0));
MODULE = AI::NeuralNet::FastSOM PACKAGE = AI::NeuralNet::FastSOM::Torus
PROTOTYPES: DISABLE
AV *
neighbors (self, sigma, X, Y, ...)
SV * self
NV sigma
IV X
IV Y
PREINIT:
I32* temp;
CODE:
temp = PL_markstack_ptr++;
RETVAL = _neighbors(self, sigma, X, Y);
PL_markstack_ptr = temp;
OUTPUT:
RETVAL
MODULE = AI::NeuralNet::FastSOM PACKAGE = AI::NeuralNet::FastSOM::Hexa
PROTOTYPES: DISABLE
void
new (class, ...)
const char * class
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_hexa_new(class);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
AV *
neighbors (self, sigma, X, Y, ...)
SV * self
NV sigma
IV X
IV Y
PREINIT:
I32* temp;
CODE:
temp = PL_markstack_ptr++;
RETVAL = _neighbors(self, sigma, X, Y);
PL_markstack_ptr = temp;
OUTPUT:
RETVAL
MODULE = AI::NeuralNet::FastSOM PACKAGE = AI::NeuralNet::FastSOM::Utils
PROTOTYPES: DISABLE
NV
vector_distance (V1, V2)
AV_SPECIAL* V1;
AV_SPECIAL* V2;
PREINIT:
NV rv;
CODE:
rv = _vector_distance((AV*)V1, (AV*)V2);
XSprePUSH; PUSHn((NV)rv);
MODULE = AI::NeuralNet::FastSOM PACKAGE = AI::NeuralNet::FastSOM::MAP
PROTOTYPES: DISABLE
SV *
FETCH (self, x)
SV * self
I32 x
PREINIT:
SV* rv;
CODE:
rv = _map_FETCH(self, x);
ST(0) = rv;
sv_2mortal(ST(0));
IV
FETCHSIZE (self)
SV * self
PREINIT:
IV rv;
CODE:
rv = (INT2PTR(SOM_Map*,self2iv(self)))->X;
XSprePUSH; PUSHi((IV)rv);
void
DESTROY (obj)
SV * obj
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_map_DESTROY(obj);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
MODULE = AI::NeuralNet::FastSOM PACKAGE = AI::NeuralNet::FastSOM::ARRAY
PROTOTYPES: DISABLE
void
STORE (self, y, aref)
SV * self
IV y
SV * aref
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_array_STORE(self, y, aref);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
SV *
FETCH (self, y)
SV * self
I32 y
PREINIT:
SV* rv;
CODE:
rv = _array_FETCH(self, y);
ST(0) = rv;
sv_2mortal(ST(0));
IV
FETCHSIZE (self)
SV * self
PREINIT:
IV rv;
CODE:
rv = (INT2PTR(SOM_Array*,self2iv(self)))->Y;
XSprePUSH; PUSHi((IV)rv);
void
DESTROY (obj)
SV * obj
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_array_DESTROY(obj);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
MODULE = AI::NeuralNet::FastSOM PACKAGE = AI::NeuralNet::FastSOM::VECTOR
PROTOTYPES: DISABLE
void
STORE (self, z, val)
SV * self
I32 z
NV val
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_vector_STORE(self, z, val);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
SV *
FETCH (self, z)
SV * self
I32 z
PREINIT:
SV* rv;
CODE:
rv = _vector_FETCH(self, z);
ST(0) = rv;
sv_2mortal(ST(0));
IV
FETCHSIZE (self)
SV * self
PREINIT:
IV rv;
CODE:
rv = (INT2PTR(SOM_Vector*,self2iv(self)))->Z;
XSprePUSH; PUSHi((IV)rv);
void
DESTROY (obj)
SV * obj
PREINIT:
I32* temp;
PPCODE:
temp = PL_markstack_ptr++;
_vector_DESTROY(obj);
if (PL_markstack_ptr != temp) {
PL_markstack_ptr = temp;
XSRETURN_EMPTY;
}
return;
{
"abstract" : "Perl extension for fast Kohonen Maps",
"author" : [
"Rick Myers <jrm@cpan.org>"
],
"dynamic_config" : 1,
"generated_by" : "ExtUtils::MakeMaker version 7.1, CPAN::Meta::Converter version 2.150005",
"license" : [
"perl_5"
],
"meta-spec" : {
"url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec",
"version" : "2"
},
"name" : "AI-NeuralNet-FastSOM",
"no_index" : {
"directory" : [
"t",
"inc"
]
},
"prereqs" : {
"build" : {
"requires" : {
"ExtUtils::MakeMaker" : "0"
}
},
"configure" : {
"requires" : {
"ExtUtils::MakeMaker" : "0"
}
},
"runtime" : {
"requires" : {
"Storable" : "0"
}
}
},
"release_status" : "stable",
"version" : "0.19",
"x_serialization_backend" : "JSON::PP version 2.27300"
}
---
abstract: 'Perl extension for fast Kohonen Maps'
author:
- 'Rick Myers <jrm@cpan.org>'
build_requires:
ExtUtils::MakeMaker: '0'
configure_requires:
ExtUtils::MakeMaker: '0'
dynamic_config: 1
generated_by: 'ExtUtils::MakeMaker version 7.1, CPAN::Meta::Converter version 2.150005'
license: perl
meta-spec:
url: http://module-build.sourceforge.net/META-spec-v1.4.html
version: '1.4'
name: AI-NeuralNet-FastSOM
no_index:
directory:
- t
- inc
requires:
Storable: '0'
version: '0.19'
x_serialization_backend: 'CPAN::Meta::YAML version 0.012'
Makefile.PL
use 5.006002;
use ExtUtils::MakeMaker;
WriteMakefile(
NAME => 'AI::NeuralNet::FastSOM',
VERSION_FROM => 'lib/AI/NeuralNet/FastSOM.pm',
ABSTRACT_FROM => 'lib/AI/NeuralNet/FastSOM.pm',
AUTHOR => 'Rick Myers <jrm@cpan.org>',
LICENSE => 'perl',
PREREQ_PM => { Storable => 0 },
TYPEMAPS => [ $] < 5.008000 ? 'typemap.v1' : 'typemap.v2' ],
test => { TESTS => 't/*.t t/orig/*.t' },
clean => { FILES => 't/*.bin typemap' },
);
#
# everything below is a work-around for some sort of bug in ExtUtils::ParseXS
# not picking up typemap files unless named "typemap" in perl5.6.2
#
# note however that the TYPEMAPS entry above is still needed for 5.6.2's still
# using the old xsubpp
#
package MY;
sub xs_c {
my $t = shift->SUPER::xs_c(@_);
$t =~ s/:/:\n\t\$(MAKE) typemap/;
$t;
}
sub test {
my $t = shift->SUPER::test(@_);
$t =~ s/(PERL_DL_NONLAZY=)/HARNESS_OPTIONS=j1 $1/g;
$t;
}
sub postamble {
my $out = <<'README';
readme:
	pod2text lib/AI/NeuralNet/FastSOM.pm README
	perl -i -pe's{\*(\S+)\*}{\1}g' README
README
if ( $] < 5.008000 ) {
$out .= <<'EOP';
typemap:
	$(CP) typemap.v1 typemap
EOP
}
else {
$out .= <<'EOP';
typemap:
	$(CP) typemap.v2 typemap
EOP
}
return $out;
}
exit 0;
NAME
AI::NeuralNet::FastSOM - Perl extension for fast Kohonen Maps
SYNOPSIS
use AI::NeuralNet::FastSOM::Rect;
instead of
use AI::NeuralNet::SOM;
DESCRIPTION
A drop-in replacement for Robert Barta's AI::NeuralNet::SOM. See those
docs for details.
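    A minimal example (mirroring the Rect synopsis elsewhere in this
    distribution):

        use AI::NeuralNet::FastSOM::Rect;

        my $nn = AI::NeuralNet::FastSOM::Rect->new(
            output_dim => "5x6",
            input_dim  => 3,
        );
        $nn->initialize;
        $nn->train(30, [3,2,4], [-1,-1,-1], [0,4,-3]);
        my ($x, $y) = $nn->bmu([3,2,4]);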
SUPPORT
Bugs should always be submitted via the CPAN bug tracker.
SEE ALSO
Explanation of the algorithm:
<http://www.ai-junkie.com/ann/som/som1.html>
Subclasses:
AI::NeuralNet::FastSOM::Hexa AI::NeuralNet::FastSOM::Rect
AI::NeuralNet::FastSOM::Torus
AUTHOR
Rick Myers, <jrm@cpan.org>
COPYRIGHT AND LICENSE
Copyright (C) 2009-2016 by Rick Myers
This library is free software; you can redistribute it and/or modify it
under the same terms as Perl itself, either Perl version 5.10.0 or, at
your option, any later version of Perl 5 you may have available.
- skip some tests if Storable not found (for old perls).
versions ok: 2.21, 2.20, 2.19, 2.18
perls found in: 5.11.0, 5.10.1, 5.10.0, 5.8.8, 5.8.9p35104, 5.8.9, 5.8.8, 5.6.2
- "passing argument 3 of 'Perl_newXS' discards qualifiers
from pointer target type". is this still a problem?
- clean up!
- figure out how to call perl's exp(), log() and getpid() functions.
examples/eigenvector_initialization.pl
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
my $dim = 3;
#my @vs = ([1,-0.5], [0,1]);
#my $dim = 2;
my $epsilon = 0.001;
my $epochs = 400;
{ # random initialisation
my $nn = new AI::NeuralNet::FastSOM::Rect (output_dim => "5x6",
input_dim => $dim);
$nn->initialize; # random
my @mes = $nn->train ($epochs, @vs);
warn "random: length until error is < $epsilon ". scalar (grep { $_ >= $epsilon } @mes);
}
{ # constant initialisation
my $nn = new AI::NeuralNet::FastSOM::Rect (output_dim => "5x6",
input_dim => $dim);
$nn->initialize ($vs[-1]);
my @mes = $nn->train ($epochs, @vs);
warn "constant: length until error is < $epsilon ". scalar (grep { $_ >= $epsilon } @mes);
}
{ # eigenvector initialisation
my $nn = new AI::NeuralNet::FastSOM::Rect (output_dim => "5x6",
input_dim => $dim);
my @training_vectors; # find these training vectors
{ # and prime them with this eigenvector stuff;
use PDL;
my $A = pdl \@vs;
while ($A->getdim(0) < $A->getdim(1)) { # make the beast quadratic
$A = append ($A, zeroes (1)); # by padding zeroes
}
my ($E, $e) = eigens_sym $A;
# print $E;
# print $e;
my @es = list $e; # eigenvalues
# warn "es : ".Dumper \@es;
my @es_desc = sort { $b <=> $a } @es; # eigenvalues sorted desc
# warn "desc: ".Dumper \@es_desc;
my @es_idx = map { _find_num ($_, \@es) } @es_desc; # eigenvalue indices sorted by eigenvalue (desc)
# warn "idx: ".Dumper \@es_idx;
sub _find_num {
my $v = shift;
my $l = shift;
for my $i (0..$#$l) {
return $i if $v == $l->[$i];
}
return undef;
}
for (@es_idx) { # from the highest values downwards, take the index
push @training_vectors, [ list $E->dice($_) ] ; # get the corresponding vector
}
}
$nn->initialize (@training_vectors[0..0]); # take only the biggest ones (the eigenvalues are big, actually)
#warn $nn->as_string;
my @mes = $nn->train ($epochs, @vs);
warn "eigen: length until error is < $epsilon ". scalar (grep { $_ >= $epsilon } @mes);
}
__END__
examples/load_save.pl
use strict;
use Data::Dumper;
use AI::NeuralNet::FastSOM::Rect;
{
my $nn = new AI::NeuralNet::FastSOM::Rect (output_dim => "5x6",
input_dim => 3);
$nn->initialize;
$nn->train (400, ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]));
# now we freeze the thing
use Storable;
store $nn, '/tmp/somnia';
# and forget it
}
{ # get it back, get it back
my $nn = retrieve('/tmp/somnia');
warn Dumper $nn;
# ....
}
__END__
lib/AI/NeuralNet/FastSOM.pm
use strict;
use warnings;
use XSLoader;
our $VERSION = '0.19';
sub new { die 'Dont use this class directly' }
sub label {
my ($self, $x, $y, $l) = @_;
return defined $l
? $self->{labels}->[$x]->[$y] = $l
: $self->{labels}->[$x]->[$y];
}
sub value {
my ($self, $x, $y, $v) = @_;
return defined $v
? $self->{map}[$x][$y] = $v
: $self->{map}[$x][$y];
}
sub mean_error {
my $self = shift;
my $error = 0;
map { $error += $_ } # then add them all up
map { ( $self->bmu($_) )[2] } # then find the distance
@_; # take all data vectors
return ($error / scalar @_); # return the mean value
}
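# Equivalently: mean_error(@vs) = (1/n) * sum over v in @vs of d(v, bmu(v)),
# the mean quantization error over the n supplied vectors, where d is the
# BMU distance returned as the third element of bmu().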
XSLoader::load(__PACKAGE__);
1;
__END__
=pod
=head1 NAME
AI::NeuralNet::FastSOM - Perl extension for fast Kohonen Maps
=head1 SYNOPSIS
use AI::NeuralNet::FastSOM::Rect;
instead of
use AI::NeuralNet::SOM;
=head1 DESCRIPTION
A drop-in replacement for Robert Barta's AI::NeuralNet::SOM. See those
docs for details.
=head1 SUPPORT
Bugs should always be submitted via the CPAN bug tracker.
lib/AI/NeuralNet/FastSOM/Hexa.pm
our $VERSION = '0.19';
sub radius { shift->{_R} }
sub diameter { shift->{_X} }
sub as_data { die 'not implemented' }
sub as_string { die 'not implemented' }
sub initialize {
my $self = shift;
my @data = @_;
our $i = 0;
my $get_from_stream = sub {
$i = 0 if $i > $#data;
return [ @{ $data[$i++] } ]; # cloning !
} if @data;
$get_from_stream ||= sub {
return [ map { rand( 1 ) - 0.5 } 1..$self->{_Z} ];
};
for my $x (0 .. $self->{_X}-1) {
for my $y (0 .. $self->{_X}-1) {
$self->{map}->[$x]->[$y] = &$get_from_stream;
}
}
}
1;
__END__
=pod
=head1 NAME
AI::NeuralNet::FastSOM::Hexa - Perl extension for Kohonen Maps (hexagonal topology)
=head1 SYNOPSIS
use AI::NeuralNet::FastSOM::Hexa;
my $nn = new AI::NeuralNet::FastSOM::Hexa (output_dim => 6,
input_dim => 3);
# ... see also base class AI::NeuralNet::FastSOM
=head1 INTERFACE
=head2 Constructor
The constructor takes the following arguments (additionally to those in
the base class):
=over
=item C<output_dim> : (mandatory, no default)
A positive, non-zero number specifying the diameter of the hexagonal map. C<1>
creates one with a single hexagon, C<2> one with 4, C<3> one with 9. The
number plays the role of a diameter.
=back
Example:
my $nn = new AI::NeuralNet::FastSOM::Hexa (output_dim => 6,
input_dim => 3);
=head2 Methods
=over
=item I<radius>
Returns the radius (half the diameter).
=item I<diameter>
lib/AI/NeuralNet/FastSOM/Hexa.pm
=item I<map>
I<$m> = I<$nn>->map
This method returns the 2-dimensional array of vectors in the grid
(as a reference to an array of references to arrays of vectors).
Example:
my $m = $nn->map;
for my $x (0 .. $nn->diameter -1) {
for my $y (0 .. $nn->diameter -1){
warn "vector at $x, $y: ". Dumper $m->[$x]->[$y];
}
}
This array represents a hexagon like this (ASCII drawing is so cool):
<0,0>
<0,1> <1,0>
<0,2> <1,1> <2,0>
<0,3> <1,2> <2,1> <3,0>
...............................
=item I<as_string>
Not implemented.
=item I<as_data>
Not implemented.
lib/AI/NeuralNet/FastSOM/Hexa.pm
Copyright (C) 2009-2016 by Rick Myers
This library is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, either Perl version 5.10.0 or,
at your option, any later version of Perl 5 you may have available.
=cut
sub _get_coordinates {
my $self = shift;
my $D1 = $self->{_D}-1;
my $t;
return map { $t = $_ ; map { [ $t, $_ ] } (0 .. $D1) } (0 .. $D1)
}
sqrt ( ($x - $X) ** 2 + ($y - $Y) ** 2 );
lib/AI/NeuralNet/FastSOM/Rect.pm
use warnings;
use AI::NeuralNet::FastSOM;
our @ISA = qw/AI::NeuralNet::FastSOM/;
our $VERSION = '0.19';
sub _old_radius { shift->{_R} }
sub initialize {
my $self = shift;
my @data = @_;
my $i = 0;
my $get_from_stream = sub {
$i = 0 if $i > $#data;
return [ @{ $data[$i++] } ]; # cloning !
} if @data;
$get_from_stream ||= sub {
return [ map { rand( 1 ) - 0.5 } 1..$self->{_Z} ];
};
for my $x (0 .. $self->{_X}-1) {
for my $y (0 .. $self->{_Y}-1) {
$self->{map}->[$x]->[$y] = &$get_from_stream;
}
}
}
sub as_string {
my $self = shift;
my $s = '';
$s .= " ";
for my $y (0 .. $self->{_Y}-1){
$s .= sprintf(" %02d ",$y);
}
$s .= "\n" . "-"x107 . "\n";
my $dim = scalar @{ $self->{map}->[0]->[0] };
for my $x (0 .. $self->{_X}-1) {
for my $w ( 0 .. $dim-1 ){
$s .= sprintf("%02d | ",$x);
for my $y (0 .. $self->{_Y}-1){
$s .= sprintf("% 2.2f ", $self->{map}->[$x]->[$y]->[$w]);
}
$s .= "\n";
}
$s .= "\n";
}
return $s;
}
sub as_data {
my $self = shift;
my $s = '';
my $dim = scalar @{ $self->{map}->[0]->[0] };
for my $x (0 .. $self->{_X}-1) {
for my $y (0 .. $self->{_Y}-1){
for my $w ( 0 .. $dim-1 ){
$s .= sprintf("\t%f", $self->{map}->[$x]->[$y]->[$w]);
}
$s .= "\n";
}
}
return $s;
}
1;
__END__
=pod
=head1 NAME
AI::NeuralNet::FastSOM::Rect - Perl extension for Kohonen Maps (rectangular topology)
=head1 SYNOPSIS
use AI::NeuralNet::FastSOM::Rect;
my $nn = new AI::NeuralNet::FastSOM::Rect (output_dim => "5x6",
input_dim => 3);
$nn->initialize;
$nn->train (30,
[ 3, 2, 4 ],
[ -1, -1, -1 ],
[ 0, 4, -3]);
print $nn->as_data;
=head1 INTERFACE
=head2 Constructor
The constructor takes the following arguments (additionally to those in
the base class):
=over
=item C<output_dim> : (mandatory, no default)
A string of the form "3x4" defining the X and the Y dimensions.
=back
Example:
my $nn = new AI::NeuralNet::FastSOM::Rect (output_dim => "5x6",
input_dim => 3);
=head2 Methods
=over
=item I<map>
I<$m> = I<$nn>->map
This method returns the 2-dimensional array of vectors in the grid
(as a reference to an array of references to arrays of vectors). The
representation of the 2-dimensional array is straightforward.
Example:
my $m = $nn->map;
for my $x (0 .. 5) {
for my $y (0 .. 4){
warn "vector at $x, $y: ". Dumper $m->[$x]->[$y];
}
}
=item I<as_data>
print I<$nn>->as_data
This methods creates a string containing the raw vector data, row by
row. This can be fed into gnuplot, for instance.
=back
# change 'tests => 1' to 'tests => last_test_to_print';
use Test::More qw(no_plan);
BEGIN { use_ok('AI::NeuralNet::FastSOM::Hexa') };
######
use Storable qw/store/;
{
my $nn = AI::NeuralNet::FastSOM::Hexa->new(
output_dim => 6,
input_dim => 3,
);
ok( $nn->isa('AI::NeuralNet::FastSOM::Hexa'), 'class' );
is( $nn->{_R}, 3, 'R' );
is( $nn->radius, 3, 'radius' );
}
{
my $nn = AI::NeuralNet::FastSOM::Hexa->new(
output_dim => 2,
input_dim => 3,
);
$nn->initialize( [ 0, 0, 1 ], [ 0, 1, 0 ] );
my $d = $nn->diameter;
for my $x ( 0 .. $d-1 ) {
for my $y (0 .. $d-1) {
ok(
eq_array(
$nn->{map}->[$x]->[$y],
$y == 0 ? [ 0, 0, 1 ] : [ 0, 1, 0 ]
), 'value init'
);
}
}
# warn Dumper $nn;
}
{
my $nn = AI::NeuralNet::FastSOM::Hexa->new(
output_dim => 2,
input_dim => 3,
);
$nn->initialize;
for my $x ( 0 .. $nn->diameter -1 ) {
for my $y ( 0 .. $nn->diameter -1 ) {
ok(
(!grep { $_ > 0.5 || $_ < -0.5 } @{ $nn->value ( $x, $y ) }),
"$x, $y: random vectors in [-0.5, 0.5]"
);
}
}
}
{
my $nn = AI::NeuralNet::FastSOM::Hexa->new(
output_dim => 2,
input_dim => 3,
);
$nn->initialize( [ 0, 0, 1 ] );
ok(
eq_array(
$nn->bmu( [ 1, 1, 1 ] ),
[ 1, 1, 0 ]
),
'bmu'
);
}
{
my $nn = AI::NeuralNet::FastSOM::Hexa->new(
output_dim => 6,
input_dim => 3,
);
ok(
eq_array(
$nn->neighbors( 1, 3, 2 ),
[
[ 2, 1, 1 ],
[ 2, 2, 1 ],
[ 3, 1, 1 ],
[ 3, 2, 0 ],
[ 3, 3, 1 ],
[ 4, 2, 1 ],
[ 4, 3, 1 ],
]
),
'neighbors 6+1'
);
ok(
eq_array(
$nn->neighbors( 1, 0, 0 ),
[
[ 0, 0, 0 ],
[ 0, 1, 1 ],
[ 1, 0, 1 ],
[ 1, 1, 1 ],
]
),
'neighbors 3+1'
);
ok(
eq_array(
$nn->neighbors( 0, 3, 3 ),
[
[ 3, 3, 0 ],
]
),
'neighbors 0+1'
);
}
{
my $nn = AI::NeuralNet::FastSOM::Hexa->new(
output_dim => 3,
input_dim => 3,
sigma0 => 4,
); # make change network-wide
$nn->initialize( [ 0, -1, 1 ] );
$nn->train( 100, [ 1, 1, 1 ] );
for my $x ( 0 .. $nn->diameter - 1 ) {
for my $y ( 0 .. $nn->diameter - 1 ) {
ok(
(! grep { $_ < 0.9 } @{ $nn->value( $x, $y ) }),
"$x, $y: vector above 0.9"
);
}
}
}
{
my $nn = AI::NeuralNet::FastSOM::Hexa->new(
output_dim => 3,
input_dim => 3,
);
$nn->initialize( [ 0, -1, -1 ] );
$nn->train( 100, [ 1, 1, 1 ] );
my ($x, $y) = $nn->bmu( [ 1, 1, 1 ] );
ok(
eq_array(
[ $x, $y ],
[ 0, 0 ],
),
'bmu after training'
);
}
{
my $nn = AI::NeuralNet::FastSOM::Hexa->new(
output_dim => 3,
input_dim => 3,
);
$nn->initialize;
my @vs = ( [ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3] );
$nn->train( 400, @vs );
my ($bmu_x, $bmu_y) = $nn->bmu( [ 3, 2, 4 ] );
ok( open(FILE, '> t/save_hexa_bmu.bin'), 'hexa save' );
print FILE "$bmu_x\n$bmu_y\n";
close FILE;
store( $nn, 't/save_hexa.bin' );
}
__END__
t/hexa_retrieve.t
ok( open(FILE, '< t/save_hexa_bmu.bin'), 'hexa open' );
my ( $bmu_x, $bmu_y ) = <FILE>;
chomp $bmu_x;
chomp $bmu_y;
ok( defined $bmu_x, 'x' );
ok( defined $bmu_y, 'y' );
{
my $nn = retrieve( 't/save_hexa.bin' );
isa_ok( $nn, 'AI::NeuralNet::FastSOM::Hexa', 'retrieve hexa' );
is($nn->{_X}, 3, '_X');
#is($nn->{_Y}, 6, '_Y');
is($nn->{_Z}, 3, '_Z');
my ($x,$y) = $nn->bmu([3,2,4]);
is( $x, $bmu_x, 'stored x' );
is( $y, $bmu_y, 'stored y' );
my $m = $nn->map;
isa_ok( $m, 'ARRAY', 'stored map' );
isa_ok( $m->[0], 'ARRAY', 'stored array' );
isa_ok( $m->[0][0], 'ARRAY', 'stored vector' );
ok( $m->[0][0][0], 'stored scalar' );
}
__END__
t/orig/hexa.t
# change 'tests => 1' to 'tests => last_test_to_print';
use Test::More qw(no_plan);
BEGIN { use_ok('AI::NeuralNet::FastSOM::Hexa') };
######
#use Data::Dumper;
{
my $nn = new AI::NeuralNet::FastSOM::Hexa (output_dim => 6,
input_dim => 3);
ok ($nn->isa ('AI::NeuralNet::FastSOM::Hexa'), 'class');
is ($nn->{_R}, 3, 'R');
is ($nn->radius, 3, 'radius');
}
{
my $nn = new AI::NeuralNet::FastSOM::Hexa (output_dim => 2,
input_dim => 3);
$nn->initialize ( [ 0, 0, 1 ], [ 0, 1, 0 ] );
my $d = $nn->diameter;
for my $x (0 .. $d-1) {
for my $y (0 .. $d-1) {
ok (eq_array ($nn->{map}->[$x]->[$y],
$y == 0 ? [ 0, 0, 1 ] : [ 0, 1, 0 ]), 'value init');
}
}
# warn Dumper $nn;
}
{
my $nn = new AI::NeuralNet::FastSOM::Hexa (output_dim => 2,
input_dim => 3);
$nn->initialize;
foreach my $x (0 .. $nn->diameter -1) {
foreach my $y (0 .. $nn->diameter -1 ) {
ok ( (!grep { $_ > 0.5 || $_ < -0.5 } @{ $nn->value ( $x, $y ) }) , "$x, $y: random vectors in [-0.5, 0.5]");
}
}
}
{
my $nn = new AI::NeuralNet::FastSOM::Hexa (output_dim => 2,
input_dim => 3);
$nn->initialize ( [ 0, 0, 1 ] );
ok (eq_array ($nn->bmu ([ 1, 1, 1 ]),
[ 1, 1, 0 ]), 'bmu');
}
{
my $nn = new AI::NeuralNet::FastSOM::Hexa (output_dim => 6,
input_dim => 3);
# warn Dumper $nn;
ok (eq_array ( $nn->neighbors (1, 3, 2),
[
[2, 1, 1 ],
[2, 2, 1 ],
[3, 1, 1 ],
[3, 2, 0 ],
[3, 3, 1 ],
[4, 2, 1 ],
[4, 3, 1 ]
]), 'neighbors 6+1');
ok (eq_array ( $nn->neighbors (1, 0, 0),
[
[0, 0, 0 ],
[0, 1, 1 ],
[1, 0, 1 ],
[1, 1, 1 ],
]), 'neighbors 3+1');
ok (eq_array ( $nn->neighbors (0, 3, 3),
[
[3, 3, 0 ],
]), 'neighbors 0+1');
}
{
my $nn = new AI::NeuralNet::FastSOM::Hexa (output_dim => 3,
input_dim => 3,
sigma0 => 4); # make change network-wide
$nn->initialize ( [ 0, -1, 1 ] );
$nn->train (100, [ 1, 1, 1 ]);
# warn Dumper $nn;
foreach my $x (0 .. $nn->diameter -1) {
foreach my $y (0 .. $nn->diameter -1 ) {
ok ( (! grep { $_ < 0.9 } @{ $nn->value ( $x, $y ) }) , "$x, $y: vector above 0.9");
}
}
}
{
my $nn = new AI::NeuralNet::FastSOM::Hexa (output_dim => 3,
input_dim => 3);
$nn->initialize ( [ 0, -1, -1 ] );
$nn->train (100, [ 1, 1, 1 ]);
my ($x, $y) = $nn->bmu ([ 1, 1, 1 ]) ;
ok (eq_array ([ $x, $y ],
[ 0, 0 ]), 'bmu after training');
# warn Dumper $nn;
}
__END__
t/orig/pods.t
#== TESTS =====================================================================
use strict;
use Test::More;
eval "use Test::Pod 1.00";
plan skip_all => "Test::Pod 1.00 required for testing POD" if $@;
my @PODs = qw(
lib/AI/NeuralNet/FastSOM.pm
lib/AI/NeuralNet/FastSOM/Rect.pm
lib/AI/NeuralNet/FastSOM/Hexa.pm
lib/AI/NeuralNet/FastSOM/Torus.pm
);
plan tests => scalar @PODs;
map {
pod_file_ok ( $_, "$_ pod ok" )
} @PODs;
t/orig/rect.t
# change 'tests => 1' to 'tests => last_test_to_print';
use Test::More qw(no_plan);
BEGIN { use_ok('AI::NeuralNet::FastSOM::Rect') };
######
#use Data::Dumper;
{
my $nn = new AI::NeuralNet::FastSOM::Rect (output_dim => "5x6",
input_dim => 3);
ok ($nn->isa ('AI::NeuralNet::FastSOM::Rect'), 'class');
is ($nn->{_X}, 5, 'X');
is ($nn->{_Y}, 6, 'Y');
is ($nn->{_Z}, 3, 'Z');
is ($nn->radius, 2.5, 'radius');
is ($nn->output_dim, "5x6", 'output dim');
}
{
my $nn = new AI::NeuralNet::FastSOM::Rect (output_dim => "5x6",
input_dim => 3);
$nn->initialize;
# print Dumper $nn;
# exit;
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
$nn->train (400, @vs);
foreach my $v (@vs) {
ok (_find ($v, $nn->map), 'found learned vector '. join (",", @$v));
}
sub _find {
my $v = shift;
my $m = shift;
use AI::NeuralNet::FastSOM::Utils;
foreach my $x ( 0 .. 4 ) {
foreach my $y ( 0 .. 5 ) {
return 1 if AI::NeuralNet::FastSOM::Utils::vector_distance ($m->[$x]->[$y], $v) < 0.01;
}
}
return 0;
}
ok ($nn->as_string, 'pretty print');
ok ($nn->as_data, 'raw format');
# print $nn->as_string;
}
{
my $nn = new AI::NeuralNet::FastSOM::Rect (output_dim => "5x6",
input_dim => 3);
$nn->initialize;
foreach my $x (0 .. 5 -1) {
foreach my $y (0 .. 6 -1 ) {
ok ( (!grep { $_ > 0.5 || $_ < -0.5 } @{ $nn->value ( $x, $y ) }) , "$x, $y: random vectors in [-0.5, 0.5]");
}
}
}
__END__
# randomized pick
@vectors = ...;
my $get = sub {
return @vectors [ int (rand (scalar @vectors) ) ];
}
$nn->train ($get);
# take exactly 500, round robin, in order
our $i = 0;
my $get = sub {
return undef unless $i < 500;
return @vectors [ $i++ % scalar @vectors ];
}
t/orig/som.t
# Change 'tests => 1' to 'tests => last_test_to_print';
use Test::More qw(no_plan);
BEGIN { use_ok('AI::NeuralNet::FastSOM') };
######
#use Data::Dumper;
{
use AI::NeuralNet::FastSOM::Rect; # any non-abstract subclass should do
my $nn = new AI::NeuralNet::FastSOM::Rect (output_dim => "5x6",
input_dim => 3,
);
$nn->value ( 1, 1, [ 1, 1, 1 ] );
ok (eq_array ($nn->value ( 1, 1),
[ 1, 1, 1 ]), 'value set/get');
$nn->label ( 1, 1, 'rumsti' );
is ($nn->label ( 1, 1), 'rumsti', 'label set/get');
is ($nn->label ( 1, 0), undef, 'label set/get');
}
{
my $nn = AI::NeuralNet::FastSOM::Rect->new (output_dim => "5x6",
                                            input_dim => 3);
$nn->initialize;
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
my $me = $nn->mean_error (@vs);
for (1 .. 40) {
$nn->train (50, @vs);
ok ($me >= $nn->mean_error (@vs), 'mean error getting smaller');
$me = $nn->mean_error (@vs);
# warn $me;
}
foreach (1..3) {
my @mes = $nn->train (20, @vs);
is (scalar @mes, 3 * 20, 'errors while training, nr');
ok ((!grep { $_ > 10 * $me } @mes), 'errors while training, none significantly bigger');
}
}
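# Note: the block above relies on train() returning, in list context,
# one error value per vector per iteration -- 20 iterations times 3
# vectors = 60 values -- so convergence can be watched without an
# extra mean_error() call after every round.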
__END__
# randomized pick
@vectors = ...;
my $get = sub {
    return $vectors[ int (rand (scalar @vectors)) ];
};
$nn->train ($get);

# take exactly 500, round robin, in order
our $i = 0;
my $get = sub {
    return undef unless $i < 500;
    return $vectors[ $i++ % scalar @vectors ];
};
$nn->train ($get);
t/orig/torus.t
# change 'tests => 1' to 'tests => last_test_to_print';
use Test::More qw(no_plan);
BEGIN { use_ok('AI::NeuralNet::FastSOM::Torus') };
######
#use Data::Dumper;
{
my $nn = AI::NeuralNet::FastSOM::Torus->new (output_dim => "5x6",
                                             input_dim => 3);
ok ($nn->isa ('AI::NeuralNet::FastSOM::Torus'), 'class');
is ($nn->{_X}, 5, 'X');
is ($nn->{_Y}, 6, 'Y');
is ($nn->{_Z}, 3, 'Z');
is ($nn->radius, 2.5, 'radius');
is ($nn->output_dim, "5x6", 'output dim');
}
{
my $nn = AI::NeuralNet::FastSOM::Torus->new (output_dim => "5x6",
                                             input_dim => 3);
ok (eq_set ( $nn->neighbors (1, 0, 0),
[
[ 0, 0, '0' ],
[ 0, 1, '1' ],
[ 0, 5, '1' ],
[ 1, 0, '1' ],
[ 4, 0, '1' ]
]), 'neighbors 4+1');
ok (eq_set ( $nn->neighbors (1, 3, 2),
[
[ 2, 2, '1' ],
[ 3, 1, '1' ],
[ 3, 2, '0' ],
[ 3, 3, '1' ],
[ 4, 2, '1' ]
]), 'neighbors 4+1');
}
{
my $nn = AI::NeuralNet::FastSOM::Torus->new (output_dim => "5x6",
                                             input_dim => 3);
$nn->initialize;
# print Dumper $nn;
# exit;
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
$nn->train (400, @vs);
foreach my $v (@vs) {
ok (_find ($v, $nn->map), 'found learned vector '. join (",", @$v));
}
sub _find {
my $v = shift;
my $m = shift;
use AI::NeuralNet::FastSOM::Utils;
foreach my $x ( 0 .. 4 ) {
foreach my $y ( 0 .. 5 ) {
return 1 if AI::NeuralNet::FastSOM::Utils::vector_distance ($m->[$x]->[$y], $v) < 0.01;
}
}
return 0;
}
ok ($nn->as_string, 'pretty print');
ok ($nn->as_data, 'raw format');
# print $nn->as_string;
}
__END__
# randomized pick
@vectors = ...;
my $get = sub {
    return $vectors[ int (rand (scalar @vectors)) ];
};
$nn->train ($get);

# take exactly 500, round robin, in order
our $i = 0;
my $get = sub {
    return undef unless $i < 500;
    return $vectors[ $i++ % scalar @vectors ];
};
$nn->train ($get);
#== TESTS =====================================================================
use strict;
use Test::More;
eval "use Test::Pod 1.00";
plan skip_all => "Test::Pod 1.00 required for testing POD" if $@;
my @PODs = qw(
lib/AI/NeuralNet/FastSOM.pm
lib/AI/NeuralNet/FastSOM/Rect.pm
lib/AI/NeuralNet/FastSOM/Hexa.pm
lib/AI/NeuralNet/FastSOM/Torus.pm
);
plan tests => scalar @PODs;
pod_file_ok( $_, "$_ pod ok" ) for @PODs;
__END__
# change 'tests => 1' to 'tests => last_test_to_print';
use Test::More qw(no_plan);
BEGIN { use_ok('AI::NeuralNet::FastSOM::Rect') };
######
use AI::NeuralNet::FastSOM::Utils;
use Storable qw/store/;
{
my $nn = AI::NeuralNet::FastSOM::Rect->new(
output_dim => '5x6',
input_dim => 3
);
ok( $nn->isa( 'AI::NeuralNet::FastSOM::Rect' ), 'rect class' );
my $nn2 = $nn;
my $nn3 = $nn2;
is( $nn, $nn3, 'rect eq' );
my $m1 = $nn->map;
isa_ok( $m1, 'ARRAY', 'map array' );
my $m2 = $m1;
my $m3 = $nn2->map;
my $m4 = $m3;
is( $m2, $m4, 'map eq' );
my $a = $m1->[0];
isa_ok( $a, 'ARRAY', 'array array' );
ok( $a != $m1, 'array unique' );
my $a2 = $m4->[0];
is( $a, $a2, 'array eq' );
my $v = $a->[0];
isa_ok( $v, 'ARRAY', 'vector array' );
ok( $v != $a, 'vector unique' );
my $v2 = $nn3->map->[0]->[0];
is( $v, $v2, 'vector eq' );
my $v3 = $nn2->map->[0][0];
is( $v, $v3, 'vector shorter' );
my $m = $nn->map;
$m->[0][0][0] = 3.245;
is( $m->[0][0][0], 3.245, 'element set' );
$m->[0][0][0] = 1.25;
is( $m->[0][0][0], 1.25, 'element reset' );
$m->[0][0][1] = 4.8;
is( $m->[0][0][1], 4.8, 'element set z' );
$m->[0][0][1] = 2.6;
is( $m->[0][0][1], 2.6, 'element reset z' );
$m->[0][1][0] = 8.9;
is( $m->[0][1][0], 8.9, 'element set y' );
$m->[0][1][0] = 1.2;
is( $m->[0][1][0], 1.2, 'element reset y' );
$m->[1][0][0] = 5.4;
is( $m->[1][0][0], 5.4, 'element set x' );
$m->[1][0][0] = 3.23;
is( $m->[1][0][0], 3.23, 'element reset x');
$m->[4][5][2] = 2.29;
is( $m->[4][5][2], 2.29, 'last element set' );
is( $m->[-1][5][2], 2.29, 'negative x' );
is( $m->[4][-1][2], 2.29, 'negative y' );
is( $m->[4][5][-1], 2.29, 'negative z' );
is( $m->[-1][-1][-1], 2.29, 'negative all' );
}
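# Note: although the map is an XS-backed tied structure, the
# assertions above show it behaving like nested native arrays: the
# same row and vector references come back across repeated map()
# calls, elements are writable at any depth, and negative indices
# count from the end exactly as with ordinary Perl arrays.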
{
my $nn = AI::NeuralNet::FastSOM::Rect->new(
output_dim => '5x6',
input_dim => 3
);
ok ($nn->isa ('AI::NeuralNet::FastSOM::Rect'), 'class');
is ($nn->{_X}, 5, 'X');
is ($nn->{_Y}, 6, 'Y');
is ($nn->{_Z}, 3, 'Z');
is ($nn->radius, 2.5, 'radius');
is ($nn->output_dim, "5x6", 'output dim');
}
sub _find {
my $v = shift;
my $m = shift;
for my $x ( 0 .. 4 ) {
for my $y ( 0 .. 5 ) {
my $rv = AI::NeuralNet::FastSOM::Utils::vector_distance($m->[$x]->[$y], $v);
return 1 if $rv < 0.01;
}
}
return 0;
}
{
my $nn = AI::NeuralNet::FastSOM::Rect->new(
output_dim => "5x6",
input_dim => 3
);
$nn->initialize;
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
$nn->train(400, @vs);
for my $v (@vs) {
ok(_find($v,$nn->map),'found learned vector '.join(",", @$v));
}
ok ($nn->as_string, 'pretty print');
ok ($nn->as_data, 'raw format');
}
{
my $nn = AI::NeuralNet::FastSOM::Rect->new (output_dim => "5x6",
                                            input_dim => 3);
$nn->initialize;
for my $x (0 .. 5 -1) {
for my $y (0 .. 6 -1 ) {
ok ( (!grep { $_ > 0.5 || $_ < -0.5 } @{ $nn->value ( $x, $y ) }) , "$x, $y: random vectors in [-0.5, 0.5]");
}
}
}
{
my $nn = AI::NeuralNet::FastSOM::Rect->new(
output_dim => "5x6",
input_dim => 3
);
$nn->initialize;
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
$nn->train(400, @vs);
my $k = keys %$nn;
is( $k, 10, 'scalar rect key count' );
my @k = keys %$nn;
is( @k, 10, 'array rect key count' );
}
{
my $nn = AI::NeuralNet::FastSOM::Rect->new(
output_dim => '5x6',
input_dim => 3
);
$nn->initialize;
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
$nn->train(400, @vs);
my ($bmu_x,$bmu_y) = $nn->bmu([3,2,4]);
ok( open(FILE, '> t/save_rect_bmu.bin'), 'rect save' );
print FILE "$bmu_x\n$bmu_y\n";
close FILE;
store( $nn, 't/save_rect.bin' );
}
__END__
t/rect_retrieve.t
use Test::More qw(no_plan);
use Storable qw/retrieve/;
ok( open(FILE, '< t/save_rect_bmu.bin'), 'rect open' );
my ( $bmu_x, $bmu_y ) = <FILE>;
chomp $bmu_x;
chomp $bmu_y;
ok( defined $bmu_x, 'x' );
ok( defined $bmu_y, 'y' );
{
my $nn = retrieve( 't/save_rect.bin' );
isa_ok( $nn, 'AI::NeuralNet::FastSOM::Rect', 'retrieve rect' );
is($nn->{_X}, 5, '_X');
is($nn->{_Y}, 6, '_Y');
is($nn->{_Z}, 3, '_Z');
my ($x,$y) = $nn->bmu([3,2,4]);
is( $x, $bmu_x, 'stored x' );
is( $y, $bmu_y, 'stored y' );
my $m = $nn->map;
isa_ok( $m, 'ARRAY', 'stored map' );
isa_ok( $m->[0], 'ARRAY', 'stored array' );
isa_ok( $m->[0][0], 'ARRAY', 'stored vector' );
ok( $m->[0][0][0], 'stored scalar' );
}
__END__
# Change 'tests => 1' to 'tests => last_test_to_print';
use Test::More qw(no_plan);
#BEGIN { use_ok('AI::NeuralNet::FastSOM') };
######
use AI::NeuralNet::FastSOM::Rect; # any non-abstract subclass should do
{
my $nn = AI::NeuralNet::FastSOM::Rect->new(
output_dim => "5x6",
input_dim => 3,
);
$nn->value ( 1, 1, [ 1, 1, 1 ] );
ok(
eq_array(
$nn->value( 1, 1 ),
[ 1, 1, 1 ]
),
'value set/get'
);
# unsupported, for now (rik)
# $nn->label ( 1, 1, 'rumsti' );
# is ($nn->label ( 1, 1), 'rumsti', 'label set/get');
#
# is ($nn->label ( 1, 0), undef, 'label set/get');
}
{
my $nn = AI::NeuralNet::FastSOM::Rect->new(
output_dim => "5x6",
input_dim => 3,
);
$nn->initialize;
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
my $me = $nn->mean_error(@vs);
for (1 .. 40) {
$nn->train(50, @vs);
ok ($me >= $nn->mean_error(@vs), 'mean error getting smaller');
$me = $nn->mean_error(@vs);
}
for (1..3) {
my @mes = $nn->train(20, @vs);
is (scalar @mes, 3 * 20, 'errors while training, nr');
ok ((!grep { $_ > 10 * $me } @mes), 'errors while training, none significantly bigger');
}
}
__END__
use Test::More qw(no_plan);
BEGIN { use_ok('AI::NeuralNet::FastSOM::Torus') };
######
use AI::NeuralNet::FastSOM::Utils;
use Storable qw/store/;
{
my $nn = AI::NeuralNet::FastSOM::Torus->new(
output_dim => "5x6",
input_dim => 3,
);
ok( $nn->isa ('AI::NeuralNet::FastSOM::Torus'), 'class' );
is( $nn->{_X}, 5, 'X' );
is( $nn->{_Y}, 6, 'Y' );
is( $nn->{_Z}, 3, 'Z' );
is( $nn->radius, 2.5, 'radius' );
is( $nn->output_dim, "5x6", 'output dim' );
}
{
my $nn = AI::NeuralNet::FastSOM::Torus->new(
output_dim => "5x6",
input_dim => 3,
);
ok(
eq_set(
$nn->neighbors(1, 0, 0),
[
[ 0, 0, '0' ],
[ 0, 1, '1' ],
[ 0, 5, '1' ],
[ 1, 0, '1' ],
[ 4, 0, '1' ]
]
),
'neighbors 4+1'
);
ok(
eq_set(
$nn->neighbors(1, 3, 2),
[
[ 2, 2, '1' ],
[ 3, 1, '1' ],
[ 3, 2, '0' ],
[ 3, 3, '1' ],
[ 4, 2, '1' ]
]
),
'neighbors 4+1'
);
}
sub _find {
my $v = shift;
my $m = shift;
for my $x ( 0 .. 4 ) {
for my $y ( 0 .. 5 ) {
return 1
if AI::NeuralNet::FastSOM::Utils::vector_distance( $m->[$x]->[$y], $v ) < 0.01;
}
}
return 0;
}
{
my $nn = AI::NeuralNet::FastSOM::Torus->new(
output_dim => "5x6",
input_dim => 3,
);
$nn->initialize;
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
$nn->train(400, @vs);
for my $v (@vs) {
ok( _find($v, $nn->map), 'found learned vector '. join (",", @$v) );
}
ok( $nn->as_string, 'pretty print' );
ok( $nn->as_data, 'raw format' );
}
{
my $nn = AI::NeuralNet::FastSOM::Torus->new(
output_dim => '5x6',
input_dim => 3,
);
$nn->initialize;
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
$nn->train(400, @vs);
my $k = keys %$nn;
is( $k, 10, 'scalar torus key count' );
my @k = keys %$nn;
is( @k, 10, 'array torus key count' );
}
{
my $nn = AI::NeuralNet::FastSOM::Torus->new(
output_dim => '5x6',
input_dim => 3
);
$nn->initialize;
my @vs = ([ 3, 2, 4 ], [ -1, -1, -1 ], [ 0, 4, -3]);
$nn->train(400, @vs);
my ($bmu_x,$bmu_y) = $nn->bmu([3,2,4]);
ok( open(FILE, '> t/save_torus_bmu.bin'), 'torus save' );
print FILE "$bmu_x\n$bmu_y\n";
close FILE;
store( $nn, 't/save_torus.bin' );
}
__END__
t/torus_retrieve.t
use Test::More qw(no_plan);
use Storable qw/retrieve/;
ok( open(FILE, '< t/save_torus_bmu.bin'), 'torus open' );
my ( $bmu_x, $bmu_y ) = <FILE>;
chomp $bmu_x;
chomp $bmu_y;
ok( defined $bmu_x, 'x' );
ok( defined $bmu_y, 'y' );
{
my $nn = retrieve( 't/save_torus.bin' );
isa_ok( $nn, 'AI::NeuralNet::FastSOM::Torus', 'retrieve torus' );
is($nn->{_X}, 5, '_X');
is($nn->{_Y}, 6, '_Y');
is($nn->{_Z}, 3, '_Z');
my ($x,$y) = $nn->bmu([3,2,4]);
is( $x, $bmu_x, 'stored x' );
is( $y, $bmu_y, 'stored y' );
my $m = $nn->map;
isa_ok( $m, 'ARRAY', 'stored map' );
isa_ok( $m->[0], 'ARRAY', 'stored array' );
isa_ok( $m->[0][0], 'ARRAY', 'stored vector' );
ok( $m->[0][0][0], 'stored scalar' );
}
__END__
const char * T_PV
AV_SPECIAL * T_AVREF_SPECIAL
AV * T_AVREF
###############################################################################
INPUT
T_AVREF_SPECIAL
if (SvTYPE($arg)==SVt_PVMG && SvRMAGICAL($arg) && mg_find($arg,'p'))
mg_get($arg);
if (SvROK($arg) && SvTYPE(SvRV($arg))==SVt_PVAV)
$var = (AV*)SvRV($arg);
else
Perl_croak(aTHX_ \"%s: %s is not a magical array reference\",
${$ALIAS?\q[GvNAME(CvGV(cv))]:\qq[\"$pname\"]},
\"$var\")
T_AVREF
if (SvROK($arg) && SvTYPE(SvRV($arg))==SVt_PVAV)
$var = (AV*)SvRV($arg);
else
Perl_croak(aTHX_ \"%s: %s is not an array reference\",
${$ALIAS?\q[GvNAME(CvGV(cv))]:\qq[\"$pname\"]},
\"$var\")
###############################################################################
OUTPUT
T_AVREF_SPECIAL
$arg = newRV((SV*)$var);
T_AVREF
$arg = newRV((SV*)$var);
const char * T_PV
AV_SPECIAL * T_AVREF_SPECIAL
###############################################################################
INPUT
T_AVREF_SPECIAL
if (SvTYPE($arg)==SVt_PVLV && SvRMAGICAL($arg) && mg_find($arg,'p'))
mg_get($arg);
if (SvROK($arg) && SvTYPE(SvRV($arg))==SVt_PVAV)
$var = (AV*)SvRV($arg);
else
Perl_croak(aTHX_ \"%s: %s is not a magical array reference\",
${$ALIAS?\q[GvNAME(CvGV(cv))]:\qq[\"$pname\"]},
\"$var\")
###############################################################################
OUTPUT
T_AVREF_SPECIAL
$arg = newRV((SV*)$var);
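###############################################################################
# Note: the two blocks above appear to be alternate variants of the
# same typemap, presumably shipped for different perl and
# ExtUtils::ParseXS vintages.  They differ only in the guard used to
# detect a tied array before applying get-magic: the first tests
# SvTYPE($arg)==SVt_PVMG, the second SVt_PVLV.  In both variants
# T_AVREF_SPECIAL behaves like the stock T_AVREF except that it calls
# mg_get() first, so a magical (tied) array reference is refreshed
# before the underlying AV is unwrapped.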