AI-LibNeural


LICENSE

  The licenses for most software are designed to take away your
freedom to share and change it.  By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.

  This license, the Library General Public License, applies to some
specially designated Free Software Foundation software, and to any
other libraries whose authors decide to use it.  You can use it for
your libraries, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if
you distribute copies of the library, or if you modify it.

LICENSE

(which use some of those functions and data) to form executables.

  The "Library", below, refers to any such software library or work
which has been distributed under these terms.  A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language.  (Hereinafter, translation is
included without limitation in the term "modification".)

  "Source code" for a work means the preferred form of the work for
making modifications to it.  For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control compilation
and installation of the library.

  Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope.  The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for

LICENSE

and what the program that uses the Library does.
  
  1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.

  You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.

  2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:

    a) The modified work must itself be a software library.

LibNeural.pm


=item $nn = AI::LibNeural->new(INPUTS,HIDDENS,OUTPUTS)

Creates a new AI::LibNeural object with INPUTS input nodes, HIDDENS hidden
nodes, and OUTPUTS output nodes.

=item $nn->train([I1,I2,...],[O1,O2,...],MINERR,TRAINRATE)

Completes one training cycle for the given inputs I1-IN and the expected
results O1-OM, where N is the number of inputs and M is the number of
outputs. MINERR is the mean squared error at the output that you wish to
achieve, and TRAINRATE is the learning rate to be used.

=item (O1,O2) = $nn->run([I1,I2,...])

Calculates the corresponding outputs (O1-OM) for the given inputs (I1-IN)
based on the previous training. Should only be called after the network has
been suitably trained; see the usage example below.

=item NUM = $nn->get_layersize(WHICH)

Retrieves the number of nodes at the specified layer, WHICH. WHICH should be
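
The methods above cover the full life cycle of a network; the snippet below
is a minimal sketch of how they fit together, not code from the
distribution. The 2-4-1 geometry, the XOR training data, the 5000-pass
loop, the 0.001 error target and the 0.4 learning rate are illustrative
values, and the HIDDEN constant is assumed to be reachable as a fully
qualified call.

    use strict;
    use warnings;
    use AI::LibNeural;

    # 2 inputs, 4 hidden nodes, 1 output -- illustrative sizes only.
    my $nn = AI::LibNeural->new( 2, 4, 1 );

    # Each train() call is a single training cycle on one input/output
    # pair, so present the pairs repeatedly.
    my @samples = (
        [ [ 0, 0 ], [ 0 ] ],
        [ [ 0, 1 ], [ 1 ] ],
        [ [ 1, 0 ], [ 1 ] ],
        [ [ 1, 1 ], [ 0 ] ],
    );
    for ( 1 .. 5_000 ) {
        $nn->train( $_->[0], $_->[1], 0.001, 0.4 ) for @samples;
    }

    # Ask the trained network for its output on one input pattern.
    my ($out) = $nn->run( [ 1, 0 ] );
    printf "1 xor 0 => %.3f\n", $out;

    # get_layersize() reports the node count of a layer; INPUT, HIDDEN,
    # OUTPUT and ALL appear as constants in the XS source, though this
    # excerpt does not show how they are exported.
    printf "hidden nodes: %d\n", $nn->get_layersize( AI::LibNeural::HIDDEN() );

How quickly the network converges depends on the loop count, the learning
rate and the error target chosen; the values here are starting points only.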

LibNeural.xs

static int
not_here(char *s)
{
    croak("%s not implemented on this architecture", s);
    return -1;
}

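/* Look up a named constant (ALL, HIDDEN, OUTPUT, ...) and return its value.
 * In the usual h2xs layout this is called from the module's AUTOLOAD; that
 * glue is not shown in this excerpt. */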
static double
constant(char *name, int len, int arg)
{
    errno = 0;
    switch (name[0 + 0]) {
    case 'A':
	if (strEQ(name + 0, "ALL")) {	/*  removed */
#ifdef ALL
	    return ALL;
#else
	    goto not_there;
#endif
	}
    case 'H':

LibNeural.xs

	}
    case 'O':
	if (strEQ(name + 0, "OUTPUT")) {	/*  removed */
#ifdef OUTPUT
	    return OUTPUT;
#else
	    goto not_there;
#endif
	}
    }
    errno = EINVAL;
    return 0;

not_there:
    errno = ENOENT;
    return 0;
}

/* Function that takes an array reference and converts it into an equivalent
 * float array. dlen is the number of elements that we want to make sure are
 * in the array. */
static float *
svpvav_to_float_array (SV * svpvav, int dlen)
{
	float *  array;

LibNeural.xs

	else
		Perl_croak(aTHX_ "Usage: Neural::new([ins, hids, outs])");
    OUTPUT:
	RETVAL    

int
nnwork::get_layersize (which)
	int which

void
nnwork::train (ins, outs, minerr, trainrate)
	SV    * ins
	SV    * outs
	float	minerr
	float	trainrate
    PREINIT:
	int     i;
	int     nin;
	int     nout;
	float * ains;
	float * aouts;
    CODE:
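	/* The network already knows its layer widths; use them as the
	 * expected lengths when flattening the Perl array references. */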
	nin = THIS->get_layersize(INPUT);
	nout = THIS->get_layersize(OUTPUT);

	ains = svpvav_to_float_array(ins, nin);
	aouts = svpvav_to_float_array(outs, nout);

	THIS->train(ains, aouts, minerr, trainrate);

	if( ains ) free(ains);
	if( aouts ) free(aouts);

void
nnwork::run (ins)
	SV * ins
    PREINIT:
	int     i;
	int     nin;


