AI-PSO


Changes

Revision history for Perl extension AI::PSO.

0.86  Tue Nov 21 20:41:23 2006
    - updated documentation
    - added support for original RE & JK algorithm
    - abstracted initialization function

0.85  Wed Nov 15 22:30:47 2006
    - corrected the fitness function in the test
    - added perceptron c++ code that I wrote a long time ago ;)
    - added an example (pso_ann.pl) for training a simple feed-forward neural network
    - updated POD

0.82  Sat Nov 11 22:20:31 2006
    - fixed POD to correctly 'use AI::PSO'
    - fixed fitness function in PSO.t
    - added research paper to package
    - moved into a subversion repository
    - removed requirement for perl 5.8.8
    - removed printing of solution array in test

0.80  Sat Nov 11 14:22:27 2006
	- changed namespace to AI::PSO
	- added a pso_get_solution_array function

0.70  Fri Nov 10 23:50:32 2006
	- added user callback fitness function
	- added POD
	- added tests
	- fixed typos
	- changed version to 0.70 because I like 0.7

0.01  Fri Nov 10 18:53:56 2006
	- initial version

META.yml

# http://module-build.sourceforge.net/META-spec.html
name:         AI-PSO
version:      0.86
version_from: lib/AI/PSO.pm
installdirs:  site
requires:
    Callback:                      0
    Math::Random:                  0

distribution_type: module
generated_by: ExtUtils::MakeMaker version 6.30

MPL-1.1.txt

                          MOZILLA PUBLIC LICENSE
                                Version 1.1

                              ---------------

1. Definitions.

     1.0.1. "Commercial Use" means distribution or otherwise making the
     Covered Code available to a third party.

     1.1. "Contributor" means each entity that creates or contributes to
     the creation of Modifications.

     1.2. "Contributor Version" means the combination of the Original
     Code, prior Modifications used by a Contributor, and the Modifications
     made by that particular Contributor.

     1.3. "Covered Code" means the Original Code or Modifications or the
     combination of the Original Code and Modifications, in each case
     including portions thereof.

     1.4. "Electronic Distribution Mechanism" means a mechanism generally
     accepted in the software development community for the electronic
     transfer of data.

     1.5. "Executable" means Covered Code in any form other than Source
     Code.

     1.6. "Initial Developer" means the individual or entity identified
     as the Initial Developer in the Source Code notice required by Exhibit
     A.

     1.7. "Larger Work" means a work which combines Covered Code or
     portions thereof with code not governed by the terms of this License.

     1.8. "License" means this document.

     1.8.1. "Licensable" means having the right to grant, to the maximum
     extent possible, whether at the time of the initial grant or
     subsequently acquired, any and all of the rights conveyed herein.

     1.9. "Modifications" means any addition to or deletion from the
     substance or structure of either the Original Code or any previous
     Modifications. When Covered Code is released as a series of files, a
     Modification is:
          A. Any addition to or deletion from the contents of a file
          containing Original Code or previous Modifications.

          B. Any new file that contains any part of the Original Code or
          previous Modifications.

     1.10. "Original Code" means Source Code of computer software code
     which is described in the Source Code notice required by Exhibit A as
     Original Code, and which, at the time of its release under this
     License is not already Covered Code governed by this License.

     1.10.1. "Patent Claims" means any patent claim(s), now owned or
     hereafter acquired, including without limitation,  method, process,
     and apparatus claims, in any patent Licensable by grantor.

     1.11. "Source Code" means the preferred form of the Covered Code for
     making modifications to it, including all modules it contains, plus
     any associated interface definition files, scripts used to control
     compilation and installation of an Executable, or source code
     differential comparisons against either the Original Code or another
     well known, available Covered Code of the Contributor's choice. The
     Source Code can be in a compressed or archival form, provided the
     appropriate decompression or de-archiving software is widely available
     for no charge.

     1.12. "You" (or "Your")  means an individual or a legal entity
     exercising rights under, and complying with all of the terms of, this
     License or a future version of this License issued under Section 6.1.
     For legal entities, "You" includes any entity which controls, is
     controlled by, or is under common control with You. For purposes of
     this definition, "control" means (a) the power, direct or indirect,
     to cause the direction or management of such entity, whether by
     contract or otherwise, or (b) ownership of more than fifty percent
     (50%) of the outstanding shares or beneficial ownership of such
     entity.

2. Source Code License.

     2.1. The Initial Developer Grant.
     The Initial Developer hereby grants You a world-wide, royalty-free,
     non-exclusive license, subject to third party intellectual property
     claims:
          (a)  under intellectual property rights (other than patent or
          trademark) Licensable by Initial Developer to use, reproduce,
          modify, display, perform, sublicense and distribute the Original
          Code (or portions thereof) with or without Modifications, and/or
          as part of a Larger Work; and

          (b) under Patents Claims infringed by the making, using or
          selling of Original Code, to make, have made, use, practice,
          sell, and offer for sale, and/or otherwise dispose of the
          Original Code (or portions thereof).

          (c) the licenses granted in this Section 2.1(a) and (b) are
          effective on the date Initial Developer first distributes
          Original Code under the terms of this License.

          (d) Notwithstanding Section 2.1(b) above, no patent license is
          granted: 1) for code that You delete from the Original Code; 2)
          separate from the Original Code;  or 3) for infringements caused
          by: i) the modification of the Original Code or ii) the
          combination of the Original Code with other software or devices.

     2.2. Contributor Grant.
     Subject to third party intellectual property claims, each Contributor
     hereby grants You a world-wide, royalty-free, non-exclusive license

          (a)  under intellectual property rights (other than patent or
          trademark) Licensable by Contributor, to use, reproduce, modify,
          display, perform, sublicense and distribute the Modifications
          created by such Contributor (or portions thereof) either on an
          unmodified basis, with other Modifications, as Covered Code
          and/or as part of a Larger Work; and

          (b) under Patent Claims infringed by the making, using, or
          selling of  Modifications made by that Contributor either alone
          and/or in combination with its Contributor Version (or portions
          of such combination), to make, use, sell, offer for sale, have
          made, and/or otherwise dispose of: 1) Modifications made by that
          Contributor (or portions thereof); and 2) the combination of
          Modifications made by that Contributor with its Contributor
          Version (or portions of such combination).

          (c) the licenses granted in Sections 2.2(a) and 2.2(b) are
          effective on the date Contributor first makes Commercial Use of
          the Covered Code.

          (d)    Notwithstanding Section 2.2(b) above, no patent license is
          granted: 1) for any code that Contributor has deleted from the
          Contributor Version; 2)  separate from the Contributor Version;
          3)  for infringements caused by: i) third party modifications of
          Contributor Version or ii)  the combination of Modifications made
          by that Contributor with other software  (except as part of the
          Contributor Version) or other devices; or 4) under Patent Claims
          infringed by Covered Code in the absence of Modifications made by
          that Contributor.

3. Distribution Obligations.

     3.1. Application of License.
     The Modifications which You create or to which You contribute are
     governed by the terms of this License, including without limitation
     Section 2.2. The Source Code version of Covered Code may be
     distributed only under the terms of this License or a future version
     of this License released under Section 6.1, and You must include a
     copy of this License with every copy of the Source Code You
     distribute. You may not offer or impose any terms on any Source Code
     version that alters or restricts the applicable version of this
     License or the recipients' rights hereunder. However, You may include
     an additional document offering the additional rights described in
     Section 3.5.

     3.2. Availability of Source Code.
     Any Modification which You create or to which You contribute must be
     made available in Source Code form under the terms of this License
     either on the same media as an Executable version or via an accepted
     Electronic Distribution Mechanism to anyone to whom you made an
     Executable version available; and if made available via Electronic
     Distribution Mechanism, must remain available for at least twelve (12)
     months after the date it initially became available, or at least six
     (6) months after a subsequent version of that particular Modification
     has been made available to such recipients. You are responsible for
     ensuring that the Source Code version remains available even if the
     Electronic Distribution Mechanism is maintained by a third party.

     3.3. Description of Modifications.
     You must cause all Covered Code to which You contribute to contain a
     file documenting the changes You made to create that Covered Code and
     the date of any change. You must include a prominent statement that
     the Modification is derived, directly or indirectly, from Original
     Code provided by the Initial Developer and including the name of the
     Initial Developer in (a) the Source Code, and (b) in any notice in an
     Executable version or related documentation in which You describe the
     origin or ownership of the Covered Code.

     3.4. Intellectual Property Matters
          (a) Third Party Claims.
          If Contributor has knowledge that a license under a third party's
          intellectual property rights is required to exercise the rights
          granted by such Contributor under Sections 2.1 or 2.2,
          Contributor must include a text file with the Source Code
          distribution titled "LEGAL" which describes the claim and the
          party making the claim in sufficient detail that a recipient will
          know whom to contact. If Contributor obtains such knowledge after
          the Modification is made available as described in Section 3.2,
          Contributor shall promptly modify the LEGAL file in all copies
          Contributor makes available thereafter and shall take other steps
          (such as notifying appropriate mailing lists or newsgroups)
          reasonably calculated to inform those who received the Covered
          Code that new knowledge has been obtained.

          (b) Contributor APIs.
          If Contributor's Modifications include an application programming
          interface and Contributor has knowledge of patent licenses which
          are reasonably necessary to implement that API, Contributor must
          also include this information in the LEGAL file.

               (c)    Representations.
          Contributor represents that, except as disclosed pursuant to
          Section 3.4(a) above, Contributor believes that Contributor's
          Modifications are Contributor's original creation(s) and/or
          Contributor has sufficient rights to grant the rights conveyed by
          this License.

     3.5. Required Notices.
     You must duplicate the notice in Exhibit A in each file of the Source
     Code.  If it is not possible to put such notice in a particular Source
     Code file due to its structure, then You must include such notice in a
     location (such as a relevant directory) where a user would be likely
     to look for such a notice.  If You created one or more Modification(s)
     You may add your name as a Contributor to the notice described in
     Exhibit A.  You must also duplicate this License in any documentation
     for the Source Code where You describe recipients' rights or ownership
     rights relating to Covered Code.  You may choose to offer, and to
     charge a fee for, warranty, support, indemnity or liability
     obligations to one or more recipients of Covered Code. However, You
     may do so only on Your own behalf, and not on behalf of the Initial
     Developer or any Contributor. You must make it absolutely clear than
     any such warranty, support, indemnity or liability obligation is
     offered by You alone, and You hereby agree to indemnify the Initial
     Developer and every Contributor for any liability incurred by the
     Initial Developer or such Contributor as a result of warranty,
     support, indemnity or liability terms You offer.

     3.6. Distribution of Executable Versions.
     You may distribute Covered Code in Executable form only if the
     requirements of Section 3.1-3.5 have been met for that Covered Code,
     and if You include a notice stating that the Source Code version of
     the Covered Code is available under the terms of this License,
     including a description of how and where You have fulfilled the
     obligations of Section 3.2. The notice must be conspicuously included
     in any notice in an Executable version, related documentation or
     collateral in which You describe recipients' rights relating to the
     Covered Code. You may distribute the Executable version of Covered
     Code or ownership rights under a license of Your choice, which may
     contain terms different from this License, provided that You are in
     compliance with the terms of this License and that the license for the
     Executable version does not attempt to limit or alter the recipient's
     rights in the Source Code version from the rights set forth in this
     License. If You distribute the Executable version under a different
     license You must make it absolutely clear that any terms which differ
     from this License are offered by You alone, not by the Initial
     Developer or any Contributor. You hereby agree to indemnify the
     Initial Developer and every Contributor for any liability incurred by
     the Initial Developer or such Contributor as a result of any such
     terms You offer.

     3.7. Larger Works.
     You may create a Larger Work by combining Covered Code with other code
     not governed by the terms of this License and distribute the Larger
     Work as a single product. In such a case, You must make sure the
     requirements of this License are fulfilled for the Covered Code.

4. Inability to Comply Due to Statute or Regulation.

     If it is impossible for You to comply with any of the terms of this
     License with respect to some or all of the Covered Code due to
     statute, judicial order, or regulation then You must: (a) comply with
     the terms of this License to the maximum extent possible; and (b)
     describe the limitations and the code they affect. Such description
     must be included in the LEGAL file described in Section 3.4 and must
     be included with all distributions of the Source Code. Except to the
     extent prohibited by statute or regulation, such description must be
     sufficiently detailed for a recipient of ordinary skill to be able to
     understand it.

5. Application of this License.

     This License applies to code to which the Initial Developer has
     attached the notice in Exhibit A and to related Covered Code.

6. Versions of the License.

     6.1. New Versions.
     Netscape Communications Corporation ("Netscape") may publish revised
     and/or new versions of the License from time to time. Each version
     will be given a distinguishing version number.

     6.2. Effect of New Versions.
     Once Covered Code has been published under a particular version of the
     License, You may always continue to use it under the terms of that
     version. You may also choose to use such Covered Code under the terms
     of any subsequent version of the License published by Netscape. No one
     other than Netscape has the right to modify the terms applicable to
     Covered Code created under this License.

     6.3. Derivative Works.
     If You create or use a modified version of this License (which you may
     only do in order to apply it to code which is not already Covered Code
     governed by this License), You must (a) rename Your license so that
     the phrases "Mozilla", "MOZILLAPL", "MOZPL", "Netscape",
     "MPL", "NPL" or any confusingly similar phrase do not appear in your
     license (except to note that your license differs from this License)
     and (b) otherwise make it clear that Your version of the license
     contains terms which differ from the Mozilla Public License and
     Netscape Public License. (Filling in the name of the Initial
     Developer, Original Code or Contributor in the notice described in
     Exhibit A shall not of themselves be deemed to be modifications of
     this License.)

7. DISCLAIMER OF WARRANTY.

     COVERED CODE IS PROVIDED UNDER THIS LICENSE ON AN "AS IS" BASIS,
     WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING,
     WITHOUT LIMITATION, WARRANTIES THAT THE COVERED CODE IS FREE OF
     DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING.
     THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE COVERED CODE
     IS WITH YOU. SHOULD ANY COVERED CODE PROVE DEFECTIVE IN ANY RESPECT,
     YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE
     COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER
     OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF
     ANY COVERED CODE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER.

8. TERMINATION.

     8.1.  This License and the rights granted hereunder will terminate
     automatically if You fail to comply with terms herein and fail to cure
     such breach within 30 days of becoming aware of the breach. All
     sublicenses to the Covered Code which are properly granted shall
     survive any termination of this License. Provisions which, by their
     nature, must remain in effect beyond the termination of this License
     shall survive.

     8.2.  If You initiate litigation by asserting a patent infringement
     claim (excluding declatory judgment actions) against Initial Developer
     or a Contributor (the Initial Developer or Contributor against whom
     You file such action is referred to as "Participant")  alleging that:

     (a)  such Participant's Contributor Version directly or indirectly
     infringes any patent, then any and all rights granted by such
     Participant to You under Sections 2.1 and/or 2.2 of this License
     shall, upon 60 days notice from Participant terminate prospectively,
     unless if within 60 days after receipt of notice You either: (i)
     agree in writing to pay Participant a mutually agreeable reasonable
     royalty for Your past and future use of Modifications made by such
     Participant, or (ii) withdraw Your litigation claim with respect to
     the Contributor Version against such Participant.  If within 60 days
     of notice, a reasonable royalty and payment arrangement are not
     mutually agreed upon in writing by the parties or the litigation claim
     is not withdrawn, the rights granted by Participant to You under
     Sections 2.1 and/or 2.2 automatically terminate at the expiration of
     the 60 day notice period specified above.

     (b)  any software, hardware, or device, other than such Participant's
     Contributor Version, directly or indirectly infringes any patent, then
     any rights granted to You by such Participant under Sections 2.1(b)
     and 2.2(b) are revoked effective as of the date You first made, used,
     sold, distributed, or had made, Modifications made by that
     Participant.

     8.3.  If You assert a patent infringement claim against Participant
     alleging that such Participant's Contributor Version directly or
     indirectly infringes any patent where such claim is resolved (such as
     by license or settlement) prior to the initiation of patent
     infringement litigation, then the reasonable value of the licenses
     granted by such Participant under Sections 2.1 or 2.2 shall be taken
     into account in determining the amount or value of any payment or
     license.

     8.4.  In the event of termination under Sections 8.1 or 8.2 above,
     all end user license agreements (excluding distributors and resellers)
     which have been validly granted by You or any distributor hereunder
     prior to termination shall survive termination.

9. LIMITATION OF LIABILITY.

     UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT
     (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE INITIAL
     DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF COVERED CODE,
     OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE TO ANY PERSON FOR
     ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY
     CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF GOODWILL,
     WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER
     COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN
     INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF
     LIABILITY SHALL NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY
     RESULTING FROM SUCH PARTY'S NEGLIGENCE TO THE EXTENT APPLICABLE LAW
     PROHIBITS SUCH LIMITATION. SOME JURISDICTIONS DO NOT ALLOW THE
     EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO
     THIS EXCLUSION AND LIMITATION MAY NOT APPLY TO YOU.

10. U.S. GOVERNMENT END USERS.

     The Covered Code is a "commercial item," as that term is defined in
     48 C.F.R. 2.101 (Oct. 1995), consisting of "commercial computer
     software" and "commercial computer software documentation," as such
     terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent with 48
     C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 (June 1995),
     all U.S. Government End Users acquire Covered Code with only those
     rights set forth herein.

11. MISCELLANEOUS.

     This License represents the complete agreement concerning subject
     matter hereof. If any provision of this License is held to be
     unenforceable, such provision shall be reformed only to the extent
     necessary to make it enforceable. This License shall be governed by
     California law provisions (except to the extent applicable law, if
     any, provides otherwise), excluding its conflict-of-law provisions.
     With respect to disputes in which at least one party is a citizen of,
     or an entity chartered or registered to do business in the United
     States of America, any litigation relating to this License shall be
     subject to the jurisdiction of the Federal Courts of the Northern
     District of California, with venue lying in Santa Clara County,
     California, with the losing party responsible for costs, including
     without limitation, court costs and reasonable attorneys' fees and
     expenses. The application of the United Nations Convention on
     Contracts for the International Sale of Goods is expressly excluded.
     Any law or regulation which provides that the language of a contract
     shall be construed against the drafter shall not apply to this
     License.

12. RESPONSIBILITY FOR CLAIMS.

     As between Initial Developer and the Contributors, each party is
     responsible for claims and damages arising, directly or indirectly,
     out of its utilization of rights under this License and You agree to
     work with Initial Developer and Contributors to distribute such
     responsibility on an equitable basis. Nothing herein is intended or
     shall be deemed to constitute any admission of liability.

13. MULTIPLE-LICENSED CODE.

     Initial Developer may designate portions of the Covered Code as
     "Multiple-Licensed".  "Multiple-Licensed" means that the Initial
     Developer permits you to utilize portions of the Covered Code under
     Your choice of the NPL or the alternative licenses, if any, specified
     by the Initial Developer in the file described in Exhibit A.

EXHIBIT A -Mozilla Public License.

     ``The contents of this file are subject to the Mozilla Public License
     Version 1.1 (the "License"); you may not use this file except in
     compliance with the License. You may obtain a copy of the License at
     http://www.mozilla.org/MPL/

     Software distributed under the License is distributed on an "AS IS"
     basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the
     License for the specific language governing rights and limitations
     under the License.

     The Original Code is ______________________________________.

     The Initial Developer of the Original Code is ________________________.
     Portions created by ______________________ are Copyright (C) ______
     _______________________. All Rights Reserved.

     Contributor(s): ______________________________________.

     Alternatively, the contents of this file may be used under the terms
     of the _____ license (the  "[___] License"), in which case the
     provisions of [______] License are applicable instead of those
     above.  If you wish to allow use of your version of this file only
     under the terms of the [____] License and not to allow others to use
     your version of this file under the MPL, indicate your decision by
     deleting  the provisions above and replace  them with the notice and
     other provisions required by the [___] License.  If you do not delete
     the provisions above, a recipient may use your version of this file
     under either the MPL or the [___] License."

     [NOTE: The text of this Exhibit A may differ slightly from the text of
     the notices in the Source Code files of the Original Code. You should
     use the text of this Exhibit A rather than the text found in the
     Original Code Source Code for Your Modifications.]

Makefile.PL

use ExtUtils::MakeMaker;
WriteMakefile(
    NAME              => 'AI::PSO',
    VERSION_FROM      => 'lib/AI/PSO.pm',
    PREREQ_PM         => { 'Math::Random' => 0, 'Callback' => 0},
    ($] >= 5.005 ?
      (AUTHOR         => 'W. Kyle Schlansker <kylesch@gmail.com>') : ()),
);

README

AI::PSO version 0.86
====================

INSTALLATION

To install this module type the following:

   perl Makefile.PL
   make
   make test
   make install

DEPENDENCIES

This module requires these other modules and libraries:

  Math::Random
  Callback


COPYRIGHT AND LICENCE

Copyright (C) 2006 by W. Kyle Schlansker

This Code is released under the Mozilla Public License Version 1.1.
You can find a copy of this license in the file MPL-1.1.txt
or at http://www.mozilla.org/MPL/MPL-1.1.txt.

examples/NeuralNet/Makefile

CC=g++
CFLAGS=-O2 -pipe -Wall
EXE=ann_compute


run: $(EXE) pso_ann.pl
	perl pso_ann.pl

$(EXE): NeuralNet.o main.o
	$(CC) -o $(EXE) main.o NeuralNet.o

main.o: main.cpp NeuralNet.h
	$(CC) $(CFLAGS) -c main.cpp

NeuralNet.o: NeuralNet.h NeuralNet.cpp
	$(CC) $(CFLAGS) -c NeuralNet.cpp

clean:
	rm -f *.o $(EXE) 

.PHONY : clean

examples/NeuralNet/NeuralNet.cpp

/// 
/// \author Kyle Schlansker
/// \date August 2004
///////////////////////////////////////////////////////////

#include "NeuralNet.h"

#ifdef WIN32

BOOL APIENTRY DllMain( HANDLE hModule, 
                       DWORD  ul_reason_for_call, 
                       LPVOID lpReserved
					 )
{
    switch (ul_reason_for_call)
	{
		case DLL_PROCESS_ATTACH:
		case DLL_THREAD_ATTACH:
		case DLL_THREAD_DETACH:
		case DLL_PROCESS_DETACH:
			break;
    }
    return TRUE;
}

#endif

examples/NeuralNet/NeuralNet.h

#ifndef NEURAL_NET
#define NEURAL_NET

#include <math.h>     // exp() used by the Logistic transfer function
#include <string>     // std::string used by Hidden and NeuralNet
using std::string;

// NEURALNET_API is normally supplied by the build (e.g. as a DLL export macro
// on Windows); define it as empty here if the build does not provide it.
#ifndef NEURALNET_API
#define NEURALNET_API
#endif


///
/// \class TransferFunction NeuralNet.h NeuralNet
/// \brief defines a transfer function object
/// 
class NEURALNET_API TransferFunction
{
    public:

        ///
        /// \fn TransferFunction(double val)
        /// \brief constructor
        /// \param val a double
        ///
        TransferFunction(double val = 1) : m_value(val)
        {
        }


        ///
        /// \fn ~TransferFunction()
        /// \brief destructor
        ///
        virtual ~TransferFunction()
        {
        }


        /// 
        /// \fn virtual double compute(double val)
        /// \brief computes the transfer function and returns result
        /// \param val a double
        /// \return double
        /// 
        virtual double compute(double val) = 0;


    protected:

        double m_value;        /// value on which to compute the transfer function
};



///
/// \class UnityGain NeuralNet.h NeuralNet
/// \brief defines a transfer function that passes its input straight through to its output (good for input neurons)
///
class NEURALNET_API UnityGain : public TransferFunction
{
    public:
        
        ///
        /// \fn UnityGain(double val) 
        /// \brief constructor
        /// \param val a double
        ///
        UnityGain(double val = 1) : TransferFunction(val)
        {
        }


        ///
        /// \fn ~UnityGain()
        /// \brief destructor
        ///
        ~UnityGain()
        {
        }


        ///
        /// \fn compute(double val)
        /// \brief computes the transfer function by returning the input
        /// \return double
        ///
        double compute(double val)
        {
            return m_value = val;
        }
};



///
/// \class Logistic  NeuralNet.h NeuralNet
/// \brief defines the Logistic transfer function
///
class NEURALNET_API Logistic : public TransferFunction
{
    public:
        
        ///
        /// \fn Logistic()
        /// \brief constructor
        /// 
        Logistic(double val = 1) : TransferFunction(val)
        {
        }


        ///
        /// \fn ~Logistic()
        /// \brief destructor
        /// 
        ~Logistic()
        {
        }

        
        ///
        /// \fn double compute(double val)
        /// \brief computes the Logistic function on val
        /// \return double
        double compute(double val)
        {
            // standard logistic (sigmoid) function: 1 / (1 + e^(-val))
            m_value = 1.0 / (1.0 + exp(-val));
            return m_value;
        }
};


///
/// \class Neuron NeuralNet.h NeuralNet
/// \brief exported class which simulates a neuron within a Neural Net
///
class NEURALNET_API Neuron
{
    public:

        ///
        /// \fn Neuron()
        /// \brief constructor
        /// \note add a flag to the constructor to choose which type of TransferFunction to use
        ///
        Neuron()
        {
            m_capacity = 1;
            m_numConnections = 0;
            m_neurons = new Neuron*[m_capacity];
            m_weights = new double[m_capacity];
            m_value = 0;
            xfer = new UnityGain();
        }


        ///
        /// \fn ~Neuron()
        /// \brief destructor
        ///
        virtual ~Neuron()
        {
            delete [] m_neurons;
            delete [] m_weights;
            delete xfer;
        }

        
        ///
        /// \fn virtual double value()
        /// \brief calculates the value of the neuron.  It is virtual
        ///            because the value is calculated differently for
        ///            different types of Neurons.
        ///
        virtual double value()
        {
            for(int i = 0; i < m_numConnections; i++)
                m_value += m_neurons[i]->value() * m_weights[i];
            return m_value = xfer->compute(m_value);
        }


        ///
        /// \fn void addConnection(Neuron *neuron)
        /// \brief adds a connection to another neuron
        /// \param neuron a pointer to the connected Neuron
        ///
        void addConnection(Neuron *neuron)
        {
            checkSize();
            m_neurons[m_numConnections++] = neuron;
        }


        ///
        /// \fn void setWeight(int index, double weight)
        /// \brief sets the connection weight of connection at
        ///           index to weight
        /// \param index an int
        /// \param weight a double
        ///
        void setWeight(int index, double weight)
        {
            if(index >= 0 && index < m_numConnections)
                m_weights[index] = weight;
        }


        ///
        /// \fn int numConnections()
        /// \brief returns the number of connections this Neuron has
        /// \return int
        ///
        int numConnections()
        {
            return m_numConnections;
        }



    protected:

        ///
        /// \fn void checkSize()
        /// \brief checks the size of the connection array for this Neuron.
        ///            if a connection needs to be added past the capacity, then
        ///            new connection array space is allocated.
        /// 
        void checkSize()
        {
            if( m_numConnections >= m_capacity )
            {
                m_capacity *= 2;
                Neuron **newNeuronArr = new Neuron*[m_capacity];
                double *newWeightArr = new double[m_capacity];

                for(int i = 0; i < m_numConnections; i++)
                {
                    newNeuronArr[i] = m_neurons[i];
                    newWeightArr[i] = m_weights[i];
                }

                delete [] m_neurons;
                delete [] m_weights;

                m_neurons = newNeuronArr;
                m_weights = newWeightArr;
            }
        }


        ///
        /// \fn double transferFunction(double val)
        /// \brief applies a transfer function to val and returns the result
        /// \param val a double
        /// \return double
        ///
        double transferFunc(double val)
        {
            return val;
        }


        int        m_numConnections;    /// number of connections to other Neurons
        int        m_capacity;        /// capacity of connection array
        Neuron **m_neurons;            /// connection array of pointers to other Neurons
        double *m_weights;            /// weight array of connections
        double    m_value;            /// value of this Neuron
        TransferFunction *xfer;        
};




/// 
/// \class Input NeuralNet.h NeuralNet
/// \brief Simulates an input neuron in a Neural net.  This class extends Neuron
///            but allows for its value to be set directly and it also overrides 
///            the virtual value function so that it returns its value directly 
///            rather than passing through a transfer function.
///
class NEURALNET_API Input : public Neuron
{
    public:


        ///
        /// \fn Input(double value)
        /// \brief constructor
        ///
        Input(double value = 0) : Neuron()
        {
            m_value = value;
        }


        ///
        /// \fn ~Input()
        /// \brief destructor
        ///
        virtual ~Input()
        {
        }


        ///
        /// \fn void setValue(double value)
        /// \brief sets the value of this input Neuron to value
        /// \param value a double
        ///
        void setValue(double value)
        {
            m_value = value;
        }


        ///
        /// \fn double value()
        /// \brief override of virtual function.
        /// \return double
        ///
//        double value()
//        {
//            return m_value;
//        }

    protected:
};



///
/// \class Hidden NeuralNet.h NeuralNet
/// \brief simulates a hidden Neuron
///
class NEURALNET_API Hidden : public Neuron
{

    public:

        ///
        /// \fn Hidden()
        /// \brief constructor which sets transfer function
        ///
        Hidden() : Neuron()
        {
//            delete xfer;
//            xfer = new Logistic();
        }


        ///
        /// \fn ~Hidden()
        /// \brief destructor
        ///
        virtual ~Hidden()
        {
        }


        ///
        /// \fn void setTransferFunction(char *xferFunc)
        /// \brief sets the transfer function for this Neuron
        ///
        void setTransferFunction(const char *xferFunc)
        {
            string xferName = string(xferFunc);
            if(xferName != "UnityGain")
            {
                if(xferName == "Logistic")
                {
                    delete xfer;
                    xfer = new Logistic();
                }
                // add if statements for each new transfer function object
            }
        }
};



///
/// \class NeuralNet NeuralNet.h NeuralNet
/// \brief Simulates a NeuralNet made up of Neurons and Input Neurons
/// 
class NEURALNET_API NeuralNet 
{
    public:

        ///
        /// \fn NeuralNet(int numInputs, int numHidden)
        /// \brief constructor
        /// \param numInputs an int
        /// \param numHidden an int
        ///
        NeuralNet(int numInputs = 3, int numHidden = 2, const char *xferFunc = "Logistic") : m_numInputs(numInputs), m_numHidden(numHidden)
        {
            m_inputs = new Input[m_numInputs];
//            m_hidden = new Neuron[m_numHidden];
            m_hidden = new Hidden[m_numHidden];
            for(int i = 0; i < m_numHidden; i++)
                m_hidden[i].setTransferFunction(xferFunc);
            m_xferFunc = string(xferFunc);
            connectionize();
        }


        ///
        /// \fn ~NeuralNet()
        /// \brief destructor 
        ///
        ~NeuralNet()
        {
            delete [] m_inputs;
            delete [] m_hidden;
        }


        ///
        /// \fn void setInput(int index, double value)
        /// \brief sets the value of the Input Neuron given by index to value
        /// \param index an int
        /// \param value a double
        ///
        void setInput(int index, double value)
        {
            if(index >= 0 && index < m_numInputs)
                m_inputs[index].setValue(value);
        }


        ///
        /// \fn void setWeightsToOne()
        /// \brief sets all of the connection weights to unity
        /// \note this is really only used for testing/debugging purposes
        ///
        void setWeightsToOne()
        {
            for(int i = 0; i < m_numHidden; i++)
                for(int j = 0; j < m_hidden[i].numConnections(); j++)
                    m_hidden[i].setWeight(j, 1.0);
            for(int k = 0; k < m_output.numConnections(); k++)
                m_output.setWeight(k, 1.0);
        }


        ///
        /// \fn double value()
        /// \brief returns the final network value 
        /// \return double
        ///
        double value()
        {
            return m_output.value();
        }


        ///
        /// \fn void setHiddenWeight(int indexHidden, int indexInput, double weight)
        /// \brief sets the connection weight between a pair of input and hidden neurons
        /// \param indexHidden an int
        /// \param indexInput an int
        /// \param weight a double
        ///
        void setHiddenWeight(int indexHidden, int indexInput, double weight)
        {
            if(indexHidden >= 0 && indexHidden < m_numHidden)
                m_hidden[indexHidden].setWeight(indexInput, weight);
        }


        ///
        /// \fn void setOutputWeight(int index, double weight)
        /// \brief sets the connection weight between a pair of hidden and output neurons
        /// \param index an int
        /// \param weight a double
        ///
        void setOutputWeight(int index, double weight)
        {
            m_output.setWeight(index, weight);
        }

/*
        void read(istream & in)
        {
            in  >> m_numInputs
                >> m_numHidden;
            
            delete [] m_inputs;
            delete [] m_hidden;

            m_inputs = new Input[m_numInputs];
            m_hidden = new Neuron[m_numHidden];
            connectionize();

            double weight;

            for(int i = 0; i < m_numHidden; i++)
                for(int j = 0; j < m_hidden[i].numConnections(); j++)
                {
                    in >> weight;
                    m_hidden[i].setWeight(j, weight);
                }
            for(int k = 0; k < m_output.numConnections(); k++)
            {
                in >> weight;
                m_output.setWeight(k, weight);
            }
            
        }

        friend istream & operator>>(istream & in, NeuralNet & ann)
        {
            ann.read(in);
            return in;
        }

        void print(ostream & out)
        {
        }
*/
    protected:

        ///
        /// \fn connectionize()
        /// \brief builds a fully connected network once the Neurons are constructed
        /// 
        void connectionize()
        {
            for(int i = 0; i < m_numInputs; i++)
                for(int j = 0; j < m_numHidden; j++)
                    m_hidden[j].addConnection(&m_inputs[i]);

            for(int k = 0; k < m_numHidden; k++)
                m_output.addConnection(&m_hidden[k]);
        }


        int        m_numInputs;    /// number of input Neurons    in network
        int        m_numHidden;    /// number of hidden Neurons in network
        Input  *m_inputs;        /// array of Input Neurons
//        Neuron *m_hidden;        /// array of hidden Neurons
        Hidden *m_hidden;        /// array of hidden Neurons
        Neuron    m_output;        /// the single output Neuron (it is more efficient to have a separate network for each output)
        string  m_xferFunc;        /// type of transfer function for hidden neurons
};

#endif

examples/NeuralNet/main.cpp

/// \file main.cpp
/// \brief Source file for testing a simple three layer feed forward neural network class
///
/// \author Kyle Schlansker
/// \date August 2004
//////////////////////////////////////////////////////////////

#include <iostream>
#include <fstream>
#include <string>
using namespace std;

#include "NeuralNet.h"


///
/// \fn int main(int argc, char **argv)
/// \brief calculates the neural network value from an ANN configuration file
///
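/// Expected input layout (inferred from the reads below; a rough sketch rather
/// than a formal spec -- examples/NeuralNet/pso_ann.pl writes the config file
/// in this shape via writeAnnConfig):
///   config file: "numInputs numHidden", then the transfer function name
///                (e.g. "Logistic"), then numInputs weights for each hidden
///                neuron followed by numHidden output-layer weights
///   data file:   numInputs whitespace-separated input values
///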
int main(int argc, char **argv) {

    string annConfigFile = "sample.ann";
    string annDataFile   = "sample.dat";

    if(argc > 1) {
        annConfigFile = string(argv[1]);
    }
    if(argc > 2) {
        annDataFile = string(argv[2]);
    }

    int numInputs, numHidden;
    ifstream ifs;
    ifs.open(annConfigFile.data());
    if(!ifs.is_open()) {
        cerr << "Error opening neural network configuration file" << endl;
        return 1;
    }
    ifs  >> numInputs >> numHidden;

    string xferFunc;
    ifs >> xferFunc;

    double *dataForNet = new double[numInputs];

    ifstream ids;
    ids.open(annDataFile.data());
    if(!ids.is_open()) {
        cerr << "Error opening neural network data file" << endl;
        return 1;
    }
    for(int i = 0; i < numInputs; i++) {
        ids >> dataForNet[i];
    }
    ids.close();


    NeuralNet *m_ann = new NeuralNet(numInputs, numHidden, xferFunc.c_str());

    double weight;
    for(int c = 0; c < numHidden; c++) {
        for(int j = 0; j < numInputs; j++) {
            ifs >> weight;
            m_ann->setHiddenWeight(c, j, weight);
        }
    }
    for(int k = 0; k < numHidden; k++) {
        ifs >> weight;
        m_ann->setOutputWeight(k, weight);
    }
    
    for(int d = 0; d < numInputs; d++) {
        m_ann->setInput(d, dataForNet[d]);
    }

    delete [] dataForNet;

    ifs.close();
    if(ifs.is_open()) {
        cerr << "Error closing neural network configuration file" << endl;
    }

    cout << m_ann->value() << endl;

    delete m_ann;
}

examples/NeuralNet/pso_ann.pl

#!/usr/bin/perl -w
use strict;

use AI::PSO;

my %test_params = (
    numParticles   => 4,
    numNeighbors   => 3,
    maxIterations  => 1000,
    dimensions     => 8,		# 8 since we have 8 network weights we want to optimize for a 3 input 2 hidden 1 output feed-forward neural net
    deltaMin       => -2.0,
    deltaMax       =>  4.0,
    meWeight       => 2.0,
    meMin          => 0.0,
    meMax          => 1.0,
    themWeight     => 2.0,
    themMin        => 0.0,
    themMax        => 1.0,
    exitFitness    => 0.99,
    verbose        => 1,
);

my $numInputs = 3;
my $numHidden = 2;
my $xferFunc = "Logistic";
my $annConfig = "pso.ann";
my $annInputs = "pso.dat";

my $expectedValue = 3.5;	# this is the value that we want to train the ANN to produce (just like the example in t/PSO.t)
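
# A quick sanity check on the 'dimensions' setting above (illustrative only;
# $numWeights is not used elsewhere in this example): a fully connected
# 3-input, 2-hidden, 1-output network has 3*2 input-to-hidden weights plus
# 2*1 hidden-to-output weights, which is why dimensions => 8.
my $numWeights = ($numInputs * $numHidden) + ($numHidden * 1);    # 6 + 2 = 8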


sub test_fitness_function(@) {
    my (@arr) = (@_);
	&writeAnnConfig($annConfig, $numInputs, $numHidden, $xferFunc, @arr);
	my $netValue = &runANN($annConfig, $annInputs);
	print "network value = $netValue\n";

	# The closer the network value gets to our desired value,
	# the closer we want the fitness to be to 1.
	#
	# This is a special case of the sigmoid, and looks an awful lot
	# like the hyperbolic tangent ;)
	#
	my $magnitudeFromBest = abs($expectedValue - $netValue);
	return 2 / (1 + exp($magnitudeFromBest));
}
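
# Worked values for the mapping above (for intuition only): with $expectedValue = 3.5,
#   net value 3.5 -> |3.5 - 3.5| = 0 -> fitness 2/(1+e^0)  = 1.0
#   net value 2.5 -> |3.5 - 2.5| = 1 -> fitness 2/(1+e^1) ~= 0.54
#   net value 0.5 -> |3.5 - 0.5| = 3 -> fitness 2/(1+e^3) ~= 0.09
# so a perfect match scores 1 and the score decays toward 0 as the error grows.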

pso_set_params(\%test_params);
pso_register_fitness_function('test_fitness_function');
pso_optimize();
#my @solution = pso_get_solution_array();




##### io #########

sub writeAnnConfig() {
	my ($configFile, $inputs, $hidden, $func, @weights) = (@_);

	open(ANN, ">$configFile") or die "cannot open $configFile for writing: $!";
	print ANN "$inputs $hidden\n";
	print ANN "$func\n";
	foreach my $weight (@weights) {
		print ANN "$weight ";
	}
	print ANN "\n";
	close(ANN);
}

sub runANN($$) {
	my ($configFile, $dataFile) = @_;
	my $networkValue = `ann_compute $configFile $dataFile`;
	chomp($networkValue);
	return $networkValue;
}

lib/AI/PSO.pm

use strict;
use warnings;
use Math::Random;
use Callback;

require Exporter;

our @ISA = qw(Exporter);

our @EXPORT = qw(
    pso_set_params
    pso_register_fitness_function
    pso_optimize
    pso_get_solution_array
);

our $VERSION = '0.86';


######################## BEGIN MODULE CODE #################################

#---------- BEGIN GLOBAL PARAMETERS ------------

#-#-# search parameters #-#-#
my $numParticles  = 'null';            # This is the number of particles that actually search the problem hyperspace
my $numNeighbors  = 'null';            # This is the number of neighboring particles that each particle shares information with
                                       # which must obviously be less than the number of particles and greater than 0.
                                         # TODO: write code to preconstruct different topologies.  Such as fully connected, ring, star etc.
                                         #       Currently, neighbors are chosen by a simple hash function.  
                                         #       It would be fun (no theoretical benefit that I know of) to play with different topologies.
my $maxIterations = 'null';            # This is the maximum number of optimization iterations before exiting if the fitness goal is never reached.
my $exitFitness   = 'null';            # this is the exit criteria.  It must be a value between 0 and 1.
my $dimensions    = 'null';            # this is the number of variables the user is optimizing


#-#-# pso position parameters #-#-#
my $deltaMin       = 'null';           # This is the minimum scalar position change value when searching
my $deltaMax       = 'null';           # This is the maximum scalar position change value when searching

#-#-# my 'how much do I trust myself versus my neighbors' parameters #-#-#

lib/AI/PSO.pm

#----------   END GLOBAL DATA STRUCTURES --------


#---------- BEGIN EXPORTED SUBROUTINES ----------

#
# pso_set_params
#  - sets the global module parameters from the hash passed in
#
sub pso_set_params(%) {
    my (%params) = %{$_[0]};
    my $retval = 0;

    #no strict 'refs';
    #foreach my $key (keys(%params)) {
    #    $$key = $params{$key};
    #}
    #use strict 'refs';

    $numParticles   = defined($params{numParticles})   ? $params{numParticles}   : 'null';
    $numNeighbors   = defined($params{numNeighbors})   ? $params{numNeighbors}   : 'null';
    $maxIterations  = defined($params{maxIterations})  ? $params{maxIterations}  : 'null';
    $dimensions     = defined($params{dimensions})     ? $params{dimensions}     : 'null';
    $exitFitness    = defined($params{exitFitness})    ? $params{exitFitness}    : 'null';
    $deltaMin       = defined($params{deltaMin})       ? $params{deltaMin}       : 'null';
    $deltaMax       = defined($params{deltaMax})       ? $params{deltaMax}       : 'null';
    $meWeight       = defined($params{meWeight})       ? $params{meWeight}       : 'null';
    $meMin          = defined($params{meMin})          ? $params{meMin}          : 'null';
    $meMax          = defined($params{meMax})          ? $params{meMax}          : 'null';
    $themWeight     = defined($params{themWeight})     ? $params{themWeight}     : 'null';
    $themMin        = defined($params{themMin})        ? $params{themMin}        : 'null';
    $themMax        = defined($params{themMax})        ? $params{themMax}        : 'null';

    $psoRandomRange = defined($params{psoRandomRange}) ? $params{psoRandomRange} : 'null';

    $verbose        = defined($params{verbose})        ? $params{verbose}        : $verbose;

    my $param_string;
	if($psoRandomRange =~ m/null/) {
		$param_string =  "$numParticles:$numNeighbors:$maxIterations:$dimensions:$exitFitness:$deltaMin:$deltaMax:$meWeight:$meMin:$meMax:$themWeight:$themMin:$themMax";
	} else {
		$param_string =  "$numParticles:$numNeighbors:$maxIterations:$dimensions:$exitFitness:$deltaMin:$deltaMax:$psoRandomRange";
	}
    
    $retval = 1 if($param_string =~ m/null/);

    return $retval;
}
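
#
# Note: pso_set_params() returns 0 on success and 1 if any required parameter
# was left unset, so a caller can guard against an incomplete hash (usage sketch):
#
#   pso_set_params(\%params) and die "missing a required PSO parameter";
#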


#
# pso_register_fitness_function
#  - sets the user-defined callback fitness function
#
sub pso_register_fitness_function($) {
    my ($func) = @_;
    $user_fitness_function = new Callback(\&{"main::$func"});
    return 0;
}


#
# pso_optimize
#  - runs the particle swarm optimization algorithm
#
sub pso_optimize() {
	&init();
    return &swarm();
}

#
# pso_get_solution_array
#  - returns the array of parameters corresponding to the best solution so far
sub pso_get_solution_array() {
	return @solution;
}
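
#
# Typical calling sequence (a usage sketch mirroring examples/NeuralNet/pso_ann.pl;
# 'my_fitness' is a placeholder for a sub defined in the caller's main package
# that maps a candidate solution to a fitness between 0 and 1):
#
#     use AI::PSO;
#     pso_set_params(\%params);                      # hash ref of the settings above
#     pso_register_fitness_function('my_fitness');   # name of the callback sub
#     pso_optimize();                                # returns 0 if exitFitness was reached
#     my @solution = pso_get_solution_array();       # best position found so far
#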


#----------  END  EXPORTED SUBROUTINES ----------



#--------- BEGIN INTERNAL SUBROUTINES -----------

#
# init
#   - initializes global variables
#   - initializes particle data structures
#
sub init() {
	if($psoRandomRange =~ m/null/) {
		$useModifiedAlgorithm = 1;
	} else {
		$useModifiedAlgorithm = 0;
	}
	&initialize_particles();
}

#
# initialize_particles
#    - sets up internal data structures
#    - initializes particle positions and velocities with an element of randomness
#
sub initialize_particles() {
    for(my $p = 0; $p < $numParticles; $p++) {
        $particles[$p]           = {};  # each particle is a hash of arrays with the array sizes being the dimensionality of the problem space
        $particles[$p]{nextPos}  = [];  # nextPos is the array of positions to move to on the next positional update
        $particles[$p]{bestPos}  = [];  # bestPos is the position of that has yielded the best fitness for this particle (it gets updated when a better fitness is found)
        $particles[$p]{currPos}  = [];  # currPos is the current position of this particle in the problem space
        $particles[$p]{velocity} = [];  # velocity ... come on ...

        for(my $d = 0; $d < $dimensions; $d++) {
            $particles[$p]{nextPos}[$d]  = &random($deltaMin, $deltaMax);
            $particles[$p]{currPos}[$d]  = &random($deltaMin, $deltaMax);
            $particles[$p]{bestPos}[$d]  = &random($deltaMin, $deltaMax);
            $particles[$p]{velocity}[$d] = &random($deltaMin, $deltaMax);
        }
    }
}



#
# initialize_neighbors
# NOTE: I made this a separate subroutine so that different topologies of neighbors can be created and used instead of this.
# NOTE: This subroutine is currently not used because we access neighbors by index to the particle array rather than storing their references
# 
#  - adds a neighbor array to the particle hash data structure
#  - sets the neighbor based on the default neighbor hash function
#
sub initialize_neighbors() {
    for(my $p = 0; $p < $numParticles; $p++) {
        for(my $n = 0; $n < $numNeighbors; $n++) {
            $particles[$p]{neighbor}[$n] = $particles[&get_index_of_neighbor($p, $n)];
        }
    }
}


sub dump_particle($) {
    $| = 1;
    my ($index) = @_;
    print STDERR "[particle $index]\n";
    print STDERR "\t[bestPos] ==> " . &compute_fitness(@{$particles[$index]{bestPos}}) . "\n";
    foreach my $pos (@{$particles[$index]{bestPos}}) {
        print STDERR "\t\t$pos\n";
    }
    print STDERR "\t[currPos] ==> " . &compute_fitness(@{$particles[$index]{currPos}}) . "\n";
    foreach my $pos (@{$particles[$index]{currPos}}) {
        print STDERR "\t\t$pos\n";
    }
    print STDERR "\t[nextPos] ==> " . &compute_fitness(@{$particles[$index]{nextPos}}) . "\n";
    foreach my $pos (@{$particles[$index]{nextPos}}) {
        print STDERR "\t\t$pos\n";
    }
    print STDERR "\t[velocity]\n";
    foreach my $pos (@{$particles[$index]{velocity}}) {
        print STDERR "\t\t$pos\n";
    }
}

#
# swarm 
#  - runs the particle swarm algorithm
#
sub swarm() {
    for(my $iter = 0; $iter < $maxIterations; $iter++) { 
        for(my $p = 0; $p < $numParticles; $p++) { 

            ## update position
            for(my $d = 0; $d < $dimensions; $d++) {
                $particles[$p]{currPos}[$d] = $particles[$p]{nextPos}[$d];
            }

            ## test _current_ fitness of position
            my $fitness = &compute_fitness(@{$particles[$p]{currPos}});
            # if this position in hyperspace is the best so far...
            if($fitness > &compute_fitness(@{$particles[$p]{bestPos}})) {
                # for each dimension, set the best position as the current position
                for(my $d2 = 0; $d2 < $dimensions; $d2++) {
                    $particles[$p]{bestPos}[$d2] = $particles[$p]{currPos}[$d2];
                }
            }

            ## check for exit criteria
            if($fitness >= $exitFitness) {
                #...write solution
                print "Y:$iter:$p:$fitness\n";
                &save_solution(@{$particles[$p]{bestPos}});
                &dump_particle($p);
                return 0;
            } else {
                if($verbose == 1) {
                    print "N:$iter:$p:$fitness\n";
                }
                if($verbose == 2) {
                    &dump_particle($p);
                }
            }
        }

        ## at this point we've updated our position, but haven't reached the end of the search
        ## so we turn to our neighbors for help.
        ## (we see if they are doing any better than we are, 
        ##  and if so, we try to fly over closer to their position)
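        ## For reference, both branches below implement the usual PSO update
        ## (descriptive comment only):
        ##   v[d]       <- clamp( v[d] + cMe  *(bestPos[d]         - currPos[d])
        ##                             + cThem*(neighborBestPos[d] - currPos[d]) )
        ##   nextPos[d] <- currPos[d] + v[d]
        ## where cMe/cThem are meWeight/themWeight scaled by uniform random draws,
        ## or rho1/rho2 drawn from psoRandomRange in the original RE & JK form.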

        for(my $p = 0; $p < $numParticles; $p++) {
            my $n = &get_index_of_best_fit_neighbor($p);
            my @meDelta = ();       # array of self position updates
            my @themDelta = ();     # array of neighbor position updates
            for(my $d = 0; $d < $dimensions; $d++) {
				if($useModifiedAlgorithm) { # this if should be moved out much further, but I'm working on code refactoring first
					my $meFactor = $meWeight * &random($meMin, $meMax);
					my $themFactor = $themWeight * &random($themMin, $themMax);
					$meDelta[$d] = $particles[$p]{bestPos}[$d] - $particles[$p]{currPos}[$d];
					$themDelta[$d] = $particles[$n]{bestPos}[$d] - $particles[$p]{currPos}[$d];
					my $delta = ($meFactor * $meDelta[$d]) + ($themFactor * $themDelta[$d]);
					$delta += $particles[$p]{velocity}[$d];

					# do the PSO position and velocity updates
					$particles[$p]{velocity}[$d] = &clamp_velocity($delta);
					$particles[$p]{nextPos}[$d] = $particles[$p]{currPos}[$d] + $particles[$p]{velocity}[$d];
				} else {
					my $rho1 = &random(0, $psoRandomRange);
					my $rho2 = $psoRandomRange - $rho1;
					$meDelta[$d] = $particles[$p]{bestPos}[$d] - $particles[$p]{currPos}[$d];
					$themDelta[$d] = $particles[$n]{bestPos}[$d] - $particles[$p]{currPos}[$d];
					my $delta = ($rho1 * $meDelta[$d]) + ($rho2 * $themDelta[$d]);
					$delta += $particles[$p]{velocity}[$d];

					# do the PSO position and velocity updates
					$particles[$p]{velocity}[$d] = &clamp_velocity($delta);
					$particles[$p]{nextPos}[$d] = $particles[$p]{currPos}[$d] + $particles[$p]{velocity}[$d];
				}
            }
        }

    }

    #
    # at this point we have exceeded the maximum number of iterations, so let's at least print out the best result so far
    #
    print STDERR "MAX ITERATIONS REACHED WITHOUT MEETING EXIT CRITERION...printing best solution\n";
    my $bestFit = -1;
    my $bestPartIndex = -1;
    for(my $p = 0; $p < $numParticles; $p++) {
        my $endFit = &compute_fitness(@{$particles[$p]{bestPos}});
        if($endFit >= $bestFit) {
            $bestFit = $endFit;
            $bestPartIndex = $p;
        }
    }
    &save_solution(@{$particles[$bestPartIndex]{bestPos}});
    &dump_particle($bestPartIndex);
    return 1;
}

#
# save solution
#   - simply copies the given array into the global solution array
#
sub save_solution(@) {
	@solution = @_;
}


#
# compute_fitness
# - computes the fitness of a particle by using the user-specified fitness function
# 
# NOTE: I originally had a 'fitness cache' so that particles that stumbled upon the same
#       position wouldn't have to recalculate their fitness (which is often expensive).
#       However, this may be undesirable behavior for the user (if you come across the same position
#       then you may be settling in on a local maxima so you might want to randomize things and
#       keep searching.  For this reason, I'm leaving the cache out.  It would be trivial
#       for users to implement their own cache since they are passed the same array of values.
#
sub compute_fitness(@) {
    my (@values) = @_;
    my $return_fitness = 0;

#    no strict 'refs';
#    if(defined(&{"main::$user_fitness_function"})) {
#        $return_fitness = &$user_fitness_function(@values);
#    } else {
#        warn "error running user_fitness_function\n";
#        exit 1;
#    }
#    use strict 'refs';

    $return_fitness = $user_fitness_function->call(@values);

    return $return_fitness;
}


#
# random
# - returns a random number that is between the first and second arguments using the Math::Random module
#
sub random($$) {
    my ($min, $max) = @_;
    return random_uniform(1, $min, $max);
}


#
# get_index_of_neighbor
#
# - returns the index of the Nth neighbor of particle P
# ==> Neighbor N of particle P is particle (P + N) mod numParticles, so a particle's
#     neighborhood is itself plus the following K-1 particles, where K is the neighborhood size.
#     So, with K = 4, particle 1 has neighbors 1, 2, 3, 4 and particle 4 has neighbors 4, 5, 6, 7.
# 
sub get_index_of_neighbor($$) {
    my ($particleIndex, $neighborNum) = @_;
    # TODO: insert error checking code / defensive programming
    return ($particleIndex + $neighborNum) % $numParticles;
}


#
# get_index_of_best_fit_neighbor
# - returns the index of the neighbor with the best fitness (when given a particle index)...
# 
sub get_index_of_best_fit_neighbor($) {
    my ($particleIndex) = @_;
    my $bestNeighborFitness   = 0;
    my $bestNeighborIndex     = &get_index_of_neighbor($particleIndex, 0);
    my $particleNeighborIndex = 0;
    for(my $neighbor = 0; $neighbor < $numNeighbors; $neighbor++) {
        $particleNeighborIndex = &get_index_of_neighbor($particleIndex, $neighbor);
        my $neighborFitness = &compute_fitness(@{$particles[$particleNeighborIndex]{bestPos}});
        if($neighborFitness > $bestNeighborFitness) {
            $bestNeighborFitness = $neighborFitness;
            $bestNeighborIndex = $particleNeighborIndex;
        }
    }
    # TODO: insert error checking code / defensive programming
    # return the index of the fittest neighbor (not simply the last one examined)
    return $bestNeighborIndex;
}

#
# clamp_velocity
# - restricts the change in velocity to be within a certain range (prevents large jumps in problem hyperspace)
#
sub clamp_velocity($) {
    my ($dx) = @_;
    if($dx < $deltaMin) {
        $dx = $deltaMin;
    } elsif($dx > $deltaMax) {
        $dx = $deltaMax;
    }
    return $dx;
}
#---------  END  INTERNAL SUBROUTINES -----------


1;
########################  END  MODULE CODE #################################
__END__

=head1 NAME

AI::PSO - Module for running the Particle Swarm Optimization algorithm

=head1 SYNOPSIS

  use AI::PSO;

  my %params = (
      numParticles   => 4,     # total number of particles involved in search 
      numNeighbors   => 3,     # number of particles with which each particle will share its progress
      maxIterations  => 1000,  # maximum number of iterations before exiting with no solution found
      dimensions     => 4,     # number of parameters you want to optimize
      deltaMin       => -4.0,  # minimum change in velocity during PSO update
      deltaMax       =>  4.0,  # maximum change in velocity during PSO update
      meWeight       => 2.0,   # 'individuality' weighting constant (higher means more individuality)
      meMin          => 0.0,   # 'individuality' minimum random weight
      meMax          => 1.0,   # 'individuality' maximum random weight
      themWeight     => 2.0,   # 'social' weighting constant (higher means trust group more)
      themMin        => 0.0,   # 'social' minimum random weight 
      themMax        => 1.0,   # 'social' maximum random weight
      exitFitness    => 0.9,   # minimum fitness to achieve before exiting
      verbose        => 0,     # 0 prints solution
                               # 1 prints (Y|N):particle:fitness at each iteration
                               # 2 dumps each particle (+1)
      psoRandomRange => 4.0,   # setting this enables the original PSO algorithm;
                               # the me*/them* parameters above are then ignored
  );


  sub custom_fitness_function {
        # this is a callback function.
        # the optimizer passes the current particle position to it,
        # so you do not need to worry about setting @input yourself...
        my (@input) = @_;
        # ... do something with @input, which is an array of floats
        # return a value in [0,1] with 0 being the worst and 1 being the best
  }

  pso_set_params(\%params);
  pso_register_fitness_function('custom_fitness_function');
  pso_optimize();
  my @solutionArray = pso_get_solution_array();

E<32>

=head2  General Guidelines

=over 2

=item 1. Sociality versus individuality

    I suggest that meWeight and themWeight add up to 4.0, or that 
    psoRandomRange = 4.0.  Also, you should set meMin and themMin 
    to 0, and meMax and themMax to 1, unless you really know what 
    you are doing.

=item 2. Search space coverage

    If you have a large search space, widening the [deltaMin, deltaMax] 
    velocity range can help cover more area.  Conversely, if you have 
    a small search space, then narrowing it will fine-tune the search.

=item 3. Swarm Topology

    I've personally found that using a global (fully connected) topology 
    where each particle is neighbors with all other particles 
    (numNeighbors == numParticles - 1) converges more quickly.  However, 
    this will drastically increase the number of calls to your fitness 
    function.  So, if your fitness function is the bottleneck, then you 
    should tune this value for the appropriate time/accuracy trade-off.  
    Also, I highly suggest you implement a simple fitness cache so you 
    don't end up recomputing fitness values.  This can easily be done 
    with a perl hash that is keyed on the string concatenation of the 
    array values passed to your fitness function.  Note that these are 
    floating point values, so determine how significant the values are 
    and use sprintf to limit the precision of the particle positions 
    before building the key (a sketch of such a cache follows this list).

=item 4. Number of particles

    Increasing the number of particles increases cooperation and search 
    space coverage at the expense of compute time.  For typical 
    applications, 20-40 particles should suffice.

=back
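
As a minimal sketch of the fitness cache suggested in guideline 3 (the
wrapper name, the 4-decimal precision, and expensive_fitness() are
illustrative only, not part of this module):

    my %fitness_cache;
    sub cached_fitness_function {
        my (@input) = @_;
        # limit precision so nearly identical positions share a cache entry
        my $key = join(',', map { sprintf("%.4f", $_) } @input);
        # expensive_fitness() stands in for your real, costly fitness function
        $fitness_cache{$key} = expensive_fitness(@input)
            unless exists $fitness_cache{$key};
        return $fitness_cache{$key};
    }

    pso_register_fitness_function('cached_fitness_function');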

=over 8

=item * NOTE: 

    I force people to define all parameters, but guidelines 1-4 are 
    standard and pretty safe.

=back


=head1 DESCRIPTION OF ALGORITHM

  Particle Swarm Optimization is an optimization algorithm designed by 
  Russell Eberhart and James Kennedy from Purdue University.  The 
  algorithm itself is based on the emergent behavior among societal 
  groups ranging from marching of ants, to flocking of birds, to 
  swarming of bees.

  PSO is a cooperative approach to optimization rather than an 
  evolutionary approach which kills off unsuccessful members of the 
  search team.  In the swarm framework, each particle is a relatively 
  unintelligent search agent.  It is in the collective sharing of 
  knowledge that solutions are found.  Each particle simply shares its 
  information with its neighboring particles.  So, if one particle is 
  not doing too well (has a low fitness), then it looks to its neighbors 
  for help and tries to be more like them while still maintaining a 
  sense of individuality.

  A particle is defined by its position and velocity.  The parameters a 
  user wants to optimize define the dimensionality of the problem 
  hyperspace.  So, if you want to optimize three variables, a particle 
  will be three dimensional and will have 3 values that define its 
  position and 3 values that define its velocity.  The position of a 
  particle determines how good it is by a user-defined fitness function.  
  The velocity of a particle determines how quickly it changes location.  
  Larger velocities provide more coverage of hyperspace at the cost of 
  solution precision.  With large velocities, a particle may come close 
  to a maximum but overshoot it because it is moving too quickly.  With 
  smaller velocities, particles can really hone in on a local solution 
  and find the best position but they may be missing another, possibly 
  even more optimal, solution because a full search of the hyperspace 
  was not conducted.  Techniques such as simulated annealing can be 
  applied in certain areas so that the closer a particle gets to a 
  solution, the smaller its velocity will be so that in bad areas of 
  the hyperspace, the particles move quickly, but in good areas, they 
  spend some extra time looking around.

  In general, particles fly around the problem hyperspace looking for 
  local/global maxima.  At each position, a particle computes its 
  fitness.  If it does not meet the exit criteria then it gets 
  information from neighboring particles about how well they are doing.  
  If a neighboring particle is doing better, then the current particle 
  tries to move closer to its neighbor by adjusting its position.  As 
  mentioned, the velocity controls how quickly a particle changes 
  location in the problem hyperspace.  There are also some stochastic 
  weights involved in the positional updates so that each particle is 
  truly independent and can take its own search path while still 
  incorporating good information from other particles.  In this 
  particular perl module, the user is able to choose from two 
  implementations of the algorithm.  One is the original implementation 
  from I<Swarm Intelligence> which requires the definition of a 
  'random range' to which the two stochastic weights are required to 
  sum.  The other implementation allows the user to define the weighting
  of how much a particle follows its own path versus following its 
  peers.  In both cases there is an element of randomness.
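
  The per-dimension update performed inside swarm() (for particle p, 
  its best-fit neighbor n, and dimension d) boils down to the 
  following pseudocode, which mirrors the module internals:

      # original algorithm (psoRandomRange set): rho1 + rho2 == psoRandomRange
      # modified algorithm: rho1 = meWeight   * random(meMin,   meMax)
      #                     rho2 = themWeight * random(themMin, themMax)
      velocity[p][d] = clamp( velocity[p][d]
                            + rho1 * (bestPos[p][d] - currPos[p][d])
                            + rho2 * (bestPos[n][d] - currPos[p][d]) );
      nextPos[p][d]  = currPos[p][d] + velocity[p][d];

  where clamp() restricts the new velocity to [deltaMin, deltaMax].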

  Solution convergence is quite fast once one particle becomes close to 
  a local maximum.  Having more particles active means there is more of 
  a chance that you will not be stuck in a local maximum.  Oftentimes 
  different neighborhoods (when not configured in a global neighborhood 
  fashion) will converge to different maxima.  It is quite interesting 
  to watch graphically.  If the fitness function is expensive to 
  compute, then it is often useful to start out with a small number of
  particles first and get a feel for how the algorithm converges.

  The algorithm implemented in this module is taken from the book 
  I<Swarm Intelligence> by Russell Eberhart and James Kennedy.  
  I highly suggest you read the book if you are interested in this 
  sort of thing.  


=head1 EXPORTED FUNCTIONS

=over 4

=item pso_set_params()

  Sets the particle swarm configuration parameters to use for the search.

=item pso_register_fitness_function()

  Sets the user-defined fitness function to call.  The fitness function 
  should return a value between 0 and 1.  Users may want to look into 
  the sigmoid function [1 / (1+e^(-x))] and its variants to implement 
  this (a small sketch follows this list).  Also, you may want to take 
  a look at either t/PSO.t for the simple test or 
  examples/NeuralNetwork/pso_ann.pl for an example of how to train a 
  simple 3-layer feed-forward neural network.  (Note that a real 
  training application would have a real dataset with many input-output 
  pairs...pso_ann.pl is a _very_ simple example.  Also note that the 
  neural network example requires g++.  Type 'make run' in the 
  examples/NeuralNetwork directory to run the example.  Lastly, the 
  neural network c++ code is in a very different coding style.  I did 
  indeed write it, but that was many years ago, when I was striving to 
  make my code nicely formatted and good looking :)).

=item pso_optimize()

  Runs the particle swarm optimization algorithm.  This consists of 
  running iterations of search and many calls to the fitness function 
  you registered with pso_register_fitness_function().

=item pso_get_solution_array()

  By default, pso_optimize() will print out to STDERR the first 
  solution, or the best solution so far if the max iterations were 
  reached.  This function will simply return an array of the winning 
  (or best so far) position of the entire swarm system.  It is an 
  array of floats to be used how you wish (like weights in a 
  neural network!).

=back
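
Below is a minimal sketch of a sigmoid-squashed fitness function as 
suggested above (the target value, the summing objective, and the 
function name are illustrative only):

  my $target = 3.5;
  sub my_fitness_function {
      my (@position) = @_;
      my $sum = 0;
      $sum += $_ foreach @position;
      # squash the distance from the target into (0,1]:
      # 1.0 when $sum == $target, approaching 0 as the distance grows
      return 2 / (1 + exp(abs($target - $sum)));
  }

  pso_register_fitness_function('my_fitness_function');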



=head1 EXAMPLES

=over 4

=item examples/NeuralNet/pso_ann.pl

=item t/PSO.t

=back



=head1 SEE ALSO

1.  I<Swarm Intelligence> by James Kennedy and Russell C. Eberhart. 
    ISBN 1-55860-595-9

2.  A Hybrid Particle Swarm and Neural Network Approach for Reactive Power Control
    AI-PSO-0.86/extradocs/ReactivePower-PSO-wks.pdf
    L<http://webapps.calvin.edu/~pribeiro/courses/engr302/Samples/ReactivePower-PSO-wks.pdf>



=head1 AUTHOR

W. Kyle Schlansker 
kylesch@gmail.com



t/PSO.t  view on Meta::CPAN

use Test::More tests => 9;
BEGIN { use_ok('AI::PSO') };

my %test_params = (
	numParticles   => 4,
	numNeighbors   => 3,
	maxIterations  => 5000,
	dimensions     => 4,
	deltaMin       => -2.0,
	deltaMax       =>  4.0,
	meWeight       => 2.0,
	meMin          => 0.0,
	meMax          => 1.0,
	themWeight     => 2.0,
	themMin        => 0.0,
	themMax        => 1.0,
	exitFitness    => 0.99,
	verbose        => 1,
);

my %test_params2 = %test_params;
$test_params2{psoRandomRange} = 4.0;

# simple test function to sum the position values up to 3.5
my $testValue = 3.5;
sub test_fitness_function(@) {
        my (@arr) = (@_);
        my $sum = 0;
        foreach my $val (@arr) {
                $sum += $val;
        }
        # sigmoid-like ==> squash the result to [0,1] and get as close to 3.5 as we can
        return 2 / (1 + exp(abs($testValue - $sum)));
}


ok( pso_set_params(\%test_params) == 0 );
ok( pso_register_fitness_function('test_fitness_function') == 0 );
ok( pso_optimize() == 0 );
my @solution = pso_get_solution_array();
ok( $#solution == $test_params{dimensions} - 1 );

ok( pso_set_params(\%test_params2) == 0 );


