AI-PredictionClient

lib/AI/PredictionClient.pm

A longer article on setting up a server is here:

 https://www.tomstall.com/content/create-a-globally-distributed-tensorflow-serving-cluster-with-nearly-no-pain/

=head1 ADDITIONAL INFO

This client is designed so that a developer can easily see how the data is formed and received.
The TensorFlow Serving interface is based on Protocol Buffers and gRPC,
and that implementation is built on a complex architecture of nested protofiles.

In this design I flattened that architecture out. Where Perl's native data handling is the best fit,
the modules use plain old Perl data structures rather than adding another layer of accessors.
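
For illustration only (the field names here are hypothetical, not the module's actual request layout), a request built from plain Perl data structures might look like this:

 use strict;
 use warnings;

 # Hypothetical sketch -- an illustration of the "plain old Perl data
 # structures" idea, not the exact structure this module builds.
 my $request = {
     model_spec => {
         name           => 'inception',
         signature_name => 'predict_images',
     },
     inputs => {
         images => [ 0.0, 0.5, 1.0 ],    # flattened tensor data
     },
 };

 # Ordinary Perl operations are all that is needed to inspect or modify it:
 push @{ $request->{inputs}{images} }, 0.25;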

The Tensor interface is used repeatedly, so this package includes a simplified Tensor class
to pack and unpack data going to and from the models.

Most clients simply send and receive rank-one tensors, i.e. vectors.
Higher-rank tensors are sent and received flattened;
the size property is what you would use to import or export those tensors to and from a math package.
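
As an illustrative sketch (this is generic Perl, not the package's actual Tensor class API), a rank-two tensor could be flattened for transport and later restored from its size like so:

 use strict;
 use warnings;

 # Hypothetical sketch: a 2 x 3 tensor travels as a flat list plus its
 # size (shape), which a math package would use to restore the layout.
 my @matrix = ( [ 1, 2, 3 ],
                [ 4, 5, 6 ] );

 my @size  = ( scalar @matrix, scalar @{ $matrix[0] } );    # (2, 3)
 my @value = map { @$_ } @matrix;                           # (1 .. 6)

 # Reconstructing the matrix on the receiving side:
 my @flat = @value;
 my @restored;
 push @restored, [ splice @flat, 0, $size[1] ] while @flat;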

The design takes advantage of the native JSON serialization capabilities built into the C++ Protocol Buffers library. 
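
For example (field names follow the TensorProto protofile: dtype, tensor_shape, float_val; a given serializer may render them in camelCase, and this is only a rough sketch of the JSON form), a small float tensor could be expressed and serialized like this:

 use strict;
 use warnings;
 use JSON ();

 # Illustrative only: roughly how a rank-one float tensor looks in JSON form.
 my $tensor = {
     dtype        => 'DT_FLOAT',
     tensor_shape => { dim => [ { size => 3 } ] },
     float_val    => [ 0.1, 0.2, 0.7 ],
 };

 print JSON->new->pretty->canonical->encode($tensor);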
