AI-PredictionClient


LICENSE

software and to any other program whose authors commit to using it.
You can use it for your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Specifically, the General Public License is designed to make
sure that you have the freedom to give away or sell copies of free
software, that you receive source code or can get it if you want it,
that you can change the software or use pieces of it in new free
programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.

  For example, if you distribute copies of a such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have.  You must make sure that they, too, receive or can get the
source code.  And you must tell them their rights.

  We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.

LICENSE


                    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License Agreement applies to any program or other work which
contains a notice placed by the copyright holder saying it may be
distributed under the terms of this General Public License.  The
"Program", below, refers to any such program or work, and a "work based
on the Program" means either the Program or any work containing the
Program or a portion of it, either verbatim or with modifications.  Each
licensee is addressed as "you".

  1. You may copy and distribute verbatim copies of the Program's source
code as you receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice and
disclaimer of warranty; keep intact all the notices that refer to this
General Public License and to the absence of any warranty; and give any
other recipients of the Program a copy of this General Public License
along with the Program.  You may charge a fee for the physical act of
transferring a copy.

LICENSE

    exchange for a fee.

Mere aggregation of another independent work with the Program (or its
derivative) on a volume of a storage or distribution medium does not bring
the other work under the scope of these terms.

  3. You may copy and distribute the Program (or a portion or derivative of
it, under Paragraph 2) in object code or executable form under the terms of
Paragraphs 1 and 2 above provided that you also do one of the following:

    a) accompany it with the complete corresponding machine-readable
    source code, which must be distributed under the terms of
    Paragraphs 1 and 2 above; or,

    b) accompany it with a written offer, valid for at least three
    years, to give any third party free (except for a nominal charge
    for the cost of distribution) a complete machine-readable copy of the
    corresponding source code, to be distributed under the terms of
    Paragraphs 1 and 2 above; or,

    c) accompany it with the information you received as to where the
    corresponding source code may be obtained.  (This alternative is
    allowed only for noncommercial distribution and only if you
    received the program in object code or executable form alone.)

Source code for a work means the preferred form of the work for making
modifications to it.  For an executable file, complete source code means
all the source code for all modules it contains; but, as a special
exception, it need not include source code for modules which are standard
libraries that accompany the operating system on which the executable
file runs, or for standard header files or definitions files that
accompany that operating system.

  4. You may not copy, modify, sublicense, distribute or transfer the
Program except as expressly provided under this General Public License.
Any attempt otherwise to copy, modify, sublicense, distribute or transfer
the Program is void, and will automatically terminate your rights to use
the Program under this License.  However, parties who have received
copies, or rights to use copies, from you under this General Public
License will not have their licenses terminated so long as such parties
remain in full compliance.

  5. By copying, distributing or modifying the Program (or any work based
on the Program) you indicate your acceptance of this license to do so,
and all its terms and conditions.

  6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the original
licensor to copy, distribute or modify the Program subject to these
terms and conditions.  You may not impose any further restrictions on the
recipients' exercise of the rights granted herein.

  7. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number.  If the Program
specifies a version number of the license which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation.  If the Program does not specify a version number of
the license, you may choose any version ever published by the Free Software
Foundation.

  8. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission.  For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this.  Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.

                            NO WARRANTY

  9. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS

LICENSE

The hypothetical commands `show w' and `show c' should show the
appropriate parts of the General Public License.  Of course, the
commands you use may be called something other than `show w' and `show
c'; they could even be mouse-clicks or menu items--whatever suits your
program.

You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary.  Here a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in the
  program `Gnomovision' (a program to direct compilers to make passes
  at assemblers) written by James Hacker.

  <signature of Ty Coon>, 1 April 1989
  Ty Coon, President of Vice

That's all there is to it!


--- The Artistic License 1.0 ---

This software is Copyright (c) 2017 by Tom Stall.

This is free software, licensed under:

LICENSE

  - "Reasonable copying fee" is whatever you can justify on the basis of media
    cost, duplication charges, time of people involved, and so on. (You will
    not be required to justify it to the Copyright Holder, but only to the
    computing community at large as a market that must bear the fee.) 
  - "Freely Available" means that no fee is charged for the item itself, though
    there may be fees involved in handling the item. It also means that
    recipients of the item may redistribute it under the same conditions they
    received it. 

1. You may make and give away verbatim copies of the source form of the
Standard Version of this Package without restriction, provided that you
duplicate all of the original copyright notices and associated disclaimers.

2. You may apply bug fixes, portability fixes and other modifications derived
from the Public Domain or from the Copyright Holder. A Package modified in such
a way shall still be considered the Standard Version.

3. You may otherwise modify your copy of this Package in any way, provided that
you insert a prominent notice in each changed file stating how and when you
changed that file, and provided that you do at least ONE of the following:

LICENSE

4. You may distribute the programs of this Package in object code or executable
form, provided that you do at least ONE of the following:

  a) distribute a Standard Version of the executables and library files,
     together with instructions (in the manual page or equivalent) on where to
     get the Standard Version.

  b) accompany the distribution with the machine-readable source of the Package
     with your modifications.

  c) accompany any non-standard executables with their corresponding Standard
     Version executables, giving the non-standard executables non-standard
     names, and clearly documenting the differences in manual pages (or
     equivalent), together with instructions on where to get the Standard
     Version.

  d) make other distribution arrangements with the Copyright Holder.

5. You may charge a reasonable copying fee for any distribution of this
Package.  You may charge any fee you choose for support of this Package. You
may not charge a fee for this Package itself. However, you may distribute this

META.json

   "license" : [
      "perl_5"
   ],
   "meta-spec" : {
      "url" : "http://search.cpan.org/perldoc?CPAN::Meta::Spec",
      "version" : 2
   },
   "name" : "AI-PredictionClient",
   "prereqs" : {
      "configure" : {
         "requires" : {
            "AI::PredictionClient::Alien::TensorFlowServingProtos" : "0.05",
            "Alien::Google::GRPC" : "0.06",
            "ExtUtils::MakeMaker" : "0",
            "Inline" : "0",
            "Inline::CPP" : "0",
            "Inline::MakeMaker" : "0"
         }
      },
      "develop" : {
         "requires" : {
            "Test::MinimumVersion" : "0",
            "Test::Perl::Critic" : "0",
            "Test::Pod" : "1.41",
            "Test::Spelling" : "0.12"
         }
      },
      "runtime" : {
         "requires" : {
            "AI::PredictionClient::Alien::TensorFlowServingProtos" : "0",
            "Alien::Google::GRPC" : "0",
            "Cwd" : "0",
            "Data::Dumper" : "0",
            "Inline" : "0",
            "JSON" : "0",
            "MIME::Base64" : "0",
            "Moo" : "0",
            "Moo::Role" : "0",
            "MooX::Options" : "0",
            "Perl6::Form" : "0",
            "perl" : "5.01",
            "strict" : "0",
            "warnings" : "0"
         }
      },
      "test" : {
         "requires" : {
            "Test::More" : "0"
         }
      }
   },
   "provides" : {
      "AI::PredictionClient" : {
         "file" : "lib/AI/PredictionClient.pm",
         "version" : "0.05"
      },
      "AI::PredictionClient::CPP::PredictionGrpcCpp" : {

META.json

      "AI::PredictionClient::Testing::Camel" : {
         "file" : "lib/AI/PredictionClient/Testing/Camel.pm",
         "version" : "0.05"
      },
      "AI::PredictionClient::Testing::PredictionLoopback" : {
         "file" : "lib/AI/PredictionClient/Testing/PredictionLoopback.pm",
         "version" : "0.05"
      }
   },
   "release_status" : "stable",
   "resources" : {
      "homepage" : "https://github.com/mountaintom/AI-PredictionClient",
      "repository" : {
         "type" : "git",
         "url" : "https://github.com/mountaintom/AI-PredictionClient.git",
         "web" : "https://github.com/mountaintom/AI-PredictionClient"
      }
   },
   "version" : "0.05",
   "x_serialization_backend" : "Cpanel::JSON::XS version 3.0233"
}

META.yml

---
abstract: 'A Perl Prediction client for Google TensorFlow Serving.'
author:
  - 'Tom Stall <stall@cpan.org>'
build_requires:
  Test::More: '0'
configure_requires:
  AI::PredictionClient::Alien::TensorFlowServingProtos: '0.05'
  Alien::Google::GRPC: '0.06'
  ExtUtils::MakeMaker: '0'
  Inline: '0'
  Inline::CPP: '0'
  Inline::MakeMaker: '0'
dynamic_config: 0
generated_by: 'Dist::Zilla version 6.009, CPAN::Meta::Converter version 2.143240'
license: perl
meta-spec:

META.yml

    version: '0.05'
  AI::PredictionClient::Roles::PredictionRole:
    file: lib/AI/PredictionClient/Roles/PredictionRole.pm
    version: '0.05'
  AI::PredictionClient::Testing::Camel:
    file: lib/AI/PredictionClient/Testing/Camel.pm
    version: '0.05'
  AI::PredictionClient::Testing::PredictionLoopback:
    file: lib/AI/PredictionClient/Testing/PredictionLoopback.pm
    version: '0.05'
requires:
  AI::PredictionClient::Alien::TensorFlowServingProtos: '0'
  Alien::Google::GRPC: '0'
  Cwd: '0'
  Data::Dumper: '0'
  Inline: '0'
  JSON: '0'
  MIME::Base64: '0'
  Moo: '0'
  Moo::Role: '0'
  MooX::Options: '0'
  Perl6::Form: '0'
  perl: '5.01'
  strict: '0'
  warnings: '0'
resources:
  homepage: https://github.com/mountaintom/AI-PredictionClient
  repository: https://github.com/mountaintom/AI-PredictionClient.git
version: '0.05'
x_serialization_backend: 'YAML::Tiny version 1.70'

bin/Inception.pl

  is       => 'ro',
  required => 1,
  format   => 's',
  doc      => '* Required: Path to image to be processed'
);
option host => (
  is       => 'ro',
  required => 0,
  format   => 's',
  default  => $default_host,
  doc      => "IP address of the server [Default: $default_host]"
);
option port => (
  is       => 'ro',
  required => 0,
  format   => 's',
  default  => $default_port,
  doc      => "Port number of the server [Default: $default_port]"
);
option model_name => (
  is       => 'ro',

bin/Inception.pl

  $client->model_signature($self->model_signature);
  $client->debug_verbose($self->debug_verbose);
  $client->loopback($self->debug_loopback_interface);
  $client->camel($self->debug_camel);

  printf("Sending image %s to server at host:%s  port:%s\n",
    $self->image_file, $self->host, $self->port);

  if ($client->call_inception($image_ref)) {

    my $results_ref         = $client->inception_results;
    my $classifications_ref = $results_ref->{'classes'};
    my $scores_ref          = $results_ref->{'scores'};
    my $comments            = 'Classification Results for ' . $self->image_file;

    my $results_text
      = form
      '.===========================================================================.',
      '| Class                                                     | Score         |',
      '|-----------------------------------------------------------+---------------|',
      '| {[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[} |{]].[[[[[[[[}  |',
      $classifications_ref, $scores_ref,
      '|===========================================================================|',
      '| {[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[}                   |',
      $comments,
      "'==========================================================================='";

    print $results_text;

  } else {
    printf("Failed. Status: %s, Status Code: %s, Status Message: %s \n",
      $client->status, $client->status_code, $client->status_message);
    return 1;
  }
  return 0;
}

sub read_image {

cpanfile

requires "AI::PredictionClient::Alien::TensorFlowServingProtos" => "0";
requires "Alien::Google::GRPC" => "0";
requires "Cwd" => "0";
requires "Data::Dumper" => "0";
requires "Inline" => "0";
requires "JSON" => "0";
requires "MIME::Base64" => "0";
requires "Moo" => "0";
requires "Moo::Role" => "0";
requires "MooX::Options" => "0";
requires "Perl6::Form" => "0";
requires "perl" => "5.01";
requires "strict" => "0";
requires "warnings" => "0";

on 'test' => sub {
  requires "Test::More" => "0";
};

on 'configure' => sub {
  requires "AI::PredictionClient::Alien::TensorFlowServingProtos" => "0.05";
  requires "Alien::Google::GRPC" => "0.06";
  requires "ExtUtils::MakeMaker" => "0";
  requires "Inline" => "0";
  requires "Inline::CPP" => "0";
  requires "Inline::MakeMaker" => "0";
};

on 'develop' => sub {
  requires "Test::MinimumVersion" => "0";
  requires "Test::Perl::Critic" => "0";
  requires "Test::Pod" => "1.41";
  requires "Test::Spelling" => "0.12";
};

dist.ini

[PodWeaver]

[ReadmeAnyFromPod]
type = pod
filename = README.pod
location = root

[Prereqs]
perl = 5.01

[Prereqs / ConfigureRequires]
Inline = 0
Inline::CPP = 0
Inline::MakeMaker = 0
Alien::Google::GRPC = 0.06
AI::PredictionClient::Alien::TensorFlowServingProtos = 0.05

[Test::MinimumVersion]
max_target_perl = 5.10.1

[MetaProvides::Package]

lib/AI/PredictionClient.pm

AI::PredictionClient - A Perl Prediction client for Google TensorFlow Serving.

=head1 VERSION

version 0.05

=head1 DESCRIPTION

This is a package for creating Perl clients for TensorFlow Serving model servers.
TensorFlow Serving is the system that allows TensorFlow neural network AI models
to be moved from the research environment into your production environment.

Currently this package implements a client for the Predict service and a model-specific Inception client.

The Predict service client 'Predict.pm' wraps the most versatile of the TensorFlow Serving prediction services.
A large portion of the model-specific clients can be implemented from this service.

The model-specific client 'InceptionClient.pm' is also implemented; it is the most popular client.

Additionally, a command line Inception client 'Inception.pl' is included
as an example of a complete client built from this package.
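
For orientation, a minimal client built on these pieces might look like the sketch
below. The method names are the ones bin/Inception.pl uses; the constructor
arguments, server address, and image handling are placeholder assumptions.

 use AI::PredictionClient::InceptionClient;

 # Placeholder server address - substitute your own model server.
 my $client = AI::PredictionClient::InceptionClient->new(
   host => '127.0.0.1',
   port => '9000',
 );

 $client->model_name('inception');
 $client->model_signature('predict_images');

 # $image holds the image data read from disk.
 if ($client->call_inception($image)) {
   my $results_ref = $client->inception_results;
   print "$_\n" for @{ $results_ref->{'classes'} };
 } else {
   printf("Failed. Status: %s, Status Message: %s\n",
     $client->status, $client->status_message);
 }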

lib/AI/PredictionClient.pm


The commands for the Inception client can be displayed by running the Inception.pl client with no arguments.

 $ Inception.pl 
 image_file is missing
 USAGE: Inception.pl [-h] [long options ...]

    --debug_camel               Test using camel image
    --debug_loopback_interface  Test loopback through dummy server
    --debug_verbose             Verbose output
    --host=String               IP address of the server [Default:
                                127.0.0.1]
    --image_file=String         * Required: Path to image to be processed
    --model_name=String         Model to process image [Default: inception]
    --model_signature=String    API signature for model [Default:
                                predict_images]
    --port=String               Port number of the server [Default: 9000]
    -h                          show a compact help message

Some typical command line examples include:

 Inception.pl --image_file=anything --debug_camel --host=xx7.x11.xx3.x14 --port=9000
 Inception.pl --image_file=grace_hopper.jpg --host=xx7.x11.xx3.x14 --port=9000
 Inception.pl --image_file=anything --debug_camel --debug_loopback --port 2004 --host technologic

=head3 In the examples above, the following points are demonstrated:

If you don't have an image handy, --debug_camel will provide a sample image to send to the server.
The image file argument still needs to be provided to keep the command line parser happy.

If you don't have a server to talk to, but want to see if most everything else is working, use
the --debug_loopback_interface option. This will provide a sample response you can test the client with.
The module can use the same loopback interface for debugging your bespoke clients.

The --debug_verbose option will dump the data structures of the request and response so that
you can see what is going on.
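
Enabling these hooks from your own code is just a matter of setting the
corresponding attributes on the client, as bin/Inception.pl does. A minimal
sketch ($client is assumed to be an already-constructed prediction client):

 # Route requests through the dummy loopback server instead of gRPC.
 $client->loopback(1);

 # Substitute the built-in camel test image for the input image.
 $client->camel(1);

 # Dump the request and response data structures.
 $client->debug_verbose(1);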

=head3 The response from a live server to the camel image looks like this:

 Inception.pl --image_file=zzzzz --debug_camel --host=107.170.xx.xxx --port=9000    
 Sending image zzzzz to server at host:107.170.xx.xxx  port:9000
 .===========================================================================.
 | Class                                                     | Score         |
 |-----------------------------------------------------------+---------------|
 | Arabian camel, dromedary, Camelus dromedarius             | 11.968746     |
 | triumphal arch                                            |  4.0692205    |
 | panpipe, pandean pipe, syrinx                             |  3.4675434    |
 | thresher, thrasher, threshing machine                     |  3.4537551    |
 | sorrel                                                    |  3.1359406    |
 |===========================================================================|
 | Classification Results for zzzzz                                          |
 '==========================================================================='

=head2 SETTING UP A TEST SERVER 

You can set up a server by following the instructions on the TensorFlow Serving site:

 https://www.tensorflow.org/deploy/tfserve

lib/AI/PredictionClient.pm


 https://www.tomstall.com/content/create-a-globally-distributed-tensorflow-serving-cluster-with-nearly-no-pain/

=head1 ADDITIONAL INFO

This client is designed to make it fairly easy for a developer to see how the data is formed and received.
The TensorFlow interface is based on Protocol Buffers and gRPC,
and that implementation is built on a complex architecture of nested protofiles.

In this design I flattened the architecture out; where Perl's native data handling works best,
the modules use plain old Perl data structures rather than creating another layer of accessors.

The Tensor interface is used repeatedly, so this package includes a simplified Tensor class
to pack and unpack data to and from the models.

Most clients simply send and receive rank-one tensors - vectors.
Higher-rank tensors are sent and received flattened;
the size property is then used when importing/exporting the tensors into or out of a math package.
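
As a concrete illustration of the flattened layout, here is a sketch of packing a
rank-two tensor. The shape/dtype/value accessors are the ones InceptionClient.pm
uses; the 2x3 DT_FLOAT tensor itself is an invented example.

 use AI::PredictionClient::Classes::SimpleTensor;

 my $tensor = AI::PredictionClient::Classes::SimpleTensor->new();

 # A 2 x 3 tensor: the shape carries the sizes, while the values are
 # supplied as a single flattened (row-major) list.
 $tensor->shape([ { size => 2 }, { size => 3 } ]);
 $tensor->dtype('DT_FLOAT');
 $tensor->value([ 1.0, 2.0, 3.0, 4.0, 5.0, 6.0 ]);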

The design takes advantage of the native JSON serialization capabilities built into the C++ Protocol Buffers library.
Serialization allows a much simpler, more robust interface to be created between the Perl environment
and the C++ environment.
One of the biggest advantages is for the developer who would like to quickly extend what this package does:
you can see how the data structures are built and manipulate them directly in Perl.
Of course, if you can be more forward-looking, building the proper roles and classes and contributing them would be great.
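
Because the request and reply are plain Perl data structures, inspecting them is a
one-liner. A sketch, assuming $client consumes the Predict role (request_ds and
reply_ds are the accessors that role requires):

 use Data::Dumper;

 # The exact structure that will be serialized to JSON for the C++ layer.
 print Dumper($client->request_ds);

 # After callPredict(), the deserialized reply is available the same way.
 print Dumper($client->reply_ds);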

=head1 DEPENDENCIES

This module depends on gRPC. It uses the CPAN module Alien::Google::GRPC, which will
either use an existing gRPC installation on your system or, if none is found,
download and build a private copy.

The system dependencies needed to build this module are most often already installed.
If not, the following dependencies need to be installed.

lib/AI/PredictionClient/CPP/PredictionGrpcCpp.pm

  std::unique_ptr<PredictionService::Stub> stub_;
  std::string to_base64(std::string text);
};

PredictionClient::PredictionClient(std::string server_port)
    : stub_(PredictionService::NewStub(grpc::CreateChannel(
          server_port, grpc::InsecureChannelCredentials()))) {}

std::string PredictionClient::callPredict(std::string serialized_request_object) {
  PredictRequest predictRequest;
  PredictResponse response;
  ClientContext context;
  std::string serialized_result_object;

  google::protobuf::util::JsonPrintOptions jprint_options;
  google::protobuf::util::JsonParseOptions jparse_options;

  // Deserialize the JSON request string built in Perl into a PredictRequest protobuf.
  google::protobuf::util::Status request_serialized_status =
      google::protobuf::util::JsonStringToMessage(
          serialized_request_object, &predictRequest, jparse_options);

  if (!request_serialized_status.ok()) {
    std::string error_result =
        "{\"Status\": \"Error:object:request_deserialization:protocol_buffers\", ";
    error_result += "\"StatusCode\": \"" +
                    std::to_string(request_serialized_status.error_code()) +
                    "\", ";
    error_result += "\"StatusMessage\":" +
                    to_base64(request_serialized_status.error_message()) +
                    "}";
    return error_result;
  }

  // Make the synchronous gRPC Predict call to the model server.
  Status status = stub_->Predict(&context, predictRequest, &response);

  if (status.ok()) {
    google::protobuf::util::Status response_serialize_status =
        google::protobuf::util::MessageToJsonString(
            response, &serialized_result_object, jprint_options);

    if (!response_serialize_status.ok()) {
      std::string error_result =
          "{\"Status\": \"Error:object:response_serialization:protocol_buffers\", ";
      error_result += "\"StatusCode\": \"" +
                      std::to_string(response_serialize_status.error_code()) +
                      "\", ";
      error_result += "\"StatusMessage\":" +
                      to_base64(response_serialize_status.error_message()) +
                      "}";
      return error_result;
    }

    std::string success_result = "{\"Status\": \"OK\", ";
    success_result += "\"StatusCode\": \"\", ";
    success_result += "\"StatusMessage\": \"\", ";
    success_result += "\"Result\": " + serialized_result_object + "}";
    return success_result;

  } else {

    std::string error_result = "{\"Status\": \"Error:transport:grpc\", ";
    error_result +=
        "\"StatusCode\": \"" + std::to_string(status.error_code()) + "\", ";
    error_result += "\"StatusMessage\":" + to_base64(status.error_message()) + "}";
    return error_result;
  }
}

std::string PredictionClient::to_base64(std::string text) {

  base64::Base64Proto base64pb;
  std::string serialized_base64_message;
  google::protobuf::util::JsonPrintOptions jprint_options;

  base64pb.add_base64(text.c_str(), text.size());

lib/AI/PredictionClient/Classes/SimpleTensor.pm

      DT_FLOAT      => 'floatVal',
      DT_DOUBLE     => 'doubleVal',
      DT_INT16      => 'intVal',
      DT_INT8       => 'intVal',
      DT_UINT8      => 'intVal',
      DT_STRING     => 'stringVal',
      DT_COMPLEX64  => 'scomplexVal',
      DT_INT64      => 'int64Val',
      DT_BOOL       => 'boolVal',
      DT_COMPLEX128 => 'dcomplexVal',
      DT_RESOURCE   => 'resourceHandleVal'
    };
  });

sub value {
  my ($self, $value_aref) = @_;

  my $decoded_aref;

  my $value_type       = $self->dtype_values->{ $self->dtype };
  my $tensor_value_ref = \$self->tensor_ds->{$value_type};

lib/AI/PredictionClient/Docs/Overview.pod

Overview.pod - A Perl Prediction client for Google TensorFlow Serving.

=head1 VERSION

version 0.05

=head1 DESCRIPTION

This is a package for creating Perl clients for TensorFlow Serving model servers.
TensorFlow Serving is the system that allows TensorFlow neural network AI models
to be moved from the research environment into your production environment.

Currently this package implements a client for the Predict service and a model-specific Inception client.

The Predict service client 'Predict.pm' wraps the most versatile of the TensorFlow Serving prediction services.
A large portion of the model-specific clients can be implemented from this service.

The model-specific client 'InceptionClient.pm' is also implemented; it is the most popular client.

Additionally, a command line Inception client 'Inception.pl' is included
as an example of a complete client built from this package.
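
For orientation, here is a sketch of driving the generic Predict service directly.
The inputs/callPredict/outputs methods come from the Predict role; the constructor
arguments shown are assumptions, and $image_bytes is a placeholder.

 use AI::PredictionClient::Predict;
 use AI::PredictionClient::Classes::SimpleTensor;

 my $client = AI::PredictionClient::Predict->new(
   host => '127.0.0.1',
   port => '9000',
 );
 $client->model_name('inception');
 $client->model_signature('predict_images');

 # $image_bytes is a placeholder holding the raw image data.
 my $tensor = AI::PredictionClient::Classes::SimpleTensor->new();
 $tensor->shape([ { size => 1 } ]);
 $tensor->dtype('DT_STRING');
 $tensor->value([$image_bytes]);

 # inputs() converts each SimpleTensor into its underlying data structure.
 $client->inputs({ images => $tensor });

 if ($client->callPredict()) {
   my $outputs_href = $client->outputs;    # map of output name => Tensor object
 }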

lib/AI/PredictionClient/Docs/Overview.pod


The commands for the Inception client can be displayed by running the Inception.pl client with no arguments.

 $ Inception.pl 
 image_file is missing
 USAGE: Inception.pl [-h] [long options ...]

    --debug_camel               Test using camel image
    --debug_loopback_interface  Test loopback through dummy server
    --debug_verbose             Verbose output
    --host=String               IP address of the server [Default:
                                127.0.0.1]
    --image_file=String         * Required: Path to image to be processed
    --model_name=String         Model to process image [Default: inception]
    --model_signature=String    API signature for model [Default:
                                predict_images]
    --port=String               Port number of the server [Default: 9000]
    -h                          show a compact help message

Some typical command line examples include:

 Inception.pl --image_file=anything --debug_camel --host=xx7.x11.xx3.x14 --port=9000
 Inception.pl --image_file=grace_hopper.jpg --host=xx7.x11.xx3.x14 --port=9000
 Inception.pl --image_file=anything --debug_camel --debug_loopback --port 2004 --host technologic

=head3 In the examples above, the following points are demonstrated:

If you don't have an image handy, --debug_camel will provide a sample image to send to the server.
The image file argument still needs to be provided to keep the command line parser happy.

If you don't have a server to talk to, but want to see if most everything else is working, use
the --debug_loopback_interface option. This will provide a sample response you can test the client with.
The module can use the same loopback interface for debugging your bespoke clients.

The --debug_verbose option will dump the data structures of the request and response so that
you can see what is going on.
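
As with the command line flags above, a bespoke client can enable the same hooks by
setting the corresponding attributes on the client object (a sketch; $client is an
already-constructed prediction client):

 $client->loopback(1);       # dummy loopback server instead of gRPC
 $client->camel(1);          # built-in camel test image
 $client->debug_verbose(1);  # dump request/response data structures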

=head3 The response from a live server to the camel image looks like this:

 Inception.pl --image_file=zzzzz --debug_camel --host=107.170.xx.xxx --port=9000    
 Sending image zzzzz to server at host:107.170.xx.xxx  port:9000
 .===========================================================================.
 | Class                                                     | Score         |
 |-----------------------------------------------------------+---------------|
 | Arabian camel, dromedary, Camelus dromedarius             | 11.968746     |
 | triumphal arch                                            |  4.0692205    |
 | panpipe, pandean pipe, syrinx                             |  3.4675434    |
 | thresher, thrasher, threshing machine                     |  3.4537551    |
 | sorrel                                                    |  3.1359406    |
 |===========================================================================|
 | Classification Results for zzzzz                                          |
 '==========================================================================='

=head2 SETTING UP A TEST SERVER 

You can set up a server by following the instructions on the TensorFlow Serving site:

 https://www.tensorflow.org/deploy/tfserve

lib/AI/PredictionClient/Docs/Overview.pod


 https://www.tomstall.com/content/create-a-globally-distributed-tensorflow-serving-cluster-with-nearly-no-pain/

=head1 ADDITIONAL INFO

This client is designed to make it fairly easy for a developer to see how the data is formed and received.
The TensorFlow interface is based on Protocol Buffers and gRPC,
and that implementation is built on a complex architecture of nested protofiles.

In this design I flattened the architecture out; where Perl's native data handling works best,
the modules use plain old Perl data structures rather than creating another layer of accessors.

The Tensor interface is used repeatedly, so this package includes a simplified Tensor class
to pack and unpack data to and from the models.

Most clients simply send and receive rank-one tensors - vectors.
Higher-rank tensors are sent and received flattened;
the size property is then used when importing/exporting the tensors into or out of a math package.
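
Unpacking goes the other way: the Tensor objects returned by outputs() expose their
flattened values through value(). A sketch (the 'scores' output name comes from the
Inception examples; reading shape() back as an accessor is an assumption):

 my $scores_tensor = $outputs_href->{'scores'};

 # Flattened values; pair them with the shape sizes to rebuild the
 # tensor inside a math package.
 my $flat_values_aref = $scores_tensor->value;
 my $shape_aref       = $scores_tensor->shape;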

The design takes advantage of the native JSON serialization capabilities built into the C++ Protocol Buffers library.
Serialization allows a much simpler, more robust interface to be created between the Perl environment
and the C++ environment.
One of the biggest advantages is for the developer who would like to quickly extend what this package does:
you can see how the data structures are built and manipulate them directly in Perl.
Of course, if you can be more forward-looking, building the proper roles and classes and contributing them would be great.
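
The inputs() helper seen earlier is a thin layer over this structure; writing into
the request directly is equivalent (a sketch, using the 'inputs' key that the
Predict role populates and the tensor_ds accessor it reads):

 $client->request_ds->{'inputs'}{'images'} = $tensor->tensor_ds;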

=head1 DEPENDENCIES

This module depends on gRPC. It uses the CPAN module Alien::Google::GRPC, which will
either use an existing gRPC installation on your system or, if none is found,
download and build a private copy.

The system dependencies needed to build this module are most often already installed.
If not, the following dependencies need to be installed.

lib/AI/PredictionClient/InceptionClient.pm

use 5.010;

use Data::Dumper;
use Moo;

use AI::PredictionClient::Classes::SimpleTensor;
use AI::PredictionClient::Testing::Camel;

extends 'AI::PredictionClient::Predict';

has inception_results => (is => 'rwp');

has camel => (is => 'rw',);

sub call_inception {
  my $self  = shift;
  my $image = shift;

  my $tensor = AI::PredictionClient::Classes::SimpleTensor->new();
  $tensor->shape([ { size => 1 } ]);
  $tensor->dtype("DT_STRING");

lib/AI/PredictionClient/InceptionClient.pm

    $tensor->value([ $camel_test->camel_jpeg_ref ]);
  } else {
    $tensor->value([$image]);
  }

  $self->inputs({ images => $tensor });

  if ($self->callPredict()) {

    my $predict_output_map_href = $self->outputs;
    my $inception_results_href;

    foreach my $key (keys %$predict_output_map_href) {
      $inception_results_href->{$key} = $predict_output_map_href->{$key}
        ->value;    # outputs() returns Tensor objects, so extract their values.
    }

    $self->_set_inception_results($inception_results_href);

    return 1;
  } else {
    return 0;
  }

}

1;

lib/AI/PredictionClient/Roles/PredictRole.pm

use strict;
use warnings;
package AI::PredictionClient::Roles::PredictRole;
$AI::PredictionClient::Roles::PredictRole::VERSION = '0.05';
# ABSTRACT: Implements the Predict service specific interface

use AI::PredictionClient::Classes::SimpleTensor;

use Moo::Role;

requires 'request_ds', 'reply_ds';

sub inputs {
  my ($self, $inputs_href) = @_;

  my $inputs_converted_href;

  foreach my $inkey (keys %$inputs_href) {
    $inputs_converted_href->{$inkey} = $inputs_href->{$inkey}->tensor_ds;
  }

  $self->request_ds->{"inputs"} = $inputs_converted_href;

  return;
}

sub callPredict {
  my $self = shift;

  my $request_ref = $self->serialize_request();

  my $result_ref = $self->perception_client_object->callPredict($request_ref);

  return $self->deserialize_reply($result_ref);
}

sub outputs {
  my $self = shift;

  my $predict_outputs_ref = $self->reply_ds->{outputs};

  my $tensorsout_href;

  foreach my $outkey (keys %$predict_outputs_ref) {

lib/AI/PredictionClient/Testing/PredictionLoopback.pm

    return $class->$orig(@_);
  }
};

has server_port => (is => 'rw',);

sub callPredict {
  my ($self, $request_data) = @_;

  my $test_return01
    = '{"outputs":{"classes":{"dtype":"DT_STRING","tensorShape":{"dim":[{"size":"1"},{"size":"6"}]},"stringVal":["bG9vcGJhY2sgdGVzdCBkYXRhCg==","bWlsaXRhcnkgdW5pZm9ybQ==","Ym93IHRpZSwgYm93LXRpZSwgYm93dGll","bW9ydGFyYm9hcmQ=","c3VpdCwgc3VpdCBvZiBjbG90...

  my $test_return02
    = '{"outputs":{"classes":{"dtype":"DT_STRING","tensorShape":{"dim":[{"size":"1"},{"size":"5"}]},"stringVal":["bG9hZCBpdAo=","Y2hlY2sgaXQK","cXVpY2sgLSByZXdyaXRlIGl0Cg==","dGVjaG5vbG9naWMK","dGVjaG5vbG9naWMK"]},"scores":{"dtype":"DT_FLOAT","tensor...

  my $return_ser = '{"Status": "OK", ';
  $return_ser .= '"StatusCode": "42", ';
  $return_ser .= '"StatusMessage": "", ';
  $return_ser .= '"DebugRequestLoopback": ' . $request_data . ', ';

  if ($self->server_port eq 'technologic:2004') {
    $return_ser .= '"Result": ' . $test_return02 . '}';
  } else {
    $return_ser .= '"Result": ' . $test_return01 . '}';


