AI-PredictionClient

lib/AI/PredictionClient.pm

    --model_name=String         Model to process image [Default: inception]
    --model_signature=String    API signature for model [Default:
                                predict_images]
    --port=String               Port number of the server [Default: 9000]
    -h                          show a compact help message

Some typical command line examples include:

 Inception.pl --image_file=anything --debug_camel --host=xx7.x11.xx3.x14 --port=9000
 Inception.pl --image_file=grace_hopper.jpg --host=xx7.x11.xx3.x14 --port=9000
 Inception.pl --image_file=anything --debug_camel --debug_loopback --port 2004 --host technologic

=head3 In the examples above, the following points are demonstrated:

If you don't have an image handy, --debug_camel will provide a sample image to send to the server.
The image file argument still needs to be provided to keep the command-line parser happy.

If you don't have a server to talk to but want to check that most everything else is working, use
the --debug_loopback_interface option. This returns a canned sample response, shaped like the envelope
sketched below, that you can test the client with.
The module can use the same loopback interface for debugging your bespoke clients.
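
For orientation, the loopback interface returns a JSON envelope shaped roughly like this (a sketch reconstructed from the PredictionLoopback.pm excerpt later in this listing; the values shown are illustrative):

 {
   "Status": "OK",
   "StatusCode": "42",
   "StatusMessage": "",
   "DebugRequestLoopback": { ...the original request, echoed back... },
   "Result": { "outputs": { "classes": { ... }, "scores": { ... } } }
 }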

I have a prebuilt Docker container available here:

 docker pull mountaintom/tensorflow-serving-inception-docker-swarm-demo

This container has the Inception model already loaded and ready to go.
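
One way to start it interactively is sketched below; the container name, the published port, and the shell are assumptions here, so adjust them for your environment:

 $ docker run -it --name tf-serving-demo -p 9000:9000 mountaintom/tensorflow-serving-inception-docker-swarm-demo /bin/bash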

With the container started, run the following commands within it to get the server running:

 $ cd /serving
 $ bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=inception --model_base_path=inception-export &> inception_log &
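
One quick way to confirm the model server came up is to look at the log file created by the command above:

 $ tail inception_log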

A longer article on setting up a server is here:

 https://www.tomstall.com/content/create-a-globally-distributed-tensorflow-serving-cluster-with-nearly-no-pain/

=head1 ADDITIONAL INFO

This client is designed to make it fairly easy for a developer to see how the data is formed and received. 
The TensorFlow interface is based on Protocol Buffers and gRPC, and that implementation is built on a complex architecture of nested protofiles.

lib/AI/PredictionClient/Testing/PredictionLoopback.pm

    = '{"outputs":{"classes":{"dtype":"DT_STRING","tensorShape":{"dim":[{"size":"1"},{"size":"6"}]},"stringVal":["bG9vcGJhY2sgdGVzdCBkYXRhCg==","bWlsaXRhcnkgdW5pZm9ybQ==","Ym93IHRpZSwgYm93LXRpZSwgYm93dGll","bW9ydGFyYm9hcmQ=","c3VpdCwgc3VpdCBvZiBjbG90...

  my $test_return02
    = '{"outputs":{"classes":{"dtype":"DT_STRING","tensorShape":{"dim":[{"size":"1"},{"size":"5"}]},"stringVal":["bG9hZCBpdAo=","Y2hlY2sgaXQK","cXVpY2sgLSByZXdyaXRlIGl0Cg==","dGVjaG5vbG9naWMK","dGVjaG5vbG9naWMK"]},"scores":{"dtype":"DT_FLOAT","tensor...

  my $return_ser = '{"Status": "OK", ';
  $return_ser .= '"StatusCode": "42", ';
  $return_ser .= '"StatusMessage": "", ';
  $return_ser .= '"DebugRequestLoopback": ' . $request_data . ', ';

  # A server_port of 'technologic:2004' selects the alternate canned result
  # (matching the --host technologic --port 2004 example in the documentation).
  if ($self->server_port eq 'technologic:2004') {
    $return_ser .= '"Result": ' . $test_return02 . '}';
  } else {
    $return_ser .= '"Result": ' . $test_return01 . '}';
  }

  return $return_ser;
}

1;


