AI-PredictionClient

lib/AI/PredictionClient.pm

 | triumphal arch                                            |  4.0692205    |
 | panpipe, pandean pipe, syrinx                             |  3.4675434    |
 | thresher, thrasher, threshing machine                     |  3.4537551    |
 | sorrel                                                    |  3.1359406    |
 |===========================================================================|
 | Classification Results for zzzzz                                          |
 '==========================================================================='

=head2 SETTING UP A TEST SERVER 

You can set up a server by following the instructions on the TensorFlow Serving site:

 https://www.tensorflow.org/deploy/tfserve
 https://tensorflow.github.io/serving/setup
 https://tensorflow.github.io/serving/docker

I have a prebuilt Docker container available here:

 docker pull mountaintom/tensorflow-serving-inception-docker-swarm-demo

This container has the Inception model already loaded and ready to go.

Start this container and run the following commands within it to get the server running:

 $ cd /serving
 $ bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=inception --model_base_path=inception-export &> inception_log &

A longer article on setting up a server is here:

 https://www.tomstall.com/content/create-a-globally-distributed-tensorflow-serving-cluster-with-nearly-no-pain/
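
Once the server is up, you can exercise it with this client. The sketch below is
only a rough illustration: the constructor arguments and the
call_inception/inception_results method names are assumptions based on the client
code excerpted further down, so check the module's SYNOPSIS for the exact
interface.

 # Rough sketch of querying the test server from Perl.
 # Names flagged as assumptions below are not confirmed by this document.
 use strict;
 use warnings;
 use AI::PredictionClient::InceptionClient;

 # The host, port, and model_name attributes are assumptions.
 my $client = AI::PredictionClient::InceptionClient->new(
   host => '127.0.0.1',
   port => '9000',
 );
 $client->model_name('inception');

 # Slurp the raw bytes of a JPEG to send to the Inception model.
 open my $fh, '<:raw', 'my_image.jpg' or die "open failed: $!";
 my $image = do { local $/; <$fh> };
 close $fh;

 # call_inception() is assumed to wrap callPredict() as shown below.
 if ($client->call_inception($image)) {
   my $results_href = $client->inception_results;
   print "$_\n" for keys %$results_href;
 } else {
   print "Error: ", $client->status_message, "\n";
 }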

=head1 ADDITIONAL INFO

This client is designed to make it fairly easy for a developer to see how the data is formed and received.
The TensorFlow interface is based on Protocol Buffers and gRPC,
and that implementation is built on a complex architecture of nested .proto files.

In this design I flattened the architecture out and where the native data handling of Perl is best, 

lib/AI/PredictionClient/InceptionClient.pm

  if ($self->callPredict()) {
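    # callPredict() issues the gRPC Predict request; on success, outputs()
    # holds the returned map of output tensor names to Tensor objects.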

    my $predict_output_map_href = $self->outputs;
    my $inception_results_href;

    foreach my $key (keys %$predict_output_map_href) {
      # Each output entry is a Tensor object; keep only its value.
      $inception_results_href->{$key}
        = $predict_output_map_href->{$key}->value;
    }

    $self->_set_inception_results($inception_results_href);

    return 1;
  } else {
    return 0;
  }

}

1;

lib/AI/PredictionClient/Roles/PredictionRole.pm

  my $self              = shift;
  my $serialized_return = shift;

  printf("Debug - JSON Response: %s \n", Dumper(\$serialized_return))
    if $self->debug_verbose;

  my $json = JSON->new;

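  # Accept the serialized reply as either a plain scalar or a scalar reference.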
  my $returned_ds = $json->decode(
    ref($serialized_return) ? $$serialized_return : $serialized_return);
  $self->_set_status($returned_ds->{'Status'});
  $self->_set_status_code($returned_ds->{'StatusCode'});

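  # The status message may come back base64-encoded inside a hash; decode it when it does.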
  my $message_base = $returned_ds->{'StatusMessage'};
  my $message
    = ref($message_base)
    ? decode_base64($message_base->{'base64'}->[0])
    : $message_base;
  $self->_set_status_message($message ? $message : "");

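  # Keep the decoded 'Result' payload as the reply data structure for callers.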
  $self->_set_reply_ds($returned_ds->{'Result'});

  printf("Debug - Response: %s \n", Dumper(\$returned_ds))
    if $self->debug_verbose;

  if ($self->status =~ /^OK/i) {
    return 1;
  }
  return 0;
}


