use strict;
use warnings;
package AI::PredictionClient;
$AI::PredictionClient::VERSION = '0.05';
use AI::PredictionClient::Predict;
use AI::PredictionClient::InceptionClient;
# ABSTRACT: A Perl Prediction client for Google TensorFlow Serving.
1;
__END__
=pod
=encoding UTF-8
=head1 NAME
AI::PredictionClient - A Perl Prediction client for Google TensorFlow Serving.
=head1 VERSION
version 0.05
=head1 DESCRIPTION
This is a package for creating Perl clients for TensorFlow Serving model servers.
TensorFlow Serving is the system that allows TensorFlow neural network AI models
to be moved from the research environment to your production environment.
Currently this package implements a client for the Predict service and a model-specific Inception client.
The Predict service client, 'Predict.pm', covers the most versatile of the TensorFlow Serving prediction services; most model-specific clients can be built on top of it.
The model-specific client 'InceptionClient.pm' is also implemented. It supports the well-known Inception image-classification model.
Additionally, a command-line Inception client, 'Inception.pl', is included
as an example of a complete client built from this package.
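To give a feel for the module interface, here is a minimal sketch of a client built on 'InceptionClient.pm'. The constructor arguments mirror the command-line options of Inception.pl shown below, but the method names used here ('new', 'model_name', 'model_signature', 'call_inception', 'inception_results') are illustrative assumptions rather than the documented API; consult 'AI::PredictionClient::InceptionClient' for the actual interface.

  use strict;
  use warnings;
  use AI::PredictionClient::InceptionClient;

  # Sketch only: method names below are assumptions for illustration.
  my $client = AI::PredictionClient::InceptionClient->new(
      host => '127.0.0.1',   # mirrors the --host option
      port => '9000',        # mirrors the --port option
  );

  $client->model_name('inception');            # mirrors --model_name
  $client->model_signature('predict_images');  # mirrors --model_signature

  # Send an image to the server and print the classifications.
  if ($client->call_inception('grace_hopper.jpg')) {
      print $client->inception_results, "\n";
  }
  else {
      die "Prediction request failed\n";
  }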
=head2 Using the example client
The example client is installed in your local bin directory and
allows you to send an image to an Inception model server and display
the classifications of what the Inception neural network model "thought" it saw.
This client implements a command-line interface to the
InceptionClient module 'AI::PredictionClient::InceptionClient' and provides
a working example of using this module to build your own clients.
The command-line options for the Inception client can be displayed by running Inception.pl with no arguments.
  $ Inception.pl
  image_file is missing
  USAGE: Inception.pl [-h] [long options ...]

      --debug_camel               Test using camel image
      --debug_loopback_interface  Test loopback through dummy server
      --debug_verbose             Verbose output
      --host=String               IP address of the server [Default: 127.0.0.1]
      --image_file=String         * Required: Path to image to be processed
      --model_name=String         Model to process image [Default: inception]
      --model_signature=String    API signature for model [Default: predict_images]
      --port=String               Port number of the server [Default: 9000]
      -h                          show a compact help message
Some typical command-line examples include:

  Inception.pl --image_file=anything --debug_camel --host=xx7.x11.xx3.x14 --port=9000
  Inception.pl --image_file=grace_hopper.jpg --host=xx7.x11.xx3.x14 --port=9000
  Inception.pl --image_file=anything --debug_camel --debug_loopback --port 2004 --host technologic
=head3 In the examples above, the following points are demonstrated:
If you don't have an image handy, the --debug_camel option substitutes a built-in camel test image. A value must still be supplied for the required --image_file argument, but in that case it can be anything.
If you don't have a server to talk to yet, the --debug_loopback_interface option returns a response from a dummy server, so the rest of the client can be exercised without a running model server.
The --host and --port options point the client at your model server; the partially obscured addresses in the first two examples are placeholders for your server's address.
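The same debug paths can also be exercised from Perl when building your own client. The sketch below parallels the command-line flags above; the 'debug_camel' and 'debug_loopback_interface' attribute names, like the other method names, are hypothetical and chosen only to mirror those flags, so verify them against the module documentation.

  use strict;
  use warnings;
  use AI::PredictionClient::InceptionClient;

  # Sketch only: attribute and method names are assumed, not verified.
  my $client = AI::PredictionClient::InceptionClient->new(
      host => '127.0.0.1',
      port => '9000',
  );

  $client->debug_camel(1);              # like --debug_camel
  $client->debug_loopback_interface(1); # like --debug_loopback_interface

  # With loopback enabled, no real server should be contacted.
  if ($client->call_inception('anything')) {
      print $client->inception_results, "\n";
  }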