AI-PredictionClient


bin/Inception.pl

option model_name => (
  is       => 'ro',
  required => 0,
  format   => 's',
  default  => $default_model,
  doc      => "Model to process image [Default: $default_model]"
);
option model_signature => (
  is       => 'ro',
  required => 0,
  format   => 's',
  default  => $default_model_signature,
  doc      => "API signature for model [Default: $default_model_signature]"
);
option debug_verbose => (is => 'ro', doc => 'Verbose output');
option debug_loopback_interface => (
  is       => 'ro',
  required => 0,
  doc      => "Test loopback through dummy server"
);
option debug_camel => (
  is       => 'ro',
  required => 0,
  doc      => "Test using camel image"
);

sub run {
  my ($self) = @_;

  my $image_ref = $self->read_image($self->image_file);

  my $client = AI::PredictionClient::InceptionClient->new(
    host => $self->host,
    port => $self->port
  );

  $client->model_name($self->model_name);
  $client->model_signature($self->model_signature);
  $client->debug_verbose($self->debug_verbose);
  $client->loopback($self->debug_loopback_interface);
  $client->camel($self->debug_camel);

  printf("Sending image %s to server at host:%s  port:%s\n",
    $self->image_file, $self->host, $self->port);

  if ($client->call_inception($image_ref)) {

    my $results_ref         = $client->inception_results;
    my $classifications_ref = $results_ref->{'classes'};
    my $scores_ref          = $results_ref->{'scores'};
    my $comments            = 'Classification Results for ' . $self->image_file;

    # ... (rendering of the classification results table elided in this listing) ...

  } else {
    printf("Failed. Status: %s, Status Code: %s, Status Message: %s \n",
      $client->status, $client->status_code, $client->status_message);
    return 1;
  }
  return 0;
}

sub read_image {
  my $self = shift;

  return \'' if $self->debug_camel;

  my $file_name     = shift;
  my $max_file_size = 16 * 1000 * 1000;  # A large but safe maximum

  open(my $fh, '<:raw', $file_name)
    or die "Could not open file: $file_name";

  read($fh, my $buffer, $max_file_size);

  close $fh;

  return \$buffer;
}
lib/AI/PredictionClient.pm

This client implements a command line interface to the 
InceptionClient module 'AI::PredictionClient::InceptionClient', and provides 
a working example of using this module for building your own clients.
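
As a sketch of what a bespoke client might look like (the host and image file
name here are placeholders), the essential calls from the run() method above
reduce to:

 use AI::PredictionClient::InceptionClient;

 my $client = AI::PredictionClient::InceptionClient->new(
   host => '127.0.0.1',
   port => '9000',
 );

 $client->model_name('inception');
 $client->model_signature('predict_images');

 # Slurp the image in raw mode and pass a reference to the bytes.
 open my $fh, '<:raw', 'grace_hopper.jpg' or die "Could not open image: $!";
 my $image = do { local $/; <$fh> };
 close $fh;

 if ($client->call_inception(\$image)) {
   my $results_ref = $client->inception_results;   # {classes => [...], scores => [...]}
 }
 else {
   warn $client->status_message;
 }
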

The command line options for the Inception client can be displayed by running Inception.pl with no arguments.

 $ Inception.pl 
 image_file is missing
 USAGE: Inception.pl [-h] [long options ...]

    --debug_camel               Test using camel image
    --debug_loopback_interface  Test loopback through dummy server
    --debug_verbose             Verbose output
    --host=String               IP address of the server [Default:
                                127.0.0.1]
    --image_file=String         * Required: Path to image to be processed
    --model_name=String         Model to process image [Default: inception]
    --model_signature=String    API signature for model [Default:
                                predict_images]
    --port=String               Port number of the server [Default: 9000]
    -h                          show a compact help message

Some typical command line examples include:

 Inception.pl --image_file=anything --debug_camel --host=xx7.x11.xx3.x14 --port=9000
 Inception.pl --image_file=grace_hopper.jpg --host=xx7.x11.xx3.x14 --port=9000
 Inception.pl --image_file=anything --debug_camel --debug_loopback_interface --port 2004 --host technologic

=head3 In the examples above, the following points are demonstrated:

If you don't have an image handy, --debug_camel will provide a sample image to send to the server.
The image file argument still needs to be provided to satisfy the command line parser.
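
In a bespoke client the same built-in test image is selected through the camel
attribute; the image argument can then be a reference to an empty string, as
this sketch shows:

 $client->camel(1);
 $client->call_inception(\'');
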

If you don't have a server to talk to, but want to check that most everything else is working, use
the --debug_loopback_interface option. It returns a sample response you can test the client with.
The module provides the same loopback interface for debugging your bespoke clients.
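
A bespoke client enables the loopback through the same attribute the command
line option sets; a minimal sketch:

 # Exercise the full request/serialize/deserialize path without a running
 # TensorFlow Serving instance; the dummy server returns a canned reply.
 $client->loopback(1);
 $client->call_inception(\$image);
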

The --debug_verbose option will dump the request and response data structures so you
can see exactly what is being sent and received.

=head3 The response from a live server to the camel image looks like this:

 Inception.pl --image_file=zzzzz --debug_camel --host=107.170.xx.xxx --port=9000    
 Sending image zzzzz to server at host:107.170.xx.xxx  port:9000
 .===========================================================================.
 | Class                                                     | Score         |
 |-----------------------------------------------------------+---------------|
 | Arabian camel, dromedary, Camelus dromedarius             | 11.968746     |
 | triumphal arch                                            |  4.0692205    |
 | panpipe, pandean pipe, syrinx                             |  3.4675434    |
 | thresher, thrasher, threshing machine                     |  3.4537551    |
 | sorrel                                                    |  3.1359406    |
 |===========================================================================|

lib/AI/PredictionClient/Roles/PredictionRole.pm


# Server connection endpoint.
has host => (is => 'ro');

has port => (is => 'ro');

# When true, requests are routed to the built-in dummy server for testing.
has loopback => (
  is      => 'rw',
  default => 0,
);

# When true, request and reply data structures are dumped for inspection.
has debug_verbose => (
  is      => 'rw',
  default => 0,
);

has perception_client_object => (
  is      => 'lazy',
  builder => 1,
);

sub _build_perception_client_object {
  # ... (builder body elided in this listing) ...
}

# Status fields set from the server reply by deserialize_reply() below.
has status => (is => 'rwp');

has status_code => (is => 'rwp');

has status_message => (is => 'rwp');

sub serialize_request {
  my $self = shift;

  printf("Debug - Request: %s \n", Dumper(\$self->request_ds))
    if $self->debug_verbose;

  my $json = JSON->new;

  my $request_json = $json->encode($self->request_ds);
  printf("Debug - JSON Request: %s \n", Dumper(\$request_json))
    if $self->debug_verbose;

  return $request_json;
}

sub deserialize_reply {
  my $self              = shift;
  my $serialized_return = shift;

  printf("Debug - JSON Response: %s \n", Dumper(\$serialized_return))
    if $self->debug_verbose;

  my $json = JSON->new;

  my $returned_ds = $json->decode(
    ref($serialized_return) ? $$serialized_return : $serialized_return);
  $self->_set_status($returned_ds->{'Status'});
  $self->_set_status_code($returned_ds->{'StatusCode'});

  my $message_base = $returned_ds->{'StatusMessage'};
  my $message
    = ref($message_base)
    ? decode_base64($message_base->{'base64'}->[0])
    : $message_base;
  $self->_set_status_message($message ? $message : "");

  $self->_set_reply_ds($returned_ds->{'Result'});

  printf("Debug - Response: %s \n", Dumper(\$returned_ds))
    if $self->debug_verbose;

  if ($self->status =~ /^OK/i) {
    return 1;
  }
  return 0;
}
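
# For reference, the reply handled above is JSON of the form (a sketch; the
# field values are illustrative only):
#
#   {"Status":"OK","StatusCode":"200","StatusMessage":"","Result":{...}}
#
# so a canned reply can exercise deserialize_reply() directly:
#
#   my $canned = '{"Status":"OK","StatusCode":"200","StatusMessage":"","Result":{}}';
#   $self->deserialize_reply($canned) or warn $self->status_message;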

1;

__END__


