AI-PredictionClient
"file" : "lib/AI/PredictionClient/Testing/Camel.pm",
"version" : "0.05"
},
"AI::PredictionClient::Testing::PredictionLoopback" : {
"file" : "lib/AI/PredictionClient/Testing/PredictionLoopback.pm",
"version" : "0.05"
}
},
"release_status" : "stable",
"resources" : {
"homepage" : "https://github.com/mountaintom/AI-PredictionClient",
"repository" : {
"type" : "git",
"url" : "https://github.com/mountaintom/AI-PredictionClient.git",
"web" : "https://github.com/mountaintom/AI-PredictionClient"
}
},
"version" : "0.05",
"x_serialization_backend" : "Cpanel::JSON::XS version 3.0233"
}
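The META.json above is ordinary JSON, serialized with Cpanel::JSON::XS as its own x_serialization_backend field notes. A minimal sketch of reading it back programmatically, assuming a local checkout of the distribution with META.json in the current directory:

  use strict;
  use warnings;
  use Cpanel::JSON::XS qw(decode_json);

  # Slurp META.json from a local checkout of the distribution.
  open my $fh, '<:raw', 'META.json' or die "META.json: $!";
  my $meta = decode_json(do { local $/; <$fh> });
  close $fh;

  # Walk to a couple of the fields shown above.
  print "version: $meta->{version}\n";
  print "repo:    $meta->{resources}{repository}{url}\n";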
  JSON: '0'
  MIME::Base64: '0'
  Moo: '0'
  Moo::Role: '0'
  MooX::Options: '0'
  Perl6::Form: '0'
  perl: '5.01'
  strict: '0'
  warnings: '0'
resources:
  homepage: https://github.com/mountaintom/AI-PredictionClient
  repository: https://github.com/mountaintom/AI-PredictionClient.git
version: '0.05'
x_serialization_backend: 'YAML::Tiny version 1.70'
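META.yml carries the same metadata in YAML form, written with YAML::Tiny per the backend note above. A comparable sketch for reading it back (again assuming a local checkout):

  use strict;
  use warnings;
  use YAML::Tiny;

  # YAML::Tiny->read returns an object holding one hashref per YAML document.
  my $yaml = YAML::Tiny->read('META.yml')
      or die "META.yml: " . YAML::Tiny->errstr;
  my $meta = $yaml->[0];

  print "version:  $meta->{version}\n";
  print "homepage: $meta->{resources}{homepage}\n";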
lib/AI/PredictionClient.pm
=head2 SETTING UP A TEST SERVER
You can set up a server by following the instructions on the TensorFlow Serving site:
https://www.tensorflow.org/deploy/tfserve
https://tensorflow.github.io/serving/setup
https://tensorflow.github.io/serving/docker
I have a prebuilt Docker container available here:
$ docker pull mountaintom/tensorflow-serving-inception-docker-swarm-demo
This container has the Inception model already loaded and ready to go.
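The documentation does not spell out the run command itself; one plausible invocation that starts the container and drops you into a shell inside it (the interactive flags and the 9000 port mapping are assumptions, chosen to match the server command below) would be:

  $ docker run -it -p 9000:9000 mountaintom/tensorflow-serving-inception-docker-swarm-demo /bin/bash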
Start this container and run the following commands within it to get the server running:
$ cd /serving
$ bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=inception --model_base_path=inception-export &> inception_log &
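Once the model server is listening on port 9000, a client from this distribution can be pointed at it. The sketch below is illustrative only: the class name follows this distribution's naming, but the constructor arguments and the call_inception/inception_results method names are assumptions, not confirmed API; consult the module's POD for the real interface.

  use strict;
  use warnings;

  # Hypothetical sketch: class and method names are assumptions
  # modeled on this distribution's layout, not confirmed API.
  use AI::PredictionClient::InceptionClient;

  # Placeholder image file; substitute any JPEG you have on hand.
  open my $fh, '<:raw', 'my_image.jpg' or die "my_image.jpg: $!";
  my $image_data = do { local $/; <$fh> };
  close $fh;

  my $client = AI::PredictionClient::InceptionClient->new(
      host => '127.0.0.1',           # the test server started above
      port => '9000',                # matches --port=9000 above
  );
  $client->model_name('inception');  # matches --model_name=inception above

  # call_inception/inception_results are assumed names.
  if ($client->call_inception($image_data)) {
      print $client->inception_results;
  }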
A longer article on setting up a server is here: