AI-Ollama-Client
ollama/ollama-curated.json
ollama/ollama-curated.yaml
openapi/petstore-expanded.yaml
README
README.mkdn
scripts/code-completion.pl
scripts/describe-image.pl
scripts/music-genre-json.pl
t/00-load.t
t/generate.request
t/testdata/objectdetection.jpg
testrules.yml
xt/99-changes.t
xt/99-compile.t
xt/99-manifest.t
xt/99-minimumversion.t
xt/99-pod.t
xt/99-synopsis.t
xt/99-test-prerequisites.t
xt/99-todo.t
xt/99-unix-text.t
README.mkdn
my $res = $client->createModel()->get;
Create a model from a Modelfile.
Returns an [AI::Ollama::CreateModelResponse](https://metacpan.org/pod/AI%3A%3AOllama%3A%3ACreateModelResponse).
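A hedged sketch of a call. The `name` and `modelfile` parameter names are assumptions drawn from the Ollama REST API, not confirmed by this excerpt; consult the generated POD for the exact signature:

```perl
use AI::Ollama::Client;

my $client = AI::Ollama::Client->new();

# Parameter names below are assumptions, not documented in this excerpt.
my $res = $client->createModel(
    name      => 'mario',                               # hypothetical model name
    modelfile => "FROM llama2\nSYSTEM You are Mario.",  # hypothetical Modelfile
)->get;
```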
## `deleteModel`
my $res = $client->deleteModel()->get;
Delete a model and its data.
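The endpoint takes a `name` option; a minimal sketch (the model name is only an example):

```perl
use AI::Ollama::Client;

my $client = AI::Ollama::Client->new();

# 'llava:latest' is just an example; pass the name of a model
# you actually have installed.
my $res = $client->deleteModel(
    name => 'llava:latest',
)->get;
```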
## `generateEmbedding`
my $res = $client->generateEmbedding()->get;
Generate embeddings from a model.
Returns an [AI::Ollama::GenerateEmbeddingResponse](https://metacpan.org/pod/AI%3A%3AOllama%3A%3AGenerateEmbeddingResponse).
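A sketch assuming the usual Ollama embedding parameters; `model` and `prompt` are assumptions not shown in this excerpt:

```perl
use AI::Ollama::Client;

my $client = AI::Ollama::Client->new();

my $res = $client->generateEmbedding(
    model  => 'llama2:latest',                     # assumed parameter name
    prompt => 'Here is an article about llamas.',  # assumed parameter name
)->get;
# $res should be an AI::Ollama::GenerateEmbeddingResponse
```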
## `generateCompletion`
lib/AI/Ollama/Client.pm
Create a model from a Modelfile.
Returns an L<< AI::Ollama::CreateModelResponse >>.
=cut
=head2 C<< deleteModel >>
my $res = $client->deleteModel()->get;
Delete a model and its data.
=cut
=head2 C<< generateEmbedding >>
my $res = $client->generateEmbedding()->get;
Generate embeddings from a model.
lib/AI/Ollama/Client/Impl.pm
);
=head1 PROPERTIES
=head2 B<< schema_file >>
The OpenAPI schema file we use for validation.
=head2 B<< schema >>
The OpenAPI schema data structure we use for validation. If not given,
we will create one using the C<schema_file> parameter.
=head2 B<< openapi >>
The L<OpenAPI::Modern> object we use for validation. If not given,
we will create one using the C<schema> parameter.
=head2 B<< ua >>
The L<Mojo::UserAgent> to use.
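Putting the documented properties together, explicit construction might look like the following sketch. The properties default from one another (C<openapi> from C<schema>, C<schema> from C<schema_file>), so normally none of this is needed; C<ollama/ollama-curated.yaml> is the schema shipped with the distribution:

```perl
use Mojo::UserAgent;
use AI::Ollama::Client::Impl;

# Both properties are documented above; this merely overrides the defaults.
my $client = AI::Ollama::Client::Impl->new(
    schema_file => 'ollama/ollama-curated.yaml',
    ua          => Mojo::UserAgent->new,
);
```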
lib/AI/Ollama/Client/Impl.pm
if( $res ) {
my $str = $res->get;
say $str;
}
Future::Mojo->done( defined $res );
} until => sub($done) { $done->get };
Generate the next message in a chat with a provided model.
This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.
=head3 Options
=over 4
=item C<< format >>
The format to return a response in. Currently the only accepted value is C<json>.
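A non-streaming sketch of a chat call using the documented C<format> option. The C<messages> structure is an assumption based on the Ollama chat API, and the model name is only an example:

```perl
my $res = $client->generateChatCompletion(
    model    => 'llama2:latest',    # example model name
    format   => 'json',             # the only accepted value
    messages => [                   # assumed shape, per the Ollama chat API
        { role => 'user', content => 'Why is the sky blue?' },
    ],
)->get;
```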
lib/AI/Ollama/Client/Impl.pm
}
=head2 C<< build_deleteModel_request >>
Build an HTTP request as L<Mojo::Request> object. For the parameters see below.
=head2 C<< deleteModel >>
my $res = $client->deleteModel()->get;
Delete a model and its data.
=head3 Options
=over 4
=item C<< name >>
The model name.
lib/AI/Ollama/Client/Impl.pm
if( $res ) {
my $str = $res->get;
say $str;
}
Future::Mojo->done( defined $res );
} until => sub($done) { $done->get };
Generate a response for a given prompt with a provided model.
The final response object will include statistics and additional data from the request.
=head3 Options
=over 4
=item C<< context >>
The context parameter returned from a previous request to C<generateCompletion>; it can be used to keep a short conversational memory.
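A sketch of carrying the context forward between two completions. The C<< ->context >> accessor on the response object is an assumption; the model name and prompts are only examples:

```perl
my $first = $client->generateCompletion(
    model  => 'llama2:latest',
    prompt => 'My name is Alice. Remember that.',
)->get;

# Feed the returned context back in to keep conversational memory.
my $followup = $client->generateCompletion(
    model   => 'llama2:latest',
    prompt  => 'What is my name?',
    context => $first->context,    # assumed accessor name
)->get;
```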
ollama/ollama-curated.json
{"openapi":"3.0.3","components":{"schemas":{"PushModelResponse":{"properties":{"total":{"type":"integer","description":"total size of the model","example":"2142590208"},"status":{"$ref":"#/components/schemas/PushModelStatus"},"digest":{"example":"sha...
ollama/ollama-curated.yaml
- name: Models
description: List and describe the various models available.
paths:
/generate:
post:
operationId: generateCompletion
tags:
- Completions
summary: Generate a response for a given prompt with a provided model.
description: The final response object will include statistics and additional data from the request.
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/GenerateCompletionRequest'
responses:
'200':
description: Successful operation.
content:
application/x-ndjson:
schema:
$ref: '#/components/schemas/GenerateCompletionResponse'
/chat:
post:
operationId: generateChatCompletion
tags:
- Chat
summary: Generate the next message in a chat with a provided model.
description: This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/GenerateChatCompletionRequest'
responses:
'200':
description: Successful operation.
content:
application/x-ndjson:
ollama/ollama-curated.yaml
schema:
$ref: '#/components/schemas/CopyModelRequest'
responses:
'200':
description: Successful operation.
/delete:
delete:
operationId: deleteModel
tags:
- Models
summary: Delete a model and its data.
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/DeleteModelRequest'
responses:
'200':
description: Successful operation.
/pull:
post:
scripts/describe-image.pl
warn $tx->code;
}
});
my $tx = $ol->pullModel(
name => 'llava:latest',
)->catch(sub {
use Data::Dumper; warn Dumper \@_;
})->get;
my @images = @ARGV ? @ARGV : ('t/testdata/objectdetection.jpg');
for my $image (@images) {
my $response = $ol->generateCompletion(
model => 'llava:latest',
prompt => 'You are tagging images. Please list all the objects in this image as tags. Also list the location where it was taken.',
images => [
{ filename => $image },
],
);
my $responses = $response->get;