AI-Ollama-Client


lib/AI/Ollama/Client/Impl.pm

  use Future::Utils 'repeat';
  # iterate over the streamed responses as they arrive
  my $responses = $client->generateChatCompletion();
  repeat {
      my( $res ) = $responses->shift;
      if( $res ) {
          my $str = $res->get;
          say $str;
      }

      Future::Mojo->done( defined $res );
  } until => sub($done) { $done->get };

Generate the next message in a chat with a provided model.

This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.


=head3 Options

=over 4

=item C<< format >>

The format to return a response in. Currently the only accepted value is json.
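
For example, JSON mode might be requested like the following sketch. The
option names mirror the request schema; C<$client> is assumed to be an
already-constructed client, and the exact call signature is an assumption
based on the example above:

    my $responses = $client->generateChatCompletion(
        model    => 'llama2:7b',
        format   => 'json',   # ask the model to emit well-formed JSON
        messages => [
            { role => 'user', content => 'Reply with a JSON array of three colors.' },
        ],
    );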

lib/AI/Ollama/GenerateChatCompletionResponse.pm


=cut

has 'load_duration' => (
    is       => 'ro',
    isa      => Int,
);

=head2 C<< message >>

A message in the chat endpoint

=cut

has 'message' => (
    is       => 'ro',
    isa      => HashRef,
);
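
# Since `message` is a plain hash reference whose keys follow the Message
# schema (role, content, images), a received response can be inspected with
# a sketch like this, $res being a decoded GenerateChatCompletionResponse:
#
#   if( my $msg = $res->message ) {
#       printf "[%s] %s\n", $msg->{role}, $msg->{content};
#   }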

=head2 C<< model >>

ollama/ollama-curated.json

{"openapi":"3.0.3","components":{"schemas":{"PushModelResponse":{"properties":{"total":{"type":"integer","description":"total size of the model","example":"2142590208"},"status":{"$ref":"#/components/schemas/PushModelStatus"},"digest":{"example":"sha...

ollama/ollama-curated.yaml

          content:
            application/x-ndjson:
              schema:
                $ref: '#/components/schemas/GenerateCompletionResponse'
  /chat:
    post:
      operationId: generateChatCompletion
      tags:
        - Chat
      summary: Generate the next message in a chat with a provided model.
      description: This is a streaming endpoint, so there will be a series of responses. The final response object will include statistics and additional data from the request.
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/GenerateChatCompletionRequest'
      responses:
        '200':
          description: Successful operation.
          content:
            application/x-ndjson:
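              # Each ndjson line is one JSON-encoded GenerateChatCompletionResponse;
              # an illustrative (not verbatim) stream might look like:
              #
              #   {"model":"llama2:7b","created_at":"2023-08-04T19:22:45.499127Z","message":{"role":"assistant","content":"The"},"done":false}
              #   {"model":"llama2:7b","created_at":"2023-08-04T19:22:45.564201Z","message":{"role":"assistant","content":" sky"},"done":false}
              #   {"model":"llama2:7b","created_at":"2023-08-04T19:22:46.499127Z","message":{"role":"assistant","content":""},"done":true,"eval_count":113,"eval_duration":1325948000}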

ollama/ollama-curated.yaml

              type: string
              format: binary
      responses:
        '201':
          description: Blob was successfully created

components:
  schemas:
    GenerateCompletionRequest:
      type: object
      description: Request class for the generate endpoint.
      properties:
        model:
          type: string
          description: &model_name |
            The model name.

            Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.
          example: llama2:7b
        prompt:
          type: string
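        # The `model:tag` convention described above can be applied with a
        # small Perl sketch (hypothetical helper; registry-prefixed names
        # are ignored for brevity):
        #
        #   my( $name, $tag ) = split /:/, $model, 2;
        #   $tag //= 'latest';   # the tag defaults to `latest` when omitted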

ollama/ollama-curated.yaml

      description: |
        The format to return a response in. Currently the only accepted value is json.

        Enable JSON mode by setting the format parameter to json. This will structure the response as valid JSON.

        Note: it's important to instruct the model to use JSON in the prompt. Otherwise, the model may generate large amounts of whitespace.
      enum:
        - json
    GenerateCompletionResponse:
      type: object
      description: The response class for the generate endpoint.
      properties:
        model:
          type: string
          description: *model_name
          example: llama2:7b
        created_at:
          type: string
          format: date-time
          description: Date and time when the response was created.
          example: 2023-08-04T19:22:45.499127Z

ollama/ollama-curated.yaml

        eval_count:
          type: integer
          description: Number of tokens in the response.
          example: 113
        eval_duration:
          type: integer
          description: Time in nanoseconds spent generating the response.
          example: 1325948000
    GenerateChatCompletionRequest:
      type: object
      description: Request class for the chat endpoint.
      properties:
        model:
          type: string
          description: *model_name
          example: llama2:7b
        messages:
          type: array
          description: The messages of the chat; these can be used to keep a chat memory.
          items:
            $ref: '#/components/schemas/Message'
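        # The eval_count / eval_duration statistics above give a throughput
        # figure; eval_duration is in nanoseconds. Perl sketch, with $res a
        # decoded final response hash:
        #
        #   my $tok_per_s = $res->{eval_count} / ( $res->{eval_duration} / 1e9 );
        #   # e.g. 113 / ( 1325948000 / 1e9 ) ≈ 85.2 tokens/second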

ollama/ollama-curated.yaml

          description: *stream
          default: false
        keep_alive:
          type: integer
          description: *keep_alive
      required:
        - model
        - messages
    GenerateChatCompletionResponse:
      type: object
      description: The response class for the chat endpoint.
      properties:
        message:
          $ref: '#/components/schemas/Message'
        model:
          type: string
          description: *model_name
          example: llama2:7b
        created_at:
          type: string
          format: date-time
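        # Tying the request and response schemas together, a hypothetical
        # non-streaming round trip in Perl (stream defaults to false above;
        # the Future-based call shape is an assumption from the client
        # example at the top of this page):
        #
        #   my( $res ) = $client->generateChatCompletion(
        #       model    => 'llama2:7b',
        #       messages => [
        #           { role => 'user', content => 'Why is the sky blue?' },
        #       ],
        #   )->get;
        #   say $res->message->{content};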

ollama/ollama-curated.yaml

        eval_count:
          type: integer
          description: Number of tokens in the response.
          example: 113
        eval_duration:
          type: integer
          description: Time in nanoseconds spent generating the response.
          example: 1325948000
    Message:
      type: object
      description: A message in the chat endpoint
      properties:
        role:
          type: string
          description: The role of the message
          enum: [ "system", "user", "assistant" ]
        content:
          type: string
          description: The content of the message
          example: Why is the sky blue?
        images:
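        # For multimodal models, the (truncated) `images` property carries
        # Base64-encoded image data; a message hash mirroring this schema
        # could be built like this Perl sketch (using MIME::Base64 and
        # File::Slurper):
        #
        #   my $message = {
        #       role    => 'user',
        #       content => 'What is in this picture?',
        #       images  => [ encode_base64( read_binary('photo.png'), '' ) ],
        #   };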

ollama/ollama-curated.yaml

        status:
          $ref: '#/components/schemas/CreateModelStatus'
    CreateModelStatus:
      type: string
      description: Status while creating the model
      enum:
        - creating system layer
        - parsing modelfile
        - success
    ModelsResponse:
      description: Response class for the list models endpoint.
      type: object
      properties:
        models:
          type: array
          description: List of models available locally.
          items:
            $ref: '#/components/schemas/Model'
    Model:
      type: object
      description: A model available locally.
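      # A hypothetical listing of the local models through the generated
      # client (method name from the list-models operation; accessor names
      # are assumptions from this schema):
      #
      #   my( $models ) = $client->listModels()->get;
      #   for my $model ( @{ $models->models } ) {
      #       printf "%-30s %12d bytes\n", $model->name, $model->size;
      #   }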

ollama/ollama-curated.yaml

        modified_at:
          type: string
          format: date-time
          description: Model modification date.
          example: 2023-08-02T17:02:23.713454393-07:00
        size:
          type: integer
          description: Size of the model on disk.
          example: 7323310500
    ModelInfoRequest:
      description: Request class for the show model info endpoint.
      type: object
      properties:
        name:
          type: string
          description: *model_name
          example: llama2:7b
      required:
        - name
    ModelInfo:
      description: Details about a model including modelfile, template, parameters, license, and system prompt.
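      # A matching sketch for the show-model-info request (method and
      # accessor names are assumptions derived from this schema):
      #
      #   my( $info ) = $client->showModelInfo( name => 'llama2:7b' )->get;
      #   say $info->modelfile;   # the Modelfile used to build the model
      #   say $info->template;    # the prompt template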


