Langertha

share/ollama.yaml  view on Meta::CPAN

    description: Server information

paths:
  /api/generate:
    post:
      operationId: generateResponse
      tags:
        - generate
      description: |
        Generate a response for a given prompt with a provided model. This is 
        a streaming endpoint, so there will be a series of responses. The 
        final response object will include statistics and additional data from 
        the request.
      summary: Generate a response for a given prompt with a provided model.
      requestBody:
        required: true
        description: Request to generate a response
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/GenerateRequest'
      responses:
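The streaming contract described above can be exercised with a few lines of client code. This is an illustrative sketch, not part of the distribution: the model name and chunk values are made up, and the chunk fields (`response`, `done`, plus final statistics such as `total_duration`) follow the shape this spec describes for the streamed GenerateResponse objects.

```python
import json

def collect_generate_stream(lines):
    """Accumulate a streamed /api/generate response.

    Each line is one JSON object; partial chunks carry a "response"
    fragment, and the final chunk ("done": true) carries statistics.
    """
    text = []
    stats = None
    for line in lines:
        chunk = json.loads(line)
        if chunk.get("done"):
            stats = chunk  # final object with statistics
        text.append(chunk.get("response", ""))
    return "".join(text), stats

# Simulated stream (illustrative values):
stream = [
    '{"model": "llama3", "response": "Hello", "done": false}',
    '{"model": "llama3", "response": " world", "done": false}',
    '{"model": "llama3", "response": "", "done": true, "total_duration": 123}',
]
text, stats = collect_generate_stream(stream)
print(text)  # Hello world
```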

                $ref: '#/components/schemas/GenerateResponse'

  /api/chat:
    post:
      operationId: generateChat
      tags:
        - chat
        - generate
      description: |
        Generate the next message in a chat with a provided model. This is a 
        streaming endpoint, so there will be a series of responses. Streaming 
        can be disabled using "stream": false. The final response object will 
        include statistics and additional data from the request.
      summary: Generate the next message in a chat with a provided model.
      requestBody:
        required: true
        description: Request to generate a response in a chat
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/ChatRequest'
      responses:
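Per the description above, disabling streaming is a matter of sending `"stream": false` in the request body. A minimal sketch of building such a ChatRequest payload (model name and message content are illustrative):

```python
import json

# ChatRequest body for POST /api/chat; "stream": False requests a
# single response object instead of a stream (illustrative values).
chat_request = {
    "model": "llama3",
    "messages": [
        {"role": "user", "content": "Why is the sky blue?"},
    ],
    "stream": False,
}

body = json.dumps(chat_request)
print(json.loads(body)["stream"])  # False
```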

        model:
          type: string
          description: The model name
        messages:
          type: array
          items:
            $ref: '#/components/schemas/Message'
          description: The messages of the chat; these can be used to keep a chat memory
        stream:
          type: boolean
          description: Enable streaming of the returned response
        format:
          type: string
          description: Format to return the response in (e.g. "json")
        keep_alive:
          $ref: '#/components/schemas/Duration'
        options:
          $ref: '#/components/schemas/Options'

    ChatResponse:
      type: object

        stream:
          type: boolean
          description: |
            If false, the response will be returned as a single response object
            rather than a stream of objects.
      required: 
        - model
        
    ProgressResponse:
      type: object
      description: The response returned from various streaming endpoints
      properties:
        status:
          type: string
          description: The status of the request
        digest:
          type: string
          description: The SHA256 digest of the blob
        total:
          type: integer
          description: The total size of the task
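The ChatRequest schema above notes that `messages` can carry chat memory: each request simply replays the prior turns. A hedged sketch of that pattern (the helper name and the assistant's reply text are invented for illustration):

```python
def build_chat_request(model, history, user_input):
    """Build a ChatRequest whose messages carry the whole conversation."""
    messages = history + [{"role": "user", "content": user_input}]
    return {"model": model, "messages": messages, "stream": False}

history = []
req = build_chat_request("llama3", history, "Hi there")
# After receiving the assistant's reply, append both turns so the
# next request carries the full conversation as chat memory:
history = req["messages"] + [{"role": "assistant", "content": "Hello!"}]
req2 = build_chat_request("llama3", history, "What did I just say?")
print(len(req2["messages"]))  # 3
```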

share/openai.yaml  view on Meta::CPAN

        beta: true
        example: "{\n  \"id\": \"asst_abc123\",\n  \"object\": \"assistant\",\n  \"\
          created_at\": 1698984975,\n  \"name\": \"Math Tutor\",\n  \"description\"\
          : null,\n  \"model\": \"gpt-4-turbo\",\n  \"instructions\": \"You are a\
          \ personal math tutor. When asked a question, write and run Python code\
          \ to answer the question.\",\n  \"tools\": [\n    {\n      \"type\": \"\
          code_interpreter\"\n    }\n  ],\n  \"metadata\": {},\n  \"top_p\": 1.0,\n\
          \  \"temperature\": 1.0,\n  \"response_format\": \"auto\"\n}\n"
        name: The assistant object
    AssistantStreamEvent:
      description: 'Represents an event emitted when streaming a Run.


        Each event in a server-sent events stream has an `event` and `data` property:


        ```

        event: thread.created

        data: {"id": "thread_123", "object": "thread", ...}


        `thread.message.completed` event.


        We may add additional events over time, so we recommend handling unknown events
        gracefully

        in your code. See the [Assistants API quickstart](/docs/assistants/overview)
        to learn how to

        integrate the Assistants API with streaming.

        '
      oneOf:
      - $ref: '#/components/schemas/ThreadStreamEvent'
      - $ref: '#/components/schemas/RunStreamEvent'
      - $ref: '#/components/schemas/RunStepStreamEvent'
      - $ref: '#/components/schemas/MessageStreamEvent'
      - $ref: '#/components/schemas/ErrorEvent'
      - $ref: '#/components/schemas/DoneEvent'
      x-oaiMeta:
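The `event`/`data` framing shown in the description above is plain server-sent events. A minimal illustrative parser, written to skip unknown event types gracefully as the spec recommends (the future event name is made up):

```python
KNOWN_EVENTS = {"thread.created", "thread.message.delta",
                "thread.message.completed", "done"}

def parse_sse(raw):
    """Yield (event, data) pairs from a server-sent events stream."""
    event, data = None, []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and event is not None:
            yield event, "\n".join(data)
            event, data = None, []

handled, ignored = [], []
raw = (
    "event: thread.created\n"
    'data: {"id": "thread_123", "object": "thread"}\n'
    "\n"
    "event: some.future.event\n"
    "data: {}\n"
    "\n"
)
for event, data in parse_sse(raw):
    if event in KNOWN_EVENTS:
        handled.append(event)
    else:
        ignored.append(event)  # unknown events are skipped, not fatal
print(handled, ignored)
```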

      description: The role of the author of a message
      enum:
      - system
      - user
      - assistant
      - tool
      - function
      type: string
    ChatCompletionStreamOptions:
      default: null
      description: 'Options for streaming response. Only set this when you set `stream:
        true`.

        '
      properties:
        include_usage:
          description: 'If set, an additional chunk will be streamed before the `data:
            [DONE]` message. The `usage` field on this chunk shows the token usage
            statistics for the entire request, and the `choices` field will always
            be an empty array. All other chunks will also include a `usage` field,
            but with a null value.
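In practice, the behaviour described for `include_usage` means the usage chunk is the one whose `choices` array is empty and whose `usage` field is non-null. A sketch with illustrative chunk values:

```python
def split_usage_chunk(chunks):
    """Separate content chunks from the final usage chunk.

    With stream_options {"include_usage": true}, content chunks carry
    "usage": null, and one extra chunk before the [DONE] message holds
    the token statistics with an empty "choices" array.
    """
    content, usage = [], None
    for chunk in chunks:
        if chunk["usage"] is not None and not chunk["choices"]:
            usage = chunk["usage"]
        else:
            content.append(chunk)
    return content, usage

chunks = [
    {"choices": [{"delta": {"content": "Hi"}}], "usage": None},
    {"choices": [{"delta": {"content": "!"}}], "usage": None},
    {"choices": [], "usage": {"prompt_tokens": 9,
                              "completion_tokens": 2,
                              "total_tokens": 11}},
]
content, usage = split_usage_chunk(chunks)
print(len(content), usage["total_tokens"])  # 2 11
```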

          enum:
          - text
          type: string
      required:
      - index
      - type
      title: Text
      type: object
    MessageDeltaObject:
      description: 'Represents a message delta, i.e. any changed fields on a message
        during streaming.

        '
      properties:
        delta:
          description: The delta containing the fields that have changed on the Message.
          properties:
            content:
              description: The content of the message as an array of text and/or images.
              items:
                oneOf:
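Applying a stream of such deltas amounts to appending each changed text fragment to the message being built. A minimal sketch, where the delta shapes follow the indexed content-part structure above and the values are illustrative:

```python
def apply_text_deltas(deltas):
    """Accumulate text content parts from MessageDeltaObject deltas."""
    parts = {}  # content-part index -> accumulated text
    for delta in deltas:
        for item in delta.get("content", []):
            if item.get("type") == "text":
                idx = item["index"]
                parts[idx] = parts.get(idx, "") + item["text"]["value"]
    return [parts[i] for i in sorted(parts)]

deltas = [
    {"content": [{"index": 0, "type": "text", "text": {"value": "Hel"}}]},
    {"content": [{"index": 0, "type": "text", "text": {"value": "lo"}}]},
]
print(apply_text_deltas(deltas))  # ['Hello']
```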

            $ref: '#/components/schemas/MessageDeltaObject'
          event:
            enum:
            - thread.message.delta
            type: string
        required:
        - event
        - data
        type: object
        x-oaiMeta:
          dataDescription: '`data` is a [message delta](/docs/api-reference/assistants-streaming/message-delta-object)'
      - description: Occurs when a [message](/docs/api-reference/messages/object)
          is completed.
        properties:
          data:
            $ref: '#/components/schemas/MessageObject'
          event:
            enum:
            - thread.message.completed
            type: string
        required:

          type: integer
      required:
      - prompt_tokens
      - completion_tokens
      - total_tokens
      type:
      - object
      - 'null'
    RunStepDeltaObject:
      description: 'Represents a run step delta, i.e. any changed fields on a run step
        during streaming.

        '
      properties:
        delta:
          description: The delta containing the fields that have changed on the run
            step.
          properties:
            step_details:
              description: The details of the run step.
              oneOf:

            $ref: '#/components/schemas/RunStepDeltaObject'
          event:
            enum:
            - thread.run.step.delta
            type: string
        required:
        - event
        - data
        type: object
        x-oaiMeta:
          dataDescription: '`data` is a [run step delta](/docs/api-reference/assistants-streaming/run-step-delta-object)'
      - description: Occurs when a [run step](/docs/api-reference/runs/step-object)
          is completed.
        properties:
          data:
            $ref: '#/components/schemas/RunStepObject'
          event:
            enum:
            - thread.run.step.completed
            type: string
        required:

            \   \"logprob\": -7.184561,\n                \"bytes\": [63, 10]\n   \
            \           }\n            ]\n          }\n        ]\n      },\n     \
            \ \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\"\
            : 9,\n    \"completion_tokens\": 9,\n    \"total_tokens\": 18\n  },\n\
            \  \"system_fingerprint\": null\n}\n"
          title: Logprobs
        group: chat
        name: Create chat completion
        path: create
        returns: 'Returns a [chat completion](/docs/api-reference/chat/object) object,
          or a streamed sequence of [chat completion chunk](/docs/api-reference/chat/streaming)
          objects if the request is streamed.

          '
  /completions:
    post:
      operationId: createCompletion
      requestBody:
        content:
          application/json:
            schema:

            python: "from openai import OpenAI\nclient = OpenAI()\n\nclient.completions.create(\n\
              \  model=\"VAR_model_id\",\n  prompt=\"Say this is a test\",\n  max_tokens=7,\n\
              \  temperature=0\n)\n"
          response: "{\n  \"id\": \"cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7\",\n  \"object\"\
            : \"text_completion\",\n  \"created\": 1589478378,\n  \"model\": \"VAR_model_id\"\
            ,\n  \"system_fingerprint\": \"fp_44709d6fcb\",\n  \"choices\": [\n  \
            \  {\n      \"text\": \"\\n\\nThis is indeed a test\",\n      \"index\"\
            : 0,\n      \"logprobs\": null,\n      \"finish_reason\": \"length\"\n\
            \    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\": 5,\n    \"completion_tokens\"\
            : 7,\n    \"total_tokens\": 12\n  }\n}\n"
          title: No streaming
        - request:
            curl: "curl https://api.openai.com/v1/completions \\\n  -H \"Content-Type:\
              \ application/json\" \\\n  -H \"Authorization: Bearer $OPENAI_API_KEY\"\
              \ \\\n  -d '{\n    \"model\": \"VAR_model_id\",\n    \"prompt\": \"\
              Say this is a test\",\n    \"max_tokens\": 7,\n    \"temperature\":\
              \ 0,\n    \"stream\": true\n  }'\n"
            node.js: "import OpenAI from \"openai\";\n\nconst openai = new OpenAI();\n\
              \nasync function main() {\n  const stream = await openai.completions.create({\n\
              \    model: \"VAR_model_id\",\n    prompt: \"Say this is a test.\",\n\
              \    stream: true,\n  });\n\n  for await (const chunk of stream) {\n\

    id: chat
    navigationGroup: endpoints
    sections:
    - key: createChatCompletion
      path: create
      type: endpoint
    - key: CreateChatCompletionResponse
      path: object
      type: object
    - key: CreateChatCompletionStreamResponse
      path: streaming
      type: object
    title: Chat
  - description: 'Get a vector representation of a given input that can be easily
      consumed by machine learning models and algorithms.


      Related guide: [Embeddings](/docs/guides/embeddings)

      '
    id: embeddings
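The embeddings description above is about consuming vectors downstream; a common first use is cosine similarity between two embeddings. A self-contained sketch with made-up low-dimensional vectors:

```python
import math

def cosine_similarity(a, b):
    """Compare two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Illustrative 3-dimensional vectors; real embeddings have hundreds
# or thousands of dimensions.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
v3 = [-0.3, 0.1, 0.0]
print(round(cosine_similarity(v1, v2), 3))  # 1.0
```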



      You can stream events from the [Create Thread and Run](/docs/api-reference/runs/createThreadAndRun),

      [Create Run](/docs/api-reference/runs/createRun), and [Submit Tool Outputs](/docs/api-reference/runs/submitToolOutputs)

      endpoints by passing `"stream": true`. The response will be a [Server-Sent events](https://html.spec.whatwg.org/multipage/server-sent-events.html#server-sent-events)
      stream.


      Our Node and Python SDKs provide helpful utilities to make streaming easy. Reference
      the

      [Assistants API quickstart](/docs/assistants/overview) to learn more.

      '
    id: assistants-streaming
    navigationGroup: assistants
    sections:
    - key: MessageDeltaObject
      path: message-delta-object
      type: object
    - key: RunStepDeltaObject
      path: run-step-delta-object
      type: object
    - key: AssistantStreamEvent
      path: events


