OpenAPI-Client-OpenAI

Changes

0.21    2025-04-11
        - Added examples to the docs, along with allowed models for each path.

0.20    2025-04-06
        - Completely revamped documentation. Much easier to read than the
          near-useless Schema.pod that we used to have.

0.16    2025-04-06
        - Audio API Updates: The audio text-to-speech model is updated to
          gpt-4o-mini-tts, and the audio transcription model is updated to
          gpt-4o-transcribe, with the addition of a streaming option for the
          transcription API.
        - Chat Completions Enhancements: Introduces new features for the Chat
          Completions API, including the ability to list, retrieve, update, and delete
          chat completions. Support for metadata filtering is added, and the
          documentation clarifies parameter support across different models.   
        - Realtime API Expansion: Adds a new endpoint for creating realtime
          transcription sessions and incorporates C# examples for both audio generation
          and transcription.   
        - Responses API Improvements: Significant updates to the Responses
          API, with a focus on enhanced tool usage, including web search and file search,

share/openapi.yaml

                $ref: "#/components/schemas/CreateChatCompletionResponse"
            text/event-stream:
              schema:
                $ref: "#/components/schemas/CreateChatCompletionStreamResponse"
      x-oaiMeta:
        name: Create chat completion
        group: chat
        returns: >
          Returns a [chat completion](/docs/api-reference/chat/object) object,
          or a streamed sequence of [chat completion
          chunk](/docs/api-reference/chat/streaming) objects if the request is
          streamed.
        path: create
        examples:
          - title: Default
            request:
              curl: |
                curl https://api.openai.com/v1/chat/completions \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{

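For comparison, a rough Perl equivalent of the curl example above using this distribution's generated client. A minimal sketch, assuming the generated method name follows the createChatCompletion operationId, that the call shape (a hashref with a body key) matches the module's synopsis, and that the client picks up the API key from OPENAI_API_KEY; gpt-4o-mini stands in for any chat-capable model.

    use OpenAPI::Client::OpenAI;

    # The client reads the API key from the OPENAI_API_KEY environment
    # variable, mirroring the Authorization header in the curl example.
    my $client = OpenAPI::Client::OpenAI->new;

    my $tx = $client->createChatCompletion(
        {
            body => {
                model    => 'gpt-4o-mini',
                messages => [ { role => 'user', content => 'Say this is a test' } ],
            },
        }
    );

    die $tx->res->body unless $tx->res->is_success;
    my $completion = $tx->res->json;    # a CreateChatCompletionResponse structure
    print $completion->{choices}[0]{message}{content}, "\n";
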
share/openapi.yaml

              schema:
                $ref: "#/components/schemas/CreateCompletionResponse"
      x-oaiMeta:
        name: Create completion
        group: completions
        returns: >
          Returns a [completion](/docs/api-reference/completions/object) object,
          or a sequence of completion objects if the request is streamed.
        legacy: true
        examples:
          - title: No streaming
            request:
              curl: |
                curl https://api.openai.com/v1/completions \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
                    "model": "VAR_completion_model_id",
                    "prompt": "Say this is a test",
                    "max_tokens": 7,
                    "temperature": 0

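The same request can be made through the Perl client. Again a sketch, assuming the method is named after the createCompletion operationId; the spec's VAR_completion_model_id placeholder is replaced here with gpt-3.5-turbo-instruct purely for illustration.

    use OpenAPI::Client::OpenAI;

    my $client = OpenAPI::Client::OpenAI->new;

    # Legacy completions endpoint; prefer chat completions for new code.
    my $tx = $client->createCompletion(
        {
            body => {
                model       => 'gpt-3.5-turbo-instruct',    # substitute your completion model
                prompt      => 'Say this is a test',
                max_tokens  => 7,
                temperature => 0,
            },
        }
    );

    print $tx->res->json->{choices}[0]{text}, "\n";
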
share/openapi.yaml

                "type": "code_interpreter"
              }
            ],
            "metadata": {},
            "top_p": 1.0,
            "temperature": 1.0,
            "response_format": "auto"
          }
    AssistantStreamEvent:
      description: >
        Represents an event emitted when streaming a Run.


        Each event in a server-sent events stream has `event` and `data`
        properties:


        ```

        event: thread.created

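The stream described here is plain server-sent events: each event is a block with an `event:` line and a `data:` line holding JSON, and blocks are separated by blank lines. A minimal, dependency-free sketch (core Perl plus JSON::PP) of splitting a fully-read body into (event, payload) pairs; the function name is illustrative.

    use JSON::PP qw(decode_json);

    # Split an SSE body into [ event_name, decoded_data ] pairs.
    # Assumes each data field fits on a single "data:" line, which is how the
    # OpenAI stream is emitted.
    sub parse_sse {
        my ($body) = @_;
        my @events;
        for my $block ( split /\n\n/, $body ) {    # events are separated by blank lines
            my ($event) = $block =~ /^event:\s*(.+)$/m;
            my ($data)  = $block =~ /^data:\s*(.+)$/m;
            next unless defined $data;
            last if $data eq '[DONE]';             # terminal sentinel
            push @events, [ $event // 'message', decode_json($data) ];
        }
        return @events;
    }
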
share/openapi.yaml


        `thread.message.completed` event.


        We may add additional events over time, so we recommend handling unknown
        events gracefully

        in your code. See the [Assistants API
        quickstart](/docs/assistants/overview) to learn how to

        integrate the Assistants API with streaming.
      oneOf:
        - $ref: "#/components/schemas/ThreadStreamEvent"
        - $ref: "#/components/schemas/RunStreamEvent"
        - $ref: "#/components/schemas/RunStepStreamEvent"
        - $ref: "#/components/schemas/MessageStreamEvent"
        - $ref: "#/components/schemas/ErrorEvent"
        - $ref: "#/components/schemas/DoneEvent"
      x-oaiMeta:
        name: Assistant stream events
        beta: true

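Since unknown event types should be handled gracefully, a dispatch table with a no-op fallback is one simple approach. A sketch building on the parse_sse helper sketched above; the handler bodies are illustrative only and not part of this distribution.

    # $sse_body: the raw text/event-stream response body.
    my %handlers = (
        'thread.created'           => sub { print "thread $_[0]{id} created\n" },
        'thread.message.delta'     => sub {
            print $_->{text}{value} // '' for @{ $_[0]{delta}{content} // [] };
        },
        'thread.message.completed' => sub { print "\n" },
    );

    for my $pair ( parse_sse($sse_body) ) {
        my ( $name, $data ) = @$pair;
        ( $handlers{$name} // sub { } )->($data);    # unknown events are skipped, not fatal
    }
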
share/openapi.yaml

      description: The role of the author of a message
      enum:
        - developer
        - system
        - user
        - assistant
        - tool
        - function
    ChatCompletionStreamOptions:
      description: >
        Options for streaming response. Only set this when you set `stream:
        true`.
      type: object
      nullable: true
      default: null
      properties:
        include_usage:
          type: boolean
          description: >
            If set, an additional chunk will be streamed before the `data:
            [DONE]`

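The two settings travel together: `stream_options` is only accepted when `stream` is true, and `include_usage` requests one extra usage-only chunk just before `data: [DONE]`. A sketch of how the request might look through the Perl client, with the same caveats about the assumed call shape as in the earlier examples.

    # $client constructed as in the chat completion example earlier.
    my $tx = $client->createChatCompletion(
        {
            body => {
                model          => 'gpt-4o-mini',
                messages       => [ { role => 'user', content => 'Hello' } ],
                stream         => \1,                        # \1 encodes as JSON true
                stream_options => { include_usage => \1 },
            },
        }
    );

    # The final chunk before "data: [DONE]" carries a usage object and an
    # empty choices array; every earlier chunk has usage set to null.
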
share/openapi.yaml

                "accepted_prediction_tokens": 0,
                "rejected_prediction_tokens": 0
              }
            }
          }
    CreateChatCompletionImageResponse:
      type: object
      description:
        Represents a streamed chunk of a chat completion response returned
        by the model, based on the provided input. [Learn
        more](/docs/guides/streaming-responses).
      x-oaiMeta:
        name: The chat completion chunk object
        group: chat
        example: >
          {
            "id": "chatcmpl-123",
            "object": "chat.completion",
            "created": 1677652288,
            "model": "gpt-4o-mini",
            "system_fingerprint": "fp_44709d6fcb",

share/openapi.yaml

                [evals](/docs/guides/evals) products.
            stream:
              description: >
                If set to true, the model response data will be streamed to the
                client

                as it is generated using [server-sent
                events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).

                See the [Streaming section
                below](/docs/api-reference/chat/streaming)

                for more information, along with the [streaming
                responses](/docs/guides/streaming-responses)

                guide on how to handle the streaming events.
              type: boolean
              nullable: true
              default: false
            stop:
              $ref: "#/components/schemas/StopConfiguration"
            logit_bias:
              type: object
              x-oaiTypeLabel: map
              default: null

share/openapi.yaml

              }
            },
            "service_tier": "default",
            "system_fingerprint": "fp_fc9f1d7035"
          }
    CreateChatCompletionStreamResponse:
      type: object
      description: |
        Represents a streamed chunk of a chat completion response returned
        by the model, based on the provided input. 
        [Learn more](/docs/guides/streaming-responses).
      properties:
        id:
          type: string
          description:
            A unique identifier for the chat completion. Each chunk has the
            same ID.
        choices:
          type: array
          description: >
            A list of chat completion choices. Can contain more than one

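Every chunk repeats the same completion id and carries only a delta per choice, so the caller is responsible for stitching content back together. A small sketch assuming the chunks have already been decoded from the `data:` lines (for instance with the parse_sse helper sketched earlier).

    # @chunks: decoded CreateChatCompletionStreamResponse objects, in arrival order.
    sub join_stream_text {
        my (@chunks) = @_;
        my %text_for;    # choice index => accumulated content
        for my $chunk (@chunks) {
            for my $choice ( @{ $chunk->{choices} } ) {
                $text_for{ $choice->{index} } .= $choice->{delta}{content} // '';
            }
        }
        return $text_for{0};    # full text of the first (usually only) choice
    }
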
share/openapi.yaml

              nullable: true
            stream:
              description: >
                If set to true, the model response data will be streamed to the
                client

                as it is generated using [server-sent
                events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).

                See the [Streaming section
                below](/docs/api-reference/responses-streaming)

                for more information.
              type: boolean
              nullable: true
              default: false
          required:
            - model
            - input
    CreateRunRequest:
      type: object

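For the Responses API request described just above, the shape mirrors the chat case: `model` and `input` are required, and setting `stream` switches the reply to the server-sent events documented under responses-streaming. A sketch assuming the generated method is named after the createResponse operationId; as before, the call shape is an assumption based on the module's synopsis.

    my $tx = $client->createResponse(
        {
            body => {
                model  => 'gpt-4o-mini',
                input  => 'Write a one-sentence bedtime story about a unicorn.',
                stream => \1,    # emit response.* server-sent events instead of one JSON body
            },
        }
    );
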
share/openapi.yaml

            - segment
        stream:
          description: >
            If set to true, the model response data will be streamed to the
            client

            as it is generated using [server-sent
            events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format). 

            See the [Streaming section of the Speech-to-Text
            guide](/docs/guides/speech-to-text?lang=curl#streaming-transcriptions)

            for more information.


            Note: Streaming is not supported for the `whisper-1` model and will
            be ignored.
          type: boolean
          nullable: true
          default: false
      required:

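Because `whisper-1` silently ignores `stream`, it can be worth normalising the flag before building the transcription request; the file upload itself is omitted here. The helper below is purely illustrative and not part of this distribution.

    # Hypothetical helper: only keep stream => true for models that will honour it.
    sub transcription_params {
        my (%args) = @_;
        delete $args{stream} if ( $args{model} // '' ) eq 'whisper-1';    # ignored by whisper-1
        return \%args;
    }

    my $params = transcription_params(
        model           => 'gpt-4o-transcribe',
        response_format => 'json',
        stream          => \1,
    );
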
share/openapi.yaml

                  - $ref: "#/components/schemas/MessageDeltaContentTextAnnotationsFilePathObject"
                x-oaiExpandable: true
      required:
        - index
        - type
    MessageDeltaObject:
      type: object
      title: Message delta object
      description: >
        Represents a message delta, i.e. any changed fields on a message during
        streaming.
      properties:
        id:
          description:
            The identifier of the message, which can be referenced in API
            endpoints.
          type: string
        object:
          description: The object type, which is always `thread.message.delta`.
          type: string
          enum:

share/openapi.yaml

              x-stainless-const: true
            data:
              $ref: "#/components/schemas/MessageDeltaObject"
          required:
            - event
            - data
          description: Occurs when parts of a
            [Message](/docs/api-reference/messages/object) are being streamed.
          x-oaiMeta:
            dataDescription: "`data` is a [message
              delta](/docs/api-reference/assistants-streaming/message-delta-obj\
              ect)"
        - type: object
          properties:
            event:
              type: string
              enum:
                - thread.message.completed
              x-stainless-const: true
            data:
              $ref: "#/components/schemas/MessageObject"

share/openapi.yaml


        when to commit. When Server VAD is disabled, you must commit the audio
        buffer

        manually.


        The client may choose how much audio to place in each event up to a
        maximum 

        of 15 MiB; for example, streaming smaller chunks from the client may
        allow the

        VAD to be more responsive. Unlike most other client events, the server
        will

        not send a confirmation response to this event.
      properties:
        event_id:
          type: string
          description: Optional client-generated ID used to identify this event.

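One way to respect the 15 MiB cap while keeping chunks small enough for responsive VAD is to split the audio before base64-encoding it. The sketch below only builds the input_audio_buffer.append payloads; the Realtime API itself runs over a WebSocket connection, which is outside the scope of this HTTP client, and the 64 KiB chunk size is an arbitrary illustrative choice.

    use MIME::Base64 qw(encode_base64);

    # Split raw audio into small input_audio_buffer.append events.
    # 64 KiB per event stays far below the 15 MiB limit while letting
    # server VAD react quickly.
    sub audio_append_events {
        my ($pcm_bytes) = @_;
        my @events;
        my $chunk_size = 64 * 1024;
        for ( my $offset = 0; $offset < length $pcm_bytes; $offset += $chunk_size ) {
            push @events, {
                type  => 'input_audio_buffer.append',
                audio => encode_base64( substr( $pcm_bytes, $offset, $chunk_size ), '' ),
            };
        }
        return @events;
    }
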
share/openapi.yaml

              "response_id": "resp_001",
              "item_id": "msg_008",
              "output_index": 0,
              "content_index": 0,
              "delta": "Hello, how can I a"
          }
    RealtimeServerEventResponseAudioTranscriptDone:
      type: object
      description: |
        Returned when the model-generated transcription of audio output is done
        streaming. Also emitted when a Response is interrupted, incomplete, or
        cancelled.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          type: string
          enum:
            - response.audio_transcript.done
          description: The event type, must be `response.audio_transcript.done`.

share/openapi.yaml

              "output_index": 0,
              "content_index": 0,
              "part": {
                  "type": "text",
                  "text": ""
              }
          }
    RealtimeServerEventResponseContentPartDone:
      type: object
      description: >
        Returned when a content part is done streaming in an assistant message
        item.

        Also emitted when a Response is interrupted, incomplete, or cancelled.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          type: string
          enum:

share/openapi.yaml

                  "object": "realtime.response",
                  "status": "in_progress",
                  "status_details": null,
                  "output": [],
                  "usage": null
              }
          }
    RealtimeServerEventResponseDone:
      type: object
      description: >
        Returned when a Response is done streaming. Always emitted, no matter
        the 

        final state. The Response object included in the `response.done` event
        will 

        include all output Items in the Response but will omit the raw audio
        data.
      properties:
        event_id:
          type: string

share/openapi.yaml

              "response_id": "resp_002",
              "item_id": "fc_001",
              "output_index": 0,
              "call_id": "call_001",
              "delta": "{\"location\": \"San\""
          }
    RealtimeServerEventResponseFunctionCallArgumentsDone:
      type: object
      description: >
        Returned when the model-generated function call arguments are done
        streaming.

        Also emitted when a Response is interrupted, incomplete, or cancelled.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          type: string
          enum:
            - response.function_call_arguments.done

share/openapi.yaml

                  "object": "realtime.item",
                  "type": "message",
                  "status": "in_progress",
                  "role": "assistant",
                  "content": []
              }
          }
    RealtimeServerEventResponseOutputItemDone:
      type: object
      description: >
        Returned when an Item is done streaming. Also emitted when a Response
        is 

        interrupted, incomplete, or cancelled.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          type: string
          enum:

share/openapi.yaml

              "type": "response.text.delta",
              "response_id": "resp_001",
              "item_id": "msg_007",
              "output_index": 0,
              "content_index": 0,
              "delta": "Sure, I can h"
          }
    RealtimeServerEventResponseTextDone:
      type: object
      description: >
        Returned when the text value of a "text" content part is done streaming.
        Also

        emitted when a Response is interrupted, incomplete, or cancelled.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          type: string
          enum:

share/openapi.yaml

      required:
        - prompt_tokens
        - completion_tokens
        - total_tokens
      nullable: true
    RunStepDeltaObject:
      type: object
      title: Run step delta object
      description: >
        Represents a run step delta, i.e. any changed fields on a run step during
        streaming.
      properties:
        id:
          description:
            The identifier of the run step, which can be referenced in API
            endpoints.
          type: string
        object:
          description: The object type, which is always `thread.run.step.delta`.
          type: string
          enum:

share/openapi.yaml

              x-stainless-const: true
            data:
              $ref: "#/components/schemas/RunStepDeltaObject"
          required:
            - event
            - data
          description: Occurs when parts of a [run
            step](/docs/api-reference/run-steps/step-object) are being streamed.
          x-oaiMeta:
            dataDescription: "`data` is a [run step
              delta](/docs/api-reference/assistants-streaming/run-step-delta-ob\
              ject)"
        - type: object
          properties:
            event:
              type: string
              enum:
                - thread.run.step.completed
              x-stainless-const: true
            data:
              $ref: "#/components/schemas/RunStepObject"

share/openapi.yaml

          path: delete
        - type: object
          key: CreateChatCompletionResponse
          path: object
        - type: object
          key: ChatCompletionList
          path: list-object
        - type: object
          key: ChatCompletionMessageList
          path: message-list
    - id: chat-streaming
      title: Streaming
      description: |
        Stream Chat Completions in real time. Receive chunks of completions
        returned from the model using server-sent events. 
        [Learn more](/docs/guides/streaming-responses?api-mode=chat).
      navigationGroup: chat
      sections:
        - type: object
          key: CreateChatCompletionStreamResponse
          path: streaming
    - id: audio
      title: Audio
      description: |
        Learn how to turn audio into text or text into audio.

        Related guide: [Speech to text](/docs/guides/speech-to-text)
      navigationGroup: endpoints
      sections:
        - type: endpoint
          key: createSpeech

share/openapi.yaml

          path: delete
        - type: endpoint
          key: listInputItems
          path: input-items
        - type: object
          key: Response
          path: object
        - type: object
          key: ResponseItemList
          path: list
    - id: responses-streaming
      title: Streaming
      description: >
        When you [create a Response](/docs/api-reference/responses/create) with

        `stream` set to `true`, the server will emit server-sent events to the

        client as the Response is generated. This section contains the events
        that

        are emitted by the server.


        [Learn more about streaming
        responses](/docs/guides/streaming-responses?api-mode=responses).
      navigationGroup: responses
      sections:
        - type: object
          key: ResponseCreatedEvent
          path: <auto>
        - type: object
          key: ResponseInProgressEvent
          path: <auto>
        - type: object
          key: ResponseCompletedEvent

share/openapi.yaml

      sections:
        - type: endpoint
          key: listRunSteps
          path: listRunSteps
        - type: endpoint
          key: getRunStep
          path: getRunStep
        - type: object
          key: RunStepObject
          path: step-object
    - id: assistants-streaming
      title: Streaming
      beta: true
      description: >
        Stream the result of executing a Run or resuming a Run after submitting
        tool outputs.

        You can stream events from the [Create Thread and
        Run](/docs/api-reference/runs/createThreadAndRun),

        [Create Run](/docs/api-reference/runs/createRun), and [Submit Tool
        Outputs](/docs/api-reference/runs/submitToolOutputs)

        endpoints by passing `"stream": true`. The response will be a
        [Server-Sent
        events](https://html.spec.whatwg.org/multipage/server-sent-events.html#server-sent-events)
        stream.

        Our Node and Python SDKs provide helpful utilities to make streaming
        easy. Reference the

        [Assistants API quickstart](/docs/assistants/overview) to learn more.
      navigationGroup: assistants
      sections:
        - type: object
          key: MessageDeltaObject
          path: message-delta-object
        - type: object
          key: RunStepDeltaObject


