OpenAPI-Client-OpenAI


Changes

0.24    2025-08-22

        - Model Updates: Added support for new `gpt-5` series models, including
        `gpt-5`, `gpt-5-mini`, and `gpt-5-nano`, across various API endpoints like
        Assistants and Chat Completions.
        - Conversations API Introduction: A new top-level `/conversations` endpoint has
        been added to create, retrieve, update, and delete conversations and their
        items. This provides a new way to manage and persist conversation state. The
        previous `/embeddings` endpoint has been removed to accommodate this new
        feature.
        - Images API Streaming: The Images API now supports streaming for both image
        generation (`/images/generations`) and editing (`/images/edits`). New
        parameters such as `input_fidelity` and `partial_images` have also been
        introduced for more control over image results.
        - Documentation and Code Examples: Replaced all Kotlin code examples with Java
        examples across the entire API specification. Endpoint summaries have been
        standardized for conciseness, with longer explanations moved into the
        `description` field. Additionally, internal documentation links have been
        updated to absolute URLs.
        - File API Enhancements: The organizational file storage limit has been
        increased to 1 TB. A new `expires_after` parameter allows setting an expiration

Changes

        parameter to manage the lifecycle of output files. Many schema definitions have
        been updated from `oneOf` to the more flexible `anyOf`.

0.23    2025-07-03
        - Version number bump to replace a bad PAUSE upload. No functional change.

0.22    2025-07-03
        - Added a new `git-release` script to automate the release process.
        - Audio API Updates: The audio text-to-speech model is updated to
          gpt-4o-mini-tts, and the audio transcription model is updated to
          gpt-4o-transcribe, with the addition of a streaming option for the
          transcription API.
        - Chat Completions Enhancements: Introduces new features for the Chat
          Completions API, including the ability to list, retrieve, update, and delete
          chat completions. Support for metadata filtering is added, and the
          documentation clarifies parameter support across different models.
        - Realtime API Expansion: Adds a new endpoint for creating realtime
          transcription sessions and incorporates C# examples for both audio generation
          and transcription.
        - Responses API Improvements: Significant updates to the Responses
          API, with a focus on enhanced tool usage, including web search and file search,

Changes

0.21    2025-04-11
        - Added examples to the docs, along with allowed models for each path.

0.20    2025-04-06
        - Completely revamped documentation. Much easier to read than the
          near-useless Schema.pod that we used to have.

0.16    2025-04-06
        - Audio API Updates: The audio text-to-speech model is updated to
          gpt-4o-mini-tts, and the audio transcription model is updated to
          gpt-4o-transcribe, with the addition of a streaming option for the
          transcription API.
        - Chat Completions Enhancements: Introduces new features for the Chat
          Completions API, including the ability to list, retrieve, update, and delete
          chat completions. Support for metadata filtering is added, and the
          documentation clarifies parameter support across different models.
        - Realtime API Expansion: Adds a new endpoint for creating realtime
          transcription sessions and incorporates C# examples for both audio generation
          and transcription.
        - Responses API Improvements: Significant updates to the Responses
          API, with a focus on enhanced tool usage, including web search and file search,

lib/OpenAPI/Client/OpenAI/Path/responses-response_id.pod

=item * C<include> (in query) (Optional) - Additional fields to include in the response. See the `include`
parameter for Response creation above for more information.


Type: C<array>



=item * C<stream> (in query) (Optional) - If set to true, the model response data will be streamed to the client
as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).
See the [Streaming section below](https://platform.openai.com/docs/api-reference/responses-streaming)
for more information.


Type: C<boolean>



=item * C<starting_after> (in query) (Optional) - The sequence number of the event after which to start streaming.


Type: C<integer>



=item * C<include_obfuscation> (in query) (Optional) - When true, stream obfuscation will be enabled. Stream obfuscation adds
random characters to an `obfuscation` field on streaming delta events
to normalize payload sizes as a mitigation to certain side-channel
attacks. These obfuscation fields are included by default, but add a
small amount of overhead to the data stream. You can set
`include_obfuscation` to false to optimize for bandwidth if you trust
the network links between your application and the OpenAI API.


Type: C<boolean>
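
As a sketch of how these parameters might be passed through the generated
client (assuming the method name follows this path's operationId,
C<getResponse>, and that C<OPENAI_API_KEY> is set in the environment; the
response id and C<include> value are illustrative):

    use OpenAPI::Client::OpenAI;

    my $client = OpenAPI::Client::OpenAI->new;    # reads OPENAI_API_KEY
    my $tx     = $client->getResponse({
        response_id => 'resp_abc123',             # hypothetical response id
        include     => ['message.output_text.logprobs'],
    });
    die $tx->res->body unless $tx->res->is_success;
    my $response = $tx->res->json;
    print $response->{status}, "\n";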


share/openapi.yaml

                $ref: '#/components/schemas/CreateChatCompletionResponse'
            text/event-stream:
              schema:
                $ref: '#/components/schemas/CreateChatCompletionStreamResponse'
      x-oaiMeta:
        name: Create chat completion
        group: chat
        returns: >
          Returns a [chat completion](https://platform.openai.com/docs/api-reference/chat/object) object, or a
          streamed sequence of [chat completion
          chunk](https://platform.openai.com/docs/api-reference/chat/streaming) objects if the request is
          streamed.
        path: create
        examples:
          - title: Default
            request:
              curl: |
                curl https://api.openai.com/v1/chat/completions \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
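
The same request through this distribution's generated client might look like
the sketch below (assuming the method name follows the operationId
`createChatCompletion`; the model id is illustrative):

    use OpenAPI::Client::OpenAI;

    my $client = OpenAPI::Client::OpenAI->new;
    my $tx     = $client->createChatCompletion({
        body => {
            model    => 'gpt-4o-mini',    # illustrative model id
            messages => [ { role => 'user', content => 'Say this is a test' } ],
        },
    });
    my $completion = $tx->res->json;      # chat completion object (not streamed)
    print $completion->{choices}[0]{message}{content}, "\n";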

share/openapi.yaml

              schema:
                $ref: '#/components/schemas/CreateCompletionResponse'
      x-oaiMeta:
        name: Create completion
        group: completions
        returns: >
          Returns a [completion](https://platform.openai.com/docs/api-reference/completions/object) object, or
          a sequence of completion objects if the request is streamed.
        legacy: true
        examples:
          - title: No streaming
            request:
              curl: |
                curl https://api.openai.com/v1/completions \
                  -H "Content-Type: application/json" \
                  -H "Authorization: Bearer $OPENAI_API_KEY" \
                  -d '{
                    "model": "VAR_completion_model_id",
                    "prompt": "Say this is a test",
                    "max_tokens": 7,
                    "temperature": 0

share/openapi.yaml

          name: stream
          schema:
            type: boolean
          description: >
            If set to true, the model response data will be streamed to the client

            as it is generated using [server-sent
            events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).

            See the [Streaming section
            below](https://platform.openai.com/docs/api-reference/responses-streaming)

            for more information.
        - in: query
          name: starting_after
          schema:
            type: integer
          description: |
            The sequence number of the event after which to start streaming.
        - in: query
          name: include_obfuscation
          schema:
            type: boolean
          description: |
            When true, stream obfuscation will be enabled. Stream obfuscation adds
            random characters to an `obfuscation` field on streaming delta events
            to normalize payload sizes as a mitigation to certain side-channel
            attacks. These obfuscation fields are included by default, but add a
            small amount of overhead to the data stream. You can set
            `include_obfuscation` to false to optimize for bandwidth if you trust
            the network links between your application and the OpenAI API.
      responses:
        '200':
          description: OK
          content:
            application/json:

share/openapi.yaml

                "type": "code_interpreter"
              }
            ],
            "metadata": {},
            "top_p": 1.0,
            "temperature": 1.0,
            "response_format": "auto"
          }
    AssistantStreamEvent:
      description: >
        Represents an event emitted when streaming a Run.


        Each event in a server-sent events stream has an `event` and `data` property:


        ```

        event: thread.created

        data: {"id": "thread_123", "object": "thread", ...}

share/openapi.yaml

        `thread.message.in_progress` event, many `thread.message.delta` events, and finally a

        `thread.message.completed` event.


        We may add additional events over time, so we recommend handling unknown events gracefully

        in your code. See the [Assistants API
        quickstart](https://platform.openai.com/docs/assistants/overview) to learn how to

        integrate the Assistants API with streaming.
      x-oaiMeta:
        name: Assistant stream events
        beta: true
      anyOf:
        - $ref: '#/components/schemas/ThreadStreamEvent'
        - $ref: '#/components/schemas/RunStreamEvent'
        - $ref: '#/components/schemas/RunStepStreamEvent'
        - $ref: '#/components/schemas/MessageStreamEvent'
        - $ref: '#/components/schemas/ErrorEvent'
          x-stainless-variantName: error_event
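
A minimal sketch of consuming such a stream while handling unknown events
gracefully, as the description above recommends (the handler bodies are
illustrative; `$payload` holds complete SSE blocks separated by blank lines):

    use strict;
    use warnings;
    use JSON::PP qw(decode_json);

    my %on_event = (
        'thread.created'       => sub { print "thread $_[0]{id} created\n" },
        'thread.message.delta' => sub { print "message delta received\n" },
    );

    sub dispatch_sse {
        my ($payload) = @_;
        for my $block (split /\n\n/, $payload) {
            my ($event) = $block =~ /^event:\s*(\S+)/m;
            my ($data)  = $block =~ /^data:\s*(.+)/m;
            next unless defined $event && defined $data;
            next if $data eq '[DONE]';    # end-of-stream marker
            if (my $cb = $on_event{$event}) { $cb->(decode_json($data)) }
            else { warn "skipping unknown event '$event'\n" }    # stay forward compatible
        }
    }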

share/openapi.yaml

      description: The role of the author of a message
      enum:
        - developer
        - system
        - user
        - assistant
        - tool
        - function
    ChatCompletionStreamOptions:
      description: |
        Options for streaming response. Only set this when you set `stream: true`.
      type: object
      nullable: true
      default: null
      properties:
        include_usage:
          type: boolean
          description: |
            If set, an additional chunk will be streamed before the `data: [DONE]`
            message. The `usage` field on this chunk shows the token usage statistics
            for the entire request, and the `choices` field will always be an empty
            array.

            All other chunks will also include a `usage` field, but with a null
            value. **NOTE:** If the stream is interrupted, you may not receive the
            final usage chunk which contains the total token usage for the request.
        include_obfuscation:
          type: boolean
          description: |
            When true, stream obfuscation will be enabled. Stream obfuscation adds
            random characters to an `obfuscation` field on streaming delta events to
            normalize payload sizes as a mitigation to certain side-channel attacks.
            These obfuscation fields are included by default, but add a small amount
            of overhead to the data stream. You can set `include_obfuscation` to
            false to optimize for bandwidth if you trust the network links between
            your application and the OpenAI API.
    ChatCompletionStreamResponseDelta:
      type: object
      description: A chat completion delta generated by streamed model responses.
      properties:
        content:
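
A sketch of what the `stream_options` described above look like in a request
body, and of spotting the final usage chunk (hash shape only; JSON::PP
booleans stand in for JSON `true`/`false`, and the model id is illustrative):

    use JSON::PP;

    my $body = {
        model          => 'gpt-4o-mini',
        messages       => [ { role => 'user', content => 'hello' } ],
        stream         => JSON::PP::true,
        stream_options => {
            include_usage       => JSON::PP::true,     # ask for the final usage chunk
            include_obfuscation => JSON::PP::false,    # trusted link: skip the padding
        },
    };

    # The usage chunk is the one whose `choices` array is empty and whose
    # `usage` field is non-null.
    sub is_usage_chunk {
        my ($chunk) = @_;
        return !@{ $chunk->{choices} } && defined $chunk->{usage};
    }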

share/openapi.yaml


                Supports text and image inputs. Note: image inputs over 8MB will be dropped.
            stream:
              description: >
                If set to true, the model response data will be streamed to the client

                as it is generated using [server-sent
                events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).

                See the [Streaming section
                below](https://platform.openai.com/docs/api-reference/chat/streaming)

                for more information, along with the [streaming
                responses](https://platform.openai.com/docs/guides/streaming-responses)

                guide for more information on how to handle the streaming events.
              type: boolean
              nullable: true
              default: false
            stop:
              $ref: '#/components/schemas/StopConfiguration'
            logit_bias:
              type: object
              x-oaiTypeLabel: map
              default: null
              nullable: true

share/openapi.yaml

              }
            },
            "service_tier": "default",
            "system_fingerprint": "fp_fc9f1d7035"
          }
    CreateChatCompletionStreamResponse:
      type: object
      description: |
        Represents a streamed chunk of a chat completion response returned
        by the model, based on the provided input. 
        [Learn more](https://platform.openai.com/docs/guides/streaming-responses).
      properties:
        id:
          type: string
          description: A unique identifier for the chat completion. Each chunk has the same ID.
        choices:
          type: array
          description: >
            A list of chat completion choices. Can contain more than one element if `n` is greater than 1.
            Can also be empty for the

share/openapi.yaml

            A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
            [Learn more](https://platform.openai.com/docs/guides/safety-best-practices#end-user-ids).
        input_fidelity:
          $ref: '#/components/schemas/ImageInputFidelity'
        stream:
          type: boolean
          default: false
          example: false
          nullable: true
          description: >
            Edit the image in streaming mode. Defaults to `false`. See the 

            [Image generation guide](https://platform.openai.com/docs/guides/image-generation) for more
            information.
        partial_images:
          $ref: '#/components/schemas/PartialImages'
        quality:
          type: string
          enum:
            - standard
            - low

share/openapi.yaml

          nullable: true
          description: >-
            The compression level (0-100%) for the generated images. This parameter is only supported for
            `gpt-image-1` with the `webp` or `jpeg` output formats, and defaults to 100.
        stream:
          type: boolean
          default: false
          example: false
          nullable: true
          description: >
            Generate the image in streaming mode. Defaults to `false`. See the 

            [Image generation guide](https://platform.openai.com/docs/guides/image-generation) for more
            information.

            This parameter is only supported for `gpt-image-1`.
        partial_images:
          $ref: '#/components/schemas/PartialImages'
        size:
          type: string
          enum:
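
A sketch of a streaming generation request built from these fields (assuming
the generated method follows the operationId `createImage`; the prompt is
illustrative, and partial results arrive as the image streaming events
defined later in this file):

    use OpenAPI::Client::OpenAI;
    use JSON::PP;

    my $client = OpenAPI::Client::OpenAI->new;
    my $tx     = $client->createImage({
        body => {
            model          => 'gpt-image-1',      # streaming requires gpt-image-1
            prompt         => 'a watercolor fox',
            stream         => JSON::PP::true,
            partial_images => 2,                  # 0..3; 0 sends one final event
        },
    });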

share/openapi.yaml

                response will not be carried over to the next response. This makes it simple
                to swap out system (or developer) messages in new responses.
            stream:
              description: >
                If set to true, the model response data will be streamed to the client

                as it is generated using [server-sent
                events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).

                See the [Streaming section
                below](https://platform.openai.com/docs/api-reference/responses-streaming)

                for more information.
              type: boolean
              nullable: true
              default: false
            stream_options:
              $ref: '#/components/schemas/ResponseStreamOptions'
            conversation:
              description: >
                The conversation that this response belongs to. Items from this conversation are prepended to

share/openapi.yaml

          type: number
          default: 0
        stream:
          description: >
            If set to true, the model response data will be streamed to the client

            as it is generated using [server-sent
            events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format). 

            See the [Streaming section of the Speech-to-Text
            guide](https://platform.openai.com/docs/guides/speech-to-text?lang=curl#streaming-transcriptions)

            for more information.


            Note: Streaming is not supported for the `whisper-1` model and will be ignored.
          type: boolean
          nullable: true
          default: false
        chunking_strategy:
          $ref: '#/components/schemas/TranscriptionChunkingStrategy'
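
Because transcription is a multipart endpoint, a raw Mojo::UserAgent request
is the simplest way to sketch the streaming flag (file name and model id are
illustrative; per the note above, `whisper-1` would silently ignore `stream`):

    use Mojo::UserAgent;

    my $ua = Mojo::UserAgent->new;
    my $tx = $ua->post(
        'https://api.openai.com/v1/audio/transcriptions',
        { Authorization => "Bearer $ENV{OPENAI_API_KEY}" },
        form => {
            file   => { file => 'speech.mp3' },    # multipart file upload
            model  => 'gpt-4o-transcribe',
            stream => 'true',                      # transcript arrives as SSE deltas
        },
    );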

share/openapi.yaml

              "output_tokens": 50,
              "input_tokens_details": {
                "text_tokens": 10,
                "image_tokens": 40
              }
            }
          }
    ImageEditPartialImageEvent:
      type: object
      description: |
        Emitted when a partial image is available during image editing streaming.
      properties:
        type:
          type: string
          description: |
            The type of the event. Always `image_edit.partial_image`.
          enum:
            - image_edit.partial_image
          x-stainless-const: true
        b64_json:
          type: string

share/openapi.yaml

          type: string
          description: |
            The output format for the requested edited image.
          enum:
            - png
            - webp
            - jpeg
        partial_image_index:
          type: integer
          description: |
            0-based index for the partial image (streaming).
      required:
        - type
        - b64_json
        - created_at
        - size
        - quality
        - background
        - output_format
        - partial_image_index
      x-oaiMeta:

share/openapi.yaml

              "output_tokens": 50,
              "input_tokens_details": {
                "text_tokens": 10,
                "image_tokens": 40
              }
            }
          }
    ImageGenPartialImageEvent:
      type: object
      description: |
        Emitted when a partial image is available during image generation streaming.
      properties:
        type:
          type: string
          description: |
            The type of the event. Always `image_generation.partial_image`.
          enum:
            - image_generation.partial_image
          x-stainless-const: true
        b64_json:
          type: string

share/openapi.yaml

          type: string
          description: |
            The output format for the requested image.
          enum:
            - png
            - webp
            - jpeg
        partial_image_index:
          type: integer
          description: |
            0-based index for the partial image (streaming).
      required:
        - type
        - b64_json
        - created_at
        - size
        - quality
        - background
        - output_format
        - partial_image_index
      x-oaiMeta:
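
A sketch of handling one of these events once its JSON payload has been
decoded (the output file naming is illustrative; the edit variant above
differs only in its `type` value):

    use MIME::Base64 qw(decode_base64);

    sub save_partial_image {
        my ($event) = @_;    # decoded payload of a *.partial_image event
        my $idx  = $event->{partial_image_index};             # 0-based
        my $path = "partial_$idx.$event->{output_format}";    # png/webp/jpeg
        open my $fh, '>:raw', $path or die "open $path: $!";
        print {$fh} decode_base64($event->{b64_json});
        close $fh;
    }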

share/openapi.yaml

              type: string
              description: |
                File ID for the mask image.
          required: []
          additionalProperties: false
        partial_images:
          type: integer
          minimum: 0
          maximum: 3
          description: |
            Number of partial images to generate in streaming mode, from 0 (default value) to 3.
          default: 0
      required:
        - type
    ImageGenToolCall:
      type: object
      title: Image generation call
      description: |
        An image generation request made by the model.
      properties:
        type:

share/openapi.yaml

              type: array
              items:
                $ref: '#/components/schemas/TextAnnotationDelta'
      required:
        - index
        - type
    MessageDeltaObject:
      type: object
      title: Message delta object
      description: |
        Represents a message delta, i.e. any changed fields on a message during streaming.
      properties:
        id:
          description: The identifier of the message, which can be referenced in API endpoints.
          type: string
        object:
          description: The object type, which is always `thread.message.delta`.
          type: string
          enum:
            - thread.message.delta
          x-stainless-const: true

share/openapi.yaml

              x-stainless-const: true
            data:
              $ref: '#/components/schemas/MessageDeltaObject'
          required:
            - event
            - data
          description: >-
            Occurs when parts of a [Message](https://platform.openai.com/docs/api-reference/messages/object)
            are being streamed.
          x-oaiMeta:
            dataDescription: '`data` is a [message delta](/docs/api-reference/assistants-streaming/message-delta-object)'
        - type: object
          properties:
            event:
              type: string
              enum:
                - thread.message.completed
              x-stainless-const: true
            data:
              $ref: '#/components/schemas/MessageObject'
          required:

share/openapi.yaml

      default: true
    PartialImages:
      type: integer
      maximum: 3
      minimum: 0
      default: 0
      example: 1
      nullable: true
      description: |
        The number of partial images to generate. This parameter is used for
        streaming responses that return partial images. Value must be between 0 and 3.
        When set to 0, the response will be a single image sent in one streaming event.

        Note that the final image may be sent before the full number of partial images 
        are generated if the full image is generated more quickly.
    PredictionContent:
      type: object
      title: Static Content
      description: |
        Static predicted output content, such as the content of a text file that is
        being regenerated.
      required:

share/openapi.yaml

    RealtimeClientEventInputAudioBufferAppend:
      type: object
      description: |
        Send this event to append audio bytes to the input audio buffer. The audio 
        buffer is temporary storage you can write to and later commit. In Server VAD 
        mode, the audio buffer is used to detect speech and the server will decide 
        when to commit. When Server VAD is disabled, you must commit the audio buffer
        manually.

        The client may choose how much audio to place in each event, up to a
        maximum of 15 MiB; for example, streaming smaller chunks from the client
        may allow the VAD to be more responsive. Unlike most other client events,
        the server will not send a confirmation response to this event.
      properties:
        event_id:
          type: string
          description: Optional client-generated ID used to identify this event.
        type:
          description: The event type, must be `input_audio_buffer.append`.
          x-stainless-const: true
          const: input_audio_buffer.append
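
A sketch of sending this event over the Realtime WebSocket with
Mojo::UserAgent (the URL, model id, and beta header are assumptions drawn
from the Realtime docs; `$pcm_chunk` stands for raw PCM16 audio bytes):

    use Mojo::UserAgent;
    use Mojo::IOLoop;
    use MIME::Base64 qw(encode_base64);

    my $pcm_chunk = "\0" x 4800;    # placeholder: ~100 ms of silent 24 kHz PCM16
    my $ua = Mojo::UserAgent->new;
    $ua->websocket(
        'wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview' => {
            Authorization => "Bearer $ENV{OPENAI_API_KEY}",
            'OpenAI-Beta' => 'realtime=v1',
        } => sub {
            my ($ua, $tx) = @_;
            die 'WebSocket handshake failed' unless $tx->is_websocket;
            # Smaller, more frequent appends keep server VAD responsive
            # (each event is capped at 15 MiB).
            $tx->send({ json => {
                type  => 'input_audio_buffer.append',
                audio => encode_base64($pcm_chunk, ''),
            }});
        },
    );
    Mojo::IOLoop->start unless Mojo::IOLoop->is_running;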

share/openapi.yaml

        group: realtime
        example: |
          {
              "event_id": "event_abc123",
              "type": "output_audio_buffer.cleared",
              "response_id": "resp_abc123"
          }
    RealtimeServerEventOutputAudioBufferStarted:
      type: object
      description: >
        **WebRTC Only:** Emitted when the server begins streaming audio to the client. This event is

        emitted after an audio content part has been added (`response.content_part.added`)

        to the response.

        [Learn
        more](https://platform.openai.com/docs/guides/realtime-conversations#client-and-server-events-for-audio-in-webrtc).
      properties:
        event_id:
          type: string

share/openapi.yaml

              "response_id": "resp_001",
              "item_id": "msg_008",
              "output_index": 0,
              "content_index": 0,
              "delta": "Hello, how can I a"
          }
    RealtimeServerEventResponseAudioTranscriptDone:
      type: object
      description: |
        Returned when the model-generated transcription of audio output is done
        streaming. Also emitted when a Response is interrupted, incomplete, or
        cancelled.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          description: The event type, must be `response.audio_transcript.done`.
          x-stainless-const: true
          const: response.audio_transcript.done
        response_id:

share/openapi.yaml

              "output_index": 0,
              "content_index": 0,
              "part": {
                  "type": "text",
                  "text": ""
              }
          }
    RealtimeServerEventResponseContentPartDone:
      type: object
      description: |
        Returned when a content part is done streaming in an assistant message item.
        Also emitted when a Response is interrupted, incomplete, or cancelled.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          description: The event type, must be `response.content_part.done`.
          x-stainless-const: true
          const: response.content_part.done
        response_id:

share/openapi.yaml

                  "object": "realtime.response",
                  "status": "in_progress",
                  "status_details": null,
                  "output": [],
                  "usage": null
              }
          }
    RealtimeServerEventResponseDone:
      type: object
      description: |
        Returned when a Response is done streaming. Always emitted, no matter the 
        final state. The Response object included in the `response.done` event will 
        include all output Items in the Response but will omit the raw audio data.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          description: The event type, must be `response.done`.
          x-stainless-const: true
          const: response.done

share/openapi.yaml

              "type": "response.function_call_arguments.delta",
              "response_id": "resp_002",
              "item_id": "fc_001",
              "output_index": 0,
              "call_id": "call_001",
              "delta": "{\"location\": \"San\""
          }
    RealtimeServerEventResponseFunctionCallArgumentsDone:
      type: object
      description: |
        Returned when the model-generated function call arguments are done streaming.
        Also emitted when a Response is interrupted, incomplete, or cancelled.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          description: |
            The event type, must be `response.function_call_arguments.done`.
          x-stainless-const: true
          const: response.function_call_arguments.done

share/openapi.yaml

                  "object": "realtime.item",
                  "type": "message",
                  "status": "in_progress",
                  "role": "assistant",
                  "content": []
              }
          }
    RealtimeServerEventResponseOutputItemDone:
      type: object
      description: |
        Returned when an Item is done streaming. Also emitted when a Response is 
        interrupted, incomplete, or cancelled.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          description: The event type, must be `response.output_item.done`.
          x-stainless-const: true
          const: response.output_item.done
        response_id:

share/openapi.yaml

              "type": "response.text.delta",
              "response_id": "resp_001",
              "item_id": "msg_007",
              "output_index": 0,
              "content_index": 0,
              "delta": "Sure, I can h"
          }
    RealtimeServerEventResponseTextDone:
      type: object
      description: |
        Returned when the text value of a "text" content part is done streaming. Also
        emitted when a Response is interrupted, incomplete, or cancelled.
      properties:
        event_id:
          type: string
          description: The unique ID of the server event.
        type:
          description: The event type, must be `response.text.done`.
          x-stainless-const: true
          const: response.text.done
        response_id:

share/openapi.yaml

          type: integer
          description: The index of the output item in the response for which the code is being streamed.
        item_id:
          type: string
          description: The unique identifier of the code interpreter tool call item.
        delta:
          type: string
          description: The partial code snippet being streamed by the code interpreter.
        sequence_number:
          type: integer
          description: The sequence number of this event, used to order streaming events.
      required:
        - type
        - output_index
        - item_id
        - delta
        - sequence_number
      x-oaiMeta:
        name: response.code_interpreter_call_code.delta
        group: responses
        example: |

share/openapi.yaml

          type: integer
          description: The index of the output item in the response for which the code is finalized.
        item_id:
          type: string
          description: The unique identifier of the code interpreter tool call item.
        code:
          type: string
          description: The final code snippet output by the code interpreter.
        sequence_number:
          type: integer
          description: The sequence number of this event, used to order streaming events.
      required:
        - type
        - output_index
        - item_id
        - code
        - sequence_number
      x-oaiMeta:
        name: response.code_interpreter_call_code.done
        group: responses
        example: |

share/openapi.yaml

            - response.code_interpreter_call.completed
          x-stainless-const: true
        output_index:
          type: integer
          description: The index of the output item in the response for which the code interpreter call is completed.
        item_id:
          type: string
          description: The unique identifier of the code interpreter tool call item.
        sequence_number:
          type: integer
          description: The sequence number of this event, used to order streaming events.
      required:
        - type
        - output_index
        - item_id
        - sequence_number
      x-oaiMeta:
        name: response.code_interpreter_call.completed
        group: responses
        example: |
          {

share/openapi.yaml

            - response.code_interpreter_call.in_progress
          x-stainless-const: true
        output_index:
          type: integer
          description: The index of the output item in the response for which the code interpreter call is in progress.
        item_id:
          type: string
          description: The unique identifier of the code interpreter tool call item.
        sequence_number:
          type: integer
          description: The sequence number of this event, used to order streaming events.
      required:
        - type
        - output_index
        - item_id
        - sequence_number
      x-oaiMeta:
        name: response.code_interpreter_call.in_progress
        group: responses
        example: |
          {

share/openapi.yaml

            - response.code_interpreter_call.interpreting
          x-stainless-const: true
        output_index:
          type: integer
          description: The index of the output item in the response for which the code interpreter is interpreting code.
        item_id:
          type: string
          description: The unique identifier of the code interpreter tool call item.
        sequence_number:
          type: integer
          description: The sequence number of this event, used to order streaming events.
      required:
        - type
        - output_index
        - item_id
        - sequence_number
      x-oaiMeta:
        name: response.code_interpreter_call.interpreting
        group: responses
        example: |
          {
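
Putting the code delta and done events above together, a sketch of
accumulating streamed code per tool call (the `code` field on the done event
is the complete snippet, so it overwrites the concatenated deltas):

    my %code_so_far;    # item_id => code text accumulated from deltas

    sub on_code_interpreter_event {
        my ($ev) = @_;    # decoded JSON event payload
        if ($ev->{type} eq 'response.code_interpreter_call_code.delta') {
            $code_so_far{ $ev->{item_id} } .= $ev->{delta};
        }
        elsif ($ev->{type} eq 'response.code_interpreter_call_code.done') {
            $code_so_far{ $ev->{item_id} } = $ev->{code};    # final snippet wins
        }
    }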

share/openapi.yaml

          {
            "type": "response.image_generation_call.in_progress",
            "output_index": 0,
            "item_id": "item-123",
            "sequence_number": 0
          }
    ResponseImageGenCallPartialImageEvent:
      type: object
      title: ResponseImageGenCallPartialImageEvent
      description: |
        Emitted when a partial image is available during image generation streaming.
      properties:
        type:
          type: string
          enum:
            - response.image_generation_call.partial_image
          description: The type of the event. Always 'response.image_generation_call.partial_image'.
          x-stainless-const: true
        output_index:
          type: integer
          description: The index of the output item in the response's output array.

share/openapi.yaml

        - $ref: '#/components/schemas/ResponseMCPListToolsFailedEvent'
        - $ref: '#/components/schemas/ResponseMCPListToolsInProgressEvent'
        - $ref: '#/components/schemas/ResponseOutputTextAnnotationAddedEvent'
        - $ref: '#/components/schemas/ResponseQueuedEvent'
        - $ref: '#/components/schemas/ResponseCustomToolCallInputDeltaEvent'
        - $ref: '#/components/schemas/ResponseCustomToolCallInputDoneEvent'
      discriminator:
        propertyName: type
    ResponseStreamOptions:
      description: |
        Options for streaming responses. Only set this when you set `stream: true`.
      type: object
      nullable: true
      default: null
      properties:
        include_obfuscation:
          type: boolean
          description: |
            When true, stream obfuscation will be enabled. Stream obfuscation adds
            random characters to an `obfuscation` field on streaming delta events to
            normalize payload sizes as a mitigation to certain side-channel attacks.
            These obfuscation fields are included by default, but add a small amount
            of overhead to the data stream. You can set `include_obfuscation` to
            false to optimize for bandwidth if you trust the network links between
            your application and the OpenAI API.
    ResponseTextDeltaEvent:
      type: object
      description: Emitted when there is an additional text delta.
      properties:
        type:

share/openapi.yaml

          description: Total number of tokens used (prompt + completion).
      required:
        - prompt_tokens
        - completion_tokens
        - total_tokens
      nullable: true
    RunStepDeltaObject:
      type: object
      title: Run step delta object
      description: |
        Represents a run step delta, i.e. any changed fields on a run step during streaming.
      properties:
        id:
          description: The identifier of the run step, which can be referenced in API endpoints.
          type: string
        object:
          description: The object type, which is always `thread.run.step.delta`.
          type: string
          enum:
            - thread.run.step.delta
          x-stainless-const: true

share/openapi.yaml

              x-stainless-const: true
            data:
              $ref: '#/components/schemas/RunStepDeltaObject'
          required:
            - event
            - data
          description: >-
            Occurs when parts of a [run
            step](https://platform.openai.com/docs/api-reference/run-steps/step-object) are being streamed.
          x-oaiMeta:
            dataDescription: '`data` is a [run step delta](/docs/api-reference/assistants-streaming/run-step-delta-object)'
        - type: object
          properties:
            event:
              type: string
              enum:
                - thread.run.step.completed
              x-stainless-const: true
            data:
              $ref: '#/components/schemas/RunStepObject'
          required:

share/openapi.yaml

          path: get-item
        - type: endpoint
          key: deleteConversationItem
          path: delete-item
        - type: object
          key: Conversation
          path: object
        - type: object
          key: ConversationItemList
          path: list-items-object
    - id: responses-streaming
      title: Streaming events
      description: >
        When you [create a Response](https://platform.openai.com/docs/api-reference/responses/create) with

        `stream` set to `true`, the server will emit server-sent events to the

        client as the Response is generated. This section contains the events that

        are emitted by the server.


        [Learn more about streaming
        responses](https://platform.openai.com/docs/guides/streaming-responses?api-mode=responses).
      navigationGroup: responses
      sections:
        - type: object
          key: ResponseCreatedEvent
          path: <auto>
        - type: object
          key: ResponseInProgressEvent
          path: <auto>
        - type: object
          key: ResponseCompletedEvent
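
A sketch of requesting such a stream through the generated client (assuming
the method follows the operationId `createResponse`; the model id and input
are illustrative):

    use OpenAPI::Client::OpenAI;
    use JSON::PP;

    my $client = OpenAPI::Client::OpenAI->new;
    my $tx     = $client->createResponse({
        body => {
            model  => 'gpt-5-mini',
            input  => 'Write a haiku about streams.',
            stream => JSON::PP::true,    # server replies with the events listed here
        },
    });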

share/openapi.yaml

          path: create
        - type: endpoint
          key: createImageEdit
          path: createEdit
        - type: endpoint
          key: createImageVariation
          path: createVariation
        - type: object
          key: ImagesResponse
          path: object
    - id: images-streaming
      title: Image Streaming
      description: |
        Stream image generation and editing in real time with server-sent events.
        [Learn more about image streaming](https://platform.openai.com/docs/guides/image-generation).
      navigationGroup: endpoints
      sections:
        - type: object
          key: ImageGenPartialImageEvent
          path: <auto>
        - type: object
          key: ImageGenCompletedEvent
          path: <auto>
        - type: object
          key: ImageEditPartialImageEvent

share/openapi.yaml

          path: delete
        - type: object
          key: CreateChatCompletionResponse
          path: object
        - type: object
          key: ChatCompletionList
          path: list-object
        - type: object
          key: ChatCompletionMessageList
          path: message-list
    - id: chat-streaming
      title: Streaming
      description: |
        Stream Chat Completions in real time. Receive chunks of completions
        returned from the model using server-sent events.
        [Learn more](https://platform.openai.com/docs/guides/streaming-responses?api-mode=chat).
      navigationGroup: chat
      sections:
        - type: object
          key: CreateChatCompletionStreamResponse
          path: streaming
    - id: assistants
      title: Assistants
      beta: true
      description: |
        Build assistants that can call models and use tools to perform tasks.

        [Get started with the Assistants API](https://platform.openai.com/docs/assistants)
      navigationGroup: assistants
      sections:
        - type: endpoint

share/openapi.yaml

      sections:
        - type: endpoint
          key: listRunSteps
          path: listRunSteps
        - type: endpoint
          key: getRunStep
          path: getRunStep
        - type: object
          key: RunStepObject
          path: step-object
    - id: assistants-streaming
      title: Streaming
      beta: true
      description: >
        Stream the result of executing a Run or resuming a Run after submitting tool outputs.

        You can stream events from the [Create Thread and
        Run](https://platform.openai.com/docs/api-reference/runs/createThreadAndRun),

        [Create Run](https://platform.openai.com/docs/api-reference/runs/createRun), and [Submit Tool
        Outputs](https://platform.openai.com/docs/api-reference/runs/submitToolOutputs)

        endpoints by passing `"stream": true`. The response will be a [Server-Sent
        events](https://html.spec.whatwg.org/multipage/server-sent-events.html#server-sent-events) stream.

        Our Node and Python SDKs provide helpful utilities to make streaming easy. Reference the

        [Assistants API quickstart](https://platform.openai.com/docs/assistants/overview) to learn more.
      navigationGroup: assistants
      sections:
        - type: object
          key: MessageDeltaObject
          path: message-delta-object
        - type: object
          key: RunStepDeltaObject
          path: run-step-delta-object
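
For example, a streaming run request through the generated client might look
like this sketch (assuming the method follows the operationId `createRun`;
both ids are hypothetical):

    use OpenAPI::Client::OpenAI;
    use JSON::PP;

    my $client = OpenAPI::Client::OpenAI->new;
    my $tx     = $client->createRun({
        thread_id => 'thread_abc123',
        body      => {
            assistant_id => 'asst_abc123',
            stream       => JSON::PP::true,
        },
    });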


