POST /v1/chat/completions
cURL
curl --request POST \
  --url https://proxy.innk.cc/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "stream": false,
  "max_completion_tokens": 123,
  "temperature": 1,
  "top_p": 1
}
'
{
  "id": "<string>",
  "object": "<string>",
  "created": 123,
  "model": "<string>",
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "<string>",
        "content": "<string>"
      },
      "finish_reason": "<string>"
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123,
    "prompt_tokens_details": {
      "cached_tokens": 123,
      "audio_tokens": 123
    },
    "completion_tokens_details": {
      "reasoning_tokens": 123,
      "audio_tokens": 123,
      "accepted_prediction_tokens": 123,
      "rejected_prediction_tokens": 123
    }
  }
}
This document lists only the basic Chat Completions parameters. Additional OpenAI parameters are supported; for the full list, refer to the OpenAI API Reference.
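The cURL call above can be reproduced from any HTTP client. A minimal Python sketch that assembles the same request (the helper name and the placeholder token are illustrative, not part of the API):

```python
import json

def build_chat_request(api_key: str, model: str, messages: list) -> dict:
    """Assemble the URL, headers, and JSON body for a chat completion call."""
    return {
        "url": "https://proxy.innk.cc/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": messages,
            "stream": False,
            "temperature": 1,
            "top_p": 1,
        }),
    }

req = build_chat_request("<token>", "<string>",
                         [{"role": "user", "content": "Hello, how are you?"}])
```

The returned dict can be passed to any HTTP library (for example, `requests.post(req["url"], headers=req["headers"], data=req["body"])`).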

Authorizations

Authorization
string
header
required

The authentication header format is Bearer <API_KEY>, where <API_KEY> is your API token.

Body

application/json
model
string
required

The ID of the model to use.

messages
object[]
required

A list of messages comprising the conversation so far, in chat format.

stream
boolean
default:false

Whether to stream the response back incrementally.
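When `stream` is `true`, OpenAI-compatible endpoints typically return server-sent events, each a `data: {json}` line carrying a delta, terminated by `data: [DONE]`. A sketch of consuming such a stream, assuming the proxy follows that format (the sample lines below are illustrative):

```python
import json

def iter_stream_content(lines):
    """Yield incremental message content from SSE data lines."""
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]

sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
text = "".join(iter_stream_content(sample))
```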

max_completion_tokens
integer

The maximum number of tokens to generate for the completion.

temperature
number
default:1

Sampling temperature. Higher values make the output more random; lower values make it more focused and deterministic.

top_p
number
default:1

Nucleus sampling: the model considers only the tokens comprising the top `top_p` probability mass.
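A toy illustration of how the `top_p` cutoff works: keep the smallest set of highest-probability tokens whose cumulative probability reaches `top_p`, and sample only from those. The token names and probabilities below are made up for demonstration:

```python
def top_p_candidates(probs: dict, top_p: float) -> dict:
    """Return the nucleus: the smallest top-probability set reaching top_p."""
    kept, cumulative = {}, 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

candidates = top_p_candidates({"yes": 0.5, "no": 0.3, "maybe": 0.2}, top_p=0.7)
```

With `top_p=1` (the default) no tokens are excluded.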

Response

200 - application/json

Model response

id
string
required

Unique identifier for the request.

object
string
required

Object type

created
integer
required

Creation time (Unix timestamp).

model
string
required

The model that produced the completion.

choices
object[]
required

The list of completion choices generated by the model.

usage
object
required

Token usage statistics for the request.
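Pulling the useful fields out of a response shaped like the example above (the response values here are hypothetical, for illustration only):

```python
import json

response = json.loads("""
{
  "id": "chatcmpl-example",
  "object": "chat.completion",
  "created": 123,
  "model": "example-model",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Hello!"},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7}
}
""")

answer = response["choices"][0]["message"]["content"]
total = response["usage"]["total_tokens"]
```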