POST /v1/messages
cURL
curl --request POST \
  --url https://proxy.innk.cc/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "claude-sonnet-4-5-20250929",
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "max_tokens": 4096,
  "stream": false,
  "temperature": 1,
  "top_k": 5,
  "top_p": 0.7
}
'
{
  "content": [
    {
      "text": "<string>",
      "type": "<string>"
    }
  ],
  "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
  "model": "<string>",
  "role": "<string>",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "type": "message",
  "usage": {
    "input_tokens": 123,
    "output_tokens": 123
  }
}
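The `content` field of the response is a list of typed blocks. As a minimal sketch (assuming the same text-block shape as Anthropic's Messages API shown above), the assistant's text can be extracted from a parsed response like this:

```python
def extract_text(response: dict) -> str:
    """Concatenate the text of all text-type content blocks."""
    return "".join(
        block["text"]
        for block in response.get("content", [])
        if block.get("type") == "text"
    )

# Example using the response shape documented above (hypothetical text value)
resp = {
    "content": [{"text": "Hello! I'm doing well.", "type": "text"}],
    "stop_reason": "end_turn",
    "type": "message",
}
print(extract_text(resp))  # -> Hello! I'm doing well.
```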
This document covers only the basic features of the Anthropic Messages API for Claude models. Additional parameters are supported; for details, refer to the Anthropic API Reference.

Authorizations

Authorization
string
header
required

The authentication header format is Bearer <API_KEY>, where <API_KEY> is your API token.
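As a sketch using only the Python standard library, the authorization header and request can be assembled like this (the API key is a placeholder; the endpoint URL is the one documented above):

```python
import json
import urllib.request

def build_headers(api_key: str) -> dict:
    """Headers required by the proxy: Bearer auth plus JSON content type."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def send_message(api_key: str, payload: dict) -> dict:
    """POST a Messages request to the proxy and return the parsed JSON body."""
    req = urllib.request.Request(
        "https://proxy.innk.cc/v1/messages",
        data=json.dumps(payload).encode("utf-8"),
        headers=build_headers(api_key),
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())
```

A call would then pass the documented body fields, e.g. `send_message(key, {"model": "claude-sonnet-4-5-20250929", "messages": [...], "max_tokens": 4096})`.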

Headers

anthropic-version
string
default:2023-06-01

Anthropic API version; only applies when using a Claude model

anthropic-beta
string[]

Anthropic API beta feature flags; only apply when using a Claude model

Body

application/json
model
string
required
Example:

"claude-sonnet-4-5-20250929"

messages
object[]
required
max_tokens
integer
default:4096
required

Maximum number of tokens to generate before stopping

stream
boolean
default:false

Whether to stream the response incrementally
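When `stream` is `true`, the response is assumed to arrive as server-sent events in the same format as Anthropic's streaming API, i.e. lines of the form `data: {...}`. A minimal parser sketch for such a stream:

```python
import json

def iter_sse_json(lines):
    """Yield the JSON payload of each `data:` line, skipping blanks,
    `event:` lines, and a terminal `[DONE]` marker."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            return
        yield json.loads(data)

# Example against a hypothetical captured stream
sample = [
    "event: message_start",
    'data: {"type": "message_start"}',
    "",
    'data: {"type": "content_block_delta", "delta": {"text": "Hi"}}',
]
events = list(iter_sse_json(sample))
```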

temperature
number
default:1

Sampling temperature; controls the randomness of the output

top_k
integer

Top-K sampling: sample only from the K most likely next tokens

Example:

5

top_p
number

Nucleus (top-p) sampling

Example:

0.7

Response

200 - application/json

Model response

content
object[]
required
id
string
required
Example:

"msg_013Zva2CMHLNnXjNJJKqJ2EF"

model
string
required
role
string
required
stop_reason
enum<string>
required

Reason the model stopped. This may be one of the following values:

  • "end_turn": the model reached a natural stopping point
  • "max_tokens": the requested max_tokens value or the model's maximum was exceeded
  • "stop_sequence": one of your provided custom stop_sequences was generated
  • "tool_use": the model invoked one or more tools
  • "pause_turn": a long-running turn was paused; you may pass the response back as-is in a subsequent request to let the model continue
  • "refusal": the streaming classifier intervened to handle a potential policy violation

In non-streaming mode, this value is never null. In streaming mode, it is null in the message_start event and non-null otherwise.

Available options:
end_turn,
max_tokens,
stop_sequence,
tool_use,
pause_turn,
refusal
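One way to branch on `stop_reason` client-side; this is a sketch based on the values listed above, with coarse next actions chosen for illustration:

```python
def handle_stop_reason(response: dict) -> str:
    """Map each documented stop_reason value to a coarse next action."""
    reason = response.get("stop_reason")
    if reason in ("end_turn", "stop_sequence"):
        return "done"
    if reason == "max_tokens":
        return "truncated"   # consider retrying with a larger max_tokens
    if reason == "tool_use":
        return "run_tools"   # execute the requested tools, then continue
    if reason == "pause_turn":
        return "resume"      # send the response back as-is to continue the turn
    if reason == "refusal":
        return "refused"
    return "unknown"
```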
stop_sequence
null
required

The custom stop sequence that was generated, if any.

If the model stopped because it produced one of your custom stop_sequences, this value is that non-empty string; otherwise it is null.

Example:

null

type
string
required
Example:

"message"

usage
object
required

Token usage for the request: input_tokens and output_tokens