POST /chat/completions
curl -X POST https://api.gravixlayer.com/v1/inference/chat/completions \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $GRAVIXLAYER_API_KEY" \
  -d '{
    "model": "meta-llama/llama-3.1-8b-instruct",
    "messages": [
      {
        "role": "user",
        "content": "Hello! Tell me about AI."
      }
    ]
  }'
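The same request can be made without the `curl` CLI. The sketch below builds the equivalent HTTP request in Python using only the standard library; the endpoint URL, model name, and header names are taken from the example above, while the `build_request` helper itself is hypothetical.

```python
import json
import os
import urllib.request

API_URL = "https://api.gravixlayer.com/v1/inference/chat/completions"

def build_request(prompt, model="meta-llama/llama-3.1-8b-instruct"):
    """Assemble a POST request mirroring the curl example above."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Reads the same environment variable the curl example uses.
            "Authorization": f"Bearer {os.environ.get('GRAVIXLAYER_API_KEY', '')}",
        },
        method="POST",
    )

# To actually send it: urllib.request.urlopen(build_request("Hello! Tell me about AI."))
```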
{
  "data": {
    "id": "<string>",
    "object": "chat.completion",
    "created": 123,
    "model": "<string>",
    "choices": [
      {
        "message": {
          "role": "assistant",
          "content": "<string>"
        },
        "index": 123,
        "finish_reason": "<string>"
      }
    ]
  }
}
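Given the response shape above, the assistant's text lives at `data.choices[0].message.content`. A minimal extraction sketch, assuming the top-level `data` wrapper shown in the example:

```python
def extract_reply(response: dict) -> str:
    """Pull the assistant's text out of a chat completion response.

    Assumes the response envelope documented above: a top-level "data"
    object containing a "choices" array of message objects.
    """
    choice = response["data"]["choices"][0]
    return choice["message"]["content"]
```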

Authorizations

Authorization
string
header
required

API key authentication. Get your API key from the Gravix Layer Dashboard.

Body

application/json
model
string
required

Model identifier

Example:

"meta-llama/llama-3.1-8b-instruct"

messages
object[]
required

Messages to generate the completion from

temperature
number
default:1

Sampling temperature

Required range: 0 <= x <= 2
max_tokens
integer

Maximum tokens to generate

Required range: x >= 1
stream
boolean
default:false

Whether to stream the response
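When `stream` is `true`, OpenAI-compatible APIs typically return Server-Sent Events, one `data: <json>` line per chunk, terminated by `data: [DONE]`. This page documents only the flag itself, so that wire format is an assumption; under it, a client-side line parser might look like:

```python
import json

def parse_sse_chunks(lines):
    """Yield decoded JSON chunks from an SSE stream.

    Assumes OpenAI-style framing ("data: <json>" lines ending with
    "data: [DONE]"); the page above only documents the `stream` flag.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        yield json.loads(payload)
```

In practice `lines` would be the decoded lines of the HTTP response body, consumed as they arrive.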

Response

Chat completion response

data
object