Ollama
Header parameters
x-api-key (string | null, optional)
Body
Ollama-compatible chat completion request.
Supports all standard Ollama parameters with explicit validation.
model (string, required): Model identifier. Available: gpt-oss:20b, gpt-oss:120b, llama3.2:3b, deepseek-r1:8b. Example: gpt-oss:120b
stream (boolean | null, optional, default: false): Enable streaming responses (not yet supported).
options (object | null, optional): Model generation options (temperature, top_k, etc.).
think (boolean | string enum | null, optional): Reasoning mode. gpt-oss: 'low'/'medium'/'high'; deepseek-r1: true/false; llama3.2: not supported. Omit to use the model default.
mode (string enum | null, optional, default: auto): Routing mode: 'auto' (intelligent), 'opengpu' (blockchain), 'direct' (low-latency).
async (boolean | null, optional, default: false): Async mode: returns task_address immediately; poll /v2/tasks/{task_address} for the result. Default is false (sync mode).
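As a sketch, a request body using the fields above could be assembled like this. Only the documented parameters are used; the messages format is assumed to follow the standard Ollama chat API, and the option values shown are illustrative:

```python
import json

# Build a chat request body for POST /v2/ollama/api/chat.
# Field names and defaults follow the parameter list above.
payload = {
    "model": "gpt-oss:120b",  # required
    "messages": [             # standard Ollama chat format (assumption)
        {"role": "user", "content": "Hello!"}
    ],
    "stream": False,          # streaming not yet supported
    "options": {"temperature": 0.7, "top_k": 40},
    "think": "low",           # gpt-oss accepts 'low'/'medium'/'high'
    "mode": "auto",           # 'auto' | 'opengpu' | 'direct'
    "async": False,           # sync mode: result returned in the response
}

body = json.dumps(payload)
```

Omitting think lets the model fall back to its default reasoning mode.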
Responses
200: Successful Response (application/json)
202: Task accepted (async mode). Poll the poll_url for status. (application/json)
422: Validation Error (application/json)
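A hedged sketch of the 202 async flow: after receiving a task_address, a client polls /v2/tasks/{task_address} until the task reaches a terminal state. The status values used here ('pending', 'completed', 'failed') are assumptions, not taken from this page; check the actual task schema:

```python
import time

def poll_task(fetch_status, interval=1.0, max_attempts=30):
    """Poll an async task until it finishes.

    fetch_status: callable returning the task's JSON dict, e.g. a
    GET on /v2/tasks/{task_address}. Terminal status names are an
    assumption about the task schema.
    """
    for _ in range(max_attempts):
        task = fetch_status()
        if task.get("status") in ("completed", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError("task did not finish in time")

# Stub standing in for a real HTTP poll of /v2/tasks/{task_address}.
responses = iter([{"status": "pending"},
                  {"status": "completed", "result": "ok"}])
done = poll_task(lambda: next(responses), interval=0.0)
```

In a real client, fetch_status would issue the HTTP GET and return the parsed JSON body.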
POST /v2/ollama/api/chat