The text completions endpoint is a legacy OpenAI-compatible endpoint that generates a continuation of a raw text prompt rather than a structured message history. It is routed through the same provider dispatch pipeline as chat completions, so the same attestation headers and streaming format apply. For most use cases, prefer `POST /v1/chat/completions`, which is the actively developed endpoint with full tool-call and response-format support.

## Documentation Index
Fetch the complete documentation index at: https://docs.darkbloom.dev/llms.txt
Use this file to discover all available pages before exploring further.
## Authentication
All inference endpoints require a Bearer token in the `Authorization` header.

## Request
- `prompt`: the input text to complete.
- `max_tokens`: maximum number of tokens to generate; defaults to 8192 if not set.
- `temperature`: sampling temperature between 0 and 2.
- `stream`: when `true`, returns the response as SSE chunks ending with `data: [DONE]`.
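When `stream` is enabled, each SSE line carries a `data:` payload until the `data: [DONE]` terminator. A minimal sketch of consuming that framing in a shell pipeline; the chunk bodies below are placeholders (the real chunk schema is not shown on this page), and a real client would feed `curl -N` output into the same loop:

```shell
# Read SSE lines, stopping at the "data: [DONE]" terminator and stripping
# the "data: " prefix from every other event.
parse_sse() {
  while IFS= read -r line; do
    case "$line" in
      "data: [DONE]") break ;;
      "data: "*) printf '%s\n' "${line#data: }" ;;
    esac
  done
}

# Simulated stream with placeholder chunk payloads; substitute the output
# of a real `curl -N` request here.
printf 'data: {"text": "Hel"}\ndata: {"text": "lo"}\ndata: [DONE]\n' | parse_sse
```

Everything after the terminator is ignored, so a well-behaved client stops reading as soon as `data: [DONE]` arrives.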
## Example

### cURL
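A minimal request sketch. The base URL (`https://api.darkbloom.dev`), the `/v1/completions` path, and the model name `darkbloom-1` are assumptions for illustration, not values confirmed by this page; substitute your deployment's values:

```shell
# Hypothetical values: base URL, endpoint path, and model name are
# assumptions, not confirmed by this page.
payload='{
  "model": "darkbloom-1",
  "prompt": "Once upon a time",
  "max_tokens": 64,
  "temperature": 0.7
}'

# The request is only sent when an API key is configured, so the sketch
# is safe to run as-is.
if [ -n "${DARKBLOOM_API_KEY:-}" ]; then
  curl -s https://api.darkbloom.dev/v1/completions \
    -H "Authorization: Bearer ${DARKBLOOM_API_KEY}" \
    -H "Content-Type: application/json" \
    -d "$payload"
fi
```

Add `"stream": true` to the payload (and `-N` to curl) to receive the response as the SSE chunks described under the `stream` parameter.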