SSE endpoint for search chat. Executes the full Chat-Search generation pipeline and streams the response to the client as Server-Sent Events.
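A minimal sketch of consuming such an SSE stream, using only the standard library. The event payload shape (a JSON object with a delta field and a terminating [DONE] sentinel) is an assumption for illustration; the actual event format for this endpoint may differ, so consult your tenant's Genow API reference.

```python
import json

def parse_sse(raw: str):
    """Yield the data payload of each event in a raw SSE response body."""
    for block in raw.split("\n\n"):          # events are separated by blank lines
        for line in block.splitlines():
            if line.startswith("data:"):      # only 'data:' fields carry payload
                yield line[len("data:"):].strip()

# Example: assemble the streamed answer from a captured SSE body.
# The {"delta": ...} shape and the [DONE] sentinel are hypothetical.
sample = (
    'data: {"delta": "Hello"}\n\n'
    'data: {"delta": ", world"}\n\n'
    "data: [DONE]\n\n"
)

answer = "".join(
    json.loads(chunk)["delta"]
    for chunk in parse_sse(sample)
    if chunk != "[DONE]"
)
print(answer)  # Hello, world
```

In a real client you would read the response body incrementally (e.g. line by line over HTTP) rather than from a string, but the event parsing logic is the same.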
Bearer token authentication. Format: 'Bearer <token>'
The user's message to the assistant. Server-side normalization may adjust curly braces in the text. For document chat, images already linked to the chat are appended to the prompt automatically.
Chat model identifier (matched case-insensitively; stored lowercase). The model must be enabled for your deployment; use the Genow models API to list valid chat_model_name values. For example, Gemini Pro is often exposed as gemini-2.5-pro via LiteLLM; exact ids depend on your tenant configuration.
Identifier of the conversation. The server creates the chat on first use if it does not exist yet. Document context is whatever is attached to this chat in Genow (uploaded files).
Client-generated unique id for this question/answer turn. It is persisted with the stored chat entry and should be unique within the chat.
Optional locale (ISO 639-1 code) for system prompt and templating, e.g. en or de. Omit to use server defaults.
Allowed values: bg, cs, da, de, el, en, es, et, fi, fr, hr, hu, is, it, ja, ko, lt, lv, nb, nl, pl, pt, ro, ru, sk, sl, sr, sv, tr, zh
Optional hint that this is the first user message in a new chat (used by Genow clients for titling and analytics). Document chat retrieval does not depend on this flag.
Optional IANA timezone name (e.g. Europe/Berlin, America/New_York) used to inject the current local date/time into the system prompt.
Optional OpenAI/LiteLLM-style response_format for the final answer model call (e.g. type json_schema with json_schema.name, json_schema.schema, json_schema.strict). Forwarded to the LLM as-is. For type json_schema, json_schema.schema must be valid JSON Schema Draft 2020-12.
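An illustrative response_format payload for requesting a structured final answer. Only the outer shape (type, json_schema.name, json_schema.schema, json_schema.strict) follows the field description above; the schema itself (name search_answer, fields answer and sources) is hypothetical.

```python
import json

# Hypothetical schema for a structured search answer; the outer shape follows
# the OpenAI/LiteLLM-style response_format convention described above.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "search_answer",
        "strict": True,
        "schema": {
            "$schema": "https://json-schema.org/draft/2020-12/schema",
            "type": "object",
            "properties": {
                "answer": {"type": "string"},
                "sources": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["answer", "sources"],
            "additionalProperties": False,
        },
    },
}

print(json.dumps(response_format, indent=2))
```

Because the value is forwarded to the LLM as-is, any mistake in the schema (e.g. a draft other than 2020-12 when type is json_schema) surfaces as a model-call error rather than a request validation error.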
Successful Response