Run one turn of the list agent inline and stream the assistant response as text chunks.
The endpoint accepts a single user message and streams the assistant's response. The agent has access to the full list toolset: add_rows, add_column, evaluate_profiles, remove_rows, remove_column, query_sheet, get_sheet_status, review_profiles, query_leads_db, web_search. Tools are wrapped with a persistence layer so their writes commit to the database as they run (cells, columns, and rows all update in real time).
The response is a text stream, not a JSON object. The AI SDK’s toTextStreamResponse() emits UTF-8 text chunks — each chunk is a fragment of the assistant’s reply. Read chunks until the stream closes; the full assistant message is persisted server-side via the onFinish hook.
The user message is persisted synchronously before streaming begins, so if the stream fails mid-flight the user’s turn is still saved and the conversation can be resumed.
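Consuming the stream is plain reader-loop work: decode each chunk and append until the stream closes. A minimal sketch follows; the helper is generic over any `ReadableStream<Uint8Array>`, and the commented `fetch` usage below it assumes a hypothetical `/api/lists/{id}/chat` route path and token variable, neither of which is specified above.

```typescript
// Accumulate a UTF-8 text stream into one string. stream: true in
// TextDecoder.decode handles multi-byte characters split across chunks.
async function readTextStream(
  stream: ReadableStream<Uint8Array>,
): Promise<string> {
  const decoder = new TextDecoder("utf-8");
  const reader = stream.getReader();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text + decoder.decode(); // flush any buffered trailing bytes
}

// Hypothetical usage (route path and token are assumptions, not documented):
// const res = await fetch(`/api/lists/${listId}/chat`, {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${token}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify({ message: "add a Revenue column" }),
// });
// const reply = await readTextStream(res.body!); // plain text, not JSON
```

Note that the accumulated string is only the natural-language reply; the structured record of the turn comes from the messages endpoint afterwards.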
The turn runs inline in the request, with these limits:

- **Duration.** The route declares `maxDuration = 60` (seconds).
- **Steps.** The agent loop is capped (`stopWhen: stepCountIs(25)`).
- **Cancellation.** A client disconnect propagates an `AbortSignal` into the agent; the stream stops cleanly.
- **Legacy.** The `trigger/list-agent.ts` Trigger.dev task still exists but should not be invoked for new flows.
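Because cancellation is driven by the client disconnecting, aborting the fetch is enough to stop a turn. A sketch of that pattern, assuming the same hypothetical `/api/lists/{id}/chat` route path as above:

```typescript
// Start a turn and hand back a cancel handle. Aborting the fetch closes the
// connection; per the docs, the route propagates the AbortSignal into the
// agent and the stream stops cleanly.
function startTurn(listId: string, message: string, token: string) {
  const controller = new AbortController();
  // Route path is an assumption; substitute the actual chat endpoint.
  const done = fetch(`/api/lists/${listId}/chat`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ message }),
    signal: controller.signal, // controller.abort() cancels mid-stream
  });
  return { done, cancel: () => controller.abort() };
}
```

Remember that the user message is already persisted before streaming begins, so a cancelled turn still leaves a resumable transcript.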
The stream is plain UTF-8 text (no SSE `event:` lines, no JSON envelopes per chunk). If you need structured tool-call progress, read `GET /api/lists/{id}/messages` after the stream completes; the persisted assistant message contains tool calls and results.
- **Request body.** JSON: `{ message: "<user turn>" }`. An empty or missing `message` returns `400`.
- **Content type.** `text/plain; charset=utf-8`. Do NOT `JSON.parse` the body.
- **Title generation.** On the first message of a conversation, if the list's name is null, starts with "untitled", or equals "new list" (and doesn't end with `(auto)`), a background task generates a 2–4 word title and updates `lists.name`. This is fire-and-forget; don't wait for it.
- **Tools persist as they run.** If a tool call creates a column, that column is live in Supabase Realtime before the stream finishes. Clients subscribed to the list's rows/columns/cells will see updates mid-turn.
- **Concurrency.** The endpoint does not serialize turns: two overlapping calls will each run an agent and each persist assistant messages, producing an interleaved transcript. Clients should gate turns in the UI.
- **Reading the transcript.** After the stream closes, fetch `GET /api/lists/{id}/messages` to get the structured record (including tool calls and results). The stream itself is just the natural-language reply.
- **Credits and cost.** Every turn invokes the search-agent LLM plus any tool LLMs (web search, reasoning). Budget accordingly: complex turns with `web_search` can take 20–40 s and consume meaningful credits.
- **Authentication.** Send an `Authorization` header of the form `Bearer <token>`, where `<token>` is your auth token.
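The title-generation rule above is easy to get wrong at the edges, so here is a sketch of the eligibility check as a pure function. The case-insensitive and whitespace-trimming behavior is an assumption; the rules themselves (null, "untitled" prefix, "new list", "(auto)" opt-out) are from the text above.

```typescript
// Mirror of the documented auto-title eligibility rules: a null name, a name
// starting with "untitled", or exactly "new list" qualifies, unless the name
// already ends with "(auto)". Trimming/lowercasing is an assumption.
function needsGeneratedTitle(name: string | null): boolean {
  if (name === null) return true;
  const n = name.trim().toLowerCase();
  if (n.endsWith("(auto)")) return false;
  return n.startsWith("untitled") || n === "new list";
}
```

Because the task is fire-and-forget, clients should not block on the new title; it will arrive via the normal list-update channel.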
List UUID
`^([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[1-8][0-9a-fA-F]{3}-[89abAB][0-9a-fA-F]{3}-[0-9a-fA-F]{12}|00000000-0000-0000-0000-000000000000|ffffffff-ffff-ffff-ffff-ffffffffffff)$`

JSON body. The server immediately persists the user message, then begins streaming the assistant response.
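The pattern above accepts RFC 4122-style UUIDs (versions 1–8 with standard variant bits) plus the all-zero nil UUID and the all-`f` max UUID. A sketch of client-side validation before hitting the endpoint:

```typescript
// Same pattern as the path-parameter constraint: versioned UUIDs with a
// [89abAB] variant nibble, plus the nil and max UUID special cases.
const LIST_ID_PATTERN =
  /^([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[1-8][0-9a-fA-F]{3}-[89abAB][0-9a-fA-F]{3}-[0-9a-fA-F]{12}|00000000-0000-0000-0000-000000000000|ffffffff-ffff-ffff-ffff-ffffffffffff)$/;

function isValidListId(id: string): boolean {
  return LIST_ID_PATTERN.test(id);
}
```

Validating locally turns a malformed ID into an immediate client-side error instead of a round trip.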
The user message for this chat turn. Required, non-empty.
Text stream of the assistant's reply. Content-Type is `text/plain; charset=utf-8` (from the AI SDK's `toTextStreamResponse()`). Not JSON. Read chunks until the stream closes; the complete message is persisted server-side.