Kick off one conversation turn with the framework LLM. Async via Trigger.dev — returns a run id you poll elsewhere.
The endpoint enqueues the `framework-chat-turn` Trigger.dev task, which reads the session's prior messages and skeleton, appends the user's new message, and iterates with the framework LLM. Over the course of the task, the session row is updated with:
- `messages`
- `skeleton` (once enough context is gathered)
- `draft_subject` / `draft_body`, reflecting the current draft
- `framework_version`, incremented on skeleton revisions
- `runId` (and a `publicAccessToken` for the Trigger.dev realtime SDK), set as soon as the task is enqueued

Observe progress by subscribing to the run via the realtime SDK, or by polling `GET /api/campaigns/ai-chat-sessions/{id}` until `task_status` returns to null.
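The polling path can be sketched as a small helper. This is an illustrative sketch, not part of the API: the session shape is trimmed to the fields named above, and the fetcher is injected so the loop is easy to exercise without a network.

```typescript
// Minimal polling sketch: repeatedly fetch the session until task_status
// returns to null, then resolve with the latest row. The Session type and
// helper name are illustrative assumptions, not part of the API surface.
type Session = { task_status: string | null; draft_subject?: string; draft_body?: string };

async function waitForTurn(
  fetchSession: () => Promise<Session>,
  { intervalMs = 3000, maxAttempts = 40 } = {},
): Promise<Session> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const session = await fetchSession();
    if (session.task_status === null) return session; // turn finished
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // pause between polls
  }
  throw new Error("timed out waiting for task_status to clear");
}
```

In a real caller, `fetchSession` would wrap `fetch` against `GET /api/campaigns/ai-chat-sessions/{id}` with the Bearer header and return the parsed JSON body.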
The handler claims the session by setting `task_status: "chat_processing"` before dispatching. If another turn (or a `/generate-examples` run) is already in flight, the update finds `task_status !== null`, fails to claim, and the handler returns 409. Wait for `task_status` to clear, then retry.
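The claim semantics can be modeled like this. This is a sketch under assumptions: `tryClaim` and `releaseClaim` are hypothetical names, and an in-memory row stands in for what is presumably a conditional database update.

```typescript
// Sketch of claim-before-dispatch. tryClaim plays the role of a conditional
// update (set task_status only where it is currently null); the row type and
// both function names are illustrative, not part of the real handler.
type Claimable = { task_status: string | null };

function tryClaim(row: Claimable): boolean {
  if (row.task_status !== null) return false; // another task in flight; handler answers 409
  row.task_status = "chat_processing"; // claimed; safe to dispatch the Trigger.dev task
  return true;
}

function releaseClaim(row: Claimable): void {
  row.task_status = null; // task finished, or dispatch failed and the server released it
}
```

The release on dispatch failure is why a 500 needs no manual cleanup: the claim is already back to null when the error reaches the caller.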
The first user message typically states the goal (e.g. "Write an intro email for enterprise VPs of Sales, emphasizing time-to-ROI"). The assistant usually asks clarifying questions on turn 1, so there is no skeleton yet; later turns populate `skeleton`, `draft_subject`, and `draft_body`.

- `availableVarNames`, `sampleLeads`, and `sequenceContext` materially improve output quality; send them on every turn if the caller knows them.
- Rate limit: one active task per session. A 409 means either a prior turn or a `/generate-examples` run is still going. The minimal retry is to poll `GET .../{id}` every 2–5 s until `task_status === null`.
- On 500 during dispatch: the server releases the claim before returning 500, so the session is immediately retryable. No manual `task_status` reset is needed.
- Credits: chat turns consume LLM credits on the server. Budget roughly 1–3 cents per turn depending on transcript length.

Authentication uses a Bearer header of the form `Bearer <token>`, where `<token>` is your auth token.
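A request for one turn can be assembled as follows. The field names and the Bearer header come from this page; the builder function, its parameters, and the hint types are illustrative assumptions.

```typescript
// Build the POST options for one chat turn. message is required and
// non-empty; optional hints are forwarded verbatim when the caller knows
// them. buildTurnRequest is a hypothetical helper, not part of the API.
interface TurnHints {
  availableVarNames?: string[]; // whitelist of template variables
  sampleLeads?: unknown[]; // representative leads for grounding
  sequenceContext?: unknown; // where this email sits in the sequence
}

function buildTurnRequest(token: string, message: string, hints: TurnHints = {}) {
  if (!message.trim()) throw new Error("message is required and non-empty");
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`, // Bearer auth as described above
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ message, ...hints }),
  };
}
```

Pass the result to `fetch` against the chat-turn endpoint, then poll the session or subscribe to the run as described above.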
AI chat session UUID
`^([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[1-8][0-9a-fA-F]{3}-[89abAB][0-9a-fA-F]{3}-[0-9a-fA-F]{12}|00000000-0000-0000-0000-000000000000|ffffffff-ffff-ffff-ffff-ffffffffffff)$`

The single required field is `message`. The rest are optional hints that let the framework LLM produce more relevant output; pass them when available.
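The pattern accepts UUID versions 1–8 plus the all-zero and all-ones sentinel values. A quick client-side pre-check using the same pattern (the constant and helper names are illustrative):

```typescript
// Client-side session-id check mirroring the server's pattern above.
const SESSION_ID_RE =
  /^([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[1-8][0-9a-fA-F]{3}-[89abAB][0-9a-fA-F]{3}-[0-9a-fA-F]{12}|00000000-0000-0000-0000-000000000000|ffffffff-ffff-ffff-ffff-ffffffffffff)$/;

function isValidSessionId(id: string): boolean {
  return SESSION_ID_RE.test(id);
}
```

Validating before the request avoids a round trip for an id that can never resolve to a session.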
The user's chat message for this turn. Required, non-empty.
Whitelist of template variable names (e.g. `first_name`, `company`) the LLM is allowed to emit. Constrains the output to variables the renderer can resolve.
Representative leads from the campaign, used to ground the LLM's examples.
Context about where this email sits in the campaign sequence — helps the LLM write follow-ups that acknowledge prior touches.
Task dispatched. Poll the session or subscribe to the Trigger.dev run for the actual response.
Trigger.dev run id. Use with the Trigger.dev realtime SDK (plus publicAccessToken) to stream the chat response, or poll GET /api/campaigns/ai-chat-sessions/{id} until task_status returns to null.
Short-lived Trigger.dev public access token scoped to this run. Pass to the realtime SDK in the browser to subscribe to ChatStreamEvent updates without exposing the server token.