AI actions
AI actions are opt-in, developer-defined buttons that appear next to fields ("✨ Rewrite") or in block toolbars ("✨ Translate"). They call your completion provider (OpenAI, Anthropic, a custom endpoint — your choice) and patch the document with the result.
Quickstart
```tsx
import { Blok, defineAiAction } from "@useblok/core";
import type { AiCompletionProvider } from "@useblok/core";

const aiProvider: AiCompletionProvider = {
  async complete({ prompt, signal }) {
    const res = await fetch("/api/ai/complete", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
      signal, // forward the cancel signal so in-flight requests can be aborted
    });
    const { text } = await res.json();
    return { text };
  },
};

const actions = [
  defineAiAction({
    kind: "field",
    id: "shorten",
    label: "Shorten",
    appliesTo: (field) => field.type === "textarea" || field.type === "richtext",
    run: async (ctx) => {
      const { text } = await ctx.ai.complete({
        prompt: `Shorten this without losing meaning:\n\n${ctx.value}`,
      });
      ctx.setValue(text);
    },
  }),
];

<Blok config={config} aiProvider={aiProvider} aiActions={actions} />
```

Field actions
Surface a "✨" menu next to matching fields:
```tsx
defineAiAction({
  kind: "field",
  id: "generate-subtitle",
  label: "Generate subtitle",
  appliesTo: (field, block) =>
    block.type === "Hero" && field.name === "subtitle",
  run: async (ctx) => {
    const { text } = await ctx.ai.complete({
      prompt: `Write a one-sentence subtitle for a hero with title: "${ctx.block.props.title}"`,
    });
    ctx.setValue(text);
  },
});
```

AiFieldActionContext
```ts
interface AiFieldActionContext<V = unknown> {
  // Field being acted on
  fieldName: string;
  field: Field;
  value: V;
  setValue: (next: V) => void;

  // The containing block (for cross-field prompts)
  block: BlockInstance;

  // The completion provider registered on <Blok>
  ai: AiCompletionProvider;

  // Cancel signal — plumbed through to fetch
  signal: AbortSignal;
}
```

Block actions
Surface a "✨" menu in the block’s floating toolbar — these operate on the whole block, not a single field:
```tsx
defineAiAction({
  kind: "block",
  id: "translate-block",
  label: "Translate to Spanish",
  appliesTo: (block) => block.type === "Hero",
  run: async (ctx) => {
    const { text } = await ctx.ai.complete({
      prompt: `Translate to Spanish:\n\n${JSON.stringify(ctx.block.props)}`,
    });
    // Assumes the model returns valid JSON; wrap in try/catch in production
    const next = JSON.parse(text);
    ctx.setBlockProps(next);
  },
});
```

AiBlockActionContext
```ts
interface AiBlockActionContext {
  block: BlockInstance;
  setBlockProps: (patch: Record<string, unknown>) => void;
  ai: AiCompletionProvider;
  signal: AbortSignal;
}
```

The completion provider
AiCompletionProvider is the pluggable backend. It has a single required method:
```ts
interface AiCompletionProvider {
  complete(req: AiCompletionRequest): Promise<AiCompletionResult>;
}

interface AiCompletionRequest {
  prompt: string;
  model?: string;
  maxTokens?: number;
  temperature?: number;
  signal?: AbortSignal;
}

interface AiCompletionResult {
  text: string;
}
```

Mock provider (for demos)
```ts
const mockProvider: AiCompletionProvider = {
  async complete({ prompt }) {
    await new Promise((r) => setTimeout(r, 500));
    return { text: `[mock] Response to: ${prompt.slice(0, 60)}` };
  },
};
```

Proxying to OpenAI
Proxy through your own server — never ship an API key to the browser:
```ts
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { prompt, model = "gpt-4o-mini" } = await req.json();
  const res = await openai.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  // content can be null on some responses, so fall back to an empty string
  return Response.json({ text: res.choices[0].message.content ?? "" });
}
```

Don’t pass an AiCompletionProvider that calls an LLM directly from the browser with an API key — it leaks the key. Always go through your own backend.
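Because the provider is a plain object, cross-cutting concerns can be layered around any backend. A sketch of a per-request timeout wrapper — `withTimeout` is a hypothetical helper, not part of @useblok/core, and the inline interfaces just mirror the types documented above. It assumes a runtime with `AbortSignal.timeout` and `AbortSignal.any` (Node 20+, recent browsers):

```typescript
// Hypothetical helper (not part of @useblok/core): wraps any
// AiCompletionProvider so each request aborts after `ms` milliseconds,
// merging the host's cancel signal (when present) with the timeout.
interface AiCompletionRequest {
  prompt: string;
  signal?: AbortSignal;
}
interface AiCompletionResult {
  text: string;
}
interface AiCompletionProvider {
  complete(req: AiCompletionRequest): Promise<AiCompletionResult>;
}

function withTimeout(provider: AiCompletionProvider, ms: number): AiCompletionProvider {
  return {
    complete(req) {
      const timeout = AbortSignal.timeout(ms);
      // Respect an existing cancel signal if the host passed one
      const signal = req.signal ? AbortSignal.any([req.signal, timeout]) : timeout;
      return provider.complete({ ...req, signal });
    },
  };
}
```

You could then register it as `aiProvider={withTimeout(aiProvider, 8000)}`; the inner provider sees a signal that fires on either user cancel or timeout.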
Settings
Users can toggle AI on/off and pick a scope (field-level, block-level,
or both) from the Settings modal → AI tab. These settings are stored
in localStorage and respected by the UI automatically.
Read or write them programmatically:
```tsx
import { useAiSettings, useSettingsActions } from "@useblok/core";

function AiToggle() {
  const { enabled } = useAiSettings();
  const { setAi } = useSettingsActions();
  return (
    <button onClick={() => setAi({ enabled: !enabled })}>
      AI is {enabled ? "on" : "off"}
    </button>
  );
}
```

Streaming
The current AiCompletionProvider API is non-streaming — you get the
full text back at once. Streaming is on the roadmap; for now, keep
individual prompts short and responsive (under ~8s to first byte).
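Until streaming lands, a streaming backend can still sit behind the non-streaming contract by accumulating chunks before resolving. A sketch under one assumption — that your backend exposes the response as an async iterable of text chunks (how you obtain that iterable is up to you):

```typescript
// Sketch: adapt a chunked/streaming source to the current non-streaming
// AiCompletionProvider contract by concatenating chunks into one result.
// The async-iterable source is an assumption; swap in however your
// backend exposes its stream.
interface AiCompletionResult {
  text: string;
}

async function accumulate(chunks: AsyncIterable<string>): Promise<AiCompletionResult> {
  let text = "";
  for await (const chunk of chunks) {
    text += chunk; // a progress UI could also read partial text here
  }
  return { text };
}
```

Inside your provider's `complete()`, return `accumulate(stream)`; the editor still receives the full text at once, but your server keeps its streaming path intact for when streaming support ships.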