
Prompts

Prompts generate message sequences that can be used to start or continue LLM conversations.

Return a fixed message sequence.

```typescript
server.prompt("greet", {
  description: "Generate a greeting",
  handler: () => ({
    messages: [{
      role: "user",
      content: { type: "text", text: "Hello, how are you?" }
    }]
  })
});
```
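As a standalone sketch of what this handler produces (the type aliases and the `greetHandler` name are hypothetical; only the message shape follows the prompts/get result format shown above):

```typescript
// Sketch: the message structure a client receives from prompts/get for "greet".
type TextContent = { type: "text"; text: string };
type PromptMessage = { role: "user" | "assistant"; content: TextContent };

function greetHandler(): { messages: PromptMessage[] } {
  return {
    messages: [
      { role: "user", content: { type: "text", text: "Hello, how are you?" } },
    ],
  };
}

const greeting = greetHandler();
console.log(greeting.messages[0].content.text); // "Hello, how are you?"
```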

Validate prompt arguments before building messages.

```typescript
import { z } from "zod";

const SummarySchema = z.object({
  text: z.string(),
  length: z.enum(["short", "medium", "long"]).optional(),
});

server.prompt("summarize", {
  description: "Create a summary prompt",
  arguments: SummarySchema,
  handler: (args) => ({
    description: "Summarization prompt",
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Please summarize this text in ${args.length || "medium"} length:\n\n${args.text}`
      }
    }]
  })
});
```
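The default-length logic in the handler above can be sketched as a plain function (the `buildSummaryMessage` name and `SummaryArgs` interface are hypothetical; the string it builds matches the handler):

```typescript
// Sketch mirroring the "summarize" handler's message text, including the
// fallback to "medium" when no length argument is provided.
interface SummaryArgs {
  text: string;
  length?: "short" | "medium" | "long";
}

function buildSummaryMessage(args: SummaryArgs): string {
  return `Please summarize this text in ${args.length || "medium"} length:\n\n${args.text}`;
}

console.log(buildSummaryMessage({ text: "MCP prompts demo", length: "short" }));
// "Please summarize this text in short length:\n\nMCP prompts demo"
```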

Add `title` and `_meta` to pass additional information through `prompts/list` and `prompts/get` responses.

```typescript
server.prompt("research-assistant", {
  description: "Research assistant prompt with context",
  title: "Research Assistant",
  _meta: {
    category: "research",
    complexity: "advanced",
    estimatedTokens: 500,
  },
  arguments: [
    { name: "topic", description: "Research topic", required: true },
    { name: "depth", description: "Research depth", required: false },
  ],
  handler: (args: { topic: string; depth?: string }) => ({
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Research ${args.topic} at ${args.depth || "medium"} depth`
      }
    }],
    _meta: {
      templateVersion: "2.0",
      generated: true,
    },
  })
});
```

The `_meta` and `title` from the definition appear in `prompts/list` responses alongside the prompt's name, description, and argument declarations. Handlers can also return `_meta` in the `prompts/get` result for per-generation metadata, separate from the definition-level `_meta`.
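A sketch of the entry that `prompts/list` would carry for the definition above (the `listEntry` variable is hypothetical; the fields are taken directly from the definition):

```typescript
// Sketch: definition-level fields surfaced in a prompts/list response entry.
// Note the handler's result-level _meta (templateVersion, generated) is NOT
// here; it only appears in prompts/get results.
const listEntry = {
  name: "research-assistant",
  title: "Research Assistant",
  description: "Research assistant prompt with context",
  arguments: [
    { name: "topic", description: "Research topic", required: true },
    { name: "depth", description: "Research depth", required: false },
  ],
  _meta: {
    category: "research",
    complexity: "advanced",
    estimatedTokens: 500,
  },
};

console.log(listEntry.title); // "Research Assistant"
```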