Prompt Engineering for Everyone: Making AI Workflows Reusable with MCP

    Kent C. Dodds

    We're entering a new era of AI interaction—one where people don't have to be "prompt engineers" to get the most out of large language models. Thanks to the Model Context Protocol (MCP), developers can now expose reusable, parameterized prompts directly to LLMs and users, making complex workflows accessible to everyone.

    Let's dive into how MCP's prompt system works, why it's a game-changer for accessibility and productivity, and how you can use it to supercharge your own apps.

    Why Prompts Matter

    If you've ever used an LLM, you know that the right prompt can make all the difference. Include too little context, and the LLM will be unable to complete the task. Include too much, and the LLM will overemphasize the wrong details. But not everyone wants to (or should have to) craft the perfect prompt every time they need to summarize a document, generate alt text, or review some code.

    Especially when you're developing a bunch of MCP tools and resources for the LLM to use, it can be really helpful to your users if you define prompts for common use cases that instruct the LLM on which tools and resources to use to accomplish the task.

    MCP's prompt capability lets you, as a developer, define a menu of reusable, structured instructions—each with a clear description and customizable arguments. Users can pick these prompts in the UI of their LLM app, fill in the blanks, and get consistent, high-quality results.

    Right now, Claude supports these via a special menu to add context, but other clients may have different ways of exposing them. Prompts are intended to be user-invoked, not automatically added to the conversation.

    How MCP Prompts Work

    With MCP, you can register prompts on your server like this:

    import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
    import { z } from 'zod'

    // The server name and version here are illustrative
    const server = new McpServer({ name: 'document-tools', version: '1.0.0' })

    server.prompt(
      'summarize_document',
      'Summarize a document for the user',
      {
        // Arguments the user fills in when invoking the prompt
        documentUrl: z
          .string()
          .url()
          .describe('The URL of the document to summarize'),
        readingLevel: z
          .enum(['child', 'teen', 'adult'])
          .describe('Target reading level for the summary'),
      },
      // The callback returns the messages to add to the conversation
      async ({ documentUrl, readingLevel }) => ({
        messages: [
          {
            role: 'user',
            content: {
              type: 'text',
              text: `Please use the load_document tool to load the document at ${documentUrl} and summarize it at a ${readingLevel} reading level.`,
            },
          },
        ],
      }),
    )

    Now, any client (including LLMs) can discover this prompt, supply a document URL and reading level, and get a perfectly tailored summary—no prompt engineering required.
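    Here's roughly what that looks like from the client side with the TypeScript MCP SDK. This is a minimal sketch: the stdio transport, the server.js entry point, and the client name are assumptions for illustration.

    import { Client } from '@modelcontextprotocol/sdk/client/index.js'
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js'

    const client = new Client({ name: 'example-client', version: '1.0.0' })
    await client.connect(
      new StdioClientTransport({ command: 'node', args: ['server.js'] }),
    )

    // Discover which prompts the server offers
    const { prompts } = await client.listPrompts()
    console.log(prompts.map(p => p.name)) // ['summarize_document']

    // Fill in the arguments and get back the fully rendered messages
    const { messages } = await client.getPrompt({
      name: 'summarize_document',
      arguments: {
        documentUrl: 'https://example.com/report.pdf',
        readingLevel: 'teen',
      },
    })

    The messages you get back are exactly what the prompt callback produced, ready to hand to the model. In practice, a client app like Claude renders this flow as UI, so your users never see the protocol calls.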

    From here, you can enhance the prompt until it's perfectly tuned for your use case, and it stays easy for users to discover and use.

    Ready to Build the Future?

    MCP prompts are about more than just saving time—they're about making powerful AI workflows available to everyone, regardless of their technical background.

    If you're excited about building the next generation of intelligent, accessible apps, sign up for updates at EpicAI.pro. Let's shape the future of AI interaction together!

    Transcript

    So prompting an AI helps a lot with the quality of what you get out, but not everybody wants to be a prompt engineer. And when we're building MCP tools, sometimes we need to tell the LLM exactly how to use those tools to accomplish a common task. That's why MCP has support for prompts built in. You, as the MCP author, can create prompts that your users can use, and those prompts can tell the LLM what to do with your MCP server. For example, here's Claude, and as of today, you can add a prompt via this little plus icon.

    So your users will be able to hit that once they have your server installed. Here we're going to go to the MCP fundamentals server, which is from my workshop. It's got some resources, which we can talk about later, and it also has prompts. There's a prompt for suggesting tags. I have a journal entry with the ID of 12, so I'm going to add this prompt, and then I'll submit it.

    Let's take a look at what that prompt says. This came from the MCP server. It says: please look up my EpicMe journal entry with ID 12 using get_entry. That's one of the tools, and I've already approved all of these tools.

    Then it says to look up the tags I have available with list_tags and suggest some new tags, and if I approve them, to create any tag that doesn't already exist and add it to my entry. And that's what it did: it went and got the entry, it listed the tags (we currently have no tags), and then it gave some suggestions of different tags that might be appropriate.

    So let's say: go ahead and create and add tags 1, 2, 7, and 9, and add a good description too. There we go. It adds descriptions for all the tags. And this is just part of the workflow of using the MCP server.

    You could even have a prompt instruct interaction between different MCP servers, which is quite nice. So if you have reusable workflows your users will want to run on a regular basis, or you want to direct the LLM to use your tools in a specific way, the prompts feature of MCP is a great way to do that.
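    If you want to build something like that tag-suggestion prompt yourself, it's registered the same way as the summarize_document example above. Here's a rough sketch: the tool names get_entry, list_tags, and create_tag come from the demo, but the prompt name, argument name, and exact wording are illustrative, not the workshop's actual code.

    // Builds on the earlier example's setup: the `z` import and the
    // McpServer instance named `server`
    server.prompt(
      'suggest_tags',
      'Suggest tags for a journal entry',
      {
        // MCP prompt arguments are strings, so the entry ID arrives as a string
        entryId: z.string().describe('The ID of the journal entry to tag'),
      },
      async ({ entryId }) => ({
        messages: [
          {
            role: 'user',
            content: {
              type: 'text',
              text: `Please look up my journal entry with ID ${entryId} using the get_entry tool, look up the tags I have available with list_tags, and suggest some new tags. If I approve them, use create_tag to create any tag that doesn't already exist and add it to my entry.`,
            },
          },
        ],
      }),
    )

    One message that names the right tools is all it takes to turn a multi-step workflow into a single click for your users.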