Accessibility in AI: How MCP Makes Language Barriers Disappear

    Kent C. Dodds

    We're living in a time where interacting with AI is becoming more natural and accessible than ever before. The Model Context Protocol (MCP) is at the forefront of this movement—not just by making workflows easier, but by breaking down language barriers and making AI truly international.

    Let's explore how MCP's natural language interface empowers users from all backgrounds, and why this is a game-changer for accessibility and global reach.

    Why Natural Language Matters

    When you think about accessibility, you might picture screen readers or keyboard navigation. But accessibility is also about making technology usable for people from different cultures and languages. Traditional UIs—like combo boxes and dropdowns—can be tricky to use, even with libraries that aim to make them more accessible. Wouldn't it be better if users could just express what they want in their own words, in their own language?

    That's the promise of MCP: letting users interact with AI using natural language, instead of wrestling with rigid UI components.

    Internationalization, the Easy Way

    In the video above, I show a real-world example. I built an MCP server that communicates with the LLM in English. I hadn't added any translations or internationalization features. But I wondered: what if someone tried to use it in Japanese? Would the system understand?

    I don't speak Japanese, so I turned to Google Translate. I copied a prompt, translated it, and pasted it into the MCP interface. Would it recognize that I wanted to create a journal entry?

    It did! The system created an entry, complete with a title, mood, and location—all in Japanese. The LLM not only understood the intent, but also generated the appropriate content in the target language.

    The LLM as Your Translator

    This experience was eye-opening. Even though my MCP server was built for English, the LLM acted as a bridge, translating and interpreting the user's intent. A Japanese speaker could interact with the system in their own language, and the very same server would still work seamlessly for English users.

    I'm not saying you should skip adding translations to your MCP servers (yet). But it's remarkable that, thanks to the LLM's capabilities, users can often interact in the language they're most comfortable with—and still get the results they want.

    Example

    To see this in action, let's look at a simple MCP tool. Here's an example of a tool that adds a number to a running total:

    import { z } from 'zod'

    agent.server.tool(
    	'add_to_total',
    	'Add a number to the running total',
    	{
    		value: z.number().describe('The number to add'),
    	},
    	async ({ value }) => {
    		// Imagine this updates some persistent total
    		const newTotal = await agent.db.addToTotal(value)
    		return {
    			content: [
    				{
    					type: 'text',
    					text: `Added ${value} to the total. New total: ${newTotal}`,
    				},
    			],
    		}
    	},
    )

    What's remarkable is that a user could invoke this tool in any language—Japanese, Spanish, French, or any other. For example, a Japanese user might say:

    合計に5を追加して

    Or a Spanish user might say:

    Suma 5 al total

    The foundation model (LLM) understands the user's intent, regardless of the language, and calls the add_to_total tool with the correct value. The tool itself doesn't need to know anything about the user's language—it just receives the structured input and performs its task.

    This means you can write your MCP tools in your preferred language and your users can interact with them in theirs. The LLM acts as the bridge, making your tools accessible to everyone, everywhere.
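    To make this concrete, here's a sketch of what actually reaches the server. Both prompts above resolve to the same MCP tools/call request (the method name and params shape follow the MCP specification; the literal payloads here are illustrative, since the exact request the model produces can vary):

    ```typescript
    // What the client sends after the model interprets "合計に5を追加して":
    const fromJapanesePrompt = {
    	method: 'tools/call',
    	params: { name: 'add_to_total', arguments: { value: 5 } },
    }

    // What the client sends after the model interprets "Suma 5 al total":
    const fromSpanishPrompt = {
    	method: 'tools/call',
    	params: { name: 'add_to_total', arguments: { value: 5 } },
    }

    // The server never sees the user's language—only this structured input.
    const identical =
    	JSON.stringify(fromJapanesePrompt) === JSON.stringify(fromSpanishPrompt)
    console.log(identical) // true
    ```

    The translation work happens entirely upstream, in the model: by the time your tool runs, language has already been normalized into a tool name and typed arguments.
    
    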

    The Future of Accessible AI

    MCP isn't just about making workflows easier. It's about making AI accessible to everyone, everywhere—regardless of language or technical background. By leveraging natural language and the power of LLMs, we're moving toward a world where technology adapts to us, not the other way around.

    If you're excited about building inclusive, globally accessible AI apps, check out more at EpicAI.pro. Let's make sure everyone can join the conversation.


    When I talk about MCP making things more accessible, I do mean that having natural language is much more accessible than figuring out how to get the combo box to work properly and stuff. There are libraries for that, but wouldn't it be nice for users to just use natural language and they don't have to worry about those UIs. But I also am talking about accessibility from an internationalization standpoint. So let me show you what I mean. I've written this MCP server, and it communicates to the LLM using English.

    And that is just the way that I've done it. I haven't actually translated it or anything. But I thought, what would happen if I tried to create a journal entry in Japanese? Would it know what I'm trying to do? So I don't know Japanese.

    We're going to use Google to help with that. I'm going to copy this prompt. I'm going to paste it in here and let's see what happens. Is it going to identify that it's supposed to create a journal entry? Well, there we go.

    It's creating an entry and here it's got the title. It even created a mood and location and stuff for me. Let's see what it said. We'll swap these. So now this is in Japanese, and it ran the tool.

    Okay. Yeah. Okay. That looks right. So this is really cool.

    I'm not saying that you don't have to add translations to your MCP servers, but maybe we're okay that we can just write in the language that we prefer. If I were a Japanese person or spoke Japanese, I could probably write this in Japanese and still use it as an English user, which I think is a really interesting aspect of the MCP protocol, but also just the whole user interaction paradigm where the LLM is acting as our assistant and effectively as our translator here. It's pretty cool.