
MCP for Translation: How Model Context Protocol is Changing i18n Automation

Deep dive into Model Context Protocol (MCP) for translation workflows. Learn how MCP enables AI assistants to manage translations, connect to TMS platforms, and automate localization.

IntlPull Team
Engineering
February 10, 2025 · 15 min read

The missing link in AI-powered translation

For years, AI could translate text beautifully but couldn't do anything with the result. You'd get a translation from Claude or GPT, then manually copy it into your translation files, push it to your TMS, update your database. The AI was smart, but it had no hands.

Model Context Protocol changes that. MCP is Anthropic's open standard for giving AI models the ability to interact with external systems—file systems, databases, APIs, and yes, translation management systems.

When I first connected IntlPull's MCP server to Claude, the experience was genuinely different. I didn't just ask "translate this to Spanish." I said "add the missing Spanish translations to my project and push them." And it did. No copy-paste. No context switching. Just done.

This guide explains what MCP is, how it works for translation, and how to set it up for your i18n workflow.

What exactly is MCP?

Model Context Protocol is a specification for how AI models can interact with external tools. Think of it as a standardized API for AI capabilities.

Before MCP, every AI tool integration was custom. OpenAI had function calling. LangChain had its tools abstraction. Each vendor had their own approach.

MCP provides a common language. An MCP server exposes capabilities, and any MCP-compatible client (Claude Desktop, Claude Code, Cursor, etc.) can use them.

For translation, this means you write one MCP server for your TMS, and it works everywhere MCP is supported.
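Under the hood, that common language is JSON-RPC 2.0: the client discovers tools and invokes them with standard messages. A tool invocation looks like this (the tool name and arguments here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "translate_missing",
    "arguments": { "locale": "es", "limit": 20 }
  }
}
```

Any server that answers messages like this works with any client that sends them, which is the whole point of the standard.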

MCP architecture for translation

The architecture has three parts:

1. MCP Client (the AI)

Claude Desktop, Claude Code, Cursor, or any MCP-enabled application. This is where you interact with the AI.

2. MCP Server (the integration)

A service that exposes tools to the AI. For translation, this connects to your TMS and provides operations like list projects, get translation status, create translation keys, update translations, and search translation memory.

3. Transport Layer

How the client and server communicate. Options include stdio (standard input/output) for local servers, HTTP/SSE for remote servers, and WebSocket for bidirectional streaming.

Most translation MCP servers use stdio for local development and HTTP for team-wide access.

Setting up a translation MCP server

Let's walk through setting up IntlPull's MCP server as an example.

Step 1: Install the server

Run:

```shell
npm install -g @intlpull/mcp
```

Step 2: Configure credentials

Create or update your IntlPull config by running intlpull login, or manually set credentials in .intlpull.json.

Step 3: Configure your MCP client

For Claude Desktop, edit the config file at ~/Library/Application Support/Claude/claude_desktop_config.json. Add an mcpServers object with an "intlpull" entry that has command "npx", args ["-y", "@intlpull/mcp"], and env with your INTLPULL_API_KEY.
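Assembled from those fields, the entry looks like this (substitute your own API key):

```json
{
  "mcpServers": {
    "intlpull": {
      "command": "npx",
      "args": ["-y", "@intlpull/mcp"],
      "env": {
        "INTLPULL_API_KEY": "your-api-key"
      }
    }
  }
}
```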

For Claude Code, edit ~/.claude/settings.json similarly. Add mcpServers with intlpull command and args.

For Cursor, add the server in the MCP Servers section of settings with the same structure.

Step 4: Verify connection

Open your MCP client and try: "List my IntlPull projects"

If you see your projects, you're connected.

Available translation operations via MCP

A well-designed translation MCP server exposes operations like:

Project management:

  • list_projects: get all translation projects
  • get_project: get project details and stats
  • switch_project: change the active project context

Key management:

  • list_keys: list translation keys with filtering
  • create_key: create a new translation key
  • update_key: modify key metadata
  • delete_key: remove a key
  • search_keys: full-text search across keys

Translation operations:

  • get_translations: get translations for a key
  • update_translation: set a translation value
  • translate_missing: AI translation of missing content
  • bulk_translate: translate multiple keys

Status and reporting:

  • get_status: translation coverage by language
  • get_missing: keys without translations
  • get_needs_review: translations flagged for review

Translation memory:

  • search_tm: search translation memory
  • add_to_tm: add a translation pair to memory

Real workflows with MCP

Here are actual things I do with MCP-connected translation:

Workflow 1: End-of-day translation sync

Me: "Show me what keys were added today that don't have translations yet"

Claude uses MCP to query recently created keys, filters for missing translations, shows summary.

Me: "Translate those to Spanish and German, mark them for review"

Claude uses MCP to translate each key, sets status to needs_review, reports results.

Total time: maybe 2 minutes. Before MCP, this would have meant opening the TMS dashboard, filtering for new keys, running translations, and updating statuses manually.

Workflow 2: Pre-release check

Me: "Check if we're ready to release in Japanese. Any missing or unreviewed translations?"

Claude queries project status for Japanese, lists any gaps.

Me: "Those 3 missing keys look minor. Translate them and mark as ready."

Claude translates, updates status, confirms.

Workflow 3: Extracting strings from new code

Me: "I just wrote this component. Extract the hardcoded strings, create keys in IntlPull, and update the component to use t()."

Claude identifies strings, generates key names, creates keys via MCP, rewrites component, shows diff.

This is the killer workflow. The AI sees the code, understands the context, creates properly-named keys in the TMS, and refactors the code—all in one interaction.
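The refactor step looks roughly like the sketch below. The t() function is stubbed in for illustration (in a real app it comes from your i18n library), and the key name settings.saveChanges is hypothetical:

```typescript
// Stand-in t() for illustration; in a real app this comes from your
// i18n library, backed by the catalog the MCP server just updated.
const messages: Record<string, string> = {
  "settings.saveChanges": "Save changes", // hypothetical key created via MCP
};
const t = (key: string): string => messages[key] ?? key;

// Before: hardcoded string in the component
function saveButtonLabelBefore(): string {
  return "Save changes";
}

// After: the AI creates the key via MCP and rewrites the call site
function saveButtonLabelAfter(): string {
  return t("settings.saveChanges");
}
```

The user-visible text is unchanged; the string now lives in the translation catalog instead of the component.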

Workflow 4: Translation consistency check

Me: "Check if 'Submit' is translated consistently across all namespaces"

Claude searches for keys containing 'submit', compares translations, reports inconsistencies.

Me: "Standardize them all to use 'Enviar' in Spanish"

Claude updates all relevant translations via MCP.

Building custom translation MCP servers

If your TMS doesn't have an MCP server, you can build one. The basic structure in TypeScript involves importing McpServer from the MCP SDK, creating a server instance with name and version, then defining tools.

Each tool has a name like 'list_keys', a description, a schema defining parameters (with types and descriptions), and an async handler function that calls your TMS client and returns results.

Finally, you create a transport (usually StdioServerTransport) and connect the server to it.
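The tool shape can be sketched without the SDK dependency. In the real server you would import McpServer and StdioServerTransport from @modelcontextprotocol/sdk and register each tool on the server; this dependency-free sketch shows just the pattern (name, description, async handler returning content) against an in-memory stand-in for a TMS client. All names and data here are illustrative:

```typescript
// The result shape MCP tools return: a list of content parts.
type ToolResult = { content: { type: "text"; text: string }[] };

interface Tool<Args> {
  name: string;
  description: string;
  handler: (args: Args) => Promise<ToolResult>;
}

// In-memory stand-in for a real TMS client (hypothetical data).
const keys = new Map<string, Record<string, string>>([
  ["home.title", { en: "Welcome", es: "Bienvenido" }],
  ["home.cta", { en: "Get started" }],
]);

const listKeys: Tool<{ missingIn?: string }> = {
  name: "list_keys",
  description: "List translation keys, optionally only those missing a locale",
  handler: async ({ missingIn }) => {
    const result = [...keys.entries()]
      .filter(([, translations]) => !missingIn || !(missingIn in translations))
      .map(([key]) => key);
    return { content: [{ type: "text", text: JSON.stringify(result) }] };
  },
};
```

With the actual SDK the handler body stays the same: you register it through the server's tool-registration API, then connect a StdioServerTransport so the client can reach it.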

Key design considerations:

  • Be specific with tool names: use create_translation_key, not just create.
  • Include validation: check inputs before making API calls.
  • Handle errors gracefully: return helpful error messages.
  • Support pagination: large key lists need pagination.
  • Include context: return enough data for the AI to make decisions.
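As a sketch of the validation and error-message points, here is input checking for a hypothetical create_translation_key tool. The key-name rules are illustrative, not IntlPull's actual ones; the useful part is returning a descriptive message the AI can read and act on, rather than throwing an opaque error:

```typescript
// Illustrative key-name rules: lowercase alphanumeric segments
// separated by ".", "_" or "-".
const KEY_PATTERN = /^[a-z0-9]+(?:[._-][a-z0-9]+)*$/;

// Returns null when valid, or an error message the AI can correct from.
function validateKeyName(name: string): string | null {
  if (name.length === 0) return "Key name must not be empty.";
  if (name.length > 200) return "Key name must be 200 characters or fewer.";
  if (!KEY_PATTERN.test(name)) {
    return `Invalid key name "${name}": use lowercase segments separated by ".", "_" or "-" (e.g. "checkout.submit_button").`;
  }
  return null;
}
```

Run before the API call, this catches bad input locally and gives the model enough context to retry with a corrected name.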

MCP vs other integration approaches

MCP vs Function calling (OpenAI)

Function calling is single-turn: you define functions, the model calls them, you execute, done.

MCP is persistent: the server maintains state, the model can make multiple calls in a session, and there's a standard discovery mechanism.

For translation workflows that involve multiple steps (check status → find gaps → translate → verify), MCP's session persistence matters.

MCP vs Custom plugins

Before MCP, you'd build a custom integration for each AI platform. With MCP, you build once and deploy anywhere the protocol is supported.

This is particularly valuable as the AI tool landscape evolves. Your MCP server works with Claude Desktop today and whatever comes next.

MCP vs API wrappers

You could teach Claude about your TMS API and have it make HTTP requests directly. But authentication is harder to manage, error handling becomes the model's responsibility, and you're sending API documentation in every prompt.

MCP encapsulates this complexity in the server.

Security considerations

MCP servers have access to your translation data. Think about:

Authentication: Use API keys with minimal required permissions. Rotate keys regularly. Don't commit keys to repos (use environment variables).

Authorization: Consider what operations the MCP server should allow. Read-only servers for sensitive projects. Approval workflows for production changes.

Audit logging: Log all MCP operations. Track who (which AI session) made what changes. Enable review of AI-initiated changes.

Network security: For remote MCP servers, use HTTPS. Consider IP allowlisting for production servers. Monitor for unusual activity patterns.

Performance optimization

MCP calls add latency. For smooth workflows:

Batch operations: Instead of creating 50 keys one at a time, use bulk create endpoints. Define a bulk_create_keys tool that accepts an array of keys.

Caching: Cache project metadata, language lists, and other stable data. Store in memory with a timestamp and refresh after a minute or so.

Streaming responses: For large datasets, consider streaming. Define an export_translations tool that can handle streaming responses from your TMS client.
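The caching suggestion can be sketched as a small TTL wrapper (60 seconds here, matching the "refresh after a minute or so" guidance; the tms client in the usage note is hypothetical):

```typescript
type Entry<T> = { value: T; fetchedAt: number };

// Minimal in-memory TTL cache for stable data like project metadata
// and language lists.
class TtlCache<T> {
  private entries = new Map<string, Entry<T>>();
  constructor(private ttlMs: number = 60_000) {}

  async get(key: string, fetch: () => Promise<T>): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && Date.now() - hit.fetchedAt < this.ttlMs) return hit.value;
    const value = await fetch(); // miss or stale: refetch and re-stamp
    this.entries.set(key, { value, fetchedAt: Date.now() });
    return value;
  }
}
```

Usage might look like `const languages = await cache.get("languages", () => tms.listLanguages())`, where tms is your (hypothetical) TMS client; repeated calls within the TTL skip the network round-trip entirely.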

The future of MCP for translation

Where is this heading?

Multi-server coordination: Today, you configure one MCP server at a time. Soon, AI clients will coordinate across multiple servers—your TMS, your git repo, your CI/CD, all working together. Imagine: "Create a PR that adds German support, including all translated strings." The AI coordinates: creates branch (git MCP), adds language (TMS MCP), translates content (TMS MCP), creates PR (GitHub MCP).

Specialized translation servers: We'll see MCP servers optimized for specific use cases: medical translation with terminology enforcement, legal translation with compliance checks, e-commerce translation with SEO optimization.

Real-time collaboration: MCP could enable real-time translation collaboration—human translators and AI working on the same content, each seeing the other's changes.

Edge deployment: MCP servers running at the edge for lower latency. Particularly important for interactive translation workflows.

Getting started today

  • Install an MCP-compatible client: Claude Desktop (free), Claude Code (for terminal workflows), or Cursor (if you prefer that IDE).
  • Connect a translation MCP server: IntlPull MCP with npm install -g @intlpull/mcp, or build your own for other TMS platforms.
  • Start simple: "List my projects." "What's the translation status for Spanish?" "Create a key for this text."
  • Build workflows: Combine with Claude Code skills. Integrate into your development routine. Automate what you do repeatedly.
The gap between "AI can translate" and "AI manages translations" is closing. MCP is the bridge. The teams that master it now will have significant advantages as AI-native workflows become standard.

---

*IntlPull's MCP server provides full translation management capabilities for Claude Desktop, Claude Code, and Cursor. Install with npm install -g @intlpull/mcp and connect your workflow today.*
