Markdown Translator

In domains such as technical documentation, open-source projects, and blog creation, Markdown has become the most widely used text markup language. However, most existing translation tools tend to break the original formatting when handling Markdown content—especially around code blocks, LaTeX formulas, or structured metadata—often resulting in garbled layouts and lost semantics.

md-translator is an intelligent translation tool designed specifically to address this issue. It delivers high‑quality translations while preserving the Markdown structure, and also offers a “Plain Text Translation Mode” that lets you translate any text document, combining format retention with free‑form translation.

Core Feature 1: Native Support for Markdown Elements

md-translator is deeply optimized for Markdown documents and can recognize and preserve the following common syntax elements:

  • FrontMatter metadata (---)
  • Headings (#)
  • Blockquotes (> quote)
  • Links ([text](url))
  • Unordered lists (- / * / +)
  • Ordered lists (1. 2. 3.)
  • Emphasis (bold, italic)
  • Code blocks (``` )
  • Inline code (`code`)
  • Inline LaTeX formulas ($formula$)
  • Block-level LaTeX formulas ($$formula$$)

FrontMatter, code blocks, and LaTeX formulas can each be optionally translated, so you can choose whether to process them based on your needs.
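
One common way to preserve these elements is to mask them with placeholders before translation and restore them afterwards. Below is a minimal sketch of that general pattern; the regular expressions and the translate() callback are illustrative assumptions, not md-translator's actual implementation.

  import re

  # Patterns for segments that should pass through translation untouched.
  PROTECTED = [
      re.compile(r"\A---\n.*?\n---\n", re.DOTALL),  # FrontMatter block
      re.compile(r"```.*?```", re.DOTALL),          # fenced code blocks
      re.compile(r"\$\$.*?\$\$", re.DOTALL),        # block-level LaTeX
      re.compile(r"`[^`\n]+`"),                     # inline code
      re.compile(r"\$[^$\n]+\$"),                   # inline LaTeX
  ]

  def translate_markdown(text, translate):
      saved = []
      def stash(match):
          saved.append(match.group(0))
          return f"[[PH{len(saved) - 1}]]"          # placeholder left untranslated
      for pattern in PROTECTED:
          text = pattern.sub(stash, text)
      text = translate(text)                        # any translation backend
      for i, original in enumerate(saved):
          text = text.replace(f"[[PH{i}]]", original)
      return text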

Core Feature 2: Plain Text Translation for Any Document

Beyond structured Markdown support, md-translator provides a “Plain Text Translation Mode,” which skips format detection and translates any text content directly. Whether it’s Markdown, TXT, HTML, log files, or unformatted technical notes, this mode delivers accurate, efficient language conversion.

Additionally, users can supply custom AI prompts to further enhance terminology consistency, contextual coherence, and uniform translation style.
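
For example, a custom prompt might pin down terminology and tone; the wording below is only illustrative.

  system_prompt = (
      "You are a technical translator. Translate into Simplified Chinese. "
      "Keep product names such as 'md-translator' untranslated, always render "
      "'chunk' as '分块', and keep the tone concise and formal."
  )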

Extended Functionality: Extracting Clean Text

md-translator can also convert Markdown content into plain text for secondary processing or semantic analysis:

  • Automatically strips all Markdown markers
  • Hides code blocks, links, and other technical elements
  • Outputs plain text optimized for summarization, search indexing, or NLP processing

This feature is ideal for automated workflows such as content summarization, semantic analysis, and knowledge graph construction.
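
As a rough illustration of the stripping step, here is a regex-based sketch; md-translator may well use a proper parser instead.

  import re

  def markdown_to_plain(text):
      """Strip common Markdown markers; a simplified, regex-based approximation."""
      text = re.sub(r"\A---\n.*?\n---\n", "", text, flags=re.DOTALL)          # FrontMatter
      text = re.sub(r"```.*?```", "", text, flags=re.DOTALL)                  # code blocks
      text = re.sub(r"`[^`\n]+`", "", text)                                   # inline code
      text = re.sub(r"\$\$.*?\$\$", "", text, flags=re.DOTALL)                # block LaTeX
      text = re.sub(r"\$[^$\n]+\$", "", text)                                 # inline LaTeX
      text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)                    # links -> visible text
      text = re.sub(r"^#{1,6}\s*", "", text, flags=re.MULTILINE)              # headings
      text = re.sub(r"^>\s?", "", text, flags=re.MULTILINE)                   # blockquotes
      text = re.sub(r"^\s*(?:[-*+]|\d+\.)\s+", "", text, flags=re.MULTILINE)  # list markers
      text = re.sub(r"(\*\*|\*|__|_)(.+?)\1", r"\2", text)                    # emphasis
      return text.strip()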

Applicable Scenarios

  • Batch translation of multilingual technical documentation
  • Internationalization of open-source project READMEs
  • Synchronized bilingual (Chinese-English) Markdown blog content
  • Format-preserving translation of mixed documents with code comments and formula explanations
  • Semantic translation and extraction of any structured or unstructured text

Translation API

This tool supports 5 translation APIs and 6 LLM (large language model) interfaces, allowing users to choose the appropriate translation method based on their needs:

Comparison of Translation APIs

API Type         | Translation Quality | Stability | Suitable Scenarios                         | Free Quota
DeepL(X)         | ★★★★★               | ★★★★☆     | Suitable for long texts; smoother translations | 500,000 characters per month
Google Translate | ★★★★☆               | ★★★★★     | Ideal for UI text and common phrases       | 500,000 characters per month
Azure Translate  | ★★★★☆               | ★★★★★     | Broadest language support                  | 2,000,000 characters per month for the first 12 months
GTX API (Free)   | ★★★☆☆               | ★★★☆☆     | General text translation                   | Free
GTX Web (Free)   | ★★★☆☆               | ★★☆☆☆     | Suitable for small-scale translation       | Free

  • DeepL: Ideal for long texts with smoother and more natural translations; however, it does not support web API calls and requires local or server-side proxy usage.
  • Google Translate: Offers stable translation quality, suitable for short sentences and UI text, and supports web API calls.
  • Azure Translate: Provides the widest range of language support, meeting diverse multilingual translation needs.
  • GTX API/Web: A free translation option suitable for small-scale use, though its stability is average.

For higher translation speed and quality, you can apply for an API key from Google Translate, Azure Translate, or DeepL. See the related API application tutorial for details.

LLM Translation (AI Large Models)

This tool offers access to six mainstream large language model (LLM) providers or interfaces: DeepSeek, OpenAI, Azure OpenAI, Siliconflow, Groq, and a configurable Custom LLM option.

  • Applicable Scenarios: Ideal for tasks that demand high levels of language comprehension, such as literary works, technical documentation, and multilingual materials.
  • Highly Customizable: Allows configuration of system prompts and user prompts, enabling flexible control over translation style, terminology preferences, and more—catering to a wide range of translation needs.
  • LLM Model: Typically, this field should contain the model name provided by the selected interface; for Azure OpenAI, the corresponding deployment name should be entered.
  • Temperature Parameter: Controls the creativity and consistency of translation results. Higher values yield more diverse and creative outputs but may reduce accuracy; lower values produce more stable and consistent results, making them suitable for formal or highly technical content.
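
As a concrete illustration, the settings described above might be filled in as follows; the field names are hypothetical, not the tool's exact option keys.

  llm_settings = {
      "provider": "OpenAI",
      "model": "gpt-4o-mini",   # for Azure OpenAI, enter the deployment name here
      "system_prompt": "You are a professional translator. Preserve all Markdown syntax.",
      "user_prompt": "Translate the following content into {target_language}:",
      "temperature": 0.3,       # low value favors stable, consistent output for technical text
  }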

The Custom LLM option allows integration with third-party services or local inference platforms (such as ollama) by configuring the API endpoint and model name. For example, the default API endpoint for a local ollama setup is:

http://127.0.0.1:11434/v1/chat/completions

The default model used is llama3.2. For LM Studio, the local API endpoint is:

http://localhost:61234/v1/chat/completions

To achieve better translation quality, it is recommended to use qwen2.5-14b-instruct or a higher-performing model in the Custom LLM setup.
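
For instance, a minimal call against the local ollama endpoint above, using its OpenAI-compatible chat API; the prompt text and the use of Python's requests library are only for illustration.

  import requests

  response = requests.post(
      "http://127.0.0.1:11434/v1/chat/completions",
      json={
          "model": "llama3.2",
          "temperature": 0.3,
          "messages": [
              {"role": "system",
               "content": "Translate the user's Markdown into Chinese and preserve its formatting."},
              {"role": "user", "content": "## Hello\nThis is a *small* test document."},
          ],
      },
      timeout=120,
  )
  print(response.json()["choices"][0]["message"]["content"])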

Language Support

This tool supports translation between over 50 languages, encompassing a broad range of European, Asian, and some African languages. It is suitable for various multilingual content processing scenarios. Supported languages include: English, Chinese, Traditional Chinese, Portuguese, Italian, German, Russian, Spanish, French, Japanese, Korean, Arabic, Turkish, Polish, Ukrainian, Dutch, Greek, Hungarian, Swedish, Danish, Finnish, Czech, Slovak, Bulgarian, Slovenian, Lithuanian, Latvian, Romanian, Estonian, Indonesian, Malay, Hindi, Bengali, Vietnamese, Norwegian, Hebrew, Thai, Filipino (Tagalog), Uzbek, Kyrgyz, Turkmen, Kazakh, Bhojpuri, Kannada, Amharic, Gujarati, Javanese, Persian, Tamil, Swahili, Hausa, Telugu, and Marathi.

For detailed information on supported languages, refer to the official documentation of each service.

API Parameters

Chunk Translation Size

For text files with contextual relationships—such as subtitles or Markdown documents—this tool automatically merges multiple lines into "chunks" for translation. The chunk size refers to the maximum number of characters per grouped block. The character limits for each translation service are as follows:

  • DeepL API: Up to 128K characters per request
  • DeepLX Free: Up to 1,000 characters per request
  • Azure Translate: Up to 10,000 characters per request
  • Google Translate:
    • Web version: Up to 5,000 characters per translation
    • Cloud Translation API: Up to 30,000 characters per request

Note: Google Translate disrupts line breaks during processing, so chunked translation is not used with this service.
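
Conceptually, chunking merges consecutive lines until a character budget is reached. A minimal sketch follows; the grouping rule is an assumption, not the tool's exact logic.

  def build_chunks(lines, max_chars=1000):
      """Merge consecutive lines into chunks of at most max_chars characters."""
      chunks, current = [], ""
      for line in lines:
          candidate = f"{current}\n{line}" if current else line
          if len(candidate) > max_chars and current:
              chunks.append(current)   # current chunk is full; start a new one
              current = line
          else:
              current = candidate
      if current:
          chunks.append(current)
      return chunks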

Delay Time

Delay time sets the wait interval between chunk translations. When processing large volumes of text, some translation APIs may respond slowly—especially under poor network conditions or when using free interfaces. In such cases, delay settings are particularly important.

For example, when testing with Azure Translate’s free tier, it is recommended to set the delay time to 5,000 milliseconds or more to avoid empty responses or errors.
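
The delay is simply a pause between consecutive chunk requests. A sketch using the 5,000 ms value from the Azure example, where translate() stands in for any configured backend:

  import time

  def translate_chunks(chunks, translate, delay_ms=5000):
      results = []
      for chunk in chunks:
          results.append(translate(chunk))   # one API call per chunk
          time.sleep(delay_ms / 1000)        # wait before sending the next request
      return results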

Translation Rate

Setting the translation rate too high may result in empty API responses or cause requests to be flagged as abnormal. It's recommended to adjust the rate based on the specific translation service and its stability to improve success rates and maintain reliable performance.

Feature Description

Translation Cache

This tool introduces an optional local translation cache to improve translation efficiency and reduce resource consumption:

  • Cache rules: Each translation result is stored with a unique key formatted as source text_target language_source language_translation API_model settings.
  • Cache hit condition: The local cache result is used only when the parameters match exactly, ensuring accuracy.
  • Cache purpose: Avoid repeated translations, reduce API calls, and improve translation speed.

To turn the cache off, uncheck "Use translation cache" in the API settings; to discard stored results, click "Clear translation cache".
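
A sketch of the lookup logic implied by these rules; the hashing and the in-memory dictionary are illustrative assumptions (the tool actually keeps its data in the browser).

  import hashlib

  def cache_key(source_text, target_lang, source_lang, api, model_settings):
      raw = "_".join([source_text, target_lang, source_lang, api, model_settings])
      return hashlib.sha256(raw.encode("utf-8")).hexdigest()

  cache = {}

  def cached_translate(text, translate, **params):
      key = cache_key(text, **params)        # hit only on an exact parameter match
      if key not in cache:
          cache[key] = translate(text)       # miss: call the API and store the result
      return cache[key]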

Multilingual Translation

Supports translating the same file into multiple languages at once, which is especially suitable for international video content:

  • For example: Translate an English file simultaneously into Chinese, Japanese, German, and French for the convenience of global users.
  • Supports 35 mainstream languages and will continue to expand.
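
For example, a batch run over several target languages might look like the following sketch, where translate() stands in for any configured backend and the file names and language codes are illustrative.

  targets = ["zh", "ja", "de", "fr"]
  source = open("README.md", encoding="utf-8").read()
  for lang in targets:
      translated = translate(source, target_language=lang)   # any configured backend
      with open(f"README.{lang}.md", "w", encoding="utf-8") as out:
          out.write(translated)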

Usage Notice

When using this tool, please note the following:

  • DeepL support: Since the DeepL API does not support direct calls from the web, a server-side forwarding interface is provided solely for data transmission, and it will not collect user data. For better stability, users can also choose to deploy this interface themselves.
  • The DeepLX free interface may occasionally return empty (null) results. Wait a moment and try again, use your own API key, or deploy your own forwarding interface.
  • API Key security: This tool does not store your API key; all configuration data is saved in your local browser.
  • GTX Web interface: This interface places considerable load on the server, so it is recommended to enable it manually only when deploying locally. Please avoid using it in networks with a global proxy enabled to prevent translation errors.