This tool integrates 5 translation APIs and 6 major Large Language Model (LLM) interfaces, allowing users to choose the appropriate translation method based on their needs.
| API Type | Translation Quality | Stability | Use Case | Free Tier |
|---|---|---|---|---|
| DeepL(X) | ★★★★★ | ★★★★☆ | Ideal for long texts, more fluent translations | 500,000 characters/month |
| Google Translate | ★★★★☆ | ★★★★★ | Suitable for UI and common sentences | 500,000 characters/month |
| Azure Translate | ★★★★☆ | ★★★★★ | Widest language support | 2 million characters/month for the first 12 months |
| GTX API (Free) | ★★★☆☆ | ★★★☆☆ | General text translation | Rate limited (e.g., ~5M chars per 3 hours) |
| GTX Web (Free) | ★★★☆☆ | ★★☆☆☆ | Suitable for small-scale translations | Free |
mrfragger translated a subtitle file of about 2 million characters (~2MB), and the GTX API limit was triggered after only two translation runs. If you have higher requirements for translation speed and quality, you can apply for your own API Key: Google Translate, Google Gemini, Azure Translate, DeepL Translate. For the application process, refer to the relevant API application tutorial.
In addition to traditional translation APIs, this tool also supports calling various LLMs for intelligent translation, including DeepSeek, OpenAI, Azure OpenAI, Siliconflow, Groq, and a freely configurable Custom LLM.
For users who wish to deploy and use custom large language models locally (such as Ollama or LM Studio), the following guide explains how to connect this tool with your local model and resolve potential CORS (Cross-Origin Resource Sharing) issues.
For better translation quality, it is recommended to use models such as qwen3-14b or larger parameter sizes (e.g., 32B, 70B).
The table below lists the default API endpoints for common local model tools. You can use these directly in your configuration or modify them according to your actual port settings.
| Tool | Default API Endpoint |
|---|---|
| Ollama | http://127.0.0.1:11434/v1/chat/completions |
| LM Studio | http://localhost:1234/v1/chat/completions |
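As a quick connectivity check, you can send a request to the local endpoint directly from a terminal. The example below targets Ollama's OpenAI-compatible endpoint; the model name `qwen3:14b` is a placeholder, so substitute a model you have actually pulled locally:

```shell
# Example request to a local Ollama server's OpenAI-compatible endpoint.
# "qwen3:14b" is a placeholder model name -- replace it with one you have pulled.
curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen3:14b",
        "messages": [
          {"role": "user", "content": "Translate into French: Good morning"}
        ]
      }'
```

If this returns a JSON response from the command line but the tool still fails in the browser, the problem is almost certainly CORS, which the next section addresses.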
When calling a locally deployed model from a browser, connection failures may occur due to ad-blocking extensions or CORS policy restrictions. CORS is a browser security mechanism designed to prevent webpages from accessing resources from other origins arbitrarily. As a result, when you request a local model API from a webpage, the browser may block it.
First, disable any ad-blocking browser extensions and reload the page to test the connection. If the issue persists, continue to the next step.
Start the service with the following command to allow requests from any origin:
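For Ollama, for example, the allowed origins are controlled by the `OLLAMA_ORIGINS` environment variable:

```shell
# Set OLLAMA_ORIGINS before starting the server so the browser's
# cross-origin requests are accepted:
OLLAMA_ORIGINS="*" ollama serve
```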
The `*` symbol allows all origins. For stricter security, you can replace `*` with a specific domain name.

Once configured, this tool can successfully call your local LLM. If you still encounter access issues, check whether the port is already in use or review the browser console for error messages. (Special thanks to mrfragger for sharing this configuration tip.)
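To check whether another process is already occupying the port, a one-liner such as the following works on macOS and Linux (11434 is Ollama's default; adjust to your setup):

```shell
# List any process listening on port 11434 (requires lsof):
lsof -i :11434
```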
This tool supports translation between over 50 languages, encompassing a broad range of European, Asian, and some African languages. It is suitable for various multilingual content processing scenarios. Supported languages include: English, Chinese, Traditional Chinese, Portuguese, Italian, German, Russian, Spanish, French, Japanese, Korean, Arabic, Turkish, Polish, Ukrainian, Dutch, Greek, Hungarian, Swedish, Danish, Finnish, Czech, Slovak, Bulgarian, Slovenian, Lithuanian, Latvian, Romanian, Estonian, Indonesian, Malay, Hindi, Bengali, Vietnamese, Norwegian, Hebrew, Thai, Filipino (Tagalog), Uzbek, Kyrgyz, Turkmen, Kazakh, Bhojpuri, Kannada, Amharic, Gujarati, Javanese, Persian, Tamil, Swahili, Hausa, Telugu, and Marathi.
For detailed information on supported languages, refer to the official documentation of each service: