Translation API Guide
The tool integrates 8 classic translation APIs and 21 large language models, so you can pick whichever fits your text type, budget, and privacy needs.
Classic Translation APIs
Notes:
- DeepL can't be called directly from the browser — the tool routes through a built-in proxy by default. If you have your own proxy, fill it in the API URL field.
- Qwen-MT is Alibaba Cloud's translation-specialized model. See Qwen-MT essentials below.
- TranslateGemma is Google's open-source translation-specialized Gemma model. You'll need to run it locally with LM Studio / Ollama / llama.cpp — see Local Model Setup.
- GTX API/Web are free but rate-limited. Use a paid API for long-running jobs.
For more reliable service, apply for a commercial API key — see the API application guide.
Large Language Models (LLMs)
Supported: DeepSeek, OpenAI, Claude, Gemini, Qwen, Moonshot (Kimi), Zhipu GLM, Doubao, MiniMax, Tencent Hunyuan, Baidu ERNIE, Cohere, xAI (Grok), Mistral, Perplexity, OpenRouter, Groq, SiliconFlow, Nvidia NIM, Azure OpenAI, plus any OpenAI-compatible endpoint.
LLMs work best for:
- Literature and technical documentation that needs deeper understanding
- Multilingual content where consistent terminology matters
- Custom prompts to control translation style
Key parameters:
- Model: enter the model name from your provider; for Azure OpenAI, enter the deployment name.
- Temperature: defaults to 0.7. Try 0.2 for technical content, 0.9 for marketing or creative paraphrasing.
- Thinking mode: DeepSeek, Claude, and certain NVIDIA NIM models support a thinking-chain toggle plus a low / medium / high reasoning effort selector. Only shown when the current model actually supports it.
Regional Endpoint Switcher
Many providers run separate endpoints for Mainland China, International, and US regions. The official endpoints appear as quick-pick chips above the URL field — click to switch:
URL Auto-Completion
OpenAI-compatible URL fields auto-complete to the full path when focus leaves the field — paste http://host:port or http://host:port/v1 and the tool fills in the rest. This catches the most common "incomplete URL → connection failure" mistake.
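The completion rule can be sketched as a small function. The exact target path is an assumption on my part; OpenAI-compatible chat endpoints conventionally end in /v1/chat/completions, which is what the sketch appends:

```shell
# Sketch of the auto-completion rule: append whatever path segments are
# missing so a bare host:port becomes a full chat-completions URL.
normalize_url() {
  url="${1%/}"                      # drop any trailing slash
  case "$url" in
    */chat/completions) ;;          # already a full path, keep it
    */v1) url="$url/chat/completions" ;;
    *)    url="$url/v1/chat/completions" ;;
  esac
  printf '%s\n' "$url"
}

normalize_url "http://localhost:11434"    # → http://localhost:11434/v1/chat/completions
normalize_url "http://localhost:1234/v1"  # → http://localhost:1234/v1/chat/completions
```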
Qwen-MT Essentials
Qwen-MT is a machine translation service (not a general LLM). It has no system-prompt concept and works purely with source/target language codes — so the Prompt settings don't apply.
Picking a Model
You'll need to fill in the Model field manually:
Domain Hint
The domains field tells the model what industry the text comes from, so terminology lands closer to that field. Important: write a short English description, not a keyword list. Alibaba's official example:
Leave empty if you don't need it.
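Putting the pieces together, here is a sketch of what a Qwen-MT call looks like over DashScope's OpenAI-compatible HTTP endpoint. The model name (qwen-mt-turbo), the sample text, and the domains sentence are illustrative, not prescriptive; check Alibaba's documentation for your account's exact options. Note that, as described above, there is no system prompt: everything is steered through translation_options, and domains is a short English sentence rather than keywords.

```shell
# Hypothetical Qwen-MT request via DashScope's OpenAI-compatible endpoint.
# translation_options replaces the usual system-prompt mechanism.
curl -s https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
  -H "Authorization: Bearer $DASHSCOPE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-mt-turbo",
    "messages": [{"role": "user", "content": "The patient presented with acute chest pain."}],
    "translation_options": {
      "source_lang": "English",
      "target_lang": "Chinese",
      "domains": "The text is from the medical domain; prefer standard clinical terminology."
    }
  }'
```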
Unsupported Languages
These languages are not yet covered by Qwen-MT — the UI will block them: Kyrgyz (ky), Turkmen (tk), Tajik (tg), Mongolian (mn), Malayalam (ml), Punjabi (pa), Bhojpuri (bho), Hausa (ha), Amharic (am), Uyghur (ug).
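A sketch of the pre-flight check the UI performs (the helper is hypothetical; the block list mirrors the codes above):

```shell
# Language codes Qwen-MT does not yet cover; the UI rejects these up front.
QWEN_MT_UNSUPPORTED="ky tk tg mn ml pa bho ha am ug"

qwen_mt_supported() {
  case " $QWEN_MT_UNSUPPORTED " in
    *" $1 "*) return 1 ;;  # blocked
    *)        return 0 ;;  # allowed
  esac
}

qwen_mt_supported ja && echo "ja: ok"       # → ja: ok
qwen_mt_supported ug || echo "ug: blocked"  # → ug: blocked
```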
Built-in API Proxy
DeepL, Nvidia NIM, and similar providers can't be called from the browser due to CORS. The tool routes those through a built-in proxy by default. If you specify a custom API URL in settings, the proxy is bypassed and requests go directly to your URL.
Local Model Setup
Want to run models locally for privacy? The tool works with any OpenAI-compatible local server. For decent translation quality, use qwen3-14b or larger (32B / 70B work even better).
Default Endpoints
These appear as quick-pick chips next to the URL field.
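Before blaming CORS or the URL, it helps to confirm the server is actually up. A quick reachability check (the ports shown are the common defaults, 1234 for LM Studio and 11434 for Ollama; adjust if you changed them):

```shell
# Probe a local OpenAI-compatible server's model-list endpoint.
check_server() {
  if curl -sf --max-time 2 "$1/v1/models" > /dev/null; then
    echo "reachable: $1"
  else
    echo "no response from $1 (is the server running?)"
  fi
}

check_server http://localhost:1234    # LM Studio
check_server http://localhost:11434   # Ollama
```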
TranslateGemma
Google's translation-specialized Gemma model, trained specifically for translation quality. Quick notes:
- The default URL points to LM Studio on port 1234; one click switches to Ollama / llama.cpp
- Recommended models: translategemma-4b-it (compact and fast) or translategemma-9b-it (better quality)
- Source language must be explicit — auto-detect isn't supported
- Cantonese (yue) and Bhojpuri (bho) aren't covered by the model
Solving CORS Issues
If a local model can't be reached, the two usual culprits:
Step 1: Disable ad/privacy extensions, then refresh and retry.
Step 2: Enable CORS on the local server.
Ollama
Run this once in PowerShell (Win + X to open Terminal) to enable it permanently:
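Ollama reads its allowed origins from the OLLAMA_ORIGINS environment variable, which is its documented CORS setting; setx persists the value for the current user:

```shell
# PowerShell — persists for the current user; new processes pick it up.
setx OLLAMA_ORIGINS "*"
```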
The wildcard * allows all origins. For tighter security, use a specific origin like http://192.168.2.20:3000.
Restart the Ollama service for the change to take effect. To enable temporarily, set the variable when starting:
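A sketch of the session-only form (PowerShell syntax; the variable disappears when the terminal closes):

```shell
# PowerShell — applies to this session only.
$env:OLLAMA_ORIGINS = "*"
ollama serve
```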
LM Studio
- Click the "Developer" icon in the left menu
- Go to the local server settings page, click "Settings" at the top
- Check the "Enable CORS" box

That's it — local models should work now. If you're still stuck, check for port conflicts and look at the browser console for the actual error. (Special thanks to mrfragger for the configuration tips.)
Language Support
This tool supports translation between 77 major languages.
Language Code Reference
Use the language codes below for batch multi-language configuration (e.g., en, zh, ja, ko):
API Documentation
LLMs support all languages. Machine translation API language support:

