Translation API Guide

This tool integrates 6 translation APIs and 9 mainstream Large Language Model (LLM) interfaces, allowing users to choose the most suitable translation method based on their needs:

Classic Translation APIs

| API Type | Quality | Stability | Use Case | Free Limit |
|---|---|---|---|---|
| DeepL(X) | ★★★★★ | ★★★★☆ | Suitable for long texts; smoother translation | 500,000 characters/month |
| Google Translate | ★★★★☆ | ★★★★★ | Suitable for UI interfaces, common sentences | 500,000 characters/month |
| Azure Translate | ★★★★☆ | ★★★★★ | Widest language support | First 12 months: 2 million characters/month |
| Qwen-MT | ★★★★★ | ★★★★★ | Supports domain-specific translation | Billed by token; free tier for new users |
| GTX API (Free) | ★★★☆☆ | ★★★☆☆ | General text translation | Subject to rate limits (e.g., ~5 million chars every 3 hours) |
| GTX Web (Free) | ★★★☆☆ | ★★☆☆☆ | Suitable for small-scale translation | Free |
  • DeepL: Suitable for long texts, offering more fluid and natural translations, but does not support web-based API calls (requires local or server proxy).
  • Google Translate: Stable quality, suitable for short sentences and interface text; supports web-based calls.
  • Azure Translate: Supports the most languages, ideal for multi-language translation needs.
  • Qwen-MT: An LLM optimized explicitly for translation scenarios by Alibaba Cloud, supporting specific domains (e.g., medical, tech) for more professional results.
  • GTX API/Web: Free translation options suitable for lightweight use, but with limited stability and call frequency. For example, when mrfragger translated a subtitle file of about 2 million characters (~2MB), the GTX API limit was triggered after just two translation executions.

If you need faster or higher-quality translation, you can apply for your own API Key. For the application procedure, please refer to the relevant Interface Application Tutorial.

LLM Model Translation

In addition to traditional translation APIs, this tool supports calling various LLMs for intelligent translation, including: DeepSeek, Nvidia, OpenAI, Gemini, Perplexity, Azure OpenAI, Siliconflow, Groq, OpenRouter, and highly configurable Custom LLMs.

  • Use Case: Suitable for content requiring high language comprehension, such as literary works, technical documents, and multilingual materials.
  • Highly Customizable: Supports configuration of System Prompts and User Prompts, allowing flexible control over translation style and terminology preferences to meet diverse needs.
  • LLM Model: Generally, fill in the model name provided by the selected interface; if using Azure OpenAI, fill in the corresponding deployment name.
  • Temperature: Controls the creativity and stability of the translation results. The default value is 0.7. Suggestions: 0–0.3 for strict technical/terminology scenarios; 0.4–0.7 for general content; 0.8–1.0 for creative scenarios (e.g., marketing/paraphrasing).
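As a rough sketch of how these settings fit together, the snippet below assembles an OpenAI-compatible chat-completions request body from a model name, prompts, and temperature. The prompt wording and the function name are illustrative assumptions, not the tool's actual built-in prompt:

```python
# Illustrative sketch: mapping the LLM settings above (model, system/user
# prompts, temperature) onto an OpenAI-compatible chat-completions payload.
# The system-prompt text here is an example, not the tool's built-in prompt.

def build_translation_payload(model, text, target_lang, temperature=0.7):
    """Assemble a chat-completions request body for one translation call."""
    return {
        "model": model,  # model name, or the deployment name for Azure OpenAI
        "temperature": temperature,  # 0-0.3 strict, 0.4-0.7 general, 0.8-1.0 creative
        "messages": [
            {"role": "system",
             "content": f"You are a professional translator. Translate the user's text into {target_lang}."},
            {"role": "user", "content": text},
        ],
    }

# Low temperature for a terminology-sensitive translation:
payload = build_translation_payload("deepseek-chat", "Hello, world!", "French", temperature=0.3)
```

The same payload shape works for any of the OpenAI-compatible providers listed above; only the endpoint URL and API key differ.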

API Proxy

To resolve Cross-Origin Resource Sharing (CORS) issues when calling official APIs directly from the browser, DeepL and Nvidia use built-in proxy services by default.

  • Default Behavior: When the API URL is empty, the tool automatically uses the built-in proxy (e.g., https://api-edgeone.newzone.top/api/nvidia) to forward requests.
  • Custom URL: If you specify a custom API URL in the settings (e.g., a private deployment or direct official address), the built-in proxy will be bypassed, and the request will be sent directly to your specified address.
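The fallback logic can be sketched as follows. Only the Nvidia proxy URL appears in this guide; treat the mapping and function names as illustrative assumptions:

```python
# Sketch of the proxy fallback described above: an empty API URL falls back
# to the built-in proxy, while any user-supplied URL is used verbatim.
# Only the Nvidia proxy URL is documented; the structure here is assumed.

BUILT_IN_PROXIES = {
    "nvidia": "https://api-edgeone.newzone.top/api/nvidia",
}

def resolve_api_url(provider, custom_url):
    """Return the endpoint a request will actually be sent to."""
    if custom_url and custom_url.strip():
        return custom_url.strip()       # bypass the built-in proxy
    return BUILT_IN_PROXIES[provider]   # default: built-in CORS proxy
```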

Local Model Integration

If you deploy large models locally (for example with Ollama or LM Studio), you can connect this tool to your local model and resolve potential Cross-Origin Resource Sharing (CORS) issues using the methods below. For better translation quality, it is recommended to use qwen3-14b or models with larger parameter scales (such as 32B or 70B).

Common Interface Addresses

The table below lists default interface addresses for common local model tools. You can use them directly in the configuration or modify them according to your actual port number.

| Tool | Default Interface Address |
|---|---|
| Ollama | http://127.0.0.1:11434/v1/chat/completions |
| LM Studio | http://localhost:1234/v1/chat/completions |
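For a quick smoke test from a script (where CORS does not apply), the sketch below builds a chat-completions request against a local Ollama server on its default port. The model name "qwen3:14b" is an example; substitute whatever your local tool actually serves:

```python
import json
import urllib.request

# Sketch: a minimal OpenAI-compatible request against a local Ollama server.
# Model name and prompt are examples; adjust base_url for LM Studio or a
# custom port.

def local_chat_request(text, model="qwen3:14b",
                       base_url="http://127.0.0.1:11434"):
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": text}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = local_chat_request("Translate to German: good morning")
# Actually sending it requires the local server to be running:
# with urllib.request.urlopen(req) as resp: print(json.load(resp))
```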

Solving CORS Issues

When calling a locally deployed model in a browser, if the connection fails, common causes include browser ad-blocking extensions or Cross-Origin Resource Sharing (CORS) restrictions. CORS policy is a browser security mechanism that prevents web pages from accessing resources from different origins arbitrarily. Therefore, when you request a local model interface from a web page, it may be blocked by the browser.

Step 1 | Check Ad/Privacy Plugins: Temporarily disable browser interception extensions, then refresh the page to test.

Step 2 | Enable Local Service CORS: Follow the guide below to allow cross-origin requests for common tools.

Ollama

You can permanently enable CORS for a locally running Ollama service by setting an environment variable. On Windows, follow these steps:

  1. Press Win + X and select Windows PowerShell or Terminal.

  2. Paste the following command into the open PowerShell window and press Enter:

    [System.Environment]::SetEnvironmentVariable('OLLAMA_ORIGINS', '*', 'User')

    The * wildcard allows all origins to access the Ollama interface. If you prefer stricter security controls, you can replace it with a specific domain, such as http://192.168.2.20:3000.

Once configured, restart the Ollama service for the changes to take effect.

If you only want to enable CORS temporarily, you can add the environment variable directly when starting the service:

OLLAMA_ORIGINS="*" ollama serve

LM Studio

  1. Open the left menu in the software and click the "Developer" icon.
  2. Enter the local server settings page and click "Settings" at the top.
  3. Check the "Enable CORS" checkbox (as shown below).

LM Studio CORS Configuration Screenshot

After completing the above settings, this tool should be able to successfully call your local LLM model. If you still encounter access issues, check for port conflicts or error messages in the browser console. (Special thanks to mrfragger for sharing configuration experience).

Language Support

This tool supports translation between 77 major languages.

Language Code Reference

Use the language codes below for batch multi-language configuration (e.g., en, zh, ja, ko):

| Code | Native | English | 中文 |
|---|---|---|---|
| en | English | English | 英语 |
| zh | 简体 | Simplified Chinese | 简体中文 |
| zh-hant | 繁體 | Traditional Chinese | 繁体中文 |
| es | Español | Spanish | 西班牙语 |
| de | Deutsch | German | 德语 |
| pt-br | Português (Brasil) | Portuguese (Brazil) | 葡萄牙语(巴西) |
| pt-pt | Português (Portugal) | Portuguese (Portugal) | 葡萄牙语(葡萄牙) |
| fr | Français | French | 法语 |
| ja | 日本語 | Japanese | 日语 |
| ko | 한국어 | Korean | 韩语 |
| ru | Русский | Russian | 俄语 |
| it | Italiano | Italian | 意大利语 |
| ar | العربية | Arabic | 阿拉伯语 |
| vi | Tiếng Việt | Vietnamese | 越南语 |
| hi | हिन्दी | Hindi | 印地语 |
| id | Bahasa Indonesia | Indonesian | 印尼语 |
| yue | 粵語 | Cantonese | 粤语 |
| nl | Nederlands | Dutch | 荷兰语 |
| sv | Svenska | Swedish | 瑞典语 |
| da | Dansk | Danish | 丹麦语 |
| nb | Norsk bokmål | Norwegian | 挪威语 |
| is | Íslenska | Icelandic | 冰岛语 |
| af | Afrikaans | Afrikaans | 南非荷兰语 |
| ro | Română | Romanian | 罗马尼亚语 |
| ca | Català | Catalan | 加泰罗尼亚语 |
| uk | Українська | Ukrainian | 乌克兰语 |
| pl | Polski | Polish | 波兰语 |
| cs | Čeština | Czech | 捷克语 |
| sk | Slovenčina | Slovak | 斯洛伐克语 |
| bg | Български | Bulgarian | 保加利亚语 |
| sr | Српски | Serbian | 塞尔维亚语 |
| hr | Hrvatski | Croatian | 克罗地亚语 |
| bs | Bosanski | Bosnian | 波斯尼亚语 |
| sl | Slovenščina | Slovenian | 斯洛文尼亚语 |
| mk | Македонски | Macedonian | 马其顿语 |
| be | Беларуская | Belarusian | 白俄罗斯语 |
| el | Ελληνικά | Greek | 希腊语 |
| hu | Magyar | Hungarian | 匈牙利语 |
| fi | Suomi | Finnish | 芬兰语 |
| lt | Lietuvių | Lithuanian | 立陶宛语 |
| lv | Latviešu | Latvian | 拉脱维亚语 |
| et | Eesti | Estonian | 爱沙尼亚语 |
| sq | Shqip | Albanian | 阿尔巴尼亚语 |
| mt | Malti | Maltese | 马耳他语 |
| hy | Հայերեն | Armenian | 亚美尼亚语 |
| ka | ქართული | Georgian | 格鲁吉亚语 |
| tr | Türkçe | Turkish | 土耳其语 |
| he | עברית | Hebrew | 希伯来语 |
| fa | فارسی | Persian | 波斯语 |
| ur | اردو | Urdu | 乌尔都语 |
| uz | Oʻzbekcha | Uzbek | 乌兹别克语 |
| kk | Қазақ тілі | Kazakh | 哈萨克语 |
| ky | Кыргызча | Kyrgyz | 吉尔吉斯语 |
| tk | Türkmençe | Turkmen | 土库曼语 |
| az | Azərbaycan | Azerbaijani | 阿塞拜疆语 |
| tg | Тоҷикӣ | Tajik | 塔吉克语 |
| mn | Монгол | Mongolian | 蒙古语 |
| bn | বাংলা | Bengali | 孟加拉语 |
| mr | मराठी | Marathi | 马拉地语 |
| ta | தமிழ் | Tamil | 泰米尔语 |
| te | తెలుగు | Telugu | 泰卢固语 |
| gu | ગુજરાતી | Gujarati | 古吉拉特语 |
| kn | ಕನ್ನಡ | Kannada | 卡纳达语 |
| ml | മലയാളം | Malayalam | 马拉雅拉姆语 |
| pa | ਪੰਜਾਬੀ | Punjabi | 旁遮普语 |
| ne | नेपाली | Nepali | 尼泊尔语 |
| bho | भोजपुरी | Bhojpuri | 博杰普尔语 |
| th | ไทย | Thai | 泰语 |
| lo | ລາວ | Lao | 老挝语 |
| my | မြန်မာ | Burmese | 缅甸语 |
| ms | Bahasa Melayu | Malay | 马来语 |
| fil | Filipino | Filipino (Tagalog) | 菲律宾语 |
| jv | Basa Jawa | Javanese | 爪哇语 |
| sw | Kiswahili | Swahili | 斯瓦希里语 |
| ha | هَرْشٜىٰن هَوْسَا | Hausa | 豪萨语 |
| am | አማርኛ | Amharic | 阿姆哈拉语 |
| ug | ئۇيغۇرچە | Uyghur | 维吾尔语 |
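A batch multi-language configuration string like "en, zh, ja, ko" can be split and validated against the code list above. This is a minimal sketch with an assumed function name, and SUPPORTED is abbreviated to a few entries rather than the full 77-code set:

```python
# Sketch: splitting and validating a comma-separated language-code string.
# SUPPORTED is truncated for brevity; in practice it would hold every code
# from the reference table above.

SUPPORTED = {"en", "zh", "zh-hant", "ja", "ko", "fr", "de", "es"}

def parse_target_langs(config):
    """Return the valid, de-duplicated codes from a comma-separated list."""
    seen, result = set(), []
    for raw in config.split(","):
        code = raw.strip().lower()
        if code in SUPPORTED and code not in seen:
            seen.add(code)
            result.append(code)
    return result
```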

API Documentation

LLMs support all languages. Machine translation API language support: