Translation API Guide
This tool integrates 6 translation APIs and 9 mainstream Large Language Model (LLM) interfaces, allowing users to choose the most suitable translation method based on their needs:
Classic Translation APIs
- DeepL: Suitable for long texts, offering more fluid and natural translations, but does not support web-based API calls (requires local or server proxy).
- Google Translate: Stable quality, suitable for short sentences and interface text; supports web-based calls.
- Azure Translate: Supports the most languages, ideal for multi-language translation needs.
- Qwen-MT: Alibaba Cloud's LLM optimized specifically for translation scenarios; it supports domain-specific settings (e.g., medical, tech) for more professional results.
- GTX API/Web: Free translation options suitable for lightweight use, but with limited stability and call frequency. For example, when mrfragger translated a subtitle file of about 2 million characters (~2MB), the GTX API limit was triggered after just two translation executions.
If you have higher requirements for translation speed and quality, you can apply for your own API Key. For application procedures, please refer to the relevant Interface Application Tutorial.
LLM Model Translation
In addition to traditional translation APIs, this tool supports calling various LLMs for intelligent translation, including: DeepSeek, Nvidia, OpenAI, Gemini, Perplexity, Azure OpenAI, Siliconflow, Groq, OpenRouter, and highly configurable Custom LLMs.
- Use Case: Suitable for content requiring high language comprehension, such as literary works, technical documents, and multilingual materials.
- Highly Customizable: Supports configuration of System Prompts and User Prompts, allowing flexible control over translation style and terminology preferences to meet diverse needs.
- LLM Model: Generally, fill in the model name provided by the selected interface; if using Azure OpenAI, fill in the corresponding deployment name.
- Temperature: Controls the creativity and stability of the translation results. The default value is 0.7. Suggestions: 0–0.3 for strict technical/terminology scenarios; 0.4–0.7 for general content; 0.8–1.0 for creative scenarios (e.g., marketing/paraphrasing).
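To illustrate where these settings land in practice, here is a minimal sketch of an OpenAI-compatible chat request with a low temperature for a strict terminology scenario. The model name, endpoint, and key are placeholders, not values prescribed by this tool:

```shell
# Build an OpenAI-compatible chat request body with a pinned temperature.
# Model name and prompts are placeholders; substitute your own.
cat > request.json <<'EOF'
{
  "model": "gpt-4o-mini",
  "temperature": 0.2,
  "messages": [
    {"role": "system", "content": "You are a professional translator. Keep terminology consistent."},
    {"role": "user", "content": "Translate to English: 你好，世界"}
  ]
}
EOF

# Sending it to your chosen interface would then look like (URL and key are placeholders):
#   curl https://api.example.com/v1/chat/completions \
#     -H "Authorization: Bearer $API_KEY" \
#     -H "Content-Type: application/json" \
#     -d @request.json
cat request.json
```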
API Proxy
To resolve Cross-Origin Resource Sharing (CORS) issues when calling official APIs directly from the browser, DeepL and Nvidia use built-in proxy services by default.
- Default Behavior: When the API URL is empty, the tool automatically uses the built-in proxy (e.g., https://api-edgeone.newzone.top/api/nvidia) to forward requests.
- Custom URL: If you specify a custom API URL in the settings (e.g., a private deployment or direct official address), the built-in proxy is bypassed and the request is sent directly to the address you specify.
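The selection rule described above can be sketched as a tiny helper (the function name is hypothetical; the proxy URL is the one quoted above):

```shell
# Pick the request target: built-in proxy when no custom URL is configured,
# otherwise the user-specified address (hypothetical helper for illustration).
resolve_url() {
  if [ -z "$1" ]; then
    echo "https://api-edgeone.newzone.top/api/nvidia"  # built-in proxy
  else
    echo "$1"                                          # custom address, proxy bypassed
  fi
}

resolve_url ""                          # empty setting: built-in proxy
resolve_url "http://localhost:8000/v1"  # custom setting: direct call
```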
Local Model Integration
For users who wish to deploy and use custom large models locally (such as Ollama or LM Studio), you can connect this tool to your local model and resolve potential Cross-Origin Resource Sharing (CORS) issues using the methods below. To achieve better translation quality, it is recommended to use qwen3-14b or models with larger parameter scales (such as 32B, 70B) in your custom model setup.
Common Interface Addresses
The table below lists default interface addresses for common local model tools. You can use them directly in the configuration or modify them according to your actual port number.
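As a reference sketch, the following assumes the tools' documented defaults (Ollama's OpenAI-compatible endpoint on port 11434, LM Studio on port 1234); adjust the port to match your actual configuration:

```shell
# Default OpenAI-compatible endpoints for common local model tools
# (documented defaults; change the port if you customized it).
OLLAMA_URL="http://localhost:11434/v1/chat/completions"
LMSTUDIO_URL="http://localhost:1234/v1/chat/completions"

# A quick reachability check could then be:
#   curl --silent --max-time 2 "$OLLAMA_URL" >/dev/null && echo reachable
printf '%s\n' "$OLLAMA_URL" "$LMSTUDIO_URL"
```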
Solving CORS Issues
When calling a locally deployed model from the browser, failed connections are most often caused by browser ad-blocking extensions or CORS restrictions. The CORS policy is a browser security mechanism that prevents web pages from arbitrarily accessing resources from different origins, so a request from a web page to a local model interface may be blocked by the browser.
Step 1 | Check Ad/Privacy Plugins: Temporarily disable browser interception extensions, then refresh the page to test.
Step 2 | Enable Local Service CORS: Follow the guide below to allow cross-origin requests for common tools.
Ollama
To enable CORS support for your locally running Ollama service, you can permanently enable it by setting an environment variable. Follow these steps:
- Press Win + X and select Windows PowerShell or Terminal.
- Paste the following command into the open PowerShell window and press Enter:
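The command itself is not shown in this copy; a common form, based on Ollama's documented OLLAMA_ORIGINS variable, is sketched below (Windows command in the comment, macOS/Linux equivalent as the executable lines):

```shell
# Windows (PowerShell or cmd): persist the variable for the current user.
#   setx OLLAMA_ORIGINS "*"
# macOS/Linux equivalent: export the variable (e.g., in your shell profile).
export OLLAMA_ORIGINS="*"
echo "$OLLAMA_ORIGINS"
```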
The * wildcard allows all origins to access the Ollama interface. If you prefer stricter security controls, you can replace it with a specific origin, such as http://192.168.2.20:3000.
Once configured, restart the Ollama service for the changes to take effect.
If you only want to enable CORS temporarily, you can add the environment variable directly when starting the service:
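A sketch of that one-off form, again assuming the OLLAMA_ORIGINS variable (shown as comments, since it requires Ollama to be installed):

```shell
# PowerShell: set the variable only for this session, then start Ollama.
#   $env:OLLAMA_ORIGINS="*"; ollama serve
# macOS/Linux:
#   OLLAMA_ORIGINS="*" ollama serve
```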
LM Studio
- Open the left menu in the software and click the "Developer" icon.
- Enter the local server settings page and click "Settings" at the top.
- Check the "Enable CORS" checkbox (as shown below).

After completing the above settings, this tool should be able to call your local LLM successfully. If you still encounter access issues, check for port conflicts or error messages in the browser console. (Special thanks to mrfragger for sharing configuration experience.)
Language Support
This tool supports translation between 77 major languages.
Language Code Reference
Use the language codes below for batch multi-language configuration (e.g., en, zh, ja, ko):
API Documentation
LLMs support all languages. Language support for the machine translation APIs is listed below:

