Translation API Guide

This tool integrates 5 translation APIs and 8 mainstream Large Language Model (LLM) interfaces, allowing users to choose the most suitable translation method based on their needs:

Classic Translation APIs

| API Type | Quality | Stability | Use Case | Free Limit |
| --- | --- | --- | --- | --- |
| DeepL(X) | ★★★★★ | ★★★★☆ | Long texts; smoother translation | 500,000 characters/month |
| Google Translate | ★★★★☆ | ★★★★★ | UI interfaces, common sentences | 500,000 characters/month |
| Azure Translate | ★★★★☆ | ★★★★★ | Widest language support | 2 million characters/month for the first 12 months |
| GTX API (Free) | ★★★☆☆ | ★★★☆☆ | General text translation | Subject to rate limits (e.g., ~5 million chars every 3 hours) |
| GTX Web (Free) | ★★★☆☆ | ★★☆☆☆ | Small-scale translation | Free |
  • DeepL: Suitable for long texts, offering more fluid and natural translations, but does not support web-based API calls (requires local or server proxy).
  • Google Translate: Stable quality, suitable for short sentences and interface text; supports web-based calls.
  • Azure Translate: Supports the most languages, ideal for multi-language translation needs.
  • GTX API/Web: Free translation options suitable for lightweight use, but with limited stability and call frequency. For example, when mrfragger translated a subtitle file of about 2 million characters (~2MB), the GTX API limit was triggered after just two translation executions.
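To illustrate what a GTX call looks like, here is a minimal sketch that only builds the request URL. The host and query parameters are assumptions based on the public Google Translate web endpoint, not this tool's documentation, so its actual implementation may differ.

```python
from urllib.parse import urlencode

# Hypothetical sketch: the parameter names below ("client", "sl", "tl",
# "dt", "q") are assumptions based on the public Google Translate web
# endpoint, not official documentation.
def build_gtx_url(text, source="auto", target="en"):
    params = urlencode({
        "client": "gtx",  # the "GTX" client identifier
        "sl": source,     # source language ("auto" = detect)
        "tl": target,     # target language
        "dt": "t",        # request the translated text
        "q": text,
    })
    return "https://translate.googleapis.com/translate_a/single?" + params

url = build_gtx_url("Bonjour le monde", target="en")
print(url)
```

Because this endpoint has no API key, it is subject to the rate limits described above; heavy use will be throttled.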

If you have higher requirements for translation speed and quality, you can apply for your own API Key. For application procedures, please refer to the relevant Interface Application Tutorial.

LLM Model Translation

In addition to traditional translation APIs, this tool supports calling various LLMs for intelligent translation, including: DeepSeek, OpenAI, Gemini, Perplexity, Azure OpenAI, Siliconflow, Groq, and highly configurable Custom LLMs.

  • Use Case: Suitable for content requiring high language comprehension, such as literary works, technical documents, and multilingual materials.
  • Highly Customizable: Supports configuration of System Prompts and User Prompts, allowing flexible control over translation style and terminology preferences to meet diverse needs.
  • LLM Model: Generally, fill in the model name provided by the selected interface; if using Azure OpenAI, fill in the corresponding deployment name.
  • Temperature: Controls the creativity and stability of the translation results. The default value is 0.7. Suggestions: 0–0.3 for strict technical/terminology scenarios; 0.4–0.7 for general content; 0.8–1.0 for creative scenarios (e.g., marketing/paraphrasing).
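The settings above map directly onto an OpenAI-style chat request. The sketch below is illustrative only: the model name and prompt texts are placeholders, not values this tool requires.

```python
import json

# Illustrative OpenAI-style chat payload. "deepseek-chat" and both prompts
# are placeholders -- substitute the model name (or, for Azure OpenAI, the
# deployment name) and prompts for your own provider.
payload = {
    "model": "deepseek-chat",
    "temperature": 0.3,  # 0-0.3: strict/technical; 0.4-0.7: general; 0.8-1.0: creative
    "messages": [
        {"role": "system",   # System Prompt: controls style and terminology
         "content": "You are a professional translator. Translate into "
                    "English and keep technical terms unchanged."},
        {"role": "user",     # User Prompt: carries the text to translate
         "content": "Text to translate goes here."},
    ],
}
print(json.dumps(payload, indent=2))
```

Lowering the temperature for terminology-heavy documents and raising it for marketing copy is usually the only tuning needed beyond the prompts.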

Local Model Integration

For users who wish to run models locally with tools such as Ollama or LM Studio, you can connect this tool to your local model and resolve potential Cross-Origin Resource Sharing (CORS) issues using the methods below. For better translation quality, it is recommended to use qwen3-14b or a model with a larger parameter scale (such as 32B or 70B).

Common Interface Addresses

The table below lists default interface addresses for common local model tools. You can use them directly in the configuration or modify them according to your actual port number.

| Tool | Default Interface Address |
| --- | --- |
| Ollama | http://127.0.0.1:11434/v1/chat/completions |
| LM Studio | http://localhost:61234/v1/chat/completions |

Solving CORS Issues

When calling a locally deployed model from a browser, failed connections are commonly caused by browser ad-blocking extensions or Cross-Origin Resource Sharing (CORS) restrictions. CORS is a browser security mechanism that prevents web pages from arbitrarily accessing resources from other origins, so a request from a web page to a local model interface may be blocked by the browser.

Step 1 | Check Ad/Privacy Plugins: Temporarily disable browser interception extensions, then refresh the page to test.

Step 2 | Enable Local Service CORS: Follow the guide below to allow cross-origin requests for common tools.

Ollama

To permanently enable CORS for your locally running Ollama service, set an environment variable as follows:

  1. Press Win + X and select Windows PowerShell or Terminal.

  2. Paste the following command into the open PowerShell window and press Enter:

    [System.Environment]::SetEnvironmentVariable('OLLAMA_ORIGINS', '*', 'User')

    The * wildcard allows all origins to access the Ollama interface. If you prefer stricter security controls, you can replace it with a specific domain, such as http://192.168.2.20:3000.

Once configured, restart the Ollama service for the changes to take effect.

If you only want to enable CORS temporarily, set the environment variable when starting the service (Linux/macOS shell syntax):

OLLAMA_ORIGINS="*" ollama serve

LM Studio

  1. Open the left menu in the software and click the "Developer" icon.
  2. Enter the local server settings page and click "Settings" at the top.
  3. Check the "Enable CORS" checkbox.

After completing the above settings, this tool should be able to call your local LLM successfully. If you still encounter access issues, check for port conflicts or error messages in the browser console. (Special thanks to mrfragger for sharing configuration experience.)
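Because CORS is enforced only by browsers, a quick way to separate a CORS problem from a service problem is to call the local endpoint from outside the browser. The sketch below assumes Ollama's default address and a locally installed qwen3 model; adjust both to your setup. If this script gets a reply while the web page fails, the remaining issue is CORS.

```python
import json
import urllib.request

# Assumes Ollama's default address and an installed "qwen3:14b" model;
# change both to match your local setup.
url = "http://127.0.0.1:11434/v1/chat/completions"
body = json.dumps({
    "model": "qwen3:14b",
    "messages": [{"role": "user", "content": "Translate 'hello' into French."}],
}).encode("utf-8")

req = urllib.request.Request(url, data=body,
                             headers={"Content-Type": "application/json"})
try:
    # A reply here while the browser fails points to CORS, not the service.
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode("utf-8"))
except OSError as exc:
    print(f"Local service not reachable: {exc}")
```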

Language Support

This tool supports translation between over 50 languages, encompassing a broad range of European, Asian, and some African languages. It is suitable for various multilingual content processing scenarios. Supported languages include: English, Chinese, Traditional Chinese, Portuguese, Italian, German, Russian, Spanish, French, Japanese, Korean, Arabic, Turkish, Polish, Ukrainian, Dutch, Greek, Hungarian, Swedish, Danish, Finnish, Czech, Slovak, Bulgarian, Slovenian, Lithuanian, Latvian, Romanian, Estonian, Indonesian, Malay, Hindi, Bengali, Vietnamese, Norwegian, Hebrew, Thai, Filipino (Tagalog), Uzbek, Kyrgyz, Turkmen, Kazakh, Bhojpuri, Kannada, Amharic, Gujarati, Javanese, Persian, Tamil, Swahili, Hausa, Telugu, and Marathi.

For detailed information on supported languages, refer to the official documentation of each service: