Translation APIs

This tool integrates 5 translation APIs and 6 major Large Language Model (LLM) interfaces, allowing users to choose the appropriate translation method based on their needs.

Translation API Comparison

| API Type | Translation Quality | Stability | Use Case | Free Tier |
| --- | --- | --- | --- | --- |
| DeepL(X) | ★★★★★ | ★★★★☆ | Ideal for long texts; more fluent translations | 500,000 characters/month |
| Google Translate | ★★★★☆ | ★★★★★ | Suitable for UI and common sentences | 500,000 characters/month |
| Azure Translate | ★★★★☆ | ★★★★★ | Widest language support | 2 million characters/month for the first 12 months |
| GTX API (Free) | ★★★☆☆ | ★★★☆☆ | General text translation | Rate limited (e.g., ~5M chars per 3 hours) |
| GTX Web (Free) | ★★★☆☆ | ★★☆☆☆ | Suitable for small-scale translations | Free |
  • DeepL: Best for long-form text, producing more natural and fluent translations. It has no browser-callable web API and must be accessed through a local or server-side proxy.
  • Google Translate: Offers stable translation quality, suitable for short sentences and interface text, and supports web-based calls.
  • Azure Translate: Has the most extensive language support, making it ideal for multilingual translation needs.
  • GTX API/Web: Free translation options suitable for lightweight use, but with limited stability and rate limits. For example, when user mrfragger translated a subtitle file of about 2 million characters (~2MB), the GTX API limit was triggered after only two translation runs.
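Rate limits like the one above are usually hit by sending very large texts in a single burst. A minimal sketch of one way to work around this, splitting a large file into smaller pieces before translation (this helper is illustrative and not part of the tool itself):

```python
def chunk_text(text: str, max_chars: int = 5000) -> list[str]:
    """Split text into chunks of at most max_chars characters,
    preferring to break on line boundaries (e.g., subtitle cues)."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)
            current = ""
        # A single line longer than max_chars is split hard.
        while len(line) > max_chars:
            chunks.append(line[:max_chars])
            line = line[max_chars:]
        current += line
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be sent as a separate request, optionally with a delay between requests to stay under the provider's rate limit.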

If you have higher requirements for translation speed and quality, you can apply for your own API Key: Google Translate, Google Gemini, Azure Translate, DeepL Translate. For the application process, refer to the relevant API application tutorial.

LLM Translation (AI Large Models)

In addition to traditional translation APIs, this tool also supports calling various LLMs for intelligent translation, including DeepSeek, OpenAI, Azure OpenAI, Siliconflow, Groq, and a freely configurable Custom LLM.

  • Use Case: Suitable for content that requires a high degree of language understanding, such as literary works, technical documents, and multilingual materials.
  • Highly Customizable: Supports configuring a System Prompt and a User Prompt, allowing for flexible control over translation style, terminology preferences, and more to meet diverse translation needs.
  • LLM Model: In general, enter the model name provided by the selected interface. If using Azure OpenAI, you must enter the corresponding deployment name.
  • Temperature: Controls creativity vs. stability. Default 0.7. Guidance: 0–0.3 for technical/strict terminology; 0.4–0.7 for general content; 0.8–1.0 for more creative outputs (e.g., marketing/paraphrasing).
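The settings above map directly onto an OpenAI-compatible chat-completions request. The sketch below shows how a translation request could be assembled; the prompt wording and the default model name are illustrative assumptions, not the tool's actual prompts:

```python
def build_translation_request(
    text: str,
    target_lang: str,
    model: str = "deepseek-chat",  # model name (or Azure OpenAI deployment name)
    temperature: float = 0.7,      # the tool's default
) -> dict:
    """Assemble an OpenAI-compatible /v1/chat/completions payload."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            # System Prompt: fixes translation style and terminology preferences.
            {"role": "system",
             "content": (f"You are a professional translator. Translate the "
                         f"user's text into {target_lang}. Preserve formatting "
                         f"and technical terms.")},
            # User Prompt: carries the text to be translated.
            {"role": "user", "content": text},
        ],
    }
```

The resulting payload is POSTed to the provider's chat-completions endpoint with your API key in the `Authorization` header.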

Local Model Connection Guide

For users who wish to deploy and use custom large language models locally (such as Ollama or LM Studio), the following guide explains how to connect this tool with your local model and resolve potential CORS (Cross-Origin Resource Sharing) issues. For better translation quality, it is recommended to use models such as qwen3-14b or larger parameter sizes (e.g., 32B, 70B).

Default API Endpoint Examples

The table below lists the default API endpoints for common local model tools. You can use these directly in your configuration or modify them according to your actual port settings.

| Tool | Default API Endpoint |
| --- | --- |
| Ollama | http://127.0.0.1:11434/v1/chat/completions |
| LM Studio | http://localhost:1234/v1/chat/completions |
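If the port differs from the default, only that part of the URL changes. A small hypothetical helper for building the endpoint URL (the ports here are common defaults; substitute your own if you have changed them):

```python
# Common default ports for local model servers (adjust to your setup).
DEFAULT_PORTS = {"ollama": 11434, "lmstudio": 1234}

def local_endpoint(tool: str, port: int = 0, host: str = "127.0.0.1") -> str:
    """Build the OpenAI-compatible chat-completions URL for a local tool.
    Pass port explicitly if your server does not use the default."""
    tool = tool.lower()
    if tool not in DEFAULT_PORTS:
        raise ValueError(f"unknown tool: {tool}")
    return f"http://{host}:{port or DEFAULT_PORTS[tool]}/v1/chat/completions"
```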

Configuring CORS for Local Models

When calling a locally deployed model from a browser, connection failures may occur due to ad-blocking extensions or CORS policy restrictions. CORS is a browser security mechanism designed to prevent webpages from accessing resources from other origins arbitrarily. As a result, when you request a local model API from a webpage, the browser may block it.
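Conceptually, the server-side half of CORS is just an origin allowlist check. The sketch below is a simplified illustration of that check, not Ollama's or LM Studio's actual implementation:

```python
from fnmatch import fnmatch

def origin_allowed(request_origin: str, allowed_patterns: list[str]) -> bool:
    """Simplified sketch of a server-side CORS check: the browser sends an
    Origin header; the server compares it against its allowlist ("*" matches
    everything) and, on a match, echoes it back in the
    Access-Control-Allow-Origin response header. Without that header,
    the browser blocks the page from reading the response."""
    return any(fnmatch(request_origin, pattern) for pattern in allowed_patterns)
```

This is why setting the allowlist to `*` (as in the Ollama step below) makes browser requests succeed: every origin passes the check.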

1. Check for Ad-Blocking Extensions

First, disable any ad-blocking browser extensions and reload the page to test the connection. If the issue persists, continue to the next step.

2. Enable CORS Support on the Local Model Server

1. Ollama

Start the service with the following command to allow requests from any origin:

OLLAMA_ORIGINS="*" ollama serve

The * symbol allows all origins. For stricter security, you can replace * with a specific domain name.

2. LM Studio
  1. Open the left-side menu and click the “Developer” icon.
  2. Go to the Local Server Settings page and click the “Settings” tab.
  3. Check the “Enable CORS” box (as shown below).

LM Studio CORS Configuration Screenshot

Once configured, this tool can successfully call your local LLM. If you still encounter access issues, check whether the port is in use or review the browser console for error messages. (Special thanks to mrfragger for sharing this configuration tip.)
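One quick way to rule out a port problem is to check whether anything is actually listening on the model server's port. A minimal sketch using only the standard library:

```python
import socket

def port_is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port.
    Useful for confirming the local model server is actually running,
    e.g. port_is_listening("127.0.0.1", 11434) for a default Ollama setup."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False, the server is not running or is bound to a different port; if it returns True but browser calls still fail, the problem is more likely CORS or an extension.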

Language Support

This tool supports translation between over 50 languages, encompassing a broad range of European, Asian, and some African languages. It is suitable for various multilingual content processing scenarios. Supported languages include: English, Chinese, Traditional Chinese, Portuguese, Italian, German, Russian, Spanish, French, Japanese, Korean, Arabic, Turkish, Polish, Ukrainian, Dutch, Greek, Hungarian, Swedish, Danish, Finnish, Czech, Slovak, Bulgarian, Slovenian, Lithuanian, Latvian, Romanian, Estonian, Indonesian, Malay, Hindi, Bengali, Vietnamese, Norwegian, Hebrew, Thai, Filipino (Tagalog), Uzbek, Kyrgyz, Turkmen, Kazakh, Bhojpuri, Kannada, Amharic, Gujarati, Javanese, Persian, Tamil, Swahili, Hausa, Telugu, and Marathi.

For detailed information on supported languages, refer to the official documentation of each service: