Frequently Asked Questions
General troubleshooting: press F12 to open DevTools, switch to the Network tab, and inspect the response body of the failing request. Most errors can be identified there.
Why is the translation result empty, showing only the original text, or displaying as null?
This issue may occur for the following reasons:
- Configuration Issues
  - The API Key is entered incorrectly or has not yet taken effect;
  - The translation API parameters are improperly configured.
- Quota and Rate Limit Issues
  - The account's available credits or tokens have been exhausted;
  - The translation request rate is too high, or the API is temporarily unstable.
- Caching Issues
  - The original text was mistakenly saved to the translation cache, so repeated requests return the original text directly.
- Network and Access Restrictions
  - The selected AI API (OpenAI, Gemini, Claude, etc.) does not accept requests from Mainland China IP addresses;
  - The current network or proxy configuration is broken, preventing requests from being sent at all.
✅ Solutions:
- Verify the API Key and API configuration;
- Check account balance and request rate limits;
- Disable or clear the translation cache and try again;
- Ensure that the network environment meets the API access requirements.
If only part of the content fails to translate, click the “Translate” button again. When caching is enabled, the system will automatically skip already translated sections to avoid duplicate requests and repeated charges.
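The cache-skip behavior described above can be sketched roughly as follows. This is a minimal illustration, not the tool's actual internals; `translate_segment` and the cache layout are hypothetical:

```python
# Minimal sketch of a translation cache that skips already-translated
# segments, so a re-run only re-requests the segments that failed.
# All names here are hypothetical, not the tool's real internals.

def translate_all(segments, translate_segment, cache):
    """Translate each segment, reusing cached results.

    translate_segment(text) returns the translation, or None on failure
    (e.g. an empty API response). Failed segments stay uncached, so
    clicking "Translate" again retries only those.
    """
    results = []
    for text in segments:
        if text in cache:                   # already translated: skip,
            results.append(cache[text])     # no duplicate request or charge
            continue
        translated = translate_segment(text)
        if translated:                      # cache only real translations,
            cache[text] = translated        # never the original text
        results.append(translated or text)  # fall back to the original
    return results
```

Note that only successful translations are cached; this is exactly what guards against the "original text saved in the cache" failure mode mentioned earlier.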
What if AI translation quality isn’t ideal?
Troubleshoot in this order:
- Reset to defaults: in “Translation Settings”, reset and retest with a short sample.
- Tune temperature (default 0.7):
  - 0–0.3: technical/strict terminology (stable, consistent)
  - 0.4–0.7: general content (balanced)
  - 0.8–1.0: more creative (marketing/paraphrasing)
Note: most mainstream hosted 70B+ models translate well; quality issues are usually caused by an unsuitable temperature setting.
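The guideline ranges above can be encoded as a small helper. The category names and the exact values picked from each range are illustrative, not part of any API:

```python
# Rough helper mapping content type to a temperature, following the
# guideline ranges above. Category names and exact values are
# illustrative, not part of any API.

TEMPERATURE_GUIDE = {
    "technical": 0.2,   # 0-0.3: strict terminology, stable/consistent output
    "general": 0.7,     # 0.4-0.7: balanced (the default)
    "creative": 0.9,    # 0.8-1.0: marketing copy, paraphrasing
}

def pick_temperature(content_type: str) -> float:
    # Fall back to the balanced default for unknown content types.
    return TEMPERATURE_GUIDE.get(content_type, 0.7)
```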
Why is translation slower? Why does free GTX feel faster?
The key is the “translation rate,” which is essentially the concurrency (how many requests are sent at once).
- GTX feels faster: Higher default concurrency and shorter responses, so it runs faster overall. But it’s more strictly rate‑limited, so large/long runs may get throttled.
- AI interfaces are safer by default: Lower default concurrency to reduce 429 (rate limit) and “context/token limit” errors, so beginners are less likely to hit failures.
Want it faster? Gradually raise the “translation rate” from 20 to 30 or 50; if you start seeing 429 errors or empty results, step back down.
Tips:
- With translation cache enabled, repeated content is skipped; subsequent runs are faster.
- Network conditions and the chosen model also affect speed; try a different model or time window if needed.
Why does this use a third-party interface to access DeepL?
Because DeepL's official service does not allow direct calls from a webpage (the browser blocks such cross-origin requests), we use a "proxy channel" to send the requests for you.
This proxy interface is only used to transmit data and will not collect any of your information, so you can use it with confidence. If you have higher stability requirements, you can also set up this channel yourself.
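If you want to run such a channel yourself, its core job is simply to attach your own auth key server-side, forward the payload to DeepL, and return the response untouched. A minimal sketch of building the forwarded request; the endpoint and header follow DeepL's public API, while the helper itself is hypothetical:

```python
# Minimal sketch of what a self-hosted proxy channel does: take the
# client's payload, attach your own DeepL auth key server-side, and
# forward it unchanged. Nothing from the request is stored. The helper
# name is hypothetical; endpoint and header follow DeepL's public API.

DEEPL_ENDPOINT = "https://api-free.deepl.com/v2/translate"

def build_forwarded_request(payload: dict, auth_key: str) -> dict:
    """Return the pieces an HTTP client needs to relay one request."""
    return {
        "url": DEEPL_ENDPOINT,
        "headers": {
            "Authorization": f"DeepL-Auth-Key {auth_key}",
            "Content-Type": "application/json",
        },
        "json": payload,  # forwarded as-is; nothing is logged or kept
    }
```

Pair this with any HTTP server and client library you like; the point is that the key never leaves your server and the payload is passed through verbatim.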
Will my API Key be saved?
No! Your API Key and other settings are only saved in your own browser. We do not upload or record any of your information.
Why isn't the GTX Web interface enabled?
The GTX Web interface puts significant pressure on the server, so it is not enabled by default.
If you are using this tool on your own computer, you can enable it manually. Please avoid using this interface in a network environment with a global proxy enabled, as it may cause translation anomalies.