FAQ
General troubleshooting: Press F12 to open the browser developer tools, switch to the "Network" tab, and check the Response details of the request. Most errors can be identified here.
What if the translation result is empty, shows only the original text, or returns null?
Common causes:
- Configuration Error: The API Key is invalid, or the translation interface parameters are incomplete.
- Quota/Rate Limited: Account Credits are exhausted, or the request rate is too high and has been temporarily restricted by the interface.
- Stale Cache: An earlier run cached the untranslated original text, and the cache is now returning it directly.
- Network Restricted: The selected interface (e.g., OpenAI, Gemini, Claude, etc.) is restricted in the current region, or proxy/network anomalies caused the request to fail.
✅ Troubleshooting Order:
- Verify the API Key and interface settings.
- Check account quota, rate limits, and 429 errors.
- Disable or clear the translation cache and retry.
- Confirm the network environment supports the interface being used.
- Inspect the API response in DevTools: press F12 (or Ctrl+Shift+I) → open Network → click "Translate" again → open the latest translation request and check Status (e.g., 401/429/5xx) plus the error details in Response/Preview.
If only a few sentences failed to translate, you can simply click "Translate" again; the cache will skip completed content and will not deduct fees repeatedly.
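The status-code triage above can be sketched as a small helper. This is illustrative only (the function name and messages are not part of the product); the actual error text shown in Response/Preview is authoritative.

```python
# Hypothetical helper mapping HTTP status codes seen in the Network panel
# to the likely causes listed above. Illustrative only.

def diagnose_status(status: int) -> str:
    """Return a likely cause for a failed translation request."""
    if status in (401, 403):
        return "configuration error: invalid API Key or interface settings"
    if status == 429:
        return "quota/rate limit: credits exhausted or too many requests"
    if 500 <= status <= 599:
        return "server error: retry later or check the provider's status page"
    if 200 <= status <= 299:
        return "request succeeded: check the cache or response body instead"
    return "unexpected status: inspect Response/Preview in DevTools"

print(diagnose_status(429))  # quota/rate limit: ...
```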
What if a local model reports a cross-origin (CORS) error or fails to connect?
When using local models like Ollama or LM Studio, common causes for failure are browser CORS policies or ad blockers:
- Temporarily disable ad/privacy extensions and refresh.
- Enable CORS for the local service according to the Translation API Guide (e.g., set OLLAMA_ORIGINS=* for Ollama, or check "Enable CORS" in LM Studio).
- If it still fails, check for port conflicts and view the return status code in the Network panel. Company/Campus networks also need to ensure the firewall isn't blocking local ports.
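The underlying rule is simple: the browser only accepts the local service's response if its CORS headers allow the page's origin. A minimal sketch of that check (the header name is standard; the origins shown are made-up examples):

```python
# Illustrative CORS check: a browser accepts a cross-origin response only
# when Access-Control-Allow-Origin matches the page's origin (or is "*").

def cors_allows(headers: dict, origin: str) -> bool:
    """True if a response's CORS headers permit the given page origin."""
    allowed = headers.get("Access-Control-Allow-Origin", "")
    return allowed == "*" or allowed == origin

# With OLLAMA_ORIGINS=* the local service returns a wildcard header:
print(cors_allows({"Access-Control-Allow-Origin": "*"}, "https://example.com"))  # True
# Without CORS enabled, the header is missing and the browser blocks the response:
print(cors_allows({}, "https://example.com"))  # False
```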
What determines translation speed?
The "Concurrent Lines / Context Lines" parameters in the API settings directly determine translation concurrency and speed.
- GTX (Free): Default high concurrency, but prone to rate limiting (429) with large volumes of requests.
- AI Interfaces: Default concurrency is lower to ensure stability.
Speed-up Suggestions:
- Increase Concurrency: Gradually increase "Concurrent Lines" from 20 to 30 or 50. If 429 errors or empty results occur, lower it immediately.
- Enable Context: For LLM translation, enabling "Context-Aware Translation" and appropriately increasing "Context Lines" can effectively improve coherence and efficiency.
- Use Cache: Enabling translation cache allows direct reuse of results, significantly speeding up secondary translations.
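The "increase gradually, back off on 429" advice can be sketched as a simple adjustment rule. The step sizes below are illustrative assumptions, not values used by the product:

```python
# Sketch of the tuning advice above: ramp "Concurrent Lines" up gradually,
# and back off immediately when the interface returns 429.
# Step size and ceiling are illustrative only.

def adjust_concurrency(current: int, got_429: bool, ceiling: int = 50) -> int:
    if got_429:
        return max(1, current // 2)    # halve on rate limiting
    return min(ceiling, current + 10)  # otherwise ramp up toward the ceiling

c = 20
c = adjust_concurrency(c, got_429=False)  # 30
c = adjust_concurrency(c, got_429=False)  # 40
c = adjust_concurrency(c, got_429=True)   # 20
print(c)
```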
What if the AI translation quality is unsatisfactory?
- Restore defaults first: Reset in "Translation Settings" and re-test with a small text segment.
- Then adjust Temperature:
- 0~0.3: Strict terminology, requires stability;
- 0.4~0.7: General scenarios;
- 0.8~1.0: Allows paraphrasing/more creativity.
- Enable context-aware translation to improve dialogue coherence.
Most mainstream online 70B+ models have solid basic translation capability; abnormal output is usually caused by temperature or prompt settings.
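For LLM interfaces, temperature is simply a field in the request body. A minimal sketch of an OpenAI-style chat request, where the model name and prompts are placeholders and only the temperature field is the point:

```python
# Minimal sketch of how temperature is passed in an OpenAI-style chat
# request body. Model name and prompts are placeholders.

def build_request(text: str, temperature: float) -> dict:
    assert 0.0 <= temperature <= 1.0, "keep temperature within the ranges above"
    return {
        "model": "your-model",       # placeholder
        "temperature": temperature,  # 0-0.3 strict, 0.4-0.7 general, 0.8-1.0 creative
        "messages": [
            {"role": "system", "content": "Translate the user text."},
            {"role": "user", "content": text},
        ],
    }

req = build_request("Bonjour", 0.3)  # strict-terminology setting
print(req["temperature"])
```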
Why use a third-party interface to access DeepL?
DeepL officially prohibits direct API calls from web pages, so a relay channel is needed to forward requests. The relay only forwards traffic and does not record your data; if you need maximum stability, you can set up your own dedicated relay.
Is the API Key saved?
No. The API Key and all settings are stored only in the local browser; no server can access them.
Why is GTX Web disabled by default?
GTX Web puts significant pressure on the shared service, so it is disabled by default. You can enable it manually for personal local use; note that a global proxy or an unstable network may cause request anomalies.