Feature Guide

Core Features

Translate Once, Output Many

Translate the same file into multiple languages in a single run — perfect for multilingual subtitles or i18n projects. For example, translate an English subtitle file into Chinese, Japanese, German, and French at once and download all versions as a single package. 70+ languages are supported, with more added regularly.

Translation Cache

Translation results are saved locally in your browser. When parameters match, the tool returns the cached result and skips the API call:

  • Persistent: survives refreshes and browser restarts
  • High capacity: holds millions of records without bloating memory
  • Toggle off: disable temporarily when debugging prompts or model settings
  • One-click clear: clean up everything from the settings panel
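The cache behavior can be pictured as keying each result on the source text plus every parameter that affects the output. The sketch below is illustrative only (the tool's actual key scheme and storage are not documented here; all names are made up):

```python
import hashlib
import json

def cache_key(text: str, params: dict) -> str:
    """Hash the source text together with every parameter that affects
    the output; if any of them changes, the key changes and the cache misses."""
    payload = json.dumps({"text": text, **params}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

cache: dict[str, str] = {}

def translate_cached(text: str, params: dict, translate) -> str:
    """Return the cached result when parameters match; otherwise call the API."""
    key = cache_key(text, params)
    if key not in cache:          # cache hit skips the API call entirely
        cache[key] = translate(text, params)
    return cache[key]
```

This is why toggling the cache off matters when debugging prompts: with identical parameters, a stale cached result would otherwise mask your changes.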

Long Text & Concurrency

Tuned for large documents and batch jobs:

  • Concurrency control: customize request rate — max out paid APIs or throttle free ones to avoid bans
  • Streaming for large files: chunk-based handling keeps the UI responsive
  • Context-aware translation: subtitles and documents get sent with surrounding context so the AI understands flow
  • Per-line retry: failed lines are tracked separately; the rest of the batch isn't blocked
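The concurrency and per-line-retry points above can be sketched together: a semaphore caps how many lines are in flight, and a failing line is recorded rather than aborting the batch. This is a minimal illustration, assuming an async `translate` function (all names are hypothetical):

```python
import asyncio

async def translate_batch(lines, translate, concurrency=20):
    """Translate lines concurrently; a failed line is recorded
    instead of blocking the rest of the batch."""
    sem = asyncio.Semaphore(concurrency)   # request-rate cap
    results, failed = [None] * len(lines), []

    async def one(i, line):
        async with sem:
            try:
                results[i] = await translate(line)
            except Exception:
                failed.append(i)           # tracked separately for later retry

    await asyncio.gather(*(one(i, l) for i, l in enumerate(lines)))
    return results, sorted(failed)
```

Lowering `concurrency` is the throttle described above: fewer lines in flight means fewer simultaneous requests to a rate-limited free API.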

Failed-Line Retry

LLMs occasionally drop a line, return an empty response, or break formatting. When that happens:

  • A red alert at the top of the result panel says how many lines failed
  • Click Retry failed lines to reissue only those — completed content stays as-is and isn't re-billed
  • A copy button lets you grab the failed source rows for manual handling elsewhere
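In pseudocode terms, "retry failed lines" re-issues only the recorded failures and leaves completed results untouched (so they are not re-billed). A hedged sketch, with all names illustrative:

```python
def retry_failed(lines, results, failed, translate):
    """Re-issue only the failed line indices; completed entries
    in `results` are never overwritten or re-sent."""
    still_failed = []
    for i in failed:
        try:
            results[i] = translate(lines[i])
        except Exception:
            still_failed.append(i)   # left for another retry or manual handling
    return results, still_failed
```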

Cancel Translation

Click the close button on the progress modal to abort a running batch. Already-translated lines are cached, so clicking "Translate" again resumes from where you stopped.

RTL Language Auto-Adaptation

Right-to-left languages (Arabic, Hebrew, Persian, Urdu) automatically render right-to-left in the textarea and result view — no manual configuration.


Usage Modes

Batch vs. Single-File

The tool switches modes based on what you upload:

  • Batch mode (default): drop multiple files, they queue up automatically and download as a bundle when done.
  • Single-file mode: upload one file or paste text — review line-by-line, edit before exporting.

Advanced settings let you lock to single-file mode if you prefer.

Tip

JSON Translate is single-file mode only.

One-Click Source/Target Swap

A button sits between the source and target language dropdowns — click to swap them. The button greys out when the source is "Auto-detect" or when multi-language mode is on (you can't swap "auto" or against multiple targets).

API Connection Status

The badge at the top of the main page tells you the current API's status at a glance:

  • Not configured / Needs config: URL or API key missing
  • Configured: filled in but not yet tested
  • Testing, ✓ Connected, or Connection failed: connectivity-test progress and result
  • Free API: free, no-config services like GTX

Click the badge to jump to the API settings panel.

Presets: API Config and Prompts, Separately

API configs and prompts are stored as two independent preset types so they combine freely:

  • API presets: snapshot the current service's URL, key, model, temperature, etc. Useful for switching between local Ollama, a remote gateway, and a paid cloud endpoint.
  • Prompt presets: snapshot the system + user prompts. Switch between a "strict terminology" prompt and a "creative paraphrase" prompt without touching the API config.

Both types support add / load / rename / update / delete, and travel with settings import/export.

Post-Translation Cleanup

After translation, the tool can automatically apply simple string replacements:

  • Character filtering: strip stray symbols like ♪ ♫ from subtitles
  • Format cleanup: remove leftover HTML tags

Tip

This feature does plain string replacement only — no escape sequences (\n, \t etc.). Use the Text Splitter for richer transformations.
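Because the cleanup is literal find-and-replace, it behaves like the sketch below (illustrative; rule names are made up). Note how a rule written as `\n` matches the two characters backslash and "n", not a newline:

```python
def cleanup(text: str, rules: dict[str, str]) -> str:
    """Apply plain, literal string replacements in order.
    No escape-sequence handling: the rule key "\\n" matches a
    literal backslash followed by "n", never a real newline."""
    for find, replace in rules.items():
        text = text.replace(find, replace)
    return text
```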


Advanced Settings

Settings Import/Export

One-click backup of every configuration: API credentials, model parameters, API presets, Prompt presets. The exported JSON imports across devices, ideal for team sharing or moving to a new machine.

General Options

  • Use Cache: enabled by default. Reads cached results when parameters match. Disable temporarily while debugging.
  • Retry Count: maximum retries on failure. Bump it up on shaky networks or rate-limited free endpoints.
  • Retry Timeout (seconds): per-request timeout. Increase for slow models or long text.
  • Remove characters after translation: auto-strip specified characters or fragments from results (e.g., in subtitles, leftover <i> tags).
  • Custom export filename: standardize filenames in batch exports. Placeholders: {name} (source filename), {lang} (target language), {ext} (extension), {date}, {time}. Example: {name}_{lang}_{date}.{ext}.
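The filename placeholders above can be illustrated with a small substitution sketch. The placeholder names come from the list above, but the exact `{date}`/`{time}` formats the tool uses are not documented here, so the formats below are assumptions:

```python
from datetime import datetime

def export_filename(template, name, lang, ext, now=None):
    """Fill the documented placeholders: {name}, {lang}, {ext}, {date}, {time}.
    The date/time formats here are illustrative guesses."""
    now = now or datetime.now()
    return (template
            .replace("{name}", name)
            .replace("{lang}", lang)
            .replace("{ext}", ext)
            .replace("{date}", now.strftime("%Y-%m-%d"))
            .replace("{time}", now.strftime("%H%M%S")))
```

For example, the template {name}_{lang}_{date}.{ext} applied to movie.srt translated into German yields something like movie_de_2024-05-01.srt.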

API Parameter Tuning

Chunk Size

Non-LLM APIs (Google / Azure) split long text into chunks before sending. Chunk size is the per-chunk character cap. Common limits:

  API                   | Max characters per request
  ----------------------|---------------------------
  DeepL API             | 128,000
  DeepLX Free           | 1,000
  Azure Translate       | 10,000
  Google Translate Web  | 5,000
  Google Cloud API      | 30,000

⚠️ Google Translate Web does not preserve line breaks reliably, so chunking is disabled for it.
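Chunking can be pictured as greedily packing whole lines into requests that stay under the per-API cap. A sketch under that assumption (the tool's actual boundary rules are not specified here):

```python
def chunk_text(lines: list[str], max_chars: int) -> list[str]:
    """Greedily pack whole lines into chunks of at most max_chars,
    so no single request exceeds the API's character cap.
    A lone line longer than the cap still becomes its own oversized
    chunk in this sketch; real splitting would have to cut inside it."""
    chunks, current = [], ""
    for line in lines:
        candidate = line if not current else current + "\n" + line
        if current and len(candidate) > max_chars:
            chunks.append(current)
            current = line
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

With Azure's 10,000-character cap, for instance, a 35,000-character document would go out as four requests.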

Delay (ms)

The cooldown between chunked requests. Increase on poor networks or free APIs. For example, Azure Translate Free Tier works best at 5000 ms or higher.

Concurrent Lines

  • Default: 20
  • What it does: the maximum number of lines translated simultaneously
  • Caveat: too high triggers rate limiting (429) or empty responses

Context Batch Size

  • Default: 3 (1 for some providers)
  • What it does: in context-aware mode, how many "target lines" go into a single request. Each request also carries the surrounding context.
  • Trade-off: larger values are faster (fewer requests), but the model must emit more lines per response, raising the chance of formatting drift. Smaller values are steadier but cost more requests. Stick with 3 for documents/subtitles; push to 5 for plain text.

Context Lines

  • Default: 50
  • What it does: how many surrounding lines to include in each context-aware request
  • Caveat: more context improves coherence, but too much may exceed the model's token limit
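Context Batch Size and Context Lines combine like this: each request carries a small batch of target lines plus a window of surrounding lines. A hedged sketch of that pairing (illustrative only; the tool's real prompt layout is not shown here):

```python
def build_context_requests(lines, batch_size=3, context_lines=50):
    """Return (context, targets) pairs: each request translates
    `batch_size` target lines and also ships up to `context_lines`
    of surrounding lines on each side so the model sees the flow."""
    requests = []
    for start in range(0, len(lines), batch_size):
        targets = lines[start:start + batch_size]
        lo = max(0, start - context_lines)
        hi = min(len(lines), start + batch_size + context_lines)
        requests.append((lines[lo:hi], targets))
    return requests
```

This also makes the token-limit caveat concrete: every request grows by up to 2 × context_lines extra lines, so raising Context Lines inflates each prompt even though the translated output stays the same size.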