A context match is one of the most reliable ways to reuse translations without risking quality. This guide explains what 100%, 101% (in-context), and 102%/XLT/ICE context tiers mean across tools. It also covers how to configure files and TMs to preserve them, when to automate, how to price them, and how to troubleshoot loss of context. It’s vendor‑neutral and links to authoritative docs so you can align teams and procurement with confidence.
Overview
A context match (often shown as 101%) is an exact source match with extra confirmation that the context is the same. The context is usually the surrounding text‑flow and/or a stable segment ID.
A 100% match is an identical source text without validated context. A 102% (also called XLT, double‑context, or ICE in some tools) confirms both context signals.
In many CAT tools, context matches outrank 100% matches for pre‑translation and automation. For example, memoQ ranks 102% above 101% above 100% in matching and statistics, and documents “double context” behavior in detail (see memoQ: double context matches).
Definitions and tiers: 100% exact, 101% in-context, 102% double-context/XLT, ICE
Clear definitions help you read leverage reports, set pricing, and decide what can be automated without review. The tiers below normalize common terminology so your policies hold across platforms.
100% exact match
A 100% exact match is identical source text found in the translation memory without validated context. It improves speed and consistency.
It can be wrong if the same sentence appears in multiple places with different meanings. In structured content, the same label or message may be reused differently depending on neighboring sentences or UI state.
Treat 100% matches as helpful suggestions. Let context signals or a quick review protect quality where ambiguity is likely.
101% in-context match
A 101% match is an exact match with at least one validated context signal. The signal is either that the previous/next segment matches (text‑flow) or that a stable identifier (segment key) is the same.
This reduces ambiguity and is safe to auto‑apply in low‑risk content with stable IDs. Tools call this an in‑context match or context exact.
Phrase TMS documents both text‑flow and ID‑based sources and shows where the settings live at file import. It includes examples of when “No context” can overwrite existing translations if used carelessly (see Phrase TMS: TM match context).
102% double-context (XLT) / ICE
A 102% match (XLT/double‑context) is an exact match where both text‑flow and ID‑based context align, or one that meets the tool’s strictest “in‑context exact” (ICE) criteria.
This is typically the highest‑confidence tier and is often pre‑approved in workflows. The core principle is the same across tools: more validated context signals mean higher confidence.
memoQ defines and ranks double context above single‑context matches. It also shows how it affects X‑translate and statistics (see memoQ: double context matches).
How context signals work: text-flow vs ID-based and weighting
Understanding how context is validated lets you engineer files and workflows that maximize high‑confidence leverage. Most tools consider two context sources and apply internal prioritization when both are present.
Text-flow (previous/next segment) context
Text‑flow context checks whether the neighboring segments in the current file are the same as in the TM entry. When both previous and next segments match, confidence rises.
In structured documents, text‑flow may be scoped to a parent node, such as a paragraph, list item, or UI screen. Moving a sentence across sections can break context even if the text stays the same.
Use conservative segmentation and avoid unnecessary reshuffling to preserve text‑flow continuity.
ID-based (segment key) context
ID‑based context uses a stable key from your source system to assert that “this is the same string in the same place.” Examples include a JSON path, PO msgctxt, Android/iOS resource key, or a CMS field ID.
This is robust across releases when keys remain stable, even if order changes. Favor ID‑based context in string‑based localization pipelines, and verify that connectors and importers pass keys through intact.
Tie-breakers and scoring priorities
When both text‑flow and ID‑based signals are present, most teams treat ID‑based as the stronger signal because it survives reordering and batching.
If signals conflict—matching neighbors but a different key—prefer the key and flag for review. Document this in your TM governance policy so linguists know how to resolve conflicts and engineers know which metadata must be preserved.
In practice, setting a clear hierarchy (ID‑based > text‑flow > 100% only) aligns with tool behavior and minimizes false positives.
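The hierarchy above can be sketched as a small classifier. This is an illustrative policy function, not any specific tool's scoring logic; real tools apply their own internal weighting.

```python
def match_tier(source_match: bool, id_match: bool, flow_match: bool) -> int:
    """Classify a TM hit into normalized tiers: 102 > 101 > 100.

    Illustrative hierarchy: both context signals -> 102,
    either one -> 101, exact source only -> 100.
    """
    if not source_match:
        return 0  # fuzzy or no match; out of scope for this sketch
    if id_match and flow_match:
        return 102
    if id_match or flow_match:
        return 101
    return 100
```

A function like this is useful in reporting pipelines, where raw tool output must be bucketed consistently before pricing rules apply.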
Cross-tool mapping of context tiers (memoQ, Phrase TMS, Trados, MadCap Lingo)
Tool naming differs, but the underlying tiers are comparable. Mapping terms avoids confusion in reporting and makes vendor negotiations smoother.
What “XLT/102%,” “ICE,” and “context repetition” mean across tools
“XLT/102%/double context” commonly denotes matches with both text‑flow and ID‑based context verified. “ICE” (In‑Context Exact) is a Trados‑family term often applied to the strictest in‑context exact matches in a single run.
“Context repetitions” are repeated segments that carry the same context within the current file or project. They may be auto‑propagated.
For clarity, align on: 100% (exact only), 101% (single‑context), 102%/ICE (double‑context/highest tier). Use vendor documentation to validate nuance in your stack and to set consistent pre‑translation and reporting rules.
Where behavior diverges and how to normalize in reporting
Divergence happens in how tools compute “neighbors,” how strictly they require both neighbors, and whether they promote an ID to a context key by default. Some tools only do so when configured at import.
Normalize by creating an internal mapping table that states which report bucket (100/101/102) each tool’s label maps to. Document any caveats, such as “ICE equals 102%; ‘context repetition’ counts as 101% within the job only.”
Standardize rate cards on the normalized tiers to remove tool‑specific jargon from invoices and KPIs.
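Such a mapping table can live in code as well as in documentation. The tool labels below are hypothetical examples; confirm the exact strings against each tool's actual reports before relying on them.

```python
# Illustrative mapping of tool-specific labels to normalized billing tiers.
# Label strings are placeholders; verify them in your own leverage reports.
TIER_MAP = {
    ("memoQ", "102%"): 102,
    ("memoQ", "101%"): 101,
    ("Phrase TMS", "101%"): 101,
    ("Trados Studio", "ICE"): 102,       # per the policy "ICE equals 102%"
    ("MadCap Lingo", "context match"): 101,
}

def normalize(tool: str, label: str) -> int:
    # Unknown labels fall back to 100 (exact only) so nothing is
    # over-discounted by accident.
    return TIER_MAP.get((tool, label), 100)
```

Keeping the fallback conservative means a new or renamed label is billed as a plain exact match until someone explicitly maps it.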
Pricing and leverage policies for context matches
Pricing needs to reflect both productivity and risk. Set ranges you can defend with data, then adjust by content type and language pair complexity.
Sample discount ranges and when to use them
Context matches usually deserve deeper discounts than 100% matches, but not all contexts are equal. As a starting point for many programs:
- 101% in‑context: 25–60% discount relative to new words, assuming light review.
- 102%/ICE: 60–100% discount relative to new words, with auto‑apply allowed in low‑risk content.
For regulated content, highly inflected languages, or rapid UI churn, narrow the discount. For example, cap 101% at 25–40% and 102% at 50–80%, and require spot checks.
Where your history shows near‑zero defects on ICE for certain locales and file types, move toward higher discounts tied to measured quality.
ROI model inputs: speed, quality, and review effort
Model ROI by tracking three inputs per tier: average time per word, defect rate, and percent auto‑applied. Use MQM/DQF major/minor categories for defect tracking.
If 102% matches save 70–90% of translation time with negligible errors and 101% saves 40–60% with occasional edits, your blended savings will justify deeper discounts. Log leverage and LQA outcomes in dashboards and review them quarterly.
Mapping error categories to a common framework like the TAUS DQF/MQM typology helps make quality impacts visible to procurement and leadership.
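A minimal blended-savings calculation can make the ROI model concrete. The word volumes and per-tier savings fractions below are hypothetical placeholders; feed in your own tracked data.

```python
def blended_savings(word_counts: dict, time_saved: dict) -> float:
    """Weighted average time savings across leverage tiers.

    word_counts: words per tier, e.g. {"new": ..., "101": ..., "102": ...}
    time_saved: fraction of per-word time saved per tier (new words = 0.0).
    """
    total = sum(word_counts.values())
    return sum(word_counts[t] * time_saved.get(t, 0.0) for t in word_counts) / total

# Hypothetical volumes with savings inside the ranges discussed above:
rate = blended_savings(
    {"new": 5000, "101": 3000, "102": 2000},
    {"new": 0.0, "101": 0.5, "102": 0.8},
)
```

With these placeholder inputs the blended time savings comes to 31%, which is the kind of figure that can anchor a rate-card discussion.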
When to auto-apply vs require human review
Automation should be conservative where context can mislead and aggressive where signals are strong. A simple decision framework keeps teams aligned.
Safe automation scenarios
Auto‑apply context matches when IDs are stable, neighbors haven’t changed, and the language pair is less sensitive to agreement and formality. This is common for UI strings with durable keys and release‑over‑release maintenance.
It also works well in languages with minimal morphology. Keep a watch list of high‑risk strings, such as error messages with variables, for targeted review.
If context is missing or disabled during import, some tools can overwrite existing translations during pre‑translation. Configure imports carefully and lock down automation when context is absent (see Phrase TMS: TM match context).
Review-required scenarios and governance rules
Require human verification when keys have changed, neighbors differ, or the content is legal, medical, or brand‑critical. Also require review for gendered languages, complex plural systems, and style shifts such as formal vs. informal register.
Governance rules should state: do not add translations to TM without proper context metadata. Do not downgrade keys during export/import. Quarantine suspicious matches for review.
MadCap Lingo’s status logic shows how matches appear across views and helps enforce acceptance behavior at the segment level (see MadCap Lingo: statuses and matches).
File preparation and segmentation rules to preserve context (JSON, YAML, PO, RESX, strings)
File prep determines how much context survives into your CAT tool. Configure extractors and segmentation so keys and neighbors are preserved, and avoid preprocessing that strips identifiers.
JSON and YAML key-based context
For JSON and YAML, treat the full path or explicit id fields as segment keys. Keep order stable between releases.
Avoid reformatting that reorders keys or collapses arrays, as it will break text‑flow context. When building pipelines, pass keys through connectors intact so the tool can map them to context at import.
If you must rename keys, provide a deterministic mapping to retain continuity.
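One way to derive path-based keys is to flatten the document, as sketched below in Python. Note how array indices become part of the key, which is exactly why reordering arrays breaks ID-based context.

```python
import json

def flatten(node, path=""):
    """Yield (json_path, string) pairs so the full path can serve as a
    segment key. Array indices are part of the path, so reordering an
    array changes keys and breaks ID-based context."""
    if isinstance(node, dict):
        for k, v in node.items():
            yield from flatten(v, f"{path}.{k}" if path else k)
    elif isinstance(node, list):
        for i, v in enumerate(node):
            yield from flatten(v, f"{path}[{i}]")
    elif isinstance(node, str):
        yield path, node

doc = json.loads('{"checkout": {"title": "Pay now", "errors": ["Card declined"]}}')
keys = dict(flatten(doc))
```

Running this on the sample document yields keys like checkout.title and checkout.errors[0], which a connector can pass through as segment IDs.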
PO and Gettext nuances
Gettext supports msgctxt for disambiguation and handles plural forms via msgid/msgid_plural with nplurals rules. Preserve msgctxt and avoid merging PO files with conflicting contexts.
Mixing contexts erodes 101% reliability. Keep plural variants aligned with your TM rules so each plural form can benefit from context without false matches, especially in Slavic languages.
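As an illustration of why plural handling matters in Slavic languages, here is the standard Russian Gettext plural rule (nplurals=3) transcribed into Python:

```python
def russian_plural_index(n: int) -> int:
    """Gettext plural-form index for Russian (nplurals=3).

    Mirrors the standard header rule:
    Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0
      : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2);
    """
    if n % 10 == 1 and n % 100 != 11:
        return 0
    if 2 <= n % 10 <= 4 and not 10 <= n % 100 < 20:
        return 1
    return 2
```

Because 1 and 21 take form 0 while 11 takes form 2, a TM that flattens plural variants into a single entry will inevitably surface wrong matches.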
RESX/Android/iOS strings and placeholders
On .resx, Android XML, and iOS .strings/.stringsdict, use the resource key as the primary context signal. Preserve placeholders exactly as in source.
Changing placeholder style or order can degrade match quality and cause runtime bugs. If you refactor keys, maintain a mapping layer during one release to avoid a mass loss of context and to let the TM migrate safely.
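A lightweight placeholder check can catch these regressions before runtime. The pattern below covers printf-style and brace-style placeholders as an assumption; extend it for the formats your resources actually use.

```python
import re

# Matches printf-style (%s, %d, %1$s) and brace-style ({name}) placeholders.
# Extend this pattern for stringsdict specifiers or other formats you use.
PLACEHOLDER = re.compile(r"%\d+\$[sd]|%[sd]|\{[A-Za-z_]\w*\}")

def placeholders_match(source: str, target: str) -> bool:
    """True when target contains exactly the source's placeholders.

    Compared as multisets: named and indexed placeholders may
    legitimately reorder across languages, but none may be dropped
    or invented."""
    return sorted(PLACEHOLDER.findall(source)) == sorted(PLACEHOLDER.findall(target))
```

A check like this fits naturally into a CI gate or a pre-import QA step, flagging segments for review instead of blocking the whole job.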
TMX and interoperability: storing and exchanging context metadata
Context must survive exports and migrations. Understanding how IDs and context are represented helps you avoid data loss and keep high‑confidence matches working across systems.
Mapping IDs and context in TMX
Store stable identifiers in tuid, with consistent properties at the TU or TUV level, so other tools can preserve them.
Many vendors add custom properties for context. Use vendor extensions sparingly and document them.
Where possible, rely on standards to carry identifiers and metadata. XLIFF 2.1 natively preserves unit and segment IDs and allows metadata to travel alongside content (see OASIS XLIFF 2.1). ITS 2.0 data categories, like idValue and locNote, can also help annotate context in interchange formats (see W3C ITS 2.0).
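A minimal sketch of a TMX translation unit carrying a stable ID in tuid plus a custom property, built with Python's standard library. The property type x-context-key is an invented example of a vendor-style extension, not a standard name.

```python
import xml.etree.ElementTree as ET

# Minimal TMX 1.4 skeleton: stable ID in tuid plus a custom prop.
# "x-context-key" is an illustrative prop type, not a standard.
tmx = ET.Element("tmx", version="1.4")
ET.SubElement(tmx, "header", {
    "creationtool": "demo", "creationtoolversion": "1.0",
    "segtype": "sentence", "o-tmf": "demo", "adminlang": "en",
    "srclang": "en", "datatype": "plaintext",
})
body = ET.SubElement(tmx, "body")
tu = ET.SubElement(body, "tu", tuid="checkout.title")
ET.SubElement(tu, "prop", type="x-context-key").text = "checkout.title"
for lang, text in (("en", "Pay now"), ("de", "Jetzt bezahlen")):
    tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
    ET.SubElement(tuv, "seg").text = text

xml_bytes = ET.tostring(tmx, encoding="utf-8")
```

Writing a round-trip script like this against a sample TU is a cheap way to verify, before migration, that a receiving tool keeps both tuid and the custom property intact.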
Avoiding data loss during export/import
The most common causes of lost context are unsupported custom fields, namespace stripping, and resegmentation on import. To reduce risk, keep identifiers in well‑known attributes.
Avoid renaming or flattening IDs during ETL. Validate round‑trips with sample jobs before cutover.
If a receiving tool cannot ingest a property, map it to a safe fallback and log the loss. Remediate rather than silently degrading context.
API and CMS pipelines: maintaining context through connectors and versioning
Your connectors and source systems decide whether context is stable or brittle. Design for durable keys and predictable ordering, and version content in ways that minimize churn.
Versioning changes that break context
Large refactors—splitting paragraphs, merging headings, reordering menus—break text‑flow and may produce new keys. Even small CMS template changes can reshuffle segments.
Such changes can drop you from 102% to 100% overnight. Plan changes with localization in mind. Batch refactors, provide key mappings, and communicate deprecations so engineers can adjust import settings and protect TM value.
Connector settings to preserve IDs and order
Enable options that export stable IDs, include full paths, and maintain source order. Disable normalization that rewrites keys or removes empty nodes, as it can shift neighbors.
For multi‑branch development, ensure the connector deduplicates identical IDs across branches. Tag them with version metadata so the TM can reconcile variants without collisions.
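A sketch of that deduplication step, assuming the connector exports records as key/source/branch triples (an assumed shape, not a specific connector's API):

```python
def dedupe_across_branches(records):
    """Collapse identical (key, source) pairs exported from multiple branches.

    records: iterable of dicts like {"key": ..., "source": ..., "branch": ...}.
    Identical strings merge into one entry; their branches are kept as
    version metadata so the TM can reconcile variants without collisions.
    """
    merged = {}
    for r in records:
        entry = merged.setdefault(
            (r["key"], r["source"]),
            {"key": r["key"], "source": r["source"], "branches": set()},
        )
        entry["branches"].add(r["branch"])
    return list(merged.values())
```

Entries whose key matches but whose source differs across branches deliberately stay separate here; those are the variants that need explicit version tagging.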
LQA and reporting: measuring the impact of context matches
Tie your leverage policy to quality and speed outcomes. When reporting shows consistent benefits, procurement and vendors will align more easily on discounts and automation policies.
Interpreting leverage breakdowns
Make sure reports and invoices separate 100%, 101%, and 102%/ICE buckets. Document the mapping across tools.
Track how often each tier is auto‑applied versus reviewed and the edit distance in post‑editing. If your 102% bucket shows minimal changes and low defect rates, expand its automation scope and discount.
If 101% shows frequent edits in certain locales, tighten review rules and adjust pricing accordingly.
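Edit distance can be approximated with the standard library. difflib's ratio is a similarity heuristic rather than a true Levenshtein distance, but it is enough to trend post-editing effort per tier.

```python
import difflib

def edit_ratio(pretranslated: str, final: str) -> float:
    """Rough post-editing distance: 0.0 = untouched, 1.0 = fully rewritten.

    Uses difflib's similarity ratio as a cheap proxy; swap in a real
    Levenshtein implementation if you need exact character counts.
    """
    return 1.0 - difflib.SequenceMatcher(None, pretranslated, final).ratio()
```

Aggregating this per leverage tier over a quarter gives the evidence needed to widen or narrow automation scope.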
Using MQM/DQF categories to track error reductions
Measure error rates using a recognized framework such as the TAUS DQF/MQM typology. Attribute defects to leverage tiers to see where context matches reduce terminology, grammar, or meaning errors.
Over time, you should see fewer functional and accuracy defects in 101%/102% segments compared to 100%. That quality signal supports automation and pricing decisions.
Troubleshooting context mismatches and missing matches
When context suddenly disappears, use a simple playbook to identify what changed and how to restore signals. Start with the pipeline’s most fragile points: segmentation, IDs, and export/import mappings.
After migration between tools
If a migration breaks context matches, first compare TMX property mappings and tuid usage in a small sample. Next, check whether the new tool’s segmentation rules differ.
Even a minor regex change can shift boundaries and nullify context. Finally, validate that ID fields made it through import intact. If not, add mapping rules or custom fields and reimport.
After segmentation rule changes
When segmentation rules change, segments may split or merge. This affects both 100% and 101% behavior.
Identify the patterns affected, re‑align legacy content where needed, and re‑index TMs so the new segments can be found. Communicate the change to linguists and vendors so they expect a temporary drop in leverage and can avoid adding misaligned entries during the transition.
Handling duplicate or colliding IDs
Duplicate or colliding IDs break ID‑based context and can lead to cross‑contamination in the TM. Detect collisions by scanning for identical keys with different source strings.
Enforce namespaces, such as product.screen.key, to restore uniqueness. Backfill corrected keys into the TM with batch updates so future jobs regain 101%/102% matches.
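The collision scan described above can be as simple as grouping source strings by key:

```python
from collections import defaultdict

def find_colliding_ids(entries):
    """Return keys that map to more than one distinct source string.

    entries: iterable of (key, source_text) pairs from a TM or export.
    A key carrying several different sources signals a collision that
    will poison ID-based context.
    """
    seen = defaultdict(set)
    for key, source in entries:
        seen[key].add(source)
    return {k: sorted(v) for k, v in seen.items() if len(v) > 1}
```

Run it over each export before import; any non-empty result is a candidate for namespacing or a key-mapping fix.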
Edge cases and linguistic considerations
Context is not a silver bullet for all languages and content types. Know where it can mislead and design policies to mitigate risk without giving up leverage.
Gender, number, and formality
In gendered or highly inflected languages, the same English source may need different translations. The correct form can depend on the subject’s gender, plurality, or register.
Even with a stable key, neighboring content or variables can change the correct form. Add guardrails: avoid auto‑apply for strings with person‑dependent grammar and require review for formality‑sensitive locales.
RTL/LTR and multi-locale projects
Right‑to‑left scripts introduce ordering and punctuation issues that can affect how neighboring segments are interpreted in the UI. The Unicode Bidirectional Algorithm governs display order.
Mismatches between logical and visual order can confuse reviewers and lead to false confidence in context matches. In mixed‑direction projects, preview in the target UI and ensure placeholders and punctuation are mirrored appropriately before considering auto‑apply.
Quick reference: where to configure context in major CAT tools
You don’t have to memorize menus. Use this quick reference to find context settings fast and confirm how your tool treats IDs and neighbors.
memoQ
Enable and prioritize double context (XLT/102%) under project settings. Review X‑Translate and Statistics behavior to ensure 102% outranks 101% and 100%.
Confirm how structured documents and neighbor checks are handled in your version.
Phrase TMS
Configure Translation Memory Match Context at file import to choose text‑flow and/or segment‑key sources. Review supported file formats and context types.
Understand how “No context” behaves before enabling large‑scale pre‑translation.
MadCap Lingo
Check how statuses and matches appear across panes and how acceptance behavior works for in‑context suggestions. Ensure TMX imports preserve context metadata so it appears consistently across views.
Trados Studio
Use project settings to enable in‑context exact behavior. Ensure IDs from your file types are mapped as context where supported.
Align your leverage report buckets to 100/101/102 internally. Verify how “ICE” entries are counted in statistics and pre‑translation before setting pricing policies.
FAQs
What is a context match in translation memory?
A context match is an exact source match where the tool also verifies context—via neighboring segments and/or a stable segment ID—providing higher confidence than a plain 100% match. Many tools rank context matches above exact matches for automation and statistics; memoQ documents 102% > 101% > 100% in its matching and statistics model (see memoQ: double context matches).
101% vs 100%: what’s the difference?
A 100% match is an identical source without validated context. A 101% in‑context match confirms at least one context signal (neighbors or ID), making it safer to auto‑apply in stable content.
What is a 102% match (XLT) or ICE match?
A 102%/XLT/ICE match confirms both context signals or meets the tool’s strictest in‑context exact criteria. It’s typically the highest‑confidence tier and may be pre‑approved for auto‑apply.
How do memoQ, Phrase TMS, Trados, and Lingo define context tiers?
All distinguish exact from in‑context matches. memoQ defines and ranks double context above single context; Phrase TMS maps context types to file‑format import settings; MadCap Lingo documents statuses and match behavior so teams can enforce acceptance rules.
How to configure context for JSON, YAML, PO, and RESX?
For JSON/YAML, preserve full path or explicit IDs as segment keys and keep order stable. For PO, keep msgctxt and plural entries intact. For RESX/Android/iOS, retain resource keys and placeholder formats. Avoid preprocessing that renames keys or reorders entries.
How are text-flow and ID-based contexts weighted?
In most governance policies, ID‑based context outranks text‑flow because it survives reordering. If signals conflict, prefer the ID and require review. Document your tie‑breakers so teams behave consistently.
Do context matches outrank exact matches in pre-translation?
Yes, in most tools context matches rank above exact matches for pre‑translation and statistics; memoQ explicitly documents 102% > 101% > 100% (see memoQ: double context matches).
What discounts should apply to context matches?
Common starting points are 25–60% off for 101% and 60–100% off for 102%/ICE relative to new words. Adjust by content risk and language pair complexity. Validate with your time/defect data before finalizing rate cards.
When is it safe to auto-apply context matches?
When resource keys are stable, neighbors haven’t changed, and the language pair is less morphology‑sensitive. Avoid auto‑apply for gendered/formality‑sensitive strings, regulated content, or when keys/segments have changed.
How to preserve context in TMX and across connectors?
Store stable IDs in tuid and mapped properties. Avoid stripping namespaces and keep segmentation consistent. Use standards to carry metadata, such as XLIFF 2.1 IDs and ITS 2.0 data categories, to reduce vendor‑specific loss (see OASIS XLIFF 2.1 and W3C ITS 2.0).
How to troubleshoot missing context after a CMS migration?
Check TMX property mappings first, then segmentation differences, then whether IDs survived connector and import. Fix mappings, align segments, and re‑index TMs before resuming automation.
Do context matches measurably reduce LQA defects?
In mature programs, 101%/102% segments often show fewer accuracy and terminology errors than 100% matches when IDs and neighbors are stable. Track defects using DQF/MQM categories to quantify improvements (see the TAUS DQF/MQM typology).
How to prevent TM contamination from incorrect context matches?
Enforce stable IDs, require context metadata on TM additions, block “No context” overwrites, and quarantine suspicious matches for review. Periodically audit TM entries for duplicate or colliding IDs.
Do context matches work reliably across RTL/LTR projects?
Yes, but verify in UI because bidirectional rendering can affect perceived neighbor order. Follow the Unicode Bidirectional Algorithm and preview with real UI data before auto‑applying at scale.
How are ID-based and text-flow contexts weighted when both are present?
Prefer ID‑based context as the primary signal. Use text‑flow to promote to 102%/ICE when both align, and require review if they conflict. Document this hierarchy in your policy and enforcement checklists.