Human Translation vs Post-Editing: Choosing for Localisation

May 16, 2026 · 6 min read
When preparing a software localisation project, one of the first decisions is which production method to use: full human translation or machine translation post-editing (MTPE). The right answer depends on the type of content, the acceptable error rate, and the volume being processed. The decision is not difficult once the criteria are clear.

What separates human translation from post-editing

In human translation, a specialist translator produces the target-language text from scratch. The process involves contextual understanding, deliberate terminology choices, and register adaptation for the target audience. In an ISO 17100 workflow, an independent reviewer checks the text, catching errors that self-review misses.

In machine translation post-editing, an AI engine generates a first version and a human linguist corrects the highest-risk segments. The extent of human review varies. In full post-editing, the linguist reviews every segment. In light or selective post-editing, the human intervenes only in the most critical sections or those the engine handles least reliably.

The practical difference goes beyond final quality. It affects delivery time, cost factors, and the level of terminological control guaranteed across every segment of the text.

When to use each approach in software localisation

Content type is the primary decision criterion. Some elements of a digital product tolerate a wider margin of error; others do not.

Candidates for human translation:

  • User interface strings: buttons, error messages, notifications. Short, decontextualised text that gives an AI engine little to work with, and with direct impact on the user experience.
  • Terms and conditions, privacy policies, and licence agreements. Errors carry legal consequences.
  • Marketing content and onboarding flows. Brand voice and tone are difficult to recover in post-editing.
  • Regulatory or clinical documentation associated with the product.

Candidates for post-editing:

  • Extensive FAQs and knowledge bases with frequent updates.
  • Release notes and changelogs.
  • Technical reference documentation with repetitive terminology and predictable structure.
  • Internal support content not exposed to the end user.

For SaaS platforms with rapid release cycles, combining both approaches is common: UI strings in human translation, support documentation in post-editing. For a closer look at quality requirements in that context, the article on ISO 17100 localisation for SaaS platforms covers the certification side in detail.

The factors that affect post-editing quality

Post-editing does not produce a uniform result. The quality of the output depends on several factors worth evaluating before committing to this route.

Language pair. Machine translation performs best in pairs with higher volumes of training data, such as English-Spanish or English-French. For less common pairs or specific regional variants, the engine generates more errors and the human review workload increases.

Text type. Highly structured, terminologically consistent text is handled well by AI. Text with cultural nuance, humour, metaphor, or informal register produces less reliable output.

Glossary and translation memory. Engines fed with validated terminology and existing translation memories produce significantly better output. Without those assets, post-editing becomes closer to rewriting.

Expected error rate. In a selective post-editing workflow, a residual error rate of 5 to 15 per cent is realistic. For critical content, that margin is unacceptable. For internal reference material, it may be sufficient.
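To make that margin concrete, the expected number of error-bearing segments can be estimated directly. A minimal sketch, assuming the 5 to 15 per cent rate applies per segment; the 2,000-segment inventory is an arbitrary illustration, not a figure from the article:

```python
def residual_error_range(segments: int, low: float = 0.05, high: float = 0.15):
    """Expected range of segments still containing errors after
    selective post-editing, given a 5-15% residual error rate."""
    return round(segments * low), round(segments * high)

# For a hypothetical 2,000-segment knowledge base:
lo, hi = residual_error_range(2000)
print(f"{lo}-{hi} segments may still contain errors")  # 100-300
```

Whether 100 to 300 imperfect segments in a 2,000-segment knowledge base is acceptable is exactly the kind of judgement the content-mapping exercise below is meant to make explicit.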

How to structure the decision within the project

The most effective approach is to map the content inventory before defining the workflow. The mapping should classify each content type on two axes: exposure to the end user and risk of error.

Content with high exposure and high risk goes to human translation with independent review. Content with low exposure and low risk is a natural candidate for post-editing. What falls in between requires an informed, case-by-case decision.
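The two-axis mapping above can be sketched as a simple classification rule. The function name, the coarse high/low scale, and the sample inventory below are illustrative assumptions, not part of any certified workflow:

```python
def assign_workflow(exposure: str, risk: str) -> str:
    """Map a content type to a production method using the two axes:
    exposure to the end user and risk of error ("high" or "low")."""
    if exposure == "high" and risk == "high":
        return "human translation with independent review"
    if exposure == "low" and risk == "low":
        return "machine translation post-editing"
    # Mixed cases require an informed, case-by-case decision.
    return "case-by-case decision"

# Hypothetical content inventory, classified on both axes
inventory = {
    "UI strings": ("high", "high"),
    "licence agreement": ("low", "high"),
    "internal support notes": ("low", "low"),
}
for content, (exposure, risk) in inventory.items():
    print(f"{content}: {assign_workflow(exposure, risk)}")
```

A real inventory would likely need finer grading than a binary scale, but even this coarse version forces the explicit, per-content-type decision the article recommends.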

This exercise has a secondary benefit: it forces the organisation to make quality decisions explicitly, rather than applying the same approach to all content by default.

How M21Global supports this decision

M21Global provides technology and software localisation through both ISO 17100-certified human translation and ISO 18587-certified machine translation post-editing. The process begins with a content inventory analysis to identify the appropriate workflow for each component of the project. No single solution is applied uniformly: a diagnostic precedes every proposal. Contact M21Global to discuss the content profile of the project and receive a structured recommendation.

Request a free software localisation quote

Frequently Asked Questions

Is machine translation post-editing suitable for user interface strings?

Generally not. UI strings are short, decontextualised, and have direct impact on the user experience. Errors are immediately visible. Human translation is the recommended approach for these elements.

What is selective post-editing and how does it differ from full post-editing?

In full post-editing, the linguist reviews every segment produced by the AI engine. In selective post-editing, human review focuses only on the highest-risk segments. Selective post-editing is faster and suits internal reference content with lower stakes.

Does ISO 17100 certification apply to machine translation post-editing?

Not directly. ISO 17100 sets requirements for human translation workflows. Machine translation post-editing is governed by ISO 18587. Some providers hold certification in both standards.

How does the language pair affect machine translation quality?

AI engines perform best in pairs with larger training datasets, such as English-Spanish or English-French. For less common pairs or specific regional variants, error rates tend to be higher and the human review workload increases accordingly.

Can human translation and post-editing be combined in the same localisation project?

Yes, and it is common practice in projects with mixed content inventories. The appropriate workflow is assigned by content type: critical elements go to human translation, low-risk reference content to post-editing.

Need Professional Translation?

Request a free, no-obligation quote for your translation project.

Request Quote