
Quality in the MT Era: Importance and Measurement

By Barbara McClintock, Étienne McKenven and Malcolm Williams

The traditional Canadian translation industry is being challenged both by globalization pressures to cut costs and by the claims of statistical machine translation (MT) providers. Researchers have even announced translation systems modelled on the neural processes of the human brain (artificial intelligence).1 A new trend is that clients are willing to accept a “good enough” translation without revision if it is delivered quickly and costs less. Quality used to be evaluated by means of a revision process, but revision is now often eliminated to cut costs or because no revisers are available. Paradoxically, a recent industry survey revealed that translation quality is six times more important to clients than speed or cost.2

In this tumultuous environment, we asked the authors of this feature issue how they saw language professionals’ eternal quest for quality. Our authors are presented below in alphabetical order.

James Archibald describes how the European Union maintains quality while translating enormous volumes of text. The EU’s solution is to rely on highly qualified language professionals with subject-matter expertise, supported by automated systems and a multi-level revision process.

Louise Brunette, whose main area of research is translation quality assessment, provides an overview of the history of research in translation quality. Ms. Brunette has worked extensively on identifying revision criteria, including accuracy, readability, appropriateness and linguistic coding.

Claude Jean provides a humorous look at quality from the point of view of a language advisor in the scientific translation field.

Marc Lambert describes how important it is to find a middle ground between over-revision and no revision at all. He refuses to give in to the “good enough” philosophy, stating that “Good enough is just not good enough.”

Éric Poirier discusses the quality assessment of MT and compares the output of Google Translate with human translation. He confirms that statistical machine translation still falls short of human translation in a number of areas.

Judith Rémillard explains how MT quality is assessed from the developers’ point of view, using both mathematical formulas and human evaluation techniques that consider, for example, the fluency, linguistic structure (morphology, spelling and grammar) and accuracy of machine translations.

Kara Warburton describes how the quality of terminology resources is increasingly being equated with return on investment and points out that a new framework of best practices and quality metrics is needed to meet demands in the Web 2.0 era.

Some see MT as a threat and others see it as an opportunity. Many language professionals have embraced the new technology as another tool in their toolboxes. It is no surprise that clients want quality, but at the lowest possible cost. However, translation quality is hard to pin down, particularly for statistical machine translation. Our authors still believe that quality is king and have made relevant suggestions about MT and the quality assurance process. We hope that our readers will find this issue of Circuit informative and thought-provoking.

1 UK Business Insider

2 Research Survey 2016: Translation Technology Insights

