Circuit is very pleased to be interviewing Malcolm Williams, a professor at the University of Ottawa’s School of Translation and Interpretation and Co-Chair of the Certification Board of the Canadian Translators, Terminologists and Interpreters Council (CTTIC). He worked for many years at the Translation Bureau as translator, reviser, trainer, evaluator and manager, and led the team that developed the second version of the organization’s translation quality measurement system, known in those days as Sical. His work at the Bureau probably planted the seeds for his very influential book, The Canadian Style: A Guide to Writing and Editing, first and second editions. His other publications include Translation Assessment: An Argumentation-Centred Approach, and he is now preparing a textbook on French-to-English translation.
Circuit: What are the criteria for assessing a translation or a revision? Some say that, in the past, a translation was considered to be of good quality if 90% of its words were correctly translated, or if it had no more than one correction for every 10 words. At the revision stage, the criterion used to be 99% correctly revised words, with a 1% margin of error. Could you give our readers some guidance on how to assess a translation or a revision?
Malcolm Williams: There is no doubt that quantification yields knowledge. As Lord Kelvin said, “When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure it, cannot express it in numbers, then your knowledge is of a meager and unsatisfactory kind.” Furthermore, quantifying errors helps to make translation quality assessment (TQA) more defensible for administrative purposes such as translator certification.
However, the problem with quantifying translation errors is that a translation is not a physical product and therefore cannot be evaluated solely according to the tenets and methods of statistical industrial quality control.
I favour a functionalist, criterion-referenced approach to TQA. First, the assessor needs to determine the situational factors at play, such as client requirements and preferences, readership and end use, and on that basis set the criteria against which the translation should be assessed: accuracy, target language quality (which could be broken down into several components), quality of terminology, etc. Second, the analysis of situational factors will no doubt show that some criteria are more important than others, so the assessor needs to weight the criteria, assigning a relative value or percentage to each in the assessment grid or rubric. Third, the same analysis should enable the assessor to classify errors according to seriousness (critical, major, minor). This is an essential part of the approach, since one critical error can make an otherwise excellent translation unusable without revision.
The result of all this is a grid, or rubric, with a description of quality indicators for each quality or performance level (unsatisfactory, satisfactory, very satisfactory, etc.) and each criterion. The description elements change with the situational factors at play, so the rubric is modular.
Number of errors per number of words can be incorporated into the description, but room should be left for a “holistic” judgment, because numbers do not tell the whole story. This approach is definitely more complicated than a simple error count, but in my opinion it yields more detailed, more accurate and more useful information about the quality characteristics of a translation.
C.: How do we know when to throw out a translation and start all over again? With the advent of automatic translation, some clients believe they can cut costs by doing a “pre-translation” themselves by statistical machine translation and then asking a translator to “fix it up.” Does this mean that the translator’s role will change in the future? Should translators be studying editing at university?
M.W.: We already have a course in editing and revision for our graduating students. Given the prevalence of translation technologies and clients’ cost-cutting concerns, a case could be made for introducing a post-editing component not only into this course but also into general and specialized translation courses.
That being said, the translation community has a job to do in educating clients about risk and value—about the danger of overreliance on post-editing and the difference between it and what Alan Melby (2013 ATA conference) calls “machine-assisted human translation,” which gives the client greater value and mitigates the risk.
C.: SDL, the well-known provider of translation technology, recently announced the release of its Translation Technology Insights research study. The 2016 survey had 2,784 respondents in more than 100 countries and nine languages. The survey revealed some critical insights, in particular, “Translation quality is 6x more important than speed and 2.5x more important than cost.” Translation quality assessment is an area of great interest for you. What do you think of the survey finding? Is there a place for “good enough” translations in your view?
M.W.: It’s not surprising that a survey of language professionals would produce these findings, given the perceived impact of translation technologies on prices and job opportunities. A survey of language service providers’ clients would in all likelihood generate very different numbers.
With regard to “good enough,” I think we have to apply the functionalist approach once again. In today’s climate, the term has come to be equated with the idea that, even though a translation contains many transfer and target language defects, it is acceptable to the client and therefore deliverable if it gets the main arguments across and is delivered on time. However, we would define the term differently if we factored in readership and end use. “Good enough” for the internal information purposes of a small number of specialists is one thing; “good enough” for a public health bulletin or promotional document to be distributed nationwide is quite another.
C.: No one wrote a specifically Canadian grammar before you, and The Canadian Style is still used extensively today by the federal public service. Is your new textbook intended to replace The Canadian Style? Please tell us about it.
M.W.: The primary target readership is translation students and teachers in universities. For English-to-French translation, Canadian translation schools can build their translation courses in large part around Jean Delisle’s acclaimed textbook La traduction raisonnée. There is no equivalent resource for French-to-English translation, although books such as Michele Jones’ The Beginning Translator’s Workbook and Hervey and Higgins’ Thinking French Translation do provide some good material for course design and content.
The overriding purpose of the new book will be to help train F-E translation students for work in government and business, whether as employees of public and private sector institutions or as independent translators. Accordingly, the context chosen for the textbook is instrumental translation (other authors use the terms pragmatic and professional), and examples will be taken from a variety of non-literary fields of specialization in the humanities, social sciences, technology, and natural sciences. Most of the material used will be from Canadian sources, but the learning objectives are applicable in any French-to-English translation environment.
What are the target courses? While the examples, texts, and exercises will be drawn from a variety of specialized fields, the goal is not to help develop trainees’ competency as specialists in a given field. Instead, the textbook is designed to help develop general translation competency through learning that will occur over several courses and academic years. That being said, students learning with the help of this book should acquire knowledge, skills, methods and techniques that will enable them to tackle source texts in a variety of fields successfully.
Will the book be of interest to practising translators? We’ll see. I plan to cover translation methods and procedures, documentation and search tools, recurrent translation problems, and the many issues involved in ensuring target language quality at the microtextual and macrotextual levels.
C.: Do you think that there is still a uniquely Canadian English grammar?
M.W.: I’ve spent a lot of time comparing Canadian, American and British editorial style manuals, dictionaries and grammars. I’ve also compared The Canadian Style with the two earlier versions of the Canadian government style guide. There is clearly a uniquely Canadian lexicon, but it would be difficult to argue that there is, or ever was, a distinct Canadian typography or grammar. The similarities clearly outweigh the differences.
C.: Are there any books that have made an impact on you? What are you reading these days?
M.W.: For a start, Seleskovitch and Lederer’s Interpréter pour traduire (1984), Gutt’s Translation and Relevance (1991), and Colina’s Translation Teaching: From Research to the Classroom (2003). All three books have helped me, as a teacher, to establish a strong connection between theory and translation practice and to get across to students the need to “interpret” the author’s intent in light of co-text and context, rise above the level of the sentence, and understand the text as argument. In addition, the many publications coming out of the plain language movement in the U.S. and U.K. have strongly influenced the way I approach translation and teach translation, revision and editing.
This year I’ve been heavily involved in “train the trainer” courses, so I’ve been doing a lot of reading in education theory and examining how the “generic” principles and methods developed in that field can be applied to translation teaching and assessment.
C.: Thank you, Mr. Williams.