It’s not that difficult: Translators, Interpreters, and Linguists

A surprising number of people seem to misunderstand the distinctions between translators, interpreters, and linguists. Worse yet is the misunderstanding that any of these categories of professionals should be expected to be able to do the job of the others.

Admittedly, even respected dictionaries leave room for—and can be accused of promoting—confusion between these terms. People spending large budgets on language services, however, should reasonably be expected to distinguish between these three terms of art in the field of language services. The differences are not difficult to grasp.

To be sure, there are a small number of people who cross the boundaries between the professions, but these are quite rare, and a translator should not be assumed capable of interpreting, or an interpreter of translating.


A translator engages in translation, which is the production of a text written in a target language from a text written in a source language. Translators write; they work without uttering the words they are translating. A Japanese-to-English translator works from a Japanese source text, translating it into an English target text. Only a small portion of Japanese-to-English or English-to-Japanese translators are capable of interpreting between those languages, and most do not even want to be interpreters.


An interpreter engages in interpreting (occasionally, and confusingly, called interpretation), which is the expression of a message originally spoken in the source language as a message spoken in the target language. While there are exceptions, most Japanese/English interpreters consider themselves exclusively interpreters and do not actively seek out translation assignments. Many of them would not be good translators.


The term linguist is just a bit more problematic because it has a range of meanings. Strictly speaking, a linguist is a specialist in, not surprisingly, linguistics, which deals with the characteristics of language, including aspects such as structure, syntax, semantics, and origins.

In many years of serving the commercial translation market, we have encountered only a small number of working commercial translators who were also linguists, and have met very few linguists who are actively translating, who are even capable of translating, or who wish to translate as a profession. That separation is even greater when we consider linguists who might interpret. There are very few such people. As with translators, interpreters and linguists are two quite distinct groups.

People who should know better, but don’t, misuse the term linguist, and some who do know better purposefully misuse it.

You often see translation companies (particularly the ones more accurately characterized as translation brokers) boasting of all the “linguists” they have. This makes one wonder why they would talk about a group of professionals not generally engaged in or proficient at translation when they are trying to sell translation services.

Perhaps they think it makes the people they sell translations to feel better that their documents are being translated by people called linguists. Or perhaps they think that the translators they purchase translations from will feel better working for low rates if they can wear the title of linguist.

To be fair, there is the argument that linguist just means someone who is good at a number of languages, but professional translators realize that being “good at a number of languages” doesn’t mean you can translate.

There you have it, a short description of three often-confused professions. Although it might be optimistic for language professionals to expect people outside these fields never to confuse them, when a non-specialist such as a client gets it right, we feel more comfortable than when we need, for example, to inform an interpreting client that we will not be translating in their meeting or deposition.

Where did the chatbot hear that?

The buzz in cyberspace over the past year or more has arguably been buzzier than anything we’ve seen in a while. It is the buzz about AI chatbots, the highest-profile one at the moment being ChatGPT and its peripheral functions, created by OpenAI.

The buzz has been triggered by ChatGPT’s abilities in several areas. One is its ability to come up with plausible answers to questions, in English that borders on human-created text.

Another is its amazing ability to come up with things in diverse styles such as haiku and rap on demand.

Yet another is ChatGPT’s ability to make breathtakingly stupid factual mistakes, some being total fabrications, which have come to be called hallucinations, but that could still fool unwary and credulous chatbot-struck users. A related problem is its own credulity in believing leading questions and producing responses that rely on falsehoods and mischaracterizations in questions put to it.

These aspects of ChatGPT’s behavior aside, the appearance of such chatbots means that humans must pay more attention to credibility and accountability than ever before.

If a human friend tells you something that is not only shocking but incredible in the true sense of the word, you can ask the friend “Where in the world did you hear that?” And if your friend says she heard it from YouTube, you might be just a bit skeptical. If she learned it from a certain highly opinionated podcaster known for promoting conspiracy theories, you might start to wonder about the trustworthiness of that friend’s statement, including statements about other subjects. But you should be thankful that your human friend is at least willing and able to reveal the source of her information, enabling you to evaluate it. That’s where AI chatbots part ways with the real world.

ChatGPT and its like collect information from countless Internet sources, some good, some not-so-good, and some totally wrong. The learning process is an opaque and impenetrable black box. You might wonder what sources were used to generate a totally fabricated and factually incorrect account of events that you know is wrong; or about what sources were used to generate a true, useful response. You might not care if you know the answer to the question you asked and are only window-shopping for chatbot failure stories to post online.

But what about when you ask ChatGPT or its now-multiplying wannabe clones a non-trivial question you don’t know the answer to? If the chatbot gives you a plausible-sounding answer, you or others might believe it and could make decisions based on the chatbot response.

We have experimented numerous times with leading questions we know the answers to; ChatGPT failed miserably in too many cases to repair the damage already done to its reputation with us. Getting facts wrong about events that are not likely to affect our lives or fortunes is one thing. Fabricating answers to questions that matter more, however, is potentially very dangerous.

Since AI chatbots learn from what humans have written on the Internet, the quality of what those humans write is even more important than before. When you consider that much of what is written on the Internet is not even written by fully identified humans, the potential problems come into focus. It is important to be able to know and evaluate the sources of an AI chatbot’s learning. But before that, it would be better if the chatbot itself could know and evaluate the sources of the information from which it is learning, thereby front-loading quality into its knowledge base and, by extension, its responses. The anonymity and lack of accountability that have long been characteristic of Internet information make that quite difficult.

That anonymity and lack of accountability are a problem even when chatbots are learning from human-sourced information. But when chatbots start flooding the Internet with their own content, sometimes helped along by humans who trusted them, will chatbots effectively start learning from other chatbots that themselves have learned from not-very-learned humans or even from other chatbots? The image of the multiplying brooms in Disney’s The Sorcerer’s Apprentice comes to mind. Let the believer beware.

Choosing what feels good is a useful way to avoid facing what is true.

Adopting AI into your translation workflow is fine if it actually helps you, and it provides a wonderfully anodyne topic for a conference presentation, one that avoids the more serious challenges facing freelance translators. But adopting AI will not get you work or serve as a survival strategy if you are dependent on brokers for your translation work.

The translation brokers that most translators depend on for work are well into their shift toward seeking only post-editors. They pay peanuts, and you cannot live on peanuts unless…