Who gets to decide how good is good enough in translation?

Online interaction among freelance translators has lately been filled with comments to the effect that translations created using AI are not good enough. Assertions are made that grave things will happen if AI is allowed to translate. Further assertions are made that clients will come back to “us” (to whom, I wonder?) when they realize the problems. And some translators posit that loss of life and grave legal consequences will follow if AI is used to translate medical texts or documentation for machinery that could be dangerous if operated incorrectly.

All of these comments serve the very good purpose of what I call—borrowing from my NY upbringing—kvetch-bonding (wound-licking also comes to mind) between translators who feel they are being attacked on all sides by the migration of agencies to using AI to replace professional translators.

But most of the comments ignore the reality that the translation the kvetchers have been making their living from is, before (and, yes, even above) concerns over quality, a business, and that the people pursuing that business and their paying clients (translation consumers) get to decide whether a translation is good enough.

The subtext here is that most translators are not pursuing translation as a business, but have for decades—and to an increasing degree lately—been participating in what has more recently been characterized as a gig-work economy.

The agencies have denied translators their agency in determining how they work. Requiring the use of specific software products and the use of hamster wheel translation platforms are good examples of this. Reverse auctions where translators bid jobs down are another. Of course, complicity on the part of freelance translators is a necessary element in making this gig-work economy function, and function it does.

Ultimately, as has been the case for as long as I can remember, the balance between cost and quality will be evaluated by and will inform business decisions by people in the translation business and their paying clients. The volume of translation work given to professional translators has significantly dropped precisely because significant numbers of clients are willing to bear the risks of lower quality if it is accompanied by a much lower cost.

No amount of complaining by freelance translators is going to change that. And the level of complaining among translators themselves, who cannot change things, reminds me of that old Chuck Berry song. Translators need to stop playing with their own ding-a-ling and interact with the people who can make a difference and who are making decisions about how good is good enough for them.

Translators will not succeed in moving agencies that are already heavily invested in efforts to eliminate professionals. That points clearly to the clients who pay for and consume translations.

As a translator, your marching orders are clear. Can you hear the drums? Or are you playing with your…

With a few lifeboats still available, too many translators are both cursing and holding onto a sinking ship.

Numerous translators are actively discussing in various online venues the problems with AI translation and are saying that clients will come back to them when they discover the problems of AI. Although these discussions provide opportunities for bonding among colleagues, they serve no other identifiable purpose, and they certainly do nothing to impede the obvious headlong race into a world in which translation is viewed as a commodity by both translation brokers and their translation-consuming clients.

The underlying, persistent reality is that translation is a business.

The amount of money translation brokers have needed to pay for the translations they purchase for resale has been a constant profit-diluting annoyance to the LSB (language service broker) community. In response, brokers have employed numerous devices over the years to lower their translation purchase price. One device is the mandatory use of broker-specified CAT tools, with an accompanying discounting of the rates translators can receive. Another is forcing translators to work on hamster-wheel online translation platforms in order to receive work.

But now the brokers on which most translators depend have a new way to lower (or almost eliminate) the cost of obtaining translations to sell, this being the elimination of professional translators from the translation process.

And there is abundant evidence that they are succeeding at doing just that.

One reason for the brokers’ success is that the good-enough paradigm has been widely adopted and is working for a huge portion of the translation market.

Another reason is more serious for freelance translators and needs to be recognized by translators wishing to survive:

Brokers conduct themselves based on the correct understanding that very few of the translators from whom they purchase translations can compete with them in acquiring direct clients themselves. Most translators don’t even know who their potential direct clients are. And, even if they do know, they generally don’t know whom to approach at those clients or how to approach them. Many, for a variety of reasons, simply have no way to access potential direct clients.

The adoption of AI by brokers succeeds largely by the monetization of their control of customers, combined with the inability of most translators to compete with brokers. It succeeds because good enough is good enough and, more critically, because most translators are trapped, with little ability to compete with brokers and no alternative income-earning path.

To survive by translating for earnings anywhere near what they previously could expect to earn, translators will need to acquire direct clients. For most translators, that will not be possible.

That is where broker-dependent freelance translators are, and it is essentially the end of the road for most translators wishing to pursue translation as a way to earn a living.

Thoughts on stock photos and AI-generated photos

You often see company websites with photos of what are intended to look like groups of employees, sometimes sitting in a meeting room or standing around chatting. These are almost all stock photos, purchased for the purpose of decorating a company website with attractive photos of attractive people who have no connection with the company using the photo.

A typical stock photo of a group includes:

  • handsome males,
  • beautiful females, and
  • a woke makeup of genders, ethnicities, and ages.

Some people might look at the photo and believe that these are actually people who work at the company or are customers for the company’s products or services. Many will not. Is that an honest way to present the company? Perhaps some people would say no.

Now take an example of a company using a typical AI-generated photo depicting the same type of group, which includes:

  • handsome males,
  • beautiful females, and
  • a woke makeup of genders, ethnicities, and ages.

There are still people who would say this is dishonest, but there is an aspect of the photo that would disclose clearly to visitors to the website that what they are viewing is fake. One out of five of the people depicted will have the wrong number of fingers on one of their hands or have their left or right hand attached to the end of the wrong arm.

There you have it, honesty restored by embracing one of the strengths of AI, anatomical hallucination.

(On the occasions when we use AI for photos, and we never use it for translation, we flag that fact with mouseover text that indicates the source.)

Where did the chatbot hear that?

For more than a year now, the buzz in cyberspace has arguably been buzzier than anything we’ve seen in a while. It is the buzz about AI chatbots, the highest-profile one at the moment being ChatGPT and its peripheral functions, created by OpenAI.

The buzz has been triggered by ChatGPT’s abilities in several areas. One is ChatGPT’s ability to come up with plausible answers to questions, in English that borders on human-created text.

Another is its amazing ability to come up with things in diverse styles such as haiku and rap on demand.

Yet another is ChatGPT’s ability to make breathtakingly stupid factual mistakes, some being total fabrications, which have come to be called hallucinations, but that could still fool unwary and credulous chatbot-struck users. A related problem is its own credulity in believing leading questions and producing responses that rely on falsehoods and mischaracterizations in questions put to it.

These aspects of ChatGPT’s behavior aside, the appearance of such chatbots means that humans must pay more attention to credibility and accountability than ever before.

If a human friend tells you something that is not only shocking but incredible in the true sense of the word, you can ask the friend “Where in the world did you hear that?” And if your friend says she heard it from YouTube, you might be just a bit skeptical. If she learned it from a certain highly opinionated podcaster known for promoting conspiracy theories, you might start to wonder about the trustworthiness of that friend’s statement, including statements about other subjects. But you should be thankful that your human friend is at least willing and able to reveal the source of her information, enabling you to evaluate it. That’s where AI chatbots part ways with the real world.

ChatGPT and its like collect information from countless Internet sources, some good, some not-so-good, and some totally wrong. The learning process is an opaque and impenetrable black box. You might wonder what sources were used to generate a totally fabricated and factually incorrect account of events that you know is wrong; or about what sources were used to generate a true, useful response. You might not care if you know the answer to the question you asked and are only window-shopping for chatbot failure stories to post online.

But what about when you ask ChatGPT or its now-multiplying wannabe clones a non-trivial question you don’t know the answer to? If the chatbot gives you a plausible-sounding answer, you or others might believe it and could make decisions based on the chatbot response.

I have experimented numerous times with leading questions I know the answers to; ChatGPT failed miserably in too many cases for the damage to its reputation with me to be repaired. Getting facts wrong about events that are not likely to affect our lives or fortunes is one thing. Fabricating answers to questions that are more important, however, is potentially very dangerous.

Since AI chatbots learn from what humans have written on the Internet, the quality of what those humans write is even more important than before. When you consider that much of what is written on the Internet is not even written by fully identified humans, the potential problems come into focus. It is important to be able to know and evaluate the sources of an AI chatbot’s learning. But before that, it would be better if the chatbot itself could know and evaluate the sources of the information from which it is learning, thereby front-loading quality into its knowledge base and, by extension, its responses. The anonymity and lack of accountability that have long been characteristic of Internet information make that quite difficult.

That anonymity and lack of accountability are a problem even when chatbots are learning from human-sourced information. But when chatbots start flooding the Internet with their own content, sometimes helped along by humans who trusted them, will chatbots effectively start learning from other chatbots that themselves have learned from not-very-learned humans or even from other chatbots? The image of the multiplying brooms in Disney’s Sorcerer’s Apprentice comes to mind. Let the believer beware.

Species of Translation Origin

Many countries, particularly ones with their own manufacturing capability or that are wary of products produced elsewhere, require products sold domestically to be marked with the country of origin.

Translation sellers have never had to fulfill that requirement and, in recent decades, the large translation brokers selling Japanese-to-English translation became power users of yet other translation brokers in China, where almost no translators have either Japanese or English as their native language. What could go wrong? Well, lots of things, but that is a topic for a different article.

Enter AI, and the problem of origin is escalated to one of whether a translation originated from a human or something else. Just as products of questionable origin have their origin laundered by having the product processed in some way in a respectable and trusted country, artificial translations can and do have their origins laundered by having members of our species process them to make them at least look usable. Translation purchasers and users should beware of such species of origin laundering.

There are good reasons why we do not use AI to translate.