Plan B needs to be upgraded to Plan A.

Executive Summary: People aiming to build a career as a freelance translator and thinking that they can do it by working for agencies need to rethink that plan. Freelance translating for agencies is quickly ending. The best advice at this point is simply don’t bother. There are more hopeful paths and other things you should think about doing with your language ability.

Every time I read through posts made by freelance translators on platforms such as LinkedIn, many of which complain about their plight and the behavior of translation-brokering agencies, it becomes more obvious that freelance translating for agencies is ending. It started ending a few years ago with AI use by agencies and, arguably, much earlier than that, when agencies started using other, non-AI technologies to reduce their price for purchasing translation services from freelancers. Now, the major players that sell most of the translation that users purchase are eliminating their need for freelance translators entirely.

If you are in a university learning languages, continuing with the goal of becoming a freelance translator is unwise. In fact, it’s reckless. If you have what it takes to acquire clients not in the translation business themselves, you might want to try for that. But most freelancers will find that impossible.

One path for averting career failure is to acquire a non-language specialization that can stand on its own as the basis for a career, while leaving open the option of combining language knowledge with that specialization.

At this point in the evolution of the translation business, thinking that expertise in languages or translation alone, without another specialization, is going to turn into a career is recklessly optimistic. You need a career Plan B that does not rely on freelance translating for agencies, and you should treat that as your Plan A.

Huh? Where’s the Hospital?

Having a brain that still functions well enough and a ready stock of real-world experiences, I am not a frequent flier to AI destinations, but a confusion that happened (to someone else) the other day prompted me to see if ChatGPT would have been any less confused.

I was sitting waiting for a bus at the みなと赤十字病院入口 (Minato Red Cross Hospital Entrance) bus stop, when I saw a clearly confused young foreign woman looking alternately at her mobile phone and the name of the bus stop at which she had just gotten off a bus.

Seeing her confusion and that she was on crutches, I asked her where she was looking to go.

She pointed to the name of the bus stop and wondered why she couldn’t see the hospital. Where were they hiding what was surely a substantial medical facility? The answer to the mystery lies in the way bus stops are sometimes named, and in the fact that other buses stop at a bus stop named みなと赤十字病院, without the 入口, which means entrance. Why is a bus stop named XYZ Entrance not at the entrance, while one without “Entrance” is at the actual entrance to XYZ?

The “Entrance” in the bus stop name refers to the stop being at the entrance of the road leading to the hospital, on a route that does not stop at the hospital itself.

Would ChatGPT, in all its wisdom, have solved this problem before forcing someone on crutches to wait for yet another bus or walk what would be a considerable and perhaps annoying distance on crutches?

My prompt to ChatGPT:

There is a bus stop in Yokohama with the name 赤十字病院入口. Where is that bus stop?

Here’s what it came back with:

The bus stop 赤十字病院入口 (more precisely, みなと赤十字病院入口) is located in Yokohama’s Naka Ward, serving the Yokohama Minato Red Cross Hospital.

[Wow, it could give me the formal name of the hospital!]

Here’s the detailed information:

  • Address: 神奈川県横浜市中区新山下3丁目15 (or nearby 3-1–6)

[I guess that’s right, but didn’t check]

[Who cares, unless you are planning on dropping a hospital buster bomb on the place?]

  • It’s situated along the bus routes that connect Yokohama / Sakuragicho / Motomachi-Chukagai stations to the hospital. [Correct, except that there is no Motomachi-Chukagai bus stop on some of the buses stopping where this occurred.] The stop is right by the hospital entrance. [Wrong. It was fooled just like the young lady on crutches. It’s nowhere near being “right by the hospital entrance.”]

ChatGPT proved that, just like a human without knowledge or experience of how bus stops can be named, it could mistake the name for a literal description of the stop’s location.

Of course, I suspected that it would be fooled and knew the reason for the confusion.

People in the neighborhood who might frequent the hospital and have more general real-world experience riding buses (something that ChatGPT does not have and will never have) realize that the hospital entrance bus stop is not at the hospital entrance. The hospital stop is at the hospital, and the hospital entrance stop is about a five-minute walk away.

Most of them probably think of that bus stop as the one right at the entrance to a Manchinro shop, at which I frequently buy shumai, which is what I had just done when I encountered the confused visitor.

Microsoft’s LinkedIn Reaches Deep Enshittification

After devoting sufficient time to verifying whether my problems with Microsoft’s LinkedIn were simply misunderstandings on my part, I have concluded that the image Microsoft projects for its LinkedIn platform is a far cry from what is actually going on behind the curtain. That, combined with the ever-increasing phoniness, fakery, and desperation of people whose posts are puked at me by Microsoft’s algorithm, makes a change of approach an urgent imperative.

The Microsoft agenda is demonstrated by shadow-banning of posts.

Microsoft regularly shadow-bans posts that run counter to its business model and its agenda of promoting AI and rewarding people who promote AI. A post calling this technology out, or even gently criticizing AI or its proponents, will get very little reach. In fact, just about any negative post on other topics will receive the same reception. Cheerleading posts that are mindlessly positive are rewarded. People who want to game the algorithm to get reach are welcome to do so. I have neither the time nor the inclination to be so inauthentic.

Phony is the new real: Faking it ’til you make it has been extended to faking it even after you make it or fail to make it.

While a large portion of LinkedIn posts have long been characterized by self-congratulatory fluff, including people proclaiming their excitement or honor about things neither worthy of excitement nor capable of bestowing honor, additionally annoying elements have been recently added to the fluffosphere on LinkedIn. One is the proliferation of charlatan coaches. Originally, these coaches were mostly claiming to teach people how to “stand out” (an overworked expression we need to retire) by creating a killer profile or posting things that attracted work.

The coaching business has now been adopted by AI prompt engineering coaches who claim you can “succeed at AI” (whatever that might mean) by learning how to give AI the proper prompts. They’ll teach you. Just sign up.

Sadly, some translators looking to make money from their colleagues have now taken up the LinkedIn coaching business. They claim to teach translators how to succeed in an AI-transformed business environment. Just attend one of their paid webinars or contact them for a consultation. I have seen no evidence that these translation coaches have any intention of telling their willing victims that, having adopted AI, they will still need to acquire non-AI-using clients, something that is not possible for more than a tiny portion of freelancers.

Translators’ organizations engage in the same type of disingenuous behavior, some of them promoting paid seminars about how to adapt to or adopt AI, aimed at freelancers whom all the adapting and adopting in the world will not save from falling into the low-paid post-editing swamp that awaits them. Without clients, AI means nothing, but translators are not told that part of the story. This is shameful behavior on the part of both individual coaches and translators’ organizations.

Posts promoting AI are so numerous that banning user accounts is meaningless.

Every day I receive countless LinkedIn posts that are suggested to me by Microsoft’s algorithm and that promote AI as the greatest thing since sliced bread; AI is the final solution to all our problems, and all you need to do is learn how to give AI the correct prompts. I had started banning accounts that made these posts, but it is clear that the account-banning approach is futile. Ban ten accounts today, and Microsoft will just come up with ten (or twenty) more tomorrow that promote AI. As Microsoft pushes forward with its agenda and business model, it becomes obvious that I should pull back from LinkedIn.

Desperation and Delusions

My feed on LinkedIn is also graced by countless posts from colleague translators delusionally denying that they can be replaced by AI, when in fact AI is already replacing large numbers of freelancers in the translation process. They also claim—correctly, of course—that AI cannot beat a good human translator, but they are apparently oblivious to, or unwilling to accept, the reality that it doesn’t matter. There is a huge demand for translation that is good enough if cheap enough, and that demand only grows as the price drops and the speed increases. No amount of complaining by freelancers is going to stop the AI-using agencies they have depended on from accessing that market and eliminating the need to pay professional translators, who are left with only extremely low-paid post-editing.

All of this silliness and annoying AI promotion has brought me to the decision to post nothing more on my page on LinkedIn beyond links to content in venues I control. I might make an occasional comment on the pages of other translators or in a group, but I’m not going to donate to Microsoft any content on my LinkedIn page, which does not promote any business activity for me.

This decision is the natural result of Microsoft’s push to make AI the only topic and the only bandwagon people are urged to jump on. I am not going to jump.

AI Translation: Uncaring Emulation

I welcome entities that create documents using a collection of software commands known as an AI to order translations of those documents from a “colleague AI.” Those documents don’t deserve less, but they don’t deserve more.

Most entities, however, have sentient humans write things that need to be translated. Their translation deserves the skill and care that only human professionals can provide.

AI translation merely emulates human skill—sometimes not very well—by emulating the behavior of a human. To do that, AI doesn’t need to understand anything, and it doesn’t understand anything; it just emulates understanding.

The most serious flaw of AI translation, however, is that, when dealing with human clients needing translation, it is not capable of caring.

Uncaring emulation. Don’t you and your documents deserve better?

Surprise! ChatGPT thinks it knows me.

I asked ChatGPT who William Lise is, and it came back with this in just a few seconds.

It believed everything I say about myself on my own website. Good for ChatGPT. It slipped up, however, by including, ostensibly as a photo of me, a photo of a guy who died in New Zealand a few years ago. Nice try. I am still around. A translator in Austria, Michael Bailey, helped me track this down, probably by using an image search.