AI Translation: Uncaring Emulation of Human Behavior

Entities that create documents using a collection of software commands known as an AI should order translations of those documents from a “colleague AI.” Those documents deserve no less, and certainly no more.

Most entities, however, have sentient humans write the things that need to be translated. Those texts deserve the skill and care that only human professionals can provide.

AI translation merely emulates the behavior of a human translator, sometimes not very well. To do that, AI doesn’t need to understand anything, and it doesn’t understand anything; it just emulates understanding.

The most serious flaw of AI translation, however, is that it is not capable of caring about the human clients who need translation.

Uncaring emulation. Don’t you and your documents deserve better?

De-AIification

Until recently, my parent business website displayed two images that I had generated using AI, meaning that I am guilty of the energy use required to create non-essential images. I have taken them down and commit to not using AI-generated images (or AI-generated anything) in the future.

Oh, and unlike countless people active in cyberspace, I do not steal images of any sort and republish them without the owner’s permission.

Unlawful use of copyrighted material—including images—is rampant in cyberspace. The near-guaranteed anonymity and unreachability of the offenders have led people to make their peace with, meaning surrender to, this unlawful behavior, and I don’t think a system with accountability will appear any time soon.

Cyberspace is a lawless land, and that lawlessness destroys trust and fattens the bank accounts of cyber-oligarchs with no demonstrable socially redeeming qualities.

Would you like some AI help writing that note? No, I’m good.

There is a great deal of discussion these days about silicon-based AI helping carbon-based individuals write. I do not use any of the many available online AI writing assistants.

While I wasn’t looking, however, Apple installed just such an AI feature right on my iPhone. Now, when I write a note offline, I am offered the option of having it rewritten: I can select one of a few styles, or even give the writing tool my own instructions on how to rewrite what I have written.

As a test, I told it to “make this sound like a plea for assistance,” although the note was nothing of the sort. It worked, but the result was written in a style that is not mine, with expressions that I never use.

The availability of such functions on a device that almost everybody already owns raises the specter of a world in which many people can write things that, well, they cannot write, in a way they cannot write them (or write anything, perhaps), projecting a persona they cannot rightly claim as their own. Essentially, it is AI-assisted persona spoofing.

This bodes well neither for people whose livelihoods depend upon writing nor for people who must judge others or make decisions based on what they write. Let the reader beware, and let the writer be real.