Nothing to See but a Performance

There is something comical on both sides of the recent kerfuffle over Australia’s performative banning of under-16-year-olds from having social media accounts.

There are a number of things that just don’t make sense.

(1) There is apparently no punishment for having an account if you’re under 16.

(2) The social media platform companies are ostensibly risking multimillion-dollar fines for not complying with this law. It is pretty obvious what will happen: people like Zuckerberg will be called to testify somewhere and will say “We spent all this money trying to comply, so we’re doing our best.” Does anybody believe there is actually a desire to lose all those customers, which means losing all that behavioral information?

(3) As I write this note, countless young people are making new anonymous social media accounts—backed up by new anonymous email accounts—in which they claim to be 16 or older, or somehow fake it. There are reports of this already happening. There is a will to do it, and they will find a way.

There will also be a market for social media accounts that can be repurposed to get around this performative restriction.

(4) The vast majority of social media accounts are essentially anonymous to start with. I’ve never made a social media account where I had to prove my identity.

The platform owners don’t care. All they care about is your behavioral information, which they vacuum up and use to sell advertising, meaning they have no incentive to actually kill accounts or prevent accounts from being made by people under 16.

(5) Social media companies already use countless people in the third world to monitor for murders, rapes, and other atrocities in posts on their platforms. Are those people going to be diverted to the task of verifying that account owners are 16 years or older?

Even if social media companies wanted to comply (they don’t), I can’t believe that people in developed economies are going to be enlisted at reasonable salaries to do all this monitoring. That would require a level of diligence that tech-bro billionaires have never shown even in trying to figure out who currently holds active accounts, let alone their ages. It is a lawless land, and that’s just fine with them, because it’s a very profitable land.

(6) The banning of children under 16 from social media is purely performative and might actually serve as a diversion from concerns about what social media is doing to people 16 and older. In that sense, it could actually be welcomed by social media owners. They will fail at doing what would actually be necessary—because failure is inevitable—and then they’ll look like they’ve done their part, while the remaining social media users are still targets for all the evil things that go on in social media, including, but not limited to, behavior manipulation by the platform owners, deep fakes, and much worse.

What has cyberspace turned into?

On “normal” social media platforms:

  • AI-generated videos of UK royalty dancing with their royal children.
  • Crotch shots of AI-generated female athletes with definition in the crotch area that would get them disqualified from whatever AI-generated event they were going to compete in.
  • AI-generated dogs saving the lives of AI-generated babies.
  • AI-generated aircraft crashing onto the decks of AI-generated aircraft carriers.
  • AI-generated cars crashing into AI-generated trailer trucks.
  • AI-generated animals having AI-generated foreign objects removed from their skin by AI-generated veterinarians, a genre that has become common recently, along with many other, even more revolting fake videos involving animals.
  • A constant stream of AI text slop from places like India, Macedonia, Pakistan, Cambodia, and Vietnam, often accompanied by unlawfully published photos, the poster neither giving credit for the images nor citing sources, and the slopper certainly not having permission to use the images.
  • A constant stream of ads—mostly confirmable as coming from Vietnam—for WiFi connection equipment and services in Japan, clearly aimed at Asian laborers who have been brokered into Japan to work.
  • Loan-shark ads aimed at foreign laborers working in Japan.
  • AI-generated voices of well-known people on a video of two seconds of the person to establish “authenticity” and continuing for several minutes with the faked voice but without any image, because the lip movement would give it away instantly. There are countless fake Neil deGrasse Tyson videos like this. The voice is very close, but the cadence of the fake narration clearly is not his. He has recently called this out in a video of his own, pointing out the damage this does to trust. This theft of images and spoofing of voices is criminal, but will go unpunished, thanks to the guaranteed anonymity of social media and apathy of users, many of whom have been numbed to this behavior by a torrent of unlawful posts.
  • Inspirational stories that never happened, mostly from places like Macedonia.
  • Ads claiming to sell you the method of getting rich quickly using ChatGPT. One recent testimonial boasts of being able to buy a luxury car and a home after just two months of stock market investment using ChatGPT.
  • Ads for underground banks aimed at Asian laborers in Japan who want to repatriate money they earn in Japan.

On some platforms, video slop you didn’t ask for and don’t need will autoplay after you watch something that you have actually elected to watch, requiring you to escape to avoid seeing it.

From the LinkedIn social media platform in particular—and it is a social media platform:

  • Vapid AI slop posts, with both text and graphics generated by AI, whose subject matter is most often totally unrelated to what the poster purports to do when not generating AI slop.
  • AI-smelling text posts that evoke many comments, with each comment replied to by AI, the replies all within a word or two of one another in length.
  • Microsoft-suggested posts promoting AI or promoting AI promoters.
  • Ads promoting AI.
  • Posts from soon-to-be-out-of-work translators claiming that AI will not replace human translators because AI doesn’t understand culture.
  • Unwanted irrelevant connection requests, mostly from the Global South (although I have fixed that problem).
  • Ads for paid webinars run or promoted by translation organizations to teach translators how to succeed and be better at their jobs, although those jobs are quickly disappearing.
  • Translators’ organizations announcing activities of little or no relevance to translators working in the high-demand mainstream domains that are rapidly shrinking because of AI-using agencies.
  • Constant non-productive and futile complaints from freelance translators about this or that agency, this preaching to the choir constituting a waste of attention and time that could be better spent thinking about what to do next (hint: for most, it’s not freelance translation or probably translation at all).
  • Investment scam ads (including investment in mango plantations).
  • Ads for homes in Dubai for USD 1 million.

There you have it. Who could ask for anything more? More importantly, who asked for any of this?

LinkedIn is becoming just another social media cesspool.

In just a week or so, I have seen a rapid and disturbing increase in the number of posts thrown at me by Microsoft’s LinkedIn that are clearly Facebook-like engagement-harvesting slop.

A typical post describes at length some historical or current event that might have happened, or some person, although some posts are clearly total fabrications. Sources are not cited, because there are none to cite.

Most of these posts are lengthy (as if someone told ChatGPT to write N hundred words about XYZ), and much of the writing has the undeniable cadence and style of AI.

Many of these posts are from non-anglophone places. Many of them are accompanied by AI-generated images, and sometimes by photographs that the poster is highly unlikely to have obtained permission to use. This turns a post that is merely annoying drivel into an unlawful act that is annoying drivel.

In any event, while Microsoft seems skilled at detecting when posts are in any way negative, particularly regarding its platform or AI, and effectively shadow-bans such posts (as it did to this blog post today when it was uploaded to LinkedIn), it actively promotes the above-noted garbage, which is nothing more than AI slop aimed at harvesting engagement for someone or something with nothing to say or offer.

This garbage needs to be kept on Facebook or other social media platforms, although an argument can be made that the social media platform called LinkedIn is rapidly coming to resemble the Facebook cesspool, and I’m making that argument.