A Business Model for Manufacturing Automobiles That Are Good Enough

Custom Global Mobility (CGM) is in the business of selling bespoke cars manufactured from the ground up to meet detailed customer requirements. This includes the body, the engine, the power train, and all other parts of the cars it sells.

Although CGM has no manufacturing facilities and no knowledge or understanding of automotive technology, it positions itself as a car manufacturer producing high-quality vehicles that satisfy detailed customer requirements. In the process of doing that, however, a few things are going on that its customers don’t know about.

Because CGM’s business model does not require it to manufacture vehicles itself, and indeed explicitly calls for outsourcing every aspect of vehicle production, the company aims to drastically reduce the costs normally associated with a traditional manufacturing operation.

CGM boasts of a global team of automotive experts, claiming to have thousands of expert automotive design and manufacturing people turning out best-in-class cars that precisely meet customer requirements. A look under the CGM hood reveals quite a different story.

Since CGM has no automotive manufacturing personnel, it must find and use vendors to design and manufacture the vehicles it sells. Those tasks are complicated by CGM itself having almost no expertise in automotive manufacturing, or even any knowledge of (and ability to evaluate) automobiles. But it undertakes to overcome these deficiencies with what it calls smart outsourcing.

Lacking in-house expertise, CGM outsources even the task of evaluating and certifying vendors to other vendors. For a manufacturing vendor, the typical sequence is this: a vendor candidate is given manufacturing drawings for an auto part (generated by outsourcing to one of CGM’s design vendors) and is asked to deliver a part for evaluation. For example, as a test, CGM might ask a candidate to manufacture a con rod.

Naturally, since CGM cannot judge the quality of the trial con rod and has no incoming inspection department of its own, it sends the part to another vendor—sometimes one retained just for such certification—for evaluation. If the evaluation vendor gives a thumbs up, the candidate is certified as a manufacturing vendor for CGM.

Such certified vendors may then sell parts or assembled vehicles to CGM, on the condition that they use only the machine tools specified by CGM. The use of any other milling machines, lathes, or machining centers, for example, is prohibited.

Although a vendor might have questions during the manufacturing process, CGM, being totally unknowledgeable about automotive technology, cannot answer them itself and usually tells the vendor to just manufacture in accordance with the drawings. Going back to the vehicle purchaser, who is not an automotive expert and just wants to buy a car, is not an option for CGM, since the purchaser would probably not be able to answer such questions and might (correctly) conclude that CGM doesn’t know what it’s doing.

Because of a lack of skilled vendors willing to work cheaply enough to satisfy CGM, the company commonly needs to use vendors that produce sub-standard products. Although that might sound problematic, the use of such vendors has actually reduced CGM’s overall manufacturing costs. The defective products are sent to other vendors for repair, followed by an outsourced “quality assurance” process relying on what CGM calls “automotive quality assurance experts.”

In recent years, CGM has started using an automated manufacturing system to produce fully assembled vehicles in-house. They simply dump the client’s requirements for a vehicle into the system, and out pops an assembled vehicle.

Occasionally (and, more seriously, unpredictably), the automatic vehicle-production system builds totally faulty parts into vehicles or assembles them incorrectly, so CGM uses vendors (more of those automotive quality assurance experts) to find and correct these problems, sometimes including the re-machining and reassembly of numerous parts. CGM finds that the abundant availability of automotive quality assurance experts willing to work cheaply enough, combined with the low cost of the initial manufacturing of vehicles by its automated in-house systems, enables its business model to succeed. And the key to all of this is that the vehicles produced are good enough to satisfy customers and cause no safety problems.

If you are a freelance translator, the above should sound quite familiar. If you are a translation consumer, however, you might not realize what goes on after you order a translation from a translation broker, but be aware that, more and more these days, it is likely to be similar to CGM’s approach to manufacturing vehicles. There are better ways of providing products and services to clients.

Thoughts on stock photos and AI-generated photos

You often see company websites with photos of what are intended to look like groups of employees, sometimes sitting in a meeting room or standing around chatting. These are almost all stock photos, purchased to decorate a company website with attractive photos of attractive people who have no connection with the company using them.

A typical stock photo of a group includes:

  • handsome males,
  • beautiful females, and
  • a woke makeup of genders, ethnicities, and ages.

Some people might look at the photo and believe that these are actually people who work at the company or are customers for the company’s products or services. Many will not. Is that an honest way to present the company? Perhaps some people would say no.

Now take an example of a company using a typical AI-generated photo depicting the same type of group, which includes:

  • handsome males,
  • beautiful females, and
  • a woke makeup of genders, ethnicities, and ages.

There are still people who would say this is dishonest, but there is an aspect of the photo that would disclose clearly to visitors to the website that what they are viewing is fake. One out of five of the people depicted will have the wrong number of fingers on one of their hands or have their left or right hand attached to the end of the wrong arm.

There you have it, honesty restored by embracing one of the strengths of AI, anatomical hallucination.

(On the occasions when we use AI for photos—we never use it for translation—we flag that fact with mouseover text indicating the source.)

Where did the chatbot hear that?

The buzz in cyberspace over the past year and more has arguably been buzzier than anything we’ve seen in a while. It is the buzz about AI chatbots, the highest-profile of them at the moment being ChatGPT and its peripheral functions, created by OpenAI.

The buzz has been triggered by ChatGPT’s abilities in several areas. One is its ability to come up with plausible answers to questions, in English bordering on human-created text.

Another is its amazing ability to come up with things in diverse styles such as haiku and rap on demand.

Yet another is ChatGPT’s ability to make breathtakingly stupid factual mistakes, some being total fabrications, which have come to be called hallucinations, but which could still fool unwary and credulous chatbot-struck users. A related problem is its own credulity: it believes leading questions and produces responses that rely on falsehoods and mischaracterizations in the questions put to it.

These aspects of ChatGPT’s behavior aside, the appearance of such chatbots means that humans must pay more attention to credibility and accountability than ever before.

If a human friend tells you something that is not only shocking but incredible in the true sense of the word, you can ask the friend “Where in the world did you hear that?” And if your friend says she heard it on YouTube, you might be just a bit skeptical. If she learned it from a certain highly opinionated podcaster known for promoting conspiracy theories, you might start to wonder about the trustworthiness of that friend’s statements, including statements about other subjects. But you should be thankful that your human friend is at least willing and able to reveal the source of her information, enabling you to evaluate it. That’s where AI chatbots part ways with the real world.

ChatGPT and its like collect information from countless Internet sources, some good, some not-so-good, and some totally wrong. The learning process is an opaque and impenetrable black box. You might wonder what sources were used to generate a totally fabricated and factually incorrect account of events that you know is wrong; or about what sources were used to generate a true, useful response. You might not care if you know the answer to the question you asked and are only window-shopping for chatbot failure stories to post online.

But what about when you ask ChatGPT or its now-multiplying wannabe clones a non-trivial question you don’t know the answer to? If the chatbot gives you a plausible-sounding answer, you or others might believe it and could make decisions based on the chatbot response.

I have experimented numerous times with leading questions I know the answers to; ChatGPT failed miserably in too many cases for its reputation with me to recover. Getting facts wrong about events that are unlikely to affect our lives or fortunes is one thing. Fabricating answers to more important questions, however, is potentially very dangerous.

Since AI chatbots learn from what humans have written on the Internet, the quality of what those humans write is even more important than before. When you consider that much of what is written on the Internet is not even written by fully identified humans, the potential problems come into focus. It is important to be able to know and evaluate the sources of an AI chatbot’s learning. But before that, it would be better if the chatbot itself could know and evaluate the sources of the information from which it is learning, thereby front-loading quality into its knowledge base and, by extension, its responses. The anonymity and lack of accountability that have long been characteristic of Internet information make that quite difficult.

That anonymity and lack of accountability are a problem even when chatbots are learning from human-sourced information. But when chatbots start flooding the Internet with their own content, sometimes helped along by humans who trusted them, will chatbots effectively start learning from other chatbots that themselves have learned from not-very-learned humans or even from other chatbots? The image of the multiplying brooms in Disney’s Sorcerer’s Apprentice comes to mind. Let the believer beware.

Species of Translation Origin

Many countries, particularly ones with their own manufacturing capability or that are wary of products produced elsewhere, require products sold domestically to be marked with the country of origin.

Translation sellers have never had to fulfill that requirement, and in recent decades the large translation brokers selling Japanese-to-English translation have become power users of yet other translation brokers in China, where almost no translators have either Japanese or English as their native language. What could go wrong? Well, lots of things, but that is a topic for a different article.

Enter AI, and the problem of origin escalates to one of whether a translation originated from a human or something else. Just as products of questionable origin have their origins laundered by being processed in some way in a respectable and trusted country, artificial translations can and do have their origins laundered by having members of our species process them to make them at least look usable. Translation purchasers and users should beware of such species-of-origin laundering.

There are good reasons why we do not use AI to translate.

Of Mice and Mousetraps

If you don’t have the budget or don’t want to pay for real cheese, you might try putting a photograph of a piece of cheese in your mousetrap, but don’t expect to catch anything but a photograph of a mouse, and an out-of-focus one at that. People who choose to use artificial intelligence to translate should not be surprised to find that they receive an artificial translation, and a poor one at that.

There are good reasons why we choose to continue to provide only professional translation.