Parmy Olson, a Bloomberg Opinion columnist covering technology, says there is no such thing as Artificial Intelligence. The more accurate term is “machine learning.” The difference in labeling matters: “AI” pins the blame on machines rather than on the people who design these systems.
Artificial Intelligence conjures the notion of thinking machines. But no machine can think, and no software is truly intelligent. The label, according to Olson, may be one of the most successful marketing terms of all time.
ChatGPT and other large language models like it are simply mirroring databases of text – close to a trillion words for the previous model – whose scale is difficult to contemplate. Helped along by an army of humans reprogramming them with corrections, the models glom words together based on probability. That is not intelligence.
These systems are trained to generate text that sounds plausible, yet they are marketed as new oracles of knowledge that can be plugged into search engines. That, according to Olson, is foolhardy.
Not helping matters: Terms like “neural networks” and “deep learning” only bolster the idea that these programs are human-like.
Neural networks aren’t copies of the human brain in any way; they are only loosely inspired by its workings. Long-running efforts to try to replicate the human brain, with its roughly 85 billion neurons, have all failed. The closest scientists have come is emulating the brain of a worm, which has just 302 neurons.
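To make that “loosely inspired” point concrete, here is a minimal sketch of what a single artificial “neuron” actually computes: a weighted sum pushed through a squashing function. Nothing biological is happening, and the inputs, weights, and bias below are illustrative numbers, not values from any real system.

```python
import math

def neuron(inputs, weights, bias):
    # A weighted sum of the inputs plus a bias, squashed into (0, 1)
    # by a sigmoid. That is the entire "brain-like" operation: arithmetic.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Illustrative values only -- not taken from any real network.
print(neuron(inputs=[0.5, 0.1, 0.9], weights=[0.4, -0.6, 0.2], bias=0.1))
```

A “deep” network is just many layers of this arithmetic stacked together; the vocabulary is borrowed from neuroscience, the mechanics are not.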
No wonder, then, that ChatGPT was powerful enough to pass four University of Minnesota Law School exams, a final exam at Stanford’s medical school, and a Wharton Business School MBA test – exams that reward fluent recall of patterns found in its training text – yet failed miserably at Singapore’s PSLE (Primary School Leaving Examination), a test for 12-year-olds whose questions demand step-by-step reasoning.
The program has been trained on terabytes of text to “learn” the probabilities that words occur together. Then, when you prompt it with a unit of text, it outputs its prediction of what the next unit of text will be.
That’s essentially all it does. ChatGPT’s predictions are not based on reasoning or understanding of our prompts – only the likelihood of which words come next. Hence linguist Noam Chomsky’s disparagement of ChatGPT as “super autocomplete.”
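As a toy illustration of that “super autocomplete” idea, here is a minimal sketch of a bigram model: it counts which word follows which in a tiny corpus, then “generates” text by sampling a likely next word. This is an assumption-laden stand-in – real large language models use neural networks over subword tokens at vastly larger scale – but the predict-the-next-unit loop has the same shape.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the terabytes of text a real model sees.
corpus = "the cat sat on the mat the dog sat on the cat".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Sample a continuation weighted by observed frequency --
    # prediction by probability, not understanding.
    counts = follows[word]
    if not counts:  # dead end: this word was never seen with a follower
        return random.choice(corpus)
    options, weights = zip(*counts.items())
    return random.choices(options, weights=weights)[0]

# "Generate" a sentence by repeatedly predicting the next word.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Nothing in that loop checks facts or reasons about meaning; it only replays the statistics of the text it was fed.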
Its linguistic output sometimes mimics logic but not consistently.
Elon Musk and over 1,000 other artificial intelligence luminaries have published an open letter calling for a six-month “pause” on further AI development. Why? So it doesn’t threaten humanity by creating digital minds so powerful that humans can’t control them.
What idiots!
Come on, ChatGPT is a text generator, not even a fact checker! It won’t threaten humanity!
However, ascribing intelligence to programs like these gives them undeserved independence from humans, and it absolves their creators of responsibility.
The promise of working with intelligent machines borders on misleading. “AI is one of those labels that expresses a kind of utopian hope rather than present reality, somewhat as the rise of the phrase ‘smart weapons’ during the first Gulf War implied a bloodless vision of totally precise targeting that still isn’t possible,” says Steven Poole, author of the book Unspeak, about the dangerous power of words and labels.
Poole says he prefers to call chatbots like ChatGPT and image generators like Midjourney “giant plagiarism machines” since they mainly recombine prose and pictures that were originally created by humans.
But “I’m not confident it will catch on,” he says.