How Will Generative AI Affect the Next Generation?
Generative AI, as defined by Adam Zewe of MIT News, is “a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.”1 The underlying computing paradigm is modeled loosely on the brain’s neural networks rather than on conventional software, which follows a fixed set of instructions to yield a predictable result. Generative AI is designed to “learn” so that it can make its own decisions. Some experts speculate that as the technology progresses, it could become “self-aware,” a prospect that makes some scientists shudder.
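For readers who want to see that distinction in concrete terms, here is a deliberately tiny sketch in Python (my choice of language for illustration; the function names and word list are invented for the example). The first function is conventional software: the same input always yields the same answer. The second “learns” letter patterns from a handful of dental terms and then invents new words that merely resemble its training data, which is generative AI in miniature. Real systems use neural networks with billions of parameters, but the principle of generating new data that looks like the old is the same.

```python
import random
from collections import defaultdict

# Conventional software: fixed instructions, a predictable result.
# (Universal Numbering System: molars are teeth 1-3, 14-19, and 30-32.)
MOLARS = {1, 2, 3, 14, 15, 16, 17, 18, 19, 30, 31, 32}

def classify_tooth(tooth_number: int) -> str:
    return "molar" if tooth_number in MOLARS else "non-molar"

# "Training data" for a toy generative model: it learns which letter
# tends to follow which, then invents new words with the same feel.
# NOTE: a drastically simplified stand-in for a neural network.
TRAINING_WORDS = ["crown", "bridge", "implant", "inlay", "onlay", "veneer"]

followers = defaultdict(list)
for word in TRAINING_WORDS:
    padded = "^" + word + "$"              # ^ marks start, $ marks end
    for a, b in zip(padded, padded[1:]):
        followers[a].append(b)             # record every observed letter pair

def generate_word() -> str:
    """Sample letter by letter from the learned letter-pair statistics."""
    letters, current = [], "^"
    while True:
        current = random.choice(followers[current])
        if current == "$":                 # reached an end-of-word marker
            return "".join(letters)
        letters.append(current)

print(classify_tooth(3))                   # always "molar": deterministic
for _ in range(5):
    print(generate_word())                 # novel, data-like, never the same
```

Run it twice and the rule-based answer never changes, while the generated words do. That unpredictability is both the magic of generative AI and the source of the worries that occupy the rest of this column.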
Last week I attended a webinar hosted by the Arizona Dental Association that featured AI phone-answering software in the form of a human-sounding “chatbot.” The human-impersonating software answers the phone at all hours of the day and night, extracts important information from patients, and schedules appointments. It even answers some basic questions as if it were having a real conversation with a prospective patient. The intent of this AI application is to prevent the loss of potential patients who are inclined to look elsewhere if they don’t get an immediate response to their needs.
This sounds wonderful and innocent, and perhaps it is. But for me, that chatbot, now the face of the dental practice, conjures up visions of HAL from “2001: A Space Odyssey.” In that iconic movie, HAL, the self-aware AI computer that runs the spaceship, turns against the crew when it concludes that the humans threaten its mission. Is HAL not the very definition of generative AI? Because HAL controls all the instruments aboard the ship, it becomes a serious threat to everyone on it. A mind-blowing “chess game” for human survival ensues between HAL and the astronaut Dave. Dave ultimately wins out, and humanity breathes a sigh of relief.
But what if Dave’s cleverness had been no match for HAL’s? “The prospect that an AI invention might turn on us (or enslave us) is ever present in our minds,” says Brandon Smith in a recent article.2 Smith does not believe that AI is self-aware, truly autonomous, or especially useful. “We haven’t seen a single discovery made by an AI program. We have not seen any advancements that change the game for the future of humanity (at least in a positive way),” he says.
But Smith does not breathe a sigh of relief that humanity has been spared. He identifies three potential dangers of AI that have gone largely unconsidered:
1. The AI Hive Mind. “The danger of AI,” Smith says, “is that it could take us closer to a global hive mentality faster than any other tool or piece of propaganda in existence. How? By being so damned convenient.” In the hive mind, users blindly accept that AI is always correct. But the truth is that AI is far from infallible. Frank Landymore of Futurism reports that the most sophisticated AIs are the most likely to lie: research published in the journal Nature examined the leading AI programs and found that as the models grew larger, the percentage of wrong answers they gave grew with them. In fact, AI was wrong up to 40% of the time!3
The editors of The Economic Times expressed concern that advanced AI models have the potential to “prioritize self-preservation over the objectives set by their developers.”4 Landymore points out that “honesty may not be in the best interests of AI companies looking to woo the public with their fancy tech. If chatbots were reined in to answer only stuff they knew about, it might expose the limits of the technology.”
2. The Dead Internet Theory. Another potential pitfall of AI is that millions of self-generating AI bots could spread across the web, invading social media and the comment sections of every website. How can anyone be sure they are talking with a fellow human rather than engaging a “bot”? Governments and corporations already use bots to inject propaganda everywhere. “A flood of AI bots,” says Smith, “would effectively destroy discourse by saturating comments and social media with only one viewpoint.” Bots have the potential to manufacture a false consensus, making users think that the majority embraces certain ideas or agendas.
In my opinion, the dental profession already has a problem, even without bots, of preferred viewpoints being promoted in the name of scientific discourse. It has been my experience that practitioners whose philosophical approaches to treatment differ from mainstream teachings and corporate objectives are quickly, if subtly, squashed and discredited. As a clinician with different ideas, I know what it is like to be barred from academic stages and establishment publications.
David Crowe, writing in Alive magazine, explains:
“Modern science has developed an effective hierarchy for disseminating ‘acceptable’ information and, perhaps more importantly, for excluding work that threatens mainstream scientists and the governments and industries that fund them.”5
3. The Library of Babel. The name recalls Jorge Luis Borges’s short story about an infinite library in which nearly every book is gibberish, itself a riff on the Tower of Babel story from the Old Testament that most people know. The Tower story is about hubris: the arrogance of humans playing God in the “pursuit of infinite knowledge and self-glorification.” This self-destructive worship of knowledge and technology without humility or wisdom echoes in the words of Ian Malcolm in the disaster flick “Jurassic Park”:
“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”6
OpenAI CEO Sam Altman has been quite outspoken about the need for safety and for approaching AI deployment with caution. In Time magazine’s Spring 2024 special issue on artificial intelligence,7 he stated the following:
“…You should be skeptical of any company calling for its own regulation. But we are calling for something that would affect us the most. We’re saying that you’ve got to regulate the people at the frontier the most strongly. These systems are already quite powerful and will get tremendously more powerful. And we have to come together as a global community…for very powerful technologies that pose substantial risks that one has to overcome in order to get to the tremendous upside. …Let’s identify all [AI] generated content as such. Let’s require independent audits for these systems and safety standards they have to meet.”
Sources close to OpenAI board discussions reported that Altman was at odds with the board over the hasty commercialization of OpenAI products. Bobby Allyn, a technology correspondent for NPR, noted that there were, in fact, two tribes within OpenAI: (a) “adherents to the serve-humanity-and-not-shareholders credo” and (b) adherents to the more traditional Silicon Valley model of “using investor money to release consumer products into the world as rapidly as possible in hopes of cornering a market and becoming an industry pacesetter.”
Allyn concluded that “the two sides are at cross purposes, with no clear way to co-exist.”8
It should surprise no one that a tech giant like Microsoft became the financial engine powering OpenAI; Microsoft has invested a sizable $13 billion. Although it does not hold a seat on OpenAI’s nonprofit board, it undoubtedly wields behind-the-scenes influence.
Altman was fired by the board and rehired only after an overwhelming majority of employees threatened to resign unless he was reinstated.8 Allyn reports that OpenAI has a new board, which is expected to grow and to include a representative from Microsoft. Whether a “middle-of-the-road” course can be charted remains to be seen. The problem with corporatism is that profits and market share usually win out over safety and humanitarianism.
How will AI ultimately evolve? The development of generative AI stands perilously at a fork in the road. It could move in a positive direction for the benefit of humanity, or it could rocket out of control like a runaway spaceship and threaten it. Landymore worries that AI might become a debilitating drug for humans: it could “hook humanity on the high promise of total mastery of our existence but never deliver the goods.” Rather than facing extinction at the hands of AI robots, humanity could die out by abandoning its innate drive for self-exploration and self-improvement. “The greatest knowledge that humans can attain will not come from AI,” concludes Landymore, “but from the very struggle of life that we are so desperate to escape from.”9
To summarize the moral of the AI story: we must never become complacent and allow AI to think for us! Generative AI should be regarded as merely a tool, one that must always be questioned and that should never serve as an ultimate authority. For AI to become the most useful of software tools, we must control how it is programmed and how it is incorporated into clinical and academic practice. Eternal vigilance over AI technology, its corporate developers, and its self-serving investors is the price we will have to pay for having generative AI tools in our armamentarium.
1Zewe, Adam; “Generative AI Explained;” MIT News; November 9, 2023; https://news.mit.edu/2023/explained-generative-ai-1109.
2Smith, Brandon; “Three Horrifying Consequences of AI That You Might Not Have Thought About;” December 6, 2024; https://alt-market.us/three-horrifying-consequences-of-ai-that-you-might-not-have-thought-about/.
3Landymore, Frank; “The Most Sophisticated AIs Are Most Likely to Lie, Worrying Research Finds;” Futurism; September 28, 2024; https://futurism.com/sophisticated-ai-likely-lie.
4The Economic Times Panache; “ChatGPT Caught Lying to Developers: New AI model tries to save itself from being replaced and shut down;” December 9, 2024; https://economictimes.indiatimes.com/magazines/panache/chatgpt-caught-lying-to-developers-new-ai-model-tries-to-save-itself-from-being-replaced-and-shut-down/articleshow/116077288.cms.
5Crowe, David; “How Scientific Censorship Works;” Alive Magazine; November 2003.
6Smith, Brandon; “Three Horrifying Consequences of AI That You Might Not Have Thought About;” December 6, 2024; https://alt-market.us/three-horrifying-consequences-of-ai-that-you-might-not-have-thought-about/.
7Felsenthal, Edward, and Perrigo; “Like the Star Trek Holodeck: OpenAI CEO Sam Altman is pushing past doubts to imagine the future: voice, immersion and magic;” Time; Special Edition on Artificial Intelligence; Spring 2024; p. 60.
8Allyn, Bobby; “How OpenAI’s Origins Explain the Sam Altman Drama;” NPR.
9Landymore, Frank; “The Most Sophisticated AIs Are Most Likely to Lie, Worrying Research Finds;” Futurism; September 28, 2024; https://futurism.com/sophisticated-ai-likely-lie.
© Edward Feinberg DMD 2024