The High Stakes Futility of the AI Race

When it comes to the dangers of artificial intelligence (AI) and now artificial superintelligence (ASI, sometimes called artificial general intelligence), I feel as if we’ve been transported onto the set of the 1983 film “WarGames.”

In the film, teenage hacker David Lightman stumbles onto the military’s most sensitive war-scenario planning computer while believing he has simply found a soon-to-be-released game called “Global Thermonuclear War” on the server of a computer game company. Lightman activates the game, which ultimately makes personnel at the North American Air Defense Command (NORAD) mistakenly believe that the Soviet Union is preparing for an attack. On big screens throughout the war room, Soviet movements and preparations become ever more threatening by the hour. As we are told later, the object of the game is to win, and so the computer sets out to win a thermonuclear war.

When Lightman realizes what he’s done, he seeks out the one person he believes can stop the madness. (I’m skipping a lot of steps here.) He catches up with the architect of that war planning computer system, Stephen Falken. Falken is living a solitary, anonymous existence (under a different name) in a home that Falken says is near a primary nuclear target. He explains to the young hacker: “A millisecond of brilliant light and we’re vaporized. Much more fortunate than the millions who’ll wander sightless through the smouldering aftermath. We’ll be spared the horror of survival.”

Lightman pleads with Falken to call his former associates at NORAD to tell them what is happening. Falken refuses, saying that the world might gain a few years if he makes the call, “but humanity planning its own destruction, that a phone call won’t stop.”

Like the fictional Stephen Falken, the computer industry’s geniuses are now playing games with very complex systems (with literally trillions of parameters) called AI, systems that have emergent properties. Emergent properties are ones you don’t program in and don’t expect, not unlike the computer in “WarGames” making its users think that a simulation is the real thing. That’s why we are now treated to a constant barrage of reports about so-called “hallucinations” emitted by AI programs, that is, confidently stated information that is incorrect or simply nonexistent. See this listing for some interesting and disturbing “hallucinations.” AI chatbots have also been known to counsel teenagers on how to commit suicide, and at least one teenager succeeded.

Right now, what is called AI is primarily based on what are called large language models. This type of AI hoovers up huge amounts of text, typically from the web (often violating copyright), and “trains” on that text. What it really does when it responds to an inquiry is predict, based on statistical analysis of its training text, what the next word on a particular topic should be. It doesn’t have “knowledge,” just statistical inclinations based on its training, which is why it is prone to mistakes.
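The statistical idea behind next-word prediction can be illustrated with a toy sketch. This is my own simplified example, not how real large language models are built (they use neural networks over subword tokens trained on vastly more text), but it shows the core mechanism: count which word tends to follow which, then predict the most frequent follower.

```python
from collections import Counter, defaultdict

# Hypothetical miniature "training corpus" for illustration only.
training_text = "the computer plays the game and the computer wins the game"

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = follows.get(word)
    if not counts:
        # The model has no "knowledge" of this word at all -- it simply
        # never appeared in the training text.
        return None
    return counts.most_common(1)[0][0]

print(predict_next("plays"))    # follows "plays" most often in the corpus
print(predict_next("nuclear"))  # never seen in training: no prediction
```

Note that the toy model has no understanding of games or computers; it only has frequency counts, which is exactly why such systems confidently produce plausible-sounding but wrong continuations when the statistics point the wrong way.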


The many minor mistakes that current chatbots make may seem amusing or merely inconvenient, unless the AI is used for critical purposes such as surgery, where mistakes can be, and already have been, very damaging. In a piece I wrote in 2024 I noted: “Now think about the mess AI will make if used without respect for its limitations in the fields of medicine and law where honed judgment from seasoned professionals who know the subject matter extremely well is crucial.” The mess is upon us.

We are seeing troubling results in other fields as well. “Futurism” magazine contemplates the damage to the food system if ordering and delivery systems now increasingly powered by AI go on the fritz, or if such systems are intentionally disabled by a cyberattack. No one working in the grocery stores would have any idea how to fix them, at least not in any timely manner. Unlike Stephen Falken, grocery workers may not be able to solve such a problem by simply picking up the phone to order more supplies. And a phone call will certainly do nothing to put a dent in the long-term vulnerability.

Think about what happens when airline computer systems go down. If that happens to the food system—say, through a nationwide cyberattack—we will be in a race against time, since due to increased logistics efficiency, most communities in the United States have only a three-day supply of food in grocery stores and restaurants.


Okay, you say. But we are in the shakedown period for AI. AI is really going to supercharge the economy, so we should keep working on it. Maybe that’s true. But maybe it’s not. “Futurism” magazine again reports: “In a new analysis of a survey published by the National Bureau of Economic Research and highlighted by Fortune, around 90 percent of the nearly 6,000 interviewed CEOs, chief financial officers, and other top executives at firms across the US, UK, Germany, and Australia, said that AI has had no impact on productivity or employment at their business.”

With a record like that, I’m not concerned that AI as it is currently configured will destroy human civilization, unless a bunch of idiots decide to allow it to run critical infrastructure autonomously without human supervision. I suppose that could happen.
