My master's degree, from a long time ago, was in Computer Science and, in particular, Artificial Intelligence. My thesis documented a natural language processing system for querying a database — not in SQL or some other programming language, but in simple English. The tools of the time were very crude, and no large, online repositories of text existed — which meant that things like Large Language Models were not even remotely feasible. Still, the underlying concept was the same as what we see from things like ChatGPT or Claude.ai. So I appreciate the potential of AI; I was there at the founding, so to speak.1
But I’m seeing growing evidence that the technology is getting out of control and I fear dire consequences if we don’t change course.
We don’t need to posit the coming of an intentionally murderous AI, like HAL in 2001, but the threat of what is out there now is more than alarming.
Deep Fakes
Consider the rise of so-called “deep fakes” — photos and videos that are all but indistinguishable from real images. The New York Times has an article that features a series of 10 videos and challenges the viewer to determine which are real and which are AI-generated. I got 7 out of 10 correct, but almost all of my answers were coin tosses.2 The potential for creating disinformation with these tools, and for flooding social media with it, is only going to grow. In a country as divided as this one, and with the average voter’s lack of sophistication when it comes to media literacy, the potential for real damage to our democracy is self-evident.
Affective AI
I’ve written before about concerns over people forming “relationships” with chatbots, and now there is some specific data on point. According to Axios, Anthropic — the creator of the Claude chatbot — has released new data on so-called affective uses of the service. While the vast majority of Claude interactions are more technical in nature, Anthropic’s research found that 2.9% of all interactions are affective:
People seek Claude's help for practical, emotional, and existential concerns. Topics and concerns discussed with Claude range from career development and navigating relationships to managing persistent loneliness and exploring existence, consciousness, and meaning.
Sure, let’s query a computer with no emotions, no consciousness, not even any sense of actual existence, to provide guidance on our most emotionally fraught concerns.
What could go wrong?
The data suggests that over time, the conversations become increasingly positive. Maybe that’s a good thing, but maybe it is happening for the wrong reasons:
Claude rarely pushes back in counseling or coaching chats—except to protect well-being. Less than 10% of coaching or counseling conversations involve Claude resisting user requests.
Who wouldn’t gravitate toward “someone” who is always positive and reinforcing? Except it’s not real!
Ghouls like Facebook founder Mark Zuckerberg are already touting the notion that AI can “fill the gap” between the number of friends people have and the number of friends they “need” — once again missing the point that humans need humans as friends, companions, partners, lovers.
Algorithmic Decision Making
But if thoughts of mechanical friends are disturbing, Tom Nichols, professor emeritus of national-security affairs at the U.S. Naval War College, writes about an even more frightening application of AI: decision making about nuclear war. In a must-read article titled “The President’s Weapon,” Nichols writes:
Of course, none of this solves the fundamental nuclear dilemma: Human survival depends on an imperfect system working perfectly. Command and control relies on technology that must always function and heads that must always stay cool. Some defense analysts wonder if AI—which reacts faster and more dispassionately to information than human beings—could alleviate some of the burden of nuclear decision making. This is a spectacularly dangerous idea. AI might be helpful in rapidly sorting data, and in distinguishing a real attack from an error, but it is not infallible. The president doesn’t need instantaneous decisions from an algorithm.
As someone who taught high school in the ’80s and showed the movie War Games3 in my computer technology course, I can say with confidence: we clearly do not need a president implementing “instantaneous decisions from an algorithm.”
Congress to the Rescue — Not
What is needed is regulation at the national level, but with this Congress that is never happening; worse, they are trying to block states from issuing their own regulations. Trump’s abomination of a budget bill, which would throw millions off of Medicaid and explode the deficit by trillions (yeah, that’s trillions with a ‘t’), would also prevent any state from passing any regulation of a broadly defined universe of AI systems for the next five years. At the rate the technology is advancing, the world could look entirely different in five years.
Looking back on the rise of social media, it is pretty clear that a technology that seemed so promising was completely corrupted by tech moguls who manipulated the algorithms to ensnare as many users as possible. They made themselves obscenely wealthy while doing immense damage to young people, particularly young girls.
We are at a similar moment with AI, the primary difference being that there is far less innocence this time around. It would be good if we had learned something from the past twenty years of big tech.
I fear that we have not.
In community, forward.
Notes
How People Use Claude for Support, Advice, and Companionship Anthropic
Mark Zuckerberg Envisions a Future Where Your Friends Are AI Chatbots Entrepreneur
The President’s Weapon — Why does the power to launch nuclear weapons rest with a single American? by Tom Nichols, The Atlantic.
The Senate’s new A.I. moratorium proposal is drawing criticism from Democrats and consumer protection groups. New York Times.
Indeed, I have used Claude.ai for a variety of tasks, most recently in planning my menu for the 4th of July.
Here’s the dialog sequence from War Games that highlights the risk of making an AI the de facto decision maker:
Stephen Falken: Hello, General Beringer! Stephen Falken!
General Beringer: Mr. Falken, you picked a hell of a day for a visit!
Stephen Falken: Uh, uh, General, what you see on these screens up here is a fantasy; a computer-enhanced hallucination. Those blips are not real missiles. They're phantoms.
McKittrick: Jack, there's nothing to indicate a simulation at all. Everything is working perfectly!
Stephen Falken: But does it make any sense?
General Beringer: Does what make any sense?
Stephen Falken: [points to the screens] That!
General Beringer: Look, I don't have time for a conversation right now.
Stephen Falken: General, are you prepared to destroy the enemy?
General Beringer: You betcha!
Stephen Falken: Do you think they know that?
General Beringer: I believe we've made that clear enough.
Stephen Falken: [face to face] Then don't! Tell the President to ride out the attack.
Colonel Joe Conley: Sir, they need a decision.
Stephen Falken: General, do you really believe that the enemy would attack without provocation, using so many missiles, bombers, and subs so that we would have no choice but to totally annihilate them?
Female Airman First Class: [on loudspeaker] One minute and thirty seconds to impact.
Stephen Falken: General, you are listening to a machine! Do the world a favor and don't act like one.