Anti-AI sentiment is on the rise—and it’s starting to turn violent
Hello and welcome to Eye on AI. It’s Beatrice Nolan here, filling in today for AI reporter Sharon Goldman. In this edition…OpenAI debuts its own highly capable cybersecurity model…Anthropic launches Claude Opus 4.7…a startup wants to use AI to evaluate journalism in a way that freedom of the press advocates fear will chill reporting that relies on whistleblowers.
On Friday, a man hurled a Molotov cocktail at the gate in front of OpenAI CEO Sam Altman’s house. Daniel Moreno-Gama, 20, from Spring, Texas, was arrested on suspicion of the attack an hour later. At the time, he was outside OpenAI’s headquarters, allegedly trying to smash his way in with a chair.
On Sunday, two more people were arrested after a gun was fired near Altman’s property (it remains unclear whether that shooting was targeting Altman in any way).
Online, some have laid the blame for the attacks at the door of so-called “AI doomers”—those who believe AI poses an existential threat to society. And while it’s true the man accused of attacking Altman’s home had a manifesto warning of humanity’s “extinction” at the hands of AI, it’s also true that a less extreme, but extremely broad-based, anti-AI sentiment has been building for years.
People are increasingly aware of, and concerned about, the technology’s environmental impacts, its automation of jobs, and its use in warfare. Then there are the cases of psychological harm linked to the technology, which have already generated a wave of lawsuits blaming the tech for multiple deaths, including those of teenagers. Some people, particularly those who grew up during the rise of social media, also increasingly worry about becoming addicted to, or too reliant on, AI tools.
Part of this is a messaging problem, one that is often fueled by the AI labs themselves. For years, tech executives have been touting AI as a dangerous technology. It could help people perpetrate cyberattacks, build bioweapons, and almost certainly lead to mass unemployment. Oh, and it also just might lead to human extinction. Just last week, Anthropic launched its “Mythos” model, which it said was too dangerous to be in public hands. (In this case, that fear might be justified. But fear, it turns out, is also pretty effective marketing—it’s hard to think of another consumer product whose makers have so consistently warned the public that it might destroy civilization.)
Either way, it seems the public has been listening.
Low poll numbers
A March NBC News poll found just 26% of voters hold positive views of AI, versus 46% who hold negative ones—only the Democratic Party and Iran were less popular.
Anti-AI sentiment is particularly sharp among the younger generation, who are already dealing with a tough job market. A Gallup poll published last week found Gen Z excitement about AI collapsed from 36% to 22% in a single year, while anger rose from 22% to 31%—driven, Gallup said, by fears the technology is killing off entry-level jobs.
How much AI is actually to blame for the tough labor market facing recent graduates is debated—it may simply be a convenient excuse for layoffs and hiring reductions in a difficult economic climate. But after years of executives citing the tech as the reason for headcount reductions, the public seems to have accepted the narrative.
The negative environmental consequences of AI also resonate with the public. Between April and June 2025 alone, 20 proposed data center projects worth a combined $98 billion were blocked or delayed due to local resistance. Communities have raised concerns over the strain on local energy grids, rising electricity bills, and the vast amounts of water required to cool the facilities, not to mention the dust and light pollution created during construction. (The water consumption of most AI data centers may not be as high as some initial estimates asserted, but the idea that AI consumes vast quantities of water has stuck in the public imagination. It is also true that in some places data centers have, in fact, hurt local water supplies, and that the entire life cycle of AI chip production consumes a lot of water.) This anger has grown loud enough to shift legislative agendas, with New York State recently proposing a three-year moratorium on new data center permits.
Altman may be paying a price for being the most visible face of the AI industry. Ask most people outside major hubs to name an AI company, and the answer is almost always OpenAI (or, “that company that made ChatGPT”)—if they can name one at all. Notably, the attacks on Altman were not the first security incident at OpenAI. In November, employees were told to shelter in place after a man threatened to carry out attacks on staff at its San Francisco offices.
AI insiders belatedly admit they have an image problem
Even within the labs, some employees are starting to acknowledge their companies might have a marketing problem. Roon—widely believed to be a pseudonym for OpenAI researcher Tarun Gogineni—posted this on X earlier in the week:
“The ai labs, in competing with each other, are burning huge amounts of the commons on public trust in ai to win minor points against the others. their lobbyists, pr machines, lawsuits. it’s the very opposite of what marxist class struggle analysis would tell you”
While the labs have done a pretty good job of making AI feel ubiquitous, they’ve done a far worse one of making it feel worthwhile to everyday people. Most people understand that AI can help you write emails faster or optimize some workflows, but far fewer know it’s being used to accelerate drug discovery (and to be fair to the public, no drug created with AI has yet made it to market, although dozens are now in the pipeline), model climate change, or diagnose rare diseases.
Until that changes, the gap between what the industry believes it’s building and what the public thinks it’s getting will keep widening.
With that, here’s more AI news.
Beatrice Nolan
bea.nolan@fortune.com
@beafreyanolan
This story was originally featured on Fortune.com





