AI Insiders Are Sounding Alarms, and the Guy Who Wrote That Viral Post Says He's Not Being Alarmist
Feb 12, 2026
Is AI coming for most of your jobs? Maybe not. But some of them? Yes, probably. And we're seeing another wave of AI industry insiders speaking out publicly and making some grave statements.

A lengthy post that has gone very viral since it went live on Xitter Tuesday morning, written by OtherSideAI founder Matt Shumer, is titled "Something Big Is Happening." It compares this moment that we're in, February 2026, to February 2020, when only a few people were voicing concerns and sounding alarms about the COVID-19 virus quickly turning into a pandemic. Shumer pegs his alarm to the February 5 releases of GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic, which he suggests are game-changers in the field and will quickly lead to jobs being lost.

Shumer explains that just a few months ago, he had to spend time going back and forth with an AI after asking it to build a program for some purpose. "I am no longer needed for the actual technical work of my job," Shumer writes. "I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed."

Shumer describes his worry about coming AI that could create its own biological weapons, or "AI that enables authoritarian governments to build surveillance states that can never be dismantled." And, he adds, "this isn't a fad," and "the next two to five years are going to be disorienting in ways most people aren't prepared for."

It should come as no surprise that Shumer, who is also involved in an AI content startup, used AI to write parts of his essay, which he is being criticized for.

New York Magazine interviewed Shumer after the post began going viral; it now has 76.5 million views, according to X.
He says he used AI to "help edit to sort of iterate my ideas and the ways of phrasing things," but he didn't just tell an AI to go write something on a topic.

While he is clearly trying to scare some portion of the population into waking up, he says he was not trying to be all doom and gloom, just trying to make people aware of changes to come. "For some people, it truly won't matter," Shumer concludes. "Even if everything I say comes to pass, my nurse in a hospital isn't being replaced anytime soon. They shouldn't — at least I don't think — worry, but some people should, and I think it's important that they know."

Shumer's essay was preceded, by one day, by an open resignation letter from Anthropic safety researcher Mrinank Sharma, who writes, "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment."

It's unsettling that Sharma also mentions bioweapons... but largely this is about his personal journey and the frustrations he felt watching Anthropic try to balance its values-driven ethos with "pressures to set aside what matters most." And he said that, as of Monday, he was off to go study poetry for a while.

SFGate notes the link between Sharma's resignation letter and another from OpenAI researcher Zoë Hitzig, who published her resignation essay as a New York Times opinion piece on Wednesday, titled "OpenAI Is Making the Mistakes Facebook Made. I Quit."

Hitzig asserts that "OpenAI seems to have stopped asking the questions I'd joined [the company] to help answer." Her central concern is the advertising being built into the free versions of ChatGPT, something Anthropic mocked in its Super Bowl ad. Hitzig suggests that lending ChatGPT's conversational voice to advertising is equivalent to "exploiting users' deepest fears and desires to sell them a product," and will lead to untold harms.