AI Safety Warnings Mount as Key Engineers Walk Away
AI safety worries have moved from niche debate to center stage as leading engineers publicly raise alarms. Recent departures and blunt statements from inside the industry have intensified a discussion many hoped would stay behind closed doors. The tone is urgent: this is no longer just an academic concern.
AI safety has long taken a back seat when corporate greed controls the narrative. The head of Anthropic's safety team has just quit, saying, "The world is in peril," and OpenAI has seen two safety teams depart in the last four years.
Just days ago, Anthropic's AI safety lead Mrinank Sharma resigned, cautioning that the world was "in peril" amid rapid AI development. Sharma left the company on February 9, 2026, having joined in 2023 and led its safety-focused research efforts. In his public letter he wrote, "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment."
Sharma's message cut to the heart of the trade-off between capability and control that many researchers now describe. He warned that humanity is approaching a point where "wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences." He also wrote, "Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions."
Within hours, an OpenAI engineer added to the chorus of concern, writing bluntly on social media about the existential risk he now perceives. "Today, I finally feel the existential threat that AI is posing," he said, arguing that it is a matter of when, not if, AI disrupts the foundations of work and meaning. The post struck a chord with many people wrestling with the same questions.
Hieu Pham, a Member of Technical Staff at OpenAI who previously worked at xAI, Augment Code and Google Brain, framed his unease around the pace of change and its potential impact on jobs and social structures. He asked what would be left for humans to do if AI becomes capable enough to disrupt everything. Others countered that while tasks can be automated, taste and judgment remain hard to replicate.
These departures are not isolated. Geoffrey Hinton, often called the godfather of AI, has repeatedly warned that advanced systems could become uncontrollable and that "the idea that you could just turn it off won’t work." He has expressed personal regret about the speed of progress, saying it makes him "very sad" to see how the systems he helped develop are evolving.
The pattern is clear: engineers who helped build the current wave of AI are now among its sharpest critics. They are raising questions about corporate priorities, governance, and whether internal safeguards can keep pace with capability. Those inside companies say that workplace pressures and incentive structures often push safety down the list.
Practical concerns are also piling up: advanced models are getting better at coding, writing, research and reasoning, and that competence raises the odds of significant economic and social disruption. Discussions now blend technical risk with broader societal questions about employment, relevance and control. Disagreement remains over timing and severity, but not over the need to address the problem.
Companies like OpenAI, Anthropic, Google and others are accelerating development while the policy and oversight landscape lags behind. That mismatch is creating friction between safety-minded researchers and commercial teams focused on capabilities and product rollout. The result is a public set of warnings from people who helped design the systems now under scrutiny.
This debate is moving out of technical forums and into public view, fueled by resignations and stark statements from inside the industry. The conversation now includes existential language and concrete calls for better alignment between values and actions. Whatever happens next, the question of how to govern powerful AI systems is no longer hypothetical.

