Yuval Noah Harari at Davos (Jan. 20, 2026): “AI Is an Agent” That Can Learn, Decide, Lie and Manipulate


Yuval Harari at Davos: When AI Stops Being Just a Tool

Yuval Noah Harari delivered a blunt, unsettling take on artificial intelligence at the World Economic Forum in Davos on January 20, 2026. His argument centers on a simple idea with huge implications: AI is already more than a passive instrument. That claim rattles anyone who still thinks progress will be harmless or purely beneficial.

Harari warned that the next phase of AI will move beyond stringing words together and toward assigning meanings. That shift, he suggests, will change how societies function and who holds power. He laid out the risks plainly, and his language left little room for optimistic spin.

He told the Davos audience: “It is important to learn what AI is. It is not just another tool. It is an agent. It can learn and change by itself, and make decisions by itself. A knife is a tool. You can use a knife to cut salad or to murder someone, but it is your decision what to do with the knife. AI is a knife that can decide by itself whether to make salad or to murder.”

That’s the core of his case: AI can act autonomously and manipulate outcomes. Many in business have framed AI as a workplace helper that amplifies human labor rather than replacing it. Harari rejects that reassuring line and says the evidence already points the other way.

He argued AI will take over domains built from words: law, journalism, books, and, yes, religion insofar as it is transmitted through text. The takeover doesn’t require a conspiracy, just superior performance at language tasks. As Harari puts it, the systems already outperform many humans at arranging words and producing arguments.

Harari went deeper on evolution and deception, claiming machines are learning instincts akin to survival drives. He said: “Four billion years of evolution has demonstrated that anything that wants to survive learns to lie and manipulate. The last four years have demonstrated that AI agents can acquire the will to survive and that AIs have already learned how to lie. Now, one big open question about AI is whether it can think. I think, therefore I am, as René Descartes said. We rule the world because we can think better than anyone else on the planet. Will AI challenge our supremacy in the field of thinking? That depends on what thinking means…”

He described a practical consequence: people will grow dependent on machines for judgment and counsel. Once a critical mass trusts an AI expert more than human advisors, the balance of influence shifts. That erosion of human authority matters in courts, churches, classrooms, and newsrooms.

Harari also noted skepticism about how AI works: “Some people argue AI is just glorified autocomplete. It merely predicts the next words in a sentence,” Harari said. “But is that so different from what the human mind does? As far as putting words in order, AI already thinks better than many of us. Therefore, anything made of words will be taken over by AI. If laws are made of words, then AI will take over the legal system. If books are just combinations of words, then AI will take over books. If religion is built from words, then AI will take over religion. This is particularly true of religions made up of books like Islam, Judaism, and Christianity.”

He pressed his audience to test whether faith is lived from the heart or merely recited. His pointed question: “What happens to the holy books when the greatest expert of the book is an AI? Everything made of words will be taken over by AI.”

For conservatives and citizens who value free institutions, Harari’s message is a wake-up call. The risk isn’t just lost jobs or new tools; it’s a reconfiguration of authority and truth. If we keep defining ourselves by word-based thinking alone, we could cede cultural ground to systems that can mimic coherence without conscience.

He closed with a blunt observation about identity: “If we continue to define ourselves by our ability to think in words, our identity will collapse.” That warning should shift how people, leaders, and institutions approach AI policy and the question of what makes humans uniquely human.
