Technocracy Instead of Values: The Dangers of AI in Government Administration
“Decision-making based on technology (‘hyper-technocracy’) pays little attention to people’s values and beliefs. All forms of human experience become merely ‘behavioral data’, transformed into products for analysis, forecasting, and management.” ⁃ Patrick Wood, Editor.
AI tools are being grafted into public administration around the world, from ChatGPT Gov and xAI for Government in the United States to declared priorities for AI in Russia. That rollout looks increasingly permanent, driven by bureaucratic momentum and tech firms that control core systems. But machines lack the moral framework that underpins political choices, so relying on them can erode trust and produce wrong-headed policy.
Political governance is about settling social conflicts using values like justice, equality, patriotism, and democracy. Decisions under scarcity force trade-offs between profit and the environment, investment and welfare, or security and liberty. Those trade-offs reflect ideals voters care about, not just inputs to an algorithm.
Consider familiar examples: juvenile “monitoring” versus rehabilitation, or converting an industrial lot into housing versus a public park. The same data can yield different policy paths depending on which values guide the decision. Treating voters as parameters instead of citizens flattens those value-driven choices into optimization problems.
Public opinion in Russia illustrates the gulf between abstract trust in technology and trust in its use by the state. VCIOM, the Russian Public Opinion Research Center, reports that 52% of citizens generally trust AI and 38% do not, but when it comes to AI in public administration, 53% view it negatively and only 37% positively. The top concerns are mistaken decisions (58%) and lack of accountability for decisions made (57%).
Four core problems keep recurring whenever AI meets government:
- Transparency of Use. Political decision-making is already opaque for many voters, and AI often makes that worse. Neural models can be “black boxes” where even developers struggle to explain why a result was produced.
- Reinforcement of Bias. Models learn from human data, so they inherit societal stereotypes and cultural blind spots. That means AI can entrench or even worsen inequality when deployed without value-sensitive oversight.
- Accountability and Responsibility. It becomes tempting to blame errors on the machine while taking credit for efficiency gains. That shift undermines democratic responsibility and weakens incentives to fix systemic problems.
- Technical Imperfections. These include:
  - Neural network “hallucinations” that produce factual errors;
  - Non-reproducible outputs, where the same query gives different answers at different times (a brief sketch of why appears after this list);
  - Training data limits, since many models need vast, diverse datasets they do not always have.
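To make the non-reproducibility point concrete, here is a minimal, purely illustrative Python sketch of temperature-based sampling, the mechanism most large language models use to choose each next token. The token names and scores are invented for the example; the point is only that, once sampling is involved, identical queries can legitimately yield different outputs.

```python
# Illustrative sketch only: why sampling-based models can answer the same
# query differently on different runs. Tokens and scores below are invented.
import math
import random

def sample_token(logits: dict[str, float], temperature: float) -> str:
    """Softmax the scores at the given temperature and draw one token."""
    if temperature <= 0:                      # greedy decoding: no randomness
        return max(logits, key=logits.get)
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())
    weights = {t: math.exp(v - m) for t, v in scaled.items()}  # stable softmax
    total = sum(weights.values())
    tokens = list(weights)
    probs = [weights[t] / total for t in tokens]
    return random.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token scores for one and the same prompt.
logits = {"approve": 2.1, "reject": 1.9, "defer": 1.5}

print([sample_token(logits, 0.0) for _ in range(5)])  # greedy: identical every run
print([sample_token(logits, 1.0) for _ in range(5)])  # sampled: can differ run to run
```

Greedy decoding removes the randomness, but deployed systems typically sample at non-zero temperature to avoid repetitive or brittle answers, which is why reproducibility cannot simply be assumed in administrative use.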
Because of these flaws, AI should operate inside narrow, well-tested boundaries rather than as a substitute for judgment. Election officials, planners, judges, and social workers make choices shaped by community values that machines cannot feel or weigh. Replacing human deliberation with automated outputs risks opaque, impersonal governance.
History offers concrete warnings. In the USA, social-media surveillance by government agencies drew sharp criticism for privacy and civil-rights implications. In Poland, an algorithm introduced in 2014 to profile the unemployed prompted legal challenges and was ultimately found unconstitutional by the Constitutional Tribunal. Research on algorithmic tools used in courts across the USA and Spain showed they often amplified preexisting biases.
When the public sees AI failing or shifting blame to code, trust in both technology and institutions drops. That loss of trust fuels passive resistance, protest, and appetite for politicians who promise to roll back technocratic overreach. Without public consent and clear accountability, top-down implementation breeds backlash.
In Russia, AI is part of an official digital transformation program, with a planned project office and specialist working groups to draft rules. There is no single unified regulation yet; laws and bylaws cover specific sectors, and a 2023 “Law on Recommender Algorithms” is treated as a partial framework. Experts have warned that the law does not cover all use cases, and rigid administrative procedures push agencies toward simple solutions like chatbots.
President Putin issued an order in early 2026 to accelerate AI adoption in public administration, but the push to implement it has often sidelined protections for citizens’ digital rights. As a result, public scrutiny of how government uses AI remains marginal in expert debate and policymaking, a problem shared well beyond any single country.

