Alaska’s Agentic AI Push: Digital ID, Payments, and a Risk to Liberty
“A system that can act ‘on behalf’ of a person is also capable of observing and predicting their decisions.” Anyone who watched the Matrix films will remember how the AI system relentlessly hunted down outliers, its robotic and mobile machines roaming to and fro to find their hiding places and eliminate them on sight. The whole system rested on a simulacrum that hid the truth from the “blue-pilled,” while those who took the “red pill” were hunted and eliminated.
Alaska is planning a major redesign of its myAlaska digital identity platform that would combine so-called Agentic Artificial Intelligence with payments and credentialing in a single app. The state’s Request for Information (RFI) from the Office of Information Technology describes AI agents able to handle transactions, submit applications, and manage personal data where a user has granted consent. That shift promises convenience but also hands unprecedented power over civic life to software acting for citizens.
A copy of the RFI outlines how a simple login used for Permanent Fund Dividend claims and basic forms could morph into a centralized mechanism controlling identity, services, and money flows. AI modules are envisioned to read documents, populate forms, verify eligibility, and even trigger tokenized payments. For many routine interactions, the human role could shrink to a background checkbox or a one-time approval.
The proposal talks up security and standards, citing NIST controls, detailed audit trails, adversarial testing, explainability tools, and human override features. From a Republican standpoint, this raises familiar concerns about individual liberty and concentrated government power when technical safeguards rely on policy enforcement that is not yet defined. Standards and promises matter, but they do not replace independent oversight or real constraints on data aggregation.
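The safeguards the proposal names (consent, detailed audit trails, human override) can be pictured as a gating pattern around any agent action. The sketch below is purely illustrative; the class names, the `pfd` service label, and the `execute_with_override` function are hypothetical, not drawn from the RFI.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    service: str        # hypothetical service identifier
    description: str    # human-readable summary of what the agent wants to do

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, action: AgentAction, approved: bool) -> None:
        # Every attempt is logged, whether or not it was approved.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "service": action.service,
            "description": action.description,
            "approved": approved,
        })

def execute_with_override(action: AgentAction, consent_granted: bool,
                          log: AuditLog, human_approve=None) -> bool:
    """Allow an agent action only with prior consent; optionally require a
    human-approval callback (the 'override') before anything executes."""
    approved = consent_granted and (human_approve is None or human_approve(action))
    log.record(action, approved)
    return approved

log = AuditLog()
claim = AgentAction("pfd", "Submit Permanent Fund Dividend claim")
print(execute_with_override(claim, consent_granted=True, log=log))  # True
```

The point of the pattern is that the log records refusals as well as approvals; as the article notes, whether such records constrain the system depends on policy enforcement that is not yet defined.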
Biometric authentication is part of the plan, with facial recognition and fingerprint verification named as options for identity proofing. Those data types are among the most sensitive and historically have been difficult to protect from breaches and misuse. Adding biometrics into a single government-managed system only increases the stakes if controls fail or policies change.
Later phases would expand into digital payments and verifiable credentials, covering mobile driver’s licenses, professional certificates, hunting and fishing permits, and tokenized prepaid balances. The technical specs reference W3C Verifiable Credentials and ISO/IEC 18013-5, the same standards shaping national mobile ID efforts. That alignment suggests the initiative is consistent with broader interoperability moves rather than a one-off state experiment.
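For readers unfamiliar with the W3C Verifiable Credentials data model, a credential is essentially a signed JSON document with a handful of required fields. The sketch below shows the shape of an unsigned credential under the VC Data Model 1.1; the issuer URL and license details are hypothetical placeholders, and a real deployment would attach a cryptographic `proof`.

```python
def make_credential(subject_id: str, license_class: str) -> dict:
    """Build a minimal unsigned credential following the W3C VC 1.1 shape.
    The issuer URL and credential type here are illustrative only."""
    return {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential", "DriverLicenseCredential"],
        "issuer": "https://example.alaska.gov/issuers/dmv",  # hypothetical
        "issuanceDate": "2025-01-01T00:00:00Z",
        "credentialSubject": {
            "id": subject_id,
            "licenseClass": license_class,
        },
    }

def has_required_fields(vc: dict) -> bool:
    """Check the properties the VC 1.1 data model marks as required."""
    required = ("@context", "type", "issuer", "issuanceDate", "credentialSubject")
    return all(k in vc for k in required)

vc = make_credential("did:example:123", "D")
print(has_required_fields(vc))  # True
```

Mobile driver’s licenses under ISO/IEC 18013-5 use a different binary encoding (CBOR-based mdoc) rather than this JSON form, but the underlying idea is the same: a machine-verifiable attestation issued by a government authority.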
The plan also aims for a single app experience capable of handling as many as 300 separate government services, with voice navigation and multi-language support built in. Observers warn that these capabilities, when stitched across agencies, can form a cross-agency tracking infrastructure if left unchecked. Consolidation of services feels efficient until it becomes the only practical way to access public benefits and private networks.
Across Europe, Canada, and Australia, digital identity frameworks are increasingly presented as gateways to both public and private services, and proposals in the United States point toward routine identity checks for many online interactions. These projects promise convenience and security, but their cumulative effect is to normalize constant identification and reduce the anonymous or pseudonymous spaces that once defined much of the internet. That normalization has real consequences for free expression and association.
Once digital identity and AI agents become the default for financial transactions, healthcare access, or social platforms, consent risks becoming a formality rather than a meaningful choice. A tiered digital environment can emerge: one experience for the verified and another for those who opt out or are excluded. That outcome raises not only technical data-protection questions but also fundamental issues about who controls civic participation.
Linking AI-driven automation to identity infrastructure multiplies these concerns because a system that can act “on behalf” of someone also learns patterns and predicts behavior. When that capability lives inside government networks, the line between delivering services and behavioral monitoring gets dangerously thin. Even with audit logs and human override features, once embedded and widely used, reversing or limiting such systems becomes extremely difficult.