ICE, AI and the New Surveillance Dragnet
Just as the TSA manhandles all travelers to potentially catch a few terrorists, ICE is trawling all citizens in order to find a few deportable individuals. This is wrong on so many levels and is an egregious assault on the First and Fourth Amendments, but nobody seems to care, especially in the government. Since ICE works for and reports to the Federal government, it feeds all the data it collects back to other federal agencies. Layered on top of the Fusion Centers that DHS also runs, ICE’s program adds boots on the ground to the burgeoning police state. ⁃ Patrick Wood, Editor.
Immigration and Customs Enforcement has quietly moved to expand a digital surveillance program that aims to collect and analyze massive amounts of public social media content, and the agency is paying for commercial AI to do it. The contract in question, valued at $5.7 million, covers a platform from Zignal Labs that advertises a “real-time intelligence” capability to ingest publicly available social posts from many sources. Critics argue that the combination of aggressive deportation enforcement and always-on monitoring creates a potent privacy and civil liberties risk.
Zignal’s materials describe heavy automation: machine learning, computer vision, and optical character recognition operating across more than 8 billion posts per day in over 100 languages. The platform promises curated detection feeds and alerts for “operators” who would act on those signals. In practice that can mean flagging accounts or posts for follow-up and potentially directing enforcement attention to people who appear to be in a particular place.
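To make the mechanics concrete, here is a deliberately minimal sketch of what an automated detection feed of this kind can look like: a stream of public posts is filtered against operator-configured terms, and anything that matches is surfaced as an alert along with whatever location signal accompanied the post. The Post record, the watch terms, and the detection_feed function are hypothetical illustrations, not Zignal’s proprietary interface.

```python
# Minimal illustrative sketch (Python 3.10+). The Post record, watch terms,
# and alert format are hypothetical; they are not Zignal's actual interface.
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Post:
    author: str
    text: str
    lang: str
    geotag: str | None = None  # location signal, if the platform exposes one

# Hypothetical operator-configured watch terms.
WATCH_TERMS = {"checkpoint", "raid", "protest"}

def detection_feed(posts: Iterable[Post]) -> Iterator[dict]:
    """Yield an alert for every public post that matches a configured term."""
    for post in posts:
        hits = {term for term in WATCH_TERMS if term in post.text.lower()}
        if hits:
            yield {
                "author": post.author,
                "matched_terms": sorted(hits),
                "language": post.lang,
                "geotag": post.geotag,
            }

if __name__ == "__main__":
    sample = [
        Post("user_a", "Heads up, checkpoint on 5th Ave", "en", "New York, NY"),
        Post("user_b", "Lunch was great today", "en"),
    ]
    for alert in detection_feed(sample):
        print(alert)
```

A commercial platform would layer machine-learning classifiers, computer vision, and OCR on top of this basic filtering step, but the operator-facing output Zignal describes, curated feeds and alerts, has the same basic shape.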
The tool claims it can extract location data from imagery and video metadata and identify insignia, patches, or other visual markers to confirm who appears in footage. A cited example shows analysis of a Telegram video that identifies the location of an operation and matches emblems to particular actors. That implies ICE could trace a person from a TikTok clip or a photograph posted to a social site back to a physical address or neighborhood.
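Zignal does not publish how its location extraction works, so the following sketch shows only one widely known technique such claims can rest on: reading the GPS coordinates a camera embeds in a photo’s EXIF metadata. The Pillow-based helper, the file name, and the assumption that the metadata survives upload (many platforms strip it) are illustrative, not a description of the vendor’s pipeline.

```python
# Illustrative only: reading GPS coordinates from a photo's EXIF metadata with
# Pillow (9.4+ for the ExifTags enums). Many platforms strip this metadata on
# upload, and this is one well-known technique, not the vendor's pipeline.
from PIL import Image, ExifTags

def exif_gps_coords(path: str) -> tuple[float, float] | None:
    """Return (latitude, longitude) from EXIF GPS tags, or None if absent."""
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(ExifTags.IFD.GPSInfo)  # GPS sub-IFD, if present
    if not gps:
        return None

    def to_degrees(dms, ref) -> float:
        # EXIF stores degrees, minutes, seconds as three rational numbers.
        degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -degrees if ref in ("S", "W") else degrees

    try:
        lat = to_degrees(gps[ExifTags.GPS.GPSLatitude], gps[ExifTags.GPS.GPSLatitudeRef])
        lon = to_degrees(gps[ExifTags.GPS.GPSLongitude], gps[ExifTags.GPS.GPSLongitudeRef])
    except KeyError:
        return None
    return lat, lon

if __name__ == "__main__":
    print(exif_gps_coords("photo.jpg") or "No GPS metadata found")  # hypothetical file
```

Even when embedded metadata is absent, visual cues in the frame itself, such as street signs, storefronts, or insignia, can be matched against reference imagery, which appears closer to what the vendor’s marketing describes.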
ICE obtained access to the platform through a government-focused reseller, and the vendor has worked with other federal bodies beyond traditional law enforcement, in contexts ranging from weather analysis to protective services. Those relationships show how quickly a commercial monitoring tool can be folded into multiple federal missions.
Social media surveillance by authorities is not new; existing tools were used to track protests in past years. What is different now is scale and automation, plus a budget at ICE that can support a broader, sustained program. With significant funding, a federal agency can maintain a constant monitoring posture and couple algorithmic outputs to enforcement operations.
“With billions of dollars to spend on spyware, it’s extremely alarming to think how far ICE will go in surveilling social media,” Owen says. “ICE is a lawless agency that will use AI-driven social media monitoring not only to terrorize immigrant families, but also to target activists fighting back against their abuses. This is an assault on our democracy and right to free speech, powered by the algorithm and paid for with our tax dollars.”
A recent planning document shows ICE preparing to staff a monitoring operation that could include nearly 30 people working around the clock to sift content on major platforms. The draft material names placements for about 12 contractors in a facility in Vermont and around 16 staff in California, with some roles expected to be available “at all times.” That structure signals an intent to pair steady human review with automated detection streams.
The program, Greene warns, lets ICE “monitor social media for viewpoints it doesn’t like on a scale that was never possible with human review alone.” He adds, “The scale of this spying is matched by an equally massive chilling effect on free speech.”
Beyond social platforms, separate reporting shows ICE has tapped into other surveillance systems such as license plate readers and tools that track movement of mobile phones. Agencies have also explored asking visa or citizenship applicants to share social media handles as part of vetting, and some enforcement initiatives now use AI to flag posts tied to national security concerns. Those steps expand the data ecosystem available to federal actors.
There have already been operational moves tied to social signals, from visa revocations to local arrests after online posts drew attention. Social content can be both intelligence and evidence, and when matched to enforcement it can change how communities interact with digital speech. For many observers the core worry is not only the collection but the downstream use of that collection in ways that reach beyond established legal checks.
“This is another example of Big Tech CEOs partnering with an increasingly authoritarian federal government as part of Trump’s ongoing attempts to clamp down on free speech,” Sacha Haworth, executive director of the Tech Oversight Project, tells The Verge. “This should terrify and anger every American.”