FDA Reports: AI-Updated TruDi Surgical Navigation System Associated with 100+ Malfunctions and At Least 10 Injuries

When AI Steers the Scalpel: Risks, Reports and Lawsuits

AI can misidentify body parts and give surgeons the wrong directions on where to cut. No problemo, right? Wrong.

The FDA is being swamped with complaints tied to AI-assisted surgical tools while tech boosters promise fixes, insisting “we’ll fix the bugs” or “wait for the next version.” Technocrats such as Larry Ellison are regularly quoted promising AI solutions for everything from cancer to surgery, and the optimism keeps rolling even as the problems mount.

In 2021, a unit of Johnson & Johnson announced that it had added machine learning to a navigation device for sinus surgery, billing the change as a major advance. The TruDi Navigation System had been on the market for roughly three years before the software update, and FDA records show a sharp rise in malfunction reports once AI was introduced: where regulators had previously logged unconfirmed reports of seven malfunctions and one patient injury, they logged at least 100 malfunctions and adverse events after the AI update.

Between late 2021 and November 2025, at least 10 people were reported injured in events tied to the device, many of the reports alleging that the system misled surgeons about the location of instruments inside patients’ heads. Reported harms include cerebrospinal fluid leaking from a patient’s nose, a punctured skull base, and at least two strokes allegedly caused by injury to a major artery. Two stroke victims filed lawsuits in Texas claiming the AI contributed to their injuries; one suit alleges, “The product was arguably safer before integrating changes in the software to incorporate artificial intelligence than after the software modifications were implemented.”

Johnson & Johnson referred questions about those reports to Integra LifeSciences, which acquired Acclarent and the TruDi system in 2024. Integra told reporters the reports “do nothing more than indicate that a TruDi system was in use in a surgery where an adverse event took place,” and said there is “no credible evidence to show any causal connection between the TruDi Navigation System, AI technology, and any alleged injuries.”

A Reuters review of safety records, court filings and expert interviews shows these incidents arrive as AI is rapidly expanding across health care, from diagnostics to devices in the operating room. The FDA now lists 1,357 medical devices it says use AI, roughly double the number it had approved by 2022, and regulators are receiving reports linked to many of them. Between 2021 and October 2025, at least 1,401 reports filed to the FDA concerned devices appearing on that AI list, and at least 115 of those reports mention software, algorithm or programming problems.

Academic researchers found 60 FDA-authorized AI devices tied to 182 product recalls, with 43% of those recalls occurring less than a year after authorization. That recall rate is about twice that of all devices cleared under similar rules, the researchers reported in a JAMA Health Forum letter. Agency insiders say the FDA is having trouble keeping up with a flood of AI submissions after losing key staff, and HHS says it’s working to expand capacity.

Generative AI chatbots have also moved into medicine, helping with note-taking while some patients use them for self-diagnosis and second opinions. ChatGPT and other LLM-based tools burst into public view roughly three years ago and now power many consumer and clinical apps, but LLMs are only part of the AI used in medical devices. Machine learning and deep learning models have been embedded in imaging, monitoring and navigation systems for decades, with the FDA authorizing its first AI-enhanced devices in 1995.

Several lawsuits detail harrowing procedural events tied to TruDi. In one case, surgeon Marc Dean allegedly relied on the system during a sinuplasty that a plaintiff says left her with a carotid injury and a stroke; court filings note the surgeon’s records said he “had no idea he was anywhere near the carotid artery.” The patient later had a section of her skull removed to allow her brain to swell, and she has described long-term rehabilitation, saying, “I am still working in therapy.”

Another suit describes a separate surgery where a carotid artery “blew” and blood “was spraying all over” — even landing on an Acclarent representative who was observing the case. Plaintiffs allege Acclarent pursued AI as a marketing tool, lowered safety standards to rush software into TruDi, and set an 80% accuracy goal for some AI features before integration. Acclarent and Integra dispute those claims and deny a causal link between the AI and injuries.

The FDA reports also include complaints about other AI tools, such as a prenatal ultrasound algorithm; one report claimed the “Sonio detect software ai algorithm is faulty and wrongly labels fetal structures and associates them with the wrong body parts.” Medtronic reported at least 16 incidents in which its LINQ implantable monitors missed abnormal rhythms, attributing some problems to user confusion while noting that AccuRhythm AI can misclassify actual events. These examples show how AI can reduce false alerts yet still misfire in ways that matter clinically and legally.
