Investigations Reveal Risks of AI Surgical Tools in Hospitals

Recent investigations have raised concerns about the safety of AI-powered surgical tools, prompting a reevaluation of their use in operating rooms. While these devices assist human surgeons rather than perform procedures independently, a growing number of malfunction reports and subsequent lawsuits have led medical professionals to question their reliability.

The U.S. Food and Drug Administration (FDA) has authorized at least 1,357 AI-integrated medical devices, roughly double the number authorized through 2022. Among these is the TruDi Navigation System, produced by Johnson & Johnson, which employs a machine-learning algorithm to aid ear, nose, and throat surgeries. Other AI-assisted tools are designed for a variety of surgical applications, many with a particular focus on enhancing what surgeons can see.

Traditional laparoscopic surgery presents several challenges, including views obscured by surgical smoke, two-dimensional imaging that complicates depth perception, and difficulty distinguishing critical anatomical structures. AI surgical tools aim to resolve these issues by providing surgeons with “crystal-clear views of the operative field,” according to Forbes.

Despite these advancements, a series of lawsuits and allegations has surfaced claiming that some AI tools have directly harmed patients. The FDA has reportedly received more than 100 unverified reports of malfunctions and adverse events related to the TruDi device, including allegations that the AI misinformed surgeons about instrument locations during operations.

In one notable case, cerebrospinal fluid leaked from a patient’s nose after the system provided erroneous guidance. In another, a surgeon inadvertently punctured the base of a patient’s skull. Further allegations claim that patients have suffered strokes after major arteries were mistakenly severed. In one such case, a plaintiff claimed that the TruDi’s AI misled a surgeon, resulting in a carotid artery injury that caused a blood clot and a subsequent stroke, as reported by Futurism.

The FDA’s malfunction reports do not assess the causes behind medical mishaps, making it difficult to determine what role, if any, the AI played in these incidents. The TruDi is not the only device facing scrutiny, however. The Sonio Detect, a device designed to analyze prenatal images, has been accused of relying on faulty algorithms that misidentify fetal structures. Additionally, Medtronic has faced accusations that its AI-assisted heart monitors failed to detect abnormal heart rhythms or pauses in patients.

Research published in JAMA Health Forum indicates that at least 60 AI-assisted medical devices have been linked to 182 FDA product recalls. Notably, 43% of these recalls occurred within the first year after the devices received FDA authorization, suggesting that the approval process may overlook early performance failures in AI technologies.

Despite these challenges, experts remain optimistic that the risks can be addressed. They advocate for stronger premarket clinical testing requirements and more rigorous postmarket surveillance to better identify and mitigate device errors. As AI technology becomes further integrated into healthcare, ensuring patient safety remains a top priority for medical institutions and regulators alike.