AI's Expanding Control Over Healthcare Insurance
- Jul 14
- 2 min read

Today, we examine a critical shift in how your healthcare coverage is determined: the increasing use of Artificial Intelligence (AI) by insurance companies to control decisions about treatments and services. While proponents highlight its potential for efficiency, experts are sounding alarms about its impact on patient safety and the urgent need for regulation.
For physicians, the prior authorization process has long been a significant administrative burden, diverting time from patient care. Companies like Yosi Health aim to alleviate this by using AI to streamline prior authorizations through digitization and real-time approvals. Hari Prasad, CEO of Yosi Health, believes AI can significantly reduce administrative delays, leading to a "net positive outcome" by freeing up staff and providers for more meaningful work. Brad Boyd of BDO USA notes that new AI technologies can integrate with electronic health records to gather data, match it against payer criteria, and even predict likely denials, potentially reducing the time and cost associated with prior authorizations.
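To make the workflow Boyd describes more concrete, here is a minimal, hypothetical sketch of how a prior-authorization tool might check an EHR-sourced request against simplified payer criteria and flag likely denials for human review. The procedure codes, criteria table, and function names are illustrative assumptions, not any vendor's actual implementation.

```python
# Illustrative sketch only (not Yosi Health's or BDO's actual system):
# check a prior-authorization request pulled from an EHR against
# simplified payer criteria and flag requests likely to be denied.
from dataclasses import dataclass


@dataclass
class PriorAuthRequest:
    procedure_code: str          # e.g. a CPT code from the EHR
    diagnosis_codes: list[str]   # ICD-10 codes documented in the chart
    prior_treatments: list[str]  # therapies already tried ("step therapy")


# Hypothetical payer criteria: which diagnoses and prior steps a payer
# requires before approving a given procedure.
PAYER_CRITERIA = {
    "72148": {  # MRI, lumbar spine (example code)
        "required_diagnoses": {"M54.5"},               # low back pain
        "required_prior_treatments": {"physical_therapy"},
    },
}


def review_request(req: PriorAuthRequest) -> dict:
    """Return a simple recommendation plus any unmet criteria."""
    criteria = PAYER_CRITERIA.get(req.procedure_code)
    if criteria is None:
        return {"recommend": "manual_review", "unmet": ["no criteria on file"]}

    unmet = []
    if not criteria["required_diagnoses"] & set(req.diagnosis_codes):
        unmet.append("qualifying diagnosis not documented")
    if not criteria["required_prior_treatments"] <= set(req.prior_treatments):
        unmet.append("required prior treatment not documented")

    # A real system would route anything short of a clear match to a human
    # reviewer rather than issuing an automatic denial.
    return {"recommend": "approve" if not unmet else "manual_review",
            "unmet": unmet}


print(review_request(PriorAuthRequest("72148", ["M54.5"], ["physical_therapy"])))
```

The key design choice in this sketch is that the tool only ever recommends approval or escalation to manual review; it never issues a denial on its own, which reflects the human-oversight point Prasad raises below.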
However, the rapid embrace of AI by health insurers to control coverage decisions has sparked considerable concern. Jennifer D. Oliva, a legal scholar, warns that while AI can improve care and reduce costs, it can also lead to delays or outright denials of care, often in the name of saving money; there is, she argues, "strong evidence" that these systems are used to delay or deny care that should be covered. Prasad himself stressed that AI tools should not operate in isolation: human oversight is crucial because, as he put it, "behind every one of these decisions is a patient, there is a family, there's medical outcomes." Without strong checks, AI could lead to rushed decisions or inappropriate denials.
Critics also describe a disturbing pattern of withholding care: insurers may use algorithms to limit coverage for expensive, long-term, or terminal conditions, disproportionately affecting patients with chronic illnesses, who are more likely to be denied coverage. Disparities compound the problem: Black, Hispanic, and other non-white patients, as well as LGBTQ+ individuals, are more likely to have claims denied. Insurers often refuse to disclose how these algorithms work, citing them as "trade secrets," which blocks public scrutiny and independent testing for safety, fairness, or effectiveness.
Unlike medical AI tools, insurance AI algorithms are largely unregulated and do not undergo Food and Drug Administration (FDA) review. There is some momentum for change: the Centers for Medicare and Medicaid Services (CMS) now requires Medicare Advantage plans to base decisions on individual patient needs, and some states have proposed or passed laws to rein in insurance AI. Critics argue, however, that these measures still leave too much control with insurers and lack requirements for neutral expert review. Many health law experts, including Oliva, contend that FDA oversight is imperative for a uniform national regulatory scheme, but extending the agency's authority to insurance algorithms might require Congress to change the law, since these tools are not used to diagnose, treat, or prevent disease.
The push for robust regulation of how health insurers use AI to control coverage decisions has begun, and the stakes are high: patient safety and, in some cases, patients' lives are on the line.

Interesting news. I truly believe AI could greatly help alleviate delays, but it's also important to regulate its use so that there are no erroneous denials and its use stays under human supervision.