
Court Ruling Overview
Humana, the nation’s second-largest Medicare Advantage insurer, must face a significant class action lawsuit alleging systematic fraud through artificial intelligence-driven claim denials. Judge Rebecca Grady Jennings of the US District Court for the Western District of Kentucky delivered a landmark ruling Friday, allowing the case to proceed without requiring plaintiffs to exhaust Medicare administrative appeals first.
The lawsuit, filed in late 2023, centers on allegations that Humana’s use of artificial intelligence to deny post-acute care to Medicare Advantage beneficiaries constitutes fraud when automated decisions replace clinical judgment. This ruling represents a critical precedent for how courts will handle AI-related healthcare disputes in the Medicare system.
Jurisdictional Challenge Rejected
In March 2024, Humana attempted to dismiss the claims, arguing the District Court lacked jurisdiction because the original plaintiffs failed to request Quality Improvement Organization reviews of their coverage denials. However, Judge Jennings found that the insurer’s repeated denials created a risk of irreparable harm to beneficiaries.
The court recognized that forcing patients to wait for appeals created an impossible choice: remain in post-acute care facilities while risking personal financial responsibility for medical bills, or forgo necessary care while awaiting administrative decisions.
The AI System at the Center
nH Predict Technology
The case revolves around Humana’s use of nH Predict, an artificial intelligence system designed to make coverage determinations for post-acute care services. Plaintiffs allege this AI system results in wrongful denials of legitimate claims, with very few beneficiaries successfully appealing these automated decisions.
The lawsuit claims Humana “fraudulently misled its insureds into believing that their health plans would individually assess their claims and pay for medically necessary care.” Instead, the company allegedly delegated legally required medical reviews to an AI algorithm, fundamentally changing the nature of coverage decisions.
Lack of Transparency
A critical issue highlighted in the case is Humana’s alleged practice of denying patients access to the results or rationale behind nH Predict decisions. This opacity leaves beneficiaries unable to understand why their services were denied, making meaningful appeals nearly impossible.
Patient Impact and Case Examples
Sharon Merkley’s Experience
Judge Jennings cited the compelling example of plaintiff Sharon Merkley, who received seven denials for identical care within a 30-day period. When Merkley required hospitalization a month later, Humana allegedly issued five additional denials for post-acute care, even after previous successful appeals.
This pattern demonstrates what the court found to be systematic abuse of the administrative review process, with Humana capable of repeating harmful conduct while evading meaningful oversight.
Medical Consequences
The ruling emphasized that class members have “forgone medically necessary care which resulted in admittance to the hospital or additional treatment.” Judge Jennings noted these medical setbacks cannot be remedied through retroactive benefits or eventual appeal success, highlighting the immediate and lasting harm caused by AI-driven denials.
Appeals Process Challenges
Futile Administrative Reviews
Judge Jennings agreed that requiring plaintiffs to exhaust administrative appeals would be futile, given evidence that “Humana abuses and undermines the administrative review process such that its conduct is capable of repetition while evading review.”
The court found that even when Humana or Administrative Law Judges overturn denials, the company often restarts the same denial-appeal cycle, creating an endless loop of bureaucratic obstacles for patients seeking necessary care.
Systematic Evasion
The opinion noted that patients cannot challenge the systematic process underlying nH Predict refusals. Even successful appeals at the Quality Improvement Organization level often result in Humana requesting updated medical records, issuing new denials, and forcing patients to restart the appeals process entirely.
Legal Claims and Contract Disputes
Contract Breach vs. Medicare Violations
Judge Jennings clarified that the central question isn’t whether Humana violated the Medicare Act in denying benefits, but whether the company knowingly violated its contractual obligations to beneficiaries while retaining premium proceeds.
The plaintiffs argue they would have chosen different insurers had they known about Humana’s practice of delegating legally required medical reviews to artificial intelligence rather than qualified healthcare professionals.
Surviving Claims
The court allowed four significant claims to proceed:
- Breach of contract
- Breach of good faith and fair dealing
- Unjust enrichment
- Common law fraud
However, Judge Jennings dismissed four state-law claims, covering claims-settlement practices, unfair competition, insurance bad faith, and unfair insurance practices, holding them preempted by federal Medicare law.
What This Means for Medicare Advantage
Industry Implications
This lawsuit could set important precedents for how Medicare Advantage insurers implement artificial intelligence in coverage decisions. The case challenges the fundamental premise that AI can replace human medical judgment in determining patient care needs.
Seeking Relief
The class action seeks comprehensive relief including actual damages, statutory damages, punitive damages, emotional distress compensation, and restitution. Crucially, plaintiffs also want a court order prohibiting Humana from continuing its AI-enabled claim handling practices.
The lawsuit demands that claims be “individually assessed by a medical professional rather than artificial intelligence,” potentially forcing significant changes to how the Medicare Advantage industry operates in the digital age.