Section 230 has so far failed to shield Meta and TikTok owner ByteDance from a lawsuit brought by a mother who alleges that her son’s wrongful death followed a flood of “subway surfing” videos that the platforms intentionally targeted to teens in New York.
In a decision Monday, New York State Supreme Court Judge Paul Goetz largely denied social media companies’ motions to dismiss claims they argued should be barred under Section 230 and the First Amendment. Goetz said that the mother, Norma Nazario, had adequately alleged that subway surfing content “was purposefully fed” to her son Zackery “because of his age” and “not because of any user inputs that indicated he was interested in seeing such content.”
Goetz distinguished this case from other Section 230 cases in which platforms’ algorithms were found to be content-neutral, writing that here, “it is plausible that the social media defendants’ role exceeded that of neutral assistance in promoting content and constituted active identification of users who would be most impacted by the content.”
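The line Goetz is drawing is easier to see in code. Below is a minimal Python sketch, with every name invented for illustration (neither platform’s actual ranking system is public), contrasting a recommender that ranks content purely on a user’s own signals with one that boosts a category to an age group before the user has given any signal at all, the conduct Nazario alleges.

```python
# Schematic sketch only: all names below are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Post:
    category: str
    tags: frozenset

def interest_score(signals: set, post: Post) -> int:
    # Toy relevance: overlap between the user's own signals and the post's tags.
    return len(signals & post.tags)

def recommend_neutral(age: int, signals: set, candidates: list) -> list:
    """Rank purely on the user's own inputs: the 'neutral assistance'
    prior Section 230 cases describe."""
    return sorted(candidates, key=lambda p: interest_score(signals, p), reverse=True)

def recommend_age_targeted(age: int, signals: set, candidates: list) -> list:
    """Boost a category to minors before any user input suggests interest:
    the 'active identification of users' the complaint alleges."""
    def score(p: Post) -> float:
        s = interest_score(signals, p)
        if age < 18 and p.category == "risky_challenge":
            s += 10  # driven by age alone, not by anything the user did
        return s
    return sorted(candidates, key=score, reverse=True)
```

In the first function, nothing reaches a user without some prior signal of interest; in the second, demographics alone move content up the ranking, which is the distinction the decision turns on.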
Platforms may be forced to demystify algorithms
Moving forward, Nazario will have a chance to seek discovery that could show exactly how Zackery came to interact with the subway surfing content. In her complaint, she did not ask for the removal of all subway surfing content but rather asked that platforms be held accountable for design choices that allegedly target unwitting teens with dangerous content.
“Social media defendants should not be permitted to actively target young users of its applications with dangerous ‘challenges’ before the user gives any indication that they are specifically interested in such content and without warning,” Nazario has argued.
And if she’s proven right, platforms won’t be forced to censor any content, Goetz suggested, but will instead have to update their algorithms so that they stop pushing “dangerous” challenges to teens simply to keep them engaged, at an age when they’re more likely to make reckless decisions.
[Image: A nursing home resident is pushed along a corridor by a nurse.]
Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.
The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed, AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.
According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don’t match prescribing physicians’ recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, the lawsuits allege that under nH Predict, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials.
Specific warning
It’s unclear how nH Predict works exactly, but it reportedly uses a database of 6 million patients to develop its predictions. Still, according to people familiar with the software, it only accounts for a small set of patient factors, not a full look at a patient’s individual circumstances.
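Since nH Predict’s internals are unknown, any concrete rendering is guesswork; the Python sketch below, with invented data, is only a generic illustration of the kind of approach the reporting describes: a length-of-stay estimate drawn from similar historical cases using a handful of patient factors.

```python
# Hypothetical sketch: nH Predict's internals are not public. This only
# illustrates the reported approach, a prediction drawn from a large
# historical database using a small set of patient factors.
from statistics import median

# Each record: (age, diagnosis, mobility_score, days_of_post_acute_care)
HISTORICAL_CASES = [
    (81, "hip_fracture", 2, 40),
    (79, "hip_fracture", 3, 25),
    (84, "stroke", 1, 60),
    (77, "stroke", 2, 45),
]  # the reported tool draws on millions of records

def predict_length_of_stay(age: int, diagnosis: str, mobility: int, k: int = 2) -> float:
    """Estimate days of post-acute care from the k most similar past cases."""
    def distance(case):
        c_age, c_dx, c_mob, _ = case
        return abs(c_age - age) + (0 if c_dx == diagnosis else 50) + abs(c_mob - mobility)
    nearest = sorted(HISTORICAL_CASES, key=distance)[:k]
    return median(days for *_, days in nearest)
```

Note what such a function never sees: the treating physician’s recommendation, clinical notes, or anything else specific to the individual patient.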
This is a clear no-no, according to the CMS’s memo. For coverage decisions, insurers must “base the decision on the individual patient’s circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient’s medical history, the physician’s recommendations, or clinical notes would not be compliant,” the CMS wrote.
The CMS then provided a hypothetical that matches the circumstances laid out in the lawsuits, writing:
In an example involving a decision to terminate post-acute care services, an algorithm or software tool can be used to assist providers or MA plans in predicting a potential length of stay, but that prediction alone cannot be used as the basis to terminate post-acute care services.
Instead, the CMS wrote, in order for an insurer to end coverage, the individual patient’s condition must be reassessed, and the denial must be based on coverage criteria that are publicly posted on a website that is not password protected. In addition, insurers who deny care “must supply a specific and detailed explanation why services are either no longer reasonable and necessary or are no longer covered, including a description of the applicable coverage criteria and rules.”
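Taken together, those conditions amount to a checklist that must pass before a termination goes out. The Python sketch below renders the memo’s three requirements as one; the CMS publishes rules, not code, so all field names here are invented.

```python
# Hypothetical sketch of the memo's conditions as a pre-denial checklist.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedDenial:
    patient_reassessed: bool            # individual condition re-evaluated
    criteria_public_url: Optional[str]  # posted on a non-password-protected site
    explanation: str                    # specific, detailed reason for the denial

def may_terminate_coverage(denial: ProposedDenial) -> tuple:
    """Return (allowed, problems) under the memo's stated conditions."""
    problems = []
    if not denial.patient_reassessed:
        problems.append("no individual reassessment; a prediction alone cannot be the basis")
    if not denial.criteria_public_url:
        problems.append("coverage criteria not publicly posted")
    if not denial.explanation.strip():
        problems.append("no specific and detailed explanation supplied")
    return (not problems, problems)
```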
In the lawsuits, patients claimed that when coverage of their physician-recommended care was unexpectedly and wrongfully denied, insurers didn’t give them full explanations.
Fidelity
In all, the CMS finds that AI tools can be used by insurers when evaluating coverage—but really only as a check to make sure the insurer is following the rules. An “algorithm or software tool should only be used to ensure fidelity” with coverage criteria, the CMS wrote. And, because “publicly posted coverage criteria are static and unchanging, artificial intelligence cannot be used to shift the coverage criteria over time” or apply hidden coverage criteria.
The CMS sidesteps any debate about what qualifies as artificial intelligence by offering a broad warning about algorithms and artificial intelligence. “There are many overlapping terms used in the context of rapidly developing software tools,” the CMS wrote.
Algorithms can imply a decisional flow chart of a series of if-then statements (i.e., if the patient has a certain diagnosis, they should be able to receive a test), as well as predictive algorithms (predicting the likelihood of a future admission, for example). Artificial intelligence has been defined as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
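Read as code, the distinction the memo is drawing looks roughly like this; both snippets are invented illustrations of the definitions above, not anything the CMS prescribes.

```python
# Toy renderings of the memo's two categories; all values are invented.

# 1. An "algorithm" as a decisional flow chart: if the patient has a certain
#    diagnosis, they should be able to receive a test (the memo's own example).
ELIGIBLE_DIAGNOSES = {"diabetes"}

def test_covered(diagnosis: str) -> bool:
    return diagnosis in ELIGIBLE_DIAGNOSES  # a transparent if-then rule

# 2. A predictive algorithm: estimating the likelihood of a future admission,
#    here a deliberately crude stand-in for a trained model.
def readmission_risk(prior_admissions: int, age: int) -> float:
    score = 0.1 * prior_admissions + 0.005 * max(age - 65, 0)
    return min(score, 1.0)
```

The memo’s point is that the first kind can encode posted coverage criteria directly, while the second kind only predicts, and a prediction alone cannot decide coverage.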
The CMS also openly worried that the use of either of these types of tools can reinforce discrimination and biases, as has already happened with racial bias in healthcare algorithms. The CMS warned insurers to ensure any AI tool or algorithm they use “is not perpetuating or exacerbating existing bias, or introducing new biases.”
While the memo overall was an explicit clarification of existing MA rules, the CMS ended by putting insurers on notice that it is increasing its audit activities and “will be monitoring closely whether MA plans are utilizing and applying internal coverage criteria that are not found in Medicare laws.” Non-compliance can result in warning letters, corrective action plans, monetary penalties, and enrollment and marketing sanctions.