By Nicholson Price, Rachel Sachs, Jacob S. Sherkow, and Lisa Larrimore Ouellette
Promising results for the Pfizer/BioNTech and Moderna vaccines have been the most exciting COVID-19 innovation news in the past few weeks. But while vaccines are a crucial step toward controlling this virus, it is important not to overlook the many other technological developments spurred by the pandemic. In this week’s post, we explore how the COVID-19 pandemic is proving fertile ground for the use of artificial intelligence (AI) and machine learning in medicine. AI offers the tantalizing possibility of solutions and recommendations when scientists don’t understand what’s going on, and that is sometimes exactly what society needs in this pandemic. The lack of oversight and the wide deployment of these tools without much in the way of validation, however, raise concerns about whether researchers are actually getting it right.
How is AI being used to combat the COVID-19 pandemic?
The label “artificial intelligence” is sometimes applied to any kind of automation, but in this post we will focus on developments in machine learning, in which computer models are designed to learn from data. Machine learning can be supervised, with models learning from labeled training data to predict labels for new data, or unsupervised, with models identifying patterns in unlabeled data. For example, a supervised machine learning algorithm might be given a training dataset of people who have or have not been diagnosed with COVID-19 and tasked with predicting COVID-19 diagnoses in new data; an unsupervised algorithm might be given information about people who have COVID-19 and tasked with identifying latent structures within the dataset. Many of the most exciting developments in machine learning over the past decade have been driven by the deep learning revolution, as increases in computing power have enabled many-layered neural networks to be trained on enormous datasets.
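For readers who want a more concrete sense of this distinction, the sketch below uses Python and scikit-learn on entirely synthetic data (no real patient features or COVID-19 data are involved) to fit a supervised classifier to labeled examples and then to cluster the same examples without any labels.

```python
# A toy illustration of the supervised/unsupervised distinction described above.
# All data here are synthetic; the "features" stand in for measurements such as
# temperature or oxygen saturation and do not reflect any real patients.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two made-up measurements for 200 hypothetical people.
X = rng.normal(size=(200, 2))

# Supervised learning: labels (here, random stand-ins for a COVID-19 diagnosis)
# guide the model, which can then predict labels for people it has not seen.
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
classifier = LogisticRegression().fit(X, y)
print("Predicted diagnoses for three new people:",
      classifier.predict(rng.normal(size=(3, 2))))

# Unsupervised learning: no labels at all; the model looks for latent structure,
# here by grouping the same people into two clusters based only on the features.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster assignments for the first ten people:", clusters[:10])
```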
The number of applications of machine learning approaches to COVID-19 is staggering (Google Scholar already returns close to fifty thousand results for the terms “machine learning” and “COVID-19”), so we will focus here on some accessible examples rather than attempt a systematic review.
Machine learning is being used for basic research, such as understanding COVID-19 and potential interventions at a biomolecular level. For instance, deep learning algorithms have been used to predict the structure of proteins associated with SARS-CoV-2 and to suggest proteins that might be good targets for vaccines. Deep learning has facilitated drug repurposing efforts by scanning the literature and public databases for patterns. And AI is being used to help conduct adaptive clinical trials that distinguish among potential COVID-19 therapies as efficiently as possible.
AI is also being used to help diagnose and manage COVID-19 patients. Some AI researchers are focused on helping people self-diagnose through technologies like wearable rings or chatbots. One algorithm predicts whether a patient has COVID-19 based on the sound of their cough. Once a patient reaches the hospital, datasets of lung X-rays are being used both to diagnose COVID-19 and to predict disease severity, and models like Epic’s Deterioration Index have been widely adopted to predict whether and when a patient’s symptoms will worsen.
How do we know that AI is actually helping?
In most cases, we don’t. Many companies and institutions that have developed or repurposed AI tools in the fight against COVID-19 have not published any data demonstrating how well their analytical tools work. In some cases these data are being gathered (for instance, Epic has repurposed a tool it used to predict critical outcomes in non-COVID-19 patients for the COVID-19 context and has tested the tool on more than 29,000 COVID-19 hospital admissions), but they have yet to be made public.
In general, high-quality trials of the type we expect in other areas of medical technology are scarce in the AI space, in part because of the relative absence of the FDA in this area. Most of these AI tools are being developed and deployed without agency oversight, which enables clinicians to use them quickly but also raises questions about whether they are actually safe and effective for their intended use. To date, just two companies have received emergency use authorizations from the FDA for their AI models; each aims to predict which patients are at particularly high risk of developing the more severe complications of COVID-19.
To be sure, at least some studies have been published evaluating these models, but the results have been mixed. One study of a model aiming to predict a patient’s likelihood of developing COVID-19 complications found that it performed fairly well in identifying patients at particularly high or low risk, but less well for the many patients in between. And because of the urgency of the pandemic, the model had already been adopted and used before these data were gathered and published.
These studies are consistent with other observations that many AI tools are not yet delivering on the promise of the technology. Take the AI chatbots that ask about your symptoms and provide an initial screening for COVID-19: one reporter tested eight of these chatbots with the same inputs and received a widely variable range of answers. One symptom checker found him to be at “low” risk of having COVID-19, another declared him to be at “medium risk,” and a third directed him to “start home isolation immediately.” (To be sure, the chatbots are designed for slightly different purposes, but to the extent that they are designed to deliver helpful advice to patients, the disparities are still concerning.)
How should policymakers treat the use of AI in the COVID-19 context?
Policymakers should encourage the best uses of AI in combatting COVID-19, but they should also be wary of its serious limitations. Despite its promise, AI does not resolve one of the classic tensions of new technologies: the tension between getting the science right and trying everything to see what works. There is an undoubted urgency to develop new tools to treat the disease, especially given its rapid spread and lethality, so a sense of optimistic experimentalism makes sense. But such an approach isn’t helpful if the diagnostics and therapies employed do not ultimately work. Nor should policymakers assume that the two approaches are complementary: bad therapies often preclude the use of good ones, or diminish our ability to properly test them.
AI also presents major challenges related to racial bias, which can arise without any conscious bias on the part of developers. The reason is simple: AI is only as good as its inputs, and where those inputs contain biases (a sad but true reflection of the world around it), any algorithm developed from them will embody those biases. Even before COVID-19, researchers found that because less money is spent on Black patients, a popular commercial algorithm for guiding healthcare decisions falsely concluded that Black patients are healthier than they actually are. This algorithmic bias is an example of “proxy discrimination”: the tendency of models to rely on facially neutral variables that stand in for group membership, even when the training data omit group identification. As a consequence, overusing AI may contribute to racial disparities in COVID-19 outcomes. And algorithmic bias is not just a concern in the clinical context; for example, a new working paper shows that relying on smartphone-based mobility data to inform COVID-19 responses “could disproportionately harm high-risk elderly and minority groups” who are less likely to be represented in such data. To be sure, AI can also be used to combat racial bias; there are some efforts to use AI specifically to identify the determinants of racial and ethnic disparities in COVID-19 outcomes. But whether AI developers can systematically and legally combat algorithmic discrimination more broadly remains to be seen.
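To make the proxy-discrimination mechanism concrete, here is a minimal synthetic sketch in Python (not the actual commercial algorithm that researchers studied, and with all numbers and group labels invented for illustration): a model that never sees group membership is trained to predict healthcare spending, a proxy that reflects historical underspending on one group. Because the proxy is biased, members of that group must be sicker, on average, before the model flags them for extra care.

```python
# A toy simulation of "proxy discrimination": training on healthcare *spending*
# (a biased proxy) rather than on health *need* itself. Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000

# True (unobserved) health need is identically distributed in both groups.
need = rng.normal(loc=5.0, scale=1.0, size=n)
group = rng.integers(0, 2, size=n)  # group 1 stands in for the underserved group

# Historically, less is spent on group 1 patients at the same level of need.
prior_cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(scale=0.3, size=n)
future_cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(scale=0.3, size=n)

# The model never sees group membership; it predicts future cost from prior cost.
model = LinearRegression().fit(prior_cost.reshape(-1, 1), future_cost)
risk_score = model.predict(prior_cost.reshape(-1, 1))

# Flag the top 10% of risk scores for extra care, as a cost-based program might.
flagged = risk_score >= np.quantile(risk_score, 0.90)
for g in (0, 1):
    mask = group == g
    print(f"group {g}: {flagged[mask].mean():.1%} flagged; "
          f"mean true need among flagged = {need[flagged & mask].mean():.2f}")
```

Running the sketch shows that far fewer group 1 patients are flagged, and those who are flagged are sicker on average than flagged group 0 patients, even though the model was never told anyone’s group.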
AI in clinical care holds promise but—if used poorly—has the potential to make things worse, not better. As noted by Arti Rai, Isha Sharma, and Christina Silcox, “[t]o avoid unintended harms, actors in the [AI] development and adoption ecosystem must promote accountability . . . that assure careful evaluation of risk and benefit relative to plausible alternatives.” Doing so—intelligently—will best encourage the technological development of AI in clinical settings while avoiding some of its worst excesses.
This post is part of a series on COVID-19 innovation law and policy. Author order is rotated with each post.