Health regulators such as the FDA and CDC always operate under uncertainty—evidence about health interventions inevitably comes with error bars. Under normal circumstances, regulators demand evidence that meets certain thresholds of validity and reliability before taking action, such as making a public health recommendation or approving a new drug. During a public health emergency like the COVID-19 pandemic, regulators are forced to act under higher-than-usual uncertainty: the social costs of waiting for better evidence often outweigh the costs of taking action before all the evidence is in. Regulators will thus make more mistakes than usual. And in addition to the direct costs of these mistakes, apparent flip-flopping on regulatory decisions risks undermining public trust in health agencies. In this post, we provide some examples of regulators reversing COVID-19-related decisions, describe the considerations these agencies are attempting to balance, and suggest ways for health regulators to maintain public trust while acting under high scientific uncertainty.
Where have regulators changed their decisions related to COVID-19?
In the last few months, regulators have made striking reversals in three areas: the wearing of masks by the general public, the emergency use authorization (EUA) of hydroxychloroquine and chloroquine, and the EUA for diagnostic company Chembio's COVID-19 antibody test.
One high-profile reversal addresses whether the general public should wear masks. As we have described, N95 masks do a good job protecting the wearer from the virus; other masks don't protect the wearer as well, but reduce the chance that an infected wearer will spread the virus to others. In the early days of the pandemic, public health officials, including at the CDC, were understandably concerned about shortages of personal protective equipment (PPE), including N95 masks, for health-care workers. To curtail panic-buying and hoarding, and relying on equivocal evidence about mask protectiveness, the CDC recommended in February that members of the public not buy or wear masks. The Surgeon General tweeted, "Seriously people- STOP BUYING MASKS! They are NOT effective in preventing general public from catching #Coronavirus, but if healthcare providers can't get them to care for sick patients, it puts them and our communities at risk!" On April 3, the CDC changed course and recommended that the general public wear cloth masks to reduce transmission. As of today, eighteen states (including DC) have statewide mask mandates for the general public, and most others have partial mandates; Iowa, Montana, Wisconsin, and South Dakota are currently the only states with no mask requirement.
Almost as striking is the FDA's revocation of the EUA for hydroxychloroquine and chloroquine (jointly, HCQ) to treat COVID-19. An EUA may be granted in emergency situations to authorize the use of a product that, while not supported by enough evidence to be cleared or approved, "may be effective" in treating, preventing, or diagnosing a disease. President Trump repeatedly touted the benefits of HCQ before the EUA was granted, leading to speculation that political considerations played a role in the FDA's March 28 issuance of the EUA. As use of HCQ increased, so did studies of the drug's effects, and questions about its safety and efficacy emerged. On April 24, the FDA cautioned against use of the drug outside hospitals or clinical trials due to the risk of serious heart problems. And on June 15, the FDA revoked the EUA.
As a third example, on June 16, the FDA revoked the EUA for Chembio's SARS-CoV-2 antibody test. We explained in a previous post that antibody tests, which determine whether someone has been infected in the past, were initially subject to little regulatory oversight, but that in early May the agency began to increase its scrutiny, including by requiring companies to apply for EUAs. Chembio was one of 10 manufacturers and laboratories that received an EUA before May, but the FDA states that it has now revoked this EUA based on two new pieces of information. First, the FDA learned more about the test itself; in addition to the performance data supplied by Chembio, the FDA received data from the National Cancer Institute's independent evaluation of the test (among many others). Second, the FDA's increased experience with the many COVID-19 antibody tests on the market has led it to develop general performance expectations, based both on how well existing tests can perform and on "what performance is necessary for users to make well-informed decisions." The new data on Chembio's test apparently fell short of the FDA's new standard. As of June 17, the test remains authorized in Europe and Brazil.
What kinds of considerations are regulators balancing under these circumstances?
In these situations, regulators are dealing with at least three types of challenges. The first is scientific uncertainty. Although there is always some degree of scientific uncertainty in the regulation and approval of new healthcare technologies, in emergency situations like this one, regulators need to make decisions earlier in the scientific process of understanding a new technology. In the midst of skyrocketing COVID-19 cases and widespread PPE shortages, the CDC could not wait for the best evidence on masks before making some public recommendation. And the FDA is being asked to authorize the use of prescription drugs or diagnostic tests on evidence that would not be sufficient for full approval of the products, and which is often of lower quality (such as when a clinical trial is not randomized). Like all of us, the government is also constantly learning more about the novel coronavirus, such that the FDA may update its performance expectations for diagnostic tests over time.
Second, regulators must consider the right balance of Type I and Type II errors in their regulatory decisions. On one hand, the FDA could try to minimize the number of unsafe or ineffective drugs it approves (Type I errors), for fear of harming patients and jeopardizing public trust in the agency. On the other hand, the agency could try to minimize the number of safe, effective drugs it fails to approve (Type II errors), because each such failure denies patients access to a drug that is actually safe and effective. Taking this patient-access perspective, economists have argued that the FDA is often too conservative in approving new drugs for particularly deadly conditions. The same considerations might weigh in favor of the agency taking rapid action to authorize new products, like HCQ or antibody testing, for a condition wreaking havoc around the globe. However, the negative coverage of the agency's decisions on both fronts means that, from the public's perspective, the agency already has two strikes against it. The FDA thus may be especially cautious when it comes to authorizing a vaccine candidate.
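To make the tradeoff concrete, here is a minimal, purely illustrative sketch of how one might compare the expected harm of the two error types for an emergency authorization decision. The function name and all of the numbers are hypothetical assumptions for the sake of the example; they are not drawn from any FDA analysis.

```python
# Illustrative only: hypothetical numbers, not an FDA model.
# Compares the expected harm of authorizing now versus withholding a product.

def expected_harm(p_effective, harm_if_authorized_ineffective, harm_if_withheld_effective):
    """Return (expected harm of authorizing now, expected harm of withholding).

    - Authorizing an ineffective or unsafe product (a Type I error) incurs
      harm_if_authorized_ineffective with probability (1 - p_effective).
    - Withholding an effective product (a Type II error) incurs
      harm_if_withheld_effective with probability p_effective.
    """
    harm_authorize = (1 - p_effective) * harm_if_authorized_ineffective
    harm_withhold = p_effective * harm_if_withheld_effective
    return harm_authorize, harm_withhold

# Hypothetical inputs: a 30% chance the product works, a Type I harm of 100
# (arbitrary units), and a Type II harm of 400 because the disease is deadly
# and no alternative treatment exists.
authorize, withhold = expected_harm(0.3, 100, 400)
print(f"Expected harm if authorized now: {authorize:.0f}")  # 70
print(f"Expected harm if withheld:       {withhold:.0f}")   # 120
```

On these made-up numbers, withholding is the costlier mistake, which mirrors the economists' argument that conservatism is expensive for deadly conditions. The point of the sketch is only that the balance turns on both the probability that a product works and the relative harms of the two errors, not that either answer is correct in any particular case.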
Third, the negative coverage of reversed decisions highlights the need for regulators to maintain public trust. As Professor Dan Carpenter has written in his seminal book on the FDA, the agency's reputation and public image as an organization committed to consumer safety have cemented public trust in the agency. The agency would jeopardize those existing high levels of trust if it made too many decisions that it later reversed. In the context of the pandemic, it is especially critical that the public trusts that any vaccine the FDA authorizes or approves is both safe and effective for its intended use. This task is made more difficult in light of reporting that the administration may pressure the FDA to authorize or approve a vaccine before the election. Similarly, the CDC's mixed messaging on masks sent contradictory signals that may have increased mistrust in the agency. Health regulators cannot simply assume widespread compliance with their recommendations and focus on maximizing health impact; they also need to consider how to maintain public trust.
How can health regulators maintain public trust under these circumstances?
Even with a host of empirical unknowns, agencies and policymakers have several strategies to better incorporate public trust concerns into these decisions, including being clearer about the evidentiary support for decisions and how those decisions will be updated in light of new evidence; recognizing the particular perils that Type I errors raise for public trust; and learning from their international counterparts.
First, regulators should be more explicit about both the level of uncertainty at the time a decision is made and how regulatory decisions will be updated as more data come in. Agencies should make clear that reversing decisions in light of new data is part and parcel of, not an exception to, administrative business. Communicating about uncertainty on the front end includes, for example, publicly explaining the differences between words like "authorized," "approved," and "cleared" (shibboleths of FDA practitioners that turn, in large part, on differences in evidence of safety and effectiveness).
To help clarify policies for reversing decisions on the back end, regulators should consider using precommitment mechanisms that specify why authorization decisions might be reversed. The initial EUA for HCQ, for example, could have stated that it was based on limited evidence from observational studies and would be withdrawn if the randomized trials then underway failed to achieve certain results (as ultimately happened). If no randomized trials had been underway, the agency could have precommitted to withdrawing the EUA under clearly defined circumstances, such as a failure to show a benefit in a clinical trial meeting certain conditions. Separately, establishing procedures for how the agency will assess previously unanticipated data should shield agencies from criticisms that they lack a "clear policy" on "unanswered scientific questions of utility and accuracy." Fortunately for the FDA, this is already baked into the EUA statute; the agency has broad leeway to revoke EUAs that may no longer be effective or safe "based on the totality of scientific evidence." But the agency has yet to release guidelines calibrating "what constitutes the totality of scientific evidence" on both the front end (authorization) and the back end (withdrawal) as the pandemic continues.
Second, and instructive in the antibody testing case, regulators should recognize that some errors have a greater potential to undermine public trust than others. If the FDA approves or authorizes a drug that turns out to be unsafe or ineffective (a Type I error), the mistake is likely to be discovered and to become a source of mistrust of the agency. By contrast, when a drug or device is erroneously denied emergency authorization (a Type II error), the mistake is less likely ever to come to light, because a product that is never authorized is rarely shown later to have been safe and effective. To be clear, public trust is not the only consideration in balancing Type I versus Type II errors, so this does not mean that Type I errors are more problematic overall. But the asymmetric effect on public trust is one factor that regulators should consider when balancing the fundamental tradeoff between risk and access.
Third, U.S. agencies could combat some of the uncertainty and public trust issues faced at home by learning from and relying on their international counterparts, not only by shifting domestic recommendations based on internationally derived evidence, but also by studying the administrative processes other countries use for withdrawals. France's Ministry for Solidarity and Health, for example, barred the use of HCQ in May after particularly negative evidence about the drug began to roll in, weeks before the FDA revoked its own EUA for the drug. Understanding not just the evidentiary basis for these decisions but how, mechanically, they were made may help U.S. agencies make swifter, better-informed decisions.
Of course, each of these suggestions is based on our assumptions of how the public will respond to certain agency actions, and public reaction to scientific information can be counterintuitive. In some contexts, efforts to improve public education about technical information can increase polarization about scientific facts. The idea that public trust will be enhanced by more explicit communication about uncertainty in FDA decisions should thus be treated as a plausible but testable hypothesis—and moving forward (including for the next pandemic), health regulators should fund and pay attention to research on the science of science communication in this context.
In the end, agencies' "reputation and power" comes down not just to resolving uncertainty but also to garnering public trust in the process. Correct scientific decisions from agencies that no one trusts are unlikely to change social behavior enough to stop a pandemic. That is all the more true for COVID-19. Getting things "right" the first time, 100% of the time, just isn't feasible. What agencies and policymakers should strive for, especially in the face of significant scientific uncertainty, is to be transparent that their decisions must evolve with the underlying scientific landscape.
This post is part of a weekly series on COVID-19 innovation law and policy. Author order is rotated each week.