By Lisa Larrimore Ouellette, Nicholson Price, Rachel Sachs, and Jacob Sherkow
Health systems worldwide are facing shortages of crucial medical supplies, including personal protective equipment (PPE), diagnostic testing components such as kits and nasal swabs, and even ventilators or ventilator parts. Enter 3D printing. A growing network of hobbyists, small-scale makers, dedicated 3D printing firms, and even larger companies with some 3D printing capacity are using the technology to help address ongoing shortages in the COVID-19 response.
What kinds of COVID-19-related products are being 3D printed?
3D printing, also called additive manufacturing, involves building a 3D form from the bottom up by adding one thin layer of material at a time. (Here’s a handy Congressional Research Service overview.) Many materials can be used, including plastic, metal, or resin. The printer is controlled by a computer which prints based on a computer-assisted design (CAD) file. 3D printers have been used for rapid prototyping or by hobbyists for years, but are more recently being used at larger scale.
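The layer-by-layer idea is easy to see in a toy calculation. Here is a minimal Python sketch of how a part's height translates into successive print layers; the 0.2 mm layer height and 10 mm part are illustrative assumptions, not figures from any particular printer, and real slicers of course do far more (reading a CAD/STL mesh, generating toolpaths, emitting G-code):

```python
import math

def layer_heights(part_height_mm: float, layer_mm: float = 0.2) -> list:
    """Z-height of each successive layer, deposited bottom-up."""
    n_layers = math.ceil(part_height_mm / layer_mm)
    return [round(i * layer_mm, 3) for i in range(1, n_layers + 1)]

# A 10 mm tall part at a 0.2 mm layer height takes 50 passes.
layers = layer_heights(10.0, 0.2)
print(len(layers), layers[0], layers[-1])  # 50 0.2 10.0
```

The slow, serial nature of this bottom-up process is also why 3D printing scales poorly compared to injection molding: every layer is a separate pass of the print head.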
3D printing is being used to create a host of products potentially relevant to COVID-19. PPE is perhaps the most common example; many people are producing the headbands of protective face shields; some are printing masks. Nasal swabs are also being printed in substantial numbers (the swabby bit at the end is a bristled resin structure, not cotton fibers like you’d find in a Q-Tip), and have been evaluated by, among others, Beth Israel Deaconess Medical Center. Printing has also been used for more complex components of medical devices, such as ventilator valves or splitters so that patients can share a ventilator—or even most of the components of emergency open-source ventilators.
Patent & IP blog, discussing recent news & scholarship on patents, IP theory & innovation.
Monday, April 27, 2020
How does the law impact the use of 3D printing to address COVID-19 production shortages?
Posted by
Lisa Larrimore Ouellette
Friday, April 24, 2020
Who’s Afraid of Section 1498?: Government Patent Use as Versatile Policy Tool
Posted by
Lisa Larrimore Ouellette
Guest post by Christopher Morten & Charles Duan
Chris Morten (@cmorten2) is the Clinical Teaching Fellow and Supervising Attorney in NYU’s Technology Law and Policy Clinic. Charles Duan (@Charles_Duan) is Director of Technology and Innovation Policy at the R Street Institute.
From vaccines to ventilators to diagnostic tests, technology has dominated response strategies to the ongoing COVID-19 pandemic. Where technology leads, patent law and policy follow. Recently, some attention has turned to federal government patent use under 28 U.S.C. § 1498. Jamie Love of KEI has called on the federal government to explore use of section 1498 in its response to COVID-19, to reduce prices, expand supplies, and ensure widespread, equitable access to patented technologies. (We have, too.) There is a long line of scholarship, including Amy Kapczynski and Aaron Kesselheim, Hannah Brennan et al., Dennis Crouch, Daniel Cahoy, and others discussing the relevance of section 1498 in a variety of contexts.
Yet others have encouraged the government to “tread lightly” and described use of section 1498 as a “nuclear option”—potent but dangerous—because it can be used to make massive interventions in the market for patented products—e.g., by issuing compulsory licenses to patents on high-priced brand-name drugs, “breaking” patent monopolies and accelerating the entry of numerous generic competitors. One recent example: a few years ago, Gilead’s high prices on hepatitis C drugs exacerbated a different public health crisis and prompted a chorus of voices, including Senator Bernie Sanders and the New York Times editorial board, to call on the federal government to exercise its section 1498 power to “break” Gilead’s patents in just this way, which might have saved tens of billions of dollars in public spending. (The federal government did not do so.)
Irrespective of the merits of section 1498 as a general matter, in the context of a crisis such as the COVID-19 pandemic we see real value in bold, “nuclear option” use of section 1498 to save billions on high-priced prescription drugs and maximize their availability. But that is not the only way section 1498 can be used. It can also be used in modest, incremental, unexceptional ways—it can be as much a scalpel or a Swiss Army knife as a nuclear weapon, and some of its virtues in this regard have gone underappreciated.
Accordingly, we highlight four particularly valuable features of government patent use under section 1498 in a crisis like the present one: (1) speed, (2) flexibility, (3) ex post determination of the appropriate compensation, and (4) determination of that compensation by an impartial adjudicator. In particular, we compare section 1498 with an alternative policy tool, patent buyouts, which can also expand public access to patented technologies, and identify several reasons why section 1498 may be the preferable tool.
Tuesday, April 21, 2020
Regulatory Responses to N95 Respirator Shortages
Posted by
Lisa Larrimore Ouellette
By Lisa Larrimore Ouellette, Nicholson Price, Rachel Sachs, and Jacob Sherkow
Our recent posts have highlighted shortages in three COVID-19-related knowledge goods: testing, drugs (such as those needed to put patients on ventilators), and clinical trial information about effective treatments. This week we focus on the role of legal regulators in another critical shortage: N95 respirators, one of the key forms of personal protective equipment (PPE) for healthcare workers. We explain how N95 regulation, like COVID-19 testing, presented an interagency coordination problem. The FDA has successfully removed key regulatory hurdles—though the problem should have been anticipated earlier, and much more needs to be done to ensure an adequate supply.
What are N95 respirators?
N95 respirators are a specialized subset of face masks (here’s a handy NY Times explainer with photos). A normal surgical mask (what you see, for example, in a typical medical TV show) fits fairly loosely around the face; it blocks splashes and relatively large droplets, but not tiny particles. An N95 respirator, on the other hand, is relatively rigid rather than flexible, and is designed to fit closely to the face and create a tight seal—so tight that the masks don’t work with certain beards (or for children). The “95” in N95 refers to the requirement that the mask block at least 95% of 0.3 micron particles (a few hundred times thinner than a human hair). Both types of masks are meant to be single-use.
N95 masks are meant to keep droplets that include the SARS-CoV-2 virus out. They’re not perfect, but if they’re well fitted, they are effective at protecting the wearer (most crucially right now, the healthcare providers who are caring for patients and are themselves still getting sick in droves). Surgical masks, on the other hand, do a worse job of keeping the virus out. Cloth masks even less so. But cloth masks can help keep droplets in—that is, if someone is sick, wearing a cloth mask may keep them from projecting droplets that can infect other people. The CDC now recommends that everyone wear a face mask when in public, to avoid infecting other people (because even asymptomatic individuals can infect others, and the lack of testing means it’s very hard for most people to know whether they have been infected). However, surgical masks and especially N95 masks are in very short supply, and should be reserved for medical professionals.
N95s were initially developed for industrial uses, including mining. The key feature—and what makes N95s harder to manufacture than other masks—is that they are made using what’s called “melt-blown” fabric. A polymer (such as polystyrene, polyurethane, or nylon) is melted and then blown through small nozzles; it forms a matrix of tiny fibers with many holes (think: cotton candy), which can capture particles. But the machines to make this fabric are complex and expensive, and manufacturers of the fabric are struggling—and failing—to meet demand.
Wednesday, April 15, 2020
How can innovation and regulatory policy accomplish robust COVID-19 testing?
Posted by
Lisa Larrimore Ouellette
By Lisa Larrimore Ouellette, Nicholson Price, Rachel Sachs, and Jacob Sherkow
It’s now clear that expansive, population-wide testing is part-and-parcel of every successful COVID-19 containment strategy. But US testing efforts, from the beginning of the pandemic until now, have been widely criticized as lacking. Perhaps as a direct consequence of this failure, the US now leads the world in COVID-19 cases and deaths. What are these tests and what’s our capacity to test; why is it important to test; how have the FDA and other administrative agencies addressed the issue; and what can we do about it?
What is the status of US testing capacity?
It is important to distinguish between two types of COVID-19 tests: reverse transcription polymerase chain reaction (RT-PCR) tests for SARS-CoV-2, the virus that causes COVID-19; and serological tests for the body’s immune response to SARS-CoV-2. The tests are not interchangeable: RT-PCR tests detect the presence of the virus’s genome itself, and thus determine whether someone is currently infected. Someone who was once infected and has since recovered will return a negative result. A serological test, by contrast, detects whether the body has produced antibodies to the virus; that’s useful to determine whether someone has been infected for long enough to mount an immune response.
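To make the distinction concrete, here is a minimal Python sketch of the question each test type answers. The function and its labels are ours, purely illustrative of the logic described above, not clinical guidance:

```python
def interpret_tests(rt_pcr_positive: bool, antibody_positive: bool) -> str:
    """Illustrative mapping from test results to what they indicate."""
    if rt_pcr_positive:
        # RT-PCR detects the virus's genome: an active infection.
        return "currently infected"
    if antibody_positive:
        # Serology detects the immune response: a prior infection.
        return "previously infected (immune response mounted)"
    return "no evidence of current or past infection"

print(interpret_tests(True, False))   # currently infected
print(interpret_tests(False, True))   # previously infected (immune response mounted)
```

Note the asymmetry: a recovered patient is RT-PCR-negative but serology-positive, which is why the two test types answer different policy questions (who needs isolation now versus who may already have some immunity).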
To date, virtually all of the testing has been of the RT-PCR type, useful for answering the question: Is the patient infected now? Testing centers in the US are currently running approximately 135,000 tests a day—far fewer per capita than in other countries. The US’s maximum, overall testing capacity is unclear and is, in any event, a moving target given that new tests are now being cleared by the FDA with some frequency. But it’s widely acknowledged that testing is not at the level that it needs to be to accurately assess the number of people infected with SARS-CoV-2.
There are myriad reasons for this deficit in testing: an initially slow ramp-up of tests approved by the FDA; difficulties in speeding manufacturing of kits used to conduct the tests; a shortage of reagents to conduct the tests, including solutions, primers, and even the swabs used to collect samples from patients; the capacity of clinical laboratories to run tests and return results; and less technical hang-ups like patients’ difficulties in finding or physically getting to testing sites and questions concerning who will pay for such testing.
Tuesday, April 7, 2020
How can the US address coronavirus drug shortages?
Posted by
Lisa Larrimore Ouellette
By Lisa Larrimore Ouellette, Nicholson Price, Rachel Sachs, and Jacob Sherkow
The escalating pandemic has caused devastating shortages not only of ventilators and personal protective equipment like masks, but also of essential medicines needed to treat COVID-19 patients. As detailed by STAT and the New York Times, prescriptions for painkillers, sedatives, anesthetics, and antibiotics are up, but the rate at which prescriptions are filled and shipped to hospitals is down. The FDA helpfully tracks drug shortages, but this doesn’t solve the problem. With the sudden spike in hospitalized patients with COVID-19 symptoms, physicians are using these drugs faster than manufacturers are making them.
What is causing these drug shortages?
Drug shortages are frighteningly common even in the best of times. A 2019 FDA report noted that from 2013 to 2017, at least 163 drugs went into shortage. (The actual number is likely much higher.) That report blamed “economic forces”—namely, price-eroding generic competition, a lack of incentives to make quality generic manufacturing more efficient, and supply chain difficulties that made the continued manufacture of older generics unprofitable. These problems are now exacerbated by the sudden demand spikes caused by COVID-19 patients. As just one example: propofol, an important drug for sedating patients who need intubation—and, historically, already in waxing and waning states of shortage—has seen prescriptions shoot up about 100%.
Supply has been slow to meet COVID-19-related demand—but slower still because of the outbreak’s disruption to the global supply chain. Many pharmaceutical ingredients are manufactured in China, which has seen slowdowns (and in some cases, shutdowns) in manufacturing sectors across the country. Furthermore, because drugs do expire, they’re not stockpiled when there’s a surplus. In some instances, countries have banned the export of drug products important for treating COVID-19 to ensure adequate supply for their own citizens. India, for example, has banned exports of hydroxychloroquine in the event the drug proves useful in treating COVID-19. It’s a wicked problem: the very thing causing the sudden spike in demand is shutting down the means of supply.
Monday, March 30, 2020
What does it mean that Oracle is partnering with the Trump administration to study unproven COVID-19 drugs?
Posted by
Lisa Larrimore Ouellette
By Lisa Larrimore Ouellette, Nicholson Price, Rachel Sachs, and Jacob Sherkow
One of the dizzying stream of innovation and health law stories to emerge last week is Oracle’s partnership with the White House to study unproven pharmaceuticals for treating COVID-19. We decided to unpack this story for ourselves and then to collectively share our thoughts in a short explainer.
What are the drugs being studied?
The initial stories about Oracle’s platform mention its use for two older drugs approved to treat malaria—chloroquine and hydroxychloroquine—that are now being tested to treat COVID-19. Both drugs are quite old: chloroquine was first approved by the FDA in 1949 and sold, until recently, under the brand name Aralen. Hydroxychloroquine, which is also used to treat lupus and rheumatoid arthritis, is sold under the brand Plaquenil and was first approved in 1955.
The impetus behind studying these two drugs stems from in vitro studies conducted in the wake of the 2002–2003 SARS-CoV-1 outbreak. Those studies suggested the drugs could inhibit some types of coronaviruses from both entering cells and replicating after infection—potentially serving as a preventative and a treatment. But the studies were small, in cell culture rather than living animals, and not conducted against the virus that causes COVID-19, SARS-CoV-2. Some early work with the drugs against SARS-CoV-2 may be promising, but it, too, has been done in a test tube rather than an animal model.
Wednesday, March 25, 2020
Does Gilead's (withdrawn) orphan designation request for a potential coronavirus treatment deserve your outrage?
Posted by
Lisa Larrimore Ouellette
Many commentators were outraged by the FDA's announcement on Monday that Gilead received orphan drug designation for using the drug remdesivir to treat COVID-19. The backlash led to a quick about-face by Gilead, which announced today that it is asking the FDA to rescind the orphan designation. For those trying to understand what happened here and the underlying policy questions, here's a quick explainer:
How could the Orphan Drug Act possibly apply to COVID-19?
Under 21 U.S.C. § 360bb(a)(2), a pharmaceutical company can request orphan designation for a drug that either (A) treats a disease that "affects less than 200,000 persons in the United States" at the time of the request or (B) "for which there is no reasonable expectation that the cost of developing and making available in the United States a drug for such disease or condition will be recovered from sales in the United States of such drug." An ArsTechnica explainer suggests that remdesivir received orphan designation under option (B), but this email from the FDA indicates that it was option (A).
The designation seems correct based on the plain language of the relevant statute and regulations: As of Monday, there were 44,183 cases diagnosed in the United States (and even fewer at the time of Gilead's request), and the Orphan Drug Act regulations indicate that orphan designation "will not be revoked on the ground that the prevalence of the disease . . . becomes more than 200,000 persons." But given the CDC's low-end estimates of 2 million Americans eventually requiring hospitalization, commentators have noted that this feels like a loophole that gets around the purpose of the Orphan Drug Act.
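The statutory logic is simple enough to sketch in a few lines of Python. The threshold and case figures come from the post; the function and variable names are ours, and this is an illustration of the two prongs as described, not legal advice. Recall also that, per the regulations quoted above, prevalence is measured at the time of the request and later growth does not revoke the designation:

```python
PREVALENCE_CAP = 200_000  # persons in the US, measured at time of request

def orphan_designation_available(us_prevalence_at_request: int,
                                 cost_recovery_expected: bool) -> bool:
    """The two prongs of 21 U.S.C. § 360bb(a)(2); either suffices."""
    prong_a = us_prevalence_at_request < PREVALENCE_CAP   # "affects less than 200,000 persons"
    prong_b = not cost_recovery_expected                  # no reasonable expectation of cost recovery
    return prong_a or prong_b

# 44,183 diagnosed US cases as of Monday satisfies prong (A) on its own,
# even if Gilead fully expected to recover its development costs.
print(orphan_designation_available(44_183, cost_recovery_expected=True))  # True
```

This is the "loophole" flavor of the situation: the snapshot taken at request time controls, even when projections (like the CDC's 2 million hospitalizations) dwarf the cap.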
What benefits would Gilead have received from an orphan designation?
The main effect would have been a tax credit for 25% of Gilead's expenses for the clinical trials it is running to figure out whether remdesivir is actually effective for treating COVID-19. (The tax credit was 50% when the Orphan Drug Act became effective in 1983, but was reduced to 25% by the December 2017 tax reform.)
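A back-of-the-envelope calculation shows what the rate change means in practice. The $40 million trial budget below is a made-up illustration, not Gilead's actual spending:

```python
# Orphan drug tax credit at the two rates mentioned above,
# applied to a hypothetical clinical-trial budget.
trial_costs = 40_000_000
credit_today = 0.25 * trial_costs   # 25% rate after the Dec. 2017 tax reform
credit_1983  = 0.50 * trial_costs   # 50% rate when the Act took effect
print(credit_today, credit_1983)    # 10000000.0 20000000.0
```

In other words, the 2017 reform halved the subsidy: the same trial that once generated a $20M credit now generates $10M.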
Tuesday, March 17, 2020
Challenging what we think we know about "market failures" and "innovation"
Posted by
Camilla Hrdy
I really enjoyed the final version of Brett Frischmann and Mark McKenna's article, "Comparative Analysis of Innovation Failures and Institutions in Context." The article was published in Houston Law Review in 2019. But I initially encountered it when the authors presented an early draft at the 2012 Yale Law School Information Society Project's "Innovation Beyond IP Conference," conceived and brought together by Amy Kapczynski and Written Description's Lisa Ouellette. The conference explored mechanisms and institutions besides federal intellectual property rights (IP) that government uses, or could use, in order to achieve some of IP's stated goals. Examples explored include research grants, prizes, and tax credits, among countless others.
Saturday, February 22, 2020
Deepa Varadarajan on Trade Secret Injunctions and Trade Secret "Trolls"
Posted by
Camilla Hrdy
I wrote some previous posts about eBay in trade secret law, and in particular Elizabeth Rowe's empirical work. Rowe found that not all trade secret plaintiffs actually ask for injunctions, even after prevailing at trial, among many other fascinating findings. I would be remiss if I did not flag a characteristically excellent discussion of this issue by Deepa Varadarajan, in a piece that may have flown under readers' radars.
Tuesday, January 14, 2020
Google v. Oracle: Amicus Briefing
Posted by
Michael Risch
Hello again, it's been a while. My administrative duties have sadly kept me busy, limiting my blogging since the summer. But since I've blogged consistently about Google v. Oracle (fka Oracle v. Google) about every two years, the time has come to blog again.
I won't recap the case here; my former posts do so nicely. I'm just reporting that 20+ amicus briefs were filed in the last week, which SCOTUSblog has nicely curated from the electronic filing system.
There are many industry briefs. They all say much the same thing: an Oracle win would be bad for industry, and also inconsistent with the law. (The R Street brief, and a prior op-ed, describe how Oracle has copied Amazon's cloud-based API declarations.) But since I'm an academic, I'll focus on the academic briefs:
1. Brief of IP Scholars (Samuelson & Crump): Merger means that the API declarations cannot be protected
2. Brief of IP Scholars (Tushnet): Fair Use is warranted
3. Brief of Menell, Nimmer, Balganesh: Channeling dictates that API declarations are not protected
4. Brief of Lunney: Infringement only occurs if whole work is copied; protection does not promote the progress
5. Brief of Snow, Eichhorn, Sheppard: Fair Use jury verdict should not be overturned
6. Brief of Risch: Protection should be viewed through the lens of abstraction, filtration, and comparison
My brief is listed last, but it's certainly not least. Indeed, I think it's a really good brief. But of course I would say that. If you've read my prior blog posts on this (including the one linked above), you'll note that my brief puts into legal terms what I've been complaining about for eight or so years: by framing this case as a pure copyrightability question, the courts have lost sight of the context in which we consider the protection of the API declarations. There might be a world where the declarations, if published as part of a novel and reprinted in a pamphlet made by Google, are eligible for copyright registration. But in the context of filtration, which neither the district court nor the Federal Circuit performed, the declarations are not the type of expression that can be infringed by a competing compiler. Give it a read. It's better than Cats (and not just the new movie); you'll read it again and again.
An even better summary of all the briefs is here.
I conclude with a paragraph from the brief, which I think sums things up:
This case boils down to [the] question: can a company own a programming language through copyright? Oracle would say yes, but the entire history of compatible language compilers, compatible APIs, compatible video games and game systems, and other compatible software says no. Michael Risch, How Can Whelan v. Jaslow and Lotus v. Borland Both Be Right? Reexamining the Economics of Computer Software Reuse, 17 The J. Marshall J. Info. Tech. & Priv. L. 511, 539–44 (1999) (analyzing economics of switching costs, lock-in, de facto standards, and competitive need for compatibility).
Sunday, November 10, 2019
Elizabeth Rowe: does eBay apply to trade secret injunctions?
Posted by
Camilla Hrdy
Elizabeth Rowe has a highly informative new empirical paper, "eBay, Permanent Injunctions, & Trade Secrets," forthcoming in the Washington and Lee Law Review. Professor Rowe examines, through both high-level case coding and individual case analysis, when and under what circumstances courts are willing to grant permanent injunctions in trade secret cases. (So-called "permanent" injunctions are granted or denied after the trade secret plaintiff prevails, as opposed to "preliminary" injunctions, which are granted or denied before review on the merits. They need not actually last forever.)
Thursday, October 24, 2019
Response to Similar Secrets by Fishman & Varadarajan
Posted by
Camilla Hrdy
Professors Joseph Fishman and Deepa Varadarajan have argued that trade secret law should be more like copyright law. Specifically, they argue trade secret law should not prevent people (especially departing employees who obtained the trade secret lawfully within the scope of their employment) from making new end uses of trade secret information, so long as the new use was not a foreseeable use of the underlying information and is generally outside of the plaintiff's market. The authors made this controversial argument at last year's IP Scholars conference at Berkeley Law, and in their new article in the University of Pennsylvania Law Review, called "Similar Secrets."
My full response to Similar Secrets is now published in the University of Pennsylvania Law Review Online. It is called: "Should Dissimilar Uses Of Trade Secrets Be Actionable?" The response explains in detail why I think the answer is, as a general matter, YES. It can be downloaded at: https://www.pennlawreview.com/online/168-U-Pa-L-Rev-Online-78.pdf
Tuesday, September 24, 2019
Lucy Xiaolu Wang on the Medicines Patent Pool
Posted by
Lisa Larrimore Ouellette
Patent pools are agreements by multiple patent owners to license related patents for a fixed price. The net welfare effect of patent pools is theoretically ambiguous: they can reduce numerous transaction costs, but they also can impose anti-competitive costs (due to collusive price-fixing) and costs to future innovation (due to terms requiring pool members to license future technologies back to the pool). In prior posts, I've described work by Ryan Lampe and Petra Moser suggesting that the first U.S. patent pool—on sewing machine technologies—deterred innovation, and work by Rob Merges and Mike Mattioli suggesting that the savings from two high tech pools are enormous, and that those concerned with pools thus have a high burden to show that the costs outweigh these benefits. More recently, Mattioli has reviewed the complex empirical literature on patent pools.
Economics Ph.D. student Lucy Xiaolu Wang has a very interesting new paper to add to this literature, which I believe is the first empirical study of a biomedical patent pool: Global Drug Diffusion and Innovation with a Patent Pool: The Case of HIV Drug Cocktails. Wang examines the Medicines Patent Pool (MPP), a UN-backed nonprofit that bundles patents for HIV drugs and other medicines and licenses these patents for generic sales in developing countries, with rates that are typically no more than 5% of revenues. For many diseases, including HIV/AIDS, the standard treatment requires daily consumption of multiple compounds owned by different firms with numerous patents. Such situations can benefit from a patent pool for the diffusion of drugs and the creation of single-pill once-daily drug cocktails. She uses a difference-in-differences method to study the effect of the MPP on both static and dynamic welfare and finds enormous social benefits.
On static welfare, she concludes that the MPP increases generic drug purchases in developing countries. She uses "the arguably exogenous variation in the timing of when a drug is included in the pool"—which "is not determined by demand side factors such as HIV prevalence and death rates"—to conclude that adding a drug to the MPP for a given country "increases generic drug share by about seven percentage points in that country." She reports that the results are stronger in countries where drugs are patented (with patent thickets) and are robust to alternative specifications or definitions of counterfactual groups.
On dynamic welfare, Wang concludes that the MPP increases follow-on innovation. "Once a compound enters the pool, new clinical trials increase for drugs that include the compound and more firms participate in these trials," resulting in more new drug product approvals, particularly generic versions of single-pill drug cocktails. And this increase in R&D comes from both pool insiders and outsiders. She finds that outsiders primarily increase innovation for new and better uses of existing compounds, and insiders reallocate resources for pre-market trials and new compound development.
Under these estimations, the net social benefit is substantial. Wang uses a simple structural model and estimates that the MPP for licensing HIV drug patents increased consumer surplus by $700–1400 million and producer surplus by up to $181 million over the first seven years of its establishment, greatly exceeding the pool's $33 million total operating cost over the same period. Of course, estimating counterfactuals from natural experiments is always fraught with challenges. But as an initial effort to understand the net benefits and costs of the MPP, this seems like an important contribution that is worth the attention of legal scholars working in the patent pool area.
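For readers unfamiliar with Wang's core method, a difference-in-differences estimate can be sketched with synthetic data. Everything below (sample size, baseline shares, the assumed 7-percentage-point effect) is made up to mirror the paper's headline number, not drawn from its actual drug-country panel data:

```python
# Toy difference-in-differences estimate in the spirit of Wang's design.
# All numbers are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
treated = rng.integers(0, 2, n)  # 1 if the drug-country pair is ever in the MPP
post = rng.integers(0, 2, n)     # 1 for observations after the drug joins the pool
effect = 0.07                    # assumed 7-point bump in generic share

# generic share = baseline + group effect + time trend + treatment effect + noise
share = (0.20 + 0.05 * treated + 0.03 * post
         + effect * (treated * post) + rng.normal(0, 0.02, n))

# DiD: (treated post - treated pre) minus (control post - control pre)
did = ((share[(treated == 1) & (post == 1)].mean()
        - share[(treated == 1) & (post == 0)].mean())
       - (share[(treated == 0) & (post == 1)].mean()
          - share[(treated == 0) & (post == 0)].mean()))
print(round(did, 3))  # recovers a value close to the assumed 0.07
```

The subtraction nets out both the fixed difference between treated and control groups and the common time trend, isolating the treatment effect — which is why the "arguably exogenous" timing of pool inclusion matters so much to the identification.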
Sunday, September 8, 2019
Anthony Levandowski: Is Being a Jerk a Crime?
Posted by
Camilla Hrdy
Former Google employee Anthony Levandowski was recently indicted on federal criminal charges of trade secret theft. As reported in the Los Angeles Times, the indictment was filed by the U.S. attorney’s office in San Jose and is based on the same facts as the civil trade secrets lawsuit that Waymo (formerly Google’s self-driving car project) settled with Uber last year. It is even assigned to the same judge. The gist of the indictment is that, at the time of his resignation from Waymo, and just before taking a new job at Uber, Levandowski downloaded approximately 14,000 files from a server hosted on Google's network. These files allegedly contained "critical engineering information about the hardware used on [Google's] self-driving vehicles …" Each of the 33 counts with which Levandowski is charged carries a penalty of up to 10 years in prison and a $250,000 fine.
This is a crucial time to remember that being disloyal to your employer is not, on its own, illegal. Employees like Levandowski have a clear duty of secrecy with respect to certain information they receive through their employment. But for a civil trade secret misappropriation claim, if none of that information constitutes a trade secret, there is no cause of action.
For criminal cases like Levandowski's, the situation is more complicated. The federal criminal trade secret statute shares the same definition of "trade secret" as the federal civil trade secret statute. See 18 U.S.C. § 1839(3). However, unlike in civil trade secret cases, attempt and conspiracy are actionable. 18 U.S.C. § 1832(a)(4)-(5). This means that even if the crime was not successful (because the information the employee took wasn't actually a trade secret), the employee can still go to jail. See U.S. v. Hsu, 155 F.3d 189 (3d Cir. 1998); U.S. v. Martin, 228 F.3d 1 (1st Cir. 2000).
The Levandowski indictment brings counts of criminal theft and attempted theft of trade secrets. (There is no conspiracy charge, which perhaps suggests the government will not argue Uber was knowingly involved.) But the inclusion of an "attempt" crime means the key question is not just whether Levandowski stole actual trade secrets; it is whether he attempted to do so while having the appropriate state of mind. The criminal provisions under which Levandowski is charged, codified in 18 U.S.C. §§ 1832(a)(1), (2), (3) and (4), provide that "[w]hoever, with intent to convert a trade secret ... to the economic benefit of anyone other than the owner thereof, and intending or knowing that the offense will injure any owner of that trade secret, knowingly—steals...obtains...possesses...[etcetera]" a trade secret, or "attempts to" do any of those things, "shall...be fined under this title or imprisoned not more than 10 years, or both…"
This means Levandowski can be found guilty of attempting to steal trade secrets that never actually existed. This seems odd. It contradicts fundamental ideas behind why we protect trade secrets. As law professor Mark Lemley observed in his oft-cited Stanford Law Review article, modern trade secret law is not a free-ranging license for judges to punish any acts they perceive as disloyal or immoral; it is a special form of property regime. Charles Tait Graves, a partner at Wilson Sonsini Goodrich & Rosati who teaches trade secrets at U.C. Hastings College of Law, echoes this conclusion. Treating trade secrets as an employer's property, Graves writes, counterintuitively "offers better protection for employees who change jobs" than the alternatives, because it means courts must carefully "define the boundaries" of the right, and may require the court to rule in the end "that not all valuable information learned on the job is protectable." See Charles Tait Graves, Trade Secrets As Property: Theory and Consequences, 15 J. Intell. Prop. L. 39 (2007).
So where does that leave Levandowski? In Google/Waymo's civil case against Uber, Uber got off with a settlement deal, presumably in part because Google recognized the difficulty of proving key pieces of its civil case. Despite initial appearances, Google's civil action was not actually a slam dunk. It was not clear Uber actually received the specific files Levandowski took, or that the information contained in those files constituted trade secrets rather than generally known information or Levandowski's own "general knowledge, skill, and experience." (I discuss this latter issue in my recent article, The General Knowledge, Skill, and Experience Paradox, forthcoming in the Boston College Law Review.)
But thanks to criminal remedies under 18 U.S.C. § 1832, and that pesky "attempt" charge, Levandowski is left holding the blame, facing millions in fines and many decades in jail.
Maybe being a jerk is illegal after all.
Wednesday, July 17, 2019
Pushback on Decreasing Patent Quality Narrative
Posted by
Michael Risch
It's been a while since I've posted, as I've taken on Vice Dean duties at my law school that have kept me busy. I hope to blog more regularly as I get my legs under me. But I did see a paper worth posting mid-summer.
Wasserman & Frakes have published several papers showing that as examiners gain more seniority, their time spent examining patents decreases and their allowances come more quickly. They (and many others) have taken this to mean a decrease in patent quality.
Charles A. W. deGrazia (University of London, USPTO), Nicholas A. Pairolero (USPTO), and Mike H. M. Teodorescu (Boston College Management, Harvard Business) have released a draft that pushes back on this narrative. The draft is available on SSRN, and the abstract is below:
Prior research argues that USPTO first-action allowance rates increase with examiner seniority and experience, suggesting lower patent quality. However, we show that the increased use of examiner's amendments account for this prior empirical finding. Further, the mechanism reduces patent pendency by up to fifty percent while having no impact on patent quality, and therefore likely benefits innovators and firms. Our analysis suggests that the policy prescriptions in the literature regarding modifying examiner time allocations should be reconsidered. In particular, rather than re-configuring time allocations for every examination promotion level, researchers and stakeholders should focus on the variation in outcomes between junior and senior examiners and on increasing training for examiner's amendment use as a solution for patent grant delay.

In short, they hypothesize (and then empirically show with 4.6 million applications) that as seniority increases, the likelihood of examiner amendments goes up, and it goes up on the first office action. They measure how different the amended claims are, and they use measures of patent scope to show that the amended applications are no broader than those that junior examiners take longer to prosecute.
Their conclusion is that to the extent seniority leads to a time crunch through heavier loads, it is handled by more efficient claim amendment through the examiner amendment procedures, and quality is not reduced.
As with all new studies like this one, it will take time to parse the methodology and hear critiques. I, for one, am glad to hear of the rising use of examiner amendments, which I long ago suggested as a way to improve patent clarity.