Friday, February 25, 2011

Lemley, Risch, Sichelman & Wagner on Bilski

Bilski v. Kappos was one of the most anticipated patent cases in recent history. Almost all observers agreed that Bilski's method of hedging risk was not patentable; the question was how broadly the Supreme Court's opinion would reach. In the end, the Court rejected a categorical exclusion of all business method patents (although Tom Goldstein has argued that Stevens's concurrence, which would have rejected business methods completely, reads like a lost majority). The Court also rejected the Federal Circuit's holding that the machine-or-transformation test is the only test for determining whether a process is patentable subject matter under § 101. But the Court noted that "the machine-or-transformation test is a useful and important clue," and lower courts have continued to rely upon it. So where does this leave us?

The Stanford Law Review recently selected six Bilski-related papers for its 2011 Symposium on The Future of Patents: Bilski and Beyond. Michael Risch has already shared brief descriptions of the papers over at Madisonian; I will examine each of the papers in a series of posts over the coming week.

First up is Life After Bilski by Mark Lemley (Stanford), Michael Risch (Villanova), Ted Sichelman (San Diego), and Polk Wagner (U Penn). These four were attorneys for, and signatories to, an amici brief in Bilski that argued for a technology-neutral applied/abstract approach: "Where an idea is claimed as applied, it is eligible for patentability, but if it is claimed merely in the abstract it is not." Their symposium piece moves away from this focus on abstraction: "We don't exclude inventions from patentability because the invention is too abstract. We refuse to patent certain claims when those claims reach too broadly and thereby threaten downstream innovation."

The authors argue that the machine-or-transformation test "threatens to effectively become mandatory," and this is bad because "[g]atekeeping approaches have proven unsatisfactory." (This builds on Risch's Everything is Patentable.) The thesis is that the abstract ideas doctrine is not meant to exclude certain subject matters; rather, it "prevent[s] inventors from claiming their ideas too broadly." (§ 112 also limits claim scope, but while § 112 is only concerned with disclosure, § 101 "is primarily concerned with removing obstructions to follow-on innovation.") The authors describe three historical cases that "illustrate how excluding abstract ideas limits claim scope"—O'Reilly v. Morse, The Telephone Cases, and Mackay Radio—and describe how their theory "is also largely consistent with the Court's more recent 'abstract idea' decisions," such as Gottschalk v. Benson, Diamond v. Diehr, Parker v. Flook, and Bilski. They also claim that their theory "provides the only reasonable explanation" of In re Abele.

Section III.B describes the new proposed § 101 test, a "contextual, common-law approach" that considers at least five factors:
  1. "Is the claimed invention potentially generative of many kinds of new inventions?"
  2. "Does the industry rely heavily on cumulative invention?"
  3. "Is the technological field fast-moving?"
  4. "Has the patentee disclosed a small number of embodiments but claimed a broad inventive principle?"
  5. "Has the patentee made an important contribution relative to the prior art?"
Section III.C argues that this test should not be the first step in evaluating patentability; the authors "agree with a number of commentators that the right time to apply section 101 is as a backstop after all other validity doctrines have been examined." Finally, Section III.D shows how this test might be applied. The authors argue that patent claims at issue in LabCorp v. Metabolite (method for diagnosing a vitamin deficiency) and Prometheus v. Mayo (method for adjusting medication dose) satisfy their test because the technologies were not generative and the patents were unlikely to hinder future innovators. For software patents that could impede follow-on innovators, as in Ex parte Heuer, the question is whether the patent "disclosed sufficient embodiments—particularly with respect to the existing prior art—to justify the relatively broad language" of the claim. The authors also argue that even if patents on "exercising" or "every method of controlling ball motion using spin" are too broad, specific patent claims for "a novel method of exercising that helps relieve knee pain" or "a novel method of throwing a particular kind of curveball" should be patentable subject matter because they would not foreclose follow-on innovation (though they may fail as obvious, etc.).

So does this proposed test help move the ball forward? There was no shortage of ideas on how the Supreme Court should define patentable processes in Bilski (see PatentlyO's summaries of briefs here and here), and articles of this form ("the current test is unworkable, and here's my better test") often fail, but this piece seems to satisfy my basic patent criteria for a valuable article. I hadn't seen the idea that § 101 should be about claim scope before, and I thought the examples of how this test would work in practice were very useful. Is their test really workable? Part of the appeal of the formalistic machine-or-transformation test is its relative ease of application, but I think this claim-scope test does a decent job of following Peter Lee's suggestion of making patent standards "enabled," such as by providing examples. And I think this piece could do more to emphasize the potential efficiency gains of the claim-scope approach, as it would help reduce the dynamic inefficiencies caused by patents.

Monday, February 21, 2011

John Golden: Innovation Dynamics

Can fluid mechanics aid our understanding of how patents promote (or impede) innovation? This is the premise of a recent article, Innovation Dynamics, Patents, and Dynamic-Elasticity Tests for the Promotion of Progress (Harv. J.L. & Tech. 2010). Its author, John Golden (UT Austin Law), is another law professor with a physics Ph.D.* (supervised by Bert Halperin at Harvard), and he uses this background to write an article with more equations and figures than I have ever seen in a law journal. The analogy to fluid dynamics is somewhat strained, and some analytical details are unclear, but I admire the attempt to develop a more rigorous model of innovation.

The paper looks at the acceleration of technological progress (the rate at which the speed of progress changes with time), with the basic (and plausible) model given in Eq. 1. Progress is sped by terms that do not depend on the amount of existing technical knowledge (like the existence of grants) and by terms that do (innovation begets more innovation). And progress is slowed by "friction" and "drag" terms (as innovation goes faster, more hurdles will slow it down, perhaps including other patents). The clearest payoff from this model is the observation that "even under a relatively simple model for innovation dynamics, there is no single 'natural' trajectory for innovative progress. Thus, it appears wrong to assume that technological progress naturally proceeds exponentially, linearly, or according to some other simple general form with time." For example, Section IV.B looks at data on the growth of patent counts (which is a problematic measure of innovation, but probably the best available for these purposes). From the log-log plot in Figure 13, Golden concludes that the number of utility patents is "better modeled as having power-law, rather than exponential, time-dependence." While calling this a nice power law would have been a stretch, it is certainly true that it is closer to a power law than an exponential.
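Golden's actual Eq. 1 is not reproduced in this post, but for readers who want something concrete, a minimal sketch of a model with the structure he describes might look like the following; the specific functional forms (a linear "friction" term and a quadratic "drag" term, by analogy to fluid mechanics) are my own illustrative assumptions, not the article's:

```latex
% A minimal sketch, not Golden's actual Eq. 1: the functional forms below are assumed
% for illustration and match only the qualitative structure described in the post.
% K(t) = stock of technical knowledge; \dot{K} = speed of progress; \ddot{K} = acceleration.
\ddot{K}(t) \;=\; a \;+\; b\,K(t) \;-\; c\,\dot{K}(t) \;-\; d\,\dot{K}(t)^{2}
% a: knowledge-independent spur (e.g., grants)      b: innovation begets innovation
% c: "friction" (linear in the speed of progress)   d: "drag" (quadratic, as in fluid mechanics)
%
% A model of this shape has no single "natural" trajectory. Two candidate forms for
% patent counts N(t) can be told apart graphically, as in the Figure 13 discussion:
%   power law:    N(t) \propto t^{\alpha}     -> a straight line (slope \alpha) on log-log axes
%   exponential:  N(t) \propto e^{\lambda t}  -> a straight line only on semi-log axes
```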

What are the implications of this model for patent policy? Golden notes that what matters for patent policy is how changes in the model's different coefficients compare to one another, comparisons he formalizes as "dynamic-elasticity" or "double-ratio" tests. He describes what these tests imply for different industries:
[I]n industries such as the pharmaceutical industry, where regulation and safety concerns help generate large innovative drag that exists independently of the details of patent law, comparatively stronger or broader patent protection might speed progress even if such increased protection imposes substantial costs on follow-on innovators. . . . On the other hand, if a technology (such as software) would be characterized, in the absence of patents, by both strong incentives for innovation and relatively low costs for follow-on innovation, the analysis might be reversed.
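In terms of the illustrative coefficients sketched above, one way to read a "double-ratio" test (my own gloss, not Golden's notation) is as a comparison of proportional changes in the incentive and drag coefficients:

```latex
% A hedged gloss on the "double-ratio" idea, reusing the illustrative coefficients above
% (b = innovation-begets-innovation term, d = "drag" term); this is my reading, not
% Golden's own formulation. Suppose a patent-policy change shifts b -> b + \Delta b
% and d -> d + \Delta d. The change arguably promotes progress only if the relative
% boost to incentives outweighs the relative increase in drag on follow-on innovation:
\frac{\Delta b / b}{\Delta d / d} \;>\; 1
% Pharmaceuticals: regulation makes the baseline drag d large regardless of patents, so
% \Delta d / d stays small and stronger protection can pass the test. Software: incentives
% are already strong and baseline drag is low without patents, so \Delta b / b is small,
% \Delta d / d is large, and the analysis can reverse, as in the block quote above.
```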
More generally, he also explains why double-ratio tests help justify both the "abstract ideas" and "natural phenomena" exclusions from patentable subject matter. Some patent theorists might respond, "So what? We knew all that." But even confirming settled patent law understandings in a new way seems valuable, and I can imagine follow-up papers that build on this idea. Besides, I just think it is fun to see a law review article with derivatives (the non-finance kind), exponentials, and power laws. Not many Bluebook rules for those!

*The other physics Ph.D. law professors I know of are Oskar Liivak (Cornell), David Friedman (Santa Clara), and Katherine Strandburg (NYU). Am I missing anyone?

Thursday, February 17, 2011

Fighting Over Green Patents: How To Appease China & India Without Hurting U.S. Business

Tomorrow at the Yale Climate & Energy Congress Symposium, I will be presenting on a Comment I published in the Yale Law Journal last May: Addressing the Green Patent Global Deadlock Through Bayh-Dole Reform. (I also wrote a nontechnical version of this argument for Slate, License To Green: Can We Have Clean Energy and Patents, Too?) Rather than summarizing the whole argument here, I will just point out the three pieces that I think are novel contributions:
  1. One way to address global concerns about green patents is by changing the way federally funded green technologies are patented and licensed. A number of articles had recognized that conflicts over IP are contributing to the deadlock in climate change negotiations, but none made the distinction between the patent incentives needed for public-sector and private-sector innovation. I examine the justifications for Bayh-Dole patents as applied to green technologies and conclude that in light of available evidence, patents will impede dissemination of most green technologies.
  2. Market segmentation should be used for green technologies. The argument for allowing strong patent protection in rich countries (to recoup development costs) while allowing broad access in poor countries has been made by scholars, advocates, and universities in the medical context (see, e.g., this policy statement from AUTM and many universities), but I'm not aware of anyone who had extended this argument to green engineering technologies. And market segmentation is even more compelling for green technologies because patent protection is less important for them than it is for pharmaceuticals.
  3. Funding agencies should use their ex ante control over who receives federal grants to influence licensing policies. Several scholars, particularly Professor Arti Rai, have looked at the impact that funding agencies can have on Bayh-Dole reform, but their focus has been on the ex post influence of these agencies on technologies that have already been developed. I argue that agencies could influence university licensing more effectively through their ability to determine who receives federal grants in the first place. For example, the National Science Foundation's "broader impacts" criterion could be used to encompass access-promoting licensing policies.
I welcome feedback, either in the comments or by email. And for readers in New Haven, feel free to stop by the symposium!

Wednesday, February 16, 2011

Why the Best Law Review Articles Deserve Patents

As new articles editors begin screening the current wave of law review submissions, I have been thinking about when I was in their position a year ago, reading articles during every waking hour and trying to identify the best legal scholarship. My vision of legal scholarship was shaped as a 1L by reading Academic Legal Writing (I have the 3rd edition) by Eugene Volokh of the Volokh Conspiracy (highly recommended for new law students). Volokh argues that a good article should make a claim that is useful, novel, and nonobvious—paralleling §§ 101, 102, and 103 of the Patent Act. He credits this idea to Stephen Carter's 1991 Yale Law Journal essay, Academic Tenure and "White Male" Standards: Some Lessons from the Patent Law (pay access only, from JSTOR or Hein). Carter writes:
[M]y claim is not that every article must, in effect, deserve a patent if it is to be adjudged a good piece of work; my claim, rather, is that the works of scholarship that can meet the patent test are better—add more to human knowledge—than the works that cannot. So if one wants to argue the relative merits of different scholarly works, the patent law tests of novelty and nonobviousness provide useful and workable starting points.
Volokh goes beyond Carter by adding the § 101 utility requirement: "It helps if the article is useful—if at least some readers can come away from it with something that they'll find professionally valuable." His examples focus on utility for legal practitioners, which makes me wonder if he would agree with Judge Posner's diatribe against too many "law and ..." articles being published by law reviews. I would agree that if an article is so esoteric or specialized that it would only be useful to a few people, then it doesn't belong in a generalist law review. But I think the top law journals should publish the top legal scholarship regardless of field, even if it is only useful for other academics, so I would argue for a broader concept of utility—the "some readers" who find something professionally valuable could be legal philosophers or economists or scholars of "law and literature."

To the above "patent requirements" for articles, I would add the disclosure requirements of § 112. The article should "contain a written description" of the idea in "clear, concise, and exact terms." Clear: avoid jargon! Concise: keep it short! Some law professors think "word limits are for suckers," but we rejected otherwise outstanding articles for being way too long, and the long articles we took were accepted in spite of, not because of, their length. Articles should also "enable any person skilled in the art" to follow and use the idea, which raises the question of who the PHOSITA is for law review articles. For a generalist journal like the Yale Law Journal, I think a 2L who has taken a course in the relevant area should be able to at least follow the whole argument, even if faculty consults are needed to help fully appreciate it.

Finally, § 112 directs patentees to "conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention." Academic authors help their readers (particularly non-specialist 2L articles editors) by being similarly honest about the scope of their claims. What is really new about the piece? What is the "invention," and what is "prior art"? Too many articles falsely claim to revolutionize a field, and a few articles offer a truly novel insight without claiming it as their own.

Of course, as Carter pointed out, very few articles deserve a "patent." Many valuable articles fall short on one (or more) of the above requirements. But a few of the articles I read last year particularly stood out, such as The Inducement Standard of Patentability by Abramowicz and Duffy, or, in the non-patent world, Intersystemic Statutory Interpretation by Abbe Gluck and The One and Only Substantive Due Process Clause by Ryan Williams. As a reader—and especially as an articles editor looking for pieces to publish—it is thrilling to come across new "inventions" like these.

Tuesday, February 15, 2011

Oskar Liivak: Cult of the Claim

Does the often-used term "invention" have "zero substantive impact" in patent law? This is Professor Oskar Liivak's claim in his draft paper, Rescuing Invention from the Cult of the Claim. Liivak is an Assistant Professor at Cornell Law School, and I have been following in his academic footsteps – from a Ph.D. in Physics at Cornell to a J.D. at Yale – so it is interesting to see where this intellectual background has taken him.

It was a coincidence that I read Liivak's paper right after Lemley's Point of Novelty (the subject of my last post), but there are interesting parallels between the two papers. Lemley argues that we have lost sight of what is new about an invention and remarks, as I mentioned, that "[t]he heart of the problem may be the law's focus on the language of lawyer-created claims rather than inventor-created technologies." Liivak sees a similar problem with the current definition of "invention" as "the subject matter circumscribed by a valid patent claim," rather than as "the set of embodiments that the inventor has conceived and reduced to practice." (I also recently heard James Dabney, who represented KSR in KSR v. Teleflex, describe this same theoretical confusion between the claims and the invention as a guest in a Yale Law class, so this theme has been popping up a lot lately.)

But Liivak explores this problem – and a potential solution – in more detail than I have seen before. His historical explanation of how a substantive concept of invention was wiped out of patent law is delightfully concise, and his exploration of the problems with the "trivial view" of invention (in which the claims define the invention) seems convincing. He then argues for a "substantive view" of invention, mediated by disclosure:
[D]isclosure ... enforces the equivalence between the invented subject matter and the claimed subject matter. A claim complies with the requirements of § 112 if the specification can corroborate that the inventor invented the claimed subject matter. This ensures that the claims are indeed good proxies for the invention.
Under this view, claims are "proxies for the invention" or "administrative tools," and claims can still be broad as long as the disclosure is. It isn't clear to me, though, what Liivak would think of something like the windshield wiper example in my last post. If the inventor only invents intermittent windshield wipers but claims a car using these wipers, is that claim allowed as long as the car is fully disclosed? Lemley would say yes, but that courts should recognize that only the wipers were novel; I'm not sure whether Liivak's "substantive" invention is only the wipers or if it can include the car. But in any case, it's an interesting argument, and Liivak explores a number of resulting details (like whether his claims could reach after-arising technology).

The draft of Liivak's article linked above is from the August 2010 IP Scholars Conference at Berkeley; it was clearly a work-in-progress at that point, but I'm sure it has evolved and improved over the past six months. It doesn't appear to have been accepted for publication yet, so I look forward to seeing where it ends up and reading the final version. (I'll update this post when a more recent version is available.) New law review articles editors, take note!

Saturday, February 12, 2011

Mark Lemley: Point of Novelty

Who knew that there were so many novel things to say about novelty! In an earlier post, I described Sean Seymore's argument that the novelty test is too strict for complex inventions. This post examines how Mark Lemley (Stanford Law, and the most cited IP prof) criticizes novelty doctrine in a more fundamental way in Point of Novelty. Lemley argues that although "the goal of the patent system is to encourage new invention," "[p]atent law today goes out of its way to avoid focusing attention on ... the point of novelty of the invention." He argues that courts should focus on the point of novelty when assessing patents.

Lemley notes that patent claims used to describe the new features of the invention ("central claiming"); it was only around 1870 that courts shifted to interpreting claims as defining an invention's boundaries ("peripheral claiming"), a trend that has accelerated with the decline of "Jepson" claims and with the Markman decision holding that claim construction is a question of law. Patentees today rarely identify the point of novelty of their inventions; for example, patent claims for a 4-step process will not tell you if only one of those steps is new.

The insistence that there is no point of novelty of an invention developed out of cases in which courts had to determine whether combination inventions (where the novelty lies in the combination) are obvious, but the rule has spread to other patent doctrines, including anticipation, subject matter, best mode, claim construction, infringement, and damages. Lemley notes some areas where the point-of-novelty approach survives, but the most interesting section of his paper is II.B, with examples of where he believes "the no-point-of-novelty doctrine leads [courts] astray":
  • Repair vs. reconstruction. In Aro v. Convertible Top, the Supreme Court "held that car owners could bypass the patent on convertible top assemblies by replacing what is arguably the most important part of the top an unlimited number of times," which "allowed a third party supplier to capture a significant part of the value supposedly resident in the invention."
  • Written description. Although "[t]he most reasonable theory" for having a separate written description requirement is "to prevent 'late claiming' by a patentee who changes her claims during the prosecution process to cover things she didn't actually understand," "subsequent Federal Circuit decisions have used the no-point-of-novelty rule to read ... the late-claiming concern ... out of written description law."
  • Best mode. The no-point-of-novelty rule exacerbates the problem of best mode being "a potential trap for the unwary." Under current doctrine, "the court would invalidate a patent on the car with intermittent windshield wipers if the inventor did not disclose her preferred brand of tires, a rather extreme requirement."
  • Infringement. "[A] defendant can avoid infringement by eliminating any one of those elements [of the patentee's invention], even if it appropriates the point of novelty ... in its entirety."
At this point, Lemley addresses the question that his examples raised for me: "If patent owners can hamstring themselves by including prior art elements in their patent claims, why do they do it?" His answer: Much of it is related to damages being calculated as a percent of the sales of the relevant product. "[I]f the inventor of the intermittent windshield wiper claims a windshield wiper, his damages in a lawsuit will be measured by the sale of windshield wipers. But claim the identical invention as a car with an intermittent windshield wiper as an element, and the royalty base is the sale of cars – a much larger number."

But is this really a problem? If all the inventor made was a windshield wiper, why should his royalty base be a car? Doesn't the no-point-of-novelty doctrine help prevent overreaching by patentees? Perhaps Lemley would respond that damages should also be calculated based on the point of novelty, but he doesn't address this question. I do think he is right, however, in his conclusion to this section: "The heart of the problem may be the law's focus on the language of lawyer-created claims rather than inventor-created technologies."

Thursday, February 10, 2011

Mike Schuster: Claim Construction and Technical Training

Are judges with a technical background better at construing patent claims? Practitioner W. Michael Schuster (@Patent_Nerd on Twitter) addresses this question in his working paper, Claim Construction and Technical Training: An Empirical Study of the Reversal Rates of Technically Trained Judges in Patent Claim Construction Cases. He claims to show that technically trained district judges are no less likely to be reversed by the Federal Circuit on claim construction than judges without a technical background.

"Technical background" was defined as having an undergraduate degree in science or engineering, and Schuster surveyed judges and searched with the Westlaw Profiler feature to make a database in which 28 out of 617 judges had a technical background. He found 19 patent claim decisions by 8 of these technically trained judges and compared the reversal rate in these cases with the overall claim reversal rates in David Schwartz's Practice Makes Perfect? An Empirical Study of Claim Construction Reversal Rates in Patent Cases (which found that judges with more experience in claim construction were also no less likely to be reversed). Though obviously limited by the small sample size, this is still an interesting result. But as Schuster notes, it is hard to know what to conclude; for example, technical training might only help for patents in that specific area of technology, or claim construction might just be "inherently indeterminate."

Wednesday, February 9, 2011

Michael Risch: Reinventing Usefulness

Few academics have recently been able to offer a novel (or useful?) take on the utility requirement, but I enjoyed Reinventing Usefulness by Michael Risch (Villanova Law), which has just been posted in its edited version (for the BYU Law Review). Risch makes the thought-provoking argument that we should introduce a "commercial utility" requirement (pp. 1240-41): "The proposed test would find commercial utility present with sufficient evidence to convince a person with skill in the art that a) there is a market for the invention, and that b) the invention can be manufactured at a cost sufficient to fulfill market demand. Given that more than 50% of patents wind up being worthless, an initial review to determine which patents are most likely to be worthless should be practically achievable." For a nice review of an earlier draft of this article, see John Duffy's post at Jotwell: Patent Utility Reduxit.

Tuesday, February 8, 2011

Sean Seymore on Novelty and Disclosure

Coming to patent law from physics, I have been interested in patent doctrines that seem crazy when I try to explain them to my physics friends. So I was intrigued to discover the work of a professor with a similar research agenda: Sean Seymore is a professor of law and chemistry at Vanderbilt, and he describes his project (in both of his papers I read) as "bridg[ing] the disconnect between patent law and the norms of science."

His latest published paper, Rethinking Novelty in Patent Law (Duke L.J. 2011), argues that the novelty test is too strict for complex technologies. He describes the "quintessential novelty problem" as trying to claim compound X when an earlier third-party patent recites the structure X as one of many compounds without additional details, leaving a question of enablement: was X already in the public's possession? He notes that the current novelty regime incentivizes the earlier patentee to conceal experimental failures and that "it appears that a third-party patent’s mere recitation of X by name or structure is, as a practical matter, sufficient to anticipate a subsequent inventor’s claim to the compound."

Seymore proposes a new novelty paradigm in which the examiner has the initial burden of proving that the disclosure of X in the earlier third-party patent is enabling, for which only documents dated earlier than the third-party patent may be used. He argues that this would promote innovation by allowing X to be patented (and thus enabled and exploited), but the article does not consider the innovation costs of this proposal. Making it easier to patent X will cause dynamic inefficiencies by increasing costs for other innovators who want to use X (in addition to creating static inefficiencies due to the increased price of X). These inefficiencies may well be outweighed by the benefits Seymore outlines, but due to the difficulty of measuring innovation, the problem should at least be acknowledged.

I also read The Teaching Function of Patents (Notre Dame L. Rev. 2010) as background for my own draft paper on patent disclosure (for which I surveyed nanotechnology researchers about how they use the technical content of patents). In this article, Seymore argues in favor of stronger patent disclosures to improve "the ability of the patent to disseminate technical knowledge." Jeanne Fromer (Fordham Law) had put forth a similar defense of robust disclosure requirements in Patent Disclosure (Iowa L. Rev. 2009), and Seymore could have more strongly recognized her contribution (rather than only citing her as part of "a limited amount of scholarship which addresses patent disclosure"). But Seymore's contribution is still valuable, as he joins Fromer in arguing against a long line of patent law theorists and economists who have critiqued the disclosure theory of patents (the idea that we award patents as quid pro quo for the patent disclosure), and he offers some different prescriptive suggestions.

Seymore's main suggestion for improving disclosure is that patent examiners should be able to require "working examples." When I read the abstract, I thought he was talking about physical models, but he actually means that "at least for complex inventions, an actual reduction to practice must become the standard of disclosure" (i.e., patents should not be awarded for "prophetic examples" through the legal fiction of "constructive reduction to practice"), and that inventors should "prove, through adequate detail in the written description, that the claimed invention has been constructed and works for its intended purpose." I think this suggestion is probably sound; as I describe in my own patent disclosure paper, many scientists are surprised to learn that patents can currently be awarded for a Gedankenexperiment, rather than only for inventions that have been shown to work. But it also seems difficult to compare the benefits of increased disclosure with the costs to innovation, which is a problem Seymore again does not address. Still, I enjoyed both articles, and I appreciate reading about patents from someone who actually knows about the technologies that they are trying to promote.

Update 2/22/11: I posted my patent disclosure paper on SSRN last week, so I added links to it. If you are planning on citing it, let me know so I can keep you apprised of updates.

Monday, February 7, 2011

Barney & Collins-Chase: Empirical Analysis of District Court Claim Construction

Patent practitioners James R. Barney and Charles T. Collins-Chase have analyzed 211 district court Markman decisions (encompassing 1858 disputed constructions) for their new article, An Empirical Analysis of District Court Claim Construction Decisions, January to December 2009 (published about two weeks ago by the Stanford Technology Law Review). They tried to locate every claim construction decision in that time period, though they note that some may have escaped their search parameters (which they do not reveal). Their main findings:
  • The district court adopted the patentee's construction (or a minor variation) 36.8% of the time and the accused infringer's construction 15.1% of the time; in the remaining 48.1% of cases, the court adopted a construction that was substantively different from that proposed by either party.
  • The districts with the highest patentee win rates were D. Minn, N.D. Ill, and S.D. Cal.; those with the highest infringer win rates were D.D.C., N.D. Cal., and D. Del.
  • Patentees chose the "broader" construction 90% of the time, and the broader construction won 47.8% of the time overall (compared to 15.6% for the narrower construction). The broader construction was 3.5 times more likely to win when proposed by the patentee, but only 1.3 times more likely to win when proposed by the accused infringer.
  • The median "C/L" ratio (the number of words in the proposed construction divided by the number of words in the actual patent limitation) was 3.08 for patentees, 3.6 for accused infringers, and 2.41 for courts. There is an inverse relationship between win rate and C/L ratio for constructions proposed by patentees (shorter constructions win more often), but not for constructions proposed by accused infringers.
  • The court held that no construction was necessary 38.5% of the time when that argument was put forth by patentees, compared with 13.5% of the time when the argument was made by accused infringers.
  • "Carve-out" constructions, like "XY but not Z" or "X, for example Y or Z" were rare (137 out of 1858 disputed constructions); the win rate for these constructions was 19.7% for patentees and 8.4% for accused infringers.
Aside from arguing that there may be "a systemic bias in favor of patentees’ proposed constructions," the authors do not draw any conclusions from these results or make any prescriptive proposals. Still, I am unaware of any similar in-depth analysis of Markman decisions at the district level, so I'm sure these results will be useful to scholars studying claim construction (and to practitioners involved in Markman hearings).

Sunday, February 6, 2011

Kapczynski & Krikorian: A2K in the Age of IP

Concurring Opinions is currently hosting an online symposium on Access to Knowledge in the Age of Intellectual Property, a collection of essays edited by Amy Kapczynski and Gaëlle Krikorian. The book is available for free download or purchase ($15.69 at Amazon).

Access to Knowledge (A2K) is a movement united by a common critique of strong IP laws; Kapczynski explored the interesting political economy of this social mobilization in a 2007 Yale Law Journal article. It has never been clear to me that the diverse actors included under the A2K umbrella are really united by a coherent intellectual theory (or consistently unified interests); but then again, "intellectual property" itself encompasses a diverse (and sometimes incoherent) array of concepts. Access to Knowledge in the Age of Intellectual Property brings together a diverse collection of international A2K activists and scholars. I have only read a few chapters and skimmed a few more, but it seems like a good reference for those interested in the young A2K field. I would particularly recommend Kapczynski's introduction and Yochai Benkler's chapter on the information commons (particularly pp. 226-35).

In the Concurring Opinions symposium, I thought Frank Pasquale drew some interesting parallels between A2K and other reform efforts (like health care), which "run up against the 'irresistible force' of capital flight and demands for increasing returns on investment." I also enjoyed Lea Shaver's use of Google's ngrams to track the historical use of IP terms; she showed, for example, that patents have historically been written about much more often than copyrights or trademarks, and that the concept of "intellectual property" as a coherent grouping has only taken off in the past few decades.

Friday, February 4, 2011

Buccafusco & Sprigman: Valuing IP

Are there problems with the way IP is priced? Professors Christopher Buccafusco (Chicago-Kent) and Christopher Sprigman (Virginia) provide a novel experimental insight on this question in Valuing Intellectual Property: An Experiment, which was published in the November 2010 issue of the Cornell Law Review. I have been thinking recently about the prices that are set for intellectual property because of Amy Kapczynski's project on theorizing the costs of these prices, which she presented at Yale Law School today (and which I'll blog about in more detail once a draft of her paper is posted). But while Kapczynski provides the first thorough critique of price itself (challenging the premise that IP should be priced whenever transaction costs are low), Valuing Intellectual Property contributes to the second-order critiques that emphasize the particularly high transaction costs in the IP context.

Buccafusco and Sprigman conducted a clever experiment to study how people set prices for creative works by creating a market for 10 poems in a $50 poetry contest. They found that the "authors" who wrote the poems and the "owners" who were told they owned one of the poems were only willing to sell their poem (and the corresponding chance of winning the contest) for over $20, while "bidders" would only pay around $10 to buy a poem. This effect was the same whether the participants could see all 10 poems or not. And even when the "contest" became a random lottery (so that each poem had a 1 in 10 chance of winning), authors and owners would only sell for over $15, while bidders would only pay around $5 (the actual expected value).

This experiment is the first demonstration of the endowment effect (where people value something they own more than an equivalent thing they don't) for non-rival, created goods. Even though the authors were reminded that they would get to keep their poems (which would be emailed to them), they still priced the poems' values as contest entries more highly than the bidders did. The implication for IP is that the deadweight loss caused by copyrights and patents may be even larger than previously expected. Although the endowment effect is well established in behavioral law and economics, these inefficiencies are particularly troubling in IP, where the marginal cost of information is zero. Buccafusco and Sprigman argue that their results suggest market failures in licensing IP, which supports the use of liability rules over property rules and a more expansive fair use doctrine. More broadly, their results demonstrate another problem with using price as a signal of value for information goods.

Thursday, February 3, 2011

Abramowicz & Duffy: Inducement Standard

While reading patent article submissions for the Yale Law Journal, I was delighted to come across The Inducement Standard of Patentability by Professors Michael Abramowicz and John Duffy (both at GW Law). This article will be published in YLJ this spring (making two patent articles in one volume!), and a draft is available on SSRN.

Abramowicz and Duffy make two novel contributions to the old economic idea that patents should only be granted for inventions that would not be created and disclosed absent the inducement of a patent. First, they argue that this "inducement standard," which was mentioned by the Supreme Court in Graham, can and should be read into § 103, giving nonobviousness an economic (rather than cognitive) definition. Although they are probably stretching the Graham quote beyond its original intent, the authors explain that "'obvious' can mean not merely 'easily understood' but also 'easily discovered.'" Because this reading is both plausible and normatively desirable (for minimizing deadweight losses), the article argues that the Supreme Court should update its interpretation of § 103 (which was enacted to codify the common law of nonobviousness), just as it has read economic analysis into the Sherman Act.

Providing a legal justification for the inducement standard is already a significant achievement, but Abramowicz and Duffy make a second contribution by refining the inducement standard to consider the dynamism of innovation and the prospect theory of patents (Part II) and by exploring how the standard could be administered (Part III). Their economic analysis is more sophisticated than previous treatments of the topic, and I hope it will rejuvenate this line of scholarship. My biggest question about the article is whether the inducement standard really is administrable. It seems no less administrable than the current nonobviousness standard, but I fear that this concern will deter courts from experimenting with the theory. Still, the article is one of the most interesting contributions to the patent literature that I have read in the past year; I highly recommend it.

Wednesday, February 2, 2011

Peter Lee: Patent Law and the Two Cultures

I'll start my patent scholarship blog with an article I helped edit for the Yale Law Journal this past fall: Patent Law and the Two Cultures by Professor Peter Lee (at U.C. Davis Law). The article notes that decisionmakers often rely on heuristics and deference to experts when confronted with technical complexity, which "raise[s] the provocative question of whether the 'cognitive miser' model is reflected in the patent system," where lay judges often are faced with complex patent disputes (Part II). Lee then describes the formalism of the Federal Circuit in terms of heuristics that lower information costs (Part III), and he sees the "holistic turn" of the Supreme Court as producing "'information consuming' standards [that] will increase technological engagement and attendant cognitive burdens for district judges" (Part IV). After laying out this descriptive theory, Lee offers a prescriptive suggestion: the Supreme Court should recognize the costliness of its holistic standards and should make rules that are "enabled" (borrowing a principle from patent law) such that lay district judges actually can apply them (Part V).

The descriptive insights in Patent Law and the Two Cultures are interesting and compelling. The prescriptive proposal, however, could be developed further. Lee is not the first scholar to argue that the Supreme Court should make its tests more workable, and it is not obvious what distinguishes patent law from other complex areas of adjudication. But Lee does provide examples, such as from Graham and eBay, of areas where the Court has provided somewhat more concrete guidance, and he argues that the Court should make more use of illustrative examples. He also does a nice job responding to potential counterarguments to this proposal. Although somewhat long, the piece is beautifully written and carefully footnoted; Lee is careful to acknowledge where his ideas fit into prior scholarship in the field. Even generalist readers (or lay judges) should be able to enjoy this piece.

Tuesday, February 1, 2011

Another Patent Blog?

There are already many patent law blogs reporting and analyzing the latest patent news, so there is little need for another blog about recent Federal Circuit cases or announcements from the USPTO. But it seems much harder to find information about recent academic scholarship about patent law or broader IP theory. The Jotwell IP section occasionally has detailed reviews of IP articles, but with only eight posts in 2010 (only two of which were specifically about patents), they miss a lot of great pieces.

I read a lot of patent scholarship, and I plan to link to interesting articles here, along with my brief reactions. This blog will probably be of most interest to patent law profs and students, but I also welcome practitioner and other perspectives. If you have suggestions of recent patent articles I should read or other comments about this project, feel free to email me at lisa.ouellette@aya.yale.edu. Thanks for reading!