As Chris Sprigman explained in a 2011 Jotwell post, laboratory experiments are largely missing from the legal academy, but they shouldn't be. Experiments can be used to test theories and tease apart effects that can't be measured in the real world. They can explode old hypotheses and generate new ones. Chris Sprigman, Chris Buccafusco, and various coauthors have been among those remedying the dearth of experimental work in IP law; e.g., I've previously blogged about a clever study by the Chrises of how people price creative works. (For more on the benefits and drawbacks of work like this, and citations to many other studies, see my Patent Experimentalism article starting at p. 87.)
Most recently, Chris and Chris have teamed up with Stefan Bechtold for a new project, Innovation Heuristics: Experiments on Sequential Creativity in Intellectual Property, which presents results from four new experiments on cumulative innovation/creation that "suggest that creators do not consistently behave the way that economic analysis assumes." (This should not be surprising to those following the behavioral law and economics literature. Or to anyone who lives in the real world.) I briefly summarize their results below.
Sensitivity to the Costs of Borrowing and Innovating
Experiment 1 involved a combinatorial optimization problem in which subjects had 90 seconds to load a covered wagon with items of different value without going over a weight threshold (reminiscent of the start of the Oregon Trail game). Subjects were told they would receive a specified bonus for using two or fewer items from an earlier player's attempt; i.e., for "innovating" rather than "borrowing." Although one would expect that more subjects would be willing to innovate as the bonus increased, the percentage of innovating subjects was remarkably unchanged over a wide variation in bonus points (see Figs. 2 & 3 in their paper).
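The wagon-loading task is essentially a small 0/1 knapsack problem. As a rough illustration (the item names, weights, values, and capacity below are made up, not the experiment's actual parameters), the optimal load can be found by brute force over all subsets:

```python
from itertools import combinations

# Hypothetical items as (name, weight, value); the experiment's real
# parameters aren't reproduced here.
ITEMS = [("rifle", 10, 30), ("flour", 25, 40), ("tools", 15, 35),
         ("cloth", 20, 25), ("medicine", 5, 20)]

def best_load(items, capacity):
    """Brute-force the optimal wagon load: the highest-value subset of
    items whose total weight stays at or under the capacity."""
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for _, w, _ in subset)
            value = sum(v for _, _, v in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

value, load = best_load(ITEMS, capacity=50)
print(value, [name for name, _, _ in load])
```

With only a handful of items, brute force is trivial for a computer; it's the 90-second clock, not computational complexity, that makes the task hard for human subjects.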
Experiment 2 involved a Scrabble task in which participants were shown the letters Z K A E Y P (with varying points per letter) and were asked to make six words of the highest value in 90 seconds. As in experiment 1, they could use a prior subject's list (zek, peak, pea, zap, key, aye) or could "innovate" (borrow two or fewer words) for a bonus. Subjects were more responsive to bonus size than in experiment 1, but results were far from what a rational actor model would predict (see Fig. 6).
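To make the scoring concrete, here is a sketch of how such a word list would be valued (the per-letter point values below are hypothetical; the paper varies them across conditions):

```python
# Hypothetical per-letter point values for the six letters shown to
# subjects; the actual values varied in the experiment.
POINTS = {"z": 10, "k": 5, "a": 1, "e": 1, "y": 4, "p": 3}

def word_score(word):
    """Score a word as the sum of its letters' point values."""
    return sum(POINTS[c] for c in word)

# The prior subject's list from the post; using two or fewer of these
# words counts as "innovating" and earns the bonus.
prior_list = ["zek", "peak", "pea", "zap", "key", "aye"]
scores = {w: word_score(w) for w in prior_list}
total = sum(scores.values())
print(scores, total)
```

Under these made-up values, the high-scoring words are the ones using Z, which is exactly the kind of asymmetry that matters for the borrowing heuristic discussed in experiment 4.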
Sensitivity to the Quality of Existing Ideas
Experiment 3 used the same wagon-loading task as experiment 1, except the bonus was fixed and what varied was the strength of the existing submission subjects could borrow from: it scored 60%, 80%, or 100% of the best possible submission. They found some sensitivity to the quality of the existing submission, but not as great as one might have predicted. (They also have a nice discussion of the perceived difficulty of innovating, and why the results are not necessarily "irrational" even if they don't seem optimal.)
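On the rational-actor baseline the authors are testing against, the innovate-or-borrow decision is just a payoff comparison, so willingness to innovate should fall sharply as the borrowable submission's quality rises. A minimal sketch, with entirely hypothetical scores and bonus:

```python
def rational_choice(borrow_score, expected_own_score, bonus):
    """Rational-actor baseline: innovate exactly when the expected
    payoff from innovating (own score plus the bonus) beats simply
    borrowing the existing submission."""
    return "innovate" if expected_own_score + bonus > borrow_score else "borrow"

# Hypothetical numbers: normalize the best possible score to 100 and
# suppose a player expects to score 75 on their own, with a bonus of 10.
for quality in (0.60, 0.80, 1.00):
    choice = rational_choice(borrow_score=quality * 100,
                             expected_own_score=75, bonus=10)
    print(f"{quality:.0%} condition -> {choice}")
```

Under these assumed numbers the model predicts innovating in the 60% and 80% conditions but borrowing in the 100% condition, which is roughly the opposite of what experiment 4 found.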
Experiment 4 then varied the strength of the existing submission for the Scrabble game (with the submission scoring 60%, 80%, or 100% of the maximum points possible). Oddly, in this case, 47% and 41% of subjects innovated in the 60% and 80% conditions, but in the 100% condition—when no improvement was possible—a full 86% of players chose to innovate! The authors speculate that this resulted from an unreliable heuristic: players "likely assessed how easily they could come up with words that did not borrow" when choosing to innovate, which was easier with the 100% list because it used unfamiliar words (zek, peaky, zap, zep, kype, zea), leaving more familiar words free.
Conclusion and Caveats
As the authors candidly acknowledge, there are two major limitations on how generalizable these results are. First, the subjects were recruited from Amazon Mechanical Turk rather than from a population of real creators and innovators. (If you haven't seen the PBS NewsHour investigation of mTurkers, it's a great read.) Second, the Oregon Trail task and the Scrabble task are quite different from the kinds of scientific and artistic creativity that are typically the subject of patent and copyright disputes. But there are also real advantages to being able to run these experiments in a quasi-controlled setting. Laboratory experiments like this should not be viewed as a substitute for real-world experiments (including real-world policy randomization!) or efforts to learn from natural experiments and other empiricism. They're an important complement, however, and I'm glad Chris and Chris (and now Stefan) keep doing it.