Notes from September’s CopyNight for Washington, DC
The topic for September’s CopyNight here in DC was the Google Book settlement. We had a decent turnout, not as large as the couple of summer gatherings but respectable. The discussion was excellent, covering many aspects of this story that hadn’t occurred to me, even as closely as I have been following it for my blog and my podcast.
We started by discussing the impact of the DoJ’s letter on the settlement. It is important to note this was just a letter, not a ruling. Some sort of broader antitrust investigation may be underway behind the scenes that prompted the letter. The net effect of the input from the DoJ may be to strip the settlement largely back to its original contours. It ultimately may have been sparked by the Copyright Office, though, as the bulk of the letter was consistent with a Copyright Office hearing from some time back. That the case brought against Google was a class action may have also added pressure on the DoJ to comment.
The unfortunate consequence of a scaling back is that libraries may lose out. The settlement would have set up considerable public access resources, which the American Library Association favored. The ALA would have preferred greater government oversight than the settlement initially called for, but it’s a tough compromise to think through. Assessing the risks and costs of such oversight, in particular how they might have limited access, is difficult at best.
The impact on orphan works may be a little easier to appreciate. While the settlement wouldn’t have given perpetual consideration to future works (it was limited to works published through January of this year), scaling back will cost us a useful registry for out-of-print, hard-to-attribute works. Adam Marcus clarified that under the original settlement, that registry may not have been as closed as has been represented. The board was supposed to be open. The sticking point, though, is that the registry would have been one of a kind: no other attempt to scan out-of-print and orphan works would have gotten a leg up in terms of protections or allowances, despite the potential further public good.
The conversation then turned briefly to patents. There was some speculation about possible chilling effects on further development of OCR technologies, more specifically, I think, physical systems to make book scanning more cost effective. There was of course mention of one of my favorite projects, reCAPTCHA. Luis von Ahn was at least co-inventor of the original CAPTCHA and no doubt has some interesting IP bound up in his latest venture that directly impacts the field of book scanning. We wondered what further implications Google’s acquisition of reCAPTCHA may have other than to beef up their internal spam fighting efforts.
A couple of folks weighed in at this point with some predictions and observations about the possible ultimate outcome. Tim Vollmer of the ALA worried about the settlement being reduced to the least/worst of what we’ve seen so far. Gavin Baker, a regular with a background in open access in academia, commented that most of the NGOs currently support what we’ve seen of the stripped back, amended settlement. The only holdouts, notably, are commercial outfits that may fear Google Books becoming a toehold into the traditional publishing space.
The discussion moved on to orphan works, trying to understand why reform has moved so slowly. The degree of stalling seems to vary by medium, photography being perhaps the most contested case. This may be a consequence of the difficulty of consistently carrying attribution. Digital photography may deal with this issue better than printed photographs, but it is still trivial to destroy, even inadvertently, the metadata carrying proper attribution of a work. Gavin seemed to think the scope of the orphan works problem may have been worth setting up Google as a benevolent dictator of a central registry, assuming their remit could be kept exclusively to identifying, registering, and mediating orphan works.
Things took a more philosophical turn as we explored a tangent around reform more generally. It was noted that legislation is almost entirely an additive process; rarely are laws removed from the books to address the need for more suitable compromises. Someone, I believe either Adam or Kat Walsh, mentioned a recent Cato Institute event whose topic was the criminalization of everything. The idea seemed consistent with the solely additive nature of law making.
Gavin asked the group why the suit was pursued as a class action rather than some other kind of complaint. He offered his own theory: basing it on a class was a form of preemption. He suggested it might actually be a carrot, in that if Google would settle, the terms would carry farther with a class than with an individual action. The implied threat is that without a class, Google would remain open to a potentially unending string of individual actions.
We closed with another tangent, delving into why copyright is viewed and expressed differently across multiple types of media. The consensus was that this is a consequence of the norms and expectations arising from the introduction and adoption of each subsequent new form of media, rather than anything inherent in each distinct medium. It is tempting, almost a logical trap, to think there are inherent qualities of media that naturally lead to different legal considerations. Law is made without any such notion, though. Just ponder for a moment the average technical literacy of your typical Congress critter and you’ll understand why that is.
I have notes from the October CopyNight, too, and should be getting those posted soon. Hopefully sooner than it took to get these notes out.