Guest Post by Ronald Mann, Professor of Law at Columbia Law School
There's a lot of press lately for PatQual, the EC's massive "Study on the Quality of the Patent System in Europe." To read the press clippings, you'd think the European Commission had produced the definitive work on how to assess the output of government patent offices. But a closer look suggests this study falls far short of the standards to which academics and serious policy analysts aspire.
To start with the most obvious problem, the standards of the data analysis are far below what would pass in this country, much less in the quantitatively sophisticated venues of European universities. The 194-page report includes more than 100 figures and tables, with detailed and intricate cross-tabulations that slice numerous survey responses across metrics like the business sector of the respondent, the size of the company's patent portfolio, or the size of the company. Yet not a single one of those graphics mentions the number of companies responding to any particular inquiry, and only once in the entire tome do the authors mention that a mere 221 companies and 98 universities responded. The small n means that the bulk of the report describes "important" findings about the opinions of important sectors of the EU based on the opinions of a handful of companies.
To give one of the starker examples, Table 4 discusses answers about the relative importance of metrics of patent quality by firm size. The column for medium firms (between 50 and 250 employees) reflects the responses of fewer than 25 companies – out of presumably tens of thousands of such firms across the EU. Given the thin description of the sample in the report, it is entirely possible that all of those firms are located in a single country, or even a single city.
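To put the sample-size problem in perspective, the back-of-the-envelope margin of error for a survey proportion can be computed directly. This is a minimal sketch: the respondent counts come from the report's lone disclosure, and the worst-case assumption of a 50/50 split (which maximizes the error) is mine, not the study's.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a survey proportion.

    p=0.5 maximizes p*(1-p), giving the widest (most conservative)
    interval; z=1.96 is the standard normal 95% critical value.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Roughly 25 medium-firm respondents (the Table 4 column):
print(f"n=25:  +/- {margin_of_error(25):.1%}")
# All 221 company respondents combined:
print(f"n=221: +/- {margin_of_error(221):.1%}")
```

With 25 respondents the margin of error is nearly 20 percentage points either way, so almost any cross-tabulated percentage in that column is statistically indistinguishable from a wide range of alternatives; even the full company sample of 221 carries a margin of roughly 6–7 points.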
For my own purposes, the most important subjects that the report addresses are the comparative quality of the EU and US systems, questions that are particularly important now as the US and EU offices attempt to establish work-sharing arrangements. To its credit, the report does a good job of discussing the multi-faceted nature of patent quality from a user's perspective, concluding that it includes some combination of timeliness, reliable validity, and cost effectiveness. And it makes some sense to use a survey to analyze the relative importance different users attribute to those different quality metrics. Even there, though, we would expect some attention to sample size and selection bias (the respondents were identified primarily through trade associations, which presumably have their own particular axes to grind on the issues that the report addresses).
But it makes no sense to use surveys to compare how the different systems are doing on those metrics. So, for example, the report goes out of its way to emphasize how poorly the U.S. office is doing on timeliness as compared to the EU – 81% of respondents rate the EU well on timeliness but only 51% rate the US well. But why should we care about a survey on timeliness when actual data are available? Although reasonable minds could differ on exactly what the right metric for timeliness is, how comparable the two systems are, and what the optimal pendency time would be, it is easy to obtain reliable quantitative data on the existing situation. And the data show pretty definitively that however concerned the PTO and US patentees are about pendency time in the US, it is even longer in the EU (almost 50% longer, in fact).
So what do we get from a survey purporting to show that the EU is doing much better on timeliness? We learn that if you ask people in the EU, they think their system is better than ours. Presumably if you asked patentees in the US, where Dave Kappos has made his attack on the backlog a highly visible affair, you'd get the same answer.
But that only shows the limits of survey evidence. It is just as easy to produce a poorly designed survey that "proves" things that are demonstrably false as it is to produce a poor case study that emphasizes poorly chosen and unrepresentative anecdotes. As empirical scholars, we have a duty to ensure that the data on which policymakers base important choices are as reliable as possible. And the PatQual study, sad to say, surely fails any reasonable baseline of reliability.