The Impact of 101 on Patent Prosecution

Guest Post By: Colleen Chien, Professor, Santa Clara University Law School

Over the last several years, the USPTO has continued to release high-quality data about the patent system. This is the first of a series of posts by Professor Chien based on insights developed from that data. (The accompanying Patently-O Law Journal paper includes additional views and methodological notes, and links to queries for replication of the analysis by Chien and Jiun-Ying Wu, a 3L at Santa Clara Law.)

“Everything should be made as simple as possible,
but not simpler” — Albert Einstein

A few weeks ago, USPTO Director Andrei Iancu announced progress on new guidance to clarify 101 subject matter eligibility by categorizing exceptions to the law. The new guidance will likely be welcomed by prominent groups in the IP community,[1] academia,[2] and the majority of commenters on the 2017 USPTO Patentable Subject Matter report[3] that have called for an overhaul of the Supreme Court’s “two-step test.” Behind these calls are at least two concerns: that the two-step test (1) has stripped protection from meritorious inventions, particularly in medical diagnostics, and (2) is too indeterminate to be implemented predictably. To Director Iancu’s laudable mission of producing reliable, clear, and certain property rights through the patent system, 101 appears to pose a threat.

Last November, the USPTO released the Office Action Dataset, a treasure trove of data about 4.4 million office actions mailed from 2008 through July 2017 related to 2.2 million unique patent applications. This release was made possible by the USPTO Digital Services & Big Data (DSBD) team in collaboration with the USPTO Office of the Chief Economist (OCE), and is one of a series of open patent data and tool releases since 2012 that have seeded well over a hundred companies and laid the foundation for an in-depth, comprehensive understanding of the US patent system. The data on 101 is particularly rich in detail, breaking out 101 subject matter rejections from other types of 101 rejections and coding references to Alice, Bilski, Mayo, and Myriad.
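For readers who want to poke at the release themselves, the flavor of the queries involved is easy to convey. The sketch below is illustrative only – it is not one of the replication queries linked in the accompanying paper – and it assumes the public BigQuery copy of the dataset (patents-public-data.uspto_oce_office_actions) along with field names (mail_dt, rejection_101) taken from the release documentation; all of these should be verified against the current schema. Note also that the rejection_101 flag covers 101 rejections of all types, not just subject matter.

```python
# Illustrative sketch (not the paper's replication queries): monthly share
# of office actions containing a 101 rejection of any type. The table and
# field names are assumed from the OCE release documentation.
from google.cloud import bigquery

client = bigquery.Client()  # requires Google Cloud credentials

QUERY = """
SELECT
  SUBSTR(CAST(mail_dt AS STRING), 1, 7) AS mail_month,  -- YYYY-MM
  COUNT(*) AS office_actions,
  COUNTIF(SAFE_CAST(rejection_101 AS INT64) = 1) AS with_101
FROM `patents-public-data.uspto_oce_office_actions.office_actions`
GROUP BY mail_month
ORDER BY mail_month
"""

for row in client.query(QUERY).result():
    print(f"{row.mail_month}: {row.with_101 / row.office_actions:5.1%} "
          f"of {row.office_actions:,} office actions")
```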

With the help of Google’s BigQuery tool and public patents ecosystem,[4] which made it possible to implement queries with ease, research assistant Jiun-Ying Wu and I looked over several months for evidence that the two-step test had transformed patent prosecution. We did not find it, because, as the PTO report notes, a relatively small share of office actions – 11% – actually contain 101 rejections.[5] However, once we disaggregated the data into classes and subclasses,[6] created a grouping of the TC3600 art units responsible for examining software and business methods (art units 362X, 3661, 3664, 368X, 369X),[7] which we dub “36BM,”[8] borrowed a CPC-based identification strategy for medical diagnostic (“MedDx”) technologies,[9] and developed new metrics to track the footprint of 101 subject matter rejections, we could better see the overall impact of the two-step test on patent prosecution. (As a robustness check against the phenomenon of “TC3600 avoidance,” described and explored in the accompanying Patently-O Law Journal article, we regenerated this graph by CPC-delineated technology sector, which is harder to game than art unit, finding the decline in 101 more evenly spread.)
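To make the 36BM grouping concrete, the sketch below simply encodes the art unit list above as a regular expression. Again, this is an illustration rather than one of the queries linked in the accompanying paper, and the art_unit field name is an assumption drawn from the release documentation.

```python
# Illustrative sketch: monthly 101 rejection rate within the "36BM"
# grouping (art units 362X, 3661, 3664, 368X, 369X); the art_unit field
# name is assumed from the release documentation.
from google.cloud import bigquery

client = bigquery.Client()

AU_36BM = r"^(362\d|3661|3664|368\d|369\d)$"

QUERY = f"""
SELECT
  SUBSTR(CAST(mail_dt AS STRING), 1, 7) AS mail_month,
  COUNTIF(SAFE_CAST(rejection_101 AS INT64) = 1) AS with_101,
  COUNT(*) AS office_actions
FROM `patents-public-data.uspto_oce_office_actions.office_actions`
WHERE REGEXP_CONTAINS(CAST(art_unit AS STRING), r'{AU_36BM}')
GROUP BY mail_month
ORDER BY mail_month
"""

for row in client.query(QUERY).result():
    print(f"{row.mail_month}: {row.with_101 / row.office_actions:5.1%}")
```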

Mayo v. Prometheus, decided in March 2012, and Alice v. CLS Bank, decided in June 2014, elicited the strongest reactions. The data suggest that the uptick in 101 subject matter rejections following these cases was acute and discernible among impacted art units, as measured by two metrics: the overall rejection rate and the “pre-abandonment rate” – among abandoned applications, the prevalence of 101 subject matter rejections within the last office action prior to abandonment.
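Of the two metrics, the pre-abandonment rate is the less familiar, so a sketch may help. Abandonment status is not part of the office action file itself; the my_dataset.abandoned_apps table below is a hypothetical stand-in for a list of abandoned application IDs (derivable from the USPTO’s Patent Examination Research Dataset), and the remaining names are again assumptions taken from the release documentation.

```python
# Illustrative sketch of the "pre-abandonment rate": among abandoned
# applications, the share whose last office action contained a 101
# rejection. my_dataset.abandoned_apps is a hypothetical staging table
# of abandoned application IDs; other names are assumed from the
# release documentation.
from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
WITH last_actions AS (
  SELECT
    app_id,
    ARRAY_AGG(oa ORDER BY mail_dt DESC LIMIT 1)[OFFSET(0)] AS last_oa
  FROM `patents-public-data.uspto_oce_office_actions.office_actions` AS oa
  GROUP BY app_id
)
SELECT
  COUNT(*) AS abandoned_with_actions,
  COUNTIF(SAFE_CAST(la.last_oa.rejection_101 AS INT64) = 1) AS last_oa_101
FROM last_actions AS la
JOIN `my_dataset.abandoned_apps` AS ab  -- hypothetical status table
  ON la.app_id = ab.app_id
"""

row = list(client.query(QUERY).result())[0]
print(f"pre-abandonment 101 rate: "
      f"{row.last_oa_101 / row.abandoned_with_actions:.1%}")
```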

Within impacted classes of TC3600 (“36BM”), represented by the top blue line, the 101 rejection rate grew from 25% to 81% in the month after the Alice decision and has remained above 75% almost every month since then. (Fig. 1) In the month of the last available data, among abandoned applications, the prevalence of 101 subject matter rejections in the last office action was around 85%. (Fig. 2)

Among medical diagnostic (“MedDx”) applications, represented by the top red line, the 101 rejection rate grew from 7% to 32% in the month after the Mayo decision and continued to climb, reaching a high of 64% (Fig. 1) and 78% among final office actions just prior to abandonment (Fig. 2). In the month of the last available data (from early 2017), the prevalence of 101 subject matter rejections among all office actions in this field was 52%, and among office actions before abandonment, 62%. (Fig. 2)

However, outside of these groupings and other impacted art units (see the paper for a longer list), the impact of 101 case law has been more muted. 101 rejections overall (depicted by the thick black line) have grown – rising from 8% in Feb 2012 to 15% in early 2017 (Fig. 1) – but remain exceptional.

On balance, the data confirm that 101 is playing an increasingly important role in the examination of software and medical diagnostics patents. More than four years after the Alice decision, the role of subject matter does not appear to be receding; it remains an issue in a large share of cases not only at their outset but, among applications that go abandoned, through the last office action. That patentees cannot tell before they file whether their invention will be considered patent-eligible, and perceive that much depends not on the merits of the case but on the art unit in which the application is placed, also presents a challenge to the goal of predictability in the patent system.

It is also the case that the vast majority of inventions examined by the Office are not significantly impacted by 101. Even when an office action does address subject matter, rejections and amendments on 101 subject matter are often cursory on the record, in contrast with, for example, novelty and nonobviousness discussions.

What do the data teach us, and what directions for policy might they suggest? I save this topic, as well as the impact of USPTO guidance on prosecution and some data issues left unexplored here, for the next post, as data gathering continues.

In the meantime, the USPTO continues to move forward on revised examiner guidance. As it does, it may want to decide which metrics matter most – the overall prevalence of 101, 101 in pre-abandonment phases, or others – and how it hopes those metrics will change as a result of its revised guidance. The USPTO should also consider keeping the office action data up to date. Right now, high-quality data stops around February 2017,[10] with no plans to update it of which I am aware (my subsequent FOIA request for updates was denied). That leaves a gap in our ability to monitor and understand the impact of various interventions as they change over time – certainly not a unique phenomenon in the policy world, but one that is fixable by the USPTO with adequate resources. Still, it is thanks to the USPTO’s data release that this and other analyses of the impact of the two-step test are even possible.

Thanks to Jonah Probell and Jennifer Johnson for comments on an earlier draft and Ian Wetherbee for checking the SQL queries used to generate the graphs. Comments welcome at cchien@scu.edu.

= = = = =

[1] The AIPLA has proposed a “clean break from the existing judicial exceptions to eligibility by creating a new framework with clearly defined statutory exceptions”; the IPO has suggested replacing the Supreme Court’s prohibition on the patenting of abstract ideas, physical phenomena, and laws of nature with a new statutory clause, 101(b), to be entitled “Sole Exception to Subject Matter Patentability.”

[2] https://patentlyo.com/patent/2017/10/legislative-berkeley-workshop.html.

[3] Patent Eligible Subject Matter: Report on Views and Recommendations From the Public, USPTO (Jul. 2017).

[4] Google Public Patents is a worldwide patent bibliographic database linked with other datasets described more fully in the accompanying Patently-O Law Journal article.

[5] Office Action Dataset at 2 (also mentioning that “101 rejections” include subject matter eligibility, statutory double patenting, utility, and inventorship rejections).

[6] Using Google Patents Public Data, by IFI CLAIMS Patent Services and Google, used under CC BY 4.0; the Patent Examination Data System, by the USPTO, for public use; and the Patent Examination Research Dataset, by the USPTO, for public use (Graham, S., Marco, A., and Miller, A. (2015), described in “The USPTO Patent Examination Research Dataset: A Window on the Process of Patent Examination”).

[7] As detailed in the accompanying Patently-O Law Journal paper, available at http://ssrn.com/abstract=3267742.

[8] We put the remainder of the TC3600 art units into the category “TC36 Other”; however, because many months contained insufficient data (fewer than 50 office actions), we did not include that category in the figures.

[9] Coded using a Cooperative Patent Classification (CPC)-based methodology developed in Colleen Chien and Arti Rai, An Empirical Analysis of Diagnostic Patenting Post-Mayo, forthcoming (defining medical diagnostic inventions by use of any of the following CPC codes: C12Q1/6883; C12Q1/6886; G01N33/569; G01N33/571; G01N33/574; C12Q2600/106).

[10] The later months of 2017 have insufficient counts for research purposes.

72 thoughts on “The Impact of 101 on Patent Prosecution”

  1. 15

    In the legal world, prosecution generally refers to the plaintiff’s side of litigation. However, patent prosecution is the process of writing and filing a patent application and pursuing protection for the patent application with the patent office. Patent prosecution is very different from litigation, so the use of the term is often confusing to people not familiar with patent lingo. If you are looking for a lawyer to sue another party for violating your patent rights, you are looking for a patent litigator.

  2. 14

    The following comments that I posted on a different thread (from 4 October) appear to me to be relevant to the subject matter of this thread … at least to the extent that they raise the question of whether current practice on 35 USC 101 means that the US patent system is not fit for purpose.

    Aside from the issue of how 35 USC 101 should be interpreted, it is worth posing a slightly different question:
    “Is it a good idea to allow patent eligibility to vary over time?”

    Put another way, this question essentially asks whether a patent system that allows for such variability is fit for purpose, in terms of providing predictable outcomes for all parties (patent applicants, patentees, competitors and the public) that are based upon robust logic.

    My view is that the answer to this question is “no”.

    In all other patent systems, the question of eligibility is simply used as a filter to weed out applications relating either to abstract concepts or to subject matter deemed “undesirable” (such as any of the categories covered by TRIPS Art. 27.2 or 27.3). If an application passes the eligibility filter, then it is subjected to an array of tests (for novelty, inventive step, sufficiency, etc.) that are time-variable, i.e. that depend upon prior publications and actions, as well as the (common) general knowledge attributable to those skilled in the art.

    Making eligibility time-variable blurs boundaries. That is, it imports into the test for eligibility concepts that, in reality, have more to do with novelty and inventive step. It also makes it impossible to construct a robust chain of logic that allows eligibility to be time-variable yet retains clear distinctions between ineligibility and lack of novelty, inventive step, etc. The lack of a robust chain of logic then makes the patent system highly unpredictable … to the detriment of everyone (other than lawyers). Indeed, the current controversies over the precise boundaries of patent eligibility (especially relating to methods of medical diagnosis or treatment) provide a perfect illustration of the kind of unpredictability that will arise.

    Another aspect to this is that using a time-variable concept of eligibility can lead to (and has in fact led to) refusal / revocation of patents directed towards subject matter that would otherwise meet the requirements of patentability. To me, that looks very much like a violation of the obligations of the US under TRIPS.

    1. 14.1

      Is it a good idea to allow patent eligibility to vary over time?

      With the exception of only the most subtle of nuances, eligibility does NOT vary over time.

      If you think it does, then you have fallen victim to the conflation of 101 with 102/103, which shows an attempt to make the Act of 1952 “go away.”

      1. 14.1.1

        Erm, was that not the main focus of HP’s petition to the Supreme Court? link to patentlyo.com

        You may be correct that a proper interpretation of 101 leads to the conclusion that it does not vary with time. However, it is far from clear whether rulings of the US courts are consistent with such an interpretation.

        1. 14.1.1.1

          However, it is far from clear whether rulings of the US courts are consistent with such an interpretation.

          It is more than clear that the US courts’ “interpretation” (which in reality is actually NOT an interpretation, but rather a re-writing) is not consistent.

          But that only shows that the Court has mucked up 101. I am talking about a proper view of 101 as written by Congress (in relation to responding to you at 14.1).

          As to your link, thanks – here’s a link back; link to patentlyo.com

    2. 14.2

      refusal / revocation of patents directed towards subject matter that would otherwise meet the requirements of patentability. To me, that looks very much like a violation of the obligations of the US under TRIPS.

      Can you flesh this out a little?

      I do not think that the US has an obligation under TRIPS for this “otherwise meet the requirements of patentability.”

      It is a different question as to whether or not eligibility has been properly elucidated. But — taking it for argument’s sake that something is deemed not properly eligible — TRIPS cannot create an obligation under your “otherwise” prong.

      1. 14.2.1

        Yes. See TRIPS Art. 27.1, which indicates that, subject to certain provisos, patents SHALL be available for ANY invention in the Member States.

        Thus, each signatory to TRIPS has an obligation under international law to make sure that patents are available under their national law for any invention that is not excluded from patentability by any of the provisos from TRIPS that have been incorporated into the national law in question.

        1. 14.2.1.1

          TRIPS Art. 27.1 is not tripped in the instance in which eligibility is not met.

          This is because “invention” itself only rests as having meaning if eligibility is first met.

          Notably, TRIPS and US are not co-extensive – notwithstanding a particular footnote.

          TRIPS does use the qualifier for “inventions” to be (continuing in Art. 27.1): “whether products or processes, in all fields of technology, provided that they are new, involve an inventive step and are capable of industrial application.”

          This clause carries a footnote with it: For the purposes of this Article, the terms “inventive step” and “capable of industrial application” may [note the permissive wording] be deemed by a Member to be synonymous with the terms “non-obvious” and “useful” respectively.

          My view (and I am open to correction from a bona fide transnational law expert) is that Art. 27.1’s “invention” first has to be reached based on the Member’s view (of that Member’s own national law). This is in addition to the other provisos. This means – for the sake of argument – that the rewritten law of the Supreme Court is what sets the obligation, and thus, what the Supreme Court has written cannot violate an obligation which is only set by what the Supreme Court writes.

          FYI, the other three “subject to” provisos are:
          para 4 Article 65 (developing country delay)
          para 8 Article 70 (pharma and agri caveat)
          para 3 Article 27 (Members may exclude certain items not germane here)

          Again, there is no “otherwise” trigger that you can rest upon if we take for argument’s sake that eligibility is not met.

          1. 14.2.1.1.1

            The meaning of “invention” in TRIPS will be interpreted according to the Vienna Convention. Whilst such an interpretation MAY consider current interpretations of national laws, I think that it is fair to say that the interpretation proffered by the US courts is such an “outlier” that it is highly unlikely to influence the interpretation reached following the provisions of the VCLT (which would include consideration of factors such as the terms of TRIPS and the “accepted” meaning of the term “invention” at the time that TRIPS was concluded).

            1. 14.2.1.1.1.1

              The meaning of “invention” in TRIPS will be interpreted according to the Vienna Convention.

              Cite?

              Or do you mean TRIPS Art. 1.3? Perhaps TRIPS Art. 1.2? Neither of which is that illuminating. Or perhaps the opening proviso of TRIPS itself (particularly point (c): “…taking into account differences in national legal systems”)?

              As to your attempt here to state “fair to say…such an ‘outlier’” – all that you are doing is bootstrapping and assuming the conclusion that you need to prove.

              So no, it is NOT “fair to say.”

              Sorry Confused, but my post at 14.2.1.1 — and the nature of individual sovereignty, agreement to TRIPS notwithstanding — is far more clear – and simply rests with what a Sovereign deems the word “invention” to mean.

              I “get” the point that you want to make. I just have not seen you provide any real basis for that point.

              1. 14.2.1.1.1.1.1

                Cite??

                TRIPS is an international treaty. It will therefore be interpreted (by a WTO Dispute Resolution Panel) according to the provisions of the Vienna Convention on the Law of Treaties (VCLT). If you do not already know why, or cannot figure it out, then I doubt that any explanation from me will help you to see the light.

                An interpretation according to the VCLT may take account of post-ratification national laws / interpretations … it is just that they will be very much secondary, tertiary, or even quaternary indicia when it comes to determining the correct interpretation of a provision of a decades-old international treaty.

  3. 13

    rising from 8% in Feb 2012 to 15% in early 2017 (Fig. 1) – but remain exceptional.

    Doubling is not remaining exceptional – especially if one takes the word-of-mouth info that the same aggressiveness for 101 in the more affected areas was going to be rolled out into those “remain exceptional” areas – which in no small part is why you now see the Office trying to reel this back in.

    The law is the law (regardless of art unit), and the objective application of the law is NOT evident (yet). As the Office started moving toward applying the law (as the law) in the same manner to all art units, and the resultant increases in 101 rejections started trickling in, the reaction seen (i.e., patent groups providing drafts of legislative changes) should make one realize that the “Ends”-achieving Means would not limit the carnage to particular art units.

    One may view the thrust of the articles here (not the data, but the comments about the data) as obscuring rather than illuminating that trend of “expanding” just how 101 was applied in the Office (‘expanding’ in quotes because the action of universally applying the same “Means” is not really an expansion, as much as it is merely being consistent in approach).

  4. 12

    (Cross-posted my remarks from the other post on this same topic. I’m really not sure why there are two, honestly.)

    To make office action data much more accessible, the OCE used artificial intelligence methods to extract information from office actions and code each one based on the extent and type of rejection. The initial release of the dataset provided insight into 4.4 million office actions mailed from 2008 through mid-July 2017 for 2.2 million unique patent applications.

    The resulting file was 1.31 GB, too large to be opened and processed by standard spreadsheet software, but through Google’s BigQuery cloud software, it is now possible to access the dataset from a standard laptop…

    …or just, I don’t know, dump it into a database? I’ve got a sqlite database of every patent application published by the PTO, with a full-text index of the specification, claims, etc. – it’s 750 GB and it responds almost instantly to any query. Pretty basic stuff.
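    Something like the following sketch is all it takes (hypothetical file and column names standing in for the actual release files):

    ```python
    # Sketch of "just dump it into a database": load an office action CSV
    # into SQLite and query it locally. The file name and columns (app_id,
    # ifw_number, mail_dt, rejection_101) are hypothetical stand-ins.
    import csv
    import sqlite3

    conn = sqlite3.connect("office_actions.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS office_actions (
            app_id TEXT, ifw_number TEXT, mail_dt TEXT, rejection_101 INTEGER
        )
    """)

    with open("office_actions.csv", newline="", encoding="utf-8") as f:
        rows = ((r["app_id"], r["ifw_number"], r["mail_dt"], int(r["rejection_101"]))
                for r in csv.DictReader(f))
        conn.executemany("INSERT INTO office_actions VALUES (?, ?, ?, ?)", rows)

    conn.execute("CREATE INDEX IF NOT EXISTS idx_mail ON office_actions (mail_dt)")
    conn.commit()

    # Example: monthly share of actions with a 101 rejection.
    for month, share in conn.execute("""
            SELECT substr(mail_dt, 1, 7) AS month, AVG(rejection_101)
            FROM office_actions GROUP BY month ORDER BY month"""):
        print(month, f"{share:.1%}")
    ```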

    Within impacted classes of TC3600 (“36BM”), the 101 rejection rate grew from 25% to 81% in the month after Alice, and remained above 75%… Among medical diagnostic (“MedDx”) applications, the 101 rejection rate grew from 7% to 32% in the month after Mayo and continued to climb to a high of 64% and to 78% among final office actions just prior to abandonment. … However outside of impacted areas, the footprint of 101 remained small, appearing in under 15% of all office actions.

    Well, there’s also been an uptick in TCs 21/24/26/28.

    There needs to be some more decoding done here, because not all § 101 rejections are alike.

    Pre-Alice, I encountered a steady stream of Nuijten rejections with minor quibbles about CRM language. What’s more – it’s common practice to leave that as an open issue until substantive grounds of rejection are addressed, for two reasons: (1) to conserve prosecution history estoppel, and (2) because examiners become more flexible, and open to a broader range of options, when the only obstacles between the application and an allowance are minor formalities.

    Post-Alice, the § 101 rejections are more earnest, and examiners are digging in their heels – including where they really can’t, but where the deferential nature of the PTO’s (former?) tolerance for examiners’ § 101 decisions was emboldening. Sometimes those rejections are presented alongside (still easily-traversable) Nuijten rejections; sometimes not.

    The upshot is that lumping all § 101 rejections together in these aggregate metrics hides an important distinction – and the magnitude of the § 101 uptick.

    Since the release of office action data, we have looked for evidence that the two-step test had transformed patent prosecution as the headlines would suggest. We did not find it, because, as the PTO report notes, a relatively small share of office actions – 11% – actually contain 101 rejections.

    When patents are grouped by WIPO CPC code, rather than PTO AU code, a sustained increase in 101 rejections following Alice can be discerned in four out of the five major technology sectors (except mechanical engineering) but in no month or any sector does the prevalence of 101 rise above 20%.

    Wait, what? It’s difficult to reconcile these two statements. The conclusions vastly understate the significance of the impact.

    My conclusion, looking at the same numbers, is quite different: Alice and Mayo have created a new ground of rejection – one that did not formally exist under Bilski and earlier cases – that is affecting a significant number of applications across 80% of the tech centers, and that is dramatically increasing rejections in certain areas.

    1. 12.1

      The authors definitely should have included a metric for the % of actions with 101 rejections citing Alice or Mayo, as the dataset already includes parameters for that. It would serve as a good lower bound for the % of actions with 101 rejections, excluding signal-per-se and utility rejections.
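      A sketch of how that metric could be computed, assuming the paragraph-level rejections table with the alice_in / mayo_in flags described in the release documentation, and app_id plus ifw_number as the join keys (all assumptions to verify against the current schema):

      ```python
      # Sketch of the suggested metric: share of office actions whose
      # rejection paragraphs cite Alice or Mayo. Table, flag, and join-key
      # names are assumed from the release documentation.
      from google.cloud import bigquery

      client = bigquery.Client()

      QUERY = """
      WITH alice_mayo AS (
        SELECT DISTINCT app_id, ifw_number
        FROM `patents-public-data.uspto_oce_office_actions.rejections`
        WHERE SAFE_CAST(alice_in AS INT64) = 1
           OR SAFE_CAST(mayo_in AS INT64) = 1
      )
      SELECT
        COUNT(*) AS total_actions,
        COUNTIF(am.app_id IS NOT NULL) AS citing_alice_or_mayo
      FROM `patents-public-data.uspto_oce_office_actions.office_actions` AS oa
      LEFT JOIN alice_mayo AS am
        ON oa.app_id = am.app_id AND oa.ifw_number = am.ifw_number
      """

      row = list(client.query(QUERY).result())[0]
      print(f"{row.citing_alice_or_mayo / row.total_actions:.1%} "
            f"of all office actions cite Alice or Mayo")
      ```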

        1. 12.1.1.1

          Nice!

          In the comments you state: “If the data is limited to business method art units (3620’s, 3680’s and 3690’s) then the total post-Alice 101 rejection rate sits at around 85%!”

          Would you happen to remember if that 85% was based on all actions with a 101 rejection or on actions flagged by the Alice parameter?

          1. 12.1.1.1.1

            I am afraid I cannot recall, and I am traveling at the moment and do not have ready access to my database to check.

            However, you would expect in these AUs that the rejections would cite Alice and/or Bilski (other than, perhaps, a very small number of cases that are clearly ineligible under any standard). The data shows that Bilski citations were in decline anyway, and were then largely supplanted by Alice. So it is reasonable to conclude that the overwhelming majority of the rejections would cite Alice, and that there would be little difference between ‘all 101’ and ‘Alice-flagged’.

            For what it’s worth, however, in the context of that comment, my guess is that I queried on all 101 rejections. I usually try to say what I mean, so if I had meant ‘Alice-based rejections’ I probably would have said so.

          2. 12.1.1.1.2

            Think about all the Internet bingo management applications that will never be developed because of the lack of patent protection.

            WE ARE GOING TO LOSE THE ONLINE BINGO MANAGEMENT TECH RACE TO BRAZIL AND IT WILL BE TOO LATE TO DO ANYTHING

  5. 11

    Do these papers suggest anything surprising or remotely noteworthy? Why worry about author biases when the thesis is “water is wet”?

    1. 11.1

      You do not find this surprising?

      [We] looked over several months for evidence that the two-step test had transformed patent prosecution. We did not find it, because, as the PTO report notes, a relatively small share of office actions – 11% – actually contain 101 rejections.

      My sense is that changes to 101 law have substantially affected patent examination. I am surprised to see that this effect does not show up in the data, unless and until one drills down into a fairly fine grained analysis. It is a useful check on my priors.

      1. 11.1.1

        What is obvious for anyone that actually does this work is that 101 has profoundly changed patent prosecution for certain AUs and not really changed much for other AUs.

        Certainly, patents are written differently and the major corporations have all changed their guidelines.

      2. 11.1.2

        Yeah, it is not surprising to me how limited the effects of these cases are.

        Are you aware that medical diagnostics and business methods account for only about 8% of filings? (Based on early 2018 data, when the USPTO dashboard was still breaking out medical diagnostics and business methods separately.)

        So, knowing that, is it still surprising that a relatively small fraction of actions have 101 rejections? Were you expecting to see these rejections in other technologies?

        On a side note, it’s a little weird to say that these cases have not “substantially affected patent examination” if their massive effects are limited to some small areas. The unqualified statement regarding “transform[ing] the patent process” is unfortunate.

        1. 11.1.2.1

          Are you aware that medical diagnostics and business methods account for only about 8% of filings?

          SHOULD BE 90% THEN WE WOULD NOT HAVE TO WAIT ONE HUNDRED YEARS FOR BEST TARGERTED ADS NOT TO MENTION THE MOST ACCURATE INSURANCE FEES YOU PERSONLY DISSERVE EVERY DATA POINT CREATES A JOB

          1. 11.1.2.1.1

            The mix of “Sarah” and other styles actually makes this post of yours Malcolm pretty funny.

        2. 11.1.2.2

          Ben,

          All information processing patents are affected that include what some call software and business methods. As well as the medical diagnostics and other biomedical AUs.

          This accounts for more than 50% of US patents.

          1. 11.1.2.2.1

            “All information processing patents are affected that include what some call software and business methods.”

            What do you mean by “are affected”? In terms of % of applications receiving 101 rejections, Chien’s paper, and David Stein’s link in post 12, strongly suggest that there is a huge difference in how business methods type software is being treated from TC2600 type software.

        3. 11.1.2.3

          Ben it’s a little weird to say that these case have not “substantially affected patent examination” if their massive effects are limited to some small areas.

          This is true.

          But it’s not 1/1000th as weird as saying that because of section 101 the “patent system is being destroyed” or “patents aren’t worth anything anymore” or “nobody has any idea what’s patentable”, which are the sorts of things we hear endlessly from the people who now have to deal with reality (actually, who am I kidding? those people will create a mythology that the database was “cooked” by Google and George Soros before they will deal with reality).

        4. 11.1.2.4

          Ben,

          You are seeing the commotion and action (by patent organizations and by the PTO) BECAUSE the effects were starting to NOT be limited (as the aggressiveness — the Means — of applying 101 was starting to be applied consistently outside of the particular art units more drastically affected).

      3. 11.1.3

        ‘We looked’ is not the same thing as ‘we conducted a full statistical analysis’. The data is quite noisy, for a number of reasons, and it is true that if you just look at it in aggregate, there are no visible effects, distinguishable from the noise, that correlate with the dates of the Mayo/Alice/etc decisions (see, e.g., the first chart in my article here link to blog.patentology.com.au ).

        However, I expect that if you were to perform a proper statistical test (e.g. a Chow test) of the hypothesis that there are structural breaks in the underlying process (i.e. small steps in the data, hidden in the noise) then you would be able to ‘see’ the effect. But in practice it is simpler to just drill down and focus on the 10% of cases that are most obviously affected.
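        For anyone who wants to try it, the mechanics of a Chow test are simple enough to sketch on synthetic data (hypothetical numbers, not the actual dataset):

        ```python
        # Sketch of a Chow test for a structural break: compare the residual
        # sum of squares (RSS) of one pooled OLS line fit against separate
        # fits before and after a candidate break, and form an F-statistic.
        import numpy as np
        from scipy import stats

        def chow_test(x, y, break_idx, k=2):
            """F-statistic and p-value for a break at break_idx (k = params per fit)."""
            def rss(xs, ys):
                coeffs = np.polyfit(xs, ys, 1)       # intercept-and-slope OLS fit
                resid = ys - np.polyval(coeffs, xs)
                return float(resid @ resid)

            rss_pooled = rss(x, y)
            rss_split = rss(x[:break_idx], y[:break_idx]) + rss(x[break_idx:], y[break_idx:])
            n = len(x)
            f_stat = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
            return f_stat, stats.f.sf(f_stat, k, n - 2 * k)

        # Synthetic illustration: a noisy monthly rejection rate with a small
        # step (8% -> 10%) at month 30, i.e. a break hidden in the noise.
        rng = np.random.default_rng(0)
        months = np.arange(60, dtype=float)
        rates = 0.08 + 0.02 * (months >= 30) + rng.normal(0, 0.01, 60)

        f_stat, p_value = chow_test(months, rates, break_idx=30)
        print(f"Chow F = {f_stat:.2f}, p = {p_value:.4f}")
        ```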

  6. 10

    “With the help of Google’s BigQuery tool and public patents ecosystem[4] which made it possible to implement queries with ease”

    At least they’re not trying to hide their bias, but instead incorporate adulation of Google within their paid advocacy piece masquerading as scholarship.

      1. 10.1.2

        Dennis – with all due respect, that is a naive statement. Of course, we can all access Google products – that’s how they make money. The issue is how Google manipulates the data for search results and the actual results. They do rig their system to influence public opinion. Those of us in the industry know what Google does to sway influence and we are seeing their unethical practices called out now little by little, given how aggressive they are. There is a reason why they got rid of their “Don’t Be Evil” mantra. Before MM loses his mind, I am a registered Democrat and voted for Obama twice. We need to get past politics and protect startups from these giants.

        1. 10.1.2.1

          There is a reason why they got rid of their “Don’t Be Evil” mantra.

          Right. So they could generate fake patent statistics which would then be used by researchers and which would ultimately cause the tinfoil hatted patent maximalists to buy even more tinfoil.

          LOL

          And before you lose your mind, I could care less about Google. They could go belly up tomorrow and I could care less. Their patents stink. And their corporate tax rates are way, way too low. Of course, maybe that’s just what they want me to think.

        2. 10.1.2.2

          IPDude, in this case, at least, I can assure you that there is nothing about the use of BigQuery that has had any influence on the results. BQ is just a tool that enables people without the wherewithal to set up their own database systems to conduct queries on datasets. When I did a very similar analysis (many months ago, and not cited by Chien – link to blog.patentology.com.au ), I downloaded the original USPTO data and imported it into my own database, and made essentially identical findings.

          Indeed, other than general animosity towards Chien, I cannot understand why this article is generating so much controversy. The data simply confirms what practitioners already know: that in fields of subject matter affected by Mayo/Alice there has been carnage, whereas everywhere else (i.e. the vast majority of applications) it is business as usual.

      2. 10.1.3

        I should have been clearer with my critique: the issue is not that they used a Google product (totally fine, most people do it all the time), but how they describe it. It comes across as praise of how good Google’s product is. This is especially true if you go to their link from the footnote, which goes into more detail on just how great Google is…

        When the researchers appear to be praising a company with a questionable track record on the subject at hand, that tells me something. I would be quite surprised if the authors were not aware of such a track record. An affirmative disclaimer that Google did not (or did) contribute to their scholarship would have cleared up the matter, but that’s noticeably missing.

        Legal professionals strive to avoid even the appearance of impropriety, which should not need to be pointed out to law professors such as Ms. Chien. Instead of avoiding it, the authors inserted such an appearance of impropriety into the text of their piece.

        1. 10.1.3.1

          OK – makes sense to me. I also would not have been so effusive — especially since almost any DB can easily handle this amount of data.

          Thanks for this reply.

  7. 9

    Search for “Colleen Chien Michele Lee”

    And you will find out all you need to know about who is paying for this research!
    Or whatever comes out of her mouth.

    1. 9.1

      I know. I think Dennis should require disclosures of money before putting articles up on his blog.

      Basically, Dennis is complicit with a deception that Colleen Chien is an academic or that this is academic work.

    2. 9.3

      Oh my goodness, it’s women!!

      Run for the hills! Next thing you know they’ll be voting and then they’ll want to be treated equally. Run!

    3. 9.4

      What do you think that this “evidence” shows? I ran the Google search that you recommend. I did not actually see any direct evidence of connections between Prof. Chien and Ms. Lee (although, even if I had, I am not sure what that would really prove).

      Any search string that you put into Google is going to turn up some hits. You actually have to read the hits to learn what they establish. I am rather wondering if you did bother to read the hits, because there is nothing there to substantiate your innuendo.

    4. 9.5

      One of those two pictures that you include shows Prof. Chien (with two other women, neither of whom is Michelle Lee). The other shows Ms. Lee (with six other women, none of whom are Colleen Chien). Do you think it somehow significant that Michelle Lee and Colleen Chien do not appear in the same photo?

      1. 9.5.2

        Do you think it somehow significant that Michelle Lee and Colleen Chien do not appear in the same photo?

        YES IT MEANS THEY ARE TEH SAME PERSON

      2. 9.5.3

        It is obvious that you did not search: Michelle Lee Colleen Chien

        These are the two people who ruined the US patent system, coordinating from both inside the White House and from the USPTO, by pushing the AIA from inside the White House and implementing the AIA in the worst possible ways to benefit the infringers.

        “Having Michelle as a role model certainly helped as there were few women, particularly women of color, in law firm leadership, and there still are,” says Colleen Chien (BA/BS ’96), a former senior White House technology advisor who is now a Santa Clara University law professor.

        1. 9.5.3.1

          It is obvious that you did not search: Michelle Lee Colleen Chien

          Too true. As instructed in your post 9, I searched “Colleen Chien Michelle Lee,” not “Michelle Lee Colleen Chien.” I am not sure why the hit that you mentioned turns up in one search and not the other.

      1. 9.6.1

        That is fine, but her posts being treated as academic work is highly questionable. For example, she doesn’t mention that the two AUs account for about 50% of the applications.

        I think, Dennis, that you should require disclosures before you permit “professors” to post on here as if they are academics.

        Advocate Chien should have the burden to show us that she isn’t being paid to do this.

        Yes, great she is sharing data.

        1. 9.6.1.1

          Much like you, Night Writer, I too think that Prof. Chien has the burden of proof to show she is not a 7 foot tall lizard person. May I suggest a cold room to see if she stops pushing up on her legs and instead lies stationary on her belly?

      2. 9.6.2

        Dennis: My issue is not with the data being public; it is with the narrative about 101 being insinuated and pushed under the veil of academia.

    5. 9.7

      “Search for “Colleen Chien Michele Lee”

      And you will find out all you need to know about who is paying for this research!
      Or whatever comes out of her mouth.”

      It’s true. It all lines up with just one search. But you have to use Bing, so basically no one will ever know.

  8. 8

    Presumably all the people involved with this research have filed patents on their data gathering and analytical methods. Right? Super valuable stuff.

    Because without the patents nobody would have a reason for doing something like this. Right?

    Let’s hear from the logic-patenting maximalists! Because if Chien et al did file claims and those claims were rejected, we could then hear the whining all the way to the further reaches of the solar system.

  9. 7

    I am shocked — SHOCKED — that 101 rejections increased in the art units where claims are filed which most resemble the claims that were tanked by the Supreme Court.

    Also completely unpredictable and SHOCKING is that in the fields with the absolute worst claims, representing the crappiest “innovations” filed by the worst attorneys, 101 rejections are still being handed out and “not understood” by those habitual rent seekers. It’s almost as if some attorneys believe they can avoid rejections just by changing the words they use but without changing the substance (or lack thereof) of the subject matter that falls within the scope of the claims! How can this possibly be explained?

    Oh right: it must be some kind of arbitrary punishment inflicted by the PTO on certain “customers” because that’s the result that Google paid for. Definitely the most reasonable explanation. I mean, it can’t be the case that there is a group of genuinely know-nothing patent attorneys out there (maybe 20% of the attorneys who get on their knees to handle softie-woftie garbage) who just refuse to see the writing on the wall and continue to file junk and flush the client’s money down the toilet because they don’t know what else to do. Nah. Unpossible.

    1. 7.1

      From the other thread but worth repeating here:

      DS: The authors’ conclusion from their own data is so implausible

      What conclusion are you referring to? This one?

      Overall, 101 has not had a huge impact on patent prosecution.

      That’s not “implausible”. That’s just a fact. Nobody is stating that the Supreme Court’s clarifying 101 jurisprudence hasn’t had a big impact on crappy logic and correlation patents, or even on half-baked and incredibly broad nutraceutical / pharma claims that read on natural products. Of course that’s been the case. But overall patent prosecution has not been changed much. The bigger impact has been on litigation, which is also not surprising, because it was litigators who recognized the problems and guided the Supreme Court to its solutions. I grant you that the solutions are not ideal, but that’s because in Alice especially the defense’s hands were tied and they weren’t able to address the bigger issue which is causing all the alleged “confusion” for people like you.

  10. 5

    I do not know what to make of this, but I think it bears mention that the AU1600s (the pale blue line) show a small but discernible and persistent spike in §101 rejections that comes on just before Alice, but long after Myriad. I am not sure why this should be. The spike appears between Jan and Apr 2014, but I do not see that there were any Guidance docs that came out in that time frame. In any event, I just wanted to flag this point, in case anyone else has thoughts on the matter.

    1. 5.1

      Usually it’s because (a) the art unit just got training on that topic, or (b) the art unit sent out an internal memo indicating that its examiners need to be doing something differently.

      I’ve had examiners offer both explanations, off the record of course. One examiner showed me a memo about § 112 ¶ 6 interpretations being applied much more rigorously, well before the modern era and Williamson v. Citrix. It was marked “INTERNAL – NOT FOR PUBLICATION” or some such thing.

      1. 5.1.1

        The way the PTO determines the “success” of examiner training on a particular topic is to review OA’s for 6 months, maybe a year, after the training is given. I remember when they did 112, 1st (written description) training. At a PPAC meeting one of the deputy associate assistant under commissioners for patent office furniture rearrangement discussed the training. His conclusion, “And for six months after the training, rejections for lack of written description went up 50%! The training was a huge success!”

        (None of the available emojis are a very good eye roll)

        1. 5.1.1.2

          “…one of the deputy associate assistant under commissioners for patent office furniture rearrangement…”.

          I loled.

  11. 4

    Everyone should remember that there are no consequences for unethical behavior for law professors and that this law professor may very well be writing this on contract.

    The title should be, “Attorney for Google presents interpretation of USPTO data as part of advocacy.”

    1. 4.1

      Note that there is no mention that the two biggest AUs affected account for over half the patents.

    2. 4.2

      “Colleen Chien, Professor, Santa Clara University Law School”

      Dennis, given that you allow people to put out their names as professors, shouldn’t part of this be a disclosure statement of where they are getting their money?

      Should this read “professor” or “advocate for high-tech company”?

  12. 3

    Why do you put up with it? The USPTO needs you as much as you need them. Or they wipe out all atty. participation, and then the pro ses will get the message and file in their parents’ country. Or find a relative in another country, let them do a small part, and go in with the pro se. Voilà!

  13. 2

    Good to see the ‘reputable academics’ finally catching up on trends we amateurs spotted almost as soon as the data set was released (my blog post from last December is here: link to blog.patentology.com.au ).

    Of course, anybody involved in actual prosecution of applications in the relevant subject matter areas already knew what was going on. The data just confirms and quantifies the carnage.

  14. 1

    >Colleen Chien, Professor, Santa Clara University Law School

    I have read articles by Chien. I think she should disclose the source of all her money over the last 10 years. Her work is not academic, but paid advocacy with questionable ethics.
