
Discerning Signal from Noise: Navigating the Flood of AI-Generated Prior Art

by Dennis Crouch

This article explores the impact of Generative AI on prior art and potential revisions to patent examination standards to address the rising tidal wave of AI-generated, often speculative, disclosures that could undermine the patent system’s integrity.


The core task of patent examination is identifying quality prior art. References must be sufficiently accessible, clear, and enabling to serve as legitimate evidence of what was previously known. Although documents are more widely available than ever via our vast network of digital communications, there is also increasing junk in the system — documents making unsubstantiated claims that are effectively science fiction. Patent offices prefer patent documents as prior art because they are drafted to meet strict enablement standards and filed with sworn statements of veracity. Issued patents go a step further, carrying the imprimatur of successful examination. Many of us learned the mantra that “a prior art reference is only good for what it discloses” — but in our expanding world of deep fakes, intentional and otherwise, is face value still worth much?

In a new request for comments (RFC), the USPTO has asked the public to weigh in on these issues — particularly focusing on the impact of generative artificial intelligence (GenAI) on prior art.

An AI Journey From Fractals to GPT

By Dennis Crouch

I was recently thinking back to 1996 and the start of my senior year at Princeton University. Although I was a mechanical & aerospace engineering major, I had become fascinated with AI, so I focused my senior thesis on developing a new AI model within the department of electrical and computer engineering. Instead of employing traditional layers, I used a fractal metaphor to design the neural networks. The main theoretical advantage of this approach was its potential to offer a deeper understanding of how the network operated, allowing us to peer into the network’s “brain” and gain insights into its learning process based on the structure it created. The model also facilitated greater human control and direction.
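For readers who want a concrete picture of what a non-layered, self-similar design can mean, here is a minimal sketch in Python (assuming PyTorch is available). It is purely illustrative and not a reconstruction of the thesis model; the block structure, widths, and depth parameter are all hypothetical, chosen only to show how a recursively self-similar block can stand in for a plain stack of layers.

```python
# Purely illustrative: a recursively self-similar ("fractal-style") block.
# Each block is either a single fully connected layer (base case) or two
# smaller copies of itself (recursive case).  Names and sizes are hypothetical.
import torch
import torch.nn as nn

class FractalBlock(nn.Module):
    def __init__(self, width: int, depth: int):
        super().__init__()
        if depth == 0:
            # Base case: an ordinary fully connected layer with a nonlinearity.
            self.body = nn.Sequential(nn.Linear(width, width), nn.Tanh())
        else:
            # Recursive case: two structurally identical sub-blocks in sequence.
            self.body = nn.Sequential(
                FractalBlock(width, depth - 1),
                FractalBlock(width, depth - 1),
            )

    def forward(self, x):
        return self.body(x)

# A small network built from the self-similar block plus an output layer.
net = nn.Sequential(FractalBlock(width=16, depth=3), nn.Linear(16, 1))
print(net(torch.randn(4, 16)).shape)  # torch.Size([4, 1])
```

The appeal of a structure like this is that behavior can be inspected and steered block by block, which is the kind of interpretability and human direction described above.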

This past weekend, computer law expert Van Lindberg reminded me of the Dartmouth Summer Research Project on Artificial Intelligence, which had a game plan of solving AI during the summer of 1956. My senior thesis project took eight months and met with roughly the same (lack of) success. I’ll write more about this later, but the project was one of the first to use the massive parallelism offered by human players across the internet as the learning model. That part was a big success, as was the applet front-end developed mostly by my partner Ryan Kotaro Akita.

I feel like things are coming full circle for me, but this time the AI models have improved exponentially. The rapid progress in human-machine interaction and generative AI is astonishing. I find myself constantly exploring new, innovative technologies poised to disrupt bloated organizations. The venture capital landscape is necessarily shifting as small teams develop and release disruptive products and services at higher speed and with far lower capital requirements.

Today’s exploration for me is autoGPT, which allows users to stack various traditional and AI services to create the best autonomous assistant that I have seen so far. After being provided with general instructions, autoGPT can generate a dynamic project plan that it then executes through online interactions and calls to GPT-4. This technology has the potential to equalize expertise, much like what happened with chess several years ago. However, unlike chess, which relies on a fixed board and finite options, this new model addresses real-time, real-world problems. Indeed, it’s a fascinating time to be alive.
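To make the idea concrete, here is a minimal sketch in Python of the plan-and-execute loop that tools like autoGPT are built around. The llm() and web_search() functions are hypothetical placeholders for a model API and an online tool; this is a sketch of the pattern, not the actual autoGPT implementation.

```python
# A minimal sketch of an autonomous plan-and-execute loop, under simplifying
# assumptions.  llm() stands in for any chat-completion API (e.g., GPT-4) and
# web_search() for any online tool; both are hypothetical placeholders.
import json

def llm(prompt: str) -> str:
    """Placeholder for a call to a language model API."""
    raise NotImplementedError("wire this to your model provider")

def web_search(query: str) -> str:
    """Placeholder tool: return search results for the query."""
    raise NotImplementedError("wire this to a search service")

TOOLS = {"web_search": web_search}

def run_agent(goal: str, max_steps: int = 5) -> str:
    notes = []
    # 1. Ask the model to break the general instructions into concrete steps.
    plan = llm(f"Goal: {goal}\nList the steps needed, one per line.").splitlines()
    # 2. Execute each step, letting the model pick a tool and its input.
    for step in plan[:max_steps]:
        decision = json.loads(
            llm(f"Step: {step}\nNotes so far: {notes}\n"
                'Reply as JSON: {"tool": "web_search", "input": "..."}')
        )
        tool = TOOLS.get(decision["tool"])
        if tool:
            notes.append(tool(decision["input"]))
    # 3. Ask the model to summarize what was accomplished.
    return llm(f"Goal: {goal}\nNotes: {notes}\nWrite the final answer.")
```

In practice, frameworks wrap this loop with persistent memory, self-critique, and guardrails, but the basic cycle of plan, act, observe, and revise is what makes these assistants feel autonomous.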

This setup is wonderful but also so scary. Awesome in all senses. The AI is ruthless and without emotion or wisdom. It empowers anarchists, terrorists, and reckless operators to inflict significant harm. GPT layering enables more sophisticated attacks that exploit a combination of human-social and technological weaknesses in a massively parallel manner. Scams, both big and small, are becoming increasingly easy to execute. Of course, there will be those who use these tools to fight for good. As the battle of technology unfolds, life may begin to resemble a futuristic graphic novel more and more.

What are your thoughts on where we are headed?