Guest Post by Professor Camilla Hrdy (Rutgers Law)
Can generative AI models like ChatGPT be “reverse engineered” in order to develop competing models? If so, will this activity be deemed legal reverse engineering or illegal trade secret misappropriation?
I have now written a few articles exploring this question, including Trade Secrecy Meets Generative AI and Keeping ChatGPT a Trade Secret While Selling It Too. But when I first asked this question a year and a half ago, the answers I received were uniformly negative. I asked a panel at a trade secret conference at Georgetown in 2023, “Can ChatGPT be reverse engineered?” Several members of the panel laughed. When I talked to AI experts, the answer I got was along the lines of: “it’s not going to happen.”
But one of my students, Devin Owens at Akron Law, who has both a patent law and a computer science background, insisted to me that reverse engineering was possible using “model extraction attacks.” A model extraction attack entails submitting a massive number of queries to a “target AI model” and using the target’s responses to train a new model that mimics the original’s behavior and functionality. Devin wrote a student note about this, arguing that AI models are so vulnerable to extraction attacks that they can’t really be “owned” by anyone.
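For readers curious about the mechanics, the query-and-mimic loop described above can be sketched in a toy example. Everything here is hypothetical and drastically simplified: the “target” is a secret linear classifier rather than a large language model, and the attacker’s surrogate is a simple perceptron. Real extraction attacks on generative AI are far more involved, but the structure — query the black box, collect its answers, train a copy on them — is the same.

```python
import random

# Hypothetical "target" model: a black box the attacker can only query.
# Its internal weights are the trade secret; the attacker never sees them.
SECRET_W = [2.0, -1.0, 0.5]

def target_model(x):
    """Black-box API: returns only a label, never the weights."""
    score = sum(w * xi for w, xi in zip(SECRET_W, x))
    return 1 if score > 0 else 0

def extract(n_queries=5000, epochs=20, lr=0.1, seed=0):
    """Model extraction: flood the target with queries and train a
    surrogate (here a perceptron) on nothing but its responses."""
    rng = random.Random(seed)
    queries = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(n_queries)]
    labels = [target_model(x) for x in queries]  # the only access we have

    w = [0.0, 0.0, 0.0]  # surrogate weights, learned from scratch
    for _ in range(epochs):
        for x, y in zip(queries, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            for i in range(3):  # standard perceptron update rule
                w[i] += lr * (y - pred) * x[i]
    return w

def surrogate_model(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Measure how closely the mimic tracks the target on fresh inputs.
w = extract()
rng = random.Random(1)
fresh = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2000)]
agreement = sum(surrogate_model(w, x) == target_model(x) for x in fresh) / len(fresh)
print(f"surrogate agrees with target on {agreement:.1%} of fresh queries")
```

The point of the sketch is that the surrogate ends up reproducing the target’s behavior to a high degree of fidelity without ever touching the target’s internals, which is precisely why the legal question — reverse engineering or misappropriation? — arises.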
Now it seems clear that at least partial reverse engineering of generative AI models is indeed possible, and of increasing concern to AI developers.