A group of legal experts is pressing patent agencies, courts and policymakers to address the question as generative A.I. seems on the brink of invading another uniquely human endeavor.
By Steve Lohr
Steve Lohr has written about technology and intellectual property for more than a decade.
Generative artificial intelligence, the technology engine powering the popular ChatGPT chatbot, seems to have a limitless bag of tricks. It can produce on command everything from recipes and vacation plans to computer code and molecules for new drugs.
But can A.I. invent?
Legal scholars, patent authorities and even Congress have been pondering that question. The people who answer “yes,” a small but growing number, are fighting a decidedly uphill battle in challenging the deep-seated belief that only a human can invent.
Invention evokes images of giants like Thomas Edison and eureka moments — “the flash of creative genius,” as the Supreme Court justice William O. Douglas once put it.
But this is far more than a philosophical debate about human versus machine intelligence. The role, and legal status, of A.I. in invention also have implications for the future path of innovation and global competitiveness, experts say.
The U.S. Patent and Trademark Office has hosted two public meetings this year billed as A.I. Inventorship Listening Sessions.
Last month, the Senate held a hearing on A.I. and patents. The witnesses included representatives of big technology and pharmaceutical companies. Next to them at the witness table was Dr. Ryan Abbott, a professor at the University of Surrey School of Law in England, who founded the Artificial Inventor Project, a group of intellectual property lawyers and an A.I. scientist.
The project has filed pro bono test cases in the United States and more than a dozen other countries seeking legal protection for A.I.-generated inventions.
“This is about getting the incentives right for a new technological era,” said Dr. Abbott, who is also a physician and teaches at the David Geffen School of Medicine at the University of California, Los Angeles.
Rapidly advancing A.I., Dr. Abbott contends, is very different from a traditional tool used in inventions — say, a pencil or a microscope. Generative A.I. is also a new breed of computer program. It is not confined to doing things it is specifically programmed to do, he said, but produces unscripted results, as if creatively “stepping into the shoes of a person.”
A central goal of Dr. Abbott’s project is to provoke and promote discussion about artificial intelligence and invention. Without patent protection, he said, A.I. innovations will be hidden in the murky realm of trade secrets rather than disclosed in a public filing, slowing progress in the field.
The Artificial Inventor Project, said Mark Lemley, a professor at the Stanford Law School, “has made us confront this hard problem and exposed the cracks in the system.”
But patent arbiters generally agree on one thing: An inventor has to be human, at least under current standards.
The project has had mixed results so far with patent authorities around the world. South Africa granted it a patent for a heat-diffusing drink container generated by A.I. In the United States, Australia and Taiwan, its claims have been turned down, and most countries, including China, have not yet made a determination.
After the U.S. patent office rejected the project’s patent application — a decision upheld in a federal appeals court — Lawrence Lessig, a professor at the Harvard Law School, joined a brief filed this year with the Supreme Court.
In support of the project’s patent claim, Mr. Lessig and his co-authors wrote that the federal appeals court’s ruling “deprives an entire class of important and potentially lifesaving patentable inventions of any protections” and “jeopardizes billions in current and future investments” by undermining the incentive that patent protection would provide.
The Supreme Court declined to hear the case.
Many patents list several inventors, and company employees are frequently named as inventors even though their employer owns the patent. That suggests a middle ground for A.I. systems: co-inventor, credited and fully disclosed, a partner rather than a solo creator.
“That may end up being where we land, but that’s a pretty big line to cross,” said Senator Chris Coons, the chairman of the Judiciary subcommittee on intellectual property.
If granting A.I. inventor status is a stretch today, stronger intellectual property protection for the fast-evolving technology is not.
Mr. Coons, a Delaware Democrat, and Senator Thom Tillis, a North Carolina Republican, introduced a bill last month to clarify what kinds of innovations are eligible for patents. It is intended as a legislative fix to the uncertainty raised by a series of Supreme Court decisions. Patents on artificial intelligence, along with medical diagnostics and biotechnology, would most likely become easier to obtain, legal experts say.
At the Senate hearing, Dr. Abbott made the case for A.I. invention, helped by an odd-looking drink container he held up and described. It was created by an A.I. system trained on general knowledge. It had no training in container design, and it was not asked to make one.
The A.I. was built to combine simple ideas and concepts into more complex ones and identify when one had a positive outcome, a process repeated again and again. The resulting design was fed into a 3-D printer. The container employs fractal geometry to improve heat transfer, a kind of anti-Thermos. It could, for example, be used to make iced tea quickly: boiled, steeped and then refrigerated.
The container is easy to hold, difficult to drink from and not headed for commercial production yet. But it is certainly novel, and it is entirely the creation of an A.I. system without human control.
The A.I. system was created by Stephen Thaler, who has conducted artificial intelligence research and development for decades, at McDonnell Douglas and later on his own. Dr. Abbott’s study of the A.I. field led him to Dr. Thaler, who agreed to use his technology to generate a demonstration invention or two for the Artificial Inventor Project.
Dr. Thaler’s patented system has some ingredients similar to those in generative A.I. models like ChatGPT, and others that are different. He describes his system as having the machine equivalent of feelings. It becomes digitally excited, producing a surge of simulated neurotransmitters, when it recognizes useful ideas, setting off “a ripening process, and the most salient ideas survive.”
Dr. Thaler said the ability to recognize and react in that way amounted to sentience, and his generative A.I. system is called DABUS, for Device for the Autonomous Bootstrapping of Unified Sentience.
He regards the reluctance of patent authorities to recognize his system as an inventor as discrimination against a creation-capable machine. “It’s speciesism to me,” he said.
But Dr. Abbott said, “That’s totally irrelevant to the legal question.”
And that question will surely become more pressing in time. “There is a universal consensus that A.I. will only get better at this sort of thing,” Dr. Abbott said.