Publication Date
3-7-2025
Journal
Virginia Law Review
Abstract
With the rapid emergence of high-quality generative artificial intelligence (“AI”), some have advocated for mandatory disclosure when the technology is used to generate new text, images, or video. But the precise harms posed by nontransparent uses of generative AI have not been fully explored. While the use of the technology to produce material that masquerades as factual (“deepfakes”) is clearly deceptive, this Article focuses on a more ambiguous area: the consumer’s interest in knowing whether works of art or entertainment were created using generative AI.
In the markets for creative content—fine art, books, movies, television, music, and the like—producers have several financial reasons to hide the role of generative AI in a work’s creation. Copyright law is partially responsible. The Copyright Office and courts have concluded that only human-authored works are copyrightable, meaning much AI-generated content falls directly into the public domain. Producers thus have an incentive to conceal the role of generative AI in a work’s creation because disclosure could jeopardize their ability to secure copyright protection and monetize the work.
Whether and why this obfuscation harms consumers is a different matter. The law has never required disclosure of the precise ways a work is created; indeed, failing to publicly disclose the use of a ghostwriter or other creative assistance is not actionable. But AI authorship is different for several reasons. There is growing evidence that consumers have strong ethical and aesthetic preferences for human-created works and understand the failure to disclose AI authorship as deceptive. Moreover, hidden AI authorship is normatively problematic from the perspective of various theories of artistic value. Works that masquerade as human-made destabilize art’s ability to encourage self-definition, empathy, and democratic engagement, turning all creative works into exclusively entertainment-focused commodities.
This Article also investigates ways to facilitate disclosure of the use of generative AI in creative works. Industry actors could be motivated to self-regulate, adopting a provenance-tracking or certification scheme. And Federal Trade Commission (“FTC”) enforcement could provide some additional checks on the misleading use of AI in a work’s creation. Intellectual property law could also help incentivize disclosure. In particular, doctrines designed to prevent the overclaiming of material in the public domain—such as copyright misuse—could be used to raise the financial stakes of failing to disclose the role of AI in a work’s creation.
Volume
111
Issue
1
First Page
139
Last Page
210
Publisher
The Virginia Law Review Association
Disciplines
Law
Recommended Citation
Jacob Noti-Victor, Regulating Hidden AI Authorship, 111 Va. L. Rev. 139 (2025).
https://larc.cardozo.yu.edu/faculty-articles/1006