Fluid Autonomy in Agentic AI: Implications for
Authorship, Inventorship, and Liability Frameworks
Anirban Mukherjee
Hannah Hanwen Chang
June 20, 2025
Anirban Mukherjee (anirban@avyayamholdings.com) is Principal at Avyayam Holdings. Hannah
H. Chang (hannahchang@smu.edu.sg; corresponding author) is Associate Professor of Marketing
at the Lee Kong Chian School of Business, Singapore Management University. This research was
supported by the Ministry of Education (MOE), Singapore, under its Academic Research Fund
(AcRF) Tier 2 Grant, No. MOE-T2EP40221-0008.
Abstract
Agentic Artificial Intelligence (AI) systems autonomously pursue goals by learning and adapting
their strategies. Unlike traditional generative AI, which is primarily reactive to user prompts,
agentic AI systems exhibit what we term fluid autonomy: their multi-step processes are (i) stochastic
(each step is probabilistically determined), (ii) dynamic (shaped by ongoing human–machine
interaction), and (iii) adaptive (capable of reorienting to new contexts). While this fluidity fosters
complex, co-evolutionary human–machine interactions capable of generating novel and beneficial
outputs, it also irrevocably blurs boundaries, irreducibly entangling human and machine inputs.
Because traditional legal frameworks assume that authorship, inventorship, and liability can be
traced to discrete actors, they fracture when confronted with this fundamental unmappability,
creating critical gaps where no party clearly holds ownership or bears liability, and “crumple zones” where humans or organizations unfairly absorb responsibility for outcomes they did not fully control. The
challenge is not the legal status of human and machine contributions and control, but the practical
impossibility of accurately tracing outcomes to specific sources, rendering distinct standards based
on origin unworkable. To address this, we advance a principle of functional equivalence: legal and
policy frameworks should treat human and machine contributions as functionally equivalent—not
due to moral or economic parity, but as a pragmatic necessity.
Keywords: Agentic Artificial Intelligence, Fluid Autonomy, Machine Creativity, Authorship,
Copyright, Inventorship, Patent, Liability, Tort.
I Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
II Fluid Autonomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
III Authorship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
IV Inventorship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
V Liability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
VI The Principle of Functional Equivalence . . . . . . . . . . . . . . . . . . . . . . . . 34
VII Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
I. Introduction
Agentic Artificial Intelligence (AI) refers to AI systems capable of autonomously pursuing long-
term goals, making decisions, and executing complex workflows without continuous human
intervention.1 Although agentic AI shares conceptual roots with earlier intelligent agents—goal-
oriented software designed to sense and act within an environment2—and autonomous agents
in multi-agent systems,3 it represents a significant advancement. Historically, such agents were
1Within this analysis, agency denotes a system’s capacity to initiate goal-directed actions—whether through pro-
grammed imperatives or learned behaviors—while autonomy refers to the degree of independence from direct
human control. This distinction builds on established AI literature. See, e.g., Stan Franklin & Art Graesser, Is
It an Agent, or Just a Program?: A Taxonomy for Autonomous Agents, in Intelligent Agents: Agent Theories,
Architectures & Languages 21, 25 (Michael Wooldridge et al. eds., 1996) (defining an agent as “a system situated
within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its agenda”);
Michael Wooldridge & Nicholas R. Jennings, Intelligent Agents: Theory and Practice, 10 Knowl. Eng’g Rev. 115,
117–18 (1995) (distinguishing between an agent’s “pro-activeness”—its ability “to exhibit goal-directed behaviour by
taking the initiative”—and its “autonomy,” defined as operating “without the direct intervention of humans”); J. M.
Beer, A. D. Fisk & W. A. Rogers, Toward a Framework for Levels of Robot Autonomy in Human-Robot Interaction,
3 J. Hum.-Robot Interact. 74, 77 (2014) (characterizing autonomy as “the extent to which a system can carry
out its own processes and operations without external control”); Jeffrey M. Bradshaw et al., The Seven Deadly
Myths of “Autonomous Systems”, 28 IEEE Intell. Sys. 54, 56 (2013) (highlighting autonomy’s multifaceted nature by
distinguishing “self-sufficiency—the capability of an entity to take care of itself” from “self-directedness, or freedom
from outside control”).
2See Wooldridge & Jennings, supra note 1, at 115.
3See, e.g., N. R. Jennings, Katia Sycara & Michael Wooldridge, A Roadmap of Agent Research and Development,
1 Autonomous Agents & Multi-Agent Sys. 7 (1998); Peter Stone & Manuela Veloso, Multiagent Systems:
A Survey from a Machine Learning Perspective, 8 Autonomous Robots 345 (2000); Michael Wooldridge, An
Introduction to Multiagent Systems (2d ed. 2009).
typically constrained to narrowly defined tasks under rigid rules.4 In contrast, modern agentic
AI systems leverage advanced technologies to interpret context, flexibly adapt strategies, and
proactively orchestrate multi-step processes.5 As recent scholarship emphasizes, this marks a
fundamental shift from rigid systems requiring constant oversight to AI that can self-initiate
complex plans and adjust them on the fly,6 enabling such systems to tackle open-ended tasks and coordinate
with other agents or humans to achieve complex objectives.
Central to this distinction between traditional AI and agentic AI is the shift from reactive,
advisory roles to proactive execution. An agentic AI could, for instance, autonomously negotiate
pricing with suppliers, reroute shipments to avoid geopolitical disruptions, and recalibrate produc-
tion schedules in response to fluctuating demand. A prime example is the emerging class of Deep
Research Agents (DRAs)—systems such as OpenAI’s DeepResearch—which “autonomously orches-
trate multistep web exploration, targeted retrieval, and higher-order synthesis,” transforming vast
online information into analyst-grade reports.7 DRAs make independent decisions about source
credibility, how to weigh conflicting information, and how to structure the final report—tasks
previously reserved for human judgment.8 Yet, the overarching need for the research and the
utilization of findings remains in the domain of their human users. The AI here is more than
an amanuensis9 but less than a collaborator—it makes decisions that relate to the form of the
outcome, but does not provide the motivation for the research or shape its use.10
4See R.D. Caballar, What Are AI Agents?, IEEE Spectrum (Feb. 26, 2024), https://spectrum.ieee.org/ai-agents; Roger
Clarke, Regulatory Alternatives for AI, 35 Comput. L. & Sec. Rev. 398, 399 (2019).
5See Yarden Shavit et al., Practices for Governing Agentic AI Systems 1 (OpenAI Tech. Rep., 2023), https:
//openai.com/research/practices-for-governing-agentic-ai-systems.
6See Caballar, supra note 4.
7See OpenAI, Introducing Deep Research, OpenAI (Feb. 2, 2025), https://openai.com/index/introducing-deep-
research/; see also Mingxuan Du et al., DeepResearch Bench: A Comprehensive Benchmark for Deep Research Agents,
arXiv:2506.11763 (June 13, 2025), https://arxiv.org/abs/2506.11763 (analyzing 96,147 real-world queries and showing
that users already deploy DRAs for PhD-level research tasks across 22 domains, including business, law, health,
science, and the arts).
8See D. B. Acharya, K. Kuppan & B. Divya, Agentic AI: Autonomous Intelligence for Complex Goals—A Comprehensive
Survey, 13 IEEE Access 18912, 18913 (2025), https://doi.org/10.1109/ACCESS.2025.3532853 (defining Agentic AI as
autonomous systems capable of accomplishing complex, long-term tasks without direct human supervision).
9See Jane C. Ginsburg & Luke Ali Budiardjo, Authors and Machines, 34 Berk. Tech. L.J. 343, 344 (2019).
10The characterization of AI as amanuensis is consistent with the historical legal treatment of technological aids. Courts
and copyright offices have traditionally viewed such tools—from cameras to word processors—as extensions of
human agency rather than independent creators. See, e.g., U.S. Copyright Office, Compendium of U.S. Copyright
While agentic AI exhibits functional autonomy, it lacks the consciousness and culpability
inherent in human agency. Moral agency, as understood in Western law and philosophy, requires
a conscious self with subjective experience, intent, and the capacity for rational judgment,11
which forms the basis for legal accountability and the attribution of rights.12 Agentic AI systems
derive their behavior from processes that optimize for reward signals designed to align with
human decision-making patterns and successful outcomes, rather than from intrinsic motivation.
Therefore, their agency is purely functional—emulative of human behavioral patterns—yet lacking
the conscious intentionality that characterizes genuine human agency.13
Agentic AI also falls short of true autonomy. A truly autonomous agent pursues its own
agenda,14 whereas agentic AI operates under the goals and constraints set by its human users;
agentic AI is chiefly autonomous in means and only somewhat autonomous in ends. It is there-
fore distinct from both traditional, non-autonomous AI that is better viewed as a machine, and
hypothetical, fully autonomous artificial general intelligence (AGI) that may qualify for legal
personhood.15
In this Article, we argue that this intermediate state—best characterized as fluid autonomy, de-
fined by its stochastic, dynamic, and adaptive nature—introduces novel challenges for legal and
Office Practices § 313.2 (3d ed. 2021) (“The Office will not register works produced by a machine or mere
mechanical process that operates randomly or automatically without any creative input or intervention from a
human author.”).
11See, e.g., Markus Schlosser, Agency, in Stan. Encyc. Phil. (Edward N. Zalta & Uri Nodelman eds., Winter 2019),
https://plato.stanford.edu/archives/win2019/entries/agency/.
12See, e.g., Cruzan v. Dir., Mo. Dep’t of Health, 497 U.S. 261, 279 (1990) (affirming a competent person’s liberty interest
in making their own medical decisions); Restatement (Second) of Contracts § 12(2) (Am. L. Inst. 1981) (defining
the capacity to contract in terms of the ability to understand the nature and consequences of the transaction).
13Parallels may be drawn to both non-rational “actors” (e.g., domesticated animals) whose acts are imputed to the owner
and to corporations whose “agency” does not presuppose will. See, e.g., 4 William Blackstone, Commentaries
on the Laws of England 236–37 (1769) (classifying livestock and household animals as chattels and imposing
tort liability on the owner for any damage they cause); Restatement (Second) of Torts § 509 cmt. a (Am. L. Inst.
1977) (“A possessor of a domestic animal ... is subject to liability for harm done by the animal ... because the
animal is regarded as the possessor’s instrumentality, not as a legal person.”); Buckle v. Holmes, [1926] 2 K.B. 125,
128 (Eng.). However, unlike non-rational actors, agentic AI systems demonstrate sophisticated rationalizing, and
unlike corporations, they are not governed by rational actors.
14See Franklin & Graesser, supra note 1, at 25; Bartosz Brożek & Marek Jakubiec, On the Legal Responsibility of
Autonomous Machines, 25 Art. Intell. & L. 293, 294 (2017).
15See Joanna J. Bryson, M. E. Diamantis & Thomas D. Grant, Of, for, and by the People: The Legal Lacuna of
Synthetic Persons, 25 Art. Intell. & L. 273, 274 (2017).
policy frameworks. The issue at hand is not how we should conceptualize human and machine—
significant progress has been made on such questions—but rather the fundamental unmappability
of roles and contributions that arises within intertwined human-machine agentic processes. For
instance, contributions in a creative collaboration may defy categorization as originating solely
from human or machine sources.16 This renders traditional legal frameworks that presuppose a
divisible chain of creation impracticable.17 Given the possibility and even the high likelihood of
such scenarios, we argue, legal and policy frameworks should treat human and machine contribu-
tions as functionally equivalent—not because of their inherent moral equality or to incentivize
machine creativity, but due to the practical impossibility of determining origin.
This Article proceeds as follows. Part II establishes the concept of fluid autonomy. We
then trace how the fundamental unmappability of contributions destabilizes current legal and
policy paradigms, examining its implications for authorship frameworks in Part III, inventorship
challenges in Part IV, and liability allocation in Part V. In response to this systemic challenge, Part
VI introduces and defends a novel principle: functional equivalence.18 We conclude in Part VII by
16Some scholars argue against framing human–machine interactions as collaborative, asserting that true collaboration
requires shared intentionality, moral agency, and the ability to co-determine objectives—attributes they deem absent
in AI, which they view as heteronomous tools (i.e., governed externally rather than by self-determination). See, e.g.,
K. D. Evans, S. A. Robbins & J. J. Bryson, Do We Collaborate with What We Design?, 15 Topics Cogn. Sci. 1, 2
(2023). However, this critique presumes that machines lack the autonomy to participate in open-ended creative
processes. Agentic AI subverts this premise. For example, when tasked with producing a climate report, the AI might
autonomously refocus the analysis from mitigation costs to adaptation ethics based on its assessment of emerging
scholarship. While the human sets the broad mandate, the AI dynamically determines the specific objectives and
methodological trajectory—a form of procedural co-determination that blurs the intentional hierarchy (namely, that
humans have intentions while a mere tool does not) central to heteronomy critiques. This fluid renegotiation of
sub-goals defies clean attribution, making “collaboration” less a metaphor than a functional descriptor of such
creative entanglement.
17Authors such as Annmarie Bridy have considered the case where “digital works (i.e., software programs) will, rela-
tively autonomously, produce other works that are indistinguishable from works of human authorship.” Annmarie
Bridy, Coding Creativity: Copyright and the Artificially Intelligent Author, 2012 Stan. Tech. L. Rev. 5, ¶ 3. However,
they maintained the assumption that human and machine contributions can be separated. This holds, for example,
when the output in question is developed using an AI whose autonomy is predictable (e.g., a text-to-image AI will
always generate a different output—a distinct image—but will always undertake the same action—it will generate an
image) or even negligible. Agentic AI, in contrast, brings to the fore scenarios where, in addition to the AI’s outputs
being indistinguishable from human outputs, inputs in its “collaborations” with human users become irreducibly
entangled.
18The concept of “functional equivalence” has intellectual precedents across disciplines. In linguistics and sociology, it
denotes how different forms or structures can fulfill the same essential function. See, e.g., Eugene A. Nida, Toward
a Science of Translating 159 (1964) (distinguishing between formal and dynamic/functional equivalence in
translation); Robert K. Merton, Social Theory and Social Structure 34 (1968 ed.) (discussing functional
outlining the practical implementation pathways for this principle, discussing its limitations, and
exploring its broader societal implications.
II. Fluid Autonomy
Prevailing discourse on AI authorship, inventorship, and liability often relies on a binary conceptu-
alization of AI autonomy.19 At one pole lies traditional generative AI, where users maintain almost
complete control over the AI’s actions through iterative prompting and output curation.20 For
example, text-to-image systems like DALL-E generate outputs conditioned on human-provided
prompts, with any creative variation constrained by the input parameters. In each iteration, the
AI generates an image—the AI’s output may be unpredictable, but its action is predictable. A
human user might experiment with different prompts and then curate the AI-generated images,
selecting the most desirable ones. At the other pole lies hypothetical AGI, capable of sovereign au-
tonomy—independently conceiving and executing creative agendas without any human oversight
or direction.21
alternatives in social institutions). In law, a parallel principle is well-established in electronic-commerce frameworks,
which grant electronic records and signatures the same legal validity as their paper-based counterparts. See, e.g.,
United Nations Convention on the Use of Electronic Communications in International Contracts art. 9, adopted by
G.A. Res. 60/21, U.N. Doc. A/RES/60/21 (Dec. 23, 2005); Uniform Electronic Transactions Act § 7 (1999) (adopted in
49 U.S. jurisdictions). However, this concept has not been systematically applied to resolve the attribution crises in
authorship, inventorship, or liability—the doctrinal gaps exposed by the fluid autonomy of agentic AI. We adapt it
here as a pragmatic legal principle: for the purpose of assigning rights and liability, human and AI contributions
should be treated as interchangeable, not due to any moral or ontological parity, but as a necessary response to the
practical impossibility of disentangling their origins.
19A related taxonomy is often presented where, at one extreme, stand machines like word processors that do “not
cross the ‘mere tool’ threshold”, and at the other, machines such as video games where the “user chooses among
predetermined options decided by the programmer”, and where the “programmer of a videogame can be said to
have authored the audiovisual output because, in fact, she did: She created the code and files generating the images
and sounds.” See Daniel J. Gervais, The Machine as Author, 105 Iowa L. Rev. 2053, 2069–70 (2019). Gervais later
rejects this classification for modern generative AI (“deep learning machines”), in part recognizing AI’s capacity for
autonomy, but from the perspective of the unpredictability (stochasticity) of the machine’s outputs and its ability
to develop high-level representations (e.g., Word2Vec) that capture correlations in the data. Id. at 2071–75. While
he discusses stochasticity in outputs, his framework does not fully encompass the dynamic adaptability and the
contextual autonomy (i.e., autonomy that varies depending on user instructions and the specific task context) that
characterize modern agentic AI, which are crucial for understanding the emergent problem of unmappability.
20See Pamela Samuelson, Generative AI Meets Copyright, 381 Science 158 (2023) (describing how users of traditional
generative AI direct outputs through prompting and curation).
21See Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014) (discussing the potential for AI to
develop independent, world-altering goals).
Agentic AI disrupts this dichotomy by introducing a novel, partial autonomy: the AI exercises independence in execution while still operating under human-defined parameters, namely the overarching objectives and constraints set by a human user (e.g., “write a research paper, citing only peer-reviewed papers” or “optimize supply chain costs without introducing new vendors”).
When AI autonomy is either negligible or complete, attributing contributions and control is more
clear-cut: with negligible autonomy (i.e., if AI were merely a tool), all creative output and control
are attributable to the human (akin to a human using any tool); with complete autonomy, the AI can
be the sole creator. However, the partial nature of agentic AI autonomy, which we term its fluidity,
is characterized by three intertwined operational properties that challenge attribution and lead to
a fundamental unmappability of roles and contributions: its processes are (i) stochastic, meaning
its multi-step processes are recursively probabilistic, where the outcome of one probabilistic step
influences the probabilities of subsequent steps, (ii) dynamic, as it responds to user inputs within
the context of all prior interactions, creating a co-evolutionary feedback loop, and (iii) adaptive,
learning implicitly from data and feedback to shift strategies. We now discuss these characteristics
in detail.
Stochasticity in agentic AI actions and outputs arises from its foundation in generative models.
Unlike symbolic AI, which is programmed with explicit, deterministic rules, agentic AI’s internal
intermediate steps (the course of its analysis) and its final step are all probabilistic.22 At each
decision point in its multi-step process, the AI can branch along divergent paths, with each choice
probabilistically influencing subsequent options. Therefore, even minor variations can recursively
compound to produce vastly different outcomes, a process that can lead to chaotic divergence from
22For foundational work on the stochasticity that underlies generative models, see, e.g., Diederik P. Kingma & Max
Welling, An Introduction to Variational Autoencoders, 12 Found. & Trends Mach. Learning 307 (2019); Ian
J. Goodfellow et al., Generative Adversarial Nets, in Advances Neural Info. Processing Sys. 2672 (2014);
Jonathan Ho, Ajay Jain & Pieter Abbeel, Denoising Diffusion Probabilistic Models, in Advances Neural Info.
Processing Sys. 6840 (2020); Tom B. Brown et al., Language Models Are Few-Shot Learners, in Advances Neural
Info. Processing Sys. 1877 (2020). In these architectures, neural networks learn transfer functions (stochastic
functions relating model inputs to outputs) that combine low-level representations into increasingly abstract, high-
level representations. See Yoshua Bengio & Yann LeCun, Scaling Learning Algorithms Towards AI, in Large-Scale
Kernel Machines 1, 3–5 (Léon Bottou et al. eds., 2007). The weights and biases of neurons in these networks,
learned during training, define the parameters of the probability distribution that governs the generative process
for each decision in a multi-turn or multi-step agentic AI.
a user’s initial expectations.23 Due to its multi-step processes, agentic AI introduces randomness
at three interdependent levels: (1) probabilistic action selection at each decision node, (2) path-
dependent adaptation to prior workflow states, and (3) interpretive variance in processing user
feedback. This creates the computational analog of the “butterfly effect,” where microscopic
differences in initial conditions can lead to macroscopic outcome divergence (such as the agent
analyzing climate policy bifurcating into econometric or sociopolitical frameworks based on early
source selection—a probabilistic choice during initial literature review that then recursively biases
all subsequent analysis).
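The compounding effect of the first two of these levels (probabilistic selection and path dependence) can be made concrete with a minimal, purely illustrative sketch in Python. The analytical frames, weights, and update rule below are invented for illustration and are not drawn from any vendor’s implementation: each step samples a choice whose probabilities depend on the path already taken, so two runs given identical instructions can settle into entirely different analytical trajectories.

    # Illustrative sketch only: a toy multi-step agent whose choice at each
    # decision node is sampled from probabilities conditioned on the path taken
    # so far, so that an early random pick recursively biases every later step.
    import random

    FRAMES = ["econometric", "sociopolitical"]  # hypothetical analytical frames

    def run_agent(seed, steps=5):
        rng = random.Random(seed)
        weights = {f: 1.0 for f in FRAMES}  # start with no preference
        path = []
        for _ in range(steps):
            total = sum(weights.values())
            # (1) probabilistic action selection at this decision node
            frame = rng.choices(FRAMES, weights=[weights[f] / total for f in FRAMES])[0]
            path.append(frame)
            # (2) path-dependent adaptation: the chosen frame becomes more likely
            # next time, so small early differences compound across the workflow
            weights[frame] *= 2.0
        return path

    # Identical instructions; different random draws can lock in different frames
    # early and carry that bias through the remainder of the workflow.
    for seed in range(3):
        print(seed, run_agent(seed))
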
Furthermore, agentic AI’s autonomy is dynamic. The AI responds to current user inputs
within the context of its own prior outputs and the user’s prior inputs. This creates a feedback
loop: the AI’s autonomy in a given interaction is shaped by its own prior autonomy and the
user’s response to it. Negative user feedback on excessive autonomy may lead the AI to curtail
it, while positive feedback may encourage greater initiative. The user’s guidance thus shapes
the AI’s autonomy, influencing the balance between following specific directions and exercising
independent assessments. This dynamic quality is a crucial component of its fluid autonomy, as
the AI’s operational boundaries and even its ‘understanding’ of the task are not fixed but are
continuously molded by the ongoing dialogue, distinguishing it from more static interactive tools
where each input might trigger a relatively isolated response.
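A stylized sketch, under the simplifying assumption that the agent’s latitude can be summarized as a single number, illustrates this feedback loop; the update rule and values are hypothetical rather than a description of any deployed system. Each turn’s feedback nudges the level of initiative the agent will exercise on the next turn, so its operational boundaries are a product of the whole interaction history rather than a fixed setting.

    # Hypothetical sketch: the agent's willingness to act on its own initiative is
    # adjusted turn by turn by the user's reaction to its prior displays of
    # autonomy (the numbers and update rule are invented for illustration).
    def update_autonomy(level, feedback, rate=0.2):
        if feedback == "approve":
            level += rate * (1.0 - level)   # encouraged: take more initiative
        elif feedback == "correct":
            level -= rate * level           # reined in: defer to instructions
        return min(1.0, max(0.0, level))

    level = 0.5  # start roughly midway between following orders and acting freely
    for turn, feedback in enumerate(["approve", "approve", "correct", "approve"], 1):
        level = update_autonomy(level, feedback)
        print(f"turn {turn}: user feedback '{feedback}' -> autonomy {level:.2f}")
    # The final value depends on the entire sequence of feedback rather than on
    # any single instruction, mirroring the co-evolutionary loop described above.
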
Finally, it is adaptive. When considering machine creativity, Ginsburg and Budiardjo write, “The
computer scientist who succeeds at the task of ‘reduc[ing] [creativity] to logic’ does not generate
new ‘machine’ creativity—she instead builds a set of instructions to codify and simulate ‘substantive
aspect[s] of human [creative] genius,’ and then commands a computer to faithfully follow those
instructions.”24 Inherent in this conceptualization is the idea that the AI was programmed to be
23This stochastic nature stems from agentic AI’s reliance on generative models and reinforcement learning. Mech-
anisms such as epsilon-greedy exploration, where an agent randomly selects a non-optimal action with a given
probability to explore alternative strategies, introduce intrinsic variability. At each step, the agent makes random
choices, making the agent’s path non-deterministic and sensitive to initial conditions. See, e.g., Richard S. Sutton
& Andrew G. Barto, Reinforcement Learning: An Introduction 28 (2d ed. 2018). The result is a chaotic
divergence of outcomes as the probabilistic choices cascade to lead to drastically different pathways.
24Ginsburg & Budiardjo, supra note 9, at 361.
creative rather than learning to be creative. While the former was true for symbolic AI systems,
modern agentic AI learns from data and interactions with users. Its programmers do not explicitly
code instructions for the AI to follow; rather, the AI learns to be creative through mechanisms like positive and negative reinforcement, often derived implicitly from user acceptance or correction of its outputs or explicitly from feedback during the interaction. This training makes modern agentic
AI’s behavior contextual, where the level of human control is less clearly defined and subject
to change during operation—its planning, execution, and outputs can vary significantly across
different interactions and tasks.25 This capacity for adaptive learning means the AI’s behavior
is not only contextual but can also evolve in unexpected directions, further contributing to the
fluidity of its autonomy and complicating direct human control over its developmental trajectory.
For instance, an agentic AI system can modify its creative approach based on user feedback,
both implicit and explicit, showcasing the interplay of its stochastic, dynamic, and adaptive nature.
Consider a user who tasks a DRA (such as DeepResearch) with the strategic objective of assessing
the ethical implications of AI-driven diagnostics. The user initiates the research by defining the
broad topic, while the DRA manages the execution, from identifying relevant publications to
synthesizing findings into a structured report. The DRA autonomously determines the appro-
priate analytical frameworks, potentially choosing (a stochastic choice influenced by initial data
encounters) to compare different ethical guidelines across various countries—a level of detail not
explicitly specified by the user. Dynamically, the DRA refines its approach based on user feedback;
for example, if a user consistently prioritizes peer-reviewed articles over preprints, the DRA may
adaptively learn to favor such sources, even without direct instruction, effectively internalizing
the user’s scholarly preferences.26 Thus, temporally, the human user’s oversight dominates during
25Consequently, agentic AI can exhibit emergent behavior—complex, unpredictable patterns that result from its
training, inference, and model structure. See Melanie Mitchell, Complexity: A Guided Tour 3–14 (2009)
(providing a general overview of emergence in complex systems). While symbolic AI could exhibit some unexpected
behaviors due to the complexity of its rules, the scale and nature of emergent behavior in agentic AI, driven by its
learning mechanisms, are qualitatively different.
26Such adaptivity can arise through several complementary mechanisms. First, in-context learning can enable the
system to draw upon prior interactions—such as user prompts and the model’s own outputs. Second, implicit
preference learning, often implemented through reinforcement learning techniques, may allow the model to adjust
its behavior based on patterns of user approval or correction over time. Third, explicit adaptation may occur
either through direct user instruction or via fine-tuning. Fourth, during retrieval-augmented generation, the system
goal-setting while the DRA assumes increasing control during execution. Functionally, the human
user defines the overarching strategic objectives while the DRA operationalizes these through its
context-sensitive decisions. And interactively, the entire process is subject to continuous, adaptive
modification based on user feedback, blurring the demarcation of control and contribution.
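The adaptive component of this example can be illustrated with a small, hypothetical sketch: the user never issues an instruction such as “prefer peer-reviewed sources,” yet after a few drafting rounds in which journal citations are kept and preprints are cut, the agent’s source-selection weights shift on their own. The mechanism, names, and numbers below are invented for illustration and do not describe any deployed DRA.

    # Illustrative sketch of implicit preference learning from user edits.
    from collections import defaultdict

    weights = defaultdict(lambda: 1.0)   # prior: all source types weighted equally

    def observe_edits(kept, removed):
        """Reinforce source types the user kept; discount those the user cut."""
        for src in kept:
            weights[src] *= 1.5
        for src in removed:
            weights[src] *= 0.5

    # Three drafting rounds in which the user silently keeps journal articles and
    # deletes preprints; no explicit preference is ever stated.
    for _ in range(3):
        observe_edits(kept=["peer_reviewed"], removed=["preprint"])

    total = weights["peer_reviewed"] + weights["preprint"]
    for src in ("peer_reviewed", "preprint"):
        print(src, round(weights[src] / total, 2))
    # The agent now strongly favors peer-reviewed sources, a preference it
    # inferred from the user's behavior rather than received as an instruction.
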
This multifaceted fluid autonomy—arising from the synergistic interplay of an agentic system’s
recursive stochasticity, its deep dynamism in responding to and evolving with user interaction,
and its ongoing adaptivity in learning and strategy modification, as exemplified above—stands
in stark contrast to traditional single-step generative systems like DALL-E. While such systems
may produce stochastic outputs (e.g., generating different images from the same prompt), their
core action remains predictable (e.g., DALL-E will always generate an image or images), and
they lack the multi-step, recursively probabilistic internal processes characteristic of agentic
AI that can lead to chaotic divergence. Furthermore, their capacity for dynamism is typically
limited; they generally do not build a rich, evolving contextual understanding based on a long
history of user interaction, nor do their operational boundaries co-evolve in the same profound
way as agentic systems. Similarly, their adaptivity is generally constrained post-initial training,
lacking the agentic capacity for significant, implicit strategy reorientation based on nuanced user
feedback. Crucially, it is the absence of this rich, synergistic interplay—the lack of deeply interwoven
stochastic depth, dynamic responsiveness, and adaptive learning—that distinguishes traditional
generative AI. Consequently, while these simpler tools present their own set of challenges, they
do not engender the same degree of fundamental unmappability in contributions and control that
arises from the fluid autonomy inherent in agentic AI.
Specifically, fluid autonomy creates recursive feedback loops—processes where outputs in
prior interactions become inputs in subsequent interactions—between AI and human, leading to a
can dynamically prioritize external information sources. Together, these mechanisms can form a ‘relationship
memory’ that evolves across interactions. For a comprehensive survey of the field of LLM personalization, see
Zhehao Zhang et al., Personalization of Large Language Models: A Survey, arXiv:2411.00027 (Nov. 1, 2024),
https://arxiv.org/abs/2411.00027 (providing a taxonomy of techniques for adapting LLMs to user-specific data and
interaction histories). For a specific example of feedback mechanisms, see, e.g., Yuntao Bai et al., Constitutional
AI: Harmlessness from AI Feedback, arXiv:2212.08073 (Dec. 15, 2022), https://doi.org/10.48550/arXiv.2212.08073
(describing a method for training AI systems using AI-generated feedback based on a set of human-provided
principles).
co-evolutionary dance.27 For instance, in the DRA example, suppose the AI prioritizes research
sources, adjusts analytical methods, and replicates user patterns in response to user feedback. Thus,
a legal scholar who previously emphasized comparative constitutional law in their prompts may
find the AI autonomously expanding its analysis to include foreign jurisprudence—not because
the user explicitly requested it, but because the system has learned to amplify and recombine the
user’s demonstrated preferences. Given this blending of human and machine initiatives, is the
resulting work a product of human intent, machine autonomy, or an inseparable fusion of both?
An AI’s outputs, emerging from this complex interplay, can resist clear attribution. For instance, an insight that arose from an AI’s initiative in an interactive research process can just as plausibly be viewed as emerging from the AI’s mimicry of its human user’s inputs and feedback as from the human user’s instructions on the same topic, instructions themselves prompted by the AI’s prior outputs.
This inherent entanglement and the resulting difficulty in definitively tracing origins is what we
term fundamental unmappability.28
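The claim can also be given a stylized form. In the toy model below, a deliberate oversimplification in which each party’s “style” is a single invented number, the AI adapts toward the human at every round while the human drifts toward the AI’s latest output; the process converges on a blend that neither party held at the outset and that cannot be decomposed after the fact into a human share and a machine share.

    # Purely illustrative model of the co-evolutionary loop: mutual adaptation
    # yields an end state attributable to the interaction, not to either party.
    human_style = 0.0   # stylized starting positions on an invented scale
    ai_style = 1.0

    for rnd in range(1, 7):
        ai_style += 0.5 * (human_style - ai_style)     # AI adapts toward the human
        human_style += 0.3 * (ai_style - human_style)  # human learns from the AI
        print(f"round {rnd}: human={human_style:.3f}  ai={ai_style:.3f}")

    # Both trajectories converge on an intermediate value; asking which portion
    # of the final "style" is human and which is machine has no clean answer,
    # which is the practical sense of unmappability developed in the text.
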
This fundamental unmappability, and the uncertainty it engenders,29 is novel to agentic AI.
27A recursive co-evolution of human and AI contributions finds conceptual analogs in several social science theories.
Actor-network theory (ANT), which rejects hierarchical distinctions between human and non-human “actants,”
provides a particularly apt framework. See Bruno Latour, Reassembling the Social: An Introduction to
Actor-Network-Theory (2005). ANT’s symmetrical treatment of agency aligns with the paper’s argument that
human-AI creative entanglement defies traditional attribution. Similarly, Giddens’ structuration theory—where
social structures and individual agency recursively shape one another—offers parallels to the fluid autonomy
dynamics described here. See Anthony Giddens, The Constitution of Society: Outline of the Theory of
Structuration 1–28 (1984).
28The AI may even incorporate elements from the human user’s prior inputs and feedback verbatim into its outputs.
In such cases, its actions—shaped by its dynamic learning from and adaptive responses to the human user’s inputs
and feedback—might be better characterized as a sophisticated curation of the human user’s creativity, rather than
purely independent creation. See, e.g., Emily M. Bender et al., On the Dangers of Stochastic Parrots: Can Language
Models Be Too Big?, in Proc. of the 2021 ACM Conf. on Fairness, Accountability, & Transparency 610, 617
(2021) (arguing that language models are systems for “stochastically stitching together sequences of linguistic forms”
from their training data, rather than creating meaning). This raises the possibility that even an AI’s ostensibly
autonomous outputs could be considered functionally derivative. This is based on the definition of derivative works
as “based upon one or more preexisting works” through recasting, transformation, or adaptation. 17 U.S.C. § 101
(2018). It is crucial to note that this specific application enters a doctrinal gray zone. Unlike traditional derivative
works where a human author consciously creates a new work based on a pre-existing one, the AI’s outputs are
developed through probabilistic inference from historical interactions. Therefore, they clearly lack the mens rea, or
mental state, typically associated with copyright authorship. Moreover, under 17 U.S.C. § 106(2) (2018), an AI cannot
be recognized as an author. However, the spirit of the inference remains—functionally, an output of the AI may be
equivalent to a derivative work of its human user if its outputs are sufficiently based on its human user’s inputs.
29One might argue that technical solutions such as provenance-tracking or model-logging could resolve this ambiguity.
However, a growing body of literature demonstrates that such tools are fundamentally limited in the face of recursive,
It is the multi-step, recursively probabilistic processes that underlie agentic AI, coupled with its
dynamic and adaptive capabilities, that make it exceedingly difficult to predict or retroactively
infer contribution and control in human-agentic AI interactions—a challenge not posed to the
same extent by simpler generative tools or by hypothetical, fully autonomous systems where the
user is effectively irrelevant.
While the full legal implications of agentic AI are yet to be seen, preliminary evidence from
disputes involving less advanced autonomous systems already demonstrates how this crisis of
attribution is straining foundational legal doctrines. In the creative sphere, courts are confronting
the entanglement of human direction and machine execution. A 2023 ruling by the Beijing Inter-
net Court, for instance, granted copyright protection to an AI-generated image, reasoning that
the plaintiff’s detailed prompting and iterative adjustments constituted a sufficient “intellectual
investment.”30 Yet in recognizing the human’s role, the court’s analysis underscored the funda-
mental difficulty of delineating where the user’s creative guidance ended and the AI’s autonomous
synthesis began.
This tension is crystallized in the United States by the litigation over Théâtre D’opéra Spatial,
an AI-assisted image that won first prize at the 2022 Colorado State Fair. After the U.S. Copyright
Office refused to register the work, its creator sued, arguing that his 600-plus prompts and
substantive post-production edits supplied the necessary human authorship.31 The Copyright
Office countered that the final image was a product of “inextricably merged” human and machine
adaptive systems. They can record a sequence of inputs and outputs, but they cannot map the internal, transformative
process by which an agentic AI assimilates human feedback. This process renders outputs as statistically emergent
creations—akin to “stochastic parrots” that blend training data without retaining traceable provenance—rather than
as linear derivations of specific inputs. See, e.g., Bender et al., supra note 28, at 617. Moreover, in iterative co-creation,
each revision by either human or AI can overwrite or obscure prior contributions, making a clean “chain of custody”
practically impossible to maintain across complex workflows. See, e.g., F. Vinchon et al., Artificial Intelligence &
Creativity: A Manifesto for Collaboration, 57 J. Creative Behav. 472, 476 (2023) (describing a future of human-AI
“Co-cre-AI-tion” where the output is a “hybridization” of their efforts). Even if a perfect log existed, it could not
capture the conceptual origin of an idea, distinguishing the user’s guiding intent from the AI’s generative execution.
The “provenance” thus becomes a history of co-evolutionary entanglement, not a ledger of separable inputs. See
Iyad Rahwan et al., Machine Behaviour, 568 Nature 477, 483 (2019) (discussing the need to examine “feedback
loops between human influence on machine behaviour and machine influence on human behaviour simultaneously”
in complex hybrid systems).
30See Li Yunkai v. Liu Yuanchun, (2023) Jing 0491 Min Chu No. 11279 (Beijing Internet Ct. Nov. 27, 2023) (China).
31See Complaint, Allen v. Perlmutter, No. 1:23-cv-02377 (D. Colo. Sept. 5, 2023).
contributions, and because the AI’s contribution was more than de minimis and not under the
user’s direct control, the work as a whole was unregistrable.32
A parallel challenge is unfolding in software development, exemplified by the ongoing class-
action lawsuit against GitHub’s Copilot.33 While a copyright dispute, it prefigures the inventorship
dilemma: plaintiffs allege their code was unlawfully reproduced, while the defense hinges on the
argument that the AI’s synthesis process is so transformative that tracing any given output back
to specific training data—the very act of attribution—is practically impossible. In all these cases,
the core issue is the same: the generative process obscures provenance, making traditional tests of
origin and contribution increasingly tenuous.
Similar crises of attribution are evident in tort and administrative law, where the locus of
responsibility for AI-driven harm becomes fundamentally unmappable. The 2018 fatality involving
an Uber self-driving vehicle provides a stark illustration. Subsequent investigation revealed an
inextricable blend of the system’s design failure to classify a pedestrian and the human safety
driver’s negligence.34 Legal accountability, however, was ultimately deflected from the corporation
to the human operator, demonstrating a “moral crumple zone”35 where entangled causation defaults
to the most proximate human actor.
Moreover, the unmappability that creates such “crumple zones” in physical torts extends to
the erosion of procedural rights and the infliction of economic injury. In Houston Federation of
Teachers v. Houston Independent School District, a federal court found that using a proprietary “value-
added” algorithm to terminate teachers violated due process because its opaque and interdependent
calculations made it impossible for educators to meaningfully challenge a potentially career-ending
32See Letter from the U.S. Copyright Off. Rev. Bd., Re: Théâtre D’opéra Spatial 1 (Feb. 21, 2023), https://www.copyright.
gov/rulings-filings/review-board/docs/Theatre-Dopera-Spatial.pdf.
33See Doe v. GitHub, Inc., No. 4:22-cv-06823 (N.D. Cal. Nov. 3, 2022).
34See Nat’l Transp. Safety Bd., Collision Between Vehicle Controlled by Developmental Automated Driving
System and Pedestrian, NTSB/HAR-19/03, at 32–34 (2019).
35See Madeleine Clare Elish, Moral Crumple Zones: Cautionary Tales in Human–Robot Interaction, 5 Engaging Sci.
Tech. & Soc’y 40, 40 (2019) (coining the term “moral crumple zone” to describe how responsibility for a systemic
failure is often misattributed to a human operator who had limited control, much like a car’s crumple zone is
designed to absorb the force of an impact).
decision.36 On a systemic scale, Australia’s “Robodebt” scandal revealed a catastrophic failure of
administrative accountability, where a hybrid automated-and-human system issued hundreds of
thousands of unlawful welfare debt notices. A subsequent Royal Commission concluded that the
process was so entangled that officials could not reconstruct or justify how specific debts were
generated, making it impossible to trace responsibility for the widespread harm.37 In each of these
cases, the harm arose from a complex interplay of human and machine action, straining legal
frameworks designed to assign fault to a discrete, identifiable source.
Having established the core challenge of unmappability, both in theory and through its
emerging real-world manifestations, we now turn to its first major legal casualty: the doctrine of
authorship, which, like other frameworks we will examine, is predicated on the ability to attribute
creative acts to a specific, legally cognizable actor.
III. Authorship
Scholars have long questioned whether traditional copyright frameworks—built around the notion
of the human creator—can fully capture works generated by algorithmic processes.38 At the
heart of this debate lies a central question: When AI generates the intellectual content, who is
the author? And, flowing from that, who owns the copyright? Could it be the artist or writer who
commissioned the work, the AI service provider who built the system, the AI itself, or perhaps no
one at all?39
36See Houston Fed’n of Teachers, Local 2415 v. Houston Indep. Sch. Dist., 251 F. Supp. 3d 1168, 1179 (S.D. Tex. 2017).
37See Royal Comm’n into the Robodebt Scheme, Report 23–25 (2023) (Austl.).
38These frameworks have historically relied on a two-part test of access and substantial similarity to adjudicate
infringement. See, e.g., Arnstein v. Porter, 154 F.2d 464, 468 (2d Cir. 1946). For foundational scholarship on this
question, see, e.g., Pamela Samuelson, Allocating Ownership Rights in Computer-Generated Works, 47 U. Pitt. L.
Rev. 1185 (1985); Ryan Acosta, Artificial Intelligence and Authorship Rights, 17 Harv. J.L. & Tech. 589 (2002);
Peter Jaszi, Toward a Theory of Copyright: The Metamorphoses of ‘Authorship’, in Intellectual Property Law
and History 42 (Steven Wilf ed., 2017); Ryan Abbott, The Reasonable Robot: Artificial Intelligence and
the Law (2020).
39This inquiry is complemented by a new wave of litigation from rights-holders focused not on the user’s claim to
authorship, but on the entire lifecycle of AI development and services. This legal battle is being fought on two
fronts. The first concerns the legality of using copyrighted works for AI training. See, e.g., Complaint, The N.Y.
Times Co. v. Microsoft Corp., No. 1:23-cv-11195 (S.D.N.Y. Dec. 27, 2023) (news articles); Complaint, Richard Kadrey
v. Meta Platforms, Inc., No. 4:23-cv-03417 (N.D. Cal. July 7, 2023) (books); Complaint, Andersen v. Stability AI
This human-centric paradigm faces mounting theoretical challenges. Annmarie Bridy, for
example, challenges the entrenched assumption of uniquely human authorship by arguing that
creativity itself is inherently algorithmic.40 She illustrates that even what we typically consider
“human” creativity operates through rules and structured processes, suggesting that works pro-
duced autonomously by computers are less alien to our creative paradigms than conventional law
presumes. Her analysis underscores that, if the law is to remain relevant in an era increasingly
defined by AI, it must evolve beyond its narrow human-centric lens to accommodate the new
realities of machine-generated creative output.
However, current legal frameworks remain fundamentally anthropocentric, hinging on
whether a human has exercised meaningful control over AI-generated outputs—a benchmark that
the fluid autonomy of agentic AI complicates.41 This is exemplified by the U.S. Copyright Office’s
2023 policy, which affirms that AI-generated works lacking substantial human authorship cannot
be copyrighted, thereby creating significant ambiguity regarding protection and ownership,
particularly in cases of intertwined human and machine contributions.42 For instance, the Office declined to extend registration to the AI-generated images in the comic Zarya of the Dawn, holding that the human user’s prompts (e.g., “adjust lighting,” “make the tiger look more menacing”) were insufficiently creative to constitute authorship of those images, effectively treating the AI as a “tool” rather than a collaborator.43
Ltd., No. 3:23-cv-00201 (N.D. Cal. Jan. 13, 2023) (visual art); Complaint, Getty Images (US), Inc. v. Stability AI, Inc.,
No. 1:23-cv-0135 (D. Del. Feb. 3, 2023) (photographs); Complaint, Sony Music Ent. v. Suno, Inc., No. 1:24-cv-11483
(D. Mass. June 24, 2024) (sound recordings); see also Thomson Reuters Enter. Ctr. GmbH v. ROSS Intelligence Inc.,
No. 20-613-LPS, 2024 WL 569580 (D. Del. Feb. 12, 2024) (denying JMOL after jury verdict finding infringement from
use of copyrighted legal headnotes for AI training). The second targets the AI’s output directly, as exemplified by the
suit from major film studios against Midjourney, which frames the service as a “quintessential copyright free-rider
and a bottomless pit of plagiarism.” See Complaint at 2, Disney Enters., Inc. v. Midjourney, Inc., No. 2:25-cv-05275 (C.D.
Cal. June 11, 2025) (alleging direct and secondary copyright infringement based on both the unauthorized copying
of works to train the AI model and the subsequent generation of outputs substantially similar to iconic characters).
40See Annmarie Bridy, Coding Creativity: Copyright and the Artificially Intelligent Author, 2012 Stan. Tech. L. Rev.
5, ¶ 7.
41See Martin Zeilinger, Tactical Entanglements: AI Art, Creative Agency, and the Limits of Intellectual
Property (2021).
42 See Mark A. Lemley, How Generative AI Turns Copyright Law on Its Head 2 (Stanford Pub. L. Working Paper
No. 38344, 2023), https://ssrn.com/abstract=4517702; see also Copyright Registration Guidance: Works Containing
Material Generated by Artificial Intelligence, 88 Fed. Reg. 16,190, 16,192 (Mar. 16, 2023) [hereinafter Copyright
Registration Guidance].
43See Letter from Robert J. Kasunic, Assoc. Register of Copyrights & Dir. of Registration Pol’y & Prac., U.S. Copyright
Office, to Van Lindberg, Re: Zarya of the Dawn (VAu001480196) (Feb. 21, 2023).
In stark contrast, Chinese courts have taken a more expansive view.44 As exemplified by the
Shenzhen Tencent decision, the court granted copyright protection to an AI-generated news article,
emphasizing the human involvement in curating training data, selecting input variables, and
setting system parameters—activities that, while arguably less direct than the prompting in Zarya,
were deemed sufficient to establish authorship under Chinese law. This divergence highlights a
fundamental tension: Is direct, expressive input (like detailed prompting) the sine qua non of
authorship, or can more indirect, preparatory contributions suffice?
Critically, these preceding debates—concerning authorless versus authored works45—and
proposed solutions—such as hybrid attribution models,46 two-tiered protection systems,47 or
Gervais’s theory of ‘originality causation’48—all assume the ability to parse the contributions of
human and AI. For instance, if human and AI contributions could be clearly delineated, a work
could potentially be recognized as a collaborative creation.49 This might involve crediting the
human author for creative direction and either acknowledging the AI’s role in a new category
(e.g., “AI-assisted creation”) or attributing the AI-generated portions to the human by extension.
Alternatively, dynamic royalty schemes could be adopted: instead of asking “who is the author?”,
the focus could shift to “how much is each an author? Who should benefit, and how much?”.
A song generated by AI, for example, could trigger a royalty allocation among the human who
44Chinese courts have offered contrasting perspectives on AI authorship. Compare Beijing Film Law Firm v. Beijing
Baidu Netcom Sci. & Tech. Co., (2019) Jing 0491 Min Chu No. 239 (Beijing Internet Ct. Apr. 25, 2019) (China) (holding
that only works created by natural persons qualify for copyright protection), with Shenzhen Tencent Comput. Sys.
Co. v. Shanghai Yingxun Tech. Co., (2019) Yue 0305 Min Chu No. 14010 (Shenzhen Nanshan Dist. People’s Ct.
Dec. 24, 2019) (China) (granting copyright to an AI-generated article based on human involvement in selecting and
arranging inputs). For a detailed discussion, see Yin Wan & Hui Lu, Copyright Protection for AI-Generated Outputs:
The Experience from China, 42 Comput. L. & Sec. Rev. 105581 (2021).
45See Lemley, supra note 42, at 1.
46See Ryan Abbott, Artificial Intelligence, Big Data and Intellectual Property: Protecting Computer Generated Works in
the United Kingdom, in Research Handbook on Intellectual Property and Digital Technologies 322 (Tanya
Aplin ed., 2020).
47See Haochen Sun, Redesigning Copyright Protection in the Era of Artificial Intelligence, 107 Iowa L. Rev. 1213 (2021).
48See Gervais, supra note 19, at 2085.
49Whether this is advisable is another question, with arguments falling on both sides. Compare James Grimmelmann,
There’s No Such Thing as a Computer-Authored Work—and It’s a Good Thing, Too, 39 Colum. J.L. & Arts 403 (2015)
(arguing against AI authorship), with Sun, supra note 47 (proposing sui generis rights for AI-generated works with
human inputs).
commissioned it, the AI’s developer, and a fund for creators whose works trained the AI.50 These
royalties could be adjusted based on relative contributions: a human who heavily edited the
AI’s output would receive a larger share, while a largely AI-generated work might favor the
developer. Another option involves considering sui generis rights—limited protections weaker
than full human authorship but stronger than the public domain.51
Without the ability to reliably parse contributions, however, these questions, debates, and
proposed solutions become largely moot. While attributing distinct human and AI inputs may
remain feasible in some straightforward settings—thus permitting conventional legal standards to
apply—the real challenge arises with interactions governed by fluid autonomy, where recursive
feedback loops make contributions inextricably entangled.
Specifically, a framework premised on distinguishing the origin of creative elements faces two
intractable problems: (1) ensuring fair and consistent treatment across cases where contributions
are separable versus those where they are inseparable, and (2) establishing reliable criteria for
determining whether contributions can even be parsed in the first place. Consider two classes
of works resulting from human-AI interaction. Works in the first (separable) class allow specific
creative elements to be reasonably attributed to either the human or the AI. For example, the
human might have written distinct sections while the AI generated others, or clear logs might
delineate contributions. In the second (inseparable) class, the interaction, likely involving recursive
50The EU Data Act addresses data access and sharing, with provisions on fair compensation for data generation, but
does not directly address AI training or output royalties. See Regulation (EU) 2023/2854 of the European Parliament
and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data, 2023 O.J. (L 2023/2854)
1. The EU AI Act regulates AI systems, including data governance, but similarly lacks specific provisions on output
royalties, though its broader implications for copyright are subject to analysis. See Regulation (EU) 2024/1689 of the
European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence
(Artificial Intelligence Act), 2024 O.J. (L 1689) 1; see also, e.g., João Pedro Quintais, Generative AI, Copyright and
the AI Act, 56 Comput. L. & Sec. Rev. 106107 (2025).
51Unlike proportional royalties, which operate within existing copyright frameworks to distribute revenue, sui generis
rights create a new framework with its own rules for protection, duration, and scope. Existing sui generis regimes
like the EU Database Directive, protecting non-creative investments (e.g., data compilation) for 15 years, may offer
a precedent for AI-generated works. See Directive 96/9/EC of the European Parliament and of the Council of 11
March 1996 on the legal protection of databases, 1996 O.J. (L 77) 20; see also J. H. Reichman & Pamela Samuelson,
Intellectual Property Rights in Data, 50 Vand. L. Rev. 49, 52 (1997). This approach avoids the need to determine
“authorship” in the traditional sense, focusing instead on the outcome (the AI-generated work) and granting limited
rights based on technical criteria (e.g., evidence of AI synthesis) rather than human creative input. A key advantage
is sidestepping the attribution problem, but a risk is potentially incentivizing a flood of AI-generated content,
potentially impacting the value of human-created works.
feedback loops, results in an inextricably blended work—a fusion where the origins of specific
ideas, phrasings, or creative choices are fundamentally entangled and untraceable.52
This division yields a dilemma. On the one hand, a framework based on the separation of
human and AI contributions (e.g., granting full copyright only to human-generated portions)
immediately fails when applied to the inseparable class, as the necessary distinctions cannot
be made. On the other hand, a framework suitable for inseparable works must operate without
assessing the extent of specific contributions. Such a framework, if applied to the separable class,
could not account for variations in human versus AI input, treating works with potentially vastly
different contribution levels identically. It is impossible to create a single standard that both
functions for inseparable works and appropriately differentiates between separable works based
on contribution levels.
Suppose instead we developed two distinct standards, one tailored for separable works and
another for inseparable ones (or we crafted an exception for inseparable works in our current
frameworks that rely on separable contributions). The challenge then shifts to reliably determining
whether a specific work belongs to the ‘separable’ or ‘inseparable’ class. Along the continuum
of human-AI interactions, making this determination—deciding whether contributions are truly
separable or inextricably fused—is likely to be subjective and prone to inconsistency. How should
works be treated where some but not all elements might be attributable? Does the presence of any
inseparable element necessitate classifying the entire work as inseparable? If so, a vast majority of
works involving recursive agentic AI interaction might fall into the inseparable category, rendering
the ‘separable’ standard largely irrelevant in practice. Moreover, how could we ensure that these
two distinct standards yield equivalent results? Without such equivalence, works reflecting similar
human effort could receive different legal treatment based merely on the traceability of the creative
52This challenge is analogous to the problem courts faced in separating protectable expression from unprotectable
ideas in computer software, which led to the creation of the “Abstraction-Filtration-Comparison” test. See Computer
Assocs. Int’l, Inc. v. Altai, Inc., 982 F.2d 693, 706–11 (2d Cir. 1992). The core argument of this paper is that the fluid autonomy of agentic AI can
make the “filtration” step—disentangling the human author’s original expression from the AI’s unprotectable (or
functionally autonomous) processes—practically impossible. More recent litigation continues to highlight the
difficulty of applying traditional copyright principles to complex software interfaces. See, e.g., Oracle Am., Inc. v.
Google LLC, 886 F.3d 1179 (Fed. Cir. 2018), rev’d on other grounds, 141 S. Ct. 1183 (2021).
process and not its substance.
These challenges are greatly amplified by the dynamic and adaptive facets of fluid autonomy,
which manifest as recursive adaptation53 in human-agentic AI interactions. Consider an AI graphic
designer agent that evolves its artistic style to align with a human client’s historical preferences,
absorbing the client’s feedback and fine-tuning its outputs over successive interactions.
Suppose, also, that the human client evolves her style to match the AI’s outputs, learning from the
AI.54 This creates a causal entanglement where neither the human user nor the AI fully determines
the creative trajectory; the AI system itself becomes an active participant in the evolution of
the human designer’s style, effectively curating prior human-AI interactions. In such cases, how
should rights be apportioned?
One might argue that an AI is merely a tool, incapable of autonomously undertaking either
derivative or transformative work. On this view, all of the AI’s outputs could potentially qualify
as authored works by the user, since everything produced by a tool (like a word processor) is
typically considered a reflection of its user’s input. Applied to agentic AI, this position would
imply that outputs generated by an agentic AI that adapted to its human users’ inputs, guidance,
and previously authored works may likewise be considered the user’s authored works.
However, now suppose the AI is capable of autonomous output. Further suppose, for instance,
that this agent generates output meeting Feist’s “modicum of creativity” standard55 by internalizing
and recombining its human user’s prior copyrighted works. Under current U.S. law, an AI cannot
itself create derivative works, as only humans hold that capacity under 17 U.S.C. § 106(2) (2018).
However, the human user’s iterative feedback and curation—even if insufficient on their own to
meet the Feist standard for originality—might arguably establish a copyright claim to the AI’s
53Recursive adaptation occurs when AI systems adapt their creative processes based on human user feedback, and human
users adapt theirs based on AI outputs. This produces a creative ouroboros—a self-referential loop where human
and machine contributions mutually reconstitute each other across iterative cycles.
54For instance, linguistic alignment, also known as convergence, is a well-established concept in psycholinguistics,
where conversational partners tend to mimic each other’s language use, including word choices, phrasings, and
syntactic structures. See Martin J. Pickering & Simon Garrod, Toward a Mechanistic Psychology of Dialogue, 27
Behav. & Brain Sci. 169 (2004). In human-AI interactions, this concept implies users may adapt their language
over time when interacting with artificial agents. See, e.g., F. Vinchon et al., Artificial Intelligence & Creativity: A
Manifesto for Collaboration, 57 J. Creative Behav. 472 (2023).
55Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 345 (1991).
output as a derivative work based on the user’s underlying contributions.56 This is because the
AI’s output could be seen as functionally derivative57 of the user’s prior copyrighted works, which
guided the AI’s adaptation.
This creates a paradox: if the AI’s output is functionally derivative of the user’s prior inputs,
the human user may claim authorship even if the AI operated autonomously and the user’s
specific contributions during the interaction did not meet traditional authorship requirements.
That is, even if the human user did not meet the requirements of the U.S. Copyright Office’s 2023
guidance,58 and while the AI itself lacks authorship rights, its output might still be subject to a
claim of human authorship asserted by the user based on derivative rights.59 This tension exists
because while AI legally cannot create derivative works under 17 U.S.C. § 106(2) (2018), the human
user might leverage the functional relationship between their prior works and the AI’s output to
establish their claim to the new work.
Moreover, even if the AI’s use of its human user’s inputs is functionally transformative,60
akin to a collage artist transforming source material,61 its outputs may remain authorless yet be
56A doctrinal frontier with no clear precedent.
57The term ‘functionally’ acknowledges that an AI cannot legally create derivative works under 17 U.S.C. § 106(2)
(2018) as it is not recognized as an author. However, the AI’s outputs may practically serve as derivatives of human
creative inputs. This theory is bolstered by ongoing litigation exploring the technical capabilities of AI models.
Allegations that models can memorize and reproduce their training data with high fidelity lend significant weight to
the argument that they could similarly create outputs that are functionally derivative of a user’s specific inputs and
prior works, blurring the line between autonomous generation and sophisticated recombination. See, e.g., Complaint
at 2, Richard Kadrey v. Meta Platforms, Inc., No. 4:23-cv-03417 (N.D. Cal. July 7, 2023) (alleging that Meta’s LLaMA
models were trained on copyrighted books and are “able to output summaries and analyses of the books, a clear
sign that the books were ingested and comprehended by the models”).
58AI outputs “determined primarily by the AI” lack protection, but human authors may claim rights if they “exercise
creative control over the AI’s output and contribute original expression” through iterative refinements. Copyright
Registration Guidance, supra note 42, at 16,192.
59These issues are distinct from debates surrounding copyright and traditional generative AI, which primarily focus
on whether the AI’s output is substantially similar to its training data and whether the use of that data constitutes
reproduction–essentially, whether the output is a functional derivative of the training data. See Weijie Huang &
Xiaoyan Chen, Does Generative AI Copy? Rethinking the Right to Copy Under Copyright Law, 56 Comput. L. & Sec.
Rev. 106100 (2025). Agentic AI, in contrast, raises the question of whether its output is a functional derivative of its
user’s inputs.
60An AI cannot legally create transformative works under 17 U.S.C. § 106(2) (2018).
61The concept of transformative use was famously articulated by the Supreme Court. See Campbell v. Acuff-Rose
Music, Inc., 510 U.S. 569, 579 (1994). For its application in a visual arts context, see Cariou v. Prince, 714 F.3d 694, 706–09
(2d Cir. 2013). While Cariou dealt with human appropriation of existing photographs, the underlying principle–that
significant transformation of pre-existing material can create new copyrightable expression–is relevant to the AI
context.
eligible for derivative authorship or copyright protection by the human user. This is because,
if the AI’s processes reflect the user’s prior inputs and guidance, the user may be positioned to
claim rights over the resulting outputs, even if the “creative spark”62 originated not from the
human but from the AI’s autonomous generation. For example, the key motif in an output from
an AI-based graphic designer system might have emerged entirely from the AI itself. Yet that
resulting work could still be characterized as authorless (since AI cannot legally be its own author)
and simultaneously subject to an authorship claim by the human user, provided the motif emerged
through the AI’s assimilation of the user’s style and prior works.
Thus, the fluid autonomy of agentic AI fundamentally challenges current human-machine
authorship doctrine. In addition, it destabilizes at least three other foundational doctrines.
First, the work-made-for-hire doctrine, codified in 17 U.S.C. § 201(b) (2018), vests authorship
in employers for works created by employees “within the scope of employment.”63 This doctrine,
however, presupposes a human creator—a premise shared across U.S. and major European legal
systems, despite their different approaches to initial ownership.64 If an agentic AI operates with
significant fluid autonomy (e.g., generating marketing copy without direct human oversight),
courts may reject work-made-for-hire claims because the AI is neither an employee nor a legally
recognized “author.” This creates a gap: outputs generated by AI under broad corporate directives
(e.g., “create a branding campaign”) may lack clear ownership, as no human employee directly
“created” the work.65 Yet, as established earlier, agentic AI possesses only partial autonomy.
62“Creative spark” denotes the originating creative idea or expressive choice that imbues a work with originality.
6317 U.S.C. § 201(b) (2018); Restatement (Third) of Agency § 7.07 (Am. L. Inst. 2006); see also Cmty. for Creative
Non-Violence v. Reid, 490 U.S. 730, 751 (1989) (establishing factors to determine employment status for work-made-
for-hire).
64The U.S. ‘work-made-for-hire’ doctrine automatically vests copyright ownership in the employer. In contrast,
many continental European jurisdictions, rooted in the concept of droit d’auteur, initially vest copyright in the
employee-creator. See, e.g., Urheberrechtsgesetz [UrhG] [Copyright Act], Sept. 9, 1965, BGBl. I at 1273, § 43 (Ger.);
Code de la propriété intellectuelle [CPI] [Intellectual Property Code] art. L113-9 (Fr.). The UK’s statutory approach
differs; under the Copyright, Designs and Patents Act 1988, c. 48, § 11(2) (UK), the employer is generally the
first owner of copyright. For a comparative discussion, see Dániel Legeza, Employer as Copyright Owner from
a European Perspective (paper presented at the SERCI Annual Congress, 2015). All these doctrines presuppose a
human employee as the creator.
65See, e.g., Mark A. Lemley & Bryan Casey, Fair Learning, 99 Tex. L. Rev. 743, 775 (2021) (acknowledging the
challenges AI-generated works pose to traditional copyright doctrines).
Therefore, if a human employee provides sufficient creative direction or control over the AI’s
process, and the work is created within the scope of their employment, the work-made-for-hire
doctrine could potentially still apply. The challenge lies in determining when human involvement
meets the threshold for “sufficient creative direction,” given the AI’s autonomous contributions.
Courts assessing creative control often examine who exercised “superintendence” over the work’s
creation, a standard difficult to apply when a non-human agent contributes significantly.66
Second, joint authorship standards, which require an intent to merge contributions into a
unitary whole,67 are challenged when one potential “author” (the AI) lacks legal personhood and
the requisite intent.68 Even among human collaborators, courts acknowledge the difficulty of
proving the subjective “intent to be co-authors,” often resorting to examining “subsequent conduct”
as imperfect evidence of a prior state of mind.69 Agentic AI, however, transforms this problem
that is merely evidentiary among human collaborators—the difficulty of inferring a human’s
mental state—into one that is fundamentally ontological. An AI, lacking legal personhood and
consciousness, has no “prior state of mind” to probe, no intent to infer. Even a perfect log of
the recursive interaction fails to resolve the issue, as it documents a process of co-evolutionary
entanglement, not a meeting of minds. The evidentiary framework itself collapses, rendering the
joint authorship doctrine fundamentally inapplicable.
Third, in jurisdictions recognizing them, moral rights—such as the right to attribution and the
right to integrity of the work—are inherently tied to the human author’s personal connection to
their creation.70 Agentic AI, lacking legal personhood, cannot hold moral rights. The recursive
66See Aalmuhammed v. Lee, 202 F.3d 1227, 1234–35 (9th Cir. 2000).
67See, e.g., Childress v. Taylor, 945 F.2d 500, 505–06 (2d Cir. 1991).
68Joint authorship in European copyright law also generally requires a collaborative effort and a shared intention
to create a unified work. See, e.g., Urheberrechtsgesetz [UrhG] [Copyright Act], § 8 (Ger.); Code de la propriété
intellectuelle [CPI] [Intellectual Property Code] art. L113-2 (Fr.); Copyright, Designs and Patents Act 1988, c. 48, §
10 (UK).
69As the Second Circuit noted in Childress, while the intent “at the time the writing is done” remains the “touchstone,”
... “subsequent conduct is normally probative of a prior state of mind.” Childress v. Taylor, 945 F.2d 500, 509 (2d Cir.
1991).
70Moral rights are a cornerstone of copyright law in many European jurisdictions, often stemming from the Berne
Convention for the Protection of Literary and Artistic Works art. 6bis, Sept. 9, 1886, as revised at Stockholm, July
14, 1967, 25 U.S.T. 1341, 828 U.N.T.S. 221. These rights, typically including the right of attribution and the right
of integrity, are generally considered inalienable. See, e.g., Urheberrechtsgesetz [UrhG] [Copyright Act], §§ 12-14
interplay between human and AI, however, complicates the protection of these rights for the
human user. When an AI significantly contributes to a work by evolving its style based on the
user’s prior inputs, the resulting creation becomes a blend of human and machine agency. In these
cases, attributing the work solely to the human user becomes difficult, especially when the AI’s
autonomous contributions are substantial. Furthermore, if the AI, through its fluid autonomy,
modifies the work in ways that diverge from the human user’s original intent or artistic vision, the
user’s right to the integrity of the work may be challenged.71 Unlike traditional scenarios where
moral rights protect against derogatory treatment by other humans, here the AI—employed by
the human user—autonomously alters the work, reflecting a novel conflict between user control
and AI agency. This same crisis of attribution, which destabilizes the human-centric model of
authorship, creates parallel challenges in the domain of patent law, to which we now turn, where
the equally foundational concept of a human ‘conceiver’ is put under similar strain.
IV. Inventorship
The issues challenging authorship frameworks also arise in the context of inventorship. A case
in point is the DABUS litigation, which involved an AI system that generated novel inventions.
Patent applications naming DABUS as the inventor—directly challenging the requirement of a
human conceiver—triggered legal battles worldwide. Thus far, patent offices and courts in major
jurisdictions (U.S., U.K., EU) have rejected AI inventorship, insisting that inventors must be natural
persons.72
(Ger.); Code de la propriété intellectuelle [CPI] [Intellectual Property Code] art. L121-1 (Fr.).
71For instance, an AI literary agent might autonomously revise a manuscript to emphasize themes of algorithmic
bias—a perspective the human author never explicitly endorsed but which emerged from the AI’s analysis of their
prior works on technology ethics. While the AI’s alterations could enhance the work’s social relevance, they
simultaneously undermine the author’s right to control the expression of their personal worldview.
72See, e.g., Thaler v. Vidal, 43 F.4th 1207, 1213 (Fed. Cir. 2022) (affirming the USPTO’s rejection); Decision on Petition,
In re Application of Stephen L. Thaler, Application No. 16/524,350 (U.S. Patent & Trademark Office Apr. 22, 2020);
Technical Bd. of Appeal, Decision J 8/20 (Dec. 21, 2021) (EPO); Thaler v. The Comptroller-General of Patents, Designs
and Trade Marks, [2021] EWCA Civ 1374 (Eng.); Comm’r of Patents v. Thaler, (2022) 167 IPR 25 (FCAFC) (overturning
Thaler v. Comm’r of Patents (2021) 161 IPR 245 (FCA)). South Africa, a non-examining jurisdiction, granted a patent
naming DABUS as inventor. See ZA Patent No. 2021/03242 (granted July 28, 2021).
The legal questions in the DABUS case were relatively clear-cut only because no human
participated in the inventive process. How might the outcome have differed if a human had
played some role, however minor, in the ideation or development? One can imagine a continuum
from no human participation to solely human participation, with AI systems potentially being
fine-tuned or development processes adjusted to facilitate human-AI partnerships anywhere along
that spectrum. At what point along this continuum would the legal standard for inventorship be
met?73 And critically, would contributions even be separable at that juncture, making a standard
based on contribution levels practicable?74
Under U.S. patent law, inventorship requires both conception (“the complete performance
of the mental part of the inventive act”) and reduction to practice (embodying the invention
in a tangible form).75 Courts have long held that only humans can conceive inventions,
meaning only natural persons can be legally recognized as inventors.76 Agentic AI, however,
may autonomously ‘conceive’—or perhaps more accurately, functionally conceive—by gener-
ating novel solutions that otherwise meet patentability criteria such as non-obviousness and
utility. The use of agentic AI also raises questions about whether the non-obviousness standard
under 35 U.S.C. § 103 (2018) is met and, more fundamentally, about the meaning of obviousness in the context of autonomous AI. Given
an innovation, if an AI could generate it when provided solely with prior information and
overarching guidance, does this imply the innovation is obvious? If so, this standard arguably
should also apply to human-generated innovations. Specifically, an innovation might be deemed
obvious if an AI could reasonably generate it without specific human guidance, even if it was
actually created by a human and appears non-obvious to human experts.
For example, an AI drug discovery system might hypothesize and simulate new molecu-
lar structures addressing a target disease mechanism—a process traditionally constituting legal
73For example, contrast the varying decisions of the Chinese courts as discussed earlier, albeit in authorship, with the
DABUS case. See supra note 44.
74I.e., would we be able to measure contributions with sufficient accuracy at that point for such a standard to be
practicable?
75Burroughs Wellcome Co. v. Barr Labs., Inc., 40 F.3d 1223, 1227–28 (Fed. Cir. 1994).
76The European Patent Convention (EPC) also requires that an inventor be a natural person. See Convention on the
Grant of European Patents r. 19(1), Oct. 5, 1973, 1065 U.N.T.S. 199 [hereinafter EPC].
“conception.” For an invention from such a system, maintaining the human-only conception
requirement depends critically on identifying the extent to which it draws upon its human user’s
inputs and feedback. As seen in the DABUS case, if the AI performed the core conception, the
invention might lack a legally valid conceiver, thereby failing a fundamental requirement for
patentability under current law. However, in scenarios where the human and AI ‘align’ through
the fluid autonomy of the system, the AI’s recursive adjustments based on human inputs and
feedback make it unclear whether the conception originated with the human or the AI, thus
obscuring who performed the crucial “mental part of the inventive act.”
The challenge extends to the second prong of inventorship: reduction to practice. This requires
either physically embodying the invention and demonstrating its utility (actual reduction to
practice) or filing a patent application with a description sufficient to enable a PHOSITA to make
and use the invention (constructive reduction to practice) under 35 U.S.C. § 112(a) (2018). Agentic
AI complicates both pathways.
For actual reduction to practice, AI systems integrated with robotics or simulation tools can
likely perform the necessary physical steps or virtual testing autonomously. An AI might design,
synthesize, and test a novel compound without direct human intervention in each step. However,
if the AI executes these tasks through its fluid autonomy—blending its own learned strategies with
human inputs and autonomous decision-making—attributing the successful reduction to practice
becomes legally tenuous. Whose actions ultimately demonstrated the invention worked for its
intended purpose when the process involves this blend of human guidance and AI execution?
The hurdles are perhaps even higher for constructive reduction to practice. While agentic AI can
generate detailed technical descriptions suitable for a patent draft, satisfying the enablement and
written description requirements of § 112(a) is fraught with difficulty. Enablement demands that the
disclosure teach a PHOSITA how to make and use the invention without undue experimentation.
If the AI’s inventive process relies on logic opaque to humans, its generated description might
detail the outcome but fail to adequately explain the underlying principles or non-obvious steps
required for replication by a human expert, potentially rendering the disclosure non-enabling.77
In addition, the human user may be crucial in examining the AI’s outputs to ensure that the
invention is sufficiently detailed for another human. Could such iterative feedback constitute
adequate guidance to claim human inventorship?
Finally, the written description requirement necessitates showing the human inventor pos-
sessed the claimed invention at the time of filing. When an AI conceives the core idea and drafts
the description, demonstrating genuine human possession—beyond merely receiving, understand-
ing, and transmitting the AI’s output—becomes problematic. Did the human truly possess the
invention in the legally required sense if the complete mental conception originated significantly
with the AI, even if the human reviewed and filed the AI-generated description? This challenges
the fundamental link between the human mind and the claimed subject matter required by the
written description doctrine.
Moreover, similar to the challenges identified in authorship, the doctrine of joint inventorship
faces distinct and novel difficulties when confronted with agentic AI. Under current U.S. patent
law, joint inventors must each contribute significantly to the invention’s conception—“the complete
performance of the mental part of the inventive act”—and typically engage in some form of
collaborative activity.78 Agentic AI disrupts this framework by introducing a non-human entity
capable of independently generating inventive concepts, yet incapable of forming the requisite
77The “black box” nature of many advanced AI models is a well-documented technical challenge. See, e.g., Finale
Doshi-Velez & Been Kim, Towards A Rigorous Science of Interpretable Machine Learning, arXiv:1702.08608 (Mar. 2,
2017), https://doi.org/10.48550/arXiv.1702.08608 (proposing a taxonomy for the rigorous evaluation of machine
learning interpretability and highlighting the lack of consensus on the topic). This opacity creates a direct conflict
with patent law’s disclosure requirements, which are heightened for inventions in “unpredictable arts” like biotech-
nology. Courts require a disclosure sufficient to enable a person of ordinary skill to practice the invention without
“undue experimentation.” See In re Wands, 858 F.2d 731, 737 (Fed. Cir. 1988) (listing factors for determining undue
experimentation). An AI-generated invention whose logic cannot be explained may fail this test, much as a patent can
be invalidated for failing to disclose the necessary software in a computer-implemented invention. See N. Telecom, Inc. v.
Datapoint Corp., 908 F.2d 931, 941–43 (Fed. Cir. 1990). Similarly, the written description requirement demands that
the inventor was in possession of the claimed invention, a standard that is particularly stringent in unpredictable
fields. See Ariad Pharms., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1351 (Fed. Cir. 2010) (en banc).
78See Burroughs Wellcome Co. v. Barr Labs., Inc., 40 F.3d 1223, 1227–28 (Fed. Cir. 1994) (defining conception); Ethicon,
Inc. v. U.S. Surgical Corp., 135 F.3d 1456, 1460 (Fed. Cir. 1998) (requiring each joint inventor to contribute to conception);
see also Kimberly-Clark Corp. v. Procter & Gamble Distrib. Co., 973 F.2d 911, 917 (Fed. Cir. 1992) (indicating joint
inventors usually collaborate or show connection). While the standard for collaborative intent in U.S. patent law
may differ from the copyright standard articulated in Childress v. Taylor, 945 F.2d 500 (2d Cir. 1991), the requirement
for some joint effort remains. European frameworks generally concur. See EPC, supra note 76, art. 60.
intent or holding legal status as an inventor.
Consider the prior example from drug discovery: An AI system, guided by human researchers,
autonomously identifies a novel molecular structure constituting the core inventive concept. The
AI’s contribution meets technical criteria (novelty, utility), but it cannot be named an inventor.
Can the human researchers be named? If a single researcher merely provided high-level objectives,
their contribution might fail the conception standard. If multiple researchers provided detailed
specifications and iterative feedback, their collective contribution seems stronger, yet they still may
not have conceived the specific, critical insight generated by the AI. This presents a dilemma: How
should inventorship be determined? If the AI is viewed simply as a sophisticated tool, perhaps
the human researcher(s) should receive full inventorship credit, regardless of whether their
contribution met traditional conception standards for the entire invention. If the principle from
the DABUS litigation is applied strictly to the conception of the core inventive step, then perhaps
no valid human inventor exists for that crucial AI-generated insight, potentially jeopardizing
patentability even with significant human involvement. The challenge is compounded by the AI’s
fluid autonomy: was its critical insight truly independent, or was it functionally derived from the
dynamic and adaptive feedback loops of prior human inputs? If traceable, did the insight arise
primarily from the AI’s adaptations to one specific researcher’s inputs, or did it reflect adaptations
to all users more broadly? The answer could have implications for the extent of inventorship
accorded to individual researchers. These questions, and this very uncertainty, underscore the
difficulty in applying traditional conception standards to joint human and agentic AI inventions.
The crux of the issue, similar to authorship, arises from applying a doctrine predicated on
human conception to human-AI co-creative processes where roles become deeply entangled.
Assessing the legal significance of contributions is profoundly challenging when human inputs
and AI adaptations recursively shape each other, making separation difficult or impossible. With
joint inventorship, this challenge is further compounded: the traditional task of delineating
contributions among multiple human inventors—itself often complex—must now navigate the
added complexities of a recursively adaptive AI that may respond differentially to various human
collaborators, further blurring the lines of contribution. The unmappability that confounds the
allocation of creative rights in authorship and inventorship proves just as disruptive when the law
must allocate responsibility for harms. We therefore turn from the challenges facing intellectual
property to the parallel crisis emerging in liability frameworks.
V. Liability
The autonomy of AI systems has long raised profound legal and ethical challenges.79 These
challenges are not monolithic; they vary significantly depending on the AI’s degree of autonomy
and the specific context of its deployment. When users cannot reasonably foresee or interpret an
AI’s actions—a problem exacerbated by the “black box” nature of modern systems80—traditional
liability frameworks falter. How can users provide informed consent to autonomous actions they
cannot fully comprehend?81 And how do we assign responsibility across entangled causal chains
when harms arise from recursive human-AI interactions?
With traditional generative AI, liability frameworks largely adhere to a user-centric model.82
Because the user maintains substantial control over outputs through iterative prompting and
curation, legal responsibility typically falls on the human operator. For example, if a user employs
ChatGPT to draft a legally binding contract that subsequently contains errors, courts would
likely hold the user—not the AI or its developer—liable. The AI, in this context, is analogous
to a sophisticated tool, like a word processor or a spreadsheet program, where the user directs
the functionality and bears responsibility for the final product.83 This approach hinges on the
79See Peter M. Asaro, A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics, in Robot Ethics:
The Ethical and Social Implications of Robotics 169, 171 (Patrick Lin et al. eds., 2011).
80See Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information
3–4 (2015) (describing the opacity of modern algorithmic systems).
81See Brent Mittelstadt et al., The Ethics of Algorithms: Mapping the Debate, 3 Big Data & Soc’y 1, 7 (2016).
82Grounded in principles articulated in the Restatement (Third) of Torts: Products Liability § 2 (Am. L. Inst.
1998), which clarifies that when a product functions as intended, but harm results from user misuse or modification,
liability typically falls on the user.
83This aligns with judicial precedent, such as Warner Bros. Records, Inc. v. Payne, No. W-05-CA-311, 2006 WL 2844410,
at 4–5 (W.D. Tex. Aug. 30, 2006), where users were held liable for copyright infringement resulting from their use of
file-sharing software, a tool similarly under their direct control.
assumption that the user possesses both foreseeability of potential harms and the capacity to
intervene, given the reactive nature of traditional generative AI.
At the opposite end of the spectrum lie fully autonomous AI systems, often conceptualized in
the context of robotics. These systems are designed for independent decision-making and action,
operating without direct human oversight or real-time intervention. As these AI act independently
with no direct human causation linking a specific action to a human decision, establishing legal
liability for any resulting harm becomes very challenging.84
Complete autonomy introduces what Andreas Matthias terms the “responsibility gap.”85 This
gap arises when an AI’s actions extend beyond the foreseeable scope of its intended use or
design, as determined by its manufacturer or developer. In such cases, assigning responsibility to
the manufacturer becomes problematic because the AI’s behavior is, by definition, not directly
attributable to the manufacturer’s specific instructions or programming. Since the AI’s actions
are autonomous, the user is not directly responsible. If neither the manufacturer nor the user is
responsible, a gap exists.86
Alternatively, when manufacturers or developers can foresee the general type of harm (e.g., a
car accident), human actors—operators, supervisors, or even bystanders—may be unfairly held
accountable for the consequences of AI decisions over which they had little or no practical control.87
A classic example is a self-driving car crash where the human “passenger” is blamed, despite
having no operational control over the vehicle’s autonomous navigation.88 In such scenarios, the
84Recognizing this principle, the EU AI Act imposes strict obligations on developers of “high-risk” AI systems,
requiring extensive risk assessment, data governance, and human oversight to mitigate the potential for unforeseen
harms. See Regulation (EU) 2024/1689, supra note 50, arts. 14–15.
85See Andreas Matthias, The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata, 6
Ethics & Info. Tech. 175, 177 (2004); see also Filippo Santoni de Sio & Giulio Mecacci, Four Responsibility Gaps
with Artificial Intelligence: Why They Matter and How to Address Them, 34 Phil. & Tech. 1057, 1060 (2021).
86For a contrasting perspective, see Maarten Herbosch, To Err Is Human: Managing the Risks of Contracting AI
Systems, 56 Comput. L. & Sec. Rev. 106110, 106116 (2025), who argues that traditional contract law frameworks,
particularly the doctrine of unilateral mistake, are sufficiently flexible to address the liability challenges in contracting
posed by AI system autonomy.
87See Elish, supra note 35, at 44.
88This dynamic is evident in cases involving Tesla’s Autopilot system, such as In re Tesla, Inc. Sec. Litig., 477 F. Supp. 3d
903, 921–22 (N.D. Cal. 2020), where drivers faced scrutiny and potential liability for accidents, even when evidence
suggested limitations in the autonomous driving technology.
intended purpose and use of the AI are well-defined, but there is a misattribution of responsibility,
driven by the legal imperative to assign responsibility somewhere.
These three contrasting scenarios—the user-centric model for generative AI, the responsibility
gap, and the moral crumple zone—highlight two critical loci of control underpinning current
liability frameworks. The first is the degree of user control over the AI’s output, which is closely
tied to the concept of AI agency: higher AI agency generally implies lower user control, and vice
versa. The second locus of control concerns the manufacturer’s (or developer’s) foreseeability of the
AI’s use and potential harms. If an AI is designed for a specific, narrow purpose (e.g., a medical
diagnostic tool), the manufacturer has greater foreseeability and thus a clearer responsibility to
anticipate and mitigate risks. Conversely, if an AI is designed for general-purpose use, with a wide
range of potential applications, the manufacturer’s ability to foresee specific harms is diminished,
potentially widening the responsibility gap when harms arise from unpredictable applications. In
situations where the manufacturer does have foreseeability (and thus potential liability), there
remains a risk that users or operators may nevertheless be unfairly blamed, becoming the moral
crumple zone.
Agentic AI, with its fluid autonomy, complicates the determination of both loci of control,
blending the challenges of generative and fully autonomous systems. First, agentic AI’s outputs
can be highly unpredictable, and its users may lack the requisite technical literacy to understand
the AI’s limitations. A non-expert relying on an AI code generator, for instance, might be unaware
of subtle security flaws embedded within the generated code. If that code is then deployed
and exploited, the user could face disproportionate liability for vulnerabilities they could not
reasonably have detected or prevented.89 This scenario highlights a potential systemic failure,
echoing concerns raised by Asaro about tools that “mask their own complexity” and create an
89The EU’s Product Liability Directive establishes a strict liability regime for defective products. See Council Directive
85/374/EEC, 1985 O.J. (L 210) 29, as amended by Directive 1999/34/EC of the European Parliament and of the
Council, 1999 O.J. (L 141) 20. If AI-generated code were considered a ‘product’ under this Directive, and a defect
in that code caused damage, the producer could be held liable, even without proof of negligence. However, the
Directive’s applicability to software is a complex and debated area. Furthermore, the AI Act introduces its own
liability framework, which may interact with or supersede the Product Liability Directive in certain cases. See
Regulation (EU) 2024/1689, supra note 50.
illusion of control while obscuring underlying risks.90
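To illustrate the kind of subtle flaw at issue, the hypothetical Python sketch below contrasts a pattern that code generators are known to produce with the safer alternative a security-literate reviewer would insist on. The function names and database schema are invented for illustration only; the point is that the flawed version appears to work in ordinary use yet is open to SQL injection, a defect a non-expert user is unlikely to detect.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Flawed pattern: user input is interpolated directly into the SQL
        # string, so a crafted input such as  x' OR '1'='1  rewrites the query
        # itself (SQL injection). The code nonetheless "works" in normal use.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Safer alternative: a parameterized query keeps the data separate
        # from the SQL statement, neutralizing injection attempts.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()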
Second, the recursive interplay between human users and agentic AI systems makes it exceed-
ingly difficult, if not impossible, to disentangle their respective contributions to a given output.
This directly challenges the first locus of control: user control. Unlike traditional generative AI,
where users exert clear authority through iterative prompting and curation, agentic AI’s actions
emerge from a complex, evolving history of interactions with the user. Consequently, it becomes
difficult if not impossible to definitively state whether a particular output stems from direct user
instruction, the AI’s autonomous decision-making, or a fusion of both.91
Third, the fluid autonomy of agentic AI blurs the second locus of control: the manufacturer’s
ability to foresee how the AI will be used and what harms might result. An agentic AI initially
designed for, say, legal contract drafting might, through user interaction and adaptation, evolve to
perform tasks far beyond its original intended scope, such as financial forecasting. This fluidity of
purpose makes it difficult to apply traditional liability frameworks that rely on a clear distinction
between intended and unintended uses. For instance, if this legal AI agent makes a critical
error when used for financial forecasting, the manufacturer could argue the AI was deployed
outside its intended scope, invoking the responsibility gap seen with fully autonomous systems.
Meanwhile, the user might contend they were merely leveraging the AI’s demonstrated, evolved
capabilities: since the AI had evolved to handle financial tasks, the user reasonably believed this
use was appropriate. The strength of these arguments is likely to vary dynamically, depending
on contingency factors such as the extent of the AI’s evolution, how it was used, and whether it
provided any disclaimers. Because these factors can shift unpredictably in each specific instance,
the very concept of a fixed “intended use” becomes somewhat meaningless. This adaptability
90See Asaro, supra note 79, at 171.
91This ambiguity is further complicated by regulatory frameworks like the EU’s General Data Protection Regulation
(GDPR). See Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the
protection of natural persons with regard to the processing of personal data and on the free movement of such
data, and repealing Directive 95/46/EC (General Data Protection Regulation), art. 22 & recital 71, 2016 O.J. (L 119) 1.
While Article 22 restricts decisions based solely on automated processing, Recital 71 requires users to have rights to
“obtain an explanation” of AI-driven decisions. Even when users cannot practically understand these explanations,
the mere existence of such rights may create a legal presumption of user control, exposing them to liability for
harms they could neither foresee nor prevent.
undermines the manufacturer’s ability to reasonably anticipate and mitigate potential harms,
placing a novel responsibility on developers to implement guardrails to ensure their products do
not misrepresent their capabilities.
Moreover, organizational deployment of agentic AI fundamentally destabilizes traditional
vicarious liability frameworks, where employers are typically liable for harms caused by employees
acting within the scope of employment (respondeat superior). Agentic AI systems, operating with
fluid autonomy while lacking legal personhood, defy this paradigm. The doctrine of respondeat
superior hinges on the employer’s right to control the employee’s actions. Fluid autonomy, by
its very nature, means the employer’s control is significantly diminished and constantly shifting,
as the AI makes independent decisions and adapts its behavior. And because AI lacks legal
personhood, it cannot be considered an “agent” in the legal sense required for the doctrine to
apply.
Consider an AI hiring agent that autonomously screens job applicants.92 If this agent devel-
ops discriminatory patterns through recursive adaptation (e.g., deprioritizing candidates from
historically marginalized groups), courts face an attribution paradox. The AI’s behavior may
reflect neither explicit corporate policy nor any individual employee’s intent, yet it directly causes
harm. Because the AI is not a legal person, it cannot be held liable. Because the AI’s actions are
autonomous and potentially unforeseeable—a direct result of its stochastic, dynamic, and adaptive
nature—the employer may not have had the requisite control to be held liable under respondeat
superior. Current law provides no clear path to hold the organization liable, as the AI cannot
qualify as an “employee” or “agent” under traditional legal definitions.93
This creates a novel systemic responsibility gap—distinct from Matthias’s general gap where
92Emerging legislation is beginning to address the accountability challenges posed by automated decision-making.
The California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), includes
provisions related to Automated Decision-Making Technologies (ADMT). See Cal. Civ. Code §§ 1798.110, .120,
.185(a)(16) (West 2024). These provisions could impose liability on organizations using AI systems, such as the AI
hiring tool in this example, by requiring transparency and offering consumers control over how their data is used
in such processes. For a comparative analysis, see Xu Ding & Hua Huang, For Whom Is Privacy Policy Written?, 55
Comput. L. & Sec. Rev. 106072, 106084 (2024).
93The Restatement (Third) of Agency § 1.01 cmt. c (Am. L. Inst. 2006) requires an agent to be a “person,” excluding
AI systems.
no party is a candidate for liability, and from the “moral crumple zone” where blame defaults
to a human operator. Here, the organization benefits economically from the AI’s actions but
may evade liability for the resulting harms. The doctrinal impasse stems from unmappability:
courts cannot easily disentangle whether discriminatory outcomes originated in (1) the AI’s
training data (developer responsibility), (2) the organization’s deployment parameters (corporate
responsibility), or (3) the AI’s autonomous adaptations (no clear responsibility). The issue is
that the fluid autonomy of agentic AI simultaneously erodes the organization’s control over its
operational processes (the first locus) and undermines the developer’s ability to foresee novel
harms emerging from that specific organizational deployment (the second locus).
Having traced how fluid autonomy and the resulting unmappability destabilize the core
doctrines of authorship, inventorship, and liability, we now shift from diagnosing the problem to
introducing a unifying principle capable of restoring doctrinal coherence.
VI. The Principle of Functional Equivalence
As the preceding analysis has demonstrated, the fluid autonomy of agentic AI disrupts foundational
assumptions in authorship, inventorship, and liability. In Part III, the inability to parse human
and AI contributions undermines human-centric authorship models. For example, proposals
like hybrid attribution become impractical when human-machine creative efforts are irreducibly
entangled. In Part IV, agentic AI’s capacity for autonomous generation of novel solutions challenges
proving a human ‘conceiver’ under current doctrine, leaving innovations unprotected or their
ownership contested. In Part V, it destabilizes both user-centric and manufacturer-centric models,
creating responsibility gaps and moral crumple zones where no party can be definitively held
accountable. Across these areas, the common thread is the practical difficulty, perhaps impossibility,
of disentangling contributions and control between human and machine, exposing a systemic
challenge for legal paradigms reliant on clear attribution.
To address this systemic challenge, we propose a paradigm shift: treating human and AI
contributions as functionally equivalent. We propose this equivalence not because of a moral or
economic parity between humans and machines, but as a pragmatic response to the fundamental
unmappability inherent in their entangled creative processes. By “functional equivalence,” we
mean that legal frameworks should focus on the objective qualities and outcomes of human-AI
interactions rather than attempting the often impossible task of disentangling contributions.
This approach circumvents several intractable problems inherent in attribution: (1) the practical
difficulty, often impossibility, of consistently determining when contributions can be parsed; (2) the
absence of fair or workable standards for partial attribution in cases where some disentanglement
might seem possible; and (3) the potential inequities arising from treating collaborative works
differently based solely on the contingent and perhaps arbitrary factor of whether human versus
AI inputs can be reliably isolated in a given instance.94
For authorship, functional equivalence could involve recognizing originality in AI-assisted
works through streamlined registration. Rather than requiring applicants to meticulously de-
marcate human versus AI contributions—a task rendered futile by fundamental unmappability—
registration could focus on the final work’s originality and the human role in initiating, guiding,
and finalizing the project. Ownership would vest in the human user(s) or commissioning entity,
acknowledging the AI as a participant in a recursively entangled process whose contribution
is functionally inseparable from the user’s direction. This differs fundamentally from hybrid
attribution models that erroneously presuppose separable contributions, a premise undermined
by fluid autonomy.
In patent law, functional equivalence might mean assessing patentability (novelty, non-
obviousness, and utility) based on the invention’s objective merits, regardless of whether the core
inventive concept originated with human insight or AI generation. Patents could be granted to
the human inventor(s) who orchestrated the AI process, reduced the invention to practice (even if
94Compounding the attribution challenges posed by agentic AI, the moral landscape in AI is characterized by a
multitude of perspectives and approaches. See Charles D. Raab, Information Privacy, Impact Assessment, and the
Place of Ethics, 37 Comput. L. & Sec. Rev. 105404 (2020). Not only must we contend with the uncertainty over how
a given ethical, legal, or policy standard may apply given the ambiguity in creative attributions that fluid autonomy
entails, but also over which standards or frameworks should be employed.
constructively via AI-generated descriptions they substantively validated), and met disclosure
requirements, thereby subsuming the AI’s conceptual contribution within the human-directed
R&D workflow. This approach resolves the DABUS impasse by focusing on the human role in
delivering the invention to the public domain via the patent system, rather than dissecting the
precise moment of conception.95
Liability models, under functional equivalence, could adopt frameworks less reliant on pin-
pointing discrete causation. This might involve modified forms of strict liability for developers of
highly autonomous agentic systems deployed in critical domains, or expanded enterprise liability
where organizations deploying agentic AI assume broader responsibility for outcomes, perhaps
mitigated by adherence to best practices in oversight and risk management.96 Alternatively,
sector-specific no-fault compensation schemes (akin to the U.S. National Vaccine Injury Compen-
sation Program) could address harms without requiring intractable causal analysis, potentially
funded through levies on AI deployment.97 The common thread is replacing intractable attribu-
tion with administrable, outcome-focused mechanisms—whether compensation funds, liability
presumptions, or process-based compliance safe harbors modeled on 17 U.S.C. § 512.
Critics may legitimately argue that functional equivalence risks diminishing the perceived
value of human creativity or that it “makes no sense to allocate intellectual property rights to
machines because machines are not the kind of entity that needs incentives in order to generate
95This focus on human orchestration echoes the reasoning in Shenzhen Tencent Comput. Sys. Co. v. Shanghai Yingxun
Tech. Co., (2019) Yue 0305 Min Chu No. 14010 (Shenzhen Nanshan Dist. People’s Ct. Dec. 24, 2019) (China), supra
note 44, where copyright authorship was recognized based on the human creative team’s selection and arrangement
of inputs and parameters guiding the AI-generated work.
96Such proposals build on a long tradition of adapting tort law to place liability on the party best positioned to manage
and bear the risk of new technologies. See, e.g., Escola v. Coca-Cola Bottling Co. of Fresno, 150 P.2d 436, 440–41
(Cal. 1944) (Traynor, J., concurring) (providing the intellectual foundation for strict products liability by arguing
that manufacturers are best able to anticipate and prevent harms from their products). This includes expanding
vicarious liability beyond narrow control tests to encompass the foreseeable risks of an enterprise. See Ira S. Bushey
& Sons, Inc. v. United States, 398 F.2d 167, 171 (2d Cir. 1968).
97See National Childhood Vaccine Injury Act, 42 U.S.C. §§ 300aa-10 to -34 (2018). Law has previously created novel
frameworks when traditional IP and liability models proved inadequate for new technologies. Some solutions create
unique, or sui generis, rights. See, e.g., Directive 96/9/EC of the European Parliament and of the Council of 11 March
1996 on the Legal Protection of Databases, 1996 O.J. (L 77) 20 (creating a right to protect substantial investment in
data compilation, separate from copyright). Other solutions replace difficult attribution analyses with predictable,
process-based frameworks. Cf. 17 U.S.C. § 512 (2018) (creating a liability safe harbor for online service providers
that, while serving the different policy goal of shielding intermediaries, similarly offers a practical alternative to
intractable monitoring and causation inquiries).
output.”98 While acknowledging these valid concerns, we contend that legal frameworks must
prioritize practicability. The legal system has historically evolved to address technological shifts:
corporate personhood allowed businesses to act as legal entities without equating them to human
moral agents; copyright expanded to protect photographs and software without demanding proof
of unique “humanity” in each pixel or line of code. The legal system must now confront the
reality of creative processes where agentic AI and human contributions are irreducibly entangled.
In such cases, traditional legal distinctions based on human versus AI origins may prove not
merely difficult, but impractical to apply consistently and fairly. Our proposed focus on outcomes,
embodied in the principle of functional equivalence, stems not from a philosophical preference but
from the practical necessity of maintaining a workable legal framework in the face of irreducible
entanglement.
Thus, the principle of functional equivalence offers a coherent theoretical framework for
resolving the attribution crises created by agentic AI. With the principle established and defended,
we now turn to concrete pathways for its implementation and the broader questions that remain.
VII. Conclusion
The fluid autonomy of agentic AI—characterized by its stochastic, dynamic, and adaptive nature—
creates a novel systemic challenge for cornerstone legal doctrines in intellectual property and
tort law. Unlike traditional tools or hypothetical fully autonomous (sovereign) systems, agentic
systems blur the boundaries of control and contribution, creating a co-evolutionary creative
process that defies clear attribution to either human or machine. This fundamental unmappability
has profound implications for legal paradigms reliant on clear attribution. In response, this Article
proposes functional equivalence: a paradigm shift treating human and AI inputs as pragmatically
98See, e.g., Joanna J. Bryson, Robots Should Be Slaves, in Close Engagements with Artificial Companions 63
(Yorick Wilks ed., 2010) (arguing that granting rights to robots could diminish human status); Carys J. Craig & Ian
R. Kerr, The Death of the AI Author, 52 Ottawa L. Rev. 31, 43 (2020) (“[I]t makes no sense to allocate intellectual
property rights to machines because machines are not the kind of entity that needs incentives in order to generate
output.”).
equivalent for legal purposes. By focusing adjudication on outcomes rather than the intractable
entanglement of their origins, this principle offers a coherent framework for allocating rights and
responsibilities where traditional attribution fails.
Practical implementation of this principle would necessitate legislative and potentially judicial
recalibration.99 For authorship, copyright registration could adopt a rebuttable presumption of
human authorship for works involving agentic AI, absent clear evidence of purely autonomous
generation without human involvement. This approach balances concerns about incentivizing
human creativity with the reality of blended contributions. Streamlined registration processes,
perhaps similar to the U.S. Copyright Office’s group registration options,100 could acknowledge
the collaborative nature without demanding unworkable attribution precision. In patent law,
legislative action revising statutes like 35 U.S.C. § 100(f) (2018), or perhaps judicial reinterpretation
of related case law (though likely facing resistance without statutory change), could clarify that
inventorship can be recognized based on human supervision and reduction to practice, even if
the core conception originated with AI. This approach, contrasting with the European Patent
Convention’s strict adherence to human inventorship,101 would reward outcome novelty and
align with arguments that the capabilities emerging from fluid autonomy, potentially exceeding
human expertise, warrant rethinking traditional standards like PHOSITA. Liability frameworks
might adopt a strict liability model for developers of certain agentic AI systems, particularly those
designated ‘high-risk’ under frameworks like the EU AI Act,102 while users assume liability for
foreseeable misuse under established negligence principles. This hybrid approach echoes calls for
structured liability solutions that move beyond simple applications of traditional tort doctrines
ill-suited to AI’s unpredictability.103
99The stakes of such recalibration are high, as current litigation involves not only massive statutory damages but also
demands for the outright destruction of AI models. See Pamela Samuelson, How to Think about Remedies in the
Generative AI Copyright Cases, 67 Commc’ns ACM 27, 28–30 (2024).
100For an example of such a streamlined process, see U.S. Copyright Office, Circular 42, Group Registration of
Photographs (2023).
101See EPC, supra note 76, r. 19.
102See Regulation (EU) 2024/1689, supra note 50.
103See Omri Rachum-Twaig, Whose Robot Is It Anyway?: Liability for Artificial-Intelligence-Based Robots, 2020 U. Ill.
L. Rev. 1141, 1143. While Rachum-Twaig proposes a different mechanism—a ‘presumed negligence’ framework
Our analysis, while illuminating foundational challenges, has limitations. Our primary focus
on U.S. law leaves open crucial questions of comparative jurisprudence. How will civil law systems,
particularly the EU with its risk-based regulatory framework under the AI Act, reconcile agentic
AI’s fluid autonomy with statutory obligations for human oversight (Art. 14) and transparency (Art.
13)? Comparative studies are needed to investigate how different legal traditions might address
this challenge. For instance, while U.S. law grapples with the post hoc attribution difficulties arising
from unmappability, the European Union’s AI Act, with its emphasis on ex ante risk assessment
and conformity requirements,104 might preemptively constrain the fluid autonomy of agentic AI,
potentially trading some adaptive capacity for clearer accountability.
Furthermore, our argument rests on capabilities that are still crystallizing, not on technologies
whose properties are already well understood. Two complementary lines of empirical work are
crucial. The first is quantitative: large-scale logging of human–AI interaction data can be paired
with econometric and information-theoretic tools—e.g., Shapley-value decompositions or causal
inference models—to estimate how much of a given artifact is best explained by human prompts
versus the AI’s autonomous processes. The goal is not to achieve perfect provenance, but to
supply courts with principled confidence intervals analogous to the statistical evidence they
already accept when apportioning antitrust damages or detecting employment discrimination.105
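To make the Shapley-value idea concrete, consider a minimal illustrative sketch (the shapley_values helper, the three stylized factors, and the coalition scores below are hypothetical assumptions for exposition, not drawn from any system or dataset discussed here): exact Shapley attributions are computed over a human prompt, the AI’s autonomous generation, and subsequent human edits, given assumed scores for how much of the final artifact each subset of inputs would reproduce.

from itertools import combinations
from math import factorial

def shapley_values(players, value):
    # Exact Shapley values for a small set of contribution factors.
    # `players` is a list of factor names; `value` maps a frozenset of
    # factors to a score for the artifact produced with only those inputs.
    n = len(players)
    shapley = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of factor p to this coalition.
                shapley[p] += weight * (value(s | {p}) - value(s))
    return shapley

# Hypothetical coalition scores: the fraction of the final artifact's
# quality reproduced when only these inputs are present (0 = none, 1 = all).
scores = {
    frozenset(): 0.00,
    frozenset({"human_prompt"}): 0.30,
    frozenset({"ai_generation"}): 0.45,
    frozenset({"human_edits"}): 0.10,
    frozenset({"human_prompt", "ai_generation"}): 0.85,
    frozenset({"human_prompt", "human_edits"}): 0.40,
    frozenset({"ai_generation", "human_edits"}): 0.60,
    frozenset({"human_prompt", "ai_generation", "human_edits"}): 1.00,
}

attribution = shapley_values(
    ["human_prompt", "ai_generation", "human_edits"],
    lambda s: scores[frozenset(s)],
)
print(attribution)  # shares sum to 1.0, one estimate per contribution factor

In practice, such coalition scores would themselves have to be estimated from logged interaction data, and the resulting attributions would be reported with the confidence intervals described above rather than as point values.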
104See Regulation (EU) 2024/1689, supra note 50, art. 43 (Conformity Assessment), which mandates a significant ex
ante verification regime: many high-risk AI systems must undergo conformity assessments—some by third parties—
before being placed on the market or put into service. This pre-market scrutiny, focusing on transparency, safety,
and fundamental rights, represents a fundamentally different regulatory philosophy compared to legal systems
relying primarily on post hoc liability determination after harm has occurred. For the legislative intent behind the
ex ante approach, see European Commission, Proposal for a Regulation laying down harmonised rules on
artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final, Explanatory Memorandum, at
9–11. For an analysis of how the AI Act shifts compliance burdens to earlier stages of the AI lifecycle, see Michael
Veale, Kira Matus & Robert Gorwa, AI and Global Governance: Modalities, Rationales, Tensions, 19 Ann. Rev. L.
& Soc. Sci. 255 (2023).
105For examples of technical attribution methods, see Scott M. Lundberg & Su-In Lee, A Unified Approach to Interpreting
Model Predictions, in 30 Advances in Neural Info. Processing Sys. 4768 (2017) (introducing SHAP values);
Judea Pearl, Causality: Models, Reasoning, and Inference (2d ed. 2009). Courts have a long and established
history of relying on such complex, probabilistic evidence. See, e.g., Wal-Mart Stores, Inc. v. Dukes, 564 U.S. 338,
356–57 (2011) (discussing statistical evidence in class-action discrimination cases); Int’l Bhd. of Teamsters v. United
States, 431 U.S. 324, 339 (1977) (accepting statistical proof of a “pattern or practice” of discrimination); Daniel
L. Rubinfeld, Econometrics in the Courtroom, 85 Colum. L. Rev. 1048 (1985). Such models might, for example,
weight human inputs (e.g., prompts, edits) against AI-generated variations to determine when disentanglement
becomes impractical—a challenge courts already navigate in fuzzy determinations like “substantial similarity” or
“non-obviousness.” See Arnstein v. Porter, 154 F.2d 464, 468 (2d Cir. 1946); see KSR Int’l Co. v. Teleflex Inc., 550 U.S.
398, 427 (2007) (acknowledging the “necessarily imprecise” nature of the obviousness inquiry).
This quantitative work should be complemented by qualitative inquiry, such as ethnographic
studies exploring how engineers, legal professionals, and artists perceive and experience agency
within human-AI co-productions, illuminating whether folk norms of attribution converge with
or resist emergent legal doctrine. Taken together, these empirical programs would translate the
abstract idea of “unmappability” into both measurable (if fuzzy) quantities and lived experiences,
giving courts a familiar evidentiary scaffold on which to build doctrine and thereby grounding
the pragmatic case for functional equivalence.
These empirical needs point to deeper normative quandaries that transcend doctrinal mechanics.
The first concerns copyright’s cultural mission. When agentic AI systems trained
on existing works learn to replicate and recombine dominant aesthetic and narrative patterns,
they risk creating a feedback loop that homogenizes culture. This process not only marginalizes
non-conforming styles and viewpoints but also threatens to erode the expressive diversity that
copyright law is meant to foster.106 In parallel, the patent system justifies its grant of exclusive
rights by arguing that it incentivizes costly, high-risk, and non-obvious invention that would not
otherwise occur.107 Functional equivalence in agentic AI, however, may alter this calculus. If AI’s
computational power makes certain types of innovation cheap and predictable, its algorithmic
path-dependencies may simultaneously steer invention toward incremental improvements rather
than disruptive breakthroughs. This could lead to patent portfolios that are broad but shallow,
rewarding rapid, derivative work over foundational research. In sum, the uncritical embrace of the
principle of functional equivalence may inadvertently calcify systemic inequities under the veneer
of technological progress, ultimately undermining the very innovation and diversity our legal
frameworks were built to protect.
106The goal of copyright is not merely to incentivize the production of more works, but to “promote the Progress of
Science and useful Arts,” U.S. Const. art. I, § 8, cl. 8, a purpose long understood to include the enrichment of public
discourse and culture. See, e.g., Jack M. Balkin, Digital Speech and Democratic Culture: A Theory of Freedom of
Expression for the Information Society, 79 N.Y.U. L. Rev. 1, 35–38 (2004); Julie E. Cohen, Copyright, Commodification,
and Culture: Locating the Public Domain, in The Future of the Public Domain 121, 136–40 (Lucie Guibault & P.
Bernt Hugenholtz eds., 2006).
107See, e.g., Graham v. John Deere Co., 383 U.S. 1, 6 (1966) (linking the patent monopoly to the constitutional goal of
promoting invention).
Therefore, even as we argue that legal systems must evolve beyond purely anthropocentric
paradigms to embrace functional equivalence as a practical necessity, we remain mindful that this
approach may have its own negative consequences. However, maintaining the status quo risks
creating a legal landscape that either stifles technological progress by adhering to unworkable
standards or fails to adequately protect the human authors and innovators it aims to serve. In
contrast, by shifting the focus from unmappable contributions to the assessment of tangible out-
comes, the principle of functional equivalence promises a potentially more stable and predictable
foundation for coherently allocating rights and fairly assessing liability.