The Billion-Dollar Question: Deconstructing the Ownership of AI-Generated Intellectual Property

Executive Summary

The rapid proliferation of generative artificial intelligence (AI) has thrust a century of intellectual property (IP) law into a state of profound uncertainty, creating a high-stakes legal and economic dilemma often dubbed the “billion-dollar question”: Who owns AI-generated IP? This report provides an exhaustive analysis of this complex issue, deconstructing the foundational legal doctrines, dissecting landmark cases, and evaluating the competing ownership models that are shaping the future of innovation and creativity. The central finding is that traditional, human-centric IP frameworks are fundamentally incompatible with autonomous AI creation. Under the prevailing legal interpretations in major jurisdictions like the United States and the European Union, works generated by AI without sufficient human creative intervention are ineligible for copyright or patent protection, defaulting them to the public domain.

This conclusion stems from the “human authorship” doctrine in copyright law and the “human inventorship” requirement in patent law, which mandate a direct, creative contribution from a natural person. The global legal consensus on inventorship has been firmly established through the series of landmark DABUS cases, in which courts and patent offices worldwide, including in the US, UK, and EU, have uniformly rejected the notion that an AI system can be named an inventor. While patent law appears settled on this point, the copyright landscape remains a volatile battleground. The central conflict revolves around the degree of human input—specifically through user prompts—required to claim authorship of an AI’s output. The U.S. Copyright Office has taken a firm stance that, with current technology, prompts alone are insufficient to confer authorship, leaving a significant volume of AI-generated content unprotected.

This legal vacuum creates significant risks and opportunities. A primary area of litigation concerns the legality of using vast troves of copyrighted data to train AI models, with AI developers asserting a “fair use” defense against widespread infringement claims. Concurrently, the potential for AI systems to generate outputs that are substantially similar to their training data creates a complex chain of liability involving the AI developer, the platform owner, and the end-user.

In response to this legal impasse, various jurisdictions are exploring divergent paths. The United Kingdom stands as a notable outlier with its pre-existing statutory protection for “computer-generated works,” a provision now being tested by the scale and autonomy of modern AI. Other potential solutions include the creation of new sui generis rights—bespoke IP protections with limited scope and duration, a model already implemented in Ukraine. However, major bodies like the U.S. Copyright Office currently see no need for such legislative reforms, preferring to rely on the flexibility of existing law.

Ultimately, this report concludes that the most immediate and impactful legal conflicts will not be over the ownership of AI outputs, but rather over the legality of training inputs and the allocation of infringement liability. The most probable future involves a hybrid legal and commercial framework: maintaining the high bar of human authorship for full IP protection while developing new contractual and statutory licensing systems to resolve the training data dilemma. This approach seeks to balance the immense innovative potential of AI with the foundational need to protect and incentivize human creativity. For strategic decision-makers, navigating this landscape requires a nuanced understanding of jurisdictional differences, meticulous documentation of human creative input, and a proactive approach to managing IP risk through contractual clarity and technological safeguards.

 

Part I: The Bedrock of Intellectual Property: Human-Centric Doctrines Under Strain

 

The global intellectual property system, built over centuries, rests on a foundational premise: to incentivize and reward the products of the human mind.1 Copyrights protect the expression of creative ideas, while patents secure rights over novel inventions. The emergence of generative AI, capable of producing sophisticated creative works and potentially inventive solutions with minimal or no direct human intervention, places this human-centric paradigm under unprecedented strain.1 Understanding the contours of this conflict requires a detailed examination of the core legal doctrines of authorship and inventorship that have, until now, served as the unchallenged gatekeepers of IP protection.

 

1.1 The Pillars of Copyright Protection: Originality and the Human Author

 

Copyright law in the United States, as codified in the Copyright Act, grants protection to “original works of authorship fixed in any tangible medium of expression”.3 This seemingly straightforward statement contains a series of requirements that have become the central battleground for the copyrightability of AI-generated content.

 

Core Requirements

 

To qualify for copyright protection, a work must satisfy several key criteria, the most critical of which is the doctrine of human authorship.3

First, the work must be fixed in a tangible medium. This requirement stipulates that the work must be captured in a “sufficiently permanent or stable” form from which it can be perceived or reproduced for more than a transitory duration.3 This condition is rarely a point of contention for AI-generated works, as digital text, images, audio files, and code are inherently fixed in a stable medium upon their creation.3

Second, the work must be original. This is a constitutional requirement that the U.S. Supreme Court, particularly in the seminal case Feist Publications, Inc. v. Rural Telephone Service Co., established as a dual-pronged standard. The work must be (1) independently created by the author, meaning it was not copied from another work, and (2) possess at least a minimal degree of creativity.3 The threshold for creativity is deliberately low, requiring only a “spark” or “modicum” of creative expression.4 This low bar has become a focal point in the debate over whether a user’s textual prompt to an AI system can supply the requisite creativity for copyright protection.5

Third, and most crucially for the AI debate, is the human authorship doctrine. This principle, while not explicitly defined in the text of the Copyright Act, has been consistently upheld by the U.S. Copyright Office (USCO) and federal courts as a “bedrock requirement of copyright”.6 The USCO’s official position is that copyright law protects only “the fruits of intellectual labor” that “are founded in the creative powers of the mind”.3 Consequently, the Office will refuse to register works produced by machines or mechanical processes that operate “without any creative input or intervention from a human author”.6 This doctrine is the basis for denying copyright to works created by nature, animals—as famously litigated in the “monkey selfie” case, Naruto v. Slater—and, now, by autonomous AI systems.3

The insistence on a human author is not merely a procedural formality but a reflection of the law’s underlying philosophy. Copyright is seen as a human right, recognized in the UN’s Universal Declaration of Human Rights, which protects the “moral and material interests resulting from any scientific, literary or artistic production of which he is the author”.10 This framework is designed to safeguard and encourage human expression.10

The entire legal architecture preventing copyright for AI-generated works rests not on an explicit statutory command but on a long-standing foundation of administrative and judicial interpretation. The U.S. Constitution and the Copyright Act do not explicitly define “author” or limit the term to humans.5 Instead, the “human authorship” requirement is a construct built through decades of USCO practice and court precedents.6 This interpretive foundation makes the doctrine more legally malleable than a direct constitutional prohibition would be. It implies that Congress could, through new legislation, redefine “author” to encompass AI or create new forms of protection without necessarily facing a constitutional challenge. This legislative pathway thus becomes a more viable potential solution. The ambiguity of the term “author” also forces the debate into a more philosophical realm, compelling courts and regulators to delineate what constitutes a creative “master mind” versus a mere mechanical “tool,” a distinction that is becoming increasingly difficult to draw.9

 

1.2 The Patent Imperative: Conception and the Human Inventor

 

Parallel to copyright’s human authorship doctrine, patent law is built upon the equally foundational requirement of a human inventor. The U.S. Patent Act allows for a patent to be obtained for any “new and useful process, machine, manufacture, or composition of matter,” provided it also meets the criteria of novelty, utility, and non-obviousness.12 However, the entire process is predicated on the existence of an inventor who conceived of the invention.

 

The Doctrine of Inventorship

 

The “threshold question in determining inventorship is who conceived the invention”.13 This principle establishes conception as the cornerstone of inventorship.

Conception is a rigorous legal standard, defined as the “formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention”.14 It is a mental act that must occur in the mind of a natural person.13 A person who merely contributes to the “reduction to practice”—the physical construction or implementation of the invention—without contributing to the conception is not considered an inventor.13 This clear, cognitive standard presents a significant barrier for AI systems, which do not possess minds or consciousness in the human sense and are therefore deemed incapable of conception.

This requirement is reinforced by the explicit language of the U.S. Patent Act. Following the Leahy-Smith America Invents Act of 2011, the statute defines an “inventor” as “the individual or… individuals collectively who invented or discovered the subject matter of the invention”.13 U.S. courts, including the Supreme Court, have consistently interpreted the term “individual” in federal statutes to refer to a natural person, i.e., a human being.14 This statutory interpretation formed the unshakable legal basis for the Federal Circuit’s decision in Thaler v. Vidal, which definitively rejected AI inventorship in the United States.14

The higher and more specific standard of “conception” in patent law creates a more formidable and clearly defined barrier for AI than copyright’s more nebulous “originality” standard. Patent law’s demand for a cognitive act of forming a “definite and permanent idea” is a high-level intellectual function that current AI cannot perform.13 In contrast, copyright law’s requirement for a mere “spark” of creativity is a much lower and less defined threshold.4 This distinction explains the different legal trajectories of AI-related IP cases. It was relatively straightforward for courts and patent offices globally to conclude that a non-sentient machine like DABUS could not “conceive” an invention, leading to a swift and consistent global consensus against AI inventorship.13 Conversely, the low bar for copyright originality has fueled a far more complex and protracted debate over whether a human’s prompt can supply the necessary “spark,” even if the AI performs the bulk of the expressive labor, leaving the copyrightability of AI-generated works as a highly active and uncertain area of law.5

 

Part II: The Global Legal Chessboard: A Comparative Analysis of AI-Generated IP

 

The question of who owns AI-generated IP is not being answered in a global vacuum. Key jurisdictions are developing distinct, and at times conflicting, legal and regulatory frameworks. The United States, United Kingdom, and European Union, as major hubs of both technological innovation and creative output, have emerged as the primary arenas where these new rules are being forged. A comparative analysis reveals a striking global consensus in patent law, contrasted with a fragmented and uncertain landscape in copyright law, with each jurisdiction’s approach reflecting its unique legal traditions and policy priorities.

 

2.1 United States: The Strict “Human Authorship” Gatekeeper

 

The United States has adopted one of the most stringent positions on the necessity of human involvement for IP protection, a stance articulated through decisive court rulings and detailed guidance from its federal IP offices.

 

Copyright

 

The U.S. Copyright Office (USCO) has been unequivocal: works generated purely by AI systems are not copyrightable because they lack the requisite human authorship.6 This position has been developed and clarified through a series of official actions.

The March 2023 AI Registration Guidance is the cornerstone of the USCO’s policy. It mandates that applicants must disclose the inclusion of any more-than-de minimis AI-generated content in a work submitted for registration. Furthermore, applicants must explicitly disclaim this AI-generated material, ensuring that any copyright registration granted covers only the human-authored contributions.21

This policy has been consistently applied in key registration decisions. In the case of the comic book Zarya of the Dawn, the USCO granted copyright for the human-authored text and the creative arrangement of the panels but refused to register the images, which were generated by the AI tool Midjourney.24 Similarly, it denied registration for the AI-generated image Théâtre D’opéra Spatial, which had won an art competition, because the applicant failed to disclaim the AI-generated portions.6 In the SURYAST case, registration was refused for an image created by an AI that combined a user’s photograph with the style of Van Gogh’s The Starry Night, with the Office concluding that the AI system was “responsible for determining” the final expressive elements.6

A central point of contention is the role of user prompts. The USCO’s current stance is that prompts alone do not constitute sufficient creative control to make the user the author of the output.21 The Office reasons that prompts are akin to unprotectable ideas or instructions given to an artist, while the AI system itself is what ultimately determines the “expressive elements” of the final work.2 The inherent unpredictability of generative AI, where the same prompt can yield different outputs, is cited as evidence of the user’s lack of control.28

 

Patents

 

In the realm of patent law, the U.S. position is even more resolute. The landmark 2022 decision of the U.S. Court of Appeals for the Federal Circuit in Thaler v. Vidal definitively settled the question of AI inventorship.14 The court held that the Patent Act’s use of the term “individual” to define an inventor unambiguously refers to a natural person, thereby excluding AI systems.16

Building on this judicial precedent, the U.S. Patent and Trademark Office (USPTO) issued its Inventorship Guidance for AI-Assisted Inventions in February 2024.30 This guidance solidifies the rejection of AI inventorship but clarifies a crucial distinction: AI-assisted inventions are patentable. The key criterion is that at least one natural person must have made a “significant contribution” to the conception of the invention.31 The guidance explicitly states that merely recognizing a problem for an AI to solve or simply appreciating the utility of an AI’s output is insufficient to qualify for inventorship.16 A human must be a significant contributor to the mental act of conception itself.

 

2.2 United Kingdom: A Unique Statutory Anomaly Meets a Global Consensus

 

The United Kingdom presents a fascinating paradox. While its patent law aligns with the global norm, its copyright statute contains a unique provision that sets it apart from nearly every other jurisdiction.

 

Copyright

 

The UK is a significant outlier due to Section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA). This provision, enacted decades before the advent of modern generative AI, creates a special category for a “computer-generated work” (CGW), defined as a work “generated by computer in circumstances such that there is no human author”.2 For such works, the law designates the author as “the person by whom the arrangements necessary for the creation of the work are undertaken”.33 This creates a legal pathway for protecting works with no direct human creator, granting a 50-year term of protection from the date of creation.33

The rise of generative AI has transformed this once-obscure provision into the central point of ambiguity in UK copyright law. In the 1980s, the person making the “necessary arrangements” was likely the programmer or operator of a contained system. Today, it is unclear who this person would be in the context of a globally accessible platform like ChatGPT or Midjourney. Is it the user writing the prompt, the engineers who designed the model, the company that owns the servers, or the individuals whose data was used for training? This ambiguity has stretched the statutory language to its breaking point.34 In response, the UK government has launched consultations to consider whether to maintain, remove, or amend the CGW provision, reflecting a deep policy tension between its goals of supporting the UK’s creative industries and fostering its burgeoning AI sector.33

 

Patents

 

Despite its unique copyright law, the UK’s position on patent inventorship is in lockstep with the international consensus. The issue was settled by the UK Supreme Court in its 2023 decision in Thaler v. Comptroller-General of Patents, Designs and Trade Marks.19 In this case, which also involved the AI system DABUS, the court unanimously ruled that an “inventor” under the Patents Act 1977 must be a natural person.19 The court’s reasoning focused on the statutory definition of an inventor as the “actual deviser of the invention,” a phrase which it concluded implies a human being with legal personality, which a machine lacks.19

 

2.3 European Union: The “Author’s Own Intellectual Creation” Standard

 

The European Union’s approach is guided by harmonized principles that emphasize human intellect and a new regulatory framework that prioritizes transparency.

 

Copyright

 

EU copyright law is harmonized around the standard, established by the Court of Justice of the European Union (CJEU) in cases like Infopaq, that a work is original only if it is the “author’s own intellectual creation”.40 This is widely interpreted to require a human author who makes free and creative choices that express their personality.40 The EU’s Directive on Copyright in the Digital Single Market (DSM Directive) does not contain specific provisions for AI authorship, meaning that under the current framework, works generated purely by AI without significant human intervention are unlikely to qualify for copyright protection.40

The recently enacted EU AI Act represents a landmark piece of legislation, though its primary focus is on risk management, safety, and ethics rather than IP ownership.43 However, it has profound implications for the IP landscape. The Act imposes significant transparency obligations on providers of general-purpose AI models, most notably requiring them to create and make publicly available “a sufficiently detailed summary” of the copyrighted content used for training their models.43 This provision directly addresses the “input” side of the copyright debate and will provide rights holders with crucial information for potential infringement litigation.

 

Patents

 

The European Patent Office (EPO) has also firmly rejected the possibility of AI inventorship. In its review of the DABUS applications, the EPO’s Legal Board of Appeal ruled in 2021 that under the European Patent Convention (EPC), an inventor must be a person with legal capacity.45 The Board of Appeal confirmed that naming an inventor is a formal requirement of the patent application process and that a machine, lacking legal personality, cannot fulfill this role.47

 

2.4 The DABUS Saga: A Global Test Case

 

The series of patent applications filed by Stephen Thaler on behalf of his AI system, DABUS, served as a coordinated, global stress test of patent law’s human inventorship requirement.49 Applications were filed in the US, UK, EU, Australia, Germany, New Zealand, and beyond. The outcomes have been remarkably consistent.

With near-unanimity, the highest courts and patent offices in every major jurisdiction concluded that their existing patent statutes require an inventor to be a human being.17 This convergence was not the result of an international treaty but rather of independent judicial bodies reaching the same conclusion through parallel interpretation of similar legal concepts like “inventor,” “individual,” and “person”.18 This has effectively created a de facto international norm—a “common law of AI inventorship”—that precedes any formal legislative harmonization. This powerful judicial consensus now places significant pressure on national legislatures, as any country that unilaterally chooses to allow AI inventorship would create substantial friction with the global patent system, potentially rendering its patents unenforceable abroad.52

The few outlier cases have proven to be exceptions that reinforce the rule. The patent granted in South Africa is widely viewed as a procedural anomaly, as the country’s patent office performs only formal checks without substantive examination of inventorship.39 In Australia, an initial Federal Court decision in favor of AI inventorship was a notable, world-first outlier, but it was decisively overturned on appeal by the Full Federal Court, which brought Australia back in line with the global consensus. The High Court of Australia subsequently refused to hear a final appeal, cementing the human-inventor requirement in Australian law.49

The table below provides a synthesized comparison of the legal stances in these key jurisdictions, offering a strategic overview for decision-makers navigating the international IP landscape.

| Feature | United States (US) | United Kingdom (UK) | European Union (EU) |
| --- | --- | --- | --- |
| Copyright Standard | “Human Authorship” (bedrock requirement) | “Author’s Own Intellectual Creation” + “Computer-Generated Work” (CGW) provision | “Author’s Own Intellectual Creation” |
| Copyright Ownership of Purely AI Output | Public domain (no human author) | Potentially protectable under CGW; author is the person making the “necessary arrangements” | Unprotected (no human author) |
| Key Copyright Guidance/Statute | USCO AI Registration Guidance (2023) | CDPA 1988, Sec. 9(3) | Infopaq standard (CJEU); EU AI Act (transparency) |
| Patent Inventorship Standard | “Natural Person” (individual) | “Natural Person” (actual deviser) | “Person with Legal Capacity” |
| Landmark AI Case (Patents) | Thaler v. Vidal (Fed. Cir. 2022) | Thaler v. Comptroller-General (UKSC 2023) | EPO Board of Appeal J 8/20 (2021) |
| Unique Feature | Strict disclosure/disclaimer requirement for AI content in copyright registration | Statutory protection for “computer-generated works” without a human author | AI Act imposes training-data transparency obligations on AI providers |

 

Part III: Contenders for the Crown: Deconstructing Ownership Models for AI-Generated Works

 

As the legal system grapples with the challenge posed by generative AI, several distinct models of ownership have emerged as contenders. Each model is supported by a unique set of legal analogies, philosophical justifications, and economic arguments. The ongoing debate is not merely a technical legal dispute; it is a fundamental contest to determine which traditional legal metaphor—AI as a tool, an agent, or something entirely new—will be stretched to fit this transformative technology. The outcome will define who captures the immense value being created.

 

3.1 The User/Prompter as Author: The “Creative Control” Argument

 

The most intuitive model for many users of generative AI is that the person who conceives of the idea and directs the AI should own the resulting work. This model is predicated on the argument of “creative control.”

 

Core Argument and Supporting Evidence

 

The central claim is that the user who crafts a detailed prompt, selects parameters, and iteratively refines the AI’s output is the true author.5 In this view, the AI is not an autonomous creator but an incredibly sophisticated tool, analogous to a camera, a word processor, or a paintbrush.6 Just as a photographer is the author of a photograph, not the camera manufacturer, the user is the author of the AI-generated image, not the AI developer. Proponents argue that the process of “prompt engineering” involves considerable skill, judgment, and creative choice, sufficient to meet the “minimal degree of creativity” required for copyright.5

This perspective has found some judicial support. A notable 2023 decision by the Beijing Internet Court ruled that a user owned the copyright in an image generated by the AI tool Stable Diffusion. The court found that the user’s adjustments to the prompts and negative prompts reflected their personal “aesthetic choice and judgment,” thus constituting an intellectual contribution sufficient for authorship.26

 

Counterarguments and Hurdles

 

Despite its intuitive appeal, this model faces significant legal hurdles, particularly in the United States. The U.S. Copyright Office has explicitly and repeatedly rejected the “AI as a tool” analogy for current generative systems.6

The USCO’s primary counterargument is that prompts, no matter how detailed, represent unprotectable ideas, while the AI system itself generates the copyrightable expression.2 The Office analogizes a prompter not to a photographer, but to a client who gives “general directions” to a human artist, who then exercises their own creative judgment to execute the work.6

Furthermore, the USCO points to the inherent unpredictability of current AI models as evidence of the user’s lack of control. Because the same prompt can generate vastly different outputs, and because the user cannot precisely predict the output in advance, the Office concludes that the human user does not exercise sufficient “creative control” over the work’s expressive elements to be considered its author.21

 

3.2 The Developer/Owner as Author: The “Means of Production” Argument

 

An alternative model posits that ownership should vest with the entity that created the “means of production”—the AI developer or owner.

 

Core Argument and Supporting Evidence

 

This argument holds that the company that invested billions of dollars and immense technical expertise to design, build, and train the AI model should be recognized as the author of its outputs.56 This aligns with the language of the UK’s “computer-generated work” provision, which grants authorship to the person who makes the “arrangements necessary for the creation of the work”.2

This model often draws an analogy to the “work-for-hire” doctrine in copyright law.5 In this framework, the AI is conceptualized as a non-human “employee,” and the developer is the “employer” who is legally deemed to be the author of any work created by the employee within the scope of its duties.58 This approach seeks to reward the significant upfront investment required to create powerful AI systems.60

 

Counterarguments and Hurdles

 

The “developer as author” model also faces critical legal and practical challenges. The work-for-hire analogy is legally tenuous. The doctrine is predicated on a legal relationship, such as an employment contract, which an AI system cannot enter into because it lacks legal personality.58 The USCO explicitly rejected this argument in its review of Stephen Thaler’s copyright application for his AI’s artwork.61

Furthermore, the element of creative intent is often missing. The developer of a general-purpose AI model like GPT-4 has no specific intent to create the particular poem or marketing copy that a user generates with their prompt.41 Copyright authorship is traditionally linked to the mental conception of a specific expressive work.

In practice, this debate is often rendered moot by contractual agreements. Most major AI platform providers, including OpenAI, have terms of service that explicitly assign any IP rights that might exist in the AI’s output to the user who generated it.2 This is a strategic business decision designed to make their platforms more commercially attractive, effectively sidestepping the legal debate in favor of a market-based solution.

 

3.3 The AI as Author: A Paradigm Shift Requiring Legal Personality

 

The most radical model, and the one advanced in the DABUS cases, is that the AI itself should be recognized as the author or inventor.

 

Core Argument and Supporting Evidence

 

The philosophical argument is straightforward: if an AI system autonomously devises an invention or creates a work of art without meaningful human input, then logical consistency demands that the AI be credited as the creator.62 Proponents like Stephen Thaler argue that the law should acknowledge the functional reality of AI’s creative capabilities and grant IP rights accordingly.62

 

Counterarguments and Hurdles

 

This model is currently a legal impossibility in every major jurisdiction. The primary and insurmountable barrier is the AI’s lack of legal personality.1 An AI is considered property, not a person. It cannot own other property (like a patent or copyright), enter into contracts, sue for infringement, or be held liable.20

A second fundamental objection is rooted in the economic justification for IP. Copyrights and patents are granted to provide an incentive for creation and invention. As non-sentient machines, AI systems do not respond to such economic or moral incentives.11 As a result, this ownership model has been universally rejected by every court and patent office that has formally considered it.18

 

3.4 The Public Domain: The Default Outcome and a Policy Choice

 

In the absence of a legally recognized author, the default status for a created work is the public domain. This is both the current legal reality for purely AI-generated works in the U.S. and a potential policy choice with significant economic implications.

 

Core Argument and Supporting Evidence

 

Under U.S. law, because purely AI-generated content lacks a human author, it fails to meet the requirements for copyright protection and thus automatically falls into the public domain upon creation.41 Anyone is free to use, copy, and modify such works without permission.

While this may seem like a legal vacuum, a growing number of commentators and policymakers argue that this outcome should be embraced as a deliberate policy tool.70 Keeping AI-generated works in the public domain offers several benefits. It fosters follow-on innovation by creating a vast and freely accessible commons of creative material.42 It also prevents a dystopian scenario where entities with immense computational power could generate and copyright billions of songs, images, and texts, effectively creating a “copyright minefield” and suing human creators for accidental and mathematically probable infringement.70

Most importantly, this model may protect the economic viability of human creators. By rendering AI-generated content less commercially valuable (because it cannot be exclusively owned and licensed), it creates a market premium for human-created works that carry the full protection of copyright. This disincentivizes companies from replacing human artists with machines, thereby safeguarding creative livelihoods.70

The primary argument against this model is that a lack of IP protection could disincentivize the enormous financial and technical investment required to develop and maintain sophisticated generative AI models.65

The evolution of the public domain from a passive state—where works end up after copyright expires—into an active policy instrument is a significant development. The USCO’s stance actively places AI-generated works into the public domain from the moment of their creation.69 This reframes the public domain as a dynamic regulatory mechanism. By making the public domain the default, policymakers can incentivize desired behaviors. To escape the public domain and secure valuable IP rights, creators and companies are compelled to ensure and meticulously document “sufficient human involvement,” thereby preventing a complete shift to fully automated, non-protectable creation. This could foster a new, bifurcated creative economy: a massive, low-value public domain of raw AI content, and a smaller, high-value proprietary ecosystem of human-authored or significantly human-curated works.

 

Part IV: The Unseen Risks: Navigating Infringement and Liability in the AI Ecosystem

 

While the question of who owns the output of generative AI captures headlines, an equally critical and legally perilous set of issues revolves around the inputs used to create the AI and the nature of the outputs it produces. The process of building, training, and using generative AI is fraught with infringement risks, creating a complex web of potential liability that ensnares developers, platforms, and end-users alike. The legal battles over these risks are poised to define the economic and operational future of the entire AI industry.

 

4.1 The “Input” Problem: Training AI on Copyrighted Data

 

The efficacy of modern generative AI is built on a foundation of data—staggering amounts of it. The process of acquiring and using this data is the source of the industry’s most significant legal challenge.

 

The Core Issue and the Fair Use Defense

 

Generative AI models are trained by processing vast datasets, which are often created by scraping massive portions of the public internet. These datasets inevitably contain billions of copyrighted texts, images, artworks, and lines of code, typically used without the permission of or compensation to the original creators.6 This practice is the basis for numerous high-profile lawsuits filed by authors, artists, and news organizations against major AI developers like OpenAI and Stability AI.71

In the United States, the primary legal shield for AI companies is the doctrine of fair use. This provision of the Copyright Act allows for the limited use of copyrighted material without permission for purposes such as criticism, commentary, and research.7 AI developers argue that using works for training is a “transformative” use—a key factor in the fair use analysis. They contend that the purpose of their use is not to create a substitute for the original works but to analyze them to learn statistical patterns, which is a fundamentally different and non-infringing purpose.7 This defense remains highly contentious and legally untested in the context of generative AI, and its outcome in the courts will have monumental consequences for the industry.6

 

International Approaches to Training Data

 

Other jurisdictions have approached the training data problem through more explicit legislative means rather than relying on a flexible doctrine like fair use. The European Union and the United Kingdom have introduced specific copyright exceptions for Text and Data Mining (TDM).35 However, these exceptions are often narrowly defined. The UK’s original exception was limited to non-commercial research, and both jurisdictions have explored models that allow rights holders to “opt-out,” meaning their works cannot be used for TDM if they are marked with a machine-readable signal.33 This creates a more structured but potentially more restrictive legal environment for AI training compared to the all-or-nothing gamble of the U.S. fair use defense.
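The "machine-readable signal" contemplated by the TDM opt-out is, in practice, often expressed through crawler-directive files such as robots.txt, with publishers disallowing named AI training crawlers while permitting general-purpose ones. The sketch below illustrates how such a signal could be checked before a page is ingested for training, using Python's standard robots.txt parser; the crawler names "ExampleAIBot" and "SearchBot" are hypothetical, and a real pipeline would also need to honor other reservation mechanisms (e.g., HTTP headers or metadata) that the EU framework leaves open.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt published by a rights holder who opts out of
# AI training crawls ("ExampleAIBot") while permitting other crawlers.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

def may_train_on(url: str, crawler: str) -> bool:
    """Return True if the robots.txt rules permit `crawler` to fetch `url`."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(crawler, url)

print(may_train_on("https://example.com/articles/1", "ExampleAIBot"))  # False
print(may_train_on("https://example.com/articles/1", "SearchBot"))     # True
```

Under an opt-out regime, the legal burden falls on the AI developer to consult such signals before ingestion; under the U.S. fair use approach, no equivalent pre-clearance step is required.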

 

4.2 The “Output” Problem: Generating Infringing Content

 

Beyond the legality of the training process, the content that AI systems generate poses its own set of direct infringement risks.

 

Substantial Similarity and Replication

 

An AI’s output can be “substantially similar” to, or in some cases, a near-perfect replica of, specific works from its training data.6 This risk is heightened when an AI is prompted to mimic the style of a particular artist, to generate content featuring a well-known copyrighted character, or when the model has been trained on a relatively small or niche dataset, increasing the likelihood of regurgitation.2 When a user generates and uses such an output, they can be held directly liable for copyright infringement.72

While copyright law does not protect an artist’s general “style,” the line between imitating a style and copying protected expression is thin. An AI trained extensively on the works of a single artist may produce outputs that incorporate specific, protectable elements of that artist’s work, creating a high risk of infringement.6
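One crude way to flag potential verbatim regurgitation of training text, before any legal analysis, is to measure n-gram overlap between an output and a candidate source. The sketch below is a minimal illustrative heuristic, not the legal test of substantial similarity (which turns on protected expression, not word counts); production systems typically use fuzzy matching or embedding similarity instead.

```python
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Word n-grams of a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(output: str, source: str, n: int = 5) -> float:
    """Fraction of the output's n-grams that also appear in the source."""
    out = ngrams(output, n)
    if not out:
        return 0.0
    return len(out & ngrams(source, n)) / len(out)

source = "the quick brown fox jumps over the lazy dog near the river bank"
verbatim = "the quick brown fox jumps over the lazy dog"
original = "a slow green turtle walks under the busy bridge every morning"

print(overlap_ratio(verbatim, source))  # 1.0 -- every 5-gram is copied
print(overlap_ratio(original, source))  # 0.0 -- no shared 5-grams
```

A high overlap score is only a signal to investigate further; a low score does not establish non-infringement, since paraphrased copying of protected expression would evade it entirely.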

 

Other Intellectual Property Risks

 

The output problem extends beyond copyright. AI systems can generate outputs that infringe on other forms of IP. They can produce images containing registered trademarks or logos.1 They can create “deepfakes” or voice clones that violate an individual’s right of publicity or personality rights.1 Furthermore, if users include confidential business information or trade secrets in their prompts, that information can be incorporated into the model and potentially revealed in outputs generated for other users, leading to a catastrophic loss of trade secret protection.26

 

4.3 The Chain of Liability: Who Pays for Infringement?

 

When an AI system is involved in an IP infringement, identifying the responsible party is not straightforward. Liability can potentially attach to multiple actors in the AI ecosystem.

  • The User: The individual or entity that enters the prompt that leads to the creation of the infringing output can be held directly liable for the act of infringement.6
  • The AI Developer/Provider: The company that built the AI model, trained it on copyrighted data without permission, and made the tool available to the public could face claims of contributory infringement (for knowingly enabling the infringement) or vicarious infringement (for having the ability to control the user’s activity and receiving a financial benefit from it).2
  • The Platform Owner: A corporate entity that deploys an AI system for its business operations could also be held liable for infringements caused by its use.2

This legal ambiguity has led to a strategic response from AI providers. Many now include clauses in their terms of service that attempt to shift the full burden of liability for output infringement onto the user. In a competitive counter-move, some companies, such as Adobe with its Firefly model, are offering their enterprise customers full IP indemnification, promising to cover the legal costs if a user is sued for infringement based on content generated by their tool. This creates a powerful market incentive for using legally “safer” AI systems.26

The input and output infringement problems are deeply intertwined, creating a reinforcing cycle of legal and economic risk. The use of potentially infringing “input” data during training directly increases the probability that the model will produce infringing “output.” This creates a dual legal threat for AI companies: they face lawsuits for their training methodologies while their tools create potential liability for their customers. To mitigate the risk of generating infringing outputs, developers would need to train their models on “clean,” fully licensed datasets. However, the logistical complexity and prohibitive cost of licensing petabytes of data from millions of rights holders make this approach economically challenging.6 This economic reality creates a powerful incentive for AI companies to argue for the broadest possible interpretation of fair use for training data.

Consequently, the legal battle over fair use is not merely a retrospective debate about past training practices; it is the central strategic conflict that will determine the future cost structure, liability profile, and fundamental business model of the entire generative AI industry. A definitive loss on the fair use front would necessitate a radical re-engineering of AI development, likely leading to the emergence of a new market for “indemnified AI,” where companies that can afford to license their training data will market their tools as legally secure, premium products.

 

Part V: Charting the Future: Legislative Pathways and Strategic Recommendations

 

The current state of intellectual property law, characterized by doctrinal strain and legal uncertainty, has prompted a global conversation about the path forward. The core question is whether existing legal frameworks are flexible enough to adapt to generative AI or if this transformative technology necessitates a fundamental legislative overhaul. Policymakers, courts, and international bodies are weighing various solutions, from maintaining the status quo to creating entirely new forms of IP rights. The choices made in the coming years will shape the landscape of innovation for decades.

 

5.1 Legislative Inertia vs. Proactive Reform

 

Two opposing philosophies currently guide the approach to regulating AI and IP.

The first is a “wait-and-see” approach, which advocates for legislative restraint. Proponents of this view argue that the legal system should allow courts to adjudicate the initial wave of AI-related cases, developing a body of precedent that can provide more nuanced guidance than broad, premature legislation.6 The U.S. Copyright Office has largely adopted this stance regarding the core issue of copyrightability, concluding in its 2025 report that existing law is “adequate and appropriate” to resolve these questions on a case-by-case basis.21 This pragmatic approach prioritizes legal stability and relies on the historical adaptability of copyright law to new technologies like photography and software.

The second approach calls for proactive reform. Advocates for this position argue that the pace of technological change in AI is far too rapid for the slow, deliberative process of judicial review.1 They contend that legislative clarity is urgently needed to provide certainty for innovators, protect the rights of creators, and foster a stable investment environment.1 To date, most proposed legislation in the U.S. has focused on ancillary issues like mandatory disclosure of training data and the regulation of deepfakes, rather than tackling the foundational ownership questions head-on.7

 

5.2 The Sui Generis Option: Creating a New Class of IP

 

One of the most discussed proposals for proactive reform is the creation of a sui generis right for AI-generated works.

 

Concept and Application

 

A sui generis right—Latin for “of its own kind”—is a unique, tailor-made form of IP protection created by legislation to address specific technologies that do not fit neatly into traditional categories like copyright or patent law.79 Historical examples include the EU’s protection for databases and U.S. protection for semiconductor chip mask works.79

Applied to AI, a sui generis right could grant a limited form of protection to works generated autonomously by AI. This protection would likely be weaker than full copyright, featuring a much shorter term of protection (e.g., 5, 15, or 25 years instead of life of the author plus 70 years) and potentially conferring fewer exclusive rights.33 The goal would be to strike a balance: rewarding the substantial investment required to develop and operate generative AI systems without granting a full monopoly that could stifle follow-on innovation or devalue human creativity.80

 

International Precedent and U.S. Position

 

This is not merely a theoretical concept. In 2022, Ukraine became a pioneer in this area by amending its copyright law to introduce a sui generis right for “non-original objects generated by a computer program.” This right provides 25 years of economic protection for outputs created without direct human creative involvement.81

However, this approach faces resistance in other major jurisdictions. The U.S. Copyright Office, in its 2025 report, explicitly recommended against creating a sui generis protection for AI-generated material at this time.76 The Office reasoned that strong business incentives already exist to drive AI development, and that creating new IP rights for machine-generated content could flood the market and disincentivize human creation.76 This philosophical clash between adapting old laws (legal pragmatism) and creating new ones for a new technology (technological exceptionalism) lies at the heart of the reform debate. The path a country chooses signals its broader economic strategy: the U.S. approach aims to integrate AI into existing legal and economic structures, while the Ukrainian model seeks to create a novel legal framework specifically to foster a nascent AI industry.

 

5.3 New Registration and Disclosure Systems

 

A less radical but highly impactful reform involves modifying the administrative systems that govern IP registration. The U.S. Copyright Office’s 2023 guidance is a prime example of this approach. By mandating the disclosure of AI-generated content in copyright applications, the Office has created a new procedural requirement that allows it to enforce the human authorship doctrine without needing new legislation.22 This administrative reform acts as a powerful regulatory tool, forcing creators to be transparent about their use of AI and allowing the USCO to serve as a gatekeeper, granting protection only to the human-authored elements of a work.

 

5.4 The Role of International Bodies: WIPO’s Conversation

 

On the global stage, the World Intellectual Property Organization (WIPO) has taken a leading role in facilitating dialogue and building consensus. Through its ongoing series, the “Conversation on IP and Frontier Technologies,” WIPO provides a neutral forum for member states, technology companies, creator groups, and legal experts to discuss the challenges posed by AI.82 While WIPO does not have the authority to impose binding law, these conversations are critical for identifying key policy questions, sharing information on national approaches, and laying the groundwork for future international harmonization, whether through formal treaties or the emergence of shared norms.84

 

5.5 Strategic Recommendations

 

Navigating this complex and evolving landscape requires distinct strategies for different stakeholders.

  • For Policymakers: A prudent approach involves maintaining the existing high bar for full copyright and patent protection to uphold the value of human creativity. Simultaneously, focus on creating legal certainty around the “input” problem. This could involve establishing clear, statutory licensing frameworks for the use of copyrighted data in AI training, potentially modeled on TDM exceptions that include opt-out rights for creators and collective licensing mechanisms. This provides a middle ground between the legal uncertainty of fair use and the impracticality of individual licensing. Mandating transparency, such as disclosure of training data and clear labeling of AI-generated content, should be a legislative priority.
  • For Technology Companies: The key to long-term success lies in mitigating legal risk. Proactively seek to build AI models on licensed or public domain data to create legally defensible, premium products. Use clear and transparent terms of service that define ownership and liability, and consider offering IP indemnification as a powerful competitive differentiator. Invest heavily in developing technologies for content provenance and digital watermarking to trace the origins of AI-generated outputs.
  • For Creators and Creative Industries: Collective action is paramount. Engage actively in the development of collective licensing organizations that can efficiently license creative works to AI developers at scale, ensuring fair compensation. In AI-assisted workflows, meticulously document all stages of human creative input—from initial conception and prompt iteration to curation, selection, and post-production editing—to build the strongest possible case for human authorship and copyright protection.
  • For Investors: IP due diligence for AI companies must now go beyond analyzing their patents and software copyrights. It is essential to scrutinize the provenance and legal status of their training data, as this represents a significant and potentially existential liability. Investment theses should favor companies with clear, defensible strategies for data acquisition and those that have a robust framework for managing the IP risks associated with their outputs.
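The content-provenance recommendation above can be made concrete with a toy example: a provider signs a small manifest recording which model produced an asset, so downstream users can verify both origin and integrity. This is a simplified sketch under stated assumptions; real provenance systems (such as C2PA content credentials) use public-key certificate chains rather than a shared secret, and the key and model name below are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key -- a stand-in for a real credential.
SECRET = b"provider-signing-key"

def sign_asset(asset: bytes, model: str) -> dict:
    """Produce a signed provenance manifest for a generated asset."""
    manifest = {"sha256": hashlib.sha256(asset).hexdigest(), "model": model}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_asset(asset: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and matches the asset bytes."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(asset).hexdigest())

image = b"...generated image bytes..."
record = sign_asset(image, "example-model-v1")
print(verify_asset(image, record))                # True: intact and authentic
print(verify_asset(image + b"tamper", record))    # False: bytes were altered
```

A verifiable provenance record of this kind supports both the indemnification strategies and the disclosure obligations discussed above, since it lets a provider demonstrate which tool produced a given output.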

 

Conclusion: Synthesizing a Framework for the Future of AI and IP

 

The collision between generative artificial intelligence and intellectual property law has created a period of profound legal and economic recalibration. The foundational, human-centric principles of copyright and patent law are being tested as never before by machines that can emulate, and in some cases exceed, human capabilities in creative and inventive tasks. The central conclusion of this analysis is that, under the current global legal framework, the “billion-dollar question” of ownership has a clear, if unsatisfying, answer: in the absence of sufficient human authorship or inventorship, purely AI-generated works belong to no one, defaulting to the public domain.

This outcome is the direct result of the “human authorship” doctrine in copyright and the “human inventor” requirement in patent law, principles that have been decisively reaffirmed by the U.S. Copyright Office and a powerful global judicial consensus in the DABUS patent cases. While the question of AI inventorship appears settled for the foreseeable future, the copyrightability of AI-assisted works remains a fluid and contentious issue, hinging on the yet-to-be-defined threshold of “creative control” a human user exercises over an AI tool.

However, the most consequential legal battles are not about the ownership of outputs, but about the legality of the inputs used to train AI models and the subsequent allocation of liability for infringing outputs. The resolution of high-stakes litigation over the fair use of training data will fundamentally shape the cost structure and business models of the entire AI industry. It is here, in the trenches of infringement law, that the immediate future of AI and IP will be forged.

Looking forward, the most viable path is not a radical reinvention of IP law but a hybrid evolution. This framework will likely maintain the high bar of human creativity as the prerequisite for obtaining the full, robust protection of traditional copyright and patents, thereby preserving the core incentive for human ingenuity. Simultaneously, the immense challenge of licensing training data will necessitate the development of new, scalable solutions. These will likely take the form of contractual innovations, the expansion of collective licensing organizations, and the enactment of limited statutory licensing regimes, perhaps modeled on Text and Data Mining exceptions. This bifurcated approach—upholding traditional principles for ownership while creating pragmatic new systems for data access—offers the most promising way to balance the need to foster technological innovation with the imperative to protect the rights, and the value, of human creators in an increasingly automated world.