Does a human who edits an AI-created work become a joint author with the AI?


If a human edits a work that an AI initially created, is the human a joint author under copyright law?

U.S. copyright law (at 17 U.S.C. § 101) defines a “joint work” as a work prepared by two or more authors with the intention that their contributions be merged into a unitary whole. So, if a human significantly modifies or edits content that an AI originally created, one might think the human has made a big enough contribution to be considered a joint author. But it is not that straightforward. The law looks for a special kind of input: it must be original and creative, not merely technical or mechanical. For instance, simply selecting options for the AI or doing basic editing might not cut it. But if the human’s editing changes the work in a creative way, the human might well qualify as a joint author.

Where the human steps in.

This blog post is a clear example. ChatGPT created all the other paragraphs of this blog post (i.e. not this one). I typed this paragraph out from scratch. I have gone through and edited the other paragraphs, making what are obviously mechanical changes. For example, I didn’t like how ChatGPT used so many contractions. I mean, I did not like how ChatGPT used so many contractions. I suspect those are not the kind of “original” contributions that the Copyright Act’s authors had in mind to constitute the level of participation to give rise to a joint work. But I also added some sentences here and there, and struck some others. I took the photo that goes with the post, cropped it, and decided how to place it in relation to the text. Those activities are likely “creative” enough to be copyrightable contributions to the unitary whole that is this blog post. And then of course there is this paragraph that you are just about done reading. Has this paragraph not contributed some notable expression to make this whole blog post better than what it would have been without the paragraph?

Let’s say the human editing does indeed make the human a joint author. What rights would the human have? And how would these rights compare to any the AI might have? Copyright rights are generally held by human creators. This means the human would have rights to copy the work, distribute it, display or perform it publicly, and make derivative works.

Robot rights.

As for the AI, here’s where things get interesting. U.S. copyright law generally does not recognize AI systems as authors, so an AI would not have any rights in the work. But this is a rapidly evolving field, and there is ongoing debate about how the law should treat creations made by AI.

This leaves us in a peculiar situation. You have a “joint work” that a human and an AI created together, but only the human can be an author. So, as it stands, the AI would not have any rights in the work, and the human would. Here’s an interesting nuance to consider: authors of joint works are generally free to exploit the work as they see fit, so long as they fulfill certain obligations to their co-authors (e.g., accounting for any royalties received). Would the human owner have to fulfill these obligations to the purported AI author of the joint work? We cannot fairly address that question without first establishing that the AI system can be a joint author in the first place.

Where we go from here.

It seems reasonable to conclude that a human editing AI-created content might qualify as a joint author if the changes are significant and creative, not just technical. If that’s the case, the human would have full copyright rights under current law, while the AI would not have any. As these human-machine collaborations become more commonplace, we will see how law and policy evolve: either to strengthen the position that only “natural persons” (humans) can own intellectual property rights, or to move in the direction of granting some sort of “personhood” to non-human agents. It is like watching science fiction unfold in real time.

What do you think?

See also:

Five legal issues around using AI in a branding strategy

The use of AI in the domain name industry


Artificial intelligence has important uses in the domain name industry. AI has made domain name registration, management, and valuation more efficient and accurate. Here are some specific ways AI is affecting domain names:

  • Domain name suggestion and search optimization: AI-powered domain name generators can suggest relevant and available domain names based on specific keywords, making the search process easier and faster for businesses and individuals. Additionally, AI algorithms can optimize search results based on user behavior and preferences, making it easier for potential customers to find the right domain name for their needs.

  • Domain name valuation: AI algorithms can analyze and evaluate domain names based on various factors such as age, traffic, and backlinks, among others. This information is valuable for domain name investors and businesses looking to acquire domain names that align with their branding strategies.

  • Domain name security: AI-powered security tools can detect and prevent domain name fraud and phishing attacks. These tools can identify suspicious behavior, such as attempts to hijack a domain name, and alert domain name owners and security teams to take necessary actions.

  • Domain name portfolio management: AI algorithms can help businesses and individuals manage their domain name portfolios more efficiently by providing insights on which domain names to renew, which to drop, and which to acquire. This information can help businesses save money and optimize their domain name strategies.

AI is transforming the domain name industry by making it more efficient, secure, and cost-effective. Domain name registrars, investors, and businesses can leverage AI-powered tools to find, evaluate, and manage domain names more effectively, making the process easier and faster for all involved. We can expect even more innovations in the domain name industry in the years to come.

Five legal issues around using AI in a branding strategy


The ability of AI to gather, analyze, and interpret large sets of data can lead to invaluable insights and efficiencies. But as businesses increasingly rely on AI to develop and execute branding strategies, they must be aware of the potential legal issues that can arise. Here are five issues to consider:

  • Data Protection and Privacy Laws: AI systems often require vast amounts of data to operate effectively, much of which may be personal data collected from customers. This brings into play data protection and privacy laws, such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States. Non-compliance with these laws can lead to substantial fines and reputational damage. So businesses must seek to ensure that their use of AI complies with all applicable data protection and privacy laws.

  • Intellectual Property Rights: AI systems can generate content, designs, or even brand names. But who owns the rights to this AI-generated output? This is a complex and evolving area of law, with different jurisdictions taking different approaches. Businesses should consider intellectual property issues both in protecting their own rights and in avoiding infringement of the rights of others.

  • Bias and Discrimination: AI systems learn from the data on which they are trained. If this data contains biases, the AI system can amplify these biases, leading to potentially discriminatory outcomes. This not only has ethical implications but also legal ones. In many jurisdictions, businesses can be held liable for discriminatory practices, even if unintentional. Businesses should ensure their AI systems are trained on diverse and representative data sets and regularly audited for bias.

  • Transparency and Explainability: Many jurisdictions are considering regulations that require AI systems to be transparent and explainable. This means that businesses must be able to explain how their AI systems make decisions. If a customer feels unfairly treated by an AI system, the business may need to justify the AI’s decision-making process. Compliance with these requirements can be challenging, particularly with complex AI systems.

  • Contractual Obligations and Liability: When businesses use third-party AI systems, it is crucial to clearly understand and define who is responsible if something goes wrong. This includes potential breaches of data protection laws, intellectual property infringement, and any harm caused by the AI system. Businesses should ensure their contracts with AI vendors clearly outline the responsibilities and liabilities of each party.

While AI presents numerous opportunities for enhancing a branding strategy, it also introduces a range of legal considerations. Businesses must navigate these potential legal pitfalls carefully so that they can leverage the power of AI while minimizing legal risk.

Federal Circuit holds AI not an inventor


The United States Court of Appeals for the Federal Circuit held that an artificial intelligence system cannot be named as an inventor under the Patent Act because the statute limits inventors to natural persons.

Stephen Thaler sued the United States Patent and Trademark Office and its director after the PTO rejected his patent applications listing his AI system, DABUS, as the sole inventor. Thaler argued that the agency wrongly refused to accept applications without a human inventor.

Plaintiff asked the court to reverse the PTO’s decision, reinstate the patent applications, and conclude that an AI software system may qualify as an inventor under the Patent Act.

Federal Circuit’s ruling

The court ruled that the PTO properly denied the applications and that the district court correctly granted summary judgment to defendants because only a natural person may be an inventor under federal patent law.

An individual has to be a person

The court ruled this way because the Patent Act defines an inventor as an “individual,” and the statute’s text, surrounding provisions, and prior precedent all show that “individual” means a human being, not a machine or software system. The court also rejected plaintiff’s policy and constitutional arguments because the statutory language was unambiguous.

Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. Aug. 5, 2022)
