AI and voice clones: Three things to know about Tennessee’s ELVIS Act

On March 21, 2024, the Governor of Tennessee signed the ELVIS Act (the Ensuring Likeness, Voice, and Image Security Act of 2024), which targets the use of AI to simulate a person’s voice without that person’s authorization.

Here are three key things to know about the new law:

(1) Voice defined.

The law adds the following definition to existing Tennessee law:

“Voice” means a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice of the individual;

There are a couple of interesting things to note. One could generate or use the voice of another without using the other person’s name; the voice simply has to be “readily identifiable” and “attributable” to a particular human. Those seem to be pretty open concepts, and we can expect quite a bit of litigation over what it takes for a voice to be identifiable and attributable to another. Would this cover situations where a person naturally sounds like another, or is simply imitating another’s musical style?

(2) Voice is now a property right.

The following underlined words were added to the existing statute:

Every individual has a property right in the use of that individual’s name, photograph, voice, or likeness in any medium in any manner.

The word “person’s” was changed to “individual’s” presumably to clarify that this is a right belonging to a natural person (i.e., real human beings and not companies). And of course the word “voice” was added to expressly include that attribute as something in which the person can have a property interest.

(3) Two new things are banned under law.

The following two paragraphs have been added:

A person is liable to a civil action if the person publishes, performs, distributes, transmits, or otherwise makes available to the public an individual’s voice or likeness, with knowledge that use of the voice or likeness was not authorized by the individual or, in the case of a minor, the minor’s parent or legal guardian, or in the case of a deceased individual, the executor or administrator, heirs, or devisees of such deceased individual.

A person is liable to a civil action if the person distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology, service, or device, the primary purpose or function of which is the production of an individual’s photograph, voice, or likeness without authorization from the individual or, in the case of a minor, the minor’s parent or legal guardian, or in the case of a deceased individual, the executor or administrator, heirs, or devisees of such deceased individual.

With this language, we see the heart of the new law’s impact. One can sue another for making his or her voice publicly available without permission. Note that this restriction is not limited to commercial use of another’s voice. Most states’ laws on name, image, and likeness restrict only commercial use by another. This statute is broader and would make more conduct unlawful — for example, creating a deepfaked voice simply for fun (or harassment, of course) when the person whose voice is being imitated has not consented.

Note the other interesting new prohibition, the one on making available tools having as their “primary purpose or function” the production of another’s voice without authorization. If you were planning on launching that new app where you can make your voice sound like a celebrity’s voice, consider whether this Tennessee statute might shut you down.

See also:

Utah has a brand new law that regulates generative AI

On March 15, 2024, the Governor of Utah signed a bill that implements new law in the state regulating the use and development of artificial intelligence.  Here are some key things you should know about the law.

  • The statute adds to the state’s consumer protection laws, which govern things such as credit services, car sales, and online dating. The new law says that anyone accused of violating a consumer protection law cannot blame it on the use of generative AI (like Air Canada apparently attempted to do back in February).
  • The new law also says that if a person involved in any act covered by the state’s consumer protection laws asks the company she’s dealing with whether she is interacting with an AI, the company has to clearly and conspicuously disclose that fact.
  • And the law says that anyone providing services as a regulated occupation in the state (for example, an architect, surveyor or a therapist) must disclose in advance any use of generative AI. The statute outlines the requirements for these notifications.
  • In addition to addressing consumer protection, the law also establishes a plan for the state to further innovation in artificial intelligence. The new law introduces a regulatory framework for an AI learning laboratory to investigate AI’s risks and benefits and to guide the regulation of AI development.
  • The statute discusses requirements for participation in the program and also provides certain incentives for the development of AI technologies, including “regulatory mitigation” to adjust or ease certain regulatory requirements for participants and reduce potential liability.

This law is the first of its kind, and other states are likely to enact similar laws. Much more to come on this topic.

Lawyers and AI: Key takeaways from being on a panel at a legal ethics conference

Earlier today I was on a panel at Hinshaw & Culbertson’s LMRM Conference in Chicago. This was the 23rd annual LMRM Conference, and the event has become the gold standard for events that focus on the “law of lawyering.”

Our session was titled How Soon is Now—Generative AI, How It Works, How to Use it Now, How to Use it Ethically. Preparing for and participating in the event gave me the opportunity to seriously consider some of the key issues relating to how lawyers are using generative AI and the promise that wider future adoption of these technologies in the legal industry holds.

Here are a few key takeaways:

    • Effective use. Lawyers are already using generative AI in ways that aid efficiency. The technology can summarize complex texts during legal research, allowing the attorney to quickly assess if the content addresses her specific interests, is factually relevant, and aligns with desired legal outcomes. With a carefully crafted and detailed prompt, an attorney can generate a pretty good first draft of many types of correspondence (e.g., cease and desist letters). Tools such as ChatGPT can aid in brainstorming by generating a variety of ideas on a given topic, helping lawyers consider possible outcomes in a situation.

 

    • Access to justice. It is not clear how generative AI adoption will affect access to justice. It is possible that something like “legal chatbots” could bring formerly unavailable legal help to parties without the resources to hire expensive lawyers. But the building and adoption of sophisticated tools by the most elite firms will come at a cost that is passed on to clients, making premium services even more expensive and thereby widening the divide that already exists.

 

    • Confidentiality and privacy. Care must be taken to reduce the risk of unauthorized disclosure of information when law firms adopt generative AI tools. Data privacy concerns arise regardless of the industry in which generative AI is used. But lawyers have the additional obligation to preserve their clients’ confidential information in accordance with the rules governing the attorney-client relationship. This duty of confidentiality complicates the ways in which a law firm’s “enterprise knowledge” can be used to train a large language model. And lawyers must consider whether and how to let their clients know that the client’s information may be used to train the model.

 

    • Exposing lawyering problems. Cases such as Mata v. Avianca, Park v. Kim and Kruse v. Karlen — wherein lawyers or litigants used AI to generate documents submitted to the court containing non-existent case citations (hallucinations) — tend to be used to critique these kinds of tools and tend to discourage lawyers from adopting them. But if one looks at these cases carefully, it is apparent that the problem is not so much with the technology, but instead with lawyering that lacks the appropriate competence and diligence.

    • AI and the standard of the practice. There is plenty of data suggesting that most knowledge work jobs will be drastically impacted by the use of AI in the near term. Regardless of whether a lawyer or law firm wants to adopt generative AI in the practice of law, attorneys will not be able to avoid knowing how the use of AI will change norms and expectations, because clients will be effectively using these technologies and innovating in the space.

Thank you to Barry MacEntee for inviting me to be on his panel. Barry, you did an exemplary job of preparation and execution, which is exactly how you roll. Great to meet my co-panelist Andrew Sutton. Andrew, your insights and commentary on both the legal and technical aspects of the use of AI in the practice of law were terrific.

VIDEO: Elon Musk / OpenAI lawsuit – What’s it all about?

 

So Elon Musk has sued OpenAI. What’s this all about?

The lawsuit centers on the breach of a founding agreement and OpenAI’s shift from non-profit to for-profit through partnerships with companies like Microsoft. It was filed in state court in California and discusses the risks of artificial general intelligence (or AGI). It recounts how Musk worked with Sam Altman back in 2015 to form OpenAI for the public good. That was the so-called “Founding Agreement,” which also got written into the company’s certificate of incorporation. One of the most intriguing things about the lawsuit is that Musk is asking the court to determine that OpenAI’s technology constitutes artificial general intelligence and has thereby gone outside the initial scope of the Founding Agreement.

ChatGPT was “utterly and unusually unpersuasive” in case involving recovery of attorney’s fees


In a recent federal case in New York under the Individuals with Disabilities Education Act, plaintiff prevailed on her claims and sought an award of attorney’s fees under the statute. Though the court ended up awarding plaintiff’s attorneys some of their requested fees, the court lambasted counsel in the process for using information obtained from ChatGPT to support the claim of the attorneys’ hourly rates.

Plaintiff’s firm used ChatGPT-4 as a “cross-check” against other sources in confirming what should be a reasonable hourly rate for the attorneys on the case. The court found this reliance on ChatGPT-4 to be “utterly and unusually unpersuasive” for determining reasonable billing rates for legal services. The court criticized the firm’s use of ChatGPT-4 for not adequately considering the complexity and specificity required in legal billing, especially given the tool’s inability to discern between real and fictitious legal citations, as demonstrated in recent past cases within the Second Circuit.

In Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023) the district court judge sanctioned lawyers for submitting fictitious judicial opinions generated by ChatGPT, and in Park v. Kim, — F.4th —, 2024 WL 332478 (2d Cir. January 30, 2024) an attorney was referred to the Circuit’s Grievance Panel for citing non-existent authority from ChatGPT in a brief. These examples highlighted the tool’s limitations in legal contexts, particularly its inability to differentiate between real and fabricated legal citations, raising concerns about its reliability and appropriateness for legal tasks.

J.G. v. New York City Dept. of Education, 2024 WL 728626 (February 22, 2024)

See also:

Using AI generated fake cases in court brief gets pro se litigant fined $10K


Plaintiff sued defendant and won on summary judgment. Defendant sought review with the Missouri Court of Appeals. On appeal, the court dismissed the appeal and awarded damages to plaintiff/respondent because of the frivolousness of the appeal.

“Due to numerous fatal briefing deficiencies under the Rules of Appellate Procedure that prevent us from engaging in meaningful review, including the submission of fictitious cases generated by [AI], we dismiss the appeal.” With this, the court began its roast of the pro se appellant’s conduct.

The court detailed appellant’s numerous violations of the applicable Rules of Appellate Procedure. The appellate brief was unsigned, had no required appendix, and had an inadequate statement of facts. It also failed to provide points relied on and a detailed table of cases, statutes, and other authorities.

But the court made the biggest deal about how “the overwhelming majority of the [brief’s] citations are not only inaccurate but entirely fictitious.” Only two out of the twenty-four case citations in the brief were genuine.

Though appellant apologized for the fake cases in his reply brief, the court was not moved, because “the deed had been done.” It characterized the conduct as “a flagrant violation of the duties of candor” appellant owed to the court, and an “abuse of the judicial system.”

Because appellant “substantially failed to comply with court rules,” the court dismissed the appeal and ordered appellant to pay $10,000 in damages for filing a frivolous appeal.

Kruse v. Karlen, — S.W.3d —, 2024 WL 559497 (Mo. Ct. App. February 13, 2024)

See also:

GenAI and copyright: Court dismisses almost all claims against OpenAI in authors’ suit


Plaintiff authors sued large language model provider OpenAI and related entities for copyright infringement, alleging that plaintiffs’ books were used to train ChatGPT. Plaintiffs asserted six causes of action against various OpenAI entities: (1) direct copyright infringement, (2) vicarious infringement, (3) violation of Section 1202(b) of the Digital Millennium Copyright Act (“DMCA”), (4) unfair competition under  Cal. Bus. & Prof. Code Section 17200, (5) negligence, and (6) unjust enrichment.

OpenAI moved to dismiss all of these claims except for the direct copyright infringement claim. The court granted the motion as to almost all of them.

Vicarious liability claim dismissed

The court dismissed the claim for vicarious liability because plaintiffs did not successfully plead that direct copying occurs from use of the software. Citing A&M Recs., Inc. v. Napster, Inc., 239 F.3d 1004, 1013 n.2 (9th Cir. 2001), aff’d, 284 F.3d 1091 (2002), the court noted that “[s]econdary liability for copyright infringement does not exist in the absence of direct infringement by a third party.” More specifically, the court dismissed the claim because plaintiffs had neither alleged direct copying when the outputs are generated nor alleged “substantial similarity” between the ChatGPT outputs and plaintiffs’ works.

DMCA claims dismissed

The DMCA – at 17 U.S.C. 1202(b) – requires a defendant’s knowledge or “reasonable grounds to know” that the defendant’s removal of copyright management information (“CMI”) would “induce, enable, facilitate, or conceal an infringement.” Plaintiffs alleged that, “by design,” OpenAI removed CMI from the copyrighted books during the training process. But the court found that plaintiffs provided no factual support for that assertion. Moreover, the court found that even if plaintiffs had successfully asserted such facts, they had not provided any facts showing how the omitted CMI would induce, enable, facilitate, or conceal infringement.

The other portion of the DMCA relevant to the lawsuit – Section 1202(b)(3) – prohibits the distribution of a plaintiff’s work without the plaintiff’s CMI included. In rejecting plaintiffs’ assertions that defendants violated this provision, the court looked to the plain language of the statute. It noted that liability requires distributing the original “works” or “copies of [the] works.” Plaintiffs had not alleged that defendants distributed their books or copies of their books. Instead, they alleged that “every output from the OpenAI Language Models is an infringing derivative work” without providing any indication as to what such outputs entail – i.e., whether they were the copyrighted books or copies of the books.

Unfair competition claim survived

Plaintiffs asserted that defendants had violated California’s unfair competition statute based on “unlawful,” “fraudulent,” and “unfair” practices. The unlawful and fraudulent theories relied on the DMCA claims, which the court had already dismissed, so the unfair competition claim could not move forward on those grounds. But the court did find that plaintiffs had alleged sufficient facts to support the claim that it was “unfair” to use plaintiffs’ works without compensation to train the ChatGPT model.

Negligence claim dismissed

Plaintiffs alleged that defendants owed them a duty of care based on the control of plaintiffs’ information in their possession and breached their duty by “negligently, carelessly, and recklessly collecting, maintaining, and controlling systems – including ChatGPT – which are trained on Plaintiffs’ [copyrighted] works.” The court dismissed this claim, finding that there were insufficient facts showing that defendants owed plaintiffs a duty in this situation.

Unjust enrichment claim dismissed

Plaintiffs alleged that defendants were unjustly enriched by using plaintiffs’ copyright-protected works to train the large language model. The court dismissed this claim because plaintiffs had not alleged sufficient facts to show that they had conferred any benefit on OpenAI through “mistake, fraud, or coercion.”

Tremblay v. OpenAI, Inc., 2024 WL 557720 (N.D. Cal., February 12, 2024)

See also:

ChatGPT providing fake case citations again – this time at the Second Circuit

Plaintiff sued defendant in federal court but the court eventually dismissed the case because plaintiff continued to fail to properly respond to defendant’s discovery requests. So plaintiff sought review with the Second Circuit Court of Appeals. On appeal, the court affirmed the dismissal, finding that plaintiff’s noncompliance in the lower court amounted to “sustained and willful intransigence in the face of repeated and explicit warnings from the court that the refusal to comply with court orders … would result in the dismissal of [the] action.”

But that was not the most intriguing or provocative part of the court’s opinion. The court also addressed the conduct of plaintiff’s lawyer, who admitted to using ChatGPT to help her write a brief before the appellate court. The AI assistance betrayed itself when the court noticed that the brief contained a non-existent case. Here’s the mythical citation: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014).

When the court called her out on the legal hallucination, plaintiff’s attorney admitted to using ChatGPT, to which she was a “subscribed and paying member,” but emphasized that she “did not cite any specific reasoning or decision from [the Bourguignon] case.” Unfortunately, counsel’s assertions did not blunt the court’s wrath.

“All counsel that appear before this Court are bound to exercise professional judgment and responsibility, and to comply with the Federal Rules of Civil Procedure,” read the court’s opinion as it began its rebuke. It reminded counsel that the rules of procedure impose a duty on attorneys to certify that they have conducted a reasonable inquiry and have determined that any papers filed with the court are legally tenable. “At the very least,” the court continued, attorneys must “read, and thereby confirm the existence and validity of, the legal authorities on which they rely.” Citing to a recent case involving a similar controversy, the court observed that “[a] fake opinion is not ‘existing law’ and citation to a fake opinion does not provide a non-frivolous ground for extending, modifying, or reversing existing law, or for establishing new law. An attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system.”

The court considered the matter so severe that it referred the attorney to the court’s Grievance Panel, for that panel to consider whether to refer the situation to the court’s Committee on Admissions and Grievances, which would have the power to revoke the attorney’s admission to practice before that court.

Park v. Kim, — F.4th —, 2024 WL 332478 (2d Cir. January 30, 2024)

See also:

Generative AI executive who moved to competitor slapped with TRO


Generative AI is obviously a quickly growing segment, and competition among businesses in the space is fierce. As companies race to harness the transformative power of this technology, attracting and retaining top talent becomes a central battleground. Recent legal cases, like the newly-filed Kira v. Samman in Virginia, show just how intense the scramble for expertise has become. In the court’s opinion granting a temporary restraining order against a departing executive and the competitor to which he fled, we see some of the dynamics of non-competition clauses, and the lengths companies will go to in order to safeguard their intellectual property and strategic advantages, particularly in dealing with AI technology.

Kira and Samman Part Ways

Plaintiff Kira is a company that creates AI tools for law firms, while defendant DeepJudge AG offers comparable AI solutions to boost law firm efficiency.  Kira hired defendant Samman, who gained access to Kira’s confidential data. Samman had signed a Restrictive Covenants Agreement with Kira containing provisions that prohibited him from joining a competitor for 12 months post-termination. Samman resigned from Kira in June 2023, and Kira claimed he joined competitor DeepJudge after sending Kira’s proprietary data to his personal email.

The Dispute

Kira sued Samman and DeepJudge in federal court, alleging Samman breached his contractual obligations, and accusing DeepJudge of tortious interference with a contract. Kira also sought a temporary restraining order (TRO) to prevent Samman from working for DeepJudge and to mandate the return and deletion of Kira’s proprietary information in Samman’s possession.

Injunctive Relief Was Proper

The court observed that to obtain the sought-after injunction, Kira had to prove, among other things, a likelihood of success at trial. It found that Kira demonstrated this likelihood concerning Samman’s breach of the non-competition restrictive covenant, determining that covenant to be enforceable because it met specific requirements, including advancing Kira’s economic interests. The evidence showed that Samman, after leaving his role at Kira, joined a direct competitor, DeepJudge, in a similar role, thus likely violating the non-competition restrictive covenant.

The court found that Kira faced irreparable harm without the injunction, especially given the potential loss of clients due to Samman’s knowledge of confidential information. The court weighed the balance of equities in favor of Kira, emphasizing the protection of confidential business information and enforcement of valid contracts. It required Kira to post a bond of $15,000, to ensure coverage for potential losses Samman might face due to the injunction.

Kira (US) Inc. v. Samman, 2023 WL 4687189 (E.D. Va. July 21, 2023)
