Negotiating technology and AI agreements: “We have all been here before”


When it comes to AI issues in technology contracts, both vendors and customers tend to over-negotiate because the frameworks for understanding and allocating risk are still developing.

“It’s getting to the point where I’m no fun anymore.”

By now, the scenario is familiar.

A technology vendor, driven by a combination of innovation, competition, and market expectation, rolls out a new AI-enabled feature and presents a revised statement of work (SOW) or contract amendment to govern it.

You review the paper and find pages of disclaimers, exclusions, and carefully constructed liability protections. You mark it up. It comes back with twice the redlines. You revise again. A few more turns follow and, before long, both sides are entrenched in positions that feel increasingly immovable.

Limitation of liability provisions expand. Indemnities proliferate. Warranty disclaimers become Byzantine. What began as a relatively straightforward technology negotiation starts to feel defensive, abstract, and strangely personal. At some point, everyone involved is tempted to ask the same question: why has this become so complicated?

Part of the answer is that neither side is entirely certain what it is actually protecting against. The vendor is trying to avoid liability for the inherent unpredictability of probabilistic systems. The customer is trying to protect confidential information, intellectual property, and operational integrity. Both concerns are legitimate.

But uncertainty has a way of distorting negotiations. When parties cannot clearly distinguish between manageable risks and inherent characteristics of the technology itself, they tend to negotiate broadly, defensively, and sometimes imprecisely. The contract becomes less about operational reality and more about managing anxiety in sophisticated language.

“Questions of a thousand dreams, what you do and what you see.”

The vendor’s concern is real.

AI systems produce outputs that may be inaccurate, incomplete, misleading, or inconsistent. No serious vendor can realistically guarantee perfect results from a probabilistic system, nor can every AI-generated output be subjected to meaningful human review before use.

At the same time, vendors understandably want to promote the transformative potential of their products. The commercial pressure surrounding AI is enormous. Vendors want the upside associated with AI-enabled functionality without assuming unlimited liability for the technology’s inherent limitations.

The customer’s concern is equally real.

When proprietary business information enters an AI environment, customers need to understand where that information is stored, how long it is retained, whether it is used for model training, and whether it could be disclosed, repurposed, or absorbed into broader datasets controlled by the vendor or its underlying model providers. Those concerns are not theoretical. They implicate confidentiality obligations, intellectual property rights, regulatory exposure, and operational trust.

The underlying problem is not that either side is wrong. The problem is that the market is still developing the vocabulary and frameworks necessary to separate distinct categories of risk from one another. Accuracy risk is different from security risk. Hallucination risk is different from confidentiality risk. A flawed AI-generated answer is not the same thing as a data breach.

Yet negotiations often collapse those distinctions into broad contractual abstractions. A vendor disclaimer intended to address AI unpredictability can quietly become language broad enough to shield misconduct, weak controls, or preventable failures. Customers respond with increasingly expansive protections of their own. The drafting escalates. The precision decreases. The negotiation slowly drifts away from the operational realities the contract is supposed to govern.

“If I had ever been here before, I would probably know just what to do.”

Of course, we have seen versions of this before.

Digital signatures and electronic commerce produced similar uncertainty in the late 1990s and early 2000s. Parties questioned whether electronic signatures were enforceable, whether identity could be reliably authenticated, and whether online transactions created unacceptable legal exposure. Negotiations became heavily burdened with protective language and contingency planning. Then legislation developed, most notably UETA and the federal E-SIGN Act. Courts clarified principles. Market norms emerged. What once felt revolutionary became routine.

Electronic discovery followed a similar path. When the Federal Rules of Civil Procedure were amended to address electronically stored information, parties became intensely focused on metadata, preservation obligations, and search protocols. Negotiations expanded accordingly. Over time, practitioners developed standards, courts established boundaries, and the market learned the difference between practical risk and theoretical exposure.

Cloud computing generated another cycle. Early negotiations over remote data hosting often became sprawling debates over data location, audit rights, security controls, and liability allocation. Eventually, standardized frameworks emerged. Concerns did not disappear, but they became more proportionate to the actual operational realities involved.

The pattern repeats because technological disruption tends to produce the same initial imbalance: innovation advances faster than the frameworks used to evaluate and allocate risk.

“We have all been here before.”

AI now represents the newest version of that imbalance.

What exactly is an AI-generated output? Who owns it? To what extent may vendors train models on customer information? Can de-identified or aggregated customer data be used for model improvement without implicating confidentiality obligations or intellectual property rights? What operational obligations arise if an AI-enabled feature is suspended for security reasons? The legal answers to these questions are still developing. So are the commercial norms.

Right now, parties are negotiating in the middle of that uncertainty. Templates are evolving in real time. Lawyers are drafting frameworks before the market has fully agreed on its definitions. That uncertainty naturally creates overreach on both sides, but it also creates the conditions from which durable standards eventually emerge.

“It’s been a long time comin’.”

History suggests where this goes next.

As parties accumulate practical experience with AI implementation, negotiations will become more precise. The market will gradually distinguish between theoretical concerns and operationally significant risks. Standardized approaches will emerge around issues like training data, confidentiality protections, liability allocation, acceptable use restrictions, and model governance.

Negotiations that currently feel novel and highly bespoke will become increasingly familiar. This is what happened with electronic commerce, e-discovery, and cloud computing. The technologies changed. The pattern did not. Over time, markets learn how to absorb uncertainty.

“So much time to make up . . .”

The path toward proportionate contracting begins with understanding.

Good lawyers focus on protecting clients from risk. Great lawyers focus on understanding which risks actually matter, which risks can realistically be controlled, and how those risks should be allocated in proportion to legitimate business objectives.

That requires more than aggressive drafting. It requires curiosity. It requires operational understanding. It requires the discipline to distinguish between language that meaningfully reduces risk and language that merely expresses discomfort with uncertainty. What is the client actually trying to accomplish? What systems are being used? What information is entering the model? What safeguards already exist? Which risks are structural characteristics of probabilistic systems and which arise from preventable implementation failures?

Those distinctions matter. Without them, negotiations can become increasingly sophisticated while simultaneously becoming less precise.

“Carry on, love is coming.”

The current intensity surrounding AI negotiations is real, but it is also temporary.

The market is still learning how to evaluate AI risk with proportionality and precision. Until those norms stabilize, parties will continue negotiating aggressively around uncertainty that neither side fully knows how to measure.

But equilibrium will emerge. Shared frameworks will develop. Market expectations will mature. Contract language will become more standardized. Negotiations that currently consume enormous time and energy will eventually feel as ordinary as cloud computing negotiations do today.

That is not because the risks disappear. It is because experience gradually teaches markets which risks are manageable, which risks are inherent, and which protections are commercially reasonable.

We have seen this pattern repeatedly whenever transformative technologies disrupt established legal and operational assumptions. AI is unlikely to be the exception. It is simply the latest turn in a very familiar cycle.
