OpenAI's $15,000 Model Inspection Cap Sparks Legal Battle with New York Times

A contentious dispute has emerged in the legal battle between The New York Times (NYT) and OpenAI over the cost of inspecting AI models during court proceedings. The case highlights growing concerns about transparency and accountability in artificial intelligence systems.

The NYT filed a copyright lawsuit against OpenAI over ChatGPT. As part of the discovery process, OpenAI proposed allowing the NYT's expert to examine its AI models in a secure, isolated environment. The inspection, however, came with a condition: a $15,000 cap on API queries, after which the NYT would need to pay half of retail prices for additional testing.

The proposal drew strong opposition from the NYT, which claims it would need approximately $800,000 worth of retail credits to properly investigate its case. The newspaper accused OpenAI of attempting to "hide its infringement" behind excessive costs while refusing to disclose its actual operational expenses.
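To make the proposal's arithmetic concrete, here is a minimal sketch of the cost structure as described above: queries up to the cap are covered, and usage beyond it is billed at half of retail. The dollar figures mirror the ones reported in this article; the function itself is purely illustrative.

```python
def nyt_inspection_cost(retail_usage: float,
                        cap: float = 15_000,
                        discount: float = 0.5) -> float:
    """Estimated out-of-pocket cost under OpenAI's proposal:
    usage up to the cap is covered, and anything beyond it
    is billed at half of retail price. Figures are illustrative,
    taken from the amounts reported in this article."""
    return max(0.0, retail_usage - cap) * discount

# The NYT's claimed testing need of ~$800,000 in retail credits:
print(f"${nyt_inspection_cost(800_000):,.0f}")  # -> $392,500
```

Under a straightforward reading of those terms, the newspaper's own usage estimate would still leave it facing roughly $392,500 in testing costs.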

OpenAI defended its position by arguing that the cap helps prevent unnecessary "fishing expeditions" and reduces the burden on the company. The AI company maintains that the NYT's requested scope of testing is "arbitrary and unsubstantiated."

The outcome of this dispute could set an important precedent for future AI litigation. If courts allow companies to charge retail prices for model inspection, it may create significant financial barriers for plaintiffs seeking to investigate potential AI harms.

Lucas Hansen from CivAI notes that while public models can be readily examined, accessing original models through APIs is often necessary to uncover training data sources, a key focus of the NYT lawsuit. Developers have reported, however, that OpenAI's API pricing makes extensive testing prohibitively expensive.
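To illustrate the kind of probing Hansen describes, the sketch below feeds a model the opening of a known article and measures how closely the continuation matches the original text, a common way to test for memorized training data. It is a minimal sketch assuming the current OpenAI Python SDK; the model name and prompt are illustrative, and this is not the methodology of the NYT's experts.

```python
import difflib

from openai import OpenAI  # assumes the openai SDK (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def memorization_score(article_opening: str,
                       known_continuation: str,
                       model: str = "gpt-4o") -> float:
    """Prompt the model with an article's opening and return how
    closely its continuation matches the real text (0.0 to 1.0)."""
    response = client.chat.completions.create(
        model=model,  # illustrative model name
        messages=[{"role": "user", "content": article_opening}],
        max_tokens=200,
        temperature=0,  # deterministic output makes verbatim matches visible
    )
    continuation = response.choices[0].message.content or ""
    return difflib.SequenceMatcher(None, continuation,
                                   known_continuation).ratio()
```

A high similarity ratio suggests the passage may have appeared in training data; running such probes at scale is precisely the kind of testing that retail-priced API access would make expensive.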

As AI technology continues to rapidly evolve and impact society, this case underscores the challenges of balancing corporate interests with public accountability. The ability to thoroughly inspect AI models may become increasingly critical as concerns about AI-related harms grow.

Both the NYT's legal team and OpenAI have declined to comment on the ongoing litigation.
