OpenAI Accuses New York Times of Manipulating ChatGPT for False Evidence in Copyright Suit
In a recent Manhattan federal court filing, OpenAI alleges that the New York Times manipulated its ChatGPT chatbot to generate false evidence against the company, in breach of OpenAI's usage terms.
The company is requesting the dismissal of portions of the Times' copyright lawsuit against it and Microsoft, claiming the newspaper unlawfully prompted the chatbot into mimicking its content.
OpenAI criticizes the Times for falling short of its own high journalistic standards, asserts that the truth will emerge, and accuses the newspaper of hiring someone to compromise OpenAI's products. Neither OpenAI nor the Times has commented publicly beyond the filing.
The lawsuit, filed by the Times in December, contends that OpenAI and Microsoft used the newspaper's articles to train chatbots without authorization. The case is part of a broader conflict where copyright holders challenge tech firms over AI's use of their intellectual property.
Companies like OpenAI defend their practices as fair use, vital for the AI industry's expansion. The Times argues that OpenAI and Microsoft are piggybacking on its journalism to mimic the newspaper, citing instances of chatbots relaying the Times' content verbatim.
OpenAI counters that the Times' claims rest on a large number of trial prompts that produced rare, atypical outputs, emphasizing that ChatGPT does not ordinarily reproduce Times articles on demand.