OpenAI Develops ChatGPT Security While UpGuard Unveils AI Risks

OpenAI is fighting a court order that would require it to hand over 20 million private ChatGPT user conversations to The New York Times. The Times, which sued OpenAI for alleged copyright infringement, claims the conversations might reveal users attempting to bypass its paywall. OpenAI counters that the demand infringes on user privacy and sets a dangerous precedent, since most of the conversations are unrelated to the lawsuit. US District Judge Stephen V. Wilson issued the order on November 7; OpenAI has challenged it, offering privacy-focused alternatives such as targeted searches, which The Times rejected. OpenAI's Chief Information Security Officer, Dane Stuckey, said on November 12, 2025, that the company will explore every option to protect user privacy and is accelerating its security roadmap, including client-side encryption.

The dispute unfolds against a backdrop of growing concern over data security and the widespread, often unapproved, use of AI tools within organizations. A report from UpGuard, also published on November 12, 2025, highlights the prevalence of "shadow AI" (unapproved AI tools) among employees: over 80% of workers, including nearly 90% of security professionals, use such tools, with executives the most frequent users. Employees often turn to shadow AI because they believe they understand the risks or find that company-approved options do not meet their needs. This reliance creates substantial cybersecurity risk: data breaches stemming from shadow AI cost an average of $670,000 more than those involving approved AI, a vulnerability that cuts across sectors such as health care and finance.

Key Takeaways

  • OpenAI is challenging a court order to provide 20 million ChatGPT user conversations to The New York Times.
  • The New York Times' lawsuit against OpenAI alleges copyright infringement.
  • OpenAI argues the demand violates user privacy and sets a dangerous precedent, offering privacy-focused alternatives.
  • US District Judge Stephen V. Wilson issued the order for the conversations on November 7.
  • OpenAI is accelerating its security roadmap, including client-side encryption, to protect user data.
  • "Shadow AI," referring to unapproved AI tools, is widely used by over 80% of employees, with executives being the highest users.
  • A report from UpGuard, published November 12, 2025, details the prevalence and risks of shadow AI.
  • Many employees use shadow AI because they trust it more than their colleagues or find approved options inadequate.
  • Data breaches caused by shadow AI cost an average of $670,000 more than those from approved AI.
  • The widespread use of shadow AI poses significant cybersecurity vulnerabilities for businesses across various sectors.

OpenAI fights New York Times demand for user chats

OpenAI is fighting a demand from The New York Times to hand over 20 million private ChatGPT conversations. The Times made the demand as part of its copyright lawsuit against OpenAI, claiming the conversations might show users trying to bypass its paywall. OpenAI believes the demand violates user privacy and sets a dangerous precedent, as most of the conversations are unrelated to the lawsuit. The company is accelerating its security roadmap, including client-side encryption, to protect user data. OpenAI's Chief Information Security Officer, Dane Stuckey, stated on November 12, 2025, that the company will explore every option to protect user privacy.
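To make the client-side encryption idea concrete, here is a minimal sketch in Python using the third-party cryptography package. It illustrates the general technique only, not OpenAI's actual design, which has not been published: the key stays on the user's device, so anything stored server-side is ciphertext that a subpoena alone cannot read.

```python
# Minimal sketch of client-side (at-rest) encryption, assuming the
# third-party "cryptography" package (pip install cryptography).
# Illustrative only; this does not reflect OpenAI's implementation.
from cryptography.fernet import Fernet

# The key is generated and kept on the user's device, never uploaded.
key = Fernet.generate_key()
cipher = Fernet(key)

message = b"user prompt: draft a note about my medical history"
ciphertext = cipher.encrypt(message)   # this is all the server would store

# Without the local key, the stored blob is unreadable.
assert cipher.decrypt(ciphertext) == message
```

One caveat worth noting: a chat service still has to process plaintext to generate replies, so client-side encryption of this kind mainly protects stored conversation history rather than data in active use.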

OpenAI challenges court order for 20 million user chats

OpenAI is challenging a court order that requires it to give 20 million ChatGPT user conversations to The New York Times. The Times and other news organizations sued OpenAI for alleged copyright infringement. OpenAI argues that these "complete conversations" are mostly unrelated to the case and that handing them over would violate user privacy, even with de-identification. US District Judge Stephen V. Wilson issued the order on November 7. The New York Times counters that the order protects user privacy and that OpenAI's own terms of service allow such access. OpenAI offered privacy-focused alternatives, such as targeted searches, but the Times rejected them.
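OpenAI's point that de-identification may not be enough is easy to demonstrate. The toy redaction pass below (the regexes are assumptions for the example, not any party's actual pipeline) strips direct identifiers such as emails and phone numbers, yet leaves quasi-identifiers that can still single a user out.

```python
import re

# Toy de-identification: remove obvious direct identifiers only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

chat = ("I'm the only cardiologist at Smallville General; call me at "
        "555-867-5309 about a patient's rare diagnosis.")
print(redact(chat))
# -> "I'm the only cardiologist at Smallville General; call me at
#     [PHONE] about a patient's rare diagnosis."
# "Only cardiologist at Smallville General" survives redaction and could
# re-identify the user, which is the crux of the privacy objection.
```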

Shadow AI use is common, especially among leaders

A new report from UpGuard reveals that "shadow AI," or unapproved AI tools, is widely used by employees, with executives using it the most. Over 80% of workers, including nearly 90% of security professionals, use these tools. Many employees, especially in health care and finance, trust AI more than their colleagues. The report, published November 12, 2025, found that employees often use shadow AI because they believe they understand the risks, making traditional security training less effective. This widespread use poses security vulnerabilities for businesses across various sectors.
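For a sense of how such usage gets surfaced in practice, here is a sketch of one common approach: scanning egress or proxy logs for traffic to known AI services that are not on the approved list. The domain list and log format are assumptions for the example; the UpGuard report does not prescribe any particular method.

```python
from collections import Counter

# Hypothetical watchlist of AI service domains and the org's approved set.
AI_DOMAINS = {"chatgpt.com", "claude.ai", "gemini.google.com", "api.mistral.ai"}
APPROVED = {"chatgpt.com"}

# Hypothetical proxy log lines in "user,destination_domain" form.
log_lines = [
    "exec1,claude.ai",
    "analyst2,chatgpt.com",
    "exec1,gemini.google.com",
]

shadow_hits = Counter()
for line in log_lines:
    user, domain = line.strip().split(",")
    if domain in AI_DOMAINS and domain not in APPROVED:
        shadow_hits[user] += 1  # unapproved AI traffic attributed to a user

print(shadow_hits.most_common())  # [('exec1', 2)] -- leaders top the list
```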

Unapproved AI tools create costly cybersecurity risks

The use of "shadow AI," or unapproved AI tools, is creating major cybersecurity risks for companies. A recent report shows that data breaches caused by shadow AI cost an average of $670,000 more than those from approved AI. Many employees use these tools because company-approved options do not meet their needs.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

Tags: OpenAI, New York Times, ChatGPT, User Privacy, Lawsuit, Data Protection, Copyright Infringement, Court Order, Shadow AI, Unapproved AI Tools, Cybersecurity Risks, Employee AI Use, Data Breaches, AI Governance
