Salesforce is facing multiple class-action lawsuits from authors, including Molly Tanzer and Jennifer Gilmore, who allege the company used their copyrighted books without permission to train its AI models, specifically the xGen models. The lawsuits claim Salesforce relied on datasets such as RedPajama and The Pile, which include the Books3 collection of more than 196,000 books. Attorney Joseph Saveri is leading these actions, which mirror similar legal challenges against tech giants like OpenAI and Meta. The authors seek damages and injunctions, arguing that their intellectual property rights have been infringed.

Meanwhile, the broader AI landscape sees varied developments: OpenAI's Sora app is sparking discussions about its impact on social media and content creation, while at MIPCOM, AI was presented as an opportunity for the media industry, with Vitpepper Studios premiering the first fully AI-generated series. In education, Long Island fifth graders received AI-generated photos of their future selves. The AI boom's demand for resources is also affecting the copper market amid U.S.-China trade tensions, with experts warning of potential supply bottlenecks. Companies like One New Zealand are implementing mandatory AI training for employees to ensure responsible use, covering fairness, privacy, and security.

The intense power demands of AI, particularly for training large language models, are driving the need for collaborative energy systems and grid modernization. Security risks are also emerging, with insecure tokens in Model Context Protocol (MCP) workflows creating potential backdoors for attackers. On a more positive note, venture capitalist Marc Andreessen believes AI will empower a new generation of filmmakers by democratizing complex tasks. However, the use of AI-generated code has caused a split in the popular Doom engine port GZDoom, with developers raising concerns about licensing violations and introduced bugs.
Key Takeaways
- Salesforce is being sued by authors, including Molly Tanzer and Jennifer Gilmore, for allegedly using copyrighted books without permission to train its AI models like xGen.
- The lawsuits claim Salesforce used datasets such as RedPajama and The Pile, which include the Books3 collection of more than 196,000 books, and seek damages for copyright infringement.
- Similar lawsuits have been filed against other major AI companies, including OpenAI and Meta, highlighting broader concerns about AI training data.
- OpenAI's new app, Sora, is generating discussion about its potential effects on social media and content creation, with some experts viewing it as addictive and focused on aesthetics over substance.
- At the MIPCOM media event, AI was largely viewed as an opportunity, with the premiere of the world's first fully AI-generated series, Tesseract, by Vitpepper Studios.
- The increasing demand for AI is driving up the need for copper, a critical metal, leading to concerns about supply chain challenges and potential bottlenecks due to U.S.-China trade tensions.
- The significant power consumption of AI, especially for training large language models, necessitates collaborative energy systems and upgrades to data center power infrastructure.
- Companies like One New Zealand are introducing mandatory training programs for all employees to ensure the responsible and safe use of AI.
- Security vulnerabilities are being identified in AI workflows, particularly insecure tokens in Model Context Protocol (MCP) integrations that could create backdoors for attackers.
- The use of AI-generated code has led to a split within the GZDoom project, with developers raising concerns about compliance with the GNU General Public License (GPL) and the potential for introduced bugs.
Authors sue Salesforce over AI training data
The Joseph Saveri Law Firm filed a class-action lawsuit against Salesforce on behalf of authors Molly Tanzer and Jennifer Gilmore. The suit claims Salesforce used copyrighted books from datasets like Books3 to train its AI models without permission. This is the eleventh such lawsuit filed by the firm regarding AI training data. The lawsuit alleges Salesforce acknowledged using the Books3 dataset and cites CEO Marc Benioff's comments about 'stolen' data. Thousands of copyright holders could be part of the proposed class.
Authors sue Salesforce for AI model training
Authors Molly Tanzer and Jennifer Gilmore have sued Salesforce, alleging the company used thousands of copyrighted books to train its xGen AI models without permission. The lawsuit claims this infringes on their intellectual property rights and constitutes unfair competition. This case highlights growing concerns in the literary community about copyrighted material being used for AI development. Similar lawsuits have been filed against other major AI companies like OpenAI and Meta. The legal battles are expected to shape future AI development and copyright law.
Salesforce accused of using pirated books for AI training
A class-action lawsuit in San Francisco federal court accuses Salesforce of using pirated books to build its xGen AI models. The suit claims Salesforce used the RedPajama and The Pile datasets, which include a collection of over 196,000 books known as Books3. Salesforce allegedly removed references to these datasets from its website after questions arose. Hugging Face later removed the Books3 dataset due to copyright complaints. The lawsuit seeks damages and other relief for copyright holders whose works were allegedly used since October 2022.
Authors sue Salesforce over AI training practices
Bestselling authors, including Jonathan Franzen and Jodi Picoult, have sued Salesforce in federal court, claiming their copyrighted works were used without permission to train AI models. The lawsuit alleges Salesforce's partnership with AI firm Cohere infringes on intellectual property rights. This action mirrors similar lawsuits against OpenAI and Meta as authors push back against unauthorized use of their content. The plaintiffs seek damages and an injunction to prevent further unauthorized use of their material.
Salesforce sued for AI model training copyright infringement
Salesforce is facing a class-action lawsuit from authors E. Molly Tanzer and Jennifer Gilmore. The suit alleges the company used copyrighted books from datasets to train its AI models, exposing it to potential financial penalties or compensation claims. This legal action is similar to other lawsuits filed against major tech companies like OpenAI, Microsoft, and Meta for alleged misuse of copyrighted content in AI training. The outcome could shape how AI models are trained in the future.
Salesforce sued for using authors' books in AI training
Two authors, Molly Tanzer and Jennifer Gilmore, have sued Salesforce, accusing the company of using their books without permission to train its AI models. The lawsuit claims Salesforce infringed their copyrights by using their novels to train its xGen AI models. Attorney Joseph Saveri highlighted the need for transparency and fair compensation for creators whose material is used for AI products. This lawsuit is among several similar actions against tech giants like OpenAI, Microsoft, and Meta Platforms.
Salesforce sued over AI training data controversy
Salesforce faces a federal lawsuit from authors E. Molly Tanzer and Jennifer Gilmore for allegedly building its xGen AI models using a pirated library of books. The lawsuit claims Salesforce used the RedPajama and The Pile datasets, containing over 196,000 books, and later removed references to them. The authors seek class certification for copyright holders whose works were allegedly used since October 2022. They are requesting statutory damages, profit disgorgement, and other relief.
Authors sue Salesforce for AI training methods
Two authors, Molly Tanzer and Jennifer Gilmore, have sued Salesforce for allegedly using pirated versions of their novels to train its artificial intelligence tools. The proposed class-action complaint claims Salesforce infringed copyrights by using their work to train its xGen AI models. Attorney Joseph Saveri, representing the authors, is pursuing similar lawsuits against other tech firms. The lawsuit notes Salesforce CEO Marc Benioff's criticism of AI companies using 'stolen' data for training.
OpenAI's Sora app sparks debate on social media's future
OpenAI's new app, Sora, is a powerful creative tool but raises concerns about potential harm and the future of social media. Experts like Marlon Twyman and Rudy Fraser note that Sora, like Vine and TikTok, is designed to be addictive, but shifts focus from people to content creation. Some developers find it antisocial and nihilistic, questioning the demand for such applications. The app prioritizes aesthetics over substance and changes the meaning of 'social' by focusing on the account holder's vision rather than authentic content.
AI presents opportunities at MIPCOM media event
At MIPCOM, artificial intelligence was viewed as an opportunity rather than a threat by industry leaders. Sean Atkins, CEO of a creator-led video production company, expressed optimism about AI's long-term potential despite near-term challenges. Banijay's Damien Viel discussed the necessary infrastructure and legal frameworks for AI integration, including clipping tools and new content consumption methods. While AI-generated actresses like Tilly Norwood face skepticism, Vitpepper Studios premiered Tesseract, the world's first fully AI-generated series.
Long Island students get AI glimpse of future
Fifth graders on Long Island experienced a unique look into their futures using artificial intelligence. The students were surprised with AI-generated photos that depicted what they might look like as adults. This innovative use of AI provided a fun and engaging way for young students to interact with advanced technology.
AI boom impacts copper market amid U.S.-China trade tensions
Renewed U.S.-China trade tensions highlight supply challenges for copper, a metal crucial for the AI boom and increased defense spending. Demand for copper is expected to surge over the next decade, driven by AI, electrification, and industrial growth. London Metal Exchange CEO Matt Chamberlain stressed the need for supply chain diversity and reinvestment in smelting. Experts warn that copper could become a 'strategic bottleneck' if governments and investors do not act to ensure sufficient supply.
One NZ trains staff on responsible AI use
One New Zealand (One NZ) has launched a company-wide AI training program called 'Using AI Responsibly.' This initiative aims to equip all 2,500 employees with the knowledge to use AI safely, fairly, and responsibly. The 30-minute training module covers principles of fairness, privacy, transparency, and security. This program addresses a gap, as surveys indicate many companies do not offer AI training to their staff.
AI boom needs collaborative energy systems
The AI boom, particularly training large language models, creates unpredictable and intense power demands for data centers. Traditional grids struggle with this volatility, necessitating a shift towards collaborative energy systems. Data center operators must partner with utilities, exploring options like on-site generation and battery storage. Modular grid infrastructure and digitalization for real-time optimization are also crucial. Supportive policies are needed to create a more flexible and resilient energy grid for the future.
Insecure tokens pose risks in AI workflows
Organizations rushing to adopt AI may be creating security risks through insecure tokens in Model Context Protocol (MCP) workflows. Unlike traditional shadow IT, shadow MCP is harder to control because connections lack authentication brokers and tokens may not be rotated, creating an untracked backdoor for attackers. To mitigate the risk, companies should implement an AI connector registry to discover, register, monitor, and enforce the use of AI models and connectors.
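To make the registry idea concrete, here is a minimal Python sketch of what such an inventory could look like. It is an illustration of the concept, not the implementation described in the source article; every name in it (ConnectorRecord, ConnectorRegistry, the connector names, the 30-day rotation window) is a hypothetical assumption.

```python
# Hypothetical sketch of an "AI connector registry" for MCP-style integrations.
# Class, field, and connector names are illustrative assumptions, not a real product API.
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MAX_TOKEN_AGE = timedelta(days=30)  # assumed token-rotation policy


@dataclass
class ConnectorRecord:
    name: str                     # e.g. "crm-assistant-mcp" (hypothetical connector)
    owner: str                    # team accountable for the connector
    token_issued_at: datetime     # when the current access token was minted
    scopes: tuple[str, ...] = ()  # least-privilege scopes granted to the connector


class ConnectorRegistry:
    """Central inventory used to discover, register, and monitor AI connectors."""

    def __init__(self) -> None:
        self._records: dict[str, ConnectorRecord] = {}

    def register(self, record: ConnectorRecord) -> None:
        self._records[record.name] = record

    def is_registered(self, name: str) -> bool:
        return name in self._records

    def stale_tokens(self, now: datetime | None = None) -> list[str]:
        """Return connectors whose tokens have outlived the rotation window."""
        now = now or datetime.now(timezone.utc)
        return [
            rec.name
            for rec in self._records.values()
            if now - rec.token_issued_at > MAX_TOKEN_AGE
        ]

    def find_shadow_connectors(self, observed: list[str]) -> list[str]:
        """Flag connections seen in network or gateway logs but never registered."""
        return [name for name in observed if not self.is_registered(name)]


if __name__ == "__main__":
    registry = ConnectorRegistry()
    registry.register(
        ConnectorRecord(
            name="crm-assistant-mcp",
            owner="platform-team",
            token_issued_at=datetime.now(timezone.utc) - timedelta(days=45),
            scopes=("read:accounts",),
        )
    )
    # An unregistered connector appears in observed traffic: shadow MCP.
    print(registry.find_shadow_connectors(["crm-assistant-mcp", "ad-hoc-notebook-mcp"]))
    # The registered connector's token is 45 days old: overdue for rotation.
    print(registry.stale_tokens())
```

In this sketch, find_shadow_connectors() surfaces connections observed on the network but never registered, while stale_tokens() flags credentials that have outlived an assumed rotation policy, which are the two gaps the article highlights.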
Can you avoid AI in your personal life?
It is nearly impossible to keep artificial intelligence out of personal lives, as AI crawlers collect vast amounts of information. While mainstream chatbots like Grok, Meta AI, Gemini, and ChatGPT train on public data, specialized tools used by law enforcement have fewer limitations. VPNs and privacy-focused browsers can help reduce AI's reach, but they are not foolproof. Limiting shared information and adjusting privacy settings are key to maintaining personal data privacy.
Marc Andreessen sees AI empowering new filmmakers
Venture capitalist Marc Andreessen believes artificial intelligence will revolutionize filmmaking by empowering a new generation of creators. He sees AI tools democratizing the industry, making complex tasks like visual effects and editing more accessible. Andreessen views AI as a powerful co-pilot that will augment creativity, allowing filmmakers to focus on storytelling. He expressed profound optimism about AI fostering a more diverse and innovative film landscape.
GZDoom splits over AI code controversy
The popular Doom engine port GZDoom has split after lead developer Graf Zahl decided to incorporate AI-generated code from ChatGPT. Developers raised concerns about potential violations of the GNU General Public License (GPL) and about bugs introduced by the AI-generated code. This led to a developer exodus and the creation of a rival fork called UZDoom. The split highlights broader industry debates over the ethics and legality of using AI-generated code in open-source projects.
Sources
- Salesforce Faces Class-Action Copyright Claim From Saveri Law Over AI Training
- Authors sue Salesforce for allegedly using books in AI training
- Salesforce Faces Class Action Over Alleged Illegal AI Training Data
- Salesforce faces lawsuit from authors over AI training practices - The Times of India
- Salesforce faces copyright lawsuit from authors over training AI models
- Salesforce Sued Over Use of Authors' Books for AI Training
- Salesforce enters legal crossfire over AI training data
- Authors Sue Salesforce Over AI Training Methods
- The Blurred Truths of Sora
- At MIPCOM, AI Wasn’t a Threat — It Was an Opportunity
- Long Island fifth graders surprised with AI photos of future selves
- Trump's latest China trade spat offers lessons for the copper market amid AI boom
- One NZ begins staff AI training
- Why the AI boom requires a collaborative energy system
- The Silent Backdoor: Insecure Tokens in AI-Driven MCP Workflows
- Is it possible to keep AI out of your personal life?
- Marc Andreessen says AI will give rise to a new type of filmmaker: 'That's a reason for profound optimism'
- Doom Modding Split: GZDoom Forks into UZDoom Over AI Code Controversy