Anthropic, backed by Amazon and Google, is updating its privacy policy: users of Claude Free, Pro, and Max must decide by September 28, 2025, whether to allow their chat data to be used for AI training. If users opt in, Anthropic will retain their data for up to five years. Users can opt out in their privacy settings; those who don't will have their conversations used to improve Claude's large language models. The change doesn't affect Claude for Work, Education, or Government users, and it aligns Anthropic with Google and Meta, which also use consumer data for AI training.

In other AI news, Google's new Pixel 10 phones, powered by the Tensor G5 chip, pair hardware upgrades (improved cameras, more RAM, and brighter screens) with new AI features such as Magic Cue, Voice Translate, and Gemini Live. California is set to implement new workplace AI regulations starting October 1, focused on preventing discrimination and requiring bias audits. Red Dot Capital Partners argues that AI can assist in venture capital but can't replace human trust and judgment in investment decisions.

Elsewhere, AI image generators like ChatGPT (using GPT-Image-1), Grok, and Midjourney are becoming more versatile, with ChatGPT leading in overall image creation and editing. Nvidia is profiting from the growth of AI data centers, with data center revenue forming the bulk of its sales, even if mega data centers don't succeed. AI is also making its way into emergency medical services (EMS), sparking debate about its role in dispatch, billing, and clinical support. ZainTECH has launched an AI-powered security service across the Middle East and North Africa (MENA) to protect cloud and hybrid environments. Lastly, the Country Radio Seminar (CRS) 2026 will feature AI training and a digital music summit, addressing challenges and opportunities in the country radio industry.
Key Takeaways
- Anthropic will use chat data from Claude Free, Pro, and Max users for AI training unless users opt out by September 28, 2025; data will be kept for up to five years.
- Google's Pixel 10 phones have enhanced AI features, including Magic Cue and Gemini Live, powered by the Tensor G5 chip.
- California's new AI regulations for workplaces, effective October 1, require bias audits and prevent AI-driven discrimination in hiring and promotions.
- Red Dot Capital Partners emphasizes that AI cannot replace human trust in venture capital investments.
- ChatGPT, with its GPT-Image-1 model, is rated as the best overall AI image generator, while Midjourney excels at following prompts accurately.
- Nvidia is profiting from the growth of AI data centers, regardless of the success of mega data centers.
- AI is being integrated into emergency medical services (EMS), sparking discussions about its impact on dispatch, billing, and clinical support.
- ZainTECH has launched an AI-powered security service across the MENA region to protect cloud and hybrid environments.
- The Country Radio Seminar (CRS) 2026 will feature AI training and a digital music summit.
- Anthropic's policy change aligns it with Google and Meta in utilizing user data for AI model training, raising privacy considerations.
Anthropic AI trains on user chats unless you opt out by September 28
Anthropic will now train its AI models using user data from Claude Free, Pro, and Max plans. Users must opt out by September 28 if they don't want their chat and coding sessions used. If users allow data use, Anthropic will keep the data for up to five years. This change does not affect commercial users like Claude for Work or Education. Users can change their privacy settings to opt out.
Anthropic uses user chats to train AI models, keeps data for five years
Anthropic, backed by Amazon and Google, uses chat data from Claude Free, Pro, and Max users to train its AI models. The company announced it will keep user data for up to five years. This helps improve its large language models. Users can opt out if they don't want their data used.
Claude AI users must opt out of chat data use for AI training
Claude users on Free, Pro, and Max plans must now opt out if they don't want their chats used for AI training. Anthropic will keep chat data for up to five years to improve its models if users allow it. If users opt out, conversations are stored for only 30 days. These changes don't affect Claude for Work, Education, or Government users. Users have until September 28, 2025, to decide.
Anthropic joins Google and Meta in using user chats for AI training
Anthropic is now using consumer chat data for AI model training by default, like Google and Meta. Users of Claude Free, Pro, and Max must opt out if they don't want their data used. Anthropic will keep data for five years for those who consent. This change has sparked debate about user privacy and data control. Enterprise users of Claude for Work, Gov, and Education are excluded.
Anthropic's Claude will train on your chats, here's how to opt out
Anthropic is changing its privacy policy to train its Claude AI chatbot with user data. New users can opt out at signup, and existing users will see a popup to opt out. Users must decide by September 28, 2025. Opting out can be done in Claude's settings under the Privacy option. The new policy applies to Claude Free, Pro, and Max users, but not commercial services like Claude for Work.
Anthropic users must now choose: share data or opt out of AI training
Anthropic is changing how it uses user data, requiring Claude users to decide by September 28 whether to share their chats for AI training. Previously, Anthropic didn't use consumer chat data for model training. Now, it wants to train AI on user conversations and coding sessions, keeping data for five years for those who don't opt out. This policy applies to Claude Free, Pro, and Max users, but not business customers.
Anthropic's Claude AI wants to use your chats, here's how to opt out
Anthropic will soon use your Claude chat transcripts for AI training. Users have until September 28 to opt out of this change. New users can opt out during signup. Existing users will see a notification about the change. If users opt in, data retention extends to five years. This policy affects Claude Free, Pro, Max, and Code users, but not Claude for Work, Gov, or Education.
Google's Pixel 10 phones boost AI capabilities
Google has released a new line of Pixel smartphones with enhanced artificial intelligence. The AI is designed to help with tasks like finding information on the device. The new phones aim to improve user experience through smarter technology.
Pixel 10 Pro review: smarter AI and familiar design
The Pixel 10 Pro and 10 Pro XL are improved versions of the Pixel 9 Pro with a focus on AI and Gemini integration. These phones have better cameras, more RAM, and brighter screens. New AI features include Magic Cue, Voice Translate, and Gemini Live. The phones use Google's Tensor G5 chip and come in colors like Moonstone. Battery life is longer, and they support Pixelsnap magnetic wireless charging.
California's new AI rules for workplaces start October 1
California's new AI regulations take effect on October 1, impacting all employers in the state. These rules cover any automated decision system used for hiring, promotions, or training. Employers can't use AI tools that discriminate based on protected categories. They should also conduct bias audits of their AI systems. Employers must keep records related to AI for four years and train HR teams on the new rules.
Red Dot Capital says AI can't replace human trust in venture capital
Danielle Ardon Baratz from Red Dot Capital Partners says AI helps with tasks like finding companies and drafting memos. However, she believes AI can't replace the trust and judgment needed when investing in startups. Red Dot looks for AI startups that solve real problems and have strong teams. They also consider if the company owns unique data and can sustain value creation.
AI image generators compared: ChatGPT wins
AI image generators are becoming more versatile, with tools like ChatGPT, Grok, and Midjourney. ChatGPT, with its GPT-Image-1 model, is the best overall for creating and editing images. Midjourney is excellent at following prompts accurately. Safety is a concern, as some tools struggle to prevent deepfakes. These tools were tested using prompts like a futuristic Tokyo skyline and a medieval blacksmith's workshop.
Ten ways AI is shaping the future
Artificial intelligence is changing many areas, from vaccine development to human relationships. AI helps track wildlife, improve education, and recycle plastic waste through AI-designed enzymes. It also aids in medical diagnoses and appears in children's toys. These applications show AI's growing impact on science, technology, and daily life.
Nvidia profits from AI data centers whether they boom or bust
Nvidia is benefiting from the growth in AI data centers, and data center revenue makes up most of its sales. CEO Jensen Huang highlighted Spectrum-XGS, a product that lets separate data centers work as one. Even if mega AI data centers don't succeed, Nvidia's technology helps smaller facilities link together, so the company profits from AI's infrastructure needs either way.
AI in EMS sparks debate on the future of emergency medical services
EMS leaders discussed the role of AI in emergency medical services at the California Ambulance Association Annual Conference. AI is changing dispatch, billing, and clinical support. Experts debated whether AI will revolutionize or replace human roles. EMS organizations need internal expertise or consultants to evaluate AI solutions. Concerns include job security, cybersecurity, and compliance with regulations like HIPAA.
CRS 2026 to feature AI training and digital music summit
The Country Radio Seminar (CRS) 2026 will include new themes, more digital content, and AI training. The event focuses on challenges and opportunities in the country radio industry. Key topics include sales, innovation, mental health, music discovery, and talent engagement. The Digital Music Summit will cover global trends and best practices for using new tools. CRS 2026 will be held in Nashville from March 18-20.
ZainTECH launches AI security service across Middle East and North Africa
ZainTECH has launched an AI-powered security service across the MENA region. This service provides protection for cloud and hybrid environments. It is supported by Security Operations Centers in the UAE, KSA, and Jordan. The service offers threat detection, vulnerability assessment, and automated patch management. It helps organizations stay ahead of cybersecurity risks and ensures compliance with regional data laws.
Sources
- Anthropic will start training its AI models on chat transcripts
- Anthropic trains models with user chats, extends data retention to five years (AMZN:NASDAQ)
- Claude users must now opt out to keep their chats out of AI training
- Anthropic Joins Google and Meta, Makes AI Training on User Chats Opt-Out by Default
- Anthropic Will Now Train Claude on Your Chats, Here's How to Opt Out
- Anthropic users face a new choice – opt out or share your data for AI training
- Anthropic Wants to Use Your Chats With Claude for AI Training: Here's How to Opt Out
- Google's Pixel 10 phones raise the ante on artificial intelligence
- Pixel 10 Pro and 10 Pro XL Review: Familiar Hardware, Smarter Gemini AI Brain
- California’s New AI Regulations Take Effect Oct. 1: Here’s Your Compliance Checklist
- “AI can’t replace trust”: Red Dot Capital Partners on the human side of venture capital
- I put the top AI image generators head-to-head — see the results for yourself
- 10 AI Applications Shaping the Future
- Mega AI data centers may boom or bust—either way, Nvidia will still cash in
- AI buzzer battle sparks urgent EMS wake-up call
- CRS 2026 First Look: New Themes, More Digital, and AI Training
- ZainTECH Launches AI-Powered SecOps Managed Security Service Across MENA