Google, OpenAI Gemini Data, Adani AI Funding News

California is enacting new laws to protect children from technology's impacts, with Governor Gavin Newsom signing bills that require AI companies to implement safety features preventing chatbots from discussing self-harm with minors. While Newsom vetoed one bill that would have broadly restricted minors' access to AI companion chatbots, he approved another requiring AI chatbots to disclose that they are not human and to follow safety protocols for users in distress. These state-level actions highlight a growing push for AI regulation: Senator Marsha Blackburn anticipates federal AI rules despite industry opposition, noting that states are stepping in because of federal inaction. Meanwhile, concerns about AI surveillance are rising in Maine, where local governments are deploying AI-powered cameras for license plate reading and facial recognition, raising privacy issues.

In a significant global development, Adani Group and Google are partnering to invest $15 billion over five years in India to build a large-scale AI data center hub, integrating renewable energy to support the nation's growing AI demand. On the cybersecurity front, MCPTotal and Harmonic Security have launched platforms to help enterprises securely manage AI workflows and gain control over their AI systems, addressing risks such as supply chain vulnerabilities and prompt injection, while Digicloud Africa is collaborating with Google Security Operations to offer AI-powered cybersecurity solutions across Africa.

In AI development and data, Reddit faces criticism for antisemitic content on its platform, which is used to train AI models such as Google's Gemini and OpenAI's ChatGPT, raising concerns about bias amplification. Meanwhile, a manager at Globalt Investments argues that the current AI market is not a bubble, citing substantial real investment in AI development, even as the broader stock market diverges, with traditional 'real economy' sectors performing well while AI stocks experience a downturn.

Key Takeaways

  • California is implementing new laws requiring AI companies to add safety features to prevent chatbots from discussing suicide or self-harm with minors.
  • Governor Gavin Newsom vetoed a bill that would have broadly restricted minors' access to AI companion chatbots but signed another requiring AI chatbots to disclose they are not human.
  • Senator Marsha Blackburn believes federal AI regulation is necessary and inevitable, noting that states are creating AI safeguards for minors due to federal inaction.
  • Concerns over AI surveillance are growing in Maine, with local governments deploying AI-powered license plate readers and facial recognition cameras.
  • Adani Group and Google are investing $15 billion over five years to build a large-scale AI data center hub in Andhra Pradesh, India.
  • MCPTotal and Harmonic Security have launched new platforms to enhance enterprise security and control over AI workflows and systems.
  • Reddit is facing criticism for antisemitic content on its platform, which is used as a training data source for AI models like Google's Gemini and OpenAI's ChatGPT.
  • A manager at Globalt Investments believes the current AI market is not a bubble, citing significant real investment in AI development.
  • The stock market is showing a divergence, with traditional 'real economy' sectors performing well while AI and tech stocks are faltering.
  • Western executives are expressing concern over China's advanced, highly automated AI manufacturing capabilities, particularly in the EV market.

California enacts new laws to protect children from AI and social media

California Governor Gavin Newsom signed several new health bills aimed at protecting children from the impacts of technology. Some laws will require AI companies to add safety features to prevent chatbots from discussing suicide or self-harm with minors. Other new laws will require social media platforms to display warning labels about potential mental health issues for young users starting in 2027. These measures aim to hold AI developers more accountable for harm caused by their products and increase penalties for creating nonconsensual AI pornography. Governor Newsom also signed a bill creating CalCompute, a public AI computing resource for startups and researchers.

Newsom vetoes AI chatbot bill for minors, citing overreach

California Governor Gavin Newsom vetoed a bill, Assembly Bill 1064, that would have placed strict limits on minors' access to AI companion chatbots. He stated the legislation was too broad and could have effectively banned these tools for young people. The bill aimed to prevent addictive features and ensure users knew they were interacting with AI, inspired by cases in which teens formed unhealthy relationships with chatbots. While Newsom vetoed that bill, he did sign another measure, Senator Steve Padilla's SB 243, which requires AI chatbots to disclose they are not human and to follow safety rules for users in distress.

Teen promotes AI safety education for young people

High school senior Kaashvi Mittal is working to promote AI safety education for children. She understands the potential dangers of AI, especially for young people whose views are still forming. Mittal founded an organization called Together We AI to make AI education accessible to everyone. She supports Governor Newsom's efforts to create safeguards for artificial intelligence, like the bills he signed into law. Mittal said she also understood why Newsom vetoed Assembly Bill 1064, which would have restricted children's use of many companion chatbots, noting that its broad language could have limited useful AI learning systems.

Senator: Federal AI regulation is coming despite tech industry opposition

Senator Marsha Blackburn believes federal regulation for artificial intelligence is necessary and inevitable, even with opposition from major tech companies. She noted that states like California, Texas, and others are stepping in to create AI safeguards for minors because the federal government has not yet acted. Blackburn has long advocated for online child safety and social media regulation, supporting legislation like the Kids Online Safety Act. She also emphasized the need for online consumer privacy protection and for bills addressing how AI uses a person's data, name, image, and likeness without consent.

MCPTotal launches platform for secure enterprise AI workflows

MCPTotal has launched a new platform designed to help businesses securely adopt and manage Model Context Protocol (MCP) servers. MCP is crucial for connecting AI models with enterprise systems but has introduced risks like supply chain vulnerabilities and prompt injection. MCPTotal's platform offers a hub-and-gateway architecture for centralized hosting, authentication, and an AI-native firewall to monitor traffic and enforce policies. It provides a catalog of secure MCP servers, enabling employees to connect AI models to business systems like Slack and Gmail while giving security leaders visibility and control.
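To make the hub-and-gateway idea concrete, here is a minimal Python sketch of the kind of policy check such a gateway might apply to MCP tool-call requests (MCP messages are JSON-RPC 2.0). It is not MCPTotal's implementation; the ALLOWED_TOOLS allowlist, the INJECTION_PATTERNS heuristics, and the screen_request helper are hypothetical names introduced for illustration.

```python
# Illustrative sketch only: a gateway-style policy check for MCP tool-call
# requests. ALLOWED_TOOLS, INJECTION_PATTERNS, and screen_request are
# hypothetical, not MCPTotal APIs.
import json
import re

# Hypothetical per-server allowlist a security team might maintain centrally.
ALLOWED_TOOLS = {
    "slack": {"post_message", "list_channels"},
    "gmail": {"search_messages"},
}

# Naive prompt-injection heuristics; real products use far richer detection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"exfiltrate", re.I),
]


def screen_request(server: str, raw_message: str) -> bool:
    """Return True if the JSON-RPC message may be forwarded to the MCP server."""
    msg = json.loads(raw_message)
    if msg.get("method") != "tools/call":
        return True  # only tool invocations are policy-checked in this sketch
    params = msg.get("params", {})
    tool = params.get("name", "")
    if tool not in ALLOWED_TOOLS.get(server, set()):
        return False  # tool is not on this server's allowlist
    args_text = json.dumps(params.get("arguments", {}))
    return not any(p.search(args_text) for p in INJECTION_PATTERNS)


if __name__ == "__main__":
    call = json.dumps({
        "jsonrpc": "2.0", "id": 1, "method": "tools/call",
        "params": {"name": "post_message",
                   "arguments": {"channel": "#general", "text": "hello"}},
    })
    print(screen_request("slack", call))  # True: allowed tool, clean arguments
```

A production gateway would layer authentication, centralized hosting, logging, and per-user policy on top of a check like this; those pieces are omitted here.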

Harmonic Security releases MCP Gateway for AI ecosystem control

Harmonic Security has launched MCP Gateway, a new tool designed to give security teams full visibility and control over their organization's AI systems. The gateway intercepts all Model Context Protocol (MCP) traffic, allowing security teams to identify connected clients and servers. It enforces policies to block risky actions and uses sensitive data models to prevent the exfiltration of intellectual property. MCP Gateway aims to address the new security challenges posed by agentic AI, such as workflow hijacking and credential theft, by providing a developer-friendly solution for managing AI workflows.
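As an illustration of the exfiltration-prevention side, the sketch below shows a naive, regex-based outbound filter of the sort a gateway could run on MCP responses before forwarding them. Harmonic's actual sensitive-data models are described as purpose-built classifiers rather than regexes; the SENSITIVE_PATTERNS table and redact helper here are hypothetical.

```python
# Illustrative sketch only: a naive outbound data-loss check on MCP traffic.
# The patterns and redact() helper are hypothetical, not Harmonic Security's
# actual sensitive-data models.
import re

SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.I),
}


def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders and report which rules fired."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, findings


if __name__ == "__main__":
    reply = "Here is the key: AKIA0123456789ABCDEF and the design doc."
    clean, hits = redact(reply)
    print(hits)   # ['aws_access_key']
    print(clean)  # key replaced with [REDACTED:aws_access_key]
```

Pattern matching like this only catches well-structured secrets; spotting proprietary source code or documents requires the trained data models the vendor describes.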

Maine faces growing concerns over AI surveillance

Local governments in Maine are increasingly using artificial intelligence for surveillance, raising privacy concerns. Municipalities are deploying AI-powered license plate reader cameras from Flock Safety, which scan, log, and store data on all passing vehicles. Some police departments are also using Verkada's AI cameras for facial recognition, despite a state law restricting such technology. Additionally, AI analytics tools like Placer.ai are being used to track residents' movements and gather foot-traffic data in downtown areas. Critics argue this widespread AI surveillance goes beyond legitimate law enforcement needs and infringes on constitutional rights, and they are calling for public debate and stricter controls.

Western executives express alarm over China's advanced AI manufacturing

Western executives from the automotive and green energy sectors are returning from China with concerns about the country's highly automated manufacturing capabilities. They warn that China's rapid advancements in AI and robotics could leave Western nations behind, particularly in the electric vehicle (EV) market. Executives described touring factories run almost entirely by robots, highlighting China's shift from low wages to a highly skilled engineering workforce driving innovation. This automation is seen as a strategic move to compensate for a declining population and gain a competitive advantage globally.

Adani Group and Google to build $15 billion AI data center hub in India

The Adani Group and Google plan to invest $15 billion over five years to develop a large-scale data center hub in Andhra Pradesh, India. The project, managed through AdaniConneX, Adani's data center joint venture, will include renewable energy facilities and subsea cable networks. The goal is to support India's growing demand for AI and establish the complex as a major AI hub. Adani Group will build new transmission lines and energy systems, while Google aims to provide the foundation for businesses, researchers, and creators to develop AI applications. The investment underscores India's growing importance in the global data center market.

Digicloud Africa partners with Google for AI cybersecurity solutions

Digicloud Africa has partnered with Google Security Operations to offer advanced, AI-powered cybersecurity solutions across Africa. This collaboration aims to help organizations update their security systems and better combat sophisticated cyber threats. They will deploy a cloud-native Security Information and Event Management (SIEM) platform that uses AI and threat intelligence for real-time detection and response. The Google Security Operations platform offers continuous visibility and faster response times, improving security team efficiency and providing a significant return on investment for businesses.
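For context on what real-time detection means in a SIEM, here is a minimal Python sketch of the basic correlation step: matching incoming log events against threat-intelligence indicators. It is only an illustration of the concept; the indicator sets and LogEvent structure are hypothetical and not Google Security Operations' actual data model or API.

```python
# Illustrative sketch only: the core correlation step a SIEM performs,
# matching log events against threat-intelligence indicators. The IOC sets
# and LogEvent structure are hypothetical, not Google Security Operations'.
from dataclasses import dataclass

# Hypothetical indicators of compromise (IOCs) from a threat-intel feed.
IOC_IPS = {"203.0.113.42", "198.51.100.7"}
IOC_DOMAINS = {"malicious.example"}


@dataclass
class LogEvent:
    source_ip: str
    destination_domain: str
    user: str


def detect(event: LogEvent) -> list[str]:
    """Return the list of indicator types this event matched."""
    hits = []
    if event.source_ip in IOC_IPS:
        hits.append("known-bad IP")
    if event.destination_domain in IOC_DOMAINS:
        hits.append("known-bad domain")
    return hits


if __name__ == "__main__":
    event = LogEvent("203.0.113.42", "malicious.example", "alice")
    matches = detect(event)
    if matches:
        print(f"ALERT for {event.user}: {', '.join(matches)}")
```

A managed platform adds the parts this sketch leaves out: streaming ingestion at scale, curated threat-intel feeds, and automated response playbooks.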

VSCO adds AI editing tools and RAW file support

The photo editing app VSCO is introducing new AI image editing tools, support for high-resolution RAW files, and non-destructive editing capabilities. The new AI features live in a tab called 'AI Labs' and include AI-powered object removal, which can intelligently blend backgrounds after an object is removed. VSCO also plans to launch an 'Upscale' feature to increase image resolution. The AI Labs features are available on VSCO's Pro tier subscription, which costs $12.99 per month or $60 annually.

Reddit faces criticism for antisemitism impacting AI training data

Reddit is facing scrutiny for allowing widespread antisemitism on its platform, which is used as a major source for training AI models like Google's Gemini and OpenAI's ChatGPT. Reports indicate that hateful content targeting Jewish people is not always removed, raising concerns that this bias will be embedded into AI systems. Despite pleas from Jewish content moderators, Reddit has been slow to address the issue, with some moderators reportedly being disciplined for reporting abuse. This situation highlights a double standard in content moderation and risks amplifying hate speech through AI.

Market sees 'real economy' surge while AI stocks falter

The stock market experienced a significant split, with traditional 'real economy' sectors performing well while artificial intelligence and tech stocks struggled. CNBC host Jim Cramer noted this divergence, attributing the shift partly to Federal Reserve Chair Jerome Powell's comments suggesting potential support for the economy and a pause in the Fed's bond runoff. Bank stocks led the rally in established sectors, contrasting with the earlier downturn in the Nasdaq Composite. Geopolitical tensions also contributed to market volatility, weighing on investor sentiment toward riskier tech assets.

AI market not a bubble, says Globalt Investments manager

Thomas Martin, a senior portfolio manager at Globalt Investments, believes the current market is not experiencing an artificial intelligence bubble. He argues that significant real investment is being directed towards the development of AI technologies. Martin shared his perspective during an appearance on Bloomberg Tech.
