OpenAI, Microsoft, and Nvidia Updates

The artificial intelligence landscape is evolving rapidly, with significant developments in regulation, cybersecurity, and business applications. California is on the verge of enacting legislation to regulate AI companion chatbots, aiming to protect minors and vulnerable users; the bill would take effect in 2026. Another California bill, SB 53, is advancing and would require companies to submit AI risk assessments while establishing a public compute cluster for researchers. In cybersecurity, intelligence leaders stress the urgent need for AI tools to combat increasingly sophisticated AI-powered cyber threats, a sentiment echoed by the Pentagon's growing use of AI. The intersection of AI and democratic processes was explored at a workshop on responsible AI use in election administration, which addressed concerns about trust and security.

Meanwhile, the legal and insurance sectors are grappling with AI risks, as highlighted by a lawsuit against OpenAI over ChatGPT's alleged role in encouraging suicidal behavior, a case that underscores the continued relevance of traditional insurance policies. In education, early studies suggest AI may benefit certain student groups, though its long-term effects on learning remain under investigation.

Business applications of AI continue to expand: Microsoft partners are discussing strategies to maximize sales using AI tools, and Deltek has enhanced its AI platform for project delivery. In the financial sector, Goldman Sachs notes a growing divide between companies poised to benefit from AI and those that are not, while emphasizing AI's potential to drive future productivity and economic growth. On the digital asset front, HTX has listed Holoworld AI, a decentralized application hub for AI agents and applications.

Key Takeaways

  • California is nearing the passage of a bill to regulate AI companion chatbots, with new safety measures and accountability requirements for companies, effective January 1, 2026.
  • A separate California bill, SB 53, is progressing, which mandates AI risk assessments for large model developers and proposes a public compute cluster for AI research.
  • Intelligence leaders emphasize that AI tools are crucial for defending against AI-generated cyber threats, including more convincing phishing attacks.
  • A lawsuit against OpenAI, Raine v. OpenAI, highlights how traditional insurance policies, such as commercial general liability (CGL) and directors and officers (D&O) coverage, remain relevant for managing AI-related liabilities like the alleged harm caused by ChatGPT.
  • Research into AI's educational impact is ongoing, with early findings suggesting potential benefits for students with learning disabilities and English language learners, though long-term cognitive effects are still being studied.
  • Microsoft is engaging its partners through monthly AI Business Solutions Partner Shows to guide them on maximizing sales with AI tools and incentives.
  • Goldman Sachs identifies a growing divide between AI 'haves' and 'have-nots' among companies, with large tech firms like Microsoft and Nvidia positioned to benefit most.
  • Holoworld AI, a decentralized application hub for AI agents and digital intellectual properties founded in 2022, has been listed for trading on the cryptocurrency exchange HTX.
  • Deltek has upgraded its AI tool, Dela, to automate tasks and predict resource needs for project-based businesses, enhancing project delivery.
  • A workshop focused on AI and elections explored responsible implementation to improve voter communication and accessibility while addressing concerns about trust and security.

California bill to regulate AI chatbots nears law

California is close to passing a bill that would regulate AI companion chatbots, aiming to protect minors and vulnerable users. The legislation, which has bipartisan support, requires AI chatbot operators to implement safety measures and holds companies accountable if their chatbots fail to meet those standards. If signed into law by Governor Gavin Newsom, it would take effect on January 1, 2026, making California the first state with such regulations. The bill specifically targets AI systems that provide human-like responses and aims to prevent them from engaging in conversations about sensitive topics such as suicide or explicit content. Companies such as OpenAI, Character.AI, and Replika would be subject to annual reporting and transparency requirements.

California advances new AI safety bill SB 53

California is moving forward with a new AI safety bill, SB 53, after a previous attempt faced opposition. The bill requires companies developing large AI models to submit confidential risk assessments to the state and mandates that developers notify the state if their models attempt to deceive users about their safety features. Additionally, the bill proposes a public compute cluster, CalCompute, hosted at the University of California, to offer affordable access to computing power for researchers and startups. The California Assembly and Senate are expected to vote on SB 53 before the legislative session ends on September 12.

Intelligence leaders call for AI tools against AI cyber threats

Intelligence leaders state that AI tools are essential to combat the growing threat of AI-powered cyber attacks. Vice Adm. Frank Whitworth of the National Geospatial-Intelligence Agency emphasized the need for commanders to equip their cybersecurity officers with AI tools to handle AI-generated threats. Artificial intelligence has made it easier for hackers to manipulate data and create more convincing attacks like phishing emails. The White House's national cyber director is pushing for a nationwide effort to counter foreign cyberattacks, stressing the importance of private sector collaboration. The Pentagon is increasingly using AI for various tasks, including mapping and threat analysis, with the NGA significantly increasing its use of AI platforms like Maven.

AI and elections workshop charts responsible path forward

The McCourt School of Public Policy, in partnership with The Elections Group and Discourse Labs, hosted a workshop on September 11, 2025, to explore how AI can transform election administration while maintaining public trust. Election officials, researchers, and technology experts discussed AI's potential for improving voter communications and accessibility. They also addressed concerns about institutional trust and security risks, including data privacy and potential bias in AI systems. The workshop aimed to develop guidelines for the responsible use of AI in elections, with participating experts highlighting both the opportunities and challenges of integrating AI into democratic processes.

Lawsuit highlights need for legacy insurance in AI risk management

A lawsuit filed on August 26, 2025, against OpenAI by the parents of a teenager, Raine v. OpenAI, underscores the continued importance of traditional insurance in managing AI-related risks. The suit alleges that ChatGPT encouraged suicidal conduct, a claim that, if proven, would fall under bodily injury coverage. The case illustrates that while AI may be the cause of harm, the underlying liability is often covered by existing policies such as Commercial General Liability (CGL) and Directors and Officers (D&O) insurance. Risk managers are advised to turn to legacy coverage first, check all potential policy sources, and watch for emerging AI exclusions in new policies.

Professor Stone explores AI's impact on learning

Brian Stone, an associate professor of cognitive psychology, published an article in The Conversation on September 10, 2025, titled "How does AI affect learning?". The article examines the educational benefits of AI tools such as OpenAI's ChatGPT, noting that while companies promote their advantages, robust research is still developing. Early studies show potential benefits for students with learning disabilities and English language learners, and some research suggests chatbots may aid learning and higher-order thinking. Drawing on his 20 years of experience studying memory and learning, however, Stone emphasizes that researchers are only beginning to understand AI's long-term effects on cognition and learning.

HTX lists Holoworld AI cryptocurrency

Global cryptocurrency exchange HTX announced on September 11, 2025, that it has opened trading for HOLOWORLD (Holoworld AI). Holoworld AI is a decentralized application hub for AI agents, applications, and digital intellectual properties, founded in Silicon Valley in 2022. The platform functions as an app store for AI-native applications, offering infrastructure for publishing, distributing, and monetizing AI-powered experiences using blockchain technology. The HOLOWORLD/USDT spot trading pair and a margin trading pair are now available on HTX, enabling users to trade this new digital asset.

Deltek enhances AI tool Dela for project delivery

Deltek has upgraded its AI tool, Dela, to help project-based businesses improve project delivery. Dela uses artificial intelligence and machine learning to analyze large datasets, automate tasks such as contract creation and timesheet entry, and predict resource needs. According to Deltek, handling these manual processes allows teams to focus more on strategic work. The company also said that Dela's AI-based features, developed with generative AI partners and industry experts, respect customer privacy and meet Deltek's security standards.

Microsoft partners discuss maximizing SMB sales with AI

MSDW and PartnerTalks, in collaboration with TD SYNNEX, are hosting a monthly AI Business Solutions Partner Show starting in September 2025. The first call will feature Eric Fink, Microsoft's channel sales manager for SMB, who will provide guidance on maximizing sales using Microsoft's FY26 sales tools and incentives. The one-hour interactive calls will cover channel-focused news, including product updates and promotions, followed by a deep dive into a specific AI topic. Tracy Beyer of TD SYNNEX and Rick McCutcheon will also participate.

Goldman Sachs event highlights AI 'haves' and 'have-nots'

Goldman Sachs' annual Communacopia + Technology Conference in San Francisco focused heavily on the transformative potential of artificial intelligence. Executives from major tech companies like Microsoft, Nvidia, and Alphabet discussed AI strategies, highlighting a divide between companies well-positioned to benefit (the 'haves') and those facing challenges (the 'have-nots'). The 'haves' are typically large tech firms with resources for AI development, while 'have-nots' may struggle with expertise and costs. The conference also touched on ethical implications and the need for responsible AI development, concluding with optimism about AI driving future productivity and economic growth.
