Microsoft Copilot Adds Anthropic Claude Models Alongside OpenAI

Microsoft is expanding the AI offerings in its Copilot assistant, announcing on September 24, 2025, that business users will be able to select Anthropic models alongside those from OpenAI. The integration brings Anthropic's Claude Opus 4.1 and Claude Sonnet 4 to Copilot, letting users choose between them for tasks such as complex research and building custom AI tools via the Researcher agent and Copilot Studio. The move diversifies Microsoft's AI strategy and reduces its primary reliance on OpenAI.

Elsewhere, California is preparing to enact its first AI regulation, Senate Bill 53, which aims to balance innovation with safety by requiring security protocols and incident reporting. Cisco is training 80,000 employees in AI skills, and Security Operations Centers are increasingly using AI for threat detection, though human analysts remain vital for context and judgment. The risks of autonomous AI systems are being addressed through governance frameworks, and AI's impact on journalism remains a subject of debate. Separately, the Tencent Cloud Hackathon produced 17 'AI for Social Good' projects, and TechCrunch Disrupt 2025 will feature discussions on generative AI and developer tools with leaders from companies such as Hugging Face.

Key Takeaways

  • Microsoft will integrate Anthropic's Claude Opus 4.1 and Claude Sonnet 4 models into its Copilot assistant for business users starting September 24, 2025, offering an alternative to OpenAI models.
  • California is set to become the first state to enact AI regulations with Governor Newsom signing Senate Bill 53, which mandates security protocols and incident reporting for AI companies.
  • Cisco is undertaking a large-scale initiative to train 80,000 employees in AI skills to foster innovation.
  • Security Operations Centers (SOCs) are leveraging AI for enhanced threat detection, though human analysts are still essential for validation and context.
  • Agentic AI systems, which operate autonomously, require robust governance frameworks to manage risks and ensure alignment with business goals.
  • The role of AI in journalism is being debated, with concerns about AI-generated content and fake news alongside its potential as a reporting tool.
  • The Tencent Cloud Hackathon successfully developed 17 'AI for Social Good' projects focused on areas like biodiversity and mental health.
  • TechCrunch Disrupt 2025 will feature an AI Stage with industry leaders discussing generative AI, developer tools, and autonomous vehicles, including speakers from Hugging Face.
  • A podcast discussion highlighted concerns about a potential 'AI vulnerability cataclysm' within six months, sparking debate on AI security threats.

Microsoft adds Anthropic AI to Copilot, diversifying from OpenAI

Microsoft is integrating Anthropic's AI models into its Microsoft 365 Copilot assistant for business users, starting September 24, 2025. Customers can choose between Anthropic's Claude Opus 4.1 for complex tasks, Claude Sonnet 4 for lighter functions, and OpenAI's models within the Researcher agent and Copilot Studio, for work such as in-depth research and building custom AI agents. The partnership diversifies Microsoft's AI strategy beyond its primary reliance on OpenAI, a key partner in which Microsoft is a major investor.

Banking AI needs trust, not just tech, for secure future

Financial institutions are rapidly adopting AI, with over 80% using it for efficiency and personalization. However, current regulations struggle to keep pace with AI's growing risks, especially with generative AI and large language models. A new framework called Banking AI Control Standards (BAICS) is being developed by the Financial Services AI Council to help banks securely and responsibly implement AI, addressing eight key control domains to maintain trust and compliance.

Agentic AI needs governance for autonomy and accountability

Agentic AI systems, which operate autonomously, are becoming more integrated into businesses, with over three-quarters of organizations using AI. While these agents offer greater value, they also introduce new risks like drifting from purpose or violating rules. To manage this, strong oversight, transparency, and governance frameworks are essential. Low-code platforms are emerging as a way to embed security and compliance into AI development, ensuring these autonomous systems align with business goals and maintain trust.
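
The oversight pattern described above can be sketched in a few lines: every action an autonomous agent proposes passes through an explicit policy gate that checks it against stated rules and records the decision for audit. This is an illustrative sketch only; the action names, spend limit, and `PolicyGate` class are hypothetical and not tied to any particular platform.

```python
# Hypothetical policy gate for an autonomous agent: actions are checked
# against explicit rules and logged for audit before they run.
from dataclasses import dataclass, field

@dataclass
class PolicyGate:
    allowed_actions: set          # actions the agent may take at all
    spend_limit: float            # hard cap per action, in dollars
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str, cost: float) -> bool:
        """Approve the action only if it is allowed and within budget."""
        ok = action in self.allowed_actions and cost <= self.spend_limit
        self.audit_log.append((action, cost, "approved" if ok else "denied"))
        return ok

# Usage: the agent must pass through the gate before acting.
gate = PolicyGate(allowed_actions={"send_report", "refund"}, spend_limit=100.0)
gate.authorize("refund", 25.0)         # approved
gate.authorize("delete_account", 0.0)  # denied: not an allowed action
gate.authorize("refund", 5000.0)       # denied: exceeds spend limit
```

The audit log is the transparency piece: even denied requests leave a record, so drift from purpose is visible to human reviewers rather than silent.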

AI's impact on journalism: Disruption or demise?

Artificial intelligence is rapidly changing journalism, raising questions about its future. While AI tools can assist reporters, they also pose challenges, with instances of AI-generated articles and fake news surfacing. Experts debate whether AI will eliminate writing jobs or if there will always be a market for verified human reporting. Guidelines are being developed to manage AI use in newsrooms, emphasizing accountability and critical AI literacy.

Cisco trains 80,000 employees in AI for innovation

Cisco is implementing a comprehensive program to equip its 80,000 employees with AI skills, aiming to foster innovation and adoption. The initiative includes a pilot program called 'Teaming with AI' and an internal platform called CircuIT, which provides role-specific AI prompts. Cisco is also redesigning workflows with its Atlas architecture to integrate AI more deeply, focusing on people, tools, and processes to drive productivity and agility.

AI augments Security Operations Centers for better threat detection

Security Operations Centers (SOCs) are gaining significant value from AI augmentation, particularly in anomaly detection and threat prioritization. AI learns normal user and system behavior to flag deviations that traditional methods might miss. While AI excels at processing vast amounts of data and identifying subtle anomalies, human analysts remain crucial for providing context, judgment, and validating threats, creating a collaborative approach to cybersecurity.
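
The "learn normal, flag deviations" idea can be illustrated with a minimal baseline-and-threshold sketch. This is a toy z-score example, not any vendor's detection logic; the metric (hourly login counts) and the threshold of three standard deviations are assumptions for illustration.

```python
# Toy behavioral anomaly detection: learn a baseline from history,
# then flag observations that deviate sharply from it.
import statistics

def build_baseline(samples):
    """Model 'normal' behavior as the mean and stdev of a metric."""
    return statistics.mean(samples), statistics.stdev(samples)

def flag_anomalies(baseline, observations, z_threshold=3.0):
    """Return observations more than z_threshold stdevs from the baseline."""
    mean, stdev = baseline
    return [x for x in observations if abs(x - mean) / stdev > z_threshold]

# Usage: a stretch of typical hourly login counts, then new observations.
history = [12, 14, 13, 11, 15, 12, 13, 14, 12, 13]
baseline = build_baseline(history)
suspicious = flag_anomalies(baseline, [13, 12, 90, 14])  # flags the 90
```

Real SOC tooling models many features at once, but the division of labor is the same as in the article: the statistics surface the outlier, and the analyst decides whether 90 logins in an hour is an attack or a load test.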

Portland art shows contrast AI creations with nature's art

Two art exhibitions in Portland and Falmouth, Maine, showcase contrasting approaches to artmaking. Brian Smith's show at Moss Galleries features handmade art exploring 'queer ecology' and materiality, emphasizing the artist's hand and physical creation. In contrast, Roopa Vasudevan's exhibition at Space gallery delves into generative artificial intelligence, visually exploring AI's capabilities and the fears associated with it, highlighting the evolving definition of art.

TechCrunch Disrupt 2025 features AI Stage with industry leaders

TechCrunch Disrupt 2025, held in San Francisco from October 27-29, will feature an AI Stage with leaders from companies like Character.AI, Hugging Face, and Wayve. The stage will cover topics such as generative AI, developer tools, autonomous vehicles, and national security. Sessions will offer insights into AI's future, investment opportunities, and building strategies for startups.

Tencent Cloud Hackathon develops 17 AI for Social Good projects

The inaugural Tencent Cloud Hackathon concluded in Shenzhen, with 17 teams developing 'AI for Social Good' projects in just 48 hours. Utilizing Tencent Cloud's AI tools like CodeBuddy and TCADP, developers focused on biodiversity preservation and mental health support. Winning projects included 'Poke Planet,' a game that turns conservation achievements into educational content, showcasing technology's potential for social impact.

California Governor Newsom to sign AI regulation bill

California Governor Gavin Newsom announced on September 24, 2025, that he will sign Senate Bill 53, a bill to regulate artificial intelligence, making California the first state to enact such regulations and aiming to balance innovation with safety. The bill requires AI companies to maintain security protocols, provide whistleblower protections, and report safety incidents, and it establishes CalCompute, a public AI computing resource.

Podcast debates AI vulnerability cataclysm threat

A Security Intelligence podcast episode discusses the potential threat of an 'AI vulnerability cataclysm,' with an AI security CEO suggesting it could occur within six months. Host Matt Kosinski and panelists debate whether this is a legitimate threat or fear-mongering, alongside other cybersecurity topics like Scattered Spider's return and cloud misconfigurations.
