OpenAI Tightens Security, Microsoft Saves $500M with AI, DeepSeek Accused of Using ChatGPT Data

Recent reports highlight both the advances and the risks of artificial intelligence. The Internet Watch Foundation (IWF) reported a significant surge in AI-generated child sexual abuse material (CSAM): related web pages rose 400%, and videos jumped to 1,286 in the first half of 2025, compared to just two in 2024. This content is becoming increasingly realistic, prompting law enforcement and the UK government to take action, including banning AI tools used to create such material. Act for Kids is advocating for AI risk education in schools, especially after incidents of students misusing AI.

On the security front, OpenAI is tightening its internal measures to protect its intellectual property, particularly following claims that DeepSeek, a Chinese AI startup, used ChatGPT data. The measures include limiting access to sensitive information, enhanced staff checks, and biometric scans. Microsoft, meanwhile, says it saved $500 million in call center costs by using AI after laying off 9,000 employees, and AI now writes 35% of the code for its new products.

The financial sector is also adopting AI quickly: experts predict the global AI market in financial services will reach $190.33 billion by 2030, and BitMart has launched Beacon (BitMartGPT), an AI trading assistant offering real-time market insights for crypto trading. With generative AI being integrated into ever more software, experts are emphasizing the need for AI governance to manage risks and ensure responsible use across businesses. Pope Leo XIV has also addressed AI's transformative impact, stressing the need for ethical development and the importance of human moral judgment.

Key Takeaways

  • The IWF reported a 400% surge in AI-generated child abuse web pages, reaching 210 in the first half of 2025.
  • AI-generated child abuse videos have surged to 1,286 in the first half of 2025, up from only two in 2024.
  • The UK government is banning AI tools used for creating child abuse content, with penalties up to five years in jail.
  • Act for Kids advocates for AI risk education in sex education to address online child abuse and sextortion.
  • OpenAI is tightening security measures, including biometric scans, due to IP theft fears, particularly from Chinese AI companies like DeepSeek.
  • Microsoft saved $500 million in call center costs by using AI after laying off 9,000 employees.
  • AI now writes 35% of the code for new products at Microsoft.
  • The global AI market in financial services is projected to reach $190.33 billion by 2030.
  • BitMart launched Beacon (BitMartGPT), an AI trading assistant for crypto trading.
  • Experts emphasize the need for AI governance to manage risks and ensure responsible AI use in businesses.

AI-Generated Child Abuse Videos Surge Online, Watchdog Warns

The Internet Watch Foundation (IWF) reports a surge in AI-generated child sexual abuse videos: 1,286 AI-made videos in the first half of 2025, up from only two a year earlier. The IWF says these videos are becoming hard to distinguish from recordings of real abuse. The UK government is cracking down, making it illegal to create or distribute AI tools designed to produce abuse content, with penalties of up to five years in jail.

AI Risks Must Be Taught in Sex Education, Says Act for Kids

Act for Kids warns that sex education needs to cover the risks of AI, saying parents and schools must teach kids about AI-enabled online child abuse and sextortion. The warning follows incidents in which Australian students used AI to create inappropriate images of classmates. The National Center for Missing & Exploited Children has seen a sharp rise in reports of AI-generated child abuse material. Act for Kids offers tips on talking to kids about AI and sex education.

AI-Made Child Abuse Images Flood the Internet

Artificial intelligence is being used to create realistic child sexual abuse material (CSAM). The Internet Watch Foundation found 1,286 AI-generated videos in the first half of 2025, a big jump from only two in 2024. The National Center for Missing & Exploited Children has likewise logged a surge in reports of AI-generated CSAM. Experts warn that this content is becoming harder to detect and could overwhelm authorities.

AI Child Abuse Web Pages Surge, Watchdog Alarmed

The Internet Watch Foundation (IWF) reports a 400% surge in AI-generated child abuse web pages: 210 pages in the first half of 2025, up from 42 a year earlier. These pages contained 1,286 videos, compared to just two in 2024. The IWF says most of the videos are very realistic, are classified at the most severe level of abuse, and mostly involve girls, sometimes using real children's likenesses. Law enforcement is taking action, and the UK is banning AI tools used to create such content.

AI Security: A Four-Step Plan for Your Business

As companies adopt more AI, security teams face new challenges: engineering teams add AI features without seeking security advice, while security teams fail to clearly explain their expectations. Pangea's Sourabh Satish suggests a four-phase approach to AI security: assess current AI use, create clear policies, implement security controls, and educate users. This helps balance AI innovation with strong security.
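
As a concrete illustration of the first phase, an assessment can start with something as simple as inventorying the AI SDKs already installed in an environment. The sketch below is a minimal, hypothetical take on that idea; the package list and the phase mapping are assumptions, not Pangea's actual methodology.

```python
# A minimal sketch of phase one (assess current AI use), assuming "use"
# can be approximated by AI-related packages installed in a Python
# environment. The package list is illustrative, not Pangea's tooling.
from importlib import metadata

AI_PACKAGES = {"openai", "anthropic", "transformers", "langchain", "llama-index"}

# Map every installed distribution's normalized name to its version.
installed = {}
for dist in metadata.distributions():
    name = dist.metadata["Name"]
    if name:
        installed[name.lower()] = dist.version

findings = sorted(AI_PACKAGES & installed.keys())
if findings:
    print("AI-related packages found:")
    for name in findings:
        print(f"  - {name} {installed[name]}")
else:
    print("No known AI packages found in this environment.")
```

A real assessment would also need to cover browser-based tools and SaaS AI features that never show up as installed packages.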

AI Governance Needed for SaaS Security, Experts Say

Generative AI is being added to many software programs, like Slack and Zoom. This means AI is spreading quickly across businesses without much control. A recent study shows 95% of U.S. companies use generative AI, but many worry about data security and privacy. Experts say AI governance is needed to manage risks and ensure AI is used responsibly. This includes policies and controls to protect data and meet legal requirements like GDPR and HIPAA.
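
To make "policies and controls" concrete, here is one hypothetical shape such a control could take: a deny-by-default allowlist deciding which SaaS AI features may touch company data, with personal data routed to privacy review. None of this reflects any specific vendor's policy; the vendors, feature names, and rules are invented for illustration.

```python
from dataclasses import dataclass

# A hypothetical governance control: a deny-by-default allowlist that
# gates which SaaS AI features may process company data.
APPROVED_AI_FEATURES = {
    ("slack", "message_summaries"),   # reviewed: vendor won't train on our data
    ("zoom", "meeting_transcripts"),  # reviewed: data residency confirmed
}

@dataclass
class AIFeatureRequest:
    vendor: str
    feature: str
    handles_personal_data: bool

def is_allowed(req: AIFeatureRequest) -> bool:
    # Personal data triggers a privacy review (think GDPR/HIPAA) rather
    # than automatic approval; everything else is deny-by-default.
    if req.handles_personal_data:
        return False
    return (req.vendor, req.feature) in APPROVED_AI_FEATURES

print(is_allowed(AIFeatureRequest("slack", "message_summaries", False)))  # True
print(is_allowed(AIFeatureRequest("notion", "ai_autofill", True)))        # False
```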

OpenAI Tightens Security Due to IP Theft Fears

OpenAI is increasing its internal security to protect its intellectual property from Chinese AI companies. This follows claims that DeepSeek, a Chinese AI startup, used ChatGPT data to train its R1 model. OpenAI is now limiting access to sensitive information, tightening staff vetting, deploying biometric scans, and isolating critical data from external networks. These measures aim to shield the company's work but may slow internal collaboration.

AI Attacks: How Hackers Fool Artificial Intelligence

Artificial intelligence can be tricked by attacks that cause it to misclassify data. In one type of attack, data poisoning, hackers tamper with training data to plant backdoors in AI systems. Machine learning models can also latch onto the wrong features in images, and adversarial attacks deliberately exploit such weaknesses to cause errors. Researchers are working to find and fix these problems.
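
The sketch below shows the core mechanic of one well-known evasion technique, a fast-gradient-sign (FGSM-style) attack, on a toy logistic-regression model invented for illustration: the attacker nudges each input feature slightly in the direction that increases the model's loss, and a small perturbation flips the prediction.

```python
import numpy as np

# A toy fast-gradient-sign (FGSM-style) adversarial attack on a fixed
# logistic-regression classifier; weights and input are made up.
w = np.array([1.5, -2.0, 0.5, 1.0])
b = 0.1

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid score for class 1

x = np.array([0.2, -0.1, 0.4, 0.3])  # clean input, scored ~0.75 (class 1)
y = 1.0                              # its true label

# For logistic loss, the gradient w.r.t. the input is (p - y) * w.
grad_x = (predict_proba(x) - y) * w

# FGSM step: move each feature slightly in the loss-increasing direction.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score:       {predict_proba(x):.3f}")      # ~0.750 -> class 1
print(f"adversarial score: {predict_proba(x_adv):.3f}")  # ~0.198 -> class 0
```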

Microsoft Saved $500 Million with AI After Layoffs

Microsoft says it saved $500 million in call center costs by using AI, shortly after laying off 9,000 employees. AI now writes 35% of the code for new products at Microsoft, and the company has used AI to boost productivity in sales and customer service. The layoffs, amounting to nearly 4% of Microsoft's workforce, included many engineers.

AI's Rise in Stock Trading: What You Need to Know

AI and machine learning are changing the stock market. Experts predict the global AI market in financial services will grow to $190.33 billion by 2030, and algorithmic trading powered by AI now handles a large share of trades. These systems can learn from data and improve over time without human help. Transformer-based language models such as BERT and FinBERT are used to analyze news and predict market movements.
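
As a concrete example of the news-analysis use case, the sketch below scores invented headlines with the publicly available ProsusAI/finbert checkpoint through the Hugging Face transformers pipeline; it assumes that library is installed and says nothing about how any particular trading system consumes such scores.

```python
# A minimal sketch of news-sentiment scoring with the public
# ProsusAI/finbert checkpoint via Hugging Face transformers;
# the headlines are invented for illustration.
from transformers import pipeline

sentiment = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "Chipmaker beats earnings estimates and raises full-year guidance",
    "Regulators open investigation into bank's lending practices",
]

for headline in headlines:
    result = sentiment(headline)[0]
    # FinBERT tags financial text as positive, negative, or neutral;
    # a trading pipeline might weight signals by the confidence score.
    print(f"{result['label']:>8} ({result['score']:.2f})  {headline}")
```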

BitMart Launches AI Trading Assistant Beacon (BitMartGPT)

BitMart has launched Beacon (BitMartGPT), an AI trading assistant for crypto trading. Beacon offers real-time market information, smart problem-solving, and an interactive knowledge base. It uses X Insights to analyze social media and market trends. Users can ask questions in simple language and get data-driven insights. Beacon is available to BitMart users and aims to improve their trading experience.

AI Models Learn Like Humans, Says Psychologist

A psychology professor argues that some AI models learn and solve problems much as humans do. Humans learn through reinforcement, a key principle in psychology, and AI models programmed to learn through reinforcement can mimic human problem-solving. Such systems are now among the most powerful AI machines, thinking and learning in ways similar to humans.
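
The sketch below shows the reinforcement principle in its simplest textbook form: an epsilon-greedy agent learning the payoffs of a three-armed bandit from reward feedback alone. It is a toy, not any of the models the professor describes.

```python
import numpy as np

# A toy illustration of learning through reinforcement: an epsilon-greedy
# agent on a three-armed bandit, improving from reward feedback alone.
rng = np.random.default_rng(42)
true_rewards = np.array([0.2, 0.5, 0.8])  # hidden payoff odds per arm
estimates = np.zeros(3)                   # the agent's learned values
counts = np.zeros(3)
epsilon = 0.1                             # fraction of random exploration

for step in range(5000):
    if rng.random() < epsilon:
        arm = int(rng.integers(3))       # explore a random arm
    else:
        arm = int(np.argmax(estimates))  # exploit the best arm so far
    reward = float(rng.random() < true_rewards[arm])  # 0/1 payoff
    counts[arm] += 1
    # Incremental average: each reward nudges the estimate toward truth.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("learned arm values:", np.round(estimates, 2))  # ~[0.2, 0.5, 0.8]
```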

Pope Leo Says AI Puts Humanity at a Crossroads

Pope Leo XIV says AI is transforming society and that humanity stands at a crossroads, with AI affecting education, healthcare, and communication. In a message sent to the AI for Good Summit 2025, he stressed the need for ethical AI development that benefits everyone, noting that AI cannot replace human moral judgment and responsibility.
