The rapid advancement and deployment of AI tools are creating both opportunities and challenges across sectors. In information dissemination, AI has shown a concerning tendency to spread misinformation, as seen after the shooting of Charlie Kirk: tools such as Grok and Google's AI Overview misidentified suspects, altered images, and gave conflicting details, reflecting the fact that AI responses are generated probabilistically rather than fact-checked in real time. Amazon, for its part, removed AI-generated books about the incident that appeared for sale within hours, some listing publication dates that preceded the event itself. These failures underscore the need for AI reliability in fast-moving situations, a concern echoed by Utah Governor Spencer Cox's advice to limit social media use.

Meanwhile, efforts are underway to establish ethical AI frameworks. Malaysia and Zetrix AI Berhad are collaborating to develop global standards for Shariah-compliant artificial intelligence, aiming to create a trusted channel for Islamic guidance through Zetrix AI's NurAi, the first Shariah-aligned Large Language Model; the initiative seeks to position Malaysia as a leader in ethical AI and to promote the Islamic digital economy.

On the technological front, Nvidia and Kioxia are partnering to develop a solid-state drive (SSD) capable of 100 million random IOPS by 2027, roughly a 33-fold leap over today's high-end drives, with Nvidia planning to connect the drives directly to GPUs. OpenAI's substantial spending is also driving growth for major tech companies: Oracle's stock surged on a large backlog tied to an OpenAI deal, Broadcom announced a $10 billion custom chip deal with OpenAI, Microsoft remains a heavily invested partner, and Nvidia's GPUs are crucial to OpenAI's development. These partnerships have contributed to the market cap gains of these companies since ChatGPT's launch. At the same time, OpenAI's head of hardware, Richard Ho, is advocating hardware-level safety features for future AI infrastructure, including 'kill switches', observability, and secure execution paths.

Beyond current capabilities, new methods for measuring AI intelligence are emerging, such as Georgios Mappouras's proposed Turing Test 2.0, which assesses an AI's ability to create functional knowledge from non-functional information rather than merely imitate conversation. Widespread adoption still faces hurdles, however. A UK survey indicates workers are hesitant about AI, with many hiding their use of it from employers for fear of being seen as less capable; despite government pushes for adoption, employers often lack clear guidance, and AI has not yet demonstrably boosted productivity. Similarly, many CEOs' aggressive AI adoption strategies are failing to deliver expected results: current AI software often does not increase revenue, and implementation is proving more complex than anticipated, sometimes leading to data breaches and legal issues. In education, UCLA's STEM departments are integrating AI learning tools through the AIMS program to reduce learning disparities in introductory courses, providing supplemental math materials and AI hinting tools, with plans to expand to other sciences.
Key Takeaways
- AI tools like Grok and Google's AI Overview have spread misinformation following the Charlie Kirk shooting, incorrectly identifying suspects and details.
- Amazon removed AI-generated books about the Charlie Kirk shooting that were published shortly after the event, some with erroneous publication dates.
- Malaysia and Zetrix AI Berhad are collaborating to establish global standards for Shariah-compliant AI, leveraging Zetrix AI's NurAi LLM.
- Nvidia and Kioxia are working together to develop SSDs capable of 100 million random IOPS by 2027 to enhance AI server performance.
- OpenAI's significant spending is boosting tech giants like Oracle, which has a large backlog from an OpenAI deal, and Broadcom, which secured a $10 billion custom chip deal with OpenAI.
- OpenAI's head of hardware is calling for hardware-based 'kill switches' and other safety features in future AI infrastructure.
- A new proposed Turing Test 2.0 aims to measure AI intelligence by its ability to create functional knowledge, not just imitate human conversation.
- UK workers express hesitancy towards AI, with many not disclosing its use at work for fear of being seen as less capable.
- Many CEO-led AI adoption initiatives are failing to increase revenue or deliver expected results, facing implementation challenges and organizational hurdles.
- UCLA is introducing AI learning tools in its undergraduate physics and math courses through the AIMS program to support student learning in STEM.
AI fuels false claims after Charlie Kirk shooting
Following the shooting of Charlie Kirk, AI tools such as Grok and Google's AI Overview spread false information across social media, incorrectly identifying the suspect, altering photos, and giving conflicting details about the victim and the motive. Experts explain that AI generates responses based on probability rather than real-time fact-checking, which leads to inaccuracies. While some AI bots have been removed or updated, the episode highlights concerns about AI's reliability during fast-moving events. Utah Governor Spencer Cox urged people to limit social media use to avoid such disinformation.
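To unpack the "probability, not fact-checking" point: a language model produces answers by sampling from a learned distribution over continuations, so repeated queries about a breaking story can yield different, mutually contradictory answers. A minimal Python sketch of that sampling step follows; the candidate answers and weights are invented for illustration and come from none of the cited reporting.

```python
import random

# Toy distribution over completions of "The suspect is ...".
# A real model derives such weights from training data, not from
# live fact-checking, which is why breaking-news answers drift.
CANDIDATES = {
    "unknown": 0.4,
    "in custody": 0.3,
    "a student": 0.2,
    "already cleared": 0.1,
}

def sample_answer(rng: random.Random) -> str:
    """Draw one completion in proportion to its probability."""
    tokens, weights = zip(*CANDIDATES.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
# Five queries, potentially five different "facts".
print([sample_answer(rng) for _ in range(5)])
```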
Charlie Kirk shooting sparks widespread misinformation from AI
The shooting of Charlie Kirk has led to a surge of conspiracy theories and false information, much of it spread by AI chatbots. Chatbots including Perplexity and Grok incorrectly stated Kirk was still alive or mischaracterized the event, echoing earlier failures during crises such as the Los Angeles protests and the Israel-Hamas war. Some circulating videos of the shooting are AI-generated fakes, though many authentic videos are also spreading, alongside unverified claims about the shooter's identity and motives.
Malaysia and Zetrix AI partner on Shariah-compliant AI standards
Malaysia and Zetrix AI Berhad have signed a Letter of Intent, witnessed by Prime Minister Anwar Ibrahim, to create global standards for Shariah-compliant artificial intelligence. The collaboration aims to establish a framework for AI compliance, certification, and governance rooted in Islamic principles. Zetrix AI developed NurAi, the world's first Shariah-aligned Large Language Model, which will be central to the effort. The partnership will focus on developing Shariah certification standards, global advocacy, promoting Malaysia as a center for Islamic AI, and creating a trusted channel on NurAi for Islamic guidance. The initiative supports Malaysia's digital economy and aims to keep AI trusted and representative of the values of more than 2 billion Muslims worldwide.
Nvidia and Kioxia aim for 100 million IOPS SSD by 2027
Kioxia is collaborating with Nvidia to develop a solid-state drive (SSD) capable of 100 million random IOPS by 2027, a significant leap in performance for AI servers. Nvidia plans to connect the drives directly to GPUs to boost AI performance. Current high-end SSDs top out around 3 million IOPS, so the target amounts to roughly a 33-fold jump, reflecting how far storage lags the demands of modern AI workloads. The new drives will likely use a PCIe 7.0 interface and may rely on Kioxia's XL-Flash technology to achieve this unprecedented speed.
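The headline arithmetic is easy to verify from the two figures above, and it also hints at why Nvidia wants the drives attached directly to GPUs: at the target rate, the average spacing between operations is about 10 nanoseconds, leaving no room for a slow path through the host. A quick back-of-the-envelope check in Python:

```python
# Back-of-the-envelope check of the claimed jump: 100 million random
# IOPS (2027 target) versus ~3 million for today's high-end SSDs.
target_iops = 100_000_000
current_iops = 3_000_000

print(f"speedup: ~{target_iops / current_iops:.0f}x")  # ~33x
# Average inter-operation spacing at the target rate, in nanoseconds.
print(f"spacing: {1e9 / target_iops:.0f} ns per op")   # 10 ns
```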
Amazon removes AI-generated books about Charlie Kirk shooting
Amazon has removed several AI-generated books about the Charlie Kirk shooting that appeared for sale shortly after the event. Titles like 'The Shooting of Charlie Kirk' were published within hours of the incident, with one even listing a publication date before the shooting occurred. These books appear to be a quick cash-making scheme using generative AI, which can produce entire books rapidly. Amazon stated it removes content that violates its guidelines, and the listed publication date was a technical glitch. The incident highlights concerns about the proliferation of AI-generated content and misinformation.
UK workers hesitant about AI despite government push
A survey reveals that UK workers are wary of artificial intelligence, with only 17% seeing it as a good substitute for human interaction. A third of employees do not tell their bosses they use AI tools at work, fearing their abilities will be questioned. Many believe AI threatens the social structure and see it as a tool for the less capable. Despite calls from leaders such as Keir Starmer to increase AI adoption, employers often lack clear guidance, and a stigma surrounds AI use. Research suggests AI is not yet significantly boosting productivity and that humans should remain in control.
OpenAI's spending fuels tech giants like Oracle and Broadcom
OpenAI's significant spending is driving major growth for tech companies, including Oracle and Broadcom. Oracle's stock surged due to a large backlog largely from an OpenAI deal, while Broadcom announced a $10 billion custom chip deal with OpenAI. Microsoft, a key partner, has invested heavily in OpenAI. Nvidia's GPUs are also essential for OpenAI's AI development. These partnerships have contributed to a massive increase in the market caps of these companies since ChatGPT's launch. Despite concerns about OpenAI's non-profit structure, its projected revenue growth is substantial.
OpenAI executive calls for hardware kill switches in AI
Richard Ho, head of hardware at OpenAI, said future AI infrastructure will need safety features, including 'kill switches', built directly into the hardware. He expressed concern that current safety measures are primarily software-based and assume the underlying hardware is secure. Ho highlighted the need for memory-rich, low-latency infrastructure for AI agents and discussed networking challenges. OpenAI's proposed safety measures include real-time kill switches in AI clusters, telemetry for detecting abnormal behavior, and secure execution paths. He emphasized observability as a hardware feature for monitoring AI systems.
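The reporting describes these measures only at a high level, but the telemetry-plus-kill-switch loop is easy to picture as pseudocode. The sketch below is a deliberately simplified software stand-in: every name, field, and threshold is invented here, and in Ho's proposal the actual cutoff would live in hardware rather than in a Python callable.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Telemetry:
    """Hypothetical per-node metrics; the fields are illustrative only."""
    node_id: str
    requests_per_sec: float
    memory_util: float  # fraction in [0.0, 1.0]

def is_abnormal(t: Telemetry, rps_limit: float = 10_000.0) -> bool:
    # Placeholder policy: flag runaway request rates or saturated memory.
    return t.requests_per_sec > rps_limit or t.memory_util > 0.98

def watchdog(stream: Iterable[Telemetry],
             kill_switch: Callable[[str], None]) -> None:
    """Watch the telemetry stream and trip the kill switch on anomaly."""
    for t in stream:
        if is_abnormal(t):
            kill_switch(t.node_id)
            break

# Example: the second reading saturates memory and trips the switch.
readings = [Telemetry("node-07", 2_000.0, 0.50),
            Telemetry("node-07", 50_000.0, 0.99)]
watchdog(readings, lambda node: print(f"kill signal -> {node}"))
```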
New Turing Test 2.0 measures AI by knowledge creation
Georgios Mappouras proposes Turing Test 2.0, a new standard for measuring machine intelligence that goes beyond imitating human conversation. This updated test assesses whether AI can transform non-functional information into functional knowledge, essentially creating new insights. Mappouras distinguishes between raw data and applicable knowledge, arguing that true intelligence lies in the ability to innovate. Passing this test would require AI to solve unsolved problems or demonstrate a 'flash of genius,' proving it can generate knowledge not present in its training data.
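One concrete way to read the proposal is as a change in who scores the test: instead of a human judge rating how convincing the conversation is, an automated verifier checks whether the output actually works. The toy harness below illustrates that scoring shift only; the task, names, and test cases are invented here, and solving a well-known problem like sorting would of course not meet Mappouras's bar of knowledge absent from the training data.

```python
def sorting_verifier(candidate_code: str) -> bool:
    """Functional check: does the generated code actually sort?"""
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)  # candidate must define my_sort(xs)
        my_sort = namespace["my_sort"]
        return all(my_sort(list(case)) == sorted(case)
                   for case in ([3, 1, 2], [], [5, 5, 1]))
    except Exception:
        return False

# A conversational judge might accept fluent but wrong prose; the
# verifier passes only output that solves the task.
print(sorting_verifier("def my_sort(xs): return sorted(xs)"))  # True
print(sorting_verifier("def my_sort(xs): return xs"))          # False
```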
CEOs' AI push leads to failures despite enthusiasm
Many CEOs are aggressively pursuing AI adoption to cut costs, but these efforts are frequently resulting in failure and frustration. Despite intense excitement, current AI software is often not increasing revenue for companies. Implementing AI is proving difficult, requiring significant organizational change rather than just technological upgrades. Failed AI rollouts have led to issues like data breaches and legal problems, and some companies have had to reverse automation plans due to AI's inability to fully replace human labor. The gap between executive expectations and AI's real-world capabilities is widening.
UCLA STEM departments adopt AI tools for enhanced learning
UCLA's undergraduate physics and math courses will introduce AI learning tools this fall through the Artificial Intelligence and Math Skills (AIMS) program. This initiative aims to reduce learning disparities in introductory STEM courses by providing supplemental math materials and AI hinting tools. Research indicates that incentivized supplemental assignments improve exam performance, especially for students needing more support. The AIMS project, funded by a Teaching and Learning Center grant, plans to expand to chemistry and life sciences courses, recognizing AI's growing role in higher education.
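The article does not describe how the AIMS hinting tools work internally, but "hinting" in tutoring software usually means releasing guidance gradually so students still do most of the reasoning. A generic sketch of that pattern, with a single invented calculus example standing in for whatever AIMS actually serves:

```python
# Progressive hints for one (invented) problem: differentiate sin(x**2).
HINTS = [
    "Recall the chain rule: (f(g(x)))' = f'(g(x)) * g'(x).",
    "Here the outer function is sin and the inner function is x**2.",
    "Combine them: d/dx sin(x**2) = cos(x**2) * 2*x.",
]

def next_hint(failed_attempts: int) -> str:
    """Release one hint per failed attempt, capped at a full walkthrough."""
    return HINTS[min(failed_attempts, len(HINTS) - 1)]

for attempt in range(4):
    print(f"after {attempt} misses: {next_hint(attempt)}")
```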
Sources
- AI fuels false claims after Charlie Kirk's death, CBS News analysis reveals
- Charlie Kirk Killing Sparks Wild Misinformation
- Malaysia and Zetrix AI Partner to Build Global Standards for Shariah-Compliant Artificial Intelligence
- Nvidia and Kioxia target 100 million IOPS SSD in 2027 — AI server drives aim to deliver 33 times more performance
- Amazon removes likely AI-generated books about Charlie Kirk that sparked conspiracies
- UK workers wary of AI despite Starmer’s push to increase uptake, survey finds
- OpenAI's spending spree is powering the tech industry. Oracle is the latest winner
- Future AI infrastructure will need kill switches built directly into the hardware, says OpenAI
- Measuring Machine Intelligence Using Turing Test 2.0
- CEOs Are Obsessed With AI, But Their Pushes to Use It Keep Ending in Disaster
- AI tools to sweep through UCLA STEM departments, expanding education through AIMS