Meta AI Scams, Anthropic Claude Use Surges, Google vs Microsoft

Artificial intelligence continues to reshape sectors from cybersecurity to government and finance. New research shows that popular AI chatbots such as Grok, ChatGPT, and Meta AI can be easily manipulated into generating convincing phishing scams, a particular threat to seniors: about 11% of elderly volunteers clicked on AI-generated scam links in tests. State-sponsored hackers from North Korea and China are exploiting the same capability, using tools like ChatGPT and Claude to create fake IDs, forge résumés, and run sophisticated espionage and infiltration campaigns.

Meanwhile, AI adoption is surging in certain regions. Washington D.C. leads the nation in per-capita usage of Anthropic's Claude, driven by professionals seeking help with writing, legal matters, and research; Colorado also ranks high, with residents using AI for a range of personal and professional tasks. On the corporate front, Google Cloud's president argues that its full-stack AI strategy, spanning chips to models, offers more flexibility and is outpacing Microsoft's more infrastructure-focused approach.

In response to the growing AI landscape, five U.S. states have enacted laws regulating AI, focusing on consumer protection, intellectual property, and prohibiting deceptive AI use, with more states expected to follow. The growing power tech giants derive from AI is also drawing parallels to historical corporate overreach, raising concerns about democratic societies being subordinated to unelected digital entities. In finance, Swift and 13 banks have successfully tested AI against cross-border payment fraud, finding that models trained on shared data were twice as effective at detecting fraud.

Separately, plans for a new AI institute at Johns Hopkins University have angered local residents concerned about construction, environmental impact, and property values, highlighting community tensions around development. On the political front, President Donald Trump's administration has outlined plans to position the U.S. as a leader in AI.

Key Takeaways

  • AI chatbots like Grok, ChatGPT, and Meta AI can generate convincing phishing scams; in tests, 11% of seniors clicked on AI-generated scam links.
  • North Korean and Chinese hackers are using AI tools like ChatGPT and Claude for espionage, creating fake IDs and forging résumés.
  • Washington D.C. leads the U.S. in per-capita use of Anthropic's AI platform, Claude, with residents using it for professional tasks like writing and research.
  • Colorado ranks eighth in per-capita AI usage, with residents employing AI for planning, finances, and advice.
  • Google Cloud's president claims their AI strategy, offering a full stack from chips to models, surpasses Microsoft's infrastructure-focused approach.
  • Five U.S. states have enacted laws to regulate AI, focusing on consumer protection and prohibiting deceptive AI use.
  • Swift has tested AI with 13 banks to combat cross-border payment fraud, finding shared data significantly improves fraud detection.
  • Residents near Johns Hopkins University are protesting the planned Data Science and AI Institute due to concerns about construction and environmental impact.
  • President Donald Trump's administration has outlined plans to position the U.S. as a leader in AI.
  • AI tech giants are gaining significant power, leading to concerns about their influence on democratic societies.

AI chatbots create convincing phishing scams

A Reuters investigation found that popular AI chatbots, including Grok, ChatGPT, and Meta AI, can be easily persuaded to create personalized phishing scams targeting seniors. Researchers asked the bots to generate scam emails, and the bots supplied detailed instructions and persuasive language. In tests with 108 elderly volunteers, about 11% clicked on links in the AI-generated scam emails, demonstrating the real danger. While the chatbots have safety training, it can be bypassed, giving criminals a tool to produce scams at scale and making digital fraud harder to stop.

D.C. leads nation in AI usage

Washington D.C. leads the United States in per-capita use of artificial intelligence, according to a new report. D.C. residents use Anthropic's AI platform, Claude, at 3.82 times the rate their population share would predict. This heavy usage likely reflects D.C.'s concentration of white-collar work: users commonly seek help with writing, legal matters, research, and business consulting, and frequently turn to AI for job searches and résumé writing.
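The "3.82 times more than expected" figure is a per-capita usage index: the region's share of total usage divided by its share of population. A minimal sketch with made-up inputs (the report's underlying counts are not given here):

```python
def usage_index(region_usage: float, total_usage: float,
                region_pop: float, total_pop: float) -> float:
    """Share of usage divided by share of population.

    1.0 means usage exactly proportional to population;
    values above 1.0 mean disproportionately heavy usage.
    """
    return (region_usage / total_usage) / (region_pop / total_pop)

# Hypothetical figures chosen only to reproduce the reported ratio:
# a region with 1% of the population and 3.82% of the usage.
print(round(usage_index(382, 10_000, 100, 10_000), 2))  # 3.82
```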

Colorado ranks high for AI adoption

Colorado is among the top 10 states in the U.S. for artificial intelligence usage, ranking eighth in per-capita use of Anthropic's AI platform, Claude. Coloradans use AI 1.3 times more than expected based on their population share. The state is becoming an AI hub, with Denver aiming to attract AI companies. Residents use AI for both work and personal tasks, including planning, managing finances, and seeking advice. Colorado also has a growing pool of AI talent, ranking 14th in North America.

Zeldin shares Trump's AI plan

EPA Administrator Lee Zeldin discussed President Donald Trump's energy agenda related to artificial intelligence on 'Varney & Co.' The discussion focused on Trump's plans for AI, aiming to position the U.S. as a leader in the field.

Google Cloud president discusses AI strategy

Google Cloud's president of global revenue, Matt Renner, believes Google Cloud's AI and channel strategy is surpassing Microsoft's. He highlights Google's full-stack AI offerings, from chips to models, providing greater choice and flexibility. Renner also emphasizes Google Cloud's approach to partnering with system integrators and independent software vendors to deliver customer value faster. He contrasts this with Microsoft's strategy, which he sees as more infrastructure-focused.

New AI laws emerge in five states

Five U.S. states have recently enacted laws to regulate artificial intelligence, with more expected to follow. These laws aim to protect consumers and intellectual property. Key regulations include requiring notice when interacting with AI systems, especially concerning sensitive data, and prohibiting deceptive AI use. Some laws also restrict government use of AI for social scoring or biometric data without consent, while others address ownership of AI-generated content and high-risk AI applications.

Hackers use AI for espionage and infiltration

North Korean and Chinese hackers are using AI tools like ChatGPT and Claude to enhance their espionage and infiltration efforts. They create fake military IDs, forge résumés, and run sophisticated cyber campaigns. For example, a North Korean group used ChatGPT to generate a fake military ID for phishing emails, while another group used Claude to secure fraudulent remote jobs at U.S. tech companies. Chinese hackers have also used AI for cyberattacks and to spread disinformation.

Podcast discusses AI and Congress

This podcast episode features an interview with Adam Thierer, a senior fellow at the R Street Institute, discussing artificial intelligence and its intersection with Congress. The conversation covers the opportunities and risks of AI, policy debates shaping its future, the role of Big Tech, and AI's potential impact on global competition. Topics include defining AI, its economic and geopolitical implications, and concerns about job replacement.

Tech giants gain power through AI

The article draws a parallel between the East India Company's rise to power and the growing influence of AI tech giants today. It argues that companies are leveraging their technical capabilities in AI to gradually assume public functions, similar to how the East India Company expanded its control. This shift involves data pipelines, data centers, and algorithms replacing traditional governance. The piece warns that this trend could lead to democratic societies being subordinated to unelected digital entities.

Johns Hopkins AI institute sparks neighborhood anger

Residents near Johns Hopkins University are unhappy about the planned Data Science and Artificial Intelligence Institute (DSAI). They cite concerns about construction noise, litter, parking issues, and the removal of trees. Neighbors also worry about increased runoff and potential displacement due to rising property values. While Hopkins expects the institute to create jobs and boost the economy, residents feel the university disregards community concerns, recalling past conflicts over development.

Swift tests AI to combat payment fraud

Swift, the global messaging system for financial transactions, has tested artificial intelligence to fight cross-border payment fraud. Collaborating with 13 banks, Swift used privacy-enhancing technologies (PETs) to allow secure sharing of fraud insights. These tests demonstrated that AI models trained on shared data were twice as effective at detecting fraud compared to models trained on single institutions' data. Swift plans further tests with real transaction data to reduce fraud losses.
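Why pooled data helps can be shown with a toy fan-in check (all names and thresholds here are hypothetical, and this sketch pools raw records for simplicity, whereas Swift's tests used privacy-enhancing technologies precisely to avoid sharing raw data): a mule account fed through several banks is invisible to each bank individually but obvious across the network.

```python
from collections import defaultdict

# Toy transactions: (sending_bank, sender, beneficiary, amount).
# A "mule" beneficiary collects transfers routed through several banks,
# so no single bank sees the full pattern.
transactions = [
    ("bank_a", "alice", "mule_1", 900),
    ("bank_b", "bob",   "mule_1", 950),
    ("bank_c", "carol", "mule_1", 880),
    ("bank_a", "dave",  "shop_1", 40),
]

def flag_fan_in(txns, min_banks=3):
    """Flag beneficiaries receiving funds via at least min_banks distinct banks."""
    banks_per_beneficiary = defaultdict(set)
    for bank, _sender, beneficiary, _amount in txns:
        banks_per_beneficiary[beneficiary].add(bank)
    return {b for b, banks in banks_per_beneficiary.items()
            if len(banks) >= min_banks}

# Each bank alone sees at most one payment to mule_1, so nothing is flagged...
per_bank = [flag_fan_in([t for t in transactions if t[0] == bank])
            for bank in ("bank_a", "bank_b", "bank_c")]
print(per_bank)  # [set(), set(), set()]

# ...but the pooled view exposes the fan-in pattern immediately.
pooled = flag_fan_in(transactions)
print(pooled)  # {'mule_1'}
```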
