Google Gemini, ChatGPT Use Surges; Scale AI Investigated

The artificial intelligence landscape continues to evolve rapidly, with new tools and applications emerging across sectors. In education, a growing number of students are using AI tools like ChatGPT and Google Gemini for assignments, prompting discussions about AI literacy and responsible use rather than outright bans. Educators are exploring AI avatars and specialized courses to prepare students for an AI-integrated future, and nearly 80% of Oklahoma students reportedly use AI for schoolwork.

Businesses, meanwhile, are increasingly adopting AI agents: more than half of organizations using generative AI now employ these autonomous systems, which can make decisions and execute tasks. Google Cloud highlights the trend while noting that it brings new security and governance challenges, requiring strong controls for data privacy and risk mitigation. In the financial world, Wall Street strategists are raising S&P 500 targets, largely driven by sustained AI investment and the strong performance of major tech companies, though concerns about market concentration persist as a few large firms dominate the AI sector.

The ethical implications of AI training data are also under scrutiny, with The Atlantic launching a tool to reveal the content used to train AI models. This comes as AI companies have reportedly downloaded millions of YouTube videos without explicit permission for training purposes, sparking legal battles over copyright and fair use. On the cybersecurity front, IBM's latest podcast discusses emerging threats, including AI-powered cybercrime and new ransom tactics. In Pittsburgh, new AI Hub Suites aim to foster the region's tech scene by connecting businesses with AI talent and resources. And in San Francisco, labor regulators are investigating Scale AI's treatment of the contract workers who train its AI models, examining labor practices affecting local residents.

Key Takeaways

  • AI tools like ChatGPT and Google Gemini are increasingly used by students for assignments, leading educators to focus on AI literacy and responsible use.
  • Nearly 80% of Oklahoma students reportedly use AI for assignments, highlighting the widespread adoption in education.
  • Businesses are rapidly adopting AI agents, with 52% of organizations using generative AI now employing these autonomous systems for task execution.
  • Wall Street strategists are increasing S&P 500 targets, with AI investments being a primary driver of market performance.
  • Concerns exist regarding the concentration of power in the AI industry, with a few large companies dominating the market.
  • The Atlantic has launched AI Watchdog to increase transparency regarding the data used to train AI models, addressing concerns about copyrighted or objectionable material.
  • Millions of YouTube videos have reportedly been downloaded by AI companies without permission for training purposes, leading to ongoing legal disputes.
  • AI agents present new security challenges for businesses, requiring robust governance and controls for data privacy and risk mitigation.
  • San Francisco labor regulators are investigating Scale AI's treatment of its contract workers involved in AI model training.
  • Industrial AI success depends on integrated infrastructure, balancing compute, networking, and storage, rather than just powerful hardware.

The Atlantic launches AI Watchdog to reveal training data secrets

The Atlantic has created AI Watchdog, a new tool to uncover what data is used to train artificial intelligence models. Generative AI chatbots are becoming popular sources of information, but the data they learn from is kept secret by companies. This data can include copyrighted works, misinformation, and objectionable material. AI Watchdog allows users to search over 7.5 million books, 81 million articles, and millions of videos to see what content is included in AI training data sets. The tool aims to bring transparency to the 'black box' of machine learning.

Search tool reveals YouTube videos used for AI training

The Atlantic has launched a search tool that allows people to find YouTube videos used to train generative AI. This tool is part of an investigation into how AI companies are using video content. While the tool shows which videos are in AI training data sets, it does not definitively prove that AI companies used them. AI developers often download videos in bulk to create AI products capable of generating video.

Millions of YouTube videos taken for AI training without permission

AI companies have downloaded over 15.8 million YouTube videos from more than 2 million channels without permission to train their AI products. Many of these are how-to videos, and they are found in at least 13 different data sets. While YouTube allows downloads for personal viewing, this mass downloading for AI training is different and potentially illegal. Tech companies argue this is 'fair use,' but lawsuits are ongoing, and the outcome could affect creators' willingness to share content online. Generative AI tools are already creating videos that compete with human-made content.

Teach AI in class rather than ban it, student argues

A high school student argues that banning artificial intelligence from classrooms is not effective and misses opportunities for learning. While some students misuse AI to cheat, others, like the author, use it as a tool to enhance understanding. AI is becoming increasingly integrated into work and education, with AI use among employees doubling in two years. Instead of banning it, educators should teach AI literacy, responsible use, and critical thinking about AI's ethical issues and limitations. This approach can help students prepare for a future where AI is prevalent.

Tulsa summit focuses on AI's role in education's future

Educators and industry leaders met in Tulsa for the Education Leadership Summit to discuss the future of AI in education. Panelists stressed the importance of balancing AI integration with critical thinking skills, noting that students are already using AI tools. A survey showed nearly 80% of Oklahoma students use AI for assignments, highlighting the need for responsible usage education. The summit aimed to strengthen connections between schools and industries to prepare students for AI-driven job markets.

AI agents in business bring new security challenges

AI agents are now being used in businesses to perform tasks, creating both opportunities and risks for security teams. A Google Cloud report shows that 52% of organizations using generative AI have moved to AI agents, which can make decisions and execute tasks. While AI has improved security posture for many, with faster threat identification and incident resolution, managing these autonomous systems presents new governance challenges. CISOs must focus on data privacy, security, and establishing strong controls to mitigate risks.

Bakery Square launches AI Hub Suites to boost Pittsburgh's tech scene

Bakery Square in Pittsburgh has opened new 'Corporate AI Hub Suites' to give companies access to the city's artificial intelligence community and talent. The suites offer flexible office space; the first tenant is financial wellness platform Credit Genie. The initiative aims to solidify Pittsburgh's position as a global AI competitor by connecting businesses with resources like Carnegie Mellon University and local AI experts. The development is part of a broader effort to highlight and advance AI innovation in the region.

AI concentration risk concerns Wall Street strategists

Wall Street strategists are discussing the significant concentration risk in the artificial intelligence industry, where a few large companies dominate. Keith Lerner, Chief Investment Officer at Truist, noted that the 'Mag 7' tech companies have effectively acquired or influenced over 800 other companies. He cautioned against investing solely in AI due to future uncertainties and policy changes. Lerner also pointed out that while US market concentration is historically high, it is not out of line with global levels, and that these large companies function more like conglomerates.

AI fuels Wall Street upgrades for S&P 500

Wall Street strategists are raising their S&P 500 targets, driven by the strong performance of artificial intelligence investments. Firms like Deutsche Bank, Wells Fargo, and Barclays have increased their forecasts, citing resilient earnings and continued AI capital expenditure. Despite concerns about narrow market leadership, strategists believe AI investment will sustain the bull market. They view the economic outlook with cautious optimism, emphasizing that the current rally is heavily dependent on ongoing AI spending.

IBM podcast discusses AI, cybercrime, and new threats

The first episode of IBM's 'Security Intelligence' podcast explores emerging cyber threats, including 'vibe hacking' and HexStrike AI, an offensive security framework enabling AI agent armies. Hosts Jeff Crume, Suja Viswesan, and Nick Bradley also discuss a new ransom demand tactic from Scattered Lapsus$ Hunters and the rise of remote access trojans (RATs). The discussion questions whether cybercrime has become too easy due to these advancements.

Boise State professor uses AI avatars and courses for students

Margaret Sass, a lecturer at Boise State's School for the Digital Future, is using generative AI tools like ChatGPT and Google Gemini to enhance learning. She created 12 AI avatars representing different workplace personalities for her 'Teamwork in the Digital Age' class, allowing students to practice collaboration. Sass also offers AI courses to senior citizens, focusing on positive impacts and safety tips, including how to avoid AI scams. She believes teaching ethical AI use is crucial as the technology becomes more integrated into jobs and daily life.

Industrial AI needs integrated infrastructure for success

Building effective industrial AI requires carefully engineering the entire infrastructure, not just adding more powerful hardware. One manufacturer saw a 20x boost in output by properly balancing compute, networking, and storage for AI workloads. Many industrial settings struggle with aging infrastructure that cannot handle the data demands of AI. Experts advocate a hybrid approach, using on-premises systems for real-time tasks and the cloud for other workloads, to achieve performance and efficiency while meeting requirements such as sovereign AI.

San Francisco probes Scale AI over worker treatment

San Francisco labor regulators are investigating Scale AI's treatment of its workers, who are classified as contractors. Scale AI relies on thousands of these workers to train AI models. The city's Office of Labor Standards Enforcement is looking into labor practices affecting San Francisco residents who worked for Scale AI over the past three years. Scale AI stated it is cooperating with the investigation and complies with all labor laws. The probe follows previous investigations and lawsuits concerning worker classification and pay.
