Recent developments in the tech industry show rapid advances in AI, with companies like OpenAI and Google making headlines. OpenAI is developing a compact, screenless AI device that is fully aware of its user's surroundings, a venture CEO Sam Altman believes could add $1 trillion in market value to the company. Meanwhile, Google is facing an antitrust probe over its deal with Character.AI, a chatbot maker. Elsewhere, cities like Dubuque, Iowa, are balancing AI adoption with security efforts, and universities like Xi'an Jiaotong-Liverpool University are using AI to predict hazards on campus. New laws targeting deepfakes have enabled prosecutions while raising free-speech concerns. AI is also reshaping banking, where only 25% of banks use AI strategically, and agentic AI is expanding the definition of a qualified workforce. As AI continues to evolve, its impact on society warrants scrutiny: students at James Madison University are taking a critical look at generative AI in a new course, and researchers claim that some AI benchmarking platforms favor proprietary models from big tech companies.
Key Takeaways
- OpenAI is developing a new AI device that is compact, screenless, and fully aware of its user's surroundings.
- Google is facing an antitrust probe over its deal with Character.AI, a chatbot maker.
- Cities like Dubuque, Iowa, are balancing AI and security efforts.
- Universities like Xi'an Jiaotong-Liverpool University are using AI to predict hazards on campus.
- The use of deepfakes has raised concerns about free speech and the potential for misuse.
- AI is reshaping the banking industry, with only 25% of banks using AI strategically.
- AI is changing the workforce by expanding the definition of a qualified workforce.
- Students at James Madison University are taking a critical look at generative AI in a new course.
- The accuracy of AI benchmarking platforms has come under scrutiny, with researchers claiming that some platforms favor proprietary AI models from big tech companies.
- OpenAI acquired io, a hardware startup founded by former Apple designer Jony Ive, in a $6.5 billion equity deal.
OpenAI Plans New AI Device
OpenAI is developing a new AI device that is compact, screenless, and fully aware of its user's surroundings. Described as a 'third core device' and an 'AI companion', it will be small enough to sit on a desk or fit in a pocket. CEO Sam Altman believes the venture could add $1 trillion in market value to the company. The device grows out of OpenAI's $6.5 billion equity acquisition of io, a startup founded by former Apple designer Jony Ive.
OpenAI and Jony Ive Introduce io
OpenAI and Jony Ive have introduced io, a new hardware startup that aims to 'completely reimagine what it means to use a computer'. OpenAI acquired io in a $6.5 billion deal, with Ive taking on a key creative and design role at OpenAI. The startup is working on a device that is neither a smartphone nor a wearable: a screenless device that understands its user's environment and context. Expected in late 2026, it is intended as a 'third device' that complements existing gadgets.
OpenAI Develops Anti-Screen Device
OpenAI is positioning the device as an 'anti-screen' product: rather than adding another display to users' lives, it is meant to act as an ambient 'AI companion' that sits on a desk or in a pocket while remaining aware of its surroundings. Building on the io acquisition, the company aims to create a new category of devices that is distinct from any other.
Google Faces Antitrust Probe
Google is facing an antitrust probe over its deal with Character.AI, a chatbot maker. The US Department of Justice is examining whether Google structured the agreement to avoid formal government merger scrutiny. Google signed a licensing deal with Character.AI last year that granted it a non-exclusive license to the company's large language model technology. The DOJ can scrutinize whether the deal is anti-competitive even though it did not trigger a formal merger review. Google is already under pressure from regulators, with the DOJ seeking to break up its dominance of the online search market and of digital advertising technology.
Google Faces Antitrust Investigation
The Department of Justice's probe focuses on whether Google violated antitrust law through its agreement to use the artificial intelligence technology of Character.AI, a popular chatbot maker, signed last year as a non-exclusive license. Investigators are examining whether the agreement was structured to sidestep formal merger review. The investigation is in its early stages and may not lead to an enforcement action.
Dubuque CIO Balances AI and Security
Joe Pregler, the CIO of Dubuque, Iowa, is balancing the city's AI and security efforts. The city has recently transitioned to a modernized data center and is integrating artificial intelligence into its operations. Pregler is developing a citywide AI policy and has adopted Microsoft Copilot for internal use. The city is also investing in public-facing tools to support transparency and utility management. Pregler is working to recruit a new security officer and is prioritizing information security, with a focus on staying within budget.
XJTLU Security Team Uses AI
The security team at Xi'an Jiaotong-Liverpool University is using AI to predict hazards on campus. The team has built the Campus Safety Hub, a smart solution that combines a no-code development platform with an AI prediction model. The hub gives security personnel a unified interface for reporting incidents, while the AI-powered Open Campus Security Prediction Model analyzes data on past incidents to identify patterns and trends, supporting proactive planning and prevention.
Deepfake Laws Bring Prosecution
Pennsylvania's attorney general has used a new law banning AI-generated child sexual abuse material to charge a man with possessing such content. The law was enacted after a police officer was found with a cache of lurid AI-generated images of minors; because the discovery predated the ban, the officer was not charged, but the statute has since enabled prosecutions in other cases. More broadly, deepfakes continue to raise concerns about misuse, while the laws targeting them draw pushback on free-speech grounds.
Agentic AI Changes Workforce
Agentic AI is changing the workforce by expanding the definition of a qualified workforce. AI agents can now handle tasks once considered beyond the reach of automation, and the total addressable market for digital labor could soon reach trillions of dollars. Salesforce CEO Marc Benioff believes AI will have a significant impact on the workforce and that companies must adapt to stay competitive.
AI Reshapes Banking Industry
AI is reshaping the banking industry, yet only 25% of banks use AI strategically, according to a new report from Boston Consulting Group. The report finds that most banks are stuck in pilot projects rather than scaling AI for real competitive advantage, and it warns that they must overhaul their strategy, technology, and governance or risk losing their place in the financial ecosystem. AI is eroding traditional banking advantages such as pricing power and customer loyalty, and banks must adapt to stay relevant.
JMU Students Study Generative AI
Students at James Madison University will take a critical look at generative AI in a new course. The course explores the potential benefits and risks of generative AI and its impact on society, and students will examine the technology behind it as well as its applications in fields such as art, music, and writing.
AI Benchmarking Platform Faces Scrutiny
A popular AI benchmarking platform, LM Arena, is facing scrutiny from researchers who claim its tests favor proprietary AI models from big tech companies. The researchers found that the platform grants the makers of major language models 'undisclosed private testing practices' that give their models an advantage over open-source alternatives. The study suggests that the platform's leaderboard rankings may not accurately reflect how different AI models perform, and that big tech companies may effectively be able to 'rig' the system in their favor.
Sources
- OpenAI’s next big bet won’t be a wearable: report
- OpenAI + Jony Ive = io. Is This the Future of AI Hardware?
- The Anti-Screen Device: OpenAI & Jony Ive Are Reinventing AI Hardware
- Google faces DoJ probe over deal for AI tech, Bloomberg Law reports
- Google Faces Antitrust Investigation Over Deal for AI-Fueled Chatbots
- Dubuque, Iowa, CIO Balances AI, Security, Data Center Upgrade
- XJTLU security team harnesses AI to predict hazards - Xi'an Jiaotong-Liverpool University
- Deepfake Laws Bring Prosecution and Penalties, but Also Pushback
- Agentic AI Is Already Changing the Workforce
- AI reshapes banking – except banks aren’t ready to be reshaped
- JMU students will take a critical look at generative AI in course
- AI benchmarking platform is helping top companies rig their model performances, study claims