The FBI has warned that scammers are using text messages and AI-generated voice messages to impersonate senior US officials, targeting government officials and their associates in order to steal login credentials and gain access to personal accounts that can then be used to go after additional officials. Meanwhile, experts argue that the future of AI relies more on hardware than on models alone. In industry news, Kiteworks has introduced a platform that unifies AI governance and compliance, RunDiffusion has launched a creator-owned AI training platform, and the Department of Defense has appointed a new Senior Technical Advisor for AI Ethics & Risks. Natural gas is being promoted as a reliable and affordable source of energy to fuel the AI revolution, while experts remain skeptical of claims that AI can double the human lifespan. Finally, the Copyright Office has released a report suggesting that companies using copyrighted works to train AI models may not qualify for the fair use defense.
Key Takeaways
- The FBI warns of scammers using AI-generated voice messages to impersonate senior US officials.
- Experts emphasize the importance of hardware in AI development, stating that the future of AI relies more on hardware than just models.
- Kiteworks has developed a platform to unify AI governance, compliance, and third-party risk.
- RunDiffusion has launched a creator-owned AI training platform called Runnit.
- The Department of Defense has appointed a new Senior Technical Advisor for AI Ethics & Risks.
- Natural gas is being considered as a reliable and affordable source of energy to fuel the AI revolution.
- Some experts are skeptical about claims that AI can double the human lifespan.
- The Copyright Office suggests that companies using copyrighted works to train AI models may not qualify for the fair use defense.
- SoundCloud has updated its terms of use to state that it will not use user content to train generative AI models without explicit consent.
- Swisscom has joined the Swiss National AI Institute to accelerate the development of innovative and trustworthy AI products and services.
FBI Warns of AI Voice Scams
The FBI is warning that malicious actors are using text messages and AI-generated voice messages to impersonate senior US officials in order to gain access to the personal accounts of state and federal government officials. The attackers use these messages to establish rapport with targets before sending them a link to a separate messaging platform, which may be a hacker-controlled website that steals login credentials. Access to a compromised account could then be used to go after additional government officials or their associates and contacts. The FBI advises people not to assume a message is authentic just because it claims to come from a senior official.
AI's Future Relies on Hardware
Experts believe that the future of AI relies more on hardware than just models. Zhou Shaofeng, founder of Xinghan Laser, argues that the current focus on models is overshadowing the importance of hardware in AI development: real intelligence requires perception, interaction, and action, and that starts at the hardware level. Zhou also notes that the current imbalance in AI investment, with less than 10% going into infrastructure, could become AI's Achilles' heel.
Kiteworks Unifies AI Governance and Compliance
Kiteworks has developed a platform that unifies AI governance, compliance, and third-party risk. The platform provides a single immutable audit trail that tracks who sends what to whom, when, and how. It also includes features such as geofencing controls, automated compliance reporting, and integrations with DLP, ATP, and SIEM tools. Kiteworks aims to help organizations manage today's demands while preparing for what's next in AI.
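As a rough illustration of the "single immutable audit trail" idea, the sketch below hash-chains each log entry (who, what, to whom, when, how) to the previous one so that tampering with past entries is detectable. It is a generic example, not Kiteworks' implementation, and every field name in it is hypothetical.

```python
# Illustrative sketch of a hash-chained (tamper-evident) audit log.
# Field names are hypothetical; this does not reflect Kiteworks' implementation.
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, sender, recipient, artifact, channel):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "who": sender,            # who sent it
            "to_whom": recipient,     # who received it
            "what": artifact,         # what was sent (e.g. a file name or ID)
            "how": channel,           # how it was sent (e.g. email, SFTP, API)
            "when": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,   # links this entry to the one before it
        }
        # Hash the entry contents plus the previous hash; altering any past
        # entry breaks every later hash, which makes the log tamper-evident.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the chain and confirm no entry has been altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

trail = AuditTrail()
trail.record("alice@example.com", "vendor@example.org", "report.pdf", "SFTP")
print(trail.verify())  # True while the log is untouched
```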
Dani Gibbons Takes on New Role at DOD
Dani Gibbons has assumed the role of Senior Technical Advisor for AI Ethics & Risks at the Department of Defense's Chief Digital and Artificial Intelligence Office. Gibbons was previously the Deputy Division Chief within the same office and has experience in AI ethics and risk management. She will work on developing technical tools, assessments, and policy guidance to operationalize the DoD AI Ethical Principles.
RunDiffusion Launches AI Training Platform
RunDiffusion has launched a new platform called Runnit, which provides creator-owned AI training pipelines. The platform includes five specialized training pipelines tailored for professionals, artists, and teams. Runnit ensures that creators maintain full ownership of their content and provides features such as templates, team collaboration tools, and custom model options.
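To give a concrete, if hypothetical, sense of what a "creator-owned" training run might capture, the sketch below pairs a pipeline choice with explicit ownership and collaboration fields. It is not based on Runnit's actual API; every name and field in it is an assumption.

```python
# Hypothetical sketch of a creator-owned training-run configuration.
# Names and fields are illustrative only; this is not Runnit's actual API.
from dataclasses import dataclass

@dataclass
class TrainingRunConfig:
    pipeline: str                     # e.g. one of several specialized pipelines
    dataset_paths: list[str]          # creator-supplied training material
    owner: str                        # the creator retains ownership of the model
    license: str = "all-rights-reserved"
    share_with_team: bool = False     # optional team collaboration
    base_template: str | None = None  # optional starting template

config = TrainingRunConfig(
    pipeline="portrait-style",
    dataset_paths=["./my_artwork/"],
    owner="creator@example.com",
    share_with_team=True,
)
print(config.owner)  # ownership metadata stays with the creator
```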
Natural Gas Can Fuel AI Revolution
Natural gas can fuel the AI revolution by providing a reliable and affordable source of energy. The demand for energy to power data centers is increasing, and natural gas can help meet this demand. It is also a cleaner source of energy than coal and can be used in conjunction with renewable energy sources. Companies like Williams are investing in natural gas infrastructure to support the growth of AI.
Swisscom Joins National Swiss AI Research
Swisscom has joined the Swiss National AI Institute to accelerate the development of innovative and trustworthy AI products and services. The partnership aims to strengthen Swiss sovereignty in the field of AI and promote knowledge transfer between research and industry. Swisscom will work with the institute to develop relevant benchmarks and create curated datasets that prioritize transparency and bias reduction.
AI May Not Double Human Lifespan
Some technologists claim that AI can double the human lifespan by 2030, but experts are skeptical. While AI can help with medical research and improve healthcare, there is no evidence that it can modulate the biological process of aging. The burden of proof is on AI scientists to demonstrate that they can extend human lifespans in a measurable way.
SoundCloud Updates AI Training Policy
SoundCloud has updated its terms of use to state that it will not use user content to train generative AI models without explicit consent. However, the fine print is still unclear, and it remains uncertain how the company will implement the policy. The update comes after concerns were raised about the use of user data for AI training.
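In practice the policy amounts to a consent gate: a track should only enter a generative-model training set if its uploader has explicitly opted in. The sketch below illustrates that idea in general terms; the field names and opt-in flag are assumptions, not SoundCloud's actual schema or implementation.

```python
# Minimal sketch of consent-gated selection of training data.
# Field names such as "ai_training_consent" are hypothetical and do not
# describe SoundCloud's actual schema.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    uploader: str
    ai_training_consent: bool  # True only if the uploader explicitly opted in

def select_training_tracks(tracks):
    """Keep only tracks whose uploaders gave explicit consent."""
    return [t for t in tracks if t.ai_training_consent]

catalog = [
    Track("t1", "artist_a", ai_training_consent=True),
    Track("t2", "artist_b", ai_training_consent=False),  # excluded by default
]
print([t.track_id for t in select_training_tracks(catalog)])  # ['t1']
```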
Generative AI Training May Not Qualify for Fair Use
The Copyright Office has released a report concluding that companies that use copyrighted works to train AI models may not qualify for the fair use defense. The report suggests that commercializing copyrighted works in training data to compete with the original works is unlikely to fit the fair use exception. The office recommends that companies consider licensing frameworks to acquire the necessary data.
Sources
- Scammers use AI to spoof senior U.S. officials' voices, FBI warns
- Malicious actors using AI to pose as senior US officials, FBI says
- FBI warns of AI voice messages impersonating top U.S. officials
- AI’s future hinges on hardware, not just models: expert
- Why AI Hardware, Not Just Bigger Models, Will Define The Future Of AI
- The GOAT of Data Security? How Kiteworks Unifies AI Governance, Compliance, and Third-Party Risk
- Dani Gibbons Takes on New Role at DOD Chief Digital and Artificial Intelligence Office (HS Today)
- RunDiffusion Launches “Runnit” Platform with Creator-Owned AI Training Pipelines
- Natural gas can fuel our AI revolution
- Swisscom Dips Toes in National Swiss AI Research
- AI Will Double the Human Lifespan By 2030, Tech CEO Claims. Is This the Dawn of Immortality?
- Soundcloud updates its AI training policy, but it's still unclear
- Generative AI Training May Not Qualify for the Fair Use Defense