AI Regulation, Funding, and Job Impact

New York has passed the RAISE Act to regulate AI safety, setting transparency standards for AI labs and requiring them to report safety incidents, with penalties for non-compliance. Senator Andrew Gounardes co-sponsored the bill, which requires AI companies to create safety plans against threats such as automated crime and bioweapons. In business, Conveyor raised $20 million to develop AI tools for B2B sales, automating tasks like security reviews. Sinclair is investing $5 million in AI learning. Meanwhile, ChatGPT users want improvements such as the chatbot admitting when it doesn't know something and showing its energy usage. Legal experts in India discussed AI's impact on the legal field, emphasizing ethics and accuracy. Changes President Trump made to cybersecurity rules are sparking debate among experts. Some CEOs are warning employees about potential AI job losses, causing anxiety, while experts suggest focusing on training and support instead. AI is being used to analyze social media and gauge public opinion, helping companies make quick decisions. Experts suggest that jobs requiring manual labor, creativity, and good judgment are safer from AI, and that AI skills in finance and IT can lead to higher earnings.

Key Takeaways

  • New York's RAISE Act aims to regulate AI safety and requires transparency from AI labs.
  • The RAISE Act requires AI companies to create safety plans against threats like automated crime and bioweapons.
  • Conveyor secured $20 million to build AI tools for B2B sales automation.
  • Sinclair is investing $5 million to advance AI learning.
  • ChatGPT users want the AI to admit when it doesn't know something and display its energy usage.
  • Indian legal experts discussed the ethical and accuracy considerations of AI in law.
  • Changes to cybersecurity rules by President Trump are under expert scrutiny.
  • CEOs' warnings about AI job losses can cause employee anxiety.
  • AI is used to analyze social media to gauge public opinion.
  • Jobs requiring manual labor, creativity, and judgment are considered safer from AI.

New York passes RAISE Act to regulate AI safety

New York passed the RAISE Act to help prevent AI-related disasters caused by systems from companies like OpenAI and Google. The law sets transparency standards for AI labs and requires them to report safety incidents. Companies that fail to meet the standards could face penalties of up to $30 million. The bill now goes to Governor Kathy Hochul for approval.

NY legislature approves Gounardes' AI safety bill

The Responsible AI Safety and Education (RAISE) Act has passed both houses of New York's legislature. Senator Andrew Gounardes co-sponsored the bill, which requires large AI companies to create safety plans protecting against threats such as automated crime and bioweapons. The bill empowers New York's Attorney General to penalize companies that don't comply.

Conveyor gets $20M to build AI for business sales

Conveyor, a software company, raised $20 million to build AI tools for B2B sales. Its AI agents, such as "Sue" and "Phil," handle security reviews and RFPs. The tools aim to speed up deals by automating trust-related tasks, and customers like Zendesk and Atlassian report efficiency improvements from using Conveyor's platform.

5 improvements users want for ChatGPT

ChatGPT is useful, but it has flaws that users want fixed. One wish is for ChatGPT to admit when it doesn't know something instead of making things up. People also want it to show how much energy it uses and connect users with real experts. Other requests include ChatGPT noticing when users are overusing it and helping them stop.

Sinclair invests $5M in AI learning

Sinclair plans to spend $5 million to advance AI learning, saying AI is the future.

AI in law: India's legal leaders discuss AI's impact

Legal experts in India met to discuss how AI is changing the legal field. They talked about using AI in courtrooms and legal work, focusing on ethics and fairness. Speakers from Microsoft and SEBI highlighted the need for responsible AI use. They also warned against relying too much on AI without checking its accuracy.

Experts react to Trump's changes to cybersecurity rules

President Trump changed some cybersecurity rules made by Presidents Biden and Obama. The new rules affect software security, AI, and digital IDs. Some experts think the changes are good, while others disagree. They discussed how these changes could impact the industry and national security.

Is your boss scaring you about AI job loss?

Some CEOs are warning employees that AI could take many jobs. Experts say this can make workers anxious and less productive. While AI is changing the workplace, scaring employees isn't helpful. Instead, leaders should be honest about changes and offer training and support.

AI analyzes social media to understand public opinion

AI can analyze social media to understand how people feel about products and events. Companies use AI to track emotions on platforms like Twitter and Reddit. This helps them make quick decisions and fix problems fast. For example, game companies use AI to respond to player feedback and improve their games.
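As a rough sketch of how this kind of sentiment tracking can work, the example below scores a handful of posts with an off-the-shelf classifier and tallies the overall mood. It assumes the Hugging Face transformers library; the sample posts and the tallying step are illustrative and not tied to any product mentioned above.

```python
# Minimal sentiment-tracking sketch, assuming the Hugging Face
# `transformers` library is installed. The posts are made-up examples.
from collections import Counter

from transformers import pipeline

# Load a general-purpose sentiment classifier (downloads a default model).
classifier = pipeline("sentiment-analysis")

posts = [
    "The new update fixed the lag, great job!",
    "Servers have been down all day, this is frustrating.",
    "Not sure how I feel about the new map yet.",
]

# Classify each post, then tally the labels for a quick read on overall mood.
results = classifier(posts)
tally = Counter(result["label"] for result in results)

for post, result in zip(posts, results):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {post}")

print("Overall mood:", dict(tally))
```

A real monitoring setup would pull posts from platform APIs and aggregate scores over time, but the classify-and-tally loop above is the core idea.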

Expert reveals jobs safe from AI

An expert says young people should train for jobs that AI can't easily replace. These include traditional trades such as plumbing and electrical work, which require hands-on manual skill. Jobs needing creativity and good judgment are also considered safer. Meanwhile, people with AI skills in finance and IT tend to earn more.
