Salesforce, Hugging Face, and AI Development

Artificial intelligence continues to drive major investment and transformation across sectors, while also presenting considerable challenges in monetization, workforce impact, and security. Hitachi Energy, a subsidiary of Japan's Hitachi Ltd, is committing $1 billion to bolster the U.S. power grid, with nearly half of that sum, $457 million, allocated to a new facility in South Boston, Virginia. The facility will produce large power transformers, making Hitachi Energy the largest U.S. producer, and directly addresses surging electricity demand from new AI data centers; construction begins this year, with operations expected by 2028. Meanwhile, Hyperscale Data's subsidiary, Ault Markets, is developing StableShare, an AI-powered software-as-a-service platform for tokenized securities, slated for a 2026 launch and intended to improve efficiency and compliance for financial institutions. BioLab Holdings, Inc., a medical manufacturer, is also making a strategic investment in cureVision, a German health tech startup, to bring its AI-powered wound analysis and diagnosis technology to the U.S. market. The technology uses optical sensors and 3D imaging for fast, contact-free wound assessments.

Despite these advancements, the industry faces hurdles. Salesforce CEO Marc Benioff recently spoke of an "AI crisis," noting the difficulty many companies, including Salesforce, encounter when trying to monetize their substantial AI investments. That sentiment is echoed by Limited Partners, who are struggling with AI and data risks driven by misaligned incentives. The human impact of AI adoption is also under scrutiny: Kathryn, a Commonwealth Bank employee of 25 years, was made redundant after an AI chatbot named Bumblebee took over her role, prompting unions to call for stronger AI regulations. In response to the evolving landscape, the Electronic Security Association (ESA) has formed an AI Readiness Council (ARC) to guide the electronic security and life safety industry in the ethical and profitable use of AI.

On the security front, Palo Alto Networks researchers uncovered a new AI supply chain attack called 'Model Namespace Reuse,' in which attackers register the names of deleted or transferred AI models on platforms like Hugging Face and use them to deploy malicious code; the researchers demonstrated the risk against Google's Vertex AI and Microsoft's Azure AI Foundry. Google has since implemented daily scans for orphaned models, but experts advise pinning models to specific versions for better security. To address the broader regulatory challenges, a scholar suggests applying product liability principles to AI, drawing lessons from the FDA's experience with high-risk AI medical devices, which shifted toward post-market monitoring to hold manufacturers accountable for risks that emerge after deployment.

Key Takeaways

  • Hitachi Energy is investing $1 billion in the U.S. power grid, including $457 million for a new large power transformer facility in South Boston, Virginia, to meet the rising demand from AI data centers.
  • Ault Markets, a subsidiary of Hyperscale Data, is developing StableShare, an AI-powered platform for tokenized securities, with a planned launch in 2026.
  • Salesforce CEO Marc Benioff highlighted an "AI crisis" within the industry, indicating challenges in monetizing AI investments despite its transformative potential.
  • Palo Alto Networks discovered a new AI supply chain attack, "Model Namespace Reuse," which targets platforms like Hugging Face; researchers demonstrated the risk against Google's Vertex AI and Microsoft's Azure AI Foundry.
  • Google has initiated daily scans for orphaned AI models to mitigate the "Model Namespace Reuse" attack, while experts recommend pinning models to specific versions for enhanced security.
  • Kathryn, a Commonwealth Bank employee of 25 years, was made redundant after an AI chatbot, Bumblebee, took over her job, prompting calls for stronger AI regulations to protect workers.
  • The Electronic Security Association (ESA) has established an AI Readiness Council (ARC) to guide the electronic security and life safety industry in the ethical and profitable adoption of AI.
  • BioLab Holdings, Inc. is making a strategic investment in cureVision, a German health tech startup, to bring its AI-powered wound analysis and diagnosis technology to the U.S. market.
  • Limited Partners are facing significant AI and data risks, primarily due to misaligned incentives within the investment community.
  • A scholar proposes using product liability principles for AI regulation, drawing on the FDA's experience with post-market monitoring for high-risk AI medical devices.

Ault Markets develops AI platform StableShare for tokenized securities

Hyperscale Data, Inc. announced on Thursday that its subsidiary, Ault Markets, is developing StableShare, an AI-powered software-as-a-service platform for tokenized securities. The platform will let financial participants such as broker-dealers and family offices issue and manage "stable shares," tokenized instruments backed by existing securities. StableShare combines blockchain and AI to streamline issuance, automate compliance, and provide real-time data and transparency. Milton "Todd" Ault III, Executive Chairman of Hyperscale Data, said the platform will empower financial institutions and redefine how securities are issued, managed, and traded. Ault Markets is working with broker-dealers and plans to launch StableShare in 2026.

Hitachi invests $1 billion in US power grid for AI demand

Hitachi Energy, a subsidiary of Japan's Hitachi Ltd, plans to invest $1 billion to boost its power grid manufacturing in the United States. The investment comes as the country faces a surge in electricity demand from new AI data centers. Nearly half of the total, $457 million, will fund a new facility in South Boston, Virginia, to produce large power transformers, making Hitachi Energy the largest U.S. producer. Construction starts this year, with operations beginning by 2028. Andreas Schierenbeck, CEO of Hitachi Energy, emphasized that the move is crucial for strengthening the domestic supply chain and reducing bottlenecks.

Trump agenda sparks Hitachi $1 billion US energy investment

Hitachi Energy is investing $1 billion in U.S. electrical grid infrastructure, including a new large power transformer facility in Virginia. This investment will create thousands of jobs and help power the artificial intelligence revolution. The White House credits President Donald J. Trump's energy dominance agenda and efforts to make the U.S. a global AI powerhouse as the catalyst. This follows President Trump's earlier announcement of commitments from leading energy and technology companies to build AI and energy infrastructure in Pennsylvania. The administration aims to fortify supply chains and meet growing energy demand.

Salesforce CEO highlights AI monetization challenges

Salesforce CEO Marc Benioff discussed the company's artificial intelligence business during a recent earnings call, calling it the "biggest transformation" in company history. Despite his excitement, Benioff's comments unintentionally suggested an "AI crisis" in the industry. Salesforce, like many other software companies, is finding it hard to make money from its AI investments. An unnamed analyst described the situation as a "gold rush mentality" where companies are rushing into AI without clear long-term profit plans. Investors are closely watching how Salesforce will turn its AI efforts into real revenue growth.

Security Association creates AI Readiness Council

The Electronic Security Association, or ESA, has formed a new AI Readiness Council, called ARC. This council aims to help the electronic security and life safety industry use artificial intelligence in a smart and profitable way. Priya Serai from Zeus Fire and Security will lead the council, with Matt Carlson from Doyle Security as vice chairman. AI is changing how security systems are designed and how companies run their businesses, but it also brings challenges like ethics and compliance. The ARC will focus on real-world AI uses, ethical guidelines, best practices, and training to help members confidently adopt AI.

Scholar suggests product liability for AI regulation

A scholar proposes using product liability principles to improve how artificial intelligence is regulated. With AI evolving quickly, some systems can cause harm even without bad intentions, like a mental health chatbot giving diet advice. The scholar suggests learning from the FDA's experience with high-risk AI medical devices, which shifted from pre-market approval to monitoring products after they are released. Product liability law can hold manufacturers accountable for risks that appear after AI systems are in use. This approach allows regulators to adjust strategies as failures happen, rather than trying to predict every possible AI risk beforehand.

AI chatbot replaces long-time Commonwealth Bank employee

Kathryn, a Commonwealth Bank employee of 25 years, was made redundant after training an AI chatbot named Bumblebee that eventually took her job. The 63-year-old single mother was among 45 customer service workers let go in late July as the AI became more advanced. Kathryn felt devastated, having dedicated her career to the bank and worrying about supporting her son. After unions protested, the bank offered affected staff the choice to stay, redeploy, or take voluntary redundancy. Kathryn ultimately accepted redundancy, feeling uncertain about future job security and finding new roles unsuitable. Unions are now calling for stronger AI regulations to protect workers from similar situations.

BioLab invests in AI wound care technology cureVision

BioLab Holdings, Inc., a medical manufacturer specializing in wound care, announced a strategic investment and partnership with cureVision. cureVision is a German health tech startup that uses optical sensors, 3D imaging, and artificial intelligence to revolutionize wound analysis and diagnosis. Their technology allows clinicians to perform fast, contact-free wound assessments in under two minutes, streamlining measurement and documentation. BioLab will help cureVision enter the U.S. market by supporting its regulatory process, reimbursement strategy, and commercialization through its national distribution network. Both companies aim to improve patient outcomes and wound care efficiency.

Limited Partners face AI data risks with misaligned incentives

Limited Partners are struggling with the risks associated with artificial intelligence and data. Their main challenge is that incentives for managing these risks are not properly aligned, highlighting a growing concern within the investment community about the safe and effective use of AI technologies.

New AI supply chain attack targets Google and Microsoft

Researchers at Palo Alto Networks have uncovered a new AI supply chain attack method called 'Model Namespace Reuse'. This attack involves hackers registering names of deleted or transferred AI models on platforms like Hugging Face. They can then deploy malicious AI models and run their own code on affected systems. Palo Alto Networks demonstrated this risk against Google's Vertex AI and Microsoft's Azure AI Foundry, and found thousands of vulnerable open-source projects. Google has started daily scans for orphaned models, but experts warn that trusting models based only on their names is not safe. Companies should pin models to specific versions and store them in trusted locations to prevent these attacks.
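
As an illustration of that advice, the sketch below pins a Hugging Face model to an exact commit and mirrors it into a local, trusted directory before loading it. This is a minimal example using the huggingface_hub and transformers libraries, which the researchers do not prescribe; the model id shown is just a well-known public repository standing in for whatever model an organization actually depends on.

    # Minimal sketch: pin a Hugging Face model to an exact commit and load it
    # from a trusted local copy, rather than trusting the repo name alone.
    from huggingface_hub import HfApi, snapshot_download
    from transformers import AutoModel, AutoTokenizer

    REPO_ID = "distilbert-base-uncased"  # example public model, not a recommendation

    # Resolve the repository's current commit once and record it (e.g., in config
    # or version control) so every later download uses the same immutable snapshot.
    pinned_sha = HfApi().model_info(REPO_ID).sha

    # Download exactly that commit into a location the organization controls.
    local_path = snapshot_download(
        repo_id=REPO_ID,
        revision=pinned_sha,
        local_dir="./trusted-models/distilbert-base-uncased",
    )

    # Production code then loads from the trusted local copy, so a deleted or
    # re-registered namespace on the hub can no longer swap in malicious weights.
    tokenizer = AutoTokenizer.from_pretrained(local_path)
    model = AutoModel.from_pretrained(local_path)

Recording the commit hash alongside application code, rather than re-resolving the latest version on every deployment, is what actually closes the window that a reused namespace could exploit.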
