Artificial intelligence continues to permeate sectors from education and law enforcement to healthcare and personal productivity. In education, institutions like the University of Nebraska at Omaha are embracing AI as an opportunity, launching AI degree programs and learning labs to prepare students for a workforce that increasingly relies on AI tools. Experts advocate teaching critical AI use rather than imposing outright bans, suggesting a shift in assessment methods toward the learning process and uniquely human skills like critical thinking and empathy.
Meanwhile, law enforcement agencies such as the Anchorage Police Department are adopting AI software to analyze vast amounts of investigative data, with a five-year contract approved for a tool that can process thousands of hours of recordings. In healthcare, Medicare is set to pilot an AI algorithm in January 2025 that will test denying care to certain patients, a move aimed at curbing wasteful services but one that critics warn could cause harm and delays.
On the personal productivity front, AI tools are boosting efficiency in tasks such as voice dictation for messages, note-taking during meetings, and research summarization. Yet human interaction remains important: career coaches offer personalized guidance, emotional support, and accountability that AI cannot replicate. AI development is also facing increased scrutiny over data transparency, with California mandating that developers of generative AI models disclose their training data starting January 1, 2025. Research security is another growing concern, as China leads in AI research output, prompting calls for the U.S. to better safeguard its scientific innovation.
On a darker note, AI is also being exploited for malicious purposes: a man faces federal charges for allegedly creating and threatening to release AI-generated pornographic images of an influencer as part of an extortion scheme. In parallel, companies like Infineon and Thistle are working to bolster security for AI applications at the edge, using hardware security controllers to protect AI models and sensitive data.
Key Takeaways
- Universities are increasingly viewing AI as an opportunity, with programs like the University of Nebraska at Omaha's AI degree and learning lab aiming to prepare students for an AI-integrated workforce.
- Experts recommend teaching critical AI use and adapting assessment methods to focus on the learning process and human skills like critical thinking, rather than banning AI.
- The Anchorage Police Department is using AI software called Closure to analyze investigative data, including jail call recordings, under a $375,000 five-year contract.
- Medicare will pilot an AI algorithm in January 2025 to test denying care to certain patients, aiming to reduce wasteful services but sparking concerns about patient harm.
- Personal productivity is being enhanced by AI tools for tasks like voice dictation, meeting notes, and research summarization, though human career coaches remain essential for personalized guidance and emotional support.
- California's AB 2013 law, effective January 1, 2025, will require generative AI developers to publicly disclose their training data, including copyrighted or personal information.
- China leads in AI research output, filing significantly more patents than the U.S., which experts say highlights a need for improved research security in the U.S. to prevent intellectual property transfer.
- A man faces federal charges for allegedly using AI-generated pornography to extort an Instagram influencer, highlighting the malicious use of AI technology.
- Infineon and Thistle are collaborating to enhance security for edge AI applications using hardware security controllers to protect AI models and sensitive data.
Man charged with AI porn threats against influencer
An Oakland County man, Joshua Stilman, faces federal charges for allegedly creating and threatening to release AI-generated pornographic images of an Instagram influencer. He is accused of interstate extortion and cyberstalking. The victim reported receiving explicit messages and AI-generated images, along with threats of assault if she did not respond. Stilman allegedly used the screenname FriendBlender and linked to a Google Drive folder containing the images, which also revealed his name. He has been released on a $10,000 bond.
AI sextortion victim speaks out after arrest
Anne Blodgett, a content creator from Oregon, is speaking out after the arrest of Joshua Stilman, who is accused of extorting her using AI-generated pornography. Stilman allegedly sent Blodgett AI nude images of herself along with aggressive messages and threats. Blodgett stated she took the sexual assault threats very seriously and vowed to pursue justice. Stilman faces federal charges of cyberstalking and interstate threats to extort and has been released on bond.
University sees AI as opportunity, not threat
The University of Nebraska at Omaha views artificial intelligence as an opportunity for higher education, not a threat. Chancellor Joanne Li highlights that AI can provide personalized tutoring, help students succeed academically, and prepare them for a workforce increasingly using AI tools. The university has launched an AI Learning Lab and an AI degree program. Li emphasizes that AI can be an equalizer for students and that universities must teach responsible and ethical AI use, focusing on uniquely human skills like critical thinking and empathy to complement AI capabilities.
Universities should teach critical AI use
Experts argue that universities should teach students to use artificial intelligence critically rather than ban it. They suggest that AI can enhance learning and prepare students for a future workforce where AI is common. Instead of fearing AI, educators propose redesigning assessments to focus on the learning process, not just the final product. This approach can help students develop critical thinking and analytical skills by evaluating and refining AI-generated content, ensuring they are well-equipped for a technologically evolving world.
Medicare to test AI for denying patient care
The Trump administration will launch a pilot program in January 2025 to test an AI algorithm for denying care to Medicare patients, mirroring practices of private insurers. This program, called WISeR, aims to identify and reduce wasteful or low-value medical services in six states through 2031. While the administration claims safeguards will prevent denials of medically appropriate care, critics worry about potential delays and harm to patients. This move expands prior authorization, a process already criticized for slowing or blocking access to necessary treatments.
Anchorage Police use AI for investigations
The Anchorage Police Department has adopted artificial intelligence software called Closure to analyze investigative data, marking its first use of AI. Police Chief Sean Case stated the software can process large amounts of data, such as over 1,000 hours of jail call recordings, to find specific information like names or threats. The department previously tested AI for report writing. The Assembly approved a five-year contract for the software costing $375,000, with city officials and prosecutors reviewing it and finding no negative impact on case prosecution.
Human career coaches remain essential despite AI
Artificial intelligence can assist with tasks like resume writing and job searching, but human career coaches offer irreplaceable value. Coaches provide personalized guidance, help set realistic goals, develop networking strategies, and offer crucial emotional support and accountability. While AI focuses on efficiency and data, human coaches leverage empathy, intuition, and real-world experience to help clients navigate complex career decisions and personal growth. This human element fosters trust and drives transformational change, making coaches vital for long-term career success.
California mandates AI training data disclosure
Starting January 1, 2025, California will require developers of generative AI models to publicly disclose the data used to train their systems under a new law called AB 2013. This law mandates detailed information about data sources, availability, size, type, and inclusion of copyrighted or personal data. While intended to increase transparency and aid researchers, industry leaders express concerns that it could hinder development. This regulation, among the most comprehensive in the U.S., may influence other states to adopt similar disclosure requirements.
Infineon and Thistle boost secure AI with OPTIGA Trust M
Infineon Technologies and Thistle Technologies are collaborating to enhance security for AI applications at the edge. By integrating Infineon's OPTIGA Trust M hardware security controller into Thistle's Security Platform for Devices, they aim to protect AI models and sensitive data. This solution offers hardware-backed encryption, secured model provenance, and signed data with metadata. It allows device makers to deploy robust security foundations quickly, safeguarding intellectual property and ensuring the integrity of AI models and data in embedded systems.
AI tools for meetings and tasks
Yifan Zhang, managing director at AI2 Incubator, shares her favorite AI tools for boosting productivity. She uses voice dictation for emails and Slack messages, doubling her response speed. For meeting notes, Granola enhances personal notes without needing video recordings, aiding her frequent context-switching. Vy, Vercept's agent, automates tasks like prioritizing event waitlists. Zhang also utilizes ChatGPT Pro for research and summarizing new contacts, and LangChain with OpenAI to build internal tools for founders and experts.
China leads AI research, U.S. needs research security
China is now leading the world in artificial intelligence research output, filing nearly ten times more AI patents than the U.S. and surpassing combined output from the U.S., EU, and UK. This surge highlights a critical blind spot for America: research security, which involves safeguarding scientific innovation from adversaries. The U.S. needs to better track funding, affiliations, and data flows to prevent intellectual property transfer and foreign influence. Experts urge greater visibility into research networks and data provenance to maintain national security and AI leadership.
Sources
- Oakland Co. man accused of threatening Instagram influencer with AI porn
- Victim of AI sextortion responds after Commerce Twp. suspect arrested
- AI is an opportunity for higher education, not a threat
- Preparing students for a world shaped by artificial intelligence
- Private health insurers use AI to approve or deny care. Soon Medicare will, too.
- Anchorage police adopt AI to analyze investigative data
- AI Will Never Replace Human Career Coaches: Here’s Why
- California Law Will Require AI Developers to Disclose Training Data
- Infineon and Thistle target Secure Edge AI with OPTIGA™ Trust M
- AI Toolbox: AI2 Incubator’s Yifan Zhang on her favorite tools to capture meetings, automate tasks
- China’s AI surge exposes America’s blind spot: research security