Podcast
Monday, 30 March 2026
Duration: 5:16
Listen to the Czech audio edition and follow the transcript below.
Transcript
0:00
Welcome to AIskimIQ, your daily AI briefing for March 30th, 2026. Today we're diving into major developments across large language models, AI agents, safety concerns, and some surprising pivots in AI products—including OpenAI's unexpected shutdown of Sora.
0:14
Starting with large language models, we've got updates on Anthropic's new Claude Mythos and emerging solutions to a persistent problem: AI hallucinations.
0:23
Anthropic's upcoming Claude Mythos model is reportedly outperforming all previous versions, promising significant improvements across the board, though the company is carefully detailing both its capabilities and potential risks.
0:35
Researchers have developed a new metric for measuring uncertainty in LLMs that could help flag hallucinations and give users better insight into when they can trust an AI's output.
0:46
An NIH postgraduate researcher argues that AI hallucinations may be fundamentally unavoidable in large language models, raising important questions about the limits of current technology.
0:56
In AI agents and automation, we're seeing a race heat up between OpenAI and Anthropic, along with important security questions about containment.
1:05
OpenAI and Anthropic are ramping up next-generation models with automation features, competing to develop AI agents capable of handling complex tasks independently.
1:14
Security researchers are examining whether AI agents deployed in sandboxed containers can escape their boundaries—a critical concern as these systems gain more autonomy.
1:23
New frameworks for building advanced cybersecurity AI agents are emerging, incorporating tools, guardrails, handoffs, and multi-agent workflows to improve threat detection and response.
1:32
On the safety and alignment front, we have a sobering warning from a former OpenAI researcher and growing concerns about AI's real-world harms.
1:42
Daniel Kokotajlo, a former OpenAI researcher turned whistleblower, has warned that AI could pose an existential threat to humanity within the next five years.
1:51
Meta faces mounting legal troubles with two recent court losses centered on allegations that the company knew about its products' potential harms—with implications for AI research oversight.
2:02
AI-powered anti-money laundering systems are becoming more efficient but may actually mask financial crime risks through undetected false negatives and over-reliance on automation.
2:11
Looking at AI tools and products, Microsoft is pulling back on Copilot, while Eli Lilly is betting $2.75 billion on an AI drug development partnership.
2:21
Following public backlash, Microsoft is scaling back Copilot integration in Windows 11 and reconsidering its controversial Recall feature.
2:28
Pharmaceutical giant Eli Lilly is investing up to $2.75 billion with Hong Kong-based Insilico Medicine to accelerate AI-driven drug discovery and development.
2:37
A new ophthalmic AI co-pilot is in development specifically trained on data from the Chinese population to assist with diagnosis, treatment planning, and disease monitoring.
2:47
In image and video generation, OpenAI has shuttered Sora, Disney's billion-dollar investment is off the table, and regulators are cracking down on AI nudification tools.
2:57
OpenAI has shut down its Sora video-generation app, ending a major Hollywood initiative and a planned $1 billion investment from Disney.
3:05
EU regulators are proposing legislation to ban AI nudification apps that generate unauthorized sexually explicit content, targeting platforms like X's Grok.
3:14
Sora's shutdown signals a strategic shift in how AI video will be used in marketing and advertising, with industry leaders reassessing the technology's trajectory and societal concerns.
3:24
In robotics and embodied AI, there's serious investor interest and a notable acquisition by Amazon.
3:30
Physical Intelligence is raising new funding that would value the company at over $11 billion, reflecting growing investor confidence in robotics breakthroughs.
3:39
Amazon has acquired Fauna Robotics, the startup behind the Sprout humanoid robot—a friendly, capable machine designed for homes and workplaces.
3:47
On the research side, quantum machine learning is advancing, but classical AI still leads in practical applications like cybersecurity.
3:55
Scientists have developed tighter performance guarantees for quantum machine learning, a step forward for this emerging field.
4:02
A new AI tool is streamlining drug synthesis by automating molecular design—a breakthrough that could accelerate pharmaceutical development.
4:09
New research shows that traditional machine learning still outperforms quantum approaches for phishing detection, suggesting quantum AI has further to go before replacing established methods.
4:19
On funding and infrastructure, startup valuations are climbing and semiconductor partnerships are reshaping the AI supply chain.
4:26
Aetherflux, led by Robinhood co-founder Baiju Bhatt, is raising new financing at a $2 billion valuation to build solar-powered satellites for AI computing.
4:35
Data centers are making the transition from AC to DC power systems as AI chips become faster and more power-intensive, requiring fresh infrastructure approaches.
4:44
The US House passed the Chip Security Act, restricting high-performance chip exports to China and signaling a major shift in global AI hardware competition.
4:54
AMD and Samsung have signed a new partnership to supply next-generation HBM4 memory for AMD's Instinct AI GPUs, adding another piece to a rapidly reshaping AI supply chain.
5:05
That's your AIskimIQ briefing for March 30th, 2026—a day marked by big pivots, safety warnings, and a robotics boom. Catch us tomorrow for the latest in AI news.