Embodied Intelligence, Everyday Agents, and the Architecture of AI: What Day 3 of EmTechAI 2025 Revealed

After two days of provocations, paradigms, and platform shifts, Day 3 of MIT Technology Review’s EmTechAI 2025 turned its attention to something deeper—not just where AI is headed, but what it feels like to live and work with it. The conversations traced a new arc: one where AI systems become tactile, regulatory frameworks must be human-centered, and foundational models begin learning the laws of nature, not just the laws of language.

If Days 1 and 2 were about what AI is and where it’s going, Day 3 was about what AI will mean to the fabric of business, law, infrastructure, and science.


🧠 From FOMO to Fluency: Embedding AI Across the Workforce

Robert Blumofe of Akamai opened with an unflinching reality check: the AI hype cycle has become so pervasive that many organizations are implementing solutions not because they work—but because they feel like they should.

Blumofe cautioned against what he called "AI success theater," where performance is mistaken for competence. His antidote? AI literacy. Not just prompt engineering, but true vertical understanding of the full AI stack. Instead of asking "How can we use LLMs here?", teams should ask: "What's the problem, and what's the right model for it?"

At Akamai, this mindset has led to a flourishing internal AI sandbox, where employees can experiment safely with models across domains. And the lesson was clear: Don’t be an "LLM one-trick pony." True enterprise fluency means building AI muscle across problems, not just interfaces.


🏥 Change Management at Scale: AI at Takeda and Deloitte

In a compelling joint session, Gabriele Ricci (Chief Data & Tech Officer at Takeda) and Jim Rowan (Head of AI at Deloitte) gave the audience a rare peek into what organizational transformation actually looks like.

Takeda, one of the world’s largest pharmaceutical companies, launched an ambitious digital dexterity initiative that reached over 35,000 employees in nine months. This wasn't just a training platform—it was a mindset overhaul. Ricci described how Takeda's employees now rate and build their own GenAI agents, supported by a tiered learning model: from basic AI use to agent orchestration.

Meanwhile, Deloitte emphasized speed and iteration. Agents, said Rowan, should be thought of like interns: test them, supervise them, and give them increasing autonomy over time. But also expect to retire them. In this space, obsolescence is not a failure—it’s the natural outcome of technological progress.
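Rowan's intern analogy can be caricatured as a simple autonomy ladder: supervise early, grant autonomy as the track record improves, retire when obsolete. The states and thresholds below are invented for illustration, not Deloitte's actual framework.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the "agents as interns" lifecycle.
# All tiers and thresholds are hypothetical, not a real Deloitte framework.

@dataclass
class AgentRecord:
    name: str
    outcomes: list = field(default_factory=list)  # True = success, False = failure
    retired: bool = False

    def log(self, success: bool) -> None:
        self.outcomes.append(success)

    def success_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def autonomy_level(self) -> str:
        """Map a track record to a supervision tier."""
        if self.retired:
            return "retired"
        if len(self.outcomes) < 5:
            return "supervised"        # every action reviewed, like a new intern
        if self.success_rate() >= 0.9:
            return "autonomous"        # spot-checked only
        return "semi-autonomous"       # human approves risky actions

agent = AgentRecord("contract-summarizer")
for ok in [True, True, True, False, True, True]:
    agent.log(ok)
print(agent.autonomy_level())  # semi-autonomous (5/6 success, below the 0.9 bar)
```

The point of the sketch is the shape, not the numbers: autonomy is earned from evidence, and "retired" is a normal terminal state rather than a failure.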


⚖️ Negotiating with Machines: AI in the Legal World

Eleanor Lightbody, CEO of Luminance, brought a radically practical use case to the stage: AI models that don’t just help draft contracts, but negotiate them.

In Luminance’s system, both parties upload documents and let AI agents determine points of alignment, redlines, and compromise. If both sides are using Luminance, then yes—AI negotiates with AI. But critically, human legal teams retain oversight. The system flags areas of uncertainty, and the lawyer remains the final authority.
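The negotiation loop Lightbody describes can be reduced to a toy model: each side's agent states an acceptable range per negotiable term, overlapping ranges settle automatically, and terms with no overlap are escalated to the human legal teams. The clause model and resolution rule here are purely illustrative, not Luminance's method.

```python
# Toy sketch of AI-to-AI contract negotiation with human escalation.
# Each side states an acceptable range per negotiable term (e.g. payment days).
# Overlapping ranges settle automatically; gaps are flagged for lawyers.
# Purely illustrative; not Luminance's actual system.

def negotiate(side_a: dict, side_b: dict):
    agreed, escalate = {}, []
    for term in side_a.keys() & side_b.keys():
        lo = max(side_a[term][0], side_b[term][0])
        hi = min(side_a[term][1], side_b[term][1])
        if lo <= hi:
            agreed[term] = (lo + hi) / 2   # split the overlap as a compromise
        else:
            escalate.append(term)          # no overlap: human oversight required
    return agreed, sorted(escalate)

a = {"payment_days": (30, 60), "liability_cap_pct": (100, 150)}
b = {"payment_days": (45, 90), "liability_cap_pct": (200, 300)}
agreed, escalate = negotiate(a, b)
print(agreed)    # {'payment_days': 52.5}
print(escalate)  # ['liability_cap_pct']
```

Even in this caricature, the division of labor is the interesting part: machines resolve the mechanical overlaps, and the genuinely contested terms are exactly what reaches the lawyer.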

Lightbody emphasized that this isn’t about removing lawyers. It’s about removing their tedium. In fact, junior lawyers using the tool gain faster exposure to complex reasoning, accelerating their path to high-value legal work.

Beyond law, she hinted at a future where this kind of contract intelligence could power compliance, procurement, insurance, and regulation itself. In other words, the real AI transformation may not be visible in court. It might live in the footnotes.


🧬 Perspective-Aware AI: Hossein Rahnama's Future of Work

Hossein Rahnama of MIT Media Lab and Flybits offered one of the day’s most speculative (and moving) visions.

Rahnama isn’t trying to build agents that serve humans. He’s trying to build agents that represent them. His concept of a "chronicle" is a persistent multimodal data structure that reflects a person’s behavior, context, and preferences. These chronicle-based agents can collaborate, simulate decisions, and provide embodied AI support—from the boardroom to the hospital room.
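As a thought experiment, a "chronicle" might be modeled as an append-only log of multimodal observations that a representative agent can query for a person's preferences in a given context. The structure and field names below are invented for illustration, not Rahnama's or Flybits' actual design.

```python
from dataclasses import dataclass
from collections import Counter
from typing import Optional

# Hypothetical sketch of a "chronicle": a persistent, multimodal record of a
# person's behavior and context that a representative agent can query.
# Field names and structure are invented, not from Flybits or the MIT Media Lab.

@dataclass(frozen=True)
class Observation:
    modality: str   # "text", "location", "calendar", ...
    context: str    # situation in which it was recorded
    signal: str     # the observed behavior or stated preference

class Chronicle:
    def __init__(self) -> None:
        self._log: list = []  # append-only history

    def record(self, obs: Observation) -> None:
        self._log.append(obs)

    def preference(self, context: str) -> Optional[str]:
        """Most frequent signal seen in a given context, across modalities."""
        signals = [o.signal for o in self._log if o.context == context]
        return Counter(signals).most_common(1)[0][0] if signals else None

c = Chronicle()
c.record(Observation("calendar", "meetings", "mornings"))
c.record(Observation("text", "meetings", "mornings"))
c.record(Observation("calendar", "meetings", "no-fridays"))
print(c.preference("meetings"))  # mornings
```

The design choice worth noticing is that the log is never edited, only appended to: an agent that *represents* someone needs a stable history to be accountable to, not a mutable profile.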

His demo was as philosophical as it was technical: a system where multiple AI agents debate with one another, reflecting different organizational perspectives, letting the human user observe and synthesize insights.

"Empowerment," he said, "is the architecture." The most powerful agents will not be the most autonomous. They will be the most aligned.


🧾 The Language of Value: AutogenAI and Proposal Writing

Sean Williams, CEO of AutogenAI, delivered a poetic and cerebral exploration of what it means for machines to "do language."

Rather than measuring LLMs by how well they predict the next word, Williams invited us to ask: What does it mean to participate in a language game? Drawing from Wittgenstein, he argued that true language performance requires context, fluency, and intent—not just completion.

AutogenAI doesn’t just write RFP responses. It crafts persuasive documents with domain-specific goals. Their clients have seen revenue growth of 30% or more by using AI to iterate faster and write better. But the lesson wasn’t just about growth. It was about redefining how we measure the "intelligence" in artificial intelligence.


🚘 Generalizing Autonomy: Wayve and the Future of Embodied AI

Vijay Badrinarayanan, VP of AI at Wayve, made perhaps the strongest case for why self-driving cars are an AI problem—not a robotics one.

Unlike the high-definition map-heavy approach of yesterday’s autonomy players, Wayve trains large end-to-end neural networks using real-world driving data. Their thesis: generalization beats precision. And they proved it by dropping a UK-trained model into U.S. streets and watching it adapt within 500 hours.

This foundation model for embodied intelligence isn’t just about driving. It’s about movement, control, reasoning—across machines. Wayve’s future isn’t just robotaxis. It’s a universal driver. A new operating system for physical intelligence.


📦 Touch Meets Torque: Amazon Robotics Unveils Vulcan

Scott Dresser of Amazon Robotics introduced Vulcan, a robot that can do something deceptively hard: pick items from densely packed totes, a task vision alone can't handle.

By combining computer vision with tactile feedback and force sensors, Vulcan can reach, feel, and adjust like a careful human arm. It’s already deployed in Amazon fulfillment centers in Washington and Berlin, handling 75% of stocked items.
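The sense-and-adjust behavior can be sketched as a force-feedback grasp loop: close the gripper, read the force sensor, and tighten or ease off until the reading sits in a target band. The sensor model and thresholds below are invented for illustration; this is not Amazon Robotics code.

```python
# Illustrative force-feedback grasp loop in the spirit of Vulcan's
# vision-plus-touch picking. Sensor model and thresholds are invented.

TARGET = (2.0, 4.0)  # acceptable grip force band, in newtons (hypothetical)

def read_force(width: float) -> float:
    """Stand-in for a tactile sensor: force grows as the gripper closes."""
    return max(0.0, (50.0 - width) * 0.2)

def grasp(width: float = 50.0, step: float = 1.0, max_iters: int = 100):
    """Tighten until force enters the target band, easing off on overshoot."""
    for _ in range(max_iters):
        force = read_force(width)
        if TARGET[0] <= force <= TARGET[1]:
            return width, force          # stable grip: stop adjusting
        width += step if force > TARGET[1] else -step  # loosen or tighten
    raise RuntimeError("no stable grasp found")

width, force = grasp()
print(width, round(force, 1))  # 40.0 2.0
```

The loop never trusts a single reading: it converges on a grip by feel, which is the qualitative difference between a robot that sees and one that senses.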

But the deeper story is what Vulcan enables. Fewer ladders. Fewer ergonomic injuries. And robots that can sense, not just see.

As Dresser put it, this isn’t just about faster warehouses. It’s about general-purpose robotics built for adaptability, flexibility, and finesse. The kind you can retrofit, not just redesign.


🔬 AI for Science: Peter Lee on the Languages of Nature

Peter Lee, President of Microsoft Research, closed Day 3 with a dazzling exploration of what might be AI’s greatest frontier: the sciences themselves.

Lee likened today’s generative models to the invention of copper wire—essential infrastructure, but still awaiting their "lightbulb moment." At Microsoft, his teams are building AI emulators of physical systems—from molecular simulations to atmospheric dispersion.

One such system, MatterGen, trained on Fortran-based quantum simulations, can now model molecular interactions millions of times faster than traditional physics engines. Another, Aurora, uses AI to project pollution flows across urban grids with unprecedented resolution.

What happens when AI doesn’t just learn human language—but the language of proteins, electrons, and weather?

Lee offered a measured optimism: "Generative AI is our transistor. The machine that defines its purpose is still to come."


🌍 Final Reflections

Day 3 was less about generative hype and more about generative gravity. From robotics to jurisprudence, from AI literacy to molecular design, the future is arriving unevenly—but undeniably.

And across the day, a new meta-question emerged:

Not just "Can AI work?" or even "How do we work with AI?"

But rather: "How will our institutions, professions, and paradigms evolve in a world where agency itself can be modeled, delegated, and shared?"

In the words of Hossein Rahnama: "Empowerment is the architecture."

And that may be EmTechAI's most urgent message of all.
