How Physical AI Is Redefining Intelligence in Motion
For years, artificial intelligence meant something disembodied. It recommended books, wrote emails, transcribed meetings, and answered questions. It lived in the cloud and spoke through screens. But that’s changing. AI is getting a body.
What’s emerging is a new wave of technology known as Physical AI—or Embodied AI—where intelligence leaves the datacenter and enters the world of movement, manipulation, and physical consequence. These systems sense, decide, and act. They’re not just reactive tools but autonomous agents capable of shaping their surroundings. The change may sound technical, but it’s cultural, industrial, and increasingly geopolitical.
From Google DeepMind’s latest simulation model, Genie 3, to SiMa.ai’s power-efficient edge hardware and Skild AI’s robotic foundation brain, Physical AI is evolving into something more than a research trend. Countries like South Korea are investing heavily in it. Conferences like RoboBusiness 2025 are centering on it. And underneath it all is a deeper question: who decides what values are embedded in machines that now move and decide in the physical world?
Simulation and the mind’s rehearsal space
One of the clearest signals of this shift is Google DeepMind’s Genie 3, a “world model” that allows AI systems to train in high-fidelity, physics-aware virtual environments. With just a few lines of text, Genie can generate a simulated warehouse or ski slope, filled with obstacles, agents, and realistic interactions. These are not just visuals—they’re interactive spaces where robotic agents can practice moving, solving problems, and responding to real-world dynamics.
As Subramanian Ramamoorthy at the University of Edinburgh notes, flexible robot decision-making depends on the ability to anticipate outcomes. World models like Genie give AI that capacity by providing a place to fail, learn, and try again.
Simulation is the rehearsal. But it’s not value-neutral. What scenarios are built, what actions are prioritized, what constitutes safety or risk—all of these encode assumptions. Even in simulation, we begin to make choices about how intelligence should behave when embodied in physical systems.
SiMa.ai and the edge of autonomy
If Genie represents the mind’s eye, SiMa.ai is the body. The San Jose-based company recently raised $85 million to advance its Modalix platform, which enables AI models to run on-device with minimal power consumption. SiMa.ai’s systems don’t depend on the cloud. They can process multimodal workloads (vision, transformers, even generative models) locally and in real time.
This is what makes Physical AI truly operational. It’s one thing to design intelligence in a lab. It’s another to deploy it in a factory, a hospital, or a robot that needs to make immediate decisions in tight spaces. For CEO Krishna Rangasayee, the goal is to make intelligent action simple, efficient, and safe across industries ranging from robotics and aerospace to smart vision and healthcare.
But as these systems become more embedded in critical environments, the stakes of embedded decision-making grow. When should a robot yield to a human? How should it interpret urgency versus caution? What’s considered a successful outcome—and who defines it?
These are design choices. And by extension, value choices.
A general brain for a general body
Skild AI takes things a step further. Where SiMa.ai builds the deployment muscle, Skild AI aims to be the general-purpose brain. With over $435 million in funding, the company is developing a foundation model for robotics—an architecture that can adapt across different robots, tasks, and settings.
CEO Deepak Pathak highlights the paradox at the heart of robotics: the things humans find easy, like climbing stairs or turning a knob, are often hardest for robots. Many models today excel at choreographed stunts or tightly scripted demos. Skild wants to solve the ordinary.
Their “Skild Brain” is trained on large-scale simulations and human video footage, allowing it to reason about both motion and interaction. The goal is true generalization: intelligence that transfers across hardware and environments, safely and robustly. Investors like Lightspeed’s Raviraj Jain have praised the company’s ability to deliver results “in the wild,” where robots must navigate real-world unpredictability and human presence.
In essence, Skild is trying to do for robotics what OpenAI did for language—create a single adaptable foundation. But as with language models, the question remains: what values come baked into the foundation? Who decides what the robot should do when tradeoffs emerge? When safety collides with speed? When caution conflicts with productivity?
National priorities, civilizational stakes
These questions aren’t only technological. They’re national and cultural. In late July, South Korea’s Ministry of Science and ICT convened an industry-academic summit focused entirely on Physical AI. Vice Minister Ryu Je-myung described the current moment as one of urgency—not a future problem, but a present demand.
South Korea sees Physical AI as a means of revitalizing its manufacturing sector, long a national strength. Officials and researchers discussed how to turn entire factories into intelligent systems. But concerns also emerged: data collection is expensive, software and hardware remain fragmented, and the country risks falling behind China, Germany, and the United States.
There’s also a deeper worry. As Yoo Tae-jun of MAUM.AI warned, data sovereignty in Physical AI is not optional—it’s existential. If Korean robots are trained on American, Chinese, or European datasets, they may begin to reflect those external values. What does it mean, he asked, for a robot to be “domestically aligned”?
That’s not a theoretical question. It’s a policy one.
The global contest over values in AI
As AI systems, especially embodied ones, become central to infrastructure, labor, and communication, the debate over values becomes unavoidable.
In the United States, AI is framed as an extension of liberal-democratic norms. The National Security Commission on AI, for instance, has positioned American AI leadership as vital to preserving free speech, human rights, and pluralism. Innovation is linked to national identity.
Europe, while sharing these democratic ideals, is more skeptical of U.S. dominance. Through the AI Act, the EU seeks to codify its own priorities: dignity, transparency, and the precautionary principle. Brussels argues that American AI models are shaped by Silicon Valley’s commercialism and culture wars. European AI, it contends, should reflect European values, with stricter safeguards and deeper alignment to social trust.
In China, the framing is completely different. There, AI is a tool of civilizational continuity. The Chinese Communist Party requires that AI systems reinforce “socialist core values,” with strict censorship over politically sensitive content. From the government’s perspective, AI is not just a tool; it’s a pillar of national sovereignty and cultural protection.
Russia takes a similar stance. Its AI ecosystem is tightly bound to its “sovereign internet” policies, where domestic data storage, algorithmic control, and censorship are positioned as patriotic duties. AI is framed not as a product of freedom, but as a defense mechanism against foreign ideological subversion.
In all these cases, the values embedded in AI systems are not accidental. They are deliberate, enforced, and increasingly divergent. And as Physical AI enters factories, streets, and homes, those values take on form. They move, they respond, they act.
RoboBusiness and the industrial pivot
This coming October, RoboBusiness 2025 in Santa Clara will bring together many of the key players shaping this future. NVIDIA’s Deepu Talla will keynote on the foundations of embodied intelligence. UC Berkeley’s Ken Goldberg will discuss sim-to-real reinforcement learning. Dexterity, ABB Robotics, SRI, and Diligent Robotics will all present on how AI is reshaping manufacturing, dexterity, and human-machine interaction.
What’s striking is how Physical AI has moved from the margins to the center of conversation. It’s no longer a niche within robotics. It’s becoming the defining lens through which we understand the future of automation.
Where do we go from here?
The story of AI is no longer confined to chatbots or data pipelines. It is unfolding in warehouses, hospitals, retail spaces, and national labs. It is taking shape not just through code, but through motion. And the systems that result (robots, drones, autonomous vehicles) will act according to the values we embed in them today.
Some will favor liberty. Others will favor order. Some will operate with caution, others with speed. These are not just engineering decisions. They are cultural, political, and philosophical ones.
Physical AI is changing how we think about intelligence. But it’s also forcing us to ask: what kind of intelligence do we want? And whose version of “intelligence” will define the machines that move through our world?
That may end up being the most important question of all.