Liquid Foundation Models: Why Edge-Native AI Could Transform Robotics
For most of the generative AI boom, intelligence has lived in the cloud. Whether it is OpenAI’s GPT series or Google’s Gemini, the underlying assumption has been that power comes from scale. Models trained on trillions of tokens, hosted in massive data centers, and accessed through APIs have become the norm. Yet for many real-world systems—from smartphones and laptops to cars, satellites, and robots—this cloud-first approach has been a bottleneck. Latency, privacy, and reliability issues make it unsuitable for devices that must reason and act in real time.
This is where Liquid Foundation Models (LFMs) come into play. Liquid AI, a company spun out of work at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), has reimagined what a foundation model should be when designed for the edge first. The release of its second-generation system, LFM2, marks a pivotal shift away from the notion that AI must always be hosted in distant clouds.
Daniela Rus, Liquid AI’s co-founder and director of CSAIL, described LFMs at the 2025 Robotics Summit & Expo as “an essential building block for the next generation of intelligent machines.” Her words are not hyperbole. If LFM2 delivers on its promise, it could fundamentally change how robots, autonomous systems, and even consumer devices think, learn, and interact with us.
What Makes LFMs Different?
Unlike traditional transformer-based models, LFMs are designed from the ground up for efficiency, speed, and adaptability. The term “liquid” is more than marketing. It refers to a hybrid neural architecture inspired by dynamical systems theory, which Liquid AI has been refining for several years. LFM2 combines short-range convolutional blocks with multiplicative gates and grouped query attention layers. This approach allows information to “flow” through the network more effectively while using fewer parameters than transformer models of comparable performance.
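To make the idea concrete, below is a minimal PyTorch sketch of what a hybrid block of this kind could look like: a gated short-range depthwise convolution followed by grouped-query attention. It is an illustrative reconstruction of the published description, not Liquid AI's actual implementation; the class names and layer sizes are invented for this example.

```python
# Illustrative sketch of a hybrid block: gated short convolution plus
# grouped-query attention. Class names and sizes are hypothetical;
# this is not Liquid AI's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedShortConvBlock(nn.Module):
    """Short-range causal depthwise convolution with a multiplicative gate."""
    def __init__(self, dim: int, kernel_size: int = 4):
        super().__init__()
        self.in_proj = nn.Linear(dim, 2 * dim)          # value and gate paths
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size - 1,  # left-pad: causal
                              groups=dim)               # depthwise: cheap on CPUs
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        v, g = self.in_proj(x).chunk(2, dim=-1)
        v = self.conv(v.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return self.out_proj(v * torch.sigmoid(g))        # multiplicative gate

class GroupedQueryAttention(nn.Module):
    """Attention with fewer key/value heads than query heads (GQA)."""
    def __init__(self, dim: int, n_heads: int = 8, n_kv_heads: int = 2):
        super().__init__()
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = dim // n_heads
        self.q_proj = nn.Linear(dim, dim)
        self.kv_proj = nn.Linear(dim, 2 * n_kv_heads * self.head_dim)
        self.o_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, s, _ = x.shape
        rep = self.n_heads // self.n_kv_heads             # queries per kv head
        q = self.q_proj(x).view(b, s, self.n_heads, self.head_dim).transpose(1, 2)
        k, v = self.kv_proj(x).view(b, s, 2, self.n_kv_heads, self.head_dim).unbind(2)
        k = k.transpose(1, 2).repeat_interleave(rep, dim=1)
        v = v.transpose(1, 2).repeat_interleave(rep, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(b, s, -1))

x = torch.randn(1, 16, 256)                               # (batch, seq, dim)
y = GroupedQueryAttention(256)(GatedShortConvBlock(256)(x))
print(y.shape)                                            # torch.Size([1, 16, 256])
```

The depthwise convolution is what makes such a block CPU-friendly: it mixes information only across a short local window, so its cost grows linearly with sequence length, while the occasional grouped-query attention layer handles long-range dependencies with far fewer key/value heads to cache.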
The result is striking. The largest LFM2 model, with just 1.2 billion parameters, performs competitively with Qwen3, even though Qwen3’s 1.7 billion parameters make it roughly 47 percent larger. The smaller 700 million- and 350 million-parameter models also hold their own against competitors such as Gemma 3 and Llama 3.2. LFM2’s hybrid design delivers up to twice the decode and prefill speed on CPUs compared to Qwen3, and it trains three times faster than Liquid AI’s first-generation LFM.
This is not just an optimization exercise. It represents a philosophical shift. LFMs are not miniaturized versions of cloud models. They are purpose-built for low-latency, on-device workloads, which makes them particularly suited to real-time applications. The models have been trained on 10 trillion tokens, with a mix of English, multilingual, and code data, and refined using knowledge distillation from Liquid AI’s larger LFM1-7B teacher model. Techniques like Direct Preference Optimization ensure that smaller models produce high-quality outputs by ranking and optimizing their responses against both synthetic and real-world data.
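For readers unfamiliar with it, Direct Preference Optimization trains a model directly on pairs of preferred and rejected responses rather than fitting a separate reward model. The sketch below shows the standard published DPO loss (Rafailov et al., 2023); it illustrates the general technique, not Liquid AI's exact training code.

```python
# Minimal sketch of the standard DPO loss (Rafailov et al., 2023),
# shown to illustrate the technique; not Liquid AI's training code.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Inputs are summed log-probabilities of whole responses under the
    policy being trained and under a frozen reference model."""
    # How much more the policy prefers each response than the reference does.
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    # Widen the gap between chosen and rejected responses, scaled by beta.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```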
What is perhaps most significant is that these models are hardware-agnostic and run efficiently on CPUs, GPUs, and NPUs. This allows them to be deployed not just on high-end servers but also on devices such as smartphones, laptops, cars, drones, and robots.
Why Edge-First AI Matters
Running foundation models on the edge offers four key benefits. The most obvious is lower latency. When a robot or autonomous vehicle must send data to a cloud server, wait for a response, and then act, even a delay of a few hundred milliseconds can lead to errors or safety issues. On-device LFMs collapse that round trip into near-instant local reasoning, which is essential for robots navigating crowded environments or humanoids working alongside humans on an assembly line.
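A rough, back-of-the-envelope budget illustrates the stakes; every number below is a hypothetical round figure chosen for the example, not a measurement of any real system.

```python
# Hypothetical round numbers for illustration only, not measurements.
network_rtt_ms = 150          # mobile round trip to a cloud region
cloud_inference_ms = 100      # server-side generation time
cloud_total_ms = network_rtt_ms + cloud_inference_ms    # 250 ms per decision

robot_speed_m_s = 1.5         # mobile robot at a brisk walking pace
blind_distance_m = robot_speed_m_s * cloud_total_ms / 1000
print(f"Cloud loop: {cloud_total_ms} ms -> {blind_distance_m:.2f} m traveled blind")
```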
The second advantage is privacy. With models running directly on the device, sensitive information never leaves the hardware. For robots working in healthcare, logistics, or smart homes, this is a game-changer. In a world where regulators are increasingly scrutinizing data flows, an edge-first approach could simplify compliance and make AI adoption more acceptable to users and governments alike.
The third benefit is resilience. Robots deployed in disaster zones, rural areas, or industrial settings cannot always rely on stable connectivity. A drone inspecting remote pipelines or a field robot harvesting crops cannot afford to “go dark” every time the network drops. Offline operation is essential for these scenarios, and LFMs are designed with this reality in mind.
Finally, there is the question of cost. Cloud inference is expensive. Companies deploying fleets of AI-enabled robots or vehicles face massive recurring fees when relying on remote servers. By running locally, LFMs not only reduce operational costs but also enable startups and smaller companies to compete without building or renting massive cloud infrastructure.
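A toy calculation shows how quickly those fees compound across a fleet; the prices and volumes below are hypothetical placeholders, not real quotes.

```python
# Hypothetical fleet and pricing figures, chosen only for illustration.
fleet_size = 500
queries_per_robot_per_day = 2_000
cloud_price_per_1k_queries_usd = 0.50     # assumed API price

daily_usd = fleet_size * queries_per_robot_per_day / 1_000 * cloud_price_per_1k_queries_usd
print(f"Cloud inference: ${daily_usd:,.0f}/day, ${daily_usd * 365:,.0f}/year")
# On-device inference replaces this recurring fee with a one-time hardware cost.
```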
What LFMs Mean for Robotics
The impact on robotics could be profound. Robots have always struggled with the trade-off between local processing and remote intelligence. Cloud-connected robots can tap vast computational resources but are vulnerable to latency and connectivity issues. Local-only robots, on the other hand, have traditionally been limited by their onboard compute capacity. LFMs bridge this gap by packing significant reasoning capabilities into lightweight, edge-friendly models.
Service robots and mobile platforms, for instance, could use LFMs for real-time sensor fusion and language-conditioned control. Imagine a hospital delivery robot that can not only navigate autonomously but also understand verbal instructions from staff, adjust its route on the fly, and do all of this without ever connecting to a remote server.
Humanoid and collaborative robots stand to benefit even more. One of the biggest challenges in human-robot interaction is natural, real-time conversation and task adaptation. A humanoid working alongside a factory worker, for example, must understand nuanced instructions, adapt to changes in workflow, and respond conversationally—all within milliseconds. LFMs, which can run entirely on-device, bring this kind of capability closer to reality.
Autonomous vehicles and drones also gain from LFM2’s architecture. While Tesla and other companies rely heavily on cloud-based updates and reasoning, there is a growing recognition that core decision-making must happen locally for safety reasons. A delivery drone or an autonomous car navigating a tunnel cannot depend on a cloud signal to decide whether to brake or turn.
In industrial settings, LFMs open new possibilities for adaptive robotics. An assembly-line robot with a model like LFM2 could understand natural language commands, log anomalies in plain English, and replan its actions based on sensor feedback—all without needing to connect back to a central server. Field robots, from agricultural harvesters to disaster response units, may benefit the most. Working in environments with unreliable connectivity, they must make quick, context-aware decisions independently. Running a foundation model locally allows them to adapt to new conditions without waiting for instructions from a control center.
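Schematically, that kind of adaptive loop might look like the sketch below. The sensor, actuator, and model calls (`read_sensors`, `execute`, `local_lm`, `log`) are hypothetical stand-ins, not a real robotics or Liquid AI API; the point is that every step, including the anomaly report and the replan, stays on the robot.

```python
# Schematic perceive-reason-act loop for an adaptive industrial robot.
# read_sensors(), execute(), local_lm(), and log() are hypothetical
# stand-ins, not a real robotics or Liquid AI API.
def control_loop(local_lm, read_sensors, execute, log):
    plan = ["pick part", "inspect part", "place part"]
    while plan:
        state = read_sensors()
        if state.get("anomaly"):
            # The on-device model describes the anomaly in plain English
            # and proposes a revised plan; no network round trip needed.
            report = local_lm(f"Briefly describe this anomaly: {state}")
            log(report)
            plan = local_lm(f"Revise the plan {plan} given: {report}").splitlines()
            continue
        execute(plan.pop(0))
```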
Beyond Robotics: A Shift in AI Economics
The significance of LFMs extends beyond robotics. If models like LFM2 succeed, they could fundamentally alter the economics of AI. Today, most generative AI applications are bottlenecked by access to expensive cloud infrastructure, controlled by a handful of companies. By making high-quality generative AI feasible on commodity hardware, LFMs could democratize AI development.
Liquid AI has also chosen a relatively open approach. The LFM2 models are available under an Apache 2.0-based license, allowing unrestricted academic and research use, as well as commercial deployment for companies with revenue under $10 million. The weights are already hosted on Hugging Face, and Liquid AI has released developer tools like LEAP, a software development kit for embedding LFMs into mobile and edge applications, and Apollo, an iOS app for on-device experimentation.
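Getting a model running locally can be as simple as the sketch below, which uses the standard Hugging Face transformers API. The model identifier assumes the LFM2 weights are published under the LiquidAI organization; check the actual model card for the exact name, license terms, and any minimum transformers version.

```python
# Minimal sketch: running an LFM2 checkpoint locally with Hugging Face
# transformers. The model id assumes the weights sit under the LiquidAI
# organization; confirm the exact name and license on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)   # runs on CPU by default

inputs = tokenizer("The warehouse robot should", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```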
This open release could accelerate adoption across industries. For robotics startups, it lowers the barrier to entry. No longer must they spend millions on proprietary cloud inference. For large enterprises, it provides a way to deploy AI privately within their own infrastructure, rather than relying on third-party providers.
Challenges and the Road Ahead
The excitement around LFMs should not overshadow the challenges. Models trained for edge efficiency must still demonstrate their effectiveness in safety-critical environments. Robots, especially those operating close to humans, require verifiable and explainable decision-making. Certifying LFMs for such use cases will take time and careful testing.
There is also the question of scalability. While 1.2 billion parameters are impressive for edge devices, some robotics applications may still require larger, more capable models. A hybrid cloud-edge approach, where small models handle most tasks and larger models are consulted occasionally, may be the pragmatic near-term solution.
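The pattern is easy to express in code. The sketch below routes requests with a cheap heuristic; `edge_model` and `cloud_model` are hypothetical stand-ins for a local LFM and a remote API, and the confidence threshold is an arbitrary placeholder.

```python
# Sketch of the hybrid pattern: the small on-device model answers first,
# escalating only when unsure. edge_model() and cloud_model() are
# hypothetical stand-ins; the 0.8 threshold is an arbitrary placeholder.
def answer(task: str, edge_model, cloud_model) -> str:
    reply, confidence = edge_model(task)      # returns (text, score in 0..1)
    if confidence >= 0.8:
        return reply                          # fast, private, and free
    return cloud_model(task)                  # hard cases pay the round trip
```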
Finally, developer tooling and integration with existing robotics frameworks like ROS remain in early stages. Tools like LEAP are promising, but robotics engineers will need tighter integration with simulation platforms, planning libraries, and sensor-processing pipelines to fully leverage LFMs.
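As a starting point, wrapping a local model in a ROS 2 node takes only a few lines of rclpy; the sketch below subscribes to operator commands and publishes generated task plans. Here `run_local_lm` is a hypothetical placeholder for on-device LFM2 inference (for example via LEAP), not a real API.

```python
# Sketch of wrapping an on-device model in a ROS 2 node with rclpy.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

def run_local_lm(prompt: str) -> str:
    # Hypothetical placeholder for on-device LFM2 inference
    # (e.g., via LEAP); returns a canned plan for this sketch.
    return "1. navigate to station A\n2. pick item\n3. deliver to station B"

class LanguageCommandNode(Node):
    def __init__(self):
        super().__init__("language_command_node")
        self.create_subscription(String, "operator_command", self.on_command, 10)
        self.plan_pub = self.create_publisher(String, "task_plan", 10)

    def on_command(self, msg: String) -> None:
        # Everything happens locally: no cloud call between hearing a
        # command and publishing a plan for the downstream planner.
        plan = run_local_lm(f"Turn this instruction into a task plan: {msg.data}")
        self.plan_pub.publish(String(data=plan))

def main():
    rclpy.init()
    rclpy.spin(LanguageCommandNode())

if __name__ == "__main__":
    main()
```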
A Future of Liquid Intelligence
The promise of LFMs is clear: AI that operates where it is needed, within the devices and robots that directly interact with the environment. This could represent a turning point for embodied AI, where robots are not merely tools following pre-set commands but adaptive agents that reason about their surroundings in real time.
As Rus put it at the Robotics Summit, “Every device can now be an AI device.” For robotics, that means every robot—whether a warehouse autonomous mobile robot, a humanoid in a factory, or a drone inspecting infrastructure—can soon carry a brain capable of learning, conversing, and adapting, all without relying on a distant cloud.
The era of tethered intelligence is coming to an end. LFMs and Liquid AI’s work in particular suggest a future where intelligence is not locked in remote data centers but flows like water into every robot, wearable, and sensor around us. If the technology matures as expected, robots may soon become not just faster and more efficient but truly autonomous partners in our daily lives.