SNIA Computational Storage Standards – A Vision for Intelligent Data Infrastructure
At the 2024 Compute Memory and Storage Summit, Bill Martin (Samsung) and Jason Molgaard (Solidigm)—co-chairs of the SNIA Computational Storage Technical Work Group (TWG)—presented a forward-looking update on the standardization of computational storage, a transformative paradigm where compute capabilities are embedded directly into storage devices.
Architecture & Models:
SNIA defines three primary computational storage device (CSx) types: the Computational Storage Processor (CSP), the Computational Storage Drive (CSD), and the Computational Storage Array (CSA).
Together, these enable in-situ data processing, reducing host CPU dependency, improving performance, and minimizing data movement.
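To make the data-movement argument concrete, here is a minimal back-of-the-envelope sketch in C. The dataset size, selectivity, and names are illustrative assumptions rather than figures from the presentation; it simply compares how many bytes cross the interconnect when a filter runs on the host versus in situ on the device.

```c
/*
 * Illustrative sketch only: models the data-movement difference between
 * filtering on the host and filtering in situ on a computational storage
 * drive. The dataset size and selectivity are assumed values.
 */
#include <stdio.h>
#include <stdint.h>

#define DATASET_BYTES (64ULL * 1024 * 1024 * 1024)  /* 64 GiB stored on the device */
#define SELECTIVITY   0.02                          /* 2% of records match the filter */

int main(void)
{
    /* Conventional path: the host reads the full dataset, then filters it. */
    uint64_t host_path_bytes = DATASET_BYTES;

    /* Computational storage path: a filter function runs on the device, so
     * only matching records cross the interconnect to the host. */
    uint64_t cs_path_bytes = (uint64_t)(DATASET_BYTES * SELECTIVITY);

    printf("host-side filter: %llu bytes moved to host\n",
           (unsigned long long)host_path_bytes);
    printf("in-situ filter  : %llu bytes moved to host\n",
           (unsigned long long)cs_path_bytes);
    printf("data movement reduced %.0fx\n",
           (double)host_path_bytes / (double)cs_path_bytes);
    return 0;
}
```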
Key Enhancements in v1.1:
Building on the award-winning 1.0 specifications, Version 1.1 introduces two major advancements, described in the sections that follow.
API Innovations:
The Computational Storage API (CS API) abstracts the underlying hardware and presents a unified interface to applications. Designed to be OS-agnostic, it simplifies device access, memory management, and Computational Storage Function (CSF) execution across all CSx types, fostering rapid adoption without requiring existing applications to be rewritten for each device.
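As a rough illustration of the call flow such an abstraction implies, the sketch below opens a CSx, allocates device memory, locates a CSF, and executes it. All type names, function names, and signatures here are hypothetical placeholders rather than the actual CS API definitions, and the stubs exist only so the example compiles.

```c
/*
 * Minimal sketch of the call flow an application might use through a
 * CS API-style abstraction. Names and signatures are hypothetical; the
 * SNIA CS API defines its own C interface. Stubs included for compilation.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { int id; }          csx_handle_t;  /* opaque device handle       */
typedef struct { size_t off, len; } cs_mem_t;      /* device-memory region       */
typedef struct { int fid; }         csf_handle_t;  /* compute function (CSF)     */

/* --- hypothetical API surface (stubbed for illustration) ------------------ */
static int cs_open_csx(const char *path, csx_handle_t *csx)                      { (void)path; csx->id = 1; return 0; }
static int cs_alloc_device_mem(csx_handle_t *csx, size_t len, cs_mem_t *m)       { (void)csx; m->off = 0; m->len = len; return 0; }
static int cs_get_csf(csx_handle_t *csx, const char *name, csf_handle_t *f)      { (void)csx; (void)name; f->fid = 7; return 0; }
static int cs_exec_csf(csx_handle_t *csx, csf_handle_t *f, cs_mem_t *in, cs_mem_t *out) { (void)csx; (void)f; (void)in; (void)out; return 0; }
static int cs_copy_from_device(csx_handle_t *csx, cs_mem_t *m, void *dst, size_t len)   { (void)csx; (void)m; memset(dst, 0, len); return 0; }

int main(void)
{
    csx_handle_t csx;
    cs_mem_t     in, out;
    csf_handle_t filter;
    char         result[4096];

    /* 1. Discover/open a computational storage device (CSx). */
    if (cs_open_csx("/dev/csx0", &csx))                          return 1;
    /* 2. Allocate device-local memory for input and output. */
    if (cs_alloc_device_mem(&csx, 1 << 20, &in))                 return 1;
    if (cs_alloc_device_mem(&csx, sizeof(result), &out))         return 1;
    /* 3. Locate a computational storage function (CSF) by name. */
    if (cs_get_csf(&csx, "filter", &filter))                     return 1;
    /* 4. Execute the CSF in situ; the bulk data stays on the device. */
    if (cs_exec_csf(&csx, &filter, &in, &out))                   return 1;
    /* 5. Copy only the (small) result back to host memory. */
    if (cs_copy_from_device(&csx, &out, result, sizeof(result))) return 1;

    printf("CSF executed; %zu result bytes copied to host\n", sizeof(result));
    return 0;
}
```

The value of the abstraction is that the same five-step sequence applies whether the handle refers to a CSP, CSD, or CSA.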
Harmonization with NVMe:
Notably, SNIA’s architectural models are directly reflected in the NVMe Computational Storage command set ratified in early 2024. The two standards bodies collaborated closely, so SNIA’s architectural vision is embodied in NVMe’s implementation, and the SNIA API fully supports NVMe computational storage operations.
Emerging Frontier: SDXI Integration
The TWG is exploring synergies with the Smart Data Accelerator Interface (SDXI), SNIA’s standard for memory-to-memory data movement. Pairing the two would allow data to be staged between host memory and CSx device memory through a standardized, CPU-offloaded data mover.
Together, CSx and SDXI promise to power next-generation composable infrastructures in which compute, storage, and memory interoperate fluidly, ushering in a new era of data-centric computing.
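As a conceptual sketch of what a descriptor-based memory-to-memory mover contributes to this pairing, the fragment below models software queuing a copy from host memory into CSx device memory. The descriptor layout, field names, and opcode are hypothetical and do not reproduce the actual SDXI descriptor format.

```c
/*
 * Conceptual sketch of a descriptor-ring, memory-to-memory data mover.
 * The layout, field names, and opcode below are hypothetical and do NOT
 * match the SDXI specification; they only convey the idea of software
 * queuing copy work that an offload engine completes between address
 * spaces (e.g., host memory and CSx device memory).
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* One hypothetical 64-byte copy descriptor. */
typedef struct {
    uint16_t opcode;       /* hypothetical COPY operation code              */
    uint16_t src_akey;     /* key selecting the source address space        */
    uint16_t dst_akey;     /* key selecting the destination address space   */
    uint16_t flags;
    uint64_t src_addr;     /* source address/offset                         */
    uint64_t dst_addr;     /* destination address/offset (e.g., CSx memory) */
    uint64_t length;       /* bytes to move                                 */
    uint8_t  reserved[32]; /* pad the slot to 64 bytes                      */
} copy_desc_t;

#define RING_ENTRIES 16

typedef struct {
    copy_desc_t slots[RING_ENTRIES];
    unsigned    tail;      /* producer index advanced by software           */
} desc_ring_t;

/* Queue one copy; a real mover would then be notified via a doorbell write. */
static void enqueue_copy(desc_ring_t *ring, uint64_t src, uint64_t dst, uint64_t len)
{
    copy_desc_t *d = &ring->slots[ring->tail % RING_ENTRIES];
    memset(d, 0, sizeof(*d));
    d->opcode   = 1;   /* hypothetical COPY */
    d->src_akey = 0;   /* hypothetical: host address space */
    d->dst_akey = 1;   /* hypothetical: CSx device-memory address space */
    d->src_addr = src;
    d->dst_addr = dst;
    d->length   = len;
    ring->tail++;
}

int main(void)
{
    desc_ring_t ring = {0};

    /* Stage 4 KiB of input from host memory into CSx device memory. */
    enqueue_copy(&ring, 0x100000, 0x0, 4096);
    printf("queued %u copy descriptor(s); the data mover would complete them "
           "without host CPU involvement\n", ring.tail);
    return 0;
}
```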
Closing Thoughts
This presentation underscored SNIA’s pivotal role in leading vendor-neutral innovation in storage standards. With a robust architectural foundation, harmonized NVMe alignment, and SDXI-enabled future direction, computational storage is poised to redefine performance, scalability, and efficiency in AI, HPC, and data-intensive workloads.