Kubernetes Internals for Developers: CNI, CSI, CRI, and OCI

When working with Kubernetes, we often hear about interfaces like CNI, CSI, CRI, and OCI.

But what are they really? Why do they exist?

Let’s peel back the layers of Kubernetes and understand how it talks to the outside world — and why it’s designed the way it is.👇


🌐 1. CNI – Container Network Interface

🔌 “CNI is how Kubernetes handles pod networking.”

Every pod in Kubernetes gets its own network namespace and IP address. But how does it get an IP? Who sets up the routes? Who tears it all down when the pod dies?

👉 CNI is the answer.

The Container Network Interface (CNI) is a specification maintained by the CNCF that allows Kubernetes to integrate with different network plugins like:

  • 🔸 Calico (for network policy & BGP routing)
  • 🔸 Flannel (simple overlay networks)
  • 🔸 Cilium (eBPF-based networking)


📌 When a pod is scheduled, the kubelet (via the container runtime, which creates the pod's network namespace) invokes the CNI plugin to:

  • Set up the pod's network interface inside that namespace
  • Assign an IP address
  • Configure routes and rules

💡 CNI keeps Kubernetes decoupled from the actual implementation of the network stack.
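To make that concrete: a CNI plugin is just an executable. The runtime execs it with the verb in the CNI_COMMAND environment variable, the pod's netns path in CNI_NETNS, and the interface name in CNI_IFNAME (all defined by the CNI spec), pipes the network config JSON on stdin, and reads a result JSON from stdout. Here's a minimal Go sketch of that contract; the IP is made up, and the result is heavily trimmed (a real result also reports interfaces and routes):

```go
// Minimal sketch of the CNI plugin contract (not a production plugin).
// The runtime execs the plugin with the verb in CNI_COMMAND, the pod's
// network namespace path in CNI_NETNS, and the network config on stdin.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// NetConf mirrors the common fields every CNI network config carries.
type NetConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

func main() {
	raw, _ := io.ReadAll(os.Stdin) // network config JSON from the runtime
	var conf NetConf
	if err := json.Unmarshal(raw, &conf); err != nil {
		fmt.Fprintln(os.Stderr, "bad network config:", err)
		os.Exit(1)
	}

	switch os.Getenv("CNI_COMMAND") {
	case "ADD":
		// A real plugin would enter the netns at CNI_NETNS, create the
		// interface named by CNI_IFNAME, assign an IP, and add routes.
		// Here we just print a trimmed, hard-coded result (made-up IP).
		result := map[string]any{
			"cniVersion": conf.CNIVersion,
			"ips":        []map[string]string{{"address": "10.244.0.7/24"}},
		}
		_ = json.NewEncoder(os.Stdout).Encode(result)
	case "DEL":
		// A real plugin would tear down the interface and release the IP.
	default:
		fmt.Fprintln(os.Stderr, "unsupported CNI_COMMAND")
		os.Exit(1)
	}
}
```

This exec-based contract is exactly why Calico, Flannel, and Cilium are interchangeable: Kubernetes never sees anything beyond it.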


💾 2. CSI – Container Storage Interface

📦 “CSI is how Kubernetes handles dynamic storage provisioning.”

Kubernetes lets you attach storage volumes to pods using PersistentVolumes and PVCs. But what if you're running on AWS today and moving to GCP tomorrow? Should you rewrite all your storage logic?

❌ Not at all. Enter CSI.

The Container Storage Interface (CSI) allows storage vendors to write plugins that Kubernetes can use to:

  • Provision volumes dynamically
  • Attach/detach volumes from nodes
  • Mount/unmount volumes into containers

✨ Popular CSI drivers:

  • AWS EBS CSI driver
  • GCP PD CSI driver
  • Ceph CSI driver
  • HostPath CSI driver (for development and testing)


CSI helps Kubernetes support any storage backend in a cloud-agnostic way — just like CNI does for networking.
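Under the hood, CSI is a set of gRPC services a driver implements: a controller service for cluster-wide operations (provision, attach) and a node service for per-node mounts. The Go sketch below shows the shape of those services; the method names come from the CSI spec, but the signatures are simplified stand-ins for the real protobuf request/response messages:

```go
// Hedged sketch of the core CSI RPCs a driver implements. Method names
// follow the CSI spec; the signatures are simplified stand-ins for the
// real protobuf-generated request/response types.
package csisketch

import "context"

// ControllerServer: cluster-level volume lifecycle.
type ControllerServer interface {
	// Provision a volume of at least capacityBytes in the storage backend.
	CreateVolume(ctx context.Context, name string, capacityBytes int64) (volumeID string, err error)
	// Release the backend volume.
	DeleteVolume(ctx context.Context, volumeID string) error
	// Attach the volume to a node (e.g. an EBS attach API call).
	ControllerPublishVolume(ctx context.Context, volumeID, nodeID string) error
}

// NodeServer: per-node operations (typically runs as a DaemonSet).
type NodeServer interface {
	// Format and mount the device at a staging path on the node.
	NodeStageVolume(ctx context.Context, volumeID, stagingPath string) error
	// Bind-mount from the staging path into the pod's volume directory.
	NodePublishVolume(ctx context.Context, volumeID, targetPath string) error
}
```

Roughly speaking, creating a PVC triggers CreateVolume (driven by the external-provisioner sidecar), and scheduling the pod onto a node drives the attach/stage/publish calls in order until the volume is mounted into the container.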


🧱 3. CRI – Container Runtime Interface

🛠️ “CRI is how Kubernetes talks to the container runtime.”

Once your pod is scheduled, how does Kubernetes actually start the container?

The kubelet needs to talk to a container runtime, but there are multiple options: containerd, CRI-O, Docker Engine (whose built-in kubelet support, dockershim, was removed in Kubernetes 1.24), etc.

To avoid hardcoding any specific runtime, Kubernetes uses the CRI, a gRPC abstraction layer (two services: RuntimeService and ImageService) that defines:

  • How to create, start, and stop pod sandboxes and containers
  • How to pull container images
  • How to check container and pod status

🧩 Runtimes like containerd and CRI-O implement the CRI API, and the kubelet uses this API to manage containers.


🔄 This modularity means you can swap runtimes without touching the rest of your Kubernetes logic.
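You can even see the CRI in action without the kubelet. The sketch below does roughly what the crictl debugging tool does: dial the runtime's unix socket and call the same gRPC RuntimeService the kubelet uses. It assumes containerd's default socket path, the k8s.io/cri-api Go bindings, and a grpc-go version new enough to have grpc.NewClient:

```go
// Hedged sketch: talk to a CRI runtime the way the kubelet does, over
// gRPC on the runtime's unix socket.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// containerd's default CRI socket; CRI-O listens on
	// /var/run/crio/crio.sock instead.
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// The same RPC the kubelet uses to verify the runtime on startup.
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)

	// Listing pod sandboxes goes through the same RuntimeService.
	pods, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandboxes:", len(pods.Items))
}
```

Point the same code at CRI-O's socket and it works unchanged: exactly the interchangeability the CRI was designed for.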


📦 4. OCI – Open Container Initiative

🧬 “OCI defines the DNA of containers.”

In the early days of containers, there was no vendor-neutral standard for how to build or run them; every tool did things its own way.

The Open Container Initiative (OCI) was founded in 2015 to fix that.

It defines two key standards:

  • OCI Image Spec – how container images are structured (layers, manifest, and config)
  • OCI Runtime Spec – how containers should be executed (runc is the reference implementation)

Thanks to OCI, you can:

  • Build images with Docker or Buildah
  • Run them with containerd, CRI-O, or Podman
  • Know they’ll work because everyone follows the same standard


Kubernetes relies heavily on this standardization to ensure interoperability across tools and platforms.
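Concretely, the OCI Runtime Spec boils down to a bundle directory: a rootfs/ plus a config.json describing what to run. Below is a Go sketch that emits a heavily trimmed config.json using hand-rolled structs; the real spec (and the official specs-go package) has many more fields for namespaces, mounts, cgroups, and capabilities:

```go
// Hedged sketch: a tiny subset of an OCI runtime-spec config.json,
// the file a low-level runtime like runc reads to know what to execute.
// Field names follow the runtime spec; most fields are omitted here.
package main

import (
	"encoding/json"
	"os"
)

type Process struct {
	Args []string `json:"args"` // command to run inside the container
	Cwd  string   `json:"cwd"`
	Env  []string `json:"env,omitempty"`
}

type Root struct {
	Path     string `json:"path"` // container's root filesystem
	Readonly bool   `json:"readonly,omitempty"`
}

type Spec struct {
	OCIVersion string  `json:"ociVersion"`
	Process    Process `json:"process"`
	Root       Root    `json:"root"`
	Hostname   string  `json:"hostname,omitempty"`
}

func main() {
	spec := Spec{
		OCIVersion: "1.0.2",
		Process:    Process{Args: []string{"sh"}, Cwd: "/"},
		Root:       Root{Path: "rootfs"},
		Hostname:   "demo",
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(&spec) // roughly what `runc spec` generates, trimmed
}
```

Given a bundle with this config.json and a rootfs, `runc run <id>` starts the container; that file format is the contract containerd and CRI-O ultimately hand down to runc.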


Summary

These four interfaces—CNI, CSI, CRI, and OCI—are the backbone of Kubernetes' modular and pluggable architecture. Because of these abstractions:


✅ You can change cloud providers

✅ You can swap runtimes or storage solutions

✅ Kubernetes will still just work 🔥
