Kubernetes Networking
Intro for Stackers
Pino de Candia @ Huawei
Talk Outline
• Kubernetes networking
• Kuryr
• Dragonflow
Kubernetes Networking
Containers vs. Pods
• Containers in the same Pod communicate via loopback (sketch below)
• Multiple containers in a Pod share a single network namespace
• Containers are NOT bridged inside a Pod
(Diagram: a Pod with containers C1, C2, C3 sharing one eth0.)
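A minimal sketch of this (names and images are illustrative): two containers in one Pod share the network namespace, so the sidecar reaches the web server over localhost.

apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod      # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # same network namespace as "web": localhost:80 reaches nginx, no bridging involved
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 5; done"]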
Containers vs. Pods Analogy
(Diagram: atoms H and O combine into molecules such as H2O and C12H22O11.)
- Process ~ Particle
- Container ~ Atom
- Pod ~ Molecule
- Application ~ Combination of molecules
https://speakerdeck.com/thockin/kubernetes-understanding-pods-vs-containers
K8s Networking Manifesto
• all containers can communicate with all other containers without NAT
• all nodes can communicate with all containers (and vice-versa) without NAT
• the IP that a container sees itself as is the same IP that others see it as
Minimal network setup!
• IP per pod
• K8s is ready to deploy Pods after install
• No L2 Network, Network Port, Subnet, FloatingIP, Security Group, Router, Firewall, DHCP…
• In general, the user doesn’t have to draw network diagrams
Network diagram
(Diagram: three nodes, each running several Pods plus a DNS Pod, all connected by a flat IP network; Services are reachable at Svc VIPs and cluster DNS at a DNS VIP.)
Think about Services, not Pods
• Pods are grouped by label
• Pods are automatically managed
• Sets of pods provide a service
• Service IP/port is load-balanced to pods
• Service names auto-registered in DNS
The service architecture
(Diagram: a Deployment manages a ReplicaSet, which manages Pods at IP1, IP2, IP3; an Endpoints object tracks those pod IPs; the LB reads the Service’s clusterIP/externalIP and reads the Endpoints; DNS registers the Service; a Client resolves it via DNS and is load-balanced to a Pod; an Auto-scaler adjusts the ReplicaSet.)
ReplicationController, ReplicaSet
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
• Pets vs. Cattle
• Don’t deploy just one pod
• Deploy something to manage pods for you (see the commands below)
• Label your Pods
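One way to create and inspect it with kubectl (assuming a configured cluster; the file name is illustrative):

kubectl apply -f nginx-rc.yaml      # create the ReplicationController
kubectl get rc nginx                # compare desired vs. current replicas
kubectl get pods -l app=nginx       # the labeled pods it manages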
Fundamentals: Pods are ephemeral
• A Pod can be killed at any time
• A new Pod may be created at any time
• No Pod migration!
– Port/VIF/IP address doesn’t need to move
– Even pods in a StatefulSet change address
Deployment
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
• ReplicationController came first
• ReplicaSet has a more expressive selector
• Deployment enables declarative updates for ReplicaSets (example below)
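A declarative update could look like this sketch (the new image tag is illustrative); the Deployment rolls the ReplicaSet forward to the new desired state:

kubectl set image deployment/my-nginx my-nginx=nginx:1.13   # declare the new image
kubectl rollout status deployment/my-nginx                  # watch the rolling update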
Service – L4 load balancing
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    run: my-nginx
• Adds pods to an Endpoints object
– …if a selector is defined; otherwise you manage the Endpoints object some other way
• Supports TCP and UDP, and liveness probes
• East-West (pod-to-pod) using a “clusterIP”
• North-South using NodePort, ExternalIP, or LoadBalancer (as specified in the template; sketch below)
• Type LoadBalancer behavior depends on the implementation (and varies by hosting cloud)
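For North-South exposure, the same Service could be given a NodePort. A sketch (the nodePort value is illustrative and must fall in the cluster’s NodePort range, 30000–32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: NodePort          # also allocates a clusterIP
  ports:
  - port: 80              # the clusterIP port
    targetPort: 80        # the pod port
    nodePort: 30001       # opened on every node
  selector:
    run: my-nginx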
NodePort and ExternalIP use ClusterIP
(Diagram: pods A and B behind Service1. Service1 → clusterIP:port, NodePort 30001, Endpoints1; Endpoints1 → A:p1, B:p2. Every node (10.0.0.2, 10.0.0.3, 10.0.0.4) programs the same mappings:)
• Port 30001 → clusterIP:port
• clusterIP:port → A:p1, B:p2
• externalIP:port → clusterIP:port
East-West (pod-to-pod) load-balancing
(Diagram: pods A and B on different nodes; each node maps clusterIP:port → A:p1, B:p2.)
Client Pod IP unchanged; clusterIP:port is rewritten to the chosen service Pod IP and targetPort.
North-South load-balancing with NodePort
(Diagram: an external client reaches any node (10.0.0.2, 10.0.0.3, 10.0.0.4) on port 30001; each node maps Port 30001 → clusterIP:port and clusterIP:port → A:p1, B:p2.)
First, SNAT+DNAT: clientIP to nodeIP, NodePort to clusterIP:port.
Then DNAT: clusterIP:port to service pod:targetPort.
DNS records for Services
• With clusterIP:
– creates DNS A and SRV records of Service.Namespace → clusterIP/port (example below)
• Without clusterIP (Headless):
– With selectors:
• manages Endpoints and creates DNS A records for each IP
– Without selectors:
• With ExternalName → creates a DNS CNAME record
• Without ExternalName → expects someone else to manage Endpoints and creates DNS A records for each IP
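From inside a pod, the A record for a non-headless Service resolves to its clusterIP. A sketch, assuming the my-nginx Service above and the default cluster.local domain:

nslookup my-nginx.default.svc.cluster.local
# answers with the Service's clusterIP, not a pod IP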
Service discovery
• Via environment variables (example below)
• Via DNS
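For the environment-variable path, kubelet injects variables for every Service that existed when the pod started; names derive from the Service name, values here are illustrative:

MY_NGINX_SERVICE_HOST=10.0.0.11   # clusterIP of Service "my-nginx"
MY_NGINX_SERVICE_PORT=80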
Canonical workflow
1. Service records are registered in DNS
2. Client pod queries DNS for service
3. Client sends service request
4. KubeProxy (or other SDN) L4 load-balances it to one of the Endpoints
Overview Diagram
(The same service-architecture diagram, annotated with steps 1–7 of the canonical workflow: the Deployment manages the ReplicaSet, which manages the Pods; Endpoints tracks the pod IPs; DNS registers the Service; the Client resolves the Service in DNS, sends its request, and the LB forwards it to one of the Pods.)
Ingress – L7 routing
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
• Route different URL paths to different backend services (example below)
• Different Ingress controllers implement different feature subsets
• DNS behavior depends on the controller
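With the rules above, the routing could be exercised like this (assuming the Ingress controller is reachable at $INGRESS_IP):

curl -H "Host: foo.bar.com" http://$INGRESS_IP/foo   # routed to service s1
curl -H "Host: foo.bar.com" http://$INGRESS_IP/bar   # routed to service s2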
Namespaces
• A scope for names and labels
• Mechanism to attach authorization and policy
– Namespaces can map to organizations or projects
• Scope for quotas (sketch below)
– Total quotas as well as per-resource limits are defined per namespace
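A minimal quota sketch (name is illustrative; assumes a namespace called myproject exists):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota     # hypothetical name
  namespace: myproject
spec:
  hard:
    pods: "20"              # at most 20 pods in this namespace
    services: "5"           # at most 5 services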
NetworkPolicy – Security
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
• Anyone can query DNS for services in any namespace
• By default pods receive traffic from anywhere
• Pods selected by a NetworkPolicy allow in only what’s explicitly allowed (default-deny sketch below)
– E.g. Pods with label “role:db” should allow TCP to port 6379 from pods with label “role:frontend” in namespaces with label “project:myproject”
• Only ingress rules in v1.7
– Egress, QoS and other rules in progress
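A common companion pattern is a default-deny policy. A sketch (name is illustrative): the empty podSelector selects every pod in the namespace, and with no ingress rules nothing is allowed in.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny        # hypothetical name
  namespace: default
spec:
  podSelector: {}           # selects all pods in the namespace
  # no ingress rules: all inbound traffic to selected pods is denied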
Comparison to OpenStack
• Less networking
• No DHCP
• No Metadata Service or Proxy
• Service is the central concept
• Similar to Heat, but more opinionated
• No IPSec VPN, port mirroring, QoS, service chaining…
Other network-related topics
• Number of interfaces (or IPs) per pod
• Federation/Ubernetes and Ubernetes Lite
• Service Meshes (Istio)
Network providers/controllers
• Romana
• Cilium
• Trireme
• NGINX Ingress
• GCE
• AWS
• Azure
• KubeDNS/SkyDNS
• KubeProxy
• Flannel
• Weave
• Kuryr
• Calico and Canal
• Contiv
Kubernetes Admin Challenges
• What’s the right combination of controllers?
• How to keep users informed of the features you support
• Underlay design
• VMs vs. Bare Metal
• How many clusters/deployments?
• Connectivity across environments and clouds
Kuryr
Kuryr motivation
• It’s hard to connect VMs, bare metal and containers
• Overlay² (an overlay on top of an overlay) for containers in VMs
• Smooth transition to cloud-native and micro-services
Kuryr-Kubernetes Port per Pod
(Diagram: Pods running on bare metal and Pods nested in VMs, each attached to its own Neutron port.)
Neutron-port-per-Pod, nested case
(Diagram: the VM’s eth0 is a Neutron trunk port; each nested Pod attaches through a VLAN subinterface (eth0.10, eth0.20, eth0.30) mapped to a child port on its own network.)
Each Pod gets a separate port-level firewall, like any VM in Neutron.
Kuryr-Kubernetes Macvlan
(Diagram: Pods nested in VMs, sharing each VM’s interface.)
Pods get a MAC and IP directly on the VM’s network. The VM’s and nested Pods’ MACs/IPs all share the VM’s single Neutron port. Simple, but no NetworkPolicy support.
Kuryr today supports
• Kubernetes native networking
• Pod gets a Neutron port
– Or macvlan per Pod
• Single tenant
• Full connectivity (default)
• K8s ClusterIP Services (Neutron LBaaS)
• Bare metal and Pod-in-VM
Kuryr-Kubernetes Architecture
Dragonflow
Dragonflow motivation
• Enterprise-grade OpenSource SDN
• That’s vendor-neutral
• That’s simple, stable, flexible
• And scales
Dragonflow – distributed controller
(Diagram: the Neutron server runs the Dragonflow plugin backed by a pluggable DB; every compute node runs a local Dragonflow controller with a DB driver, programming the local OVS for that node’s VMs.)
Dragonflow scale – Redis test
Dragonflow’s pluggable DB and Pub-Sub
• DB
– etcd
– Redis
– ZooKeeper
– RAMCloud
– Cassandra
• Pub-Sub
– Redis
– ZeroMQ (Neutron)
– etcd
Dragonflow Pipeline
Installed in every OVS:
• Classification and tagging of traffic outgoing from local ports
• Ingress port security (ARP spoofing, SG, …)
• Services: ARP, DHCP
• L2 lookup; L3 lookup (DVR)
• Egress port security and egress processing (NAT)
• Egress: dispatching outgoing traffic to external nodes or local ports
• Ingress processing (NAT, BUM): dispatching incoming traffic from external nodes to local ports
The pipeline is fully proactive; security groups use reactive flows to the controller.
Dragonflow recent features
• Pike
– BGP dynamic routing
– Service Function Chaining
– IPv6
– Trunk ports
– Distributed SNAT
Thanks
Editor’s Notes
• Slides 7–8: K8s networking defined itself in contrast to Docker’s “host-private” networking, which forced mapping node ports to container ports. The K8s NodePort Service type inherits from Docker’s thinking.
• Slide 10: From the deployer’s perspective, Services matter more than Pods.
• Slide 12: Let there be pods!
• Slide 15: Now we get to the networking.
• Slide 16: NodePort is a remnant of Docker’s early host-private network model, which relied heavily on NAT and on mapping ports between nodes and containers.
• Slide 18: The SNAT forces the reply back through the node that received the request.
• Slide 19: DNS is an add-on. You don’t have to enable it, but it’s strongly recommended.
• Slide 23: Now we get to the networking.
• Slide 32: This model works both for K8s on OpenStack and for K8s on Neutron (a network shared with OpenStack).
• Slide 33: Needs a special CNI driver. Leverages OpenStack Neutron trunk ports.
• Slides 38–40: <10k lines of code. Mirantis tests from Dec ’16… And people don’t run clusters beyond a few hundred nodes. Dragonflow addresses greater scale, but is also great for small/medium clusters (fewer components that can break).
• Slide 41: Talk about how easy drivers are to add (100–200 lines of code).