Copyright © 2020 Mirantis, Inc. All rights reserved
What's New in
Kubernetes 1.18
WEBINAR | March 17, 2020
2
The content contained herein is for informational purposes only, may
not be referenced or added to any contract, and should not be relied
upon to make purchasing decisions. It is not a commitment,
promise, or legal obligation to provide any features, functionality,
capabilities, code, etc. or to provide anything within any schedule,
date, time, etc. All Mirantis product and service decisions remain at
Mirantis sole and exclusive discretion.
Plus, I can't guarantee what features actually make it into
Kubernetes 1.18 when it's released next week.
Disclaimer
3
Featured Presenter
Nick Chase
Head of Technical Content at Mirantis
Nick Chase is Head of Technical Content for Mirantis and a former member of the Kubernetes
release team. He is a former software developer and author or co-author of more than a
dozen books on various programming topics, including the OpenStack Architecture Guide,
Understanding OPNFV, and Machine Learning for Mere Mortals.
Reach him on Twitter @NickChase.
4
A Little Housekeeping
● Please submit questions in the
Questions panel.
● We’ll provide a link where you
can download the slides at the
end of the webinar.
5
● Generally Available
● Beta
● Alpha
● Q&A
Agenda
Generally
available
Production ready and enabled by
default
7
RunAsUserName for
Windows
8
● Windows worker nodes
● Controllers still run on Linux
RunAsUserName for Windows
9
apiVersion: v1
kind: Pod
metadata:
  name: username-demo-pod
spec:
  securityContext:
    windowsOptions:
      runAsUserName: "ContainerUser"
  containers:
  - name: username-demo
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["ping", "-t", "localhost"]
  nodeSelector:
    kubernetes.io/os: windows
RunAsUserName for Windows
10
kubectl apply -f run-as-username-pod.yaml
kubectl exec -it username-demo-pod -- powershell
echo $env:USERNAME
ContainerUser
RunAsUserName for Windows
11
● Limitations
○ Must be valid (non-empty) user (DOMAIN\USER)
○ DOMAIN
■ Optional
■ NetBios name or DNS name
○ USER
■ <= 20 characters
■ Can have dots or spaces
■ No control characters
■ Not in \ / : * ? " < > |
RunAsUserName for Windows
12
Support gMSA for Windows
workloads
13
● Group Managed Service Account
○ Password management
○ Single identity for group of servers
● Deploy GMSACredentialSpec CRD
● Install validation webhooks (multiple steps)
● Provision gMSAs in Active Directory
Support gMSA for Windows workloads
14
● Create the GMSACredentialSpec object:
apiVersion: windows.k8s.io/v1alpha1
kind: GMSACredentialSpec
metadata:
  name: gmsa-WebApp1  #This is an arbitrary name but it will be used as a reference
credspec:
  ActiveDirectoryConfig:
    GroupManagedServiceAccounts:
    - Name: WebApp1       #Username of the GMSA account
      Scope: CONTOSO      #NETBIOS Domain Name
    - Name: WebApp1       #Username of the GMSA account
      Scope: contoso.com  #DNS Domain Name
  CmsPlugins:
  - ActiveDirectory
  DomainJoinConfig:
    DnsName: contoso.com      #DNS Domain Name
    DnsTreeName: contoso.com  #DNS Domain Name Root
    Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a  #GUID
    MachineAccountName: WebApp1  #Username of the GMSA account
    NetBiosName: CONTOSO         #NETBIOS Domain Name
    Sid: S-1-5-21-2126449477-2524075714-3094792973  #SID of GMSA
Support gMSA for Windows workloads
15
● Configure cluster role to enable RBAC on specific
gMSA credential specs
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: webapp1-role
rules:
- apiGroups: ["windows.k8s.io"]
  resources: ["gmsacredentialspecs"]
  verbs: ["use"]
  resourceNames: ["gmsa-WebApp1"]
Support gMSA for Windows workloads
16
● Assign role to service accounts to use specific
gMSA credentialspecs
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-default-svc-account-read-on-gmsa-WebApp1
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: webapp1-role
  apiGroup: rbac.authorization.k8s.io
Support gMSA for Windows workloads
17
● Configure Pod to use the gMSA credential spec
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: with-creds
  name: with-creds
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: with-creds
Support gMSA for Windows workloads
  template:
    metadata:
      labels:
        run: with-creds
    spec:
      securityContext:
        windowsOptions:
          gmsaCredentialSpecName: gmsa-WebApp1
      containers:
      - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
        imagePullPolicy: Always
        name: iis
      nodeSelector:
        kubernetes.io/os: windows
18
● Configure container to use the gMSA spec
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: with-creds
  name: with-creds
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: with-creds
Support gMSA for Windows workloads
  template:
    metadata:
      labels:
        run: with-creds
    spec:
      containers:
      - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
        imagePullPolicy: Always
        name: iis
        securityContext:
          windowsOptions:
            gmsaCredentialSpecName: gmsa-WebApp1
      nodeSelector:
        kubernetes.io/os: windows
19
Raw block device using
persistent volume source
20
● Raw block devices -- storage exposed to the pod as a
block device, without a filesystem
○ AWSElasticBlockStore
○ AzureDisk
○ CSI
○ FC (Fibre Channel)
○ GCEPersistentDisk
○ iSCSI
○ Local volume
○ OpenStack Cinder
○ RBD (Ceph Block Device)
○ VsphereVolume
Raw block device using persistent volume source
21
● Persistent Volumes using a Raw Block Volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  persistentVolumeReclaimPolicy: Retain
  fc:
    targetWWNs: ["50060e801049cfd1"]
    lun: 0
    readOnly: false
Raw block device using persistent volume source
22
● Persistent Volume Claim requesting a Raw Block
Volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
Raw block device using persistent volume source
23
● Add to container
○ Specify device path instead of mount path
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-block-volume
spec:
  containers:
  - name: fc-container
    image: fedora:26
    command: ["/bin/sh", "-c"]
    args: ["tail -f /dev/null"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc
Raw block device using persistent volume source
24
Cloning a PVC
25
● Use an existing PersistentVolumeClaim as the
DataSource for a new PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: my-csi-plugin
  dataSource:
    name: existing-src-pvc-name
    kind: PersistentVolumeClaim
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Cloning a PVC
26
Kubectl diff
27
● Similar to kubectl apply
kubectl diff -f some-resources.yaml
● Specify KUBECTL_EXTERNAL_DIFF to use your
favorite diff tool
KUBECTL_EXTERNAL_DIFF=meld kubectl diff -f some-resources.yaml
kubectl diff
28
APIServer DryRun
29
kubectl apply -f some-resources.yaml --dry-run=server
● Replaces the now-deprecated --server-dry-run flag
APIServer DryRun
30
Pass Pod information in CSI
calls
31
● Adds new fields to volume_context for
NodePublishVolumeRequest
○ csi.storage.k8s.io/pod.name: {pod.Name}
○ csi.storage.k8s.io/pod.namespace: {pod.Namespace}
○ csi.storage.k8s.io/pod.uid: {pod.UID}
○ csi.storage.k8s.io/serviceAccount.name: {pod.Spec.ServiceAccountName}
Pass Pod information in CSI calls
32
● Manually include the CSIDriver object in driver
manifests
● Previously required the cluster-driver-registrar
sidecar container, which created the CSIDriver
object automatically
Pass Pod information in CSI calls
33
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: testcsidriver.example.com
spec:
  podInfoOnMount: true
Pass Pod information in CSI calls
34
Skip attach for
non-attachable CSI volumes
35
● Some CSI volume types don't have attach
operations:
○ NFS
○ Secrets
○ Ephemeral
Skip attach for non-attachable CSI volumes
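For such drivers, attach can be skipped by setting attachRequired: false on the CSIDriver object, which tells Kubernetes not to wait for a VolumeAttachment. A minimal sketch, reusing the driver name from the earlier slide:

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: testcsidriver.example.com
spec:
  # Kubernetes skips the attach/detach operations for this driver
  attachRequired: false
```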
Beta
Enabled by default, but not necessarily
ready for production environments.
Not likely to change.
37
CertificateSigningRequest
API
38
● Create the request
● Create the object and send to K8s
● Approve the request
○ Manual or automatic
● Associated with a private key
○ Can be held by a pod
■ Identity
■ Authorization
● Be careful who can approve requests!
CertificateSigningRequest API
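The "create the object" step can be sketched as a CertificateSigningRequest manifest; the request field carries the base64-encoded PEM CSR (placeholder below), and the object name is illustrative:

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  # base64-encoded PEM CSR goes here (placeholder, not a real value)
  request: <base64-encoded server.csr>
  usages:
  - digital signature
  - key encipherment
  - server auth
```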
39
● Must be set up to serve the certificates API
● Default signer implementation in controller
manager
○ Pass CA's keypair --cluster-signing-cert-file and
--cluster-signing-key-file to controller manager
CertificateSigningRequest API
40
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "192.0.2.24",
    "10.0.34.2"
  ],
  "CN": "my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
2017/03/21 06:48:17 [INFO] generate received request
2017/03/21 06:48:17 [INFO] received CSR
2017/03/21 06:48:17 [INFO] generating key: ecdsa-256
2017/03/21 06:48:17 [INFO] encoded CSR
CertificateSigningRequest API
41
● Generates 2 files
○ Actual request (server.csr)
○ Encoded key for the final certificate (server-key.pem)
kubectl get csr
NAME AGE REQUESTOR CONDITION
my-svc.my-namespace 10m yourname@example.com Pending
kubectl certificate approve my-svc.my-namespace
● Download to server.crt
kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \
  | base64 --decode > server.crt
● Use server.crt and server-key.pem as keypair for HTTPS
server
CertificateSigningRequest API
42
Even pod spreading across
failure domains
43
● Affinity = unlimited pods per topology domain
● Anti-affinity = at most one pod per domain
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  topologySpreadConstraints:
  - maxSkew: <integer>
    topologyKey: <string>
    whenUnsatisfiable: <string>
    labelSelector: <object>
Even pod spreading across failure domains
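The schema above, filled in as a concrete example (the app label is illustrative): pods matching the selector must stay within one pod of each other across zones, or the pod stays Pending:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    app: myapp
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: failure-domain.beta.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: myapp
  containers:
  - name: app
    image: nginx
```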
44
● Default policy (alpha)
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
- pluginConfig:
  - name: PodTopologySpread
    args:
      defaultConstraints:
      - maxSkew: 1
        topologyKey: failure-domain.beta.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
Even pod spreading across failure domains
45
Add pod-startup
liveness-probe holdoff for
slow-starting pods
46
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Add pod-startup liveness-probe holdoff for
slow-starting pods
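The holdoff itself is the startupProbe field: liveness and readiness probes are suspended until the startup probe succeeds. A sketch (port name and thresholds illustrative); here the container gets up to failureThreshold × periodSeconds = 300 seconds to start:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 1
  periodSeconds: 10
startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30
  periodSeconds: 10
```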
47
Kubeadm for Windows
48
● Create a K8s node on Windows
● Run Windows-based containers
○ Windows containers require a Windows Server 2019
license (or later)
● Control plane still runs on Linux
Kubeadm for Windows
49
New Endpoint API
50
● Services with > 100 endpoints -> EndpointSlices
● EndpointSliceProxying feature gate (alpha)
● Will eventually replace the v1 Endpoints API
New Endpoint API
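An EndpointSlice groups up to a configurable number of endpoints per object. A sketch of one slice, with illustrative names and addresses:

```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    # ties the slice back to its Service
    kubernetes.io/service-name: example
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"
  conditions:
    ready: true
  topology:
    kubernetes.io/hostname: node-1
```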
51
Node Topology Manager
52
● Performance/latency sensitive operations
● CPU vs Device manager
● Hint providers
● Four supported policies (--topology-manager-policy)
○ none (default)
○ best-effort
○ restricted
○ single-numa-node
● Only none takes pod specs into account
Node Topology Manager
53
● Memory requests < limits
● Burstable QoS class
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
54
● requests == limits
● Guaranteed QoS class
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
        cpu: "2"
        example.com/device: "1"
      requests:
        memory: "200Mi"
        cpu: "2"
        example.com/device: "1"
Node Topology Manager
55
● Device requests only (no CPU or memory)
● BestEffort QoS class
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        example.com/deviceA: "1"
        example.com/deviceB: "1"
      requests:
        example.com/deviceA: "1"
        example.com/deviceB: "1"
Node Topology Manager
56
● Limitations for Non-Uniform Memory Access
● Max NUMA nodes = 8.
○ state explosion
● Scheduler is not topology-aware
○ Can still fail
● Only Device Manager and the CPU Manager
support Topology Manager's HintProvider interface.
○ Memory and Hugepages not considered
Node Topology Manager
57
IPv6 support
58
● Feature parity with IPv4
● kubeadm uses the default gateway's network interface
○ for the API server advertise address
○ Specify kubeadm init
--apiserver-advertise-address=<ip-address> to change
○ For example --apiserver-advertise-address=fd00::101
IPv6 support added
59
Pod overhead: account resources
tied to the pod sandbox, but not
specific containers
60
kind: RuntimeClass
apiVersion: node.k8s.io/v1beta1
metadata:
  name: kata-fc
handler: kata-fc
overhead:
  podFixed:
    memory: "120Mi"
    cpu: "250m"
...
Pod Overhead: account resources tied to the pod
sandbox, but not specific containers
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  runtimeClassName: kata-fc
  containers:
  - name: busybox-ctr
    image: busybox
    stdin: true
    tty: true
    resources:
      limits:
        cpu: 500m
        memory: 100Mi
  - name: nginx-ctr
    image: nginx
    resources:
      limits:
        cpu: 1500m
        memory: 100Mi
61
Adding AppProtocol to
Services and Endpoints
62
● AppProtocol
● Optional field
○ Endpoint
○ EndpointSlice
○ Service
■ UDP, TCP, SCTP
Adding AppProtocol to Services and Endpoints
63
● Specific protocol
○ postgresql://
○ https://
○ mysql://
Adding AppProtocol to Services and Endpoints
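On a Service, appProtocol is a free-form string alongside the L4 protocol. A sketch (service name, selector, and port illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db
spec:
  selector:
    app: my-db
  ports:
  - port: 5432
    protocol: TCP       # L4 protocol: TCP, UDP, or SCTP
    appProtocol: postgresql  # application-level protocol hint
```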
Alpha
Disabled by default, may change in the future
65
Skip Volume ownership
change
66
● By default, ownership and permissions are changed
recursively to match the pod's fsGroup
● Can be slow for large volumes
● fsGroupChangePolicy can skip the recursive change
when the volume root already matches
● No effect on ephemeral volumes
○ secret
○ configMap
○ ephemeral
Skip Volume Ownership Change
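Sketched as a pod securityContext (pod, fsGroup value, and claim name illustrative); OnRootMismatch skips the recursive chown when the volume's root already has the right group:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: skip-chown-demo
spec:
  securityContext:
    fsGroup: 1000
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
```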
67
Configurable scale velocity
for HPA
68
● Horizontal Pod Autoscaler
● Uses the highest recommendation within the
stabilization window
● Configure with
○ --horizontal-pod-autoscaler-downscale-stabilization
○ behavior.scaleDown.stabilizationWindowSeconds
● Specify periodSeconds
○ Length of time for which condition must be true
Configurable scale velocity for HPA
69
● Default behavior
Configurable scale velocity for HPA
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300
    policies:
    - type: Percent
      value: 100
      periodSeconds: 15
  scaleUp:
    stabilizationWindowSeconds: 0
    policies:
    - type: Percent
      value: 100
      periodSeconds: 15
    - type: Pods
      value: 4
      periodSeconds: 15
    selectPolicy: Max
70
● Limit scale down:
behavior:
  scaleDown:
    policies:
    - type: Percent
      value: 10
      periodSeconds: 60
    - type: Pods
      value: 5
      periodSeconds: 60
    selectPolicy: Max
Configurable scale velocity for HPA
71
behavior:
  scaleDown:
    policies:
    - type: Pods
      value: 4
      periodSeconds: 60
    - type: Percent
      value: 10
      periodSeconds: 60
Configurable scale velocity for HPA
72
Provide OIDC discovery
for service account
token issuer
73
● Enables federation of clusters
● Identity provider --> relying parties
● Must be OIDC compliant
● system:service-account-issuer-discovery
ClusterRole
○ No role bindings included
○ Admin binds to system:authenticated or
system:unauthenticated
Provide OIDC discovery for service account token
issuer
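Binding the discovery role to all authenticated users, as the last bullet describes, can be sketched like this (the binding name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: service-account-issuer-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:service-account-issuer-discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
```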
74
Immutable Secrets and
Configuration
75
● Can be set individually
● Prevents changes
● Can't be un-set
Immutable Secrets and ConfigMaps
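The flag is a single top-level field on the object. A sketch (name and data illustrative); once set, the data can no longer be updated, only the whole object deleted and recreated:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
immutable: true   # once true, data cannot be changed or the field unset
data:
  LOG_LEVEL: info
```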
76
Kubectl debug
77
● For containers that lack a shell or other
debugging tools (e.g. distroless images)
● Provides debugging container
kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
Defaulting debug container name to debugger-8xzrl.
If you don't see a command prompt, try pressing enter.
/ #
Kubectl debug
78
Run multiple scheduling
profiles
79
● Policies vs Profiles
● Policies
○ Filter (PodFitsHostPorts, CheckNodeMemoryPressure)
○ Scoring (SelectorSpreadPriority,
ImageLocalityPriority)
Run multiple Scheduling Profiles
80
● Profiles
○ Uses plugins
○ Can be enabled, disabled, reordered
○ Extension points (e.g. QueueSort, Permit, Unreserve)
■ Single QueueSort plugin; only one pending-pods queue
○ For example: NodePreferAvoidPods, VolumeRestrictions,
PrioritySort
● Request specific profile using pod's
.spec.schedulerName field
Run multiple Scheduling Profiles
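Two profiles in one scheduler, plus a pod opting into one, can be sketched as follows (profile and pod names illustrative); the second profile disables all scoring plugins:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
- schedulerName: no-scoring-scheduler
  plugins:
    score:
      disabled:
      - name: '*'   # turn off all scoring plugins for this profile
---
apiVersion: v1
kind: Pod
metadata:
  name: quick-pod
spec:
  schedulerName: no-scoring-scheduler
  containers:
  - name: app
    image: nginx
```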
81
Generic data populators
82
● Populate a new PVC via a CRD
● Must have a controller installed
● Same namespace
● Dynamic provisioners must support that resource
● Write your own
○ Create the PV
○ Bind it to the PVC
Generic data populators
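Sketched as a PVC whose dataSource points at a custom resource; the apiGroup, kind, and names here are entirely hypothetical, standing in for whatever CRD your populator controller watches:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: my-csi-plugin
  dataSource:
    apiGroup: example.com   # hypothetical CRD group
    kind: Backup            # hypothetical populator resource
    name: nightly-backup    # must be in the same namespace as the PVC
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```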
83
Extending the HugePage
feature
84
● Not supported on Windows
● Must be pre-allocated
● requests == limits
● Isolated at the container level
● Each container gets its own limit on its cgroup
sandbox, as per the spec
● Control via ResourceQuota (like cpu or memory using
hugepages-<size> token)
● Multiple sizes
Extending the HugePage feature
85
apiVersion: v1
kind: Pod
metadata:
  name: huge-pages-example
spec:
  volumes:
  - name: hugepage-2mi
    emptyDir:
      medium: HugePages-2Mi
  - name: hugepage-1gi
    emptyDir:
      medium: HugePages-1Gi
...
Extending the HugePage feature
  containers:
  - name: example
    image: fedora:latest
    command:
    - sleep
    - inf
    volumeMounts:
    - mountPath: /hugepages-2Mi
      name: hugepage-2mi
    - mountPath: /hugepages-1Gi
      name: hugepage-1gi
    resources:
      limits:
        hugepages-2Mi: 100Mi
        hugepages-1Gi: 2Gi
        memory: 100Mi
      requests:
        memory: 100Mi
86
Training Promotion
Special Offer
87
Mirantis Training - Kubernetes
training.mirantis.com
Webinar attendees! Get 15% off Mirantis training!
Use coupon code: WEBMIR2020
Kubernetes & Docker
Bootcamp I (KD100)
Learn Docker and Kubernetes to deploy, run, and manage
containerized applications
2 days
Kubernetes & Docker
Bootcamp II (KD200)
Advanced training for Kubernetes professionals,
preparation for CKA exam
3 days
Accelerated Kubernetes &
Docker Bootcamp (KD250)
Most popular course! A combination of KD100 & KD200 at
an accelerated pace, preps for the CKA exam
4 days
Kubernetes in Production
Bootcamp (KP300)
In Development Advanced training focused on
production grade architecture, operational best practices,
and cluster management.
2 days
88
Thank You!
Q&A
Download the slides: bit.ly/k8s-1-18_slides
We’ll send you the slides & recording
later this week.
More Related Content

PDF
Securing Your Containers is Not Enough: How to Encrypt Container Data
PDF
How to Build a Basic Edge Cloud
PDF
Your Application Deserves Better than Kubernetes Ingress: Istio vs. Kubernetes
PDF
Using Kubernetes to make cellular data plans cheaper for 50M users
PDF
Comparison of Current Service Mesh Architectures
PPTX
Secure Application Development in the Age of Continuous Delivery
PPTX
The How and Why of Container Vulnerability Management
PDF
Shifting security left simplifying security for k8s open shift environments
Securing Your Containers is Not Enough: How to Encrypt Container Data
How to Build a Basic Edge Cloud
Your Application Deserves Better than Kubernetes Ingress: Istio vs. Kubernetes
Using Kubernetes to make cellular data plans cheaper for 50M users
Comparison of Current Service Mesh Architectures
Secure Application Development in the Age of Continuous Delivery
The How and Why of Container Vulnerability Management
Shifting security left simplifying security for k8s open shift environments

What's hot (20)

PDF
Security Patterns for Microservice Architectures - London Java Community 2020
PDF
The Future of Security and Productivity in Our Newly Remote World
PPTX
Evaluating container security with ATT&CK Framework
PDF
PKI in DevOps: How to Deploy Certificate Automation within CI/CD
PPTX
Docker and Jenkins [as code]
PDF
Secured APIM-as-a-Service
PDF
Cisco Cloud Networking Workshop
PDF
Barbican 1.0 - Open Source Key Management for OpenStack
PPTX
Using hypervisor and container technology to increase datacenter security pos...
PDF
NGINX DevSecOps Workshop
PDF
Hacking into your containers, and how to stop it!
PDF
Continuous (Non-)Functional Testing of Microservices on K8s
PPTX
Securing Kubernetes Clusters with NGINX Plus Ingress Controller & NAP
PDF
Choose the Right Container Storage for Kubernetes
PPTX
Control Kubernetes Ingress and Egress Together with NGINX
PPTX
Microservices and containers networking: Contiv, an industry leading open sou...
PPTX
Toronto Virtual Meetup #7 - Anypoint VPC, VPN and DLB Architecture
PPTX
Outpost24 webinar mastering container security in modern day dev ops
PDF
Automated Virtualized Testing (AVT) with Docker, Kubernetes, WireMock and Gat...
PPTX
Bandit and Gosec - Security Linters
Security Patterns for Microservice Architectures - London Java Community 2020
The Future of Security and Productivity in Our Newly Remote World
Evaluating container security with ATT&CK Framework
PKI in DevOps: How to Deploy Certificate Automation within CI/CD
Docker and Jenkins [as code]
Secured APIM-as-a-Service
Cisco Cloud Networking Workshop
Barbican 1.0 - Open Source Key Management for OpenStack
Using hypervisor and container technology to increase datacenter security pos...
NGINX DevSecOps Workshop
Hacking into your containers, and how to stop it!
Continuous (Non-)Functional Testing of Microservices on K8s
Securing Kubernetes Clusters with NGINX Plus Ingress Controller & NAP
Choose the Right Container Storage for Kubernetes
Control Kubernetes Ingress and Egress Together with NGINX
Microservices and containers networking: Contiv, an industry leading open sou...
Toronto Virtual Meetup #7 - Anypoint VPC, VPN and DLB Architecture
Outpost24 webinar mastering container security in modern day dev ops
Automated Virtualized Testing (AVT) with Docker, Kubernetes, WireMock and Gat...
Bandit and Gosec - Security Linters
Ad

Similar to What's New in Kubernetes 1.18 Webinar Slides (20)

PDF
Kubernetes fingerprinting with Prometheus.pdf
PDF
Monitoring hybrid container environments
PDF
Traefik on Kubernetes at MySocialApp (CNCF Paris Meetup)
PDF
5 Kubernetes Security Tools You Should Use
PPTX
Docker Enterprise Workshop - Technical
PPTX
GCP - Continuous Integration and Delivery into Kubernetes with GitHub, Travis...
PPTX
Kubernetes at (Organizational) Scale
PDF
Using Docker Platform to Provide Services
PPTX
Caching in Windows Azure
PDF
Free GitOps Workshop (with Intro to Kubernetes & GitOps)
PDF
Digital Forensics and Incident Response in The Cloud Part 3
PDF
Is It Safe? Security Hardening for Databases Using Kubernetes Operators
PDF
Heroku to Kubernetes & Gihub to Gitlab success story
PPTX
Headless browser: puppeteer and git client : GitKraken
PDF
Kubernetes One-Click Deployment: Hands-on Workshop (Mainz)
PDF
Securité des container
PDF
Security in a containerized world - Jessie Frazelle
PDF
Cloud-native .NET Microservices mit Kubernetes
PPTX
Architecting .NET solutions in a Docker ecosystem - .NET Fest Kyiv 2019
PPTX
Protecting data with CSI Volume Snapshots on Kubernetes
Kubernetes fingerprinting with Prometheus.pdf
Monitoring hybrid container environments
Traefik on Kubernetes at MySocialApp (CNCF Paris Meetup)
5 Kubernetes Security Tools You Should Use
Docker Enterprise Workshop - Technical
GCP - Continuous Integration and Delivery into Kubernetes with GitHub, Travis...
Kubernetes at (Organizational) Scale
Using Docker Platform to Provide Services
Caching in Windows Azure
Free GitOps Workshop (with Intro to Kubernetes & GitOps)
Digital Forensics and Incident Response in The Cloud Part 3
Is It Safe? Security Hardening for Databases Using Kubernetes Operators
Heroku to Kubernetes & Gihub to Gitlab success story
Headless browser: puppeteer and git client : GitKraken
Kubernetes One-Click Deployment: Hands-on Workshop (Mainz)
Securité des container
Security in a containerized world - Jessie Frazelle
Cloud-native .NET Microservices mit Kubernetes
Architecting .NET solutions in a Docker ecosystem - .NET Fest Kyiv 2019
Protecting data with CSI Volume Snapshots on Kubernetes
Ad

More from Mirantis (20)

PDF
How to Accelerate Your Application Delivery Process on Top of Kubernetes Usin...
PDF
Kubernetes Security Workshop
PDF
Demystifying Cloud Security Compliance
PDF
Mirantis life
PDF
OpenStack and the IoT: Where we are, where we're going, what we need to get t...
PDF
Boris Renski: OpenStack Summit Keynote Austin 2016
PPTX
Digital Disciplines: Attaining Market Leadership through the Cloud
PPTX
Decomposing Lithium's Monolith with Kubernetes and OpenStack
PPTX
OpenStack: Changing the Face of Service Delivery
PPTX
Accelerating the Next 10,000 Clouds
PPTX
Containers for the Enterprise: It's Not That Simple
PPTX
Protecting Yourself from the Container Shakeout
PPTX
It's Not the Technology, It's You
PDF
OpenStack as the Platform for Innovation
PPTX
Moving AWS workloads to OpenStack
PPTX
Your 1st Ceph cluster
PPTX
App catalog (Vancouver)
PDF
Tales From The Ship: Navigating the OpenStack Community Seas
PDF
OpenStack Overview and History
PDF
OpenStack Architecture
How to Accelerate Your Application Delivery Process on Top of Kubernetes Usin...
Kubernetes Security Workshop
Demystifying Cloud Security Compliance
Mirantis life
OpenStack and the IoT: Where we are, where we're going, what we need to get t...
Boris Renski: OpenStack Summit Keynote Austin 2016
Digital Disciplines: Attaining Market Leadership through the Cloud
Decomposing Lithium's Monolith with Kubernetes and OpenStack
OpenStack: Changing the Face of Service Delivery
Accelerating the Next 10,000 Clouds
Containers for the Enterprise: It's Not That Simple
Protecting Yourself from the Container Shakeout
It's Not the Technology, It's You
OpenStack as the Platform for Innovation
Moving AWS workloads to OpenStack
Your 1st Ceph cluster
App catalog (Vancouver)
Tales From The Ship: Navigating the OpenStack Community Seas
OpenStack Overview and History
OpenStack Architecture

Recently uploaded (20)

PDF
Spectral efficient network and resource selection model in 5G networks
PPTX
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
PDF
NewMind AI Weekly Chronicles - August'25 Week I
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
cuic standard and advanced reporting.pdf
PDF
Review of recent advances in non-invasive hemoglobin estimation
PDF
NewMind AI Monthly Chronicles - July 2025
PDF
Machine learning based COVID-19 study performance prediction
PPTX
A Presentation on Artificial Intelligence
PDF
Bridging biosciences and deep learning for revolutionary discoveries: a compr...
PPTX
Cloud computing and distributed systems.
PDF
Electronic commerce courselecture one. Pdf
PDF
Advanced methodologies resolving dimensionality complications for autism neur...
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
PDF
Dropbox Q2 2025 Financial Results & Investor Presentation
PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PDF
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
Spectral efficient network and resource selection model in 5G networks
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
NewMind AI Weekly Chronicles - August'25 Week I
Building Integrated photovoltaic BIPV_UPV.pdf
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
cuic standard and advanced reporting.pdf
Review of recent advances in non-invasive hemoglobin estimation
NewMind AI Monthly Chronicles - July 2025
Machine learning based COVID-19 study performance prediction
A Presentation on Artificial Intelligence
Bridging biosciences and deep learning for revolutionary discoveries: a compr...
Cloud computing and distributed systems.
Electronic commerce courselecture one. Pdf
Advanced methodologies resolving dimensionality complications for autism neur...
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
Dropbox Q2 2025 Financial Results & Investor Presentation
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
The Rise and Fall of 3GPP – Time for a Sabbatical?
Diabetes mellitus diagnosis method based random forest with bat algorithm
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...

What's New in Kubernetes 1.18 Webinar Slides

  • 1. Copyright © 2020 Mirantis, Inc. All rights reserved What's New in Kubernetes 1.18 WEBINAR | March 17, 2020
  • 2. 2 The content contained herein is for informational purposes only, may not be referenced or added to any contract, and should not be relied upon to make purchasing decisions. It is not a commitment, promise, or legal obligation to provide any features, functionality, capabilities, code, etc. or to provide anything within any schedule, date, time, etc. All Mirantis product and service decisions remain at Mirantis sole and exclusive discretion. Plus, I can't guarantee what features actually make it into Kubernetes 1.18 when it's released next week. Disclaimer
  • 3. 3 Featured Presenter Nick Chase Head of Technical Content at Mirantis Nick Chase is Head of Technical Content for Mirantis and a former member of the Kubernetes release team. He is a former software developer and author or co-author of more than a dozen books on various programming topics, including the OpenStack Architecture Guide, Understanding OPNFV, and Machine Learning for Mere Mortals. Reach him on Twitter @NickChase.
  • 4. 4 A Little Housekeeping ● Please submit questions in the Questions panel. ● We’ll provide a link where you can download the slides at the end of the webinar.
  • 5. 5 ● Generally Available ● Beta ● Alpha ● Q&A Agenda
  • 6. Copyright © 2020 Mirantis, Inc. All rights reserved Generally available Production ready and enabled by default
  • 8. 8 ● Windows worker nodes ● Controllers still run on Linux RunAsUserName for Windows
  • 9. 9 apiVersion: v1 kind: Pod metadata: name: username-demo-pod spec: securityContext: windowsOptions: runAsUserName: "ContainerUser" containers: - name: username-demo image: mcr.microsoft.com/windows/servercore:ltsc2019 command: ["ping", "-t", "localhost"] nodeSelector: kubernetes.io/os: windows RunAsUserName for Windows
  • 10. 10 kubectl apply -f run-as-username-pod.yaml kubectl exec -it username-demo-pod -- powershell echo $env:USERNAME ContainerUser RunAsUserName for Windows
  • 11. 11 ● Limitations ○ Must be valid (non-empty) user (DOMAINUSER) ○ DOMAIN ■ Optional ■ NetBios name or DNS name ○ USER ■ <= 20 characters ■ Can have dots or spaces ■ No control characters ■ Not in / : * ? " < > | RunAsUserName for Windows
  • 12. 12 Support gMSA for Windows workloads
  • 13. 13 ● Group Managed Service Account ○ Password management ○ Single identity for group of servers ● Deploy GMSACredentialSpec CRD ● Install validation webhooks (multiple steps) ● Provision gMSAs in Active Directory Support gMSA for Windows workloads
  • 14. 14 ● Create the GMSACredentialSpec object: apiVersion: windows.k8s.io/v1alpha1 kind: GMSACredentialSpec metadata: name: gmsa-WebApp1 #This is an arbitrary name but it will be used as a reference credspec: ActiveDirectoryConfig: GroupManagedServiceAccounts: - Name: WebApp1 #Username of the GMSA account Scope: CONTOSO #NETBIOS Domain Name - Name: WebApp1 #Username of the GMSA account Scope: contoso.com #DNS Domain Name CmsPlugins: - ActiveDirectory DomainJoinConfig: DnsName: contoso.com #DNS Domain Name DnsTreeName: contoso.com #DNS Domain Name Root Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a #GUID MachineAccountName: WebApp1 #Username of the GMSA account NetBiosName: CONTOSO #NETBIOS Domain Name Sid: S-1-5-21-2126449477-2524075714-3094792973 #SID of GMSA Support gMSA for Windows workloads
  • 15. 15 ● Configure cluster role to enable RBAC on specific gMSA credential specs apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: webapp1-role rules: - apiGroups: ["windows.k8s.io"] resources: ["gmsacredentialspecs"] verbs: ["use"] resourceNames: ["gmsa-WebApp1"] Support gMSA for Windows workloads
  • 16. 16 ● Assign role to service accounts to use specific gMSA credentialspecs apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: allow-default-svc-account-read-on-gmsa-WebApp1 namespace: default subjects: - kind: ServiceAccount name: default namespace: default roleRef: kind: ClusterRole name: webapp1-role apiGroup: rbac.authorization.k8s.io Support gMSA for Windows workloads
17
● Configure the Pod to use the gMSA credential spec

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: with-creds
  name: with-creds
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: with-creds
  template:
    metadata:
      labels:
        run: with-creds
    spec:
      securityContext:
        windowsOptions:
          gmsaCredentialSpecName: gmsa-webapp1
      containers:
      - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
        imagePullPolicy: Always
        name: iis
      nodeSelector:
        beta.kubernetes.io/os: windows
Support gMSA for Windows workloads
18
● Configure an individual container to use the gMSA credential spec

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: with-creds
  name: with-creds
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: with-creds
  template:
    metadata:
      labels:
        run: with-creds
    spec:
      containers:
      - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
        imagePullPolicy: Always
        name: iis
        securityContext:
          windowsOptions:
            gmsaCredentialSpecName: gmsa-Webapp1
      nodeSelector:
        beta.kubernetes.io/os: windows
Support gMSA for Windows workloads
19
Raw block device using persistent volume source
20
● Volume plugins supporting raw block devices
  ○ AWSElasticBlockStore
  ○ AzureDisk
  ○ CSI
  ○ FC (Fibre Channel)
  ○ GCEPersistentDisk
  ○ iSCSI
  ○ Local volume
  ○ OpenStack Cinder
  ○ RBD (Ceph Block Device)
  ○ VsphereVolume
Raw block device using persistent volume source
21
● Persistent Volume using a Raw Block Volume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  persistentVolumeReclaimPolicy: Retain
  fc:
    targetWWNs: ["50060e801049cfd1"]
    lun: 0
    readOnly: false
Raw block device using persistent volume source
22
● Persistent Volume Claim requesting a Raw Block Volume

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
Raw block device using persistent volume source
23
● Add to container
  ○ Specify a device path instead of a mount path

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-block-volume
spec:
  containers:
  - name: fc-container
    image: fedora:26
    command: ["/bin/sh", "-c"]
    args: ["tail -f /dev/null"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc
Raw block device using persistent volume source
25
● Use an existing PersistentVolumeClaim as the DataSource for a new PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: my-csi-plugin
  dataSource:
    name: existing-src-pvc-name
    kind: PersistentVolumeClaim
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Cloning a PVC
27
● Shows what kubectl apply would change

kubectl diff -f some-resources.yaml

● Specify KUBECTL_EXTERNAL_DIFF to use your favorite diff tool

KUBECTL_EXTERNAL_DIFF=meld kubectl diff -f some-resources.yaml
kubectl diff
30
Pass Pod information in CSI calls
31
● Adds new fields to volume_context for NodePublishVolumeRequest
  ○ csi.storage.k8s.io/pod.name: {pod.Name}
  ○ csi.storage.k8s.io/pod.namespace: {pod.Namespace}
  ○ csi.storage.k8s.io/pod.uid: {pod.UID}
  ○ csi.storage.k8s.io/serviceAccount.name: {pod.Spec.ServiceAccountName}
Pass Pod information in CSI calls
32
● Manually include the CSIDriver object in driver manifests
● Previously required the cluster-driver-registrar sidecar container
  ○ That container created the CSIDriver object automatically
Pass Pod information in CSI calls
33
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: testcsidriver.example.com
spec:
  podInfoOnMount: true
Pass Pod information in CSI calls
35
● Some CSI volume types don't have attach operations:
  ○ NFS
  ○ Secrets
  ○ Ephemeral
Skip attach for non-attachable CSI volumes
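Drivers opt out of attach via the CSIDriver object's attachRequired field. A minimal sketch, assuming a hypothetical driver name, based on the CSIDriver API as of 1.18:

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: nfs.example.com   # hypothetical driver name
spec:
  # Kubernetes skips the attach/detach (VolumeAttachment) step entirely
  attachRequired: false
```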
36
Beta
Enabled by default, but not necessarily ready for production environments. Not likely to change.
38
● Create the request
● Create the object and send it to K8s
● Approve the request
  ○ Manual or automatic
● Associated with a private key
  ○ Can be held by a pod
    ■ Identity
    ■ Authorization
● Be careful who can approve requests!
CertificateSigningRequest API
39
● Must be set up to serve the certificates API
● Default signer implementation in the controller manager
  ○ Pass the CA's keypair via --cluster-signing-cert-file and --cluster-signing-key-file to the controller manager
CertificateSigningRequest API
40
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "192.0.2.24",
    "10.0.34.2"
  ],
  "CN": "my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF

2017/03/21 06:48:17 [INFO] generate received request
2017/03/21 06:48:17 [INFO] received CSR
2017/03/21 06:48:17 [INFO] generating key: ecdsa-256
2017/03/21 06:48:17 [INFO] encoded CSR
CertificateSigningRequest API
41
● Generates 2 files
  ○ Actual request (server.csr)
  ○ PEM-encoded key for the final certificate (server-key.pem)

kubectl get csr
NAME                  AGE   REQUESTOR              CONDITION
my-svc.my-namespace   10m   yourname@example.com   Pending

kubectl certificate approve my-svc.my-namespace

● Download to server.crt

kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' | base64 --decode > server.crt

● Use server.crt and server-key.pem as the keypair for the HTTPS server
CertificateSigningRequest API
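The "create the object and send to K8s" step between genkey and approval is a CertificateSigningRequest manifest whose request field carries the base64-encoded server.csr. A sketch using the v1beta1 API current in 1.18 (the metadata name matches the CSR shown above; usages are typical for a serving certificate):

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  # base64 of server.csr, e.g. $(base64 server.csr | tr -d '\n')
  request: <base64-encoded server.csr>
  usages:
  - digital signature
  - key encipherment
  - server auth
```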
42
Even pod spreading across failure domains
43
● Affinity = infinite
● Antiaffinity = 1

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  topologySpreadConstraints:
  - maxSkew: <integer>
    topologyKey: <string>
    whenUnsatisfiable: <string>
    labelSelector: <object>
Even pod spreading across failure domains
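Filling in the placeholders above, a concrete constraint might spread pods with a given label evenly across zones; a sketch with an assumed app label and the standard zone topology key:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    app: myapp          # assumed label, matched by the selector below
spec:
  topologySpreadConstraints:
  - maxSkew: 1          # zones may differ by at most 1 matching pod
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: myapp
  containers:
  - name: app
    image: nginx
```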
44
● Default policy (alpha)

apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
- pluginConfig:
  - name: PodTopologySpread
    args:
      defaultConstraints:
      - maxSkew: 1
        topologyKey: failure-domain.beta.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
Even pod spreading across failure domains
46
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Add pod-startup liveness-probe holdoff for slow-starting pods
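The holdoff itself comes from the startupProbe field: the kubelet does not run the liveness probe until the startup probe succeeds. A container-level sketch, with an assumed /healthz endpoint and port, giving a slow app up to 30 x 10 = 300 seconds to start:

```yaml
    livenessProbe:
      httpGet:
        path: /healthz   # assumed health endpoint
        port: 8080       # assumed port
      failureThreshold: 1
      periodSeconds: 10
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30   # liveness checks held off until this succeeds
      periodSeconds: 10
```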
48
● Create a K8s node on Windows
● Run Windows-based containers
  ○ For Windows containers, get a Windows Server 2019 license (or higher)
● Control plane still runs on Linux
Kubeadm for Windows
50
● Services with > 100 endpoints -> EndpointSlices
● EndpointSliceProxying feature gate (alpha)
● Will replace the v1 Endpoints API
New Endpoint API
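An EndpointSlice holds a bounded chunk of a Service's endpoints; a sketch based on the discovery.k8s.io/v1beta1 API in 1.18, with illustrative names and addresses:

```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    # ties the slice back to its Service
    kubernetes.io/service-name: example
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"          # illustrative pod IP
  conditions:
    ready: true
  hostname: pod-1
  topology:
    kubernetes.io/hostname: node-1
    topology.kubernetes.io/zone: us-west2-a
```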
52
● Performance/latency sensitive operations
● CPU vs Device manager
● Hint providers
● Four supported policies (--topology-manager-policy)
  ○ none (default)
  ○ best-effort
  ○ restricted
  ○ single-numa-node
● All policies except none take pod specs into account
Node Topology Manager
53
● No requests or limits
● BestEffort QoS class

spec:
  containers:
  - name: nginx
    image: nginx
Node Topology Manager
54
● requests < limits
● Burstable QoS class

spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
Node Topology Manager
55
● Device requests == limits, but no CPU/memory requests
● BestEffort QoS class (QoS considers only CPU and memory)

spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      limits:
        example.com/deviceA: "1"
        example.com/deviceB: "1"
      requests:
        example.com/deviceA: "1"
        example.com/deviceB: "1"
Node Topology Manager
56
● Limitations for Non-Uniform Memory Access (NUMA)
● Max NUMA nodes = 8
  ○ state explosion beyond that
● Scheduler is not topology-aware
  ○ Pods can still fail on the node
● Only the Device Manager and the CPU Manager support Topology Manager's HintProvider interface
  ○ Memory and hugepages not considered
Node Topology Manager
58
● Feature parity with IPv4
● kubeadm uses the default gateway's network interface
  ○ as the advertise address for the API server
  ○ Specify kubeadm init --apiserver-advertise-address=<ip-address> to change
  ○ For example, --apiserver-advertise-address=fd00::101
IPv6 support added
59
Pod overhead: account resources tied to the pod sandbox, but not specific containers
60
kind: RuntimeClass
apiVersion: node.k8s.io/v1beta1
metadata:
  name: kata-fc
handler: kata-fc
overhead:
  podFixed:
    memory: "120Mi"
    cpu: "250m"
...

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  runtimeClassName: kata-fc
  containers:
  - name: busybox-ctr
    image: busybox
    stdin: true
    tty: true
    resources:
      limits:
        cpu: 500m
        memory: 100Mi
  - name: nginx-ctr
    image: nginx
    resources:
      limits:
        cpu: 1500m
        memory: 100Mi
Pod Overhead: account resources tied to the pod sandbox, but not specific containers
62
● AppProtocol
● Optional field on
  ○ Endpoints
  ○ EndpointSlice
  ○ Service
    ■ UDP, TCP, SCTP
Adding AppProtocol to Services and Endpoints
63
● Specific protocol
  ○ postgresql://
  ○ https://
  ○ mysql://
Adding AppProtocol to Services and Endpoints
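On a Service, the field sits on each port entry; a sketch with illustrative names and a PostgreSQL protocol hint:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db            # illustrative name
spec:
  selector:
    app: my-db           # illustrative label
  ports:
  - name: db
    port: 5432
    protocol: TCP        # L4 protocol, as before
    appProtocol: postgresql  # new: application-level protocol hint
```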
64
Alpha
Disabled by default, may change in the future
66
● Volume ownership is changed to match the securityContext by default
● For large volumes this can be slow
● fsGroupChangePolicy
● No effect on ephemeral volumes
  ○ secret
  ○ configMap
  ○ ephemeral
Skip Volume Ownership Change
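The policy is set in the pod's securityContext; OnRootMismatch skips the recursive chown/chmod when the volume root already has the right ownership. A sketch with assumed names, behind the ConfigurableFSGroupPolicy feature gate in 1.18:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo          # illustrative name
spec:
  securityContext:
    fsGroup: 1000
    fsGroupChangePolicy: "OnRootMismatch"  # skip if root ownership already matches
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc       # assumed existing PVC
```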
68
● Horizontal Pod Autoscaler
● Uses the highest recommendation in the window
● Configure with
  ○ --horizontal-pod-autoscaler-downscale-stabilization
  ○ behavior.scaleDown.stabilizationWindowSeconds
● Specify periodSeconds
  ○ Length of time for which the condition must be true
Configurable scale velocity for HPA
69
● Default behavior

behavior:
  scaleDown:
    stabilizationWindowSeconds: 300
    policies:
    - type: Percent
      value: 100
      periodSeconds: 15
  scaleUp:
    stabilizationWindowSeconds: 0
    policies:
    - type: Percent
      value: 100
      periodSeconds: 15
    - type: Pods
      value: 4
      periodSeconds: 15
    selectPolicy: Max
Configurable scale velocity for HPA
70
● Limit scale down:

behavior:
  scaleDown:
    policies:
    - type: Percent
      value: 10
      periodSeconds: 60
    - type: Pods
      value: 5
      periodSeconds: 60
    selectPolicy: Max
Configurable scale velocity for HPA
71
behavior:
  scaleDown:
    policies:
    - type: Pods
      value: 4
      periodSeconds: 60
    - type: Percent
      value: 10
      periodSeconds: 60
Configurable scale velocity for HPA
72
Provide OIDC discovery for service account token issuer
73
● Enables federation of clusters
● Identity provider --> relying parties
● Must be OIDC compliant
● system:service-account-issuer-discovery ClusterRole
  ○ No role bindings included
  ○ Admin binds to system:authenticated or system:unauthenticated
Provide OIDC discovery for service account token issuer
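Granting discovery access is a standard ClusterRoleBinding; a sketch that binds the built-in role to all authenticated users (the binding name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: service-account-issuer-discovery-authenticated  # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:service-account-issuer-discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated   # or system:unauthenticated, per your trust model
```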
75
● Can be set individually per Secret or ConfigMap
● Prevents changes
● Can't be un-set
Immutable Secrets and ConfigMaps
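The flag is a single top-level field; a sketch with illustrative names (in 1.18 this sits behind the ImmutableEphemeralVolumes feature gate):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config       # illustrative name
data:
  LOG_LEVEL: info        # illustrative data
immutable: true          # once set, data cannot change and the flag cannot be unset
```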
77
● For containers with no OS / debugging capabilities
● Provides a debugging container

kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
Defaulting debug container name to debugger-8xzrl.
If you don't see a command prompt, try pressing enter.
/ #
Kubectl debug
79
● Policies vs Profiles
● Policies
  ○ Filter (PodFitsHostPorts, CheckNodeMemoryPressure)
  ○ Scoring (SelectorSpreadPriority, ImageLocalityPriority)
Run multiple Scheduling Profiles
80
● Profiles
  ○ Use plugins
  ○ Can be enabled, disabled, reordered
  ○ Extension points (i.e. QueueSort, Permit, Un-reserve)
    ■ Single QueueSort plugin; only one pending-pods queue
  ○ For example: NodePreferAvoidPods, VolumeRestrictions, PrioritySort
● Request a specific profile using the pod's .spec.schedulerName field
Run multiple Scheduling Profiles
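Putting the pieces together, one scheduler binary can run several profiles side by side; a sketch with an assumed second profile that disables all scoring plugins:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
- schedulerName: no-scoring-scheduler  # illustrative profile name
  plugins:
    preScore:
      disabled:
      - name: '*'
    score:
      disabled:
      - name: '*'
```

A pod then selects a profile with .spec.schedulerName: no-scoring-scheduler.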
82
● Populate a new PVC via a CRD
● Must have a controller installed
● Same namespace
● Dynamic provisioners must support that resource
● Write your own
  ○ Create the PV
  ○ Bind it to the PVC
Generic data populators
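With the AnyVolumeDataSource alpha gate, a PVC's dataSource may reference a custom resource instead of a PVC or snapshot. A sketch where the Backup kind, its API group, and its controller are all hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  dataSource:
    apiGroup: example.com   # hypothetical CRD group
    kind: Backup            # hypothetical custom resource
    name: nightly-backup    # must exist in the same namespace
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

An installed populator controller watches for such claims, creates the PV, and binds it.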
84
● Not supported on Windows
● Must be pre-allocated
● requests == limits
● Isolated at the container level
  ○ Each container has its own limit on its cgroup sandbox, as per spec
● Control via ResourceQuota (like cpu or memory, using the hugepages-<size> token)
● Multiple sizes
Extending the HugePage feature
85
apiVersion: v1
kind: Pod
metadata:
  name: huge-pages-example
spec:
  volumes:
  - name: hugepage-2mi
    emptyDir:
      medium: HugePages-2Mi
  - name: hugepage-1gi
    emptyDir:
      medium: HugePages-1Gi
  containers:
  - name: example
    image: fedora:latest
    command:
    - sleep
    - inf
    volumeMounts:
    - mountPath: /hugepages-2Mi
      name: hugepage-2mi
    - mountPath: /hugepages-1Gi
      name: hugepage-1gi
    resources:
      limits:
        hugepages-2Mi: 100Mi
        hugepages-1Gi: 2Gi
        memory: 100Mi
      requests:
        memory: 100Mi
Extending the HugePage feature
87
Mirantis Training - Kubernetes
training.mirantis.com
Webinar attendees! Get 15% off Mirantis training! Use coupon code: WEBMIR2020
● Kubernetes & Docker Bootcamp I (KD100): Learn Docker and Kubernetes to deploy, run, and manage containerized applications (2 days)
● Kubernetes & Docker Bootcamp II (KD200): Advanced training for Kubernetes professionals; preparation for the CKA exam (3 days)
● Accelerated Kubernetes & Docker Bootcamp (KD250): Most popular course! A combination of KD100 & KD200 at an accelerated pace; preps for the CKA exam (4 days)
● Kubernetes in Production Bootcamp (KP300, in development): Advanced training focused on production-grade architecture, operational best practices, and cluster management (2 days)
88
Thank You! Q&A
Download the slides: bit.ly/k8s-1-18_slides
We’ll send you the slides & recording later this week.