Thomas Poetter, Compris Technologies AG
2022
Overview / Table of Contents
Cheat Sheets:
1. Docker
2. Kubernetes, K8s, K3s, Minikube
3. OpenStack (IaaS)
4. OpenShift (PaaS)
Infographics:
1. Overview
2. Microservices
3. AWS
4. Azure
5. GCP
6. Docker
7. Kubernetes
8. In-Memory Data Grids (IMC/IMDGs) and Databases
Clouds and Tools: Cheat Sheets & Infographics
Docker Cheat Sheet 1
https://phoenixnap.com/kb/list-of-docker-commands-cheat-sheet
Docker Cheat Sheet 2
https://dockerlabs.collabnix.com/docker/cheatsheet/
docker create [options] IMAGE
-a, --attach # attach stdout/err
-i, --interactive # attach stdin (interactive)
-t, --tty # pseudo-tty
--name NAME # name your container
-p, --publish 5000:5000 # port map
--expose 5432 # expose a port to linked containers
-P, --publish-all # publish all ports
--link container:alias # linking
-v, --volume `pwd`:/app # mount (absolute paths needed)
-e, --env NAME=hello # env vars
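For example, several of these options can be combined in one command; the image name below is hypothetical:
docker create -it --name web -p 8080:5000 -e NAME=hello -v `pwd`:/app myimage
docker start -a -i web # start the created container and attach to it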
Docker Cheat Sheet 3
https://intellipaat.com/mediaFiles/2019/03/docker-cheat-sheet.jpg
Docker Cheat Sheet 4
https://www.docker.com/wp-content/uploads/2022/03/docker-cheat-sheet.pdf
Docker Cheat Sheet 5: Logical
Docker Container Commands
Create a container (without starting it):
docker create [IMAGE]
Rename an existing container:
docker rename [CONTAINER_NAME] [NEW_CONTAINER_NAME]
Run a command in a new container:
docker run [IMAGE] [COMMAND]
docker run --rm [IMAGE] – removes a container after it exits.
docker run -td [IMAGE] – starts a container and keeps it running.
docker run -it [IMAGE] – starts a container, allocates a pseudo-TTY
connected to the container’s stdin, and creates an interactive bash
shell in the container.
docker run -it --rm [IMAGE] – creates, starts, and runs a command
inside the container. Once it executes the command, the container is
removed.
Delete a container (if it is not running):
docker rm [CONTAINER]
Update the configuration of one or more containers:
docker update [CONTAINER]
Starting and Stopping Containers
Start a container:
docker start [CONTAINER]
Stop a running container:
docker stop [CONTAINER]
Stop a running container and start it up again:
docker restart [CONTAINER]
Pause processes in a running container:
docker pause [CONTAINER]
Unpause processes in a running container:
docker unpause [CONTAINER]
Block until one or more containers stop and then print their exit codes:
docker wait [CONTAINER]
Kill a container by sending a SIGKILL to a running container:
docker kill [CONTAINER]
Attach local standard input, output, and error streams to a
running container:
docker attach [CONTAINER]
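Putting the lifecycle commands together, a typical session might look like this sketch (the container name is hypothetical):
docker create --name demo nginx # create without starting
docker start demo
docker pause demo && docker unpause demo
docker stop demo # SIGTERM, then SIGKILL after the grace period
docker rm demo # remove the stopped container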
Docker Cheat Sheet 6: Logical
Docker Image Commands
Create an image from a Dockerfile:
docker build [URL]
docker build -t [NAME:TAG] . – builds an image from a Dockerfile in the
current directory and tags the image
Pull an image from a registry:
docker pull [IMAGE]
Push an image to a registry:
docker push [IMAGE]
Create an image from a tarball:
docker import [URL/FILE]
Create an image from a container:
docker commit [CONTAINER] [NEW_IMAGE_NAME]
Remove an image:
docker rmi [IMAGE]
Load an image from a tar archive or stdin:
docker load [TAR_FILE/STDIN_FILE]
Save an image to a tar archive, streamed to STDOUT with all parent
layers, tags, and versions:
docker save [IMAGE] > [TAR_FILE]
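As a sketch of how these image commands chain together (image and registry names are hypothetical):
docker build -t myapp:1.0 . # build from the Dockerfile in the current directory
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
docker save myapp:1.0 > myapp.tar # archive the image with its layers and tags
docker load < myapp.tar # restore it on another host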
Docker Commands for Container and Image Information
List running containers:
docker ps
docker ps -a – lists both running containers and ones that have
stopped
List the logs from a running container:
docker logs [CONTAINER]
List low-level information on Docker objects:
docker inspect [OBJECT_NAME/ID]
List real-time events from a container:
docker events [CONTAINER]
Show port (or specific) mapping for a container:
docker port [CONTAINER]
Show running processes in a container:
docker top [CONTAINER]
Show live resource usage statistics of containers:
docker stats [CONTAINER]
Show changes to files (or directories) on a filesystem:
docker diff [CONTAINER]
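For instance, docker inspect accepts a Go template to extract a single field (the container name is hypothetical; the second path assumes a container on the default bridge network):
docker inspect --format '{{.State.Status}}' mycontainer
docker inspect --format '{{.NetworkSettings.IPAddress}}' mycontainer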
Docker Cheat Sheet 7: Logical
Networks
List networks:
docker network ls
Remove one or more networks:
docker network rm [NETWORK]
Show information on one or more networks:
docker network inspect [NETWORK]
Connect a container to a network:
docker network connect [NETWORK] [CONTAINER]
Disconnect a container from a network:
docker network disconnect [NETWORK] [CONTAINER]
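A minimal sketch of joining two containers on a user-defined network (names are hypothetical; docker network create is the counterpart of docker network rm above):
docker network create appnet
docker run -d --name db --network appnet postgres
docker run -d --name api --network appnet myapi
docker network inspect appnet # both containers should be listed here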
Docker Commands for Container and Image Information
List all images that are locally stored with the docker engine:
docker image ls
Show the history of an image:
docker history [IMAGE]
Docker Cheat Sheet 8: Syntax
-e, --env NAME[="value"]
Set environment variable. If the value is omitted, the value will be
taken from the current environment.
--entrypoint "some/entry/point"
Overwrite the default ENTRYPOINT of the image
-h, --hostname ="<hostname>" Container host name
--add-host =<hostname>:<ip>
Add a custom host-to-IP mapping
--net ="<mode>"
Set the network mode for the container (default: bridge):
• bridge: create a network stack on the default Docker bridge
• none: no networking
• container:<name|id>: reuse another container’s stack
• host: use the Docker host network stack
• <network-name>|<network-id>: connect to a user-defined
network
--group-add =<groups>
Add additional groups to run as
--rm Automatically remove the container when it exits
--restart ="no|on-failure[:<max-retry>]|always|unless-stopped"
Restart policy; default: no
--name "foo" Assign a name to the container
--detach-keys ="<keys>"
Override the key sequence to detach a container. Default:
"ctrl-p ctrl-q"
$ docker create [<opts>] <image> [<command>] [<arg>...] Create a new container, but
don’t run it (instead, print its id). See options for docker run
$ docker start [<opts>] <container> [<container>...] Start one or more containers
-a, --attach Attach container’s STDOUT and STDERR and forward
all signals to the process
-i, --interactive
Attach container’s STDIN
$ docker stop [<opts>] <container> [<container>...] Stop one or more containers by
sending SIGTERM and then SIGKILL after a grace period
-t, --time [=10] Number of seconds to wait before killing the container
Building images
$ docker build [<opts>] <path> | <URL>
Build a new image from the source code at PATH
-f, --file path/to/Dockerfile
Path to the Dockerfile to use. Default: Dockerfile.
--build-arg <varname>=<value>
Name and value of a build argument defined with ARG
Dockerfile instruction
-t "<name>[:<tag>]"
Repository names (and optionally with tags) to be applied to
the resulting image
--label =<label>
Set metadata for an image
-q, --quiet Suppress the output generated by containers
--rm Remove intermediate containers after a successful build
Creating, running and stopping containers
$ docker run [<opts>] <image> [<command>] [<arg>...]
Run a command in a new container
-i, --interactive
Keep STDIN open even if not attached
-t, --tty Allocate a pseudo-TTY
-v, --volume [<host-dir>:]<container-dir>[:<opts>]
Bind mount a volume. Options are comma-separated:
[ro,rw]. By default, rw is used.
--device =<host-dev>:<container-dev>[:<opts>]
Add a host device to the container; e.g. --device="/dev/sda:/dev/xvdc:rwm".
Possible <opts> flags: r: read, w: write, m: mknod
-d, --detach Detached (daemon) mode
--env-file file Read in a line delimited file of environment variables
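Combining the options above into one invocation, as a sketch (image, paths, and env file are hypothetical):
$ docker run -d --name web --rm \
    -p 8080:80 \
    -v `pwd`/site:/usr/share/nginx/html:ro \
    --env-file ./web.env \
    nginx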
Docker Cheat Sheet 9: Syntax
--since ="<timestamp>" Show logs since the given timestamp
-t, --timestamps Show timestamps
--tail ="<n>" Output the specified number of lines at the end of logs
$ docker wait <container> [<container>...]
Block until a container stops, then print its exit code
Saving and loading images and containers
$ docker save [<opts>] <image> [<image>...]
Save one or more images to a tar archive (streamed to
STDOUT by default)
-o, --output =""
Write to a file instead of STDOUT
$ docker load [<opts>]
Load image(s) from a tar archive or STDIN. Restores both images
and tags
-i, --input ="<tar-archive>"
Read from a tar archive file, instead of STDIN. The tarball may be
compressed with gzip, bzip, or xz.
-q, --quiet Suppress the load progress bar
$ docker export [<opts>] <container>
Export the contents of a container’s filesystem as a tar archive
-o, --output ="<file>"
Write to a file instead of STDOUT
$ docker import [<opts>] <file>|<URL>|- [<repository>[:<tag>]] Create an empty
filesystem image and import the contents of the tarball into it, then
optionally tag it.
-c, --change =[]
Apply specified Dockerfile instructions while importing the image;
one of these: CMD, ENTRYPOINT, ENV, EXPOSE,
ONBUILD, USER, VOLUME, WORKDIR
-m, --message ="<msg>"
Set commit message for imported image
$ docker kill [<opts>] <container> [<container>...]
Kill a running container using SIGKILL or a specified signal
-s, --signal [="KILL"]
Signal to send to the container
$ docker pause <container> [<container>...]
Pause all processes within a container
$ docker unpause <container> [<container>...]
Unpause all processes within a container
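For example, to move an image between hosts without a registry (file and image names are hypothetical):
$ docker save -o myapp.tar myapp:1.0 # on the source host
$ docker load -i myapp.tar # on the target host
$ docker export mycontainer -o rootfs.tar # container filesystem only, no history or tags
$ docker import rootfs.tar myapp:flat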
DOCKER CLI QUICK REFERENCE (continued)
Interacting with running containers
$ docker attach [<opts>] <container>
Attach to a running container
--no-stdin Do not attach STDIN (i.e. attach in read-only mode)
--detach-keys ="<keys>"
Override the key sequence to detach a container.
Default: "ctrl-p ctrl-q"
$ docker exec [<opts>] <container> <command> [<arg> ...] Run a process in a
running container
-i, --interactive Keep STDIN open even if not attached
-t, --tty Allocate a pseudo-TTY
-d, --detach Detached (daemon) mode
$ docker top <container> [<ps options>]
Display the running processes within a container. The ps
options are any options you would give to the ps command
$ docker cp [<opts>] <container>:<src path> <host dest path>
$ docker cp [<opts>] <host src path> <container>:<dest path>
Copy files/folders between a container and the local
filesystem. Behaves like Linux command cp -a. It’s possible
to specify - as either the host dest path or host src path, in
which case you can also stream a tar archive.
-L, --follow-link Follow symbol link in source path
$ docker logs [<opts>] <container> Fetch the logs of a container
-f, --follow Follow log output: it combines docker logs and docker attach
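Putting these together against a running container (names and paths are hypothetical):
$ docker exec -it web /bin/sh # open a shell inside the container
$ docker cp web:/var/log/nginx/access.log . # copy a file out to the local filesystem
$ docker logs -f --tail 50 web # follow the last 50 log lines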
Docker Cheat Sheet 10: Syntax
--no-trunc Don’t truncate output
-q, --quiet Only display numeric IDs
-f, --filter ="<filter>" Filter output based on these conditions:
• exited=<int> an exit code of <int>
• label=<key> or label=<key>=<value>
• status=(created|restarting|running|paused|exited|dead)
• name=<string> a container’s name
• id=<ID> a container’s ID
• before=(<container-name>|<container-id>)
• since=(<container-name>|<container-id>)
• ancestor=(<image-name>[:tag]|<image-id>| image@digest)
containers created from an image or a descendant
• volume=(<volume-name>|<mount-point-destination>)
--format ="<template>" Pretty-print containers using a Go template, e.g. {{.ID}}.
Valid placeholders:
• .ID - Container ID
• .Image - Image ID
• .Command - Quoted command
• .CreatedAt - Time when the container was created.
• .RunningFor - Time since the container was started.
• .Ports - Exposed ports.
• .Status - Container status.
• .Size - Container disk size.
• .Names - Container names.
• .Labels - All labels assigned to the container.
• .Label - Value of a specific label for this container. For example
{{.Label "com.docker.swarm.cpu"}}
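For example, filters and the format template can be combined (the label key is hypothetical):
$ docker ps -a --filter "status=exited" --format "{{.ID}}: {{.Names}} ({{.Status}})"
$ docker ps --filter "label=com.example.env=prod" -q # matching container IDs only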
Communicating with a Docker Registry
$ docker login [<opts>] [<server>]
Log in to a Docker Registry on the specified <server>.
If server is omitted, https://registry-1.docker.io is used.
Credentials are stored in ~/.docker/config.json
-u, --username ="<username>"
-p, --password ="<password>"
$ docker logout [<server>]
Log out from a Docker Registry on the specified <server>.
If server is omitted, https://registry-1.docker.io is used.
$ docker push [<registry host>[:<registry port>]/]<name>[:<tag>]
Push an image or a repository to a Registry
$ docker pull [<opts>] [<registry host>[:<registry port>]/]<name>[:<tag>]
Pull an image or a repository from a Registry
-a, --all-tags
Download all tagged images in the repository
Listing images and containers
$ docker images [<opts>] List images
-a, --all Show all images (by default, intermediate image layers aren’t
shown)
--no-trunc Don’t truncate output
-f, --filter ="<filter>" Filter output based on these conditions:
• dangling=true - unused (untagged) images
• label=<key> or label=<key>=<value>
--format ="<template>" Pretty-print images using a Go template, e.g.
{{.ID}}. Valid placeholders:
• .ID - Image ID
• .Repository - Image repository
• .Tag - Image tag
• .Digest - Image digest
• .CreatedSince - Time since the image was created
• .CreatedAt - Time when the image was created
• .Size - Image disk size
$ docker ps [<opts>] List containers
-a, --all Show all containers (including non-running ones)
Docker Cheat Sheet 11: Syntax
Inspecting images and containers
$ docker inspect [<opts>] <container>|<image> [<container>|<image>...]
Return low-level information on a container or image
-f, --format ="<format>"
Format the output using the given Go template. You can see
the available placeholders by looking at the total output
without --format
-s, --size Display total file sizes if the type is container
-t, --type ="<container>|<image>" Return JSON for specified type only
Removing images and containers
$ docker rm [<opts>] <container> [<container>...]
Remove one or more containers from the host
-f, --force Force the removal of a running container (uses SIGKILL)
-l, --link Remove the specified link and not the underlying container
-v, --volume Remove the volumes associated with the container
$ docker rmi [<opts>] <image> [<image>...] Remove one or more images from
the host
-f, --force Force the removal of images of a running container
--no-prune Do not delete untagged parents
Clouds and Tools: Cheat Sheets & Infographics
Dockerfile Cheat Sheet 1: Overview
https://devhints.io/dockerfile
Dockerfile Cheat Sheet 2: Logical
LABEL Adds metadata (a non-executable instruction)
LABEL description="Updating the foo and bar"
LABEL version="0.15"
RUN Execute commands in a new layer on top of the current image and
commit the results.
Runs during 'build' time.
Strongly consider using '&&': RUN apt-get update && apt-get install -y php
USER Sets the username or UID to use when running the image and
commands
USER alvin
VOLUME Creates a mount point (path) to external volumes (on the
native host or other containers)
WORKDIR Sets the working directory for any subsequent RUN, CMD,
ENTRYPOINT, COPY, and ADD commands.
If it’s a relative path, it’s relative to the previous WORKDIR.
WORKDIR /home/alvin
WORKDIR foo # results in "/home/alvin/foo"
# NOTE: less commonly used instructions:
ARG Defines a variable that users can pass at build-time to the builder
using --build-arg
ONBUILD Adds an instruction to be executed later, when the image is
used as the base for another build
STOPSIGNAL Sets the system call signal that will be sent to the container to
exit
Dockerfile commands/arguments
# Comments begin with '#'
ADD Copy new files, directories, or remote file URLs into the
filesystem of the container
CMD Allowed only once; if given multiple times, only the last
one takes effect.
The intended command for the image.
Doesn’t do anything during 'build' time.
COPY Copy files or directories from a source into the filesystem
of the container
COPY readme.txt /home/al
ENTRYPOINT Configures the container to run as an executable; sets
the primary command of the Docker image (see the syntax sheet below).
ENV Set environment variables.
ENV CONF_FILE=application.conf HEAP_SIZE=2G
EXPOSE Tells the container runtime that the container listens on
these network ports at runtime
EXPOSE 5150
EXPOSE 5150 5151
FROM Sets the base image (ubuntu, openjdk:11, alpine, etc.)
MAINTAINER Sets the author field of the generated images
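A minimal Dockerfile tying several of these instructions together might look like the following sketch (base image, file names, and port are hypothetical); it can be written and built from the shell:
cat > Dockerfile <<'EOF'
FROM alpine:3.18
LABEL version="0.1"
ENV GREETING=hello
WORKDIR /app
COPY hello.sh .
EXPOSE 8080
CMD ["sh", "hello.sh"]
EOF
docker build -t hello-demo .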
Dockerfile Cheat Sheet 3: Syntax
ENTRYPOINT ["<executable>", "<param1>", "<param2>"]
Executable form
ENTRYPOINT <command param1 param2 ...>
Run the command in the shell /bin/sh -c
ENV <key> <value> Sets the environment variable <key> to the value <value>. This
value is passed to all future RUN, ENTRYPOINT, and CMD instructions
EXPOSE <port1> <port2> ...
Informs Docker that the container listens on the specified network ports at
runtime. Docker uses this information to interconnect containers using links
and to set up port redirection on the host system
LABEL ... Adds metadata to an image. A label is a key-value pair
LABEL <key>=<value> <key2>=<value2> ...
LABEL <key> <value>
ONBUILD <instruction>
Adds a trigger instruction to an image. The trigger is executed at a later time,
when the image is used as the base for another build. Docker executes the
trigger in the context of the downstream build, as if the trigger existed
immediately after the FROM instruction in the downstream Dockerfile.
RUN ... Executes any commands in a new layer on top of the current image and
commits the results. There are two forms:
RUN <command> Run the command in the shell /bin/sh -c
RUN ["<executable>", "<param1>", "<param2>"]
Executable form. The square brackets are a part of the syntax
STOPSIGNAL Sets the system call signal that will be sent to the container to exit
USER <user>
USER <user>:<group>
Sets the username or UID used for running subsequent commands. <user> can
be either username or UID; <group> can be either group name or GID
VOLUME ["/some/path"]
Creates a mount point with the specified name and marks it as holding
externally-mounted volumes from the native host or from other containers
WORKDIR /path/to/workdir
Sets the working directory for the RUN, CMD, ENTRYPOINT, COPY and ADD
Dockerfile commands that follow. Relative paths are defined relative to the
path of the previous WORKDIR instruction.
Dockerfile commands/arguments
# Comments begin with '#'
FROM <image>
FROM <image>:<tag>
Sets the base image for subsequent instructions. Dockerfile must start
with FROM instruction.
MAINTAINER <name>
Sets the Author field for the generated images
ADD <src> <dest>
ADD ["<src>", ... "<dest>"]
Like COPY, but additionally allows <src> to be a URL, and if <src> is
an archive in a recognized format, it will be unpacked. The best practice
is to prefer COPY
ARG <name>
ARG <name>=<default value>
Defines a variable that users can pass at build-time to the builder with
the docker build command using the
--build-arg <varname>=<value> flag
CMD ... Provides defaults for an executing container. There could
be at most one CMD instruction in a Dockerfile
CMD ["<executable>", "<param1>", "<param2>"] Executable form
CMD ["<param1>", "<param2>"]
Provide default arguments to ENTRYPOINT
CMD <command args ...>
Run the command in the shell /bin/sh -c
COPY <src> <dest>
COPY ["<src>", ... "<dest>"]
Copies new files, directories or remote file URLs to the filesystem of the
container at path <dest>. All new files and directories are created with
mode 0755 and with the uid and gid of 0.
ENTRYPOINT ... Helps you configure a container that can be run as an
executable. The ENTRYPOINT instruction adds an entry command that is not
overwritten when arguments are passed to docker run. This is different from
the behavior of CMD. This allows arguments to be passed to the entrypoint
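As an illustration of the ENTRYPOINT/CMD interplay described above, assume a hypothetical image pinger built with ENTRYPOINT ["ping"] and CMD ["localhost"]; arguments passed to docker run replace only the CMD part:
docker run pinger # runs: ping localhost
docker run pinger example.com # runs: ping example.com
docker run --entrypoint /bin/sh -it pinger # overriding the entrypoint itself requires --entrypoint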
Clouds and Tools: Cheat Sheets & Infographics
Kubernetes Cheat Sheet 1
https://www.upgrad.com/blog/kubernetes-cheat-sheet/
Kubernetes Cheat Sheet 2
https://phoenixnap.com/kb/kubectl-commands-cheat-sheet
Kubernetes Cheat Sheet 3
https://intellipaat.com/blog/tutorial/devops-tutorial/kubernetes-cheat-sheet/
Kubernetes Cheat Sheet 4
Nodes
kubectl get node – List all worker nodes.
kubectl delete node <node_name> – Delete the given node from the cluster.
kubectl top node – Show metrics for a given node.
kubectl describe nodes | grep ALLOCATED -A 5 – Describe all nodes in verbose detail.
kubectl get pods -o wide | grep <node_name> – List all pods running on the given node.
kubectl get no -o wide – List all nodes with more details.
kubectl describe no – Describe the nodes in verbose detail.
kubectl annotate node <node_name> – Add an annotation to the given node.
kubectl uncordon <node_name> – Mark the node as schedulable.
kubectl label node – Add a label to the given node.
Pods
kubectl get po – List the pods in the default namespace.
kubectl describe pod <pod_name> – Show a detailed description of the pod.
kubectl delete pod <pod_name> – Delete the pod with the given name.
kubectl run <pod_name> --image=<image> – Create and run a pod with the given name.
kubectl get pod -n <name_space> – List all the pods in a namespace.
kubectl run <pod_name> --image=<image> -n <name_space> – Create and run a pod with the given name in a namespace.
Namespaces
kubectl create namespace <namespace_name> – Create a namespace with the given name.
kubectl get namespace – List the namespaces in a cluster.
kubectl describe namespace <namespace_name> – Display the detailed state of one or more namespaces.
kubectl delete namespace <namespace_name> – Delete a namespace.
kubectl edit namespace <namespace_name> – Edit and update the definition of a namespace.
https://www.interviewbit.com/kubernetes-cheat-sheet/
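A short workflow combining the commands above (namespace, pod, and image names are hypothetical):
kubectl create namespace demo
kubectl run web --image=nginx -n demo
kubectl get pod -n demo
kubectl describe pod web -n demo
kubectl delete namespace demo # removes the namespace and the pod in it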
Kubernetes Cheat Sheet 5
Deployments
kubectl create deployment <deployment_name> – Create a new deployment.
kubectl get deployment – List one or more deployments.
kubectl describe deployment <deployment_name> – Show the detailed state of one or more deployments.
kubectl delete deployment <deployment_name> – Delete a deployment.
DaemonSets
kubectl get ds – List all DaemonSets.
kubectl get ds --all-namespaces – List DaemonSets across all namespaces.
kubectl describe ds <daemonset_name> -n <namespace_name> – Show detailed information for a DaemonSet in a namespace.
Events
kubectl get events – List the recent events for all resources in the system.
kubectl get events --field-selector involvedObject.kind!=Pod – List all events except pod events.
kubectl get events --field-selector type!=Normal – Filter Normal events out of the list.
ReplicaSets
kubectl get replicasets – List the ReplicaSets.
kubectl describe replicasets <replicaset_name> – Show the detailed state of one or more ReplicaSets.
kubectl scale --replicas=<x> replicaset <replicaset_name> – Scale a ReplicaSet.
Service Accounts
kubectl get serviceaccounts – List service accounts.
kubectl describe serviceaccounts – Show the detailed state of one or more service accounts.
kubectl replace serviceaccounts – Replace a service account.
kubectl delete serviceaccounts <name> – Delete a service account.
Logs
kubectl logs <pod_name> – Display the logs for the pod with the given name.
kubectl logs --since=1h <pod_name> – Display the pod's logs from the last hour.
kubectl logs --tail=20 <pod_name> – Display the most recent 20 lines of logs.
kubectl logs -c <container_name> <pod_name> – Display the logs for a container in a pod with the given names.
kubectl logs <pod_name> > pod.log – Save the logs into a file named pod.log.
https://www.interviewbit.com/kubernetes-cheat-sheet/
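For example, a Deployment can be created, scaled, and inspected with the commands above together with kubectl scale (names are hypothetical):
kubectl create deployment web --image=nginx
kubectl get deployment
kubectl scale deployment web --replicas=3
kubectl get replicasets # the Deployment's ReplicaSet should now want 3 pods
kubectl logs deployment/web # logs from one of its pods
kubectl delete deployment web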
Kubectl context and configuration
kubectl config view # Show Merged kubeconfig settings.
# use multiple kubeconfig files at the same time and view merged config
KUBECONFIG=~/.kube/config:~/.kube/kubconfig2
kubectl config view
# get the password for the e2e user
kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
kubectl config view -o jsonpath='{.users[].name}' # display the first user
kubectl config view -o jsonpath='{.users[*].name}' # get a list of users
kubectl config get-contexts # display list of contexts
kubectl config current-context # display the current-context
kubectl config use-context my-cluster-name # set the default context to my-cluster-name
kubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig
# configure the URL to a proxy server to use for requests made by this client in the kubeconfig
kubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url
# add a new user to your kubeconf that supports basic auth
kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword
# permanently save the namespace for all subsequent kubectl commands in that context.
kubectl config set-context --current --namespace=ggckad-s2
# set a context utilizing a specific username and namespace.
kubectl config set-context gce --user=cluster-admin --namespace=foo \
&& kubectl config use-context gce
kubectl config unset users.foo # delete user foo
# short alias to set/show context/namespace (only works for bash and bash-compatible shells, current context to be set before using kn to set namespace)
alias kx='f() { [ "$1" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f'
alias kn='f() { [ "$1" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d" " -f6 ; } ; f'
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Kubectl Creating objects
kubectl apply -f ./my-manifest.yaml # create resource(s)
kubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files
kubectl apply -f ./dir # create resource(s) in all manifest files in dir
kubectl apply -f https://git.io/vPieo # create resource(s) from url
kubectl create deployment nginx --image=nginx # start a single instance of nginx
# create a Job which prints "Hello World"
kubectl create job hello --image=busybox:1.28 -- echo "Hello World"
# create a CronJob that prints "Hello World" every minute
kubectl create cronjob hello --image=busybox:1.28 --schedule="*/1 * * * *" -- echo "Hello World"
kubectl explain pods # get the documentation for pod manifests
# Create multiple YAML objects from stdin
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec: ...
# Create a secret with several keys
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: $(echo -n "s33msi4" | base64 -w0)
  username: $(echo -n "jane" | base64 -w0)
EOF
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Kubectl Viewing, finding resources
# Get commands with basic output
kubectl get services # List all services in the namespace
kubectl get pods --all-namespaces # List all pods in all namespaces
kubectl get pods -o wide # List all pods in the current namespace, with more details
kubectl get deployment my-dep # List a particular deployment
kubectl get pods # List all pods in the namespace
kubectl get pod my-pod -o yaml # Get a pod's YAML
# Describe commands with verbose output
kubectl describe nodes my-node
kubectl describe pods my-pod
# List Services Sorted by Name
kubectl get services --sort-by=.metadata.name
# List pods Sorted by Restart Count
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
# List PersistentVolumes sorted by capacity
kubectl get pv --sort-by=.spec.capacity.storage
# Get the version label of all pods with label app=cassandra
kubectl get pods --selector=app=cassandra -o \
jsonpath='{.items[*].metadata.labels.version}'
# Retrieve the value of a key with dots, e.g. 'ca.crt'
kubectl get configmap myconfig \
  -o jsonpath='{.data.ca\.crt}'
# Retrieve a base64 encoded value with dashes instead of underscores.
kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}'
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Kubectl Viewing, finding resources
# Get all worker nodes (use a selector to exclude results that have a label
# named 'node-role.kubernetes.io/control-plane')
kubectl get node --selector='!node-role.kubernetes.io/control-plane'
# Get all running pods in the namespace
kubectl get pods --field-selector=status.phase=Running
# Get ExternalIPs of all nodes
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
# List Names of Pods that belong to Particular RC
# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/
sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
# Show labels for all pods (or any other Kubernetes object that supports labelling)
kubectl get pods --show-labels
# Check which nodes are ready
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
# Output decoded secrets without external tools
kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'
# List all Secrets currently in use by a pod
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
# List all containerIDs of initContainer of all pods
# Helpful when cleaning up stopped containers, while avoiding removal of initContainers.
kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Kubectl Viewing, finding resources
# List Events sorted by timestamp
kubectl get events --sort-by=.metadata.creationTimestamp
# Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied.
kubectl diff -f ./my-manifest.yaml
# Produce a period-delimited tree of all keys returned for nodes
# Helpful when locating a key within a complex nested JSON structure
kubectl get nodes -o json | jq -c 'paths|join(".")'
# Produce a period-delimited tree of all keys returned for pods, etc
kubectl get pods -o json | jq -c 'paths|join(".")'
# Produce ENV for all pods, assuming you have a default container for the pods, default namespace and the `env` command is supported.
# Helpful when running any supported command across all pods, not just `env`
for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod -- env; done
# Get a deployment's status subresource
kubectl get deployment nginx-deployment --subresource=status
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Kubectl Updating resources
kubectl set image deployment/frontend www=image:v2 # Rolling update "www" containers of "frontend" deployment, updating the image
kubectl rollout history deployment/frontend # Check the history of deployments including the revision
kubectl rollout undo deployment/frontend # Rollback to the previous deployment
kubectl rollout undo deployment/frontend --to-revision=2 # Rollback to a specific revision
kubectl rollout status -w deployment/frontend # Watch rolling update status of "frontend" deployment until completion
kubectl rollout restart deployment/frontend # Rolling restart of the "frontend" deployment
cat pod.json | kubectl replace -f - # Replace a pod based on the JSON passed into stdin
# Force replace, delete and then re-create the resource. Will cause a service outage.
kubectl replace --force -f ./pod.json
# Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000
kubectl expose rc nginx --port=80 --target-port=8000
# Update a single-container pod's image version (tag) to v4
kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -
kubectl label pods my-pod new-label=awesome # Add a Label
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Add an annotation
kubectl autoscale deployment foo --min=2 --max=10 # Auto scale a deployment "foo"
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Kubectl Patching resources
# Partially update a node
kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
# Update a container's image; spec.containers[*].name is required because it's a merge key
kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
# Update a container's image using a json patch with positional arrays
kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
# Disable a deployment livenessProbe using a json patch with positional arrays
kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
# Add a new element to a positional array
kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'
# Update a deployment's replica count by patching its scale subresource
kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}'
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Editing resources
kubectl edit svc/docker-registry # Edit the service named docker-registry
KUBE_EDITOR="nano" kubectl edit svc/docker-registry # Use an alternative editor
Scaling resources
kubectl scale --replicas=3 rs/foo # Scale a replicaset named 'foo' to 3
kubectl scale --replicas=3 -f foo.yaml # Scale a resource specified in "foo.yaml" to 3
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql # If the deployment named mysql's current size is 2, scale mysql to 3
kubectl scale --replicas=5 rc/foo rc/bar rc/baz # Scale multiple replication controllers
Kubectl Interacting with running Pods
kubectl logs my-pod # dump pod logs (stdout)
kubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container
kubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case)
kubectl logs -l name=myLabel -c my-container # dump pod logs, with label name=myLabel (stdout)
kubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container
kubectl logs -f my-pod # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers # stream all pods logs with label name=myLabel (stdout)
kubectl run -i --tty busybox --image=busybox:1.28 -- sh # Run pod as interactive shell
kubectl run nginx --image=nginx -n mynamespace # Start a single instance of nginx pod in the namespace of mynamespace
kubectl run nginx --image=nginx # Run pod nginx and write its spec into a file called pod.yaml
--dry-run=client -o yaml > pod.yaml
kubectl attach my-pod -i # Attach to Running Container
kubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod
kubectl exec my-pod -- ls / # Run command in existing pod (1 container case)
kubectl exec --stdin --tty my-pod -- /bin/sh # Interactive shell access to a running pod (1 container case)
kubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)
kubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers
kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Deleting resources
kubectl delete -f ./pod.json # Delete a pod using the type and name specified in pod.json
kubectl delete pod unwanted --now # Delete a pod with no grace period
kubectl delete pod,service baz foo # Delete pods and services with same names "baz" and "foo"
kubectl delete pods,services -l name=myLabel # Delete pods and services with label name=myLabel
kubectl -n my-ns delete pod,svc --all # Delete all pods and services in namespace my-ns,
# Delete all pods matching the awk pattern1 or pattern2
kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs kubectl delete -n mynamespace pod
Copy files and directories to and from containers
kubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace
kubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace
kubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally
Kubectl Interacting with Nodes and cluster
kubectl cordon my-node # Mark my-node as unschedulable
kubectl drain my-node # Drain my-node in preparation for maintenance
kubectl uncordon my-node # Mark my-node as schedulable
kubectl top node my-node # Show metrics for a given node
kubectl cluster-info # Display addresses of the master and services
kubectl cluster-info dump # Dump current cluster state to stdout
kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state
# View existing taints on current nodes.
kubectl get nodes -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect
# If a taint with that key and effect already exists, its value is replaced as specified.
kubectl taint nodes foo dedicated=special-user:NoSchedule
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Interacting with Deployments and Services
kubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case)
kubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case)
kubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend
kubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name <my-service-port>
kubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by <my-deployment>
kubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases)
Copy files and directories to and from containers
tar cf - /tmp/foo | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace
kubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally
Clouds and Tools: Cheat Sheets & Infographics
OpenStack Cheat Sheet 1
https://www.openstack.org/software/
OpenStack Cheat Sheet 2
https://assets.ubuntu.com/v1/8d3130a1-OpenStack.cheat.sheet.1.pdf
OpenStack Cheat Sheet 3
https://assets.ubuntu.com/v1/8d3130a1-OpenStack.cheat.sheet.1.pdf
OpenStack Cheat Sheet 4
https://assets.ubuntu.com/v1/8d3130a1-OpenStack.cheat.sheet.1.pdf
OpenStack Cheat Sheet 5 (old)
https://cloud.curs.pub.ro/wp-content/uploads/2014/12/Openstack_CheatSheet.pdf
OpenStack Cheat Sheet 6
Compute (nova)
List instances, check status of instance
$ openstack server list
List images
$ openstack image list
Create a flavor named m1.tiny
$ openstack flavor create --ram 512 --disk 1 --vcpus 1 m1.tiny
List flavors
$ openstack flavor list
Boot an instance using flavor and image names (if names are unique)
$ openstack server create --image IMAGE --flavor FLAVOR INSTANCE_NAME
$ openstack server create --image cirros-0.3.5-x86_64-uec --flavor m1.tiny \
  MyFirstInstance
Log in to the instance (from Linux)
# ip netns
# ip netns exec NETNS_NAME ssh USER@SERVER
# ip netns exec qdhcp-6021a3b4-8587-4f9c-8064-0103885dfba2 \
  ssh cirros@10.0.0.2
Log in to the instance with a public IP address (from Mac)
$ ssh cloud-user@128.107.37.150
Show details of instance
$ openstack server show NAME
$ openstack server show MyFirstInstance
View console log of instance
$ openstack console log show MyFirstInstance
https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html
Images (glance)
List images you can access
$ openstack image list
Delete specified image
$ openstack image delete IMAGE
Describe a specific image
$ openstack image show IMAGE
Update image
$ openstack image set IMAGE
Upload kernel image
$ openstack image create "cirros-threepart-kernel" \
  --disk-format aki --container-format aki --public \
  --file ~/images/cirros-0.3.5-x86_64-kernel
Upload RAM image
$ openstack image create "cirros-threepart-ramdisk" \
  --disk-format ari --container-format ari --public \
  --file ~/images/cirros-0.3.5-x86_64-initramfs
Upload three-part image
$ openstack image create "cirros-threepart" --disk-format ami \
  --container-format ami --public \
  --property kernel_id=$KID --property ramdisk_id=$RID \
  --file ~/images/cirros-0.3.5-x86_64-rootfs.img
Register raw image
$ openstack image create "cirros-raw" --disk-format raw \
  --container-format bare --public \
  --file ~/images/cirros-0.3.5-x86_64-disk.img
OpenStack Cheat Sheet 7
Resize
$ openstack server resize NAME FLAVOR
$ openstack server resize my-pem-server m1.small
$ openstack server resize --confirm my-pem-server1
Rebuild
$ openstack server rebuild NAME IMAGE
$ openstack server rebuild newtinny cirros-qcow2
Reboot
$ openstack server reboot NAME
$ openstack server reboot newtinny
Inject user data and files into an instance
$ openstack server create --user-data FILE INSTANCE
$ openstack server create --user-data userdata.txt --image cirros-qcow2 \
  --flavor m1.tiny MyUserdataInstance2
Create keypair
$ openstack keypair create test > test.pem
$ chmod 600 test.pem
Start an instance (boot)
$ openstack server create --image cirros-0.3.5-x86_64 --flavor m1.small \
  --key-name test MyFirstServer
Use ssh to connect to the instance
# ip netns exec qdhcp-98f09f1e-64c4-4301-a897-5067ee6d544f \
  ssh -i test.pem cirros@10.0.0.4
https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html
Set metadata on an instance
$ nova meta volumeTwoImage set newmeta='my meta data'
Create an instance snapshot
$ openstack image create volumeTwoImage snapshotOfVolumeImage
$ openstack image show snapshotOfVolumeImage
Pause, suspend, stop, rescue, resize, rebuild, reboot an instance
Pause
$ openstack server pause NAME
$ openstack server pause volumeTwoImage
Unpause
$ openstack server unpause NAME
Suspend
$ openstack server suspend NAME
Unsuspend
$ openstack server resume NAME
Stop
$ openstack server stop NAME
Start
$ openstack server start NAME
Rescue
$ openstack server rescue NAME
$ openstack server rescue NAME --rescue_image_ref RESCUE_IMAGE
OpenStack Cheat Sheet 8
Attach a volume to an instance after the instance is active, and the volume is
available
$ openstack server add volume INSTANCE_ID VOLUME_ID
$ openstack server add volume MyVolumeInstance 573e024d-5235-49ce-8332-be1576d323f8
$ openstack server add volume --device /dev/vdb MyVolumeInstance 573e024d..1576d323f8
This is not currently possible when using non-Xen hypervisors with OpenStack.
Manage volumes after login into the instance
List storage devices
$ fdisk -l # Also other normal Unix file system commands apply
Object Storage (swift)
Display information for the account, container, or object
$ swift stat
$ swift stat ACCOUNT
$ swift stat CONTAINER
$ swift stat OBJECT
List containers
$ swift list
Keystone
See Status of Keystone Services
$ keystone service-list
List All Keystone Endpoints
$ keystone endpoint-list
Glance
List Current Glance Images
$ glance image-list
https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html,
https://thornelabs.net/posts/openstack-commands-cheat-sheet/
Manage security groups
Add rules to default security group allowing ping and SSH between instances in
the default security group
$ openstack security group rule create default \
  --remote-group default --protocol icmp
$ openstack security group rule create default \
  --remote-group default --dst-port 22
Networking (neutron)
Create network
$ openstack network create NETWORK_NAME
Create a subnet
$ openstack subnet create --subnet-pool SUBNET --network NETWORK SUBNET_NAME
$ openstack subnet create --subnet-pool 10.0.0.0/29 --network net1 subnet1
Block Storage (cinder)
Used to manage volumes and volume snapshots that attach to instances.
Create a new volume
$ openstack volume create --size SIZE_IN_GB NAME
$ openstack volume create --size 1 MyFirstVolume
Boot an instance and attach to volume
$ openstack server create --image cirros-qcow2 --flavor m1.tiny MyVolumeInstance
List all volumes, noticing the volume status
$ openstack volume list
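As a sketch of the volume workflow end to end (instance and volume names are hypothetical):
$ openstack volume create --size 1 MyFirstVolume
$ openstack server create --image cirros-qcow2 --flavor m1.tiny MyVolumeInstance
$ openstack server add volume MyVolumeInstance MyFirstVolume
$ openstack volume list # the volume should now be shown as in-use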
OpenStack Cheat Sheet 9
Create a Flavor
nova flavor-create <FLAVOR-NAME> <FLAVOR-ID> <RAM-IN-MB> <ROOT-DISK-IN-GB> <VCPU>
For example, create a new flavor called m1.custom with an ID of 6, 512 MB of
RAM, 5 GB of root disk space, and 1 vCPU:
nova flavor-create m1.custom 6 512 5 1
Create Nova Security Group
This command is only used if you are using nova-network.
nova secgroup-create <NAME> <DESCRIPTION>
Add Rules to Nova Security Group
This command is only used if you are using nova-network.
nova secgroup-add-rule <NAME> <PROTOCOL> <BEGINNING-PORT> <ENDING-PORT> <SOURCE-SUBNET>
Example 1: Add a rule to the default Nova Security Group to allow SSH access to
instances:
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
Example 2: Add a rule to the default Nova Security Group Rule to allow ICMP
communication to instances:
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
Apply Nova Security Group to Instance
This command is only used if you are using nova-network.
nova add-secgroup <NOVA-ID> <SECURITY-GROUP-ID>
Create Nova Floating IP Pool
This command is only used if you are using nova-network.
nova-manage floating create <SUBNET-NAME> <NAME-OF-POOL>
Create Nova Key SSH Pair
nova keypair-add --pub_key <SSH-PUBLIC-KEY-FILE-NAME> <NAME-OF-KEY>
https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html,
https://thornelabs.net/posts/openstack-commands-cheat-sheet/
Upload Images to Glance
glance image-create --name <IMAGE-NAME> --is-public <true OR false> --container-format <CONTAINER-FORMAT> --disk-format <DISK-FORMAT> --copy-from <URI>
Example 1: Upload the cirros-0.3.2-x86_64 OpenStack cloud image:
glance image-create --name cirros-0.3.2-x86_64 --is-public true --container-format bare --disk-format qcow2 --copy-from http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
Example 2: Upload the ubuntu-server-12.04 OpenStack cloud image:
glance image-create --name ubuntu-server-12.04 --is-public true --container-format bare --disk-format qcow2 --copy-from http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
Nova
See Status of Nova Services
nova service-list
List Current Nova Instances
nova list
Boot an Instance
Boot an instance assigned to a particular Neutron Network:
nova boot <INSTANCE-NAME> --image <GLANCE-IMAGE-ID> --flavor <FLAVOR-ID> --security-groups <SEC-GROUP-1,SEC-GROUP-2> --key-name <SSH-KEY-NAME> --nic net-id=<NET-ID> --availability-zone <AVAILABILITY-ZONE-NAME>
Boot an instance assigned to a particular Neutron Port:
nova boot <INSTANCE-NAME> --image <GLANCE-IMAGE-ID> --flavor <FLAVOR-ID> --security-groups <SEC-GROUP-1,SEC-GROUP-2> --key-name <SSH-KEY-NAME> --nic port-id=<PORT-ID> --availability-zone <AVAILABILITY-ZONE-NAME>
OpenStack Cheat Sheet 10
You can also use the active command line switch to force an instance back into an
active state:
nova reset-state --active <INSTANCE-ID>
Get Direct URL to Instance Console Using novnc
nova get-vnc-console <INSTANCE-ID> novnc
Get Direct URL to Instance Console Using xvpvnc
nova get-vnc-console <INSTANCE-ID> xvpvnc
Set OpenStack Project Nova Quota
The following command will set an unlimited quota for a particular OpenStack
Project:
nova quota-update --instances -1 --cores -1 --ram -1 --floating-ips -1 --fixed-ips -1 --metadata-items -1 --injected-files -1 --injected-file-content-bytes -1 --injected-file-path-bytes -1 --key-pairs -1 --security-groups -1 --security-group-rules -1 --server-groups -1 --server-group-members -1 <PROJECT ID>
Cinder
See Status of Cinder Services
cinder service-list
List Current Cinder Volumes
cinder list
Create Cinder Volume
cinder create --display-name <CINDER-IMAGE-DISPLAY-NAME> <SIZE-IN-GB>
Create Cinder Volume from Glance Image
cinder create --image-id <GLANCE-IMAGE-ID> --display-name <CINDER-IMAGE-DISPLAY-NAME> <SIZE-IN-GB>
Create Snapshot of Cinder Volume
cinder snapshot-create --display-name <SNAPSHOT-DISPLAY-NAME> <CINDER-VOLUME-ID>
https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html,
https://thornelabs.net/posts/openstack-commands-cheat-sheet/
Create Host Aggregate With Availability Zone
nova aggregate-create <HOST-AGG-NAME> <AVAIL-ZONE-NAME>
Add Compute Host to Host Aggregate
nova aggregate-add-host <HOST-AGG-ID> <COMPUTE-HOST-NAME>
Live Migrate an Instance
If your compute hosts use shared storage:
nova live-migration <INSTANCE-ID> <COMPUTE-HOST-ID>
If your compute hosts do not use shared storage:
nova live-migration --block-migrate <INSTANCE-ID> <COMPUTE-HOST-ID>
Attach Cinder Volume to Instance
Before running this command, you will need to have already created the
particular Cinder Volume.
nova volume-attach <INSTANCE-ID> <CINDER-VOLUME-ID> <DEVICE (use auto)>
Create and Boot an Instance from a Cinder Volume
Before running this command, you will need to have already created the
particular Cinder Volume from a Glance Image.
nova boot --flavor <FLAVOR-ID> --block_device_mapping vda=<CINDER-VOLUME-ID>:::0 <INSTANCE-NAME>
Create and Boot an Instance from a Cinder Volume Snapshot
Before running this command, you will have to have already created the
particular Cinder Volume Snapshot:
nova boot --flavor <FLAVOR-ID> --block_device_mapping vda=<CINDER-SNAPSHOT-ID>:snap::0 <INSTANCE-NAME>
Reset the State of an Instance
If an instance gets stuck in a delete state, the instance state can be reset and then
deleted:
nova reset-state <INSTANCE-ID>
nova delete <INSTANCE-ID>
OpenStack Cheat Sheet 11
Example 2: Add a rule to the default Neutron Security Group to allow ICMP
communication to instances:
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol icmp default
Create a Neutron Tenant Network
neutron net-create <NET-NAME>
neutron subnet-create --name <SUBNET-NAME> <NET-NAME> <SUBNET-CIDR>
Create a Neutron Provider Network
neutron net-create <NET-NAME> --provider:physical_network=<LABEL-PHYSICAL-INTERFACE> --provider:network_type=<flat or vlan> --shared --router:external=True
neutron subnet-create --name <SUBNET-NAME> <NET-NAME> <SUBNET-CIDR> --gateway <GATEWAY-IP> --allocation-pool start=<STARTING-IP>,end=<ENDING-IP> --dns-nameservers list=true <DNS-1 DNS-2>
Create a Neutron Router
neutron router-create <ROUTER-NAME>
Set Default Gateway on a Neutron Router
neutron router-gateway-set <ROUTER-NAME> <NET-NAME>
Attach a Tenant Network to a Neutron Router
neutron router-interface-add <ROUTER-NAME> <SUBNET-NAME>
Create a Neutron Floating IP Pool
If you need N number of floating IP addresses, run this command N number of
times:
neutron floatingip-create <NET-NAME>
Assign a Neutron Floating IP Address to an Instance
neutron floatingip-associate <FLOATING-IP-ID> <NEUTRON-PORT-ID>
Create a Neutron Port with a Fixed IP Address
neutron port-create <NET-NAME> --fixed-ip ip_address=<IP-ADDRESS>
https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html,
https://thornelabs.net/posts/openstack-commands-cheat-sheet/
If the Cinder Volume is not available, i.e. it is currently attached to an instance,
you must pass the force flag:
cinder snapshot-create --display-name <SNAPSHOT-DISPLAY-NAME> <CINDER-VOLUME-ID> --force True
Neutron
See Status of Neutron Services
neutron agent-list
List Current Neutron Networks
neutron net-list
List Current Neutron Subnets
neutron subnet-list
Rename Neutron Network
neutron net-update <CURRENT-NET-NAME> --name <NEW-NET-NAME>
Rename Neutron Subnet
neutron subnet-update <CURRENT-SUBNET-NAME> --name <NEW-SUBNET-NAME>
Create Neutron Security Group
neutron security-group-create <SEC-GROUP-NAME>
Add Rules to Neutron Security Group
neutron security-group-rule-create --direction <ingress OR egress> --ethertype <IPv4 or IPv6> --protocol <PROTOCOL> --port-range-min <PORT-NUMBER> --port-range-max <PORT-NUMBER> <SEC-GROUP-NAME>
Example 1: Add a rule to the default Neutron Security Group to allow SSH access
to instances:
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 default
Clouds and Tools: Cheat Sheets & Infographics
OpenShift Cheat Sheet 1
https://cheatography.com/itservicestart-up/cheat-sheets/oc-cli-commands/pdf_bw/
https://github.com/okd-project/okd/releases
OpenShift Cheat Sheet 2
https://cheatography.com/itservicestart-up/cheat-sheets/oc-cli-commands/pdf_bw/
https://github.com/okd-project/okd/releases
OpenShift Cheat Sheet 3
https://cheatography.com/itservicestart-up/cheat-sheets/oc-cli-commands/pdf_bw/
https://github.com/okd-project/okd/releases
OpenShift Cheat Sheet 4
Install pkgs using yum in a Dockerfile
# Install Runtime Environment
RUN set -x && \
    yum clean all && \
    REPOLIST=rhel-7-server-rpms,rhel-7-server-optional-rpms,rhel-7-server-thirdparty-oracle-java-rpms \
    INSTALL_PKGS="tar java-1.8.0-oracle-devel" && \
    yum -y update-minimal --disablerepo "*" --enablerepo ${REPOLIST} --setopt=tsflags=nodocs \
      --security --sec-severity=Important --sec-severity=Critical && \
    yum -y install --disablerepo "*" --enablerepo ${REPOLIST} --setopt=tsflags=nodocs ${INSTALL_PKGS} && \
    yum clean all
Docker push to ocp internal registry
01. oc extract -n default secrets/registry-certificates --keys=registry.crt
02. REGISTRY=$(oc get routes -n default docker-registry -o jsonpath='{.spec.host}')
03. mkdir -p /etc/containers/certs.d/${REGISTRY}
04. mv registry.crt /etc/containers/certs.d/${REGISTRY}/
05. oc adm new-project openshift-pipeline
06. oc create -n openshift-pipeline serviceaccount pipeline
07. SA_SECRET=$(oc get secret -n openshift-pipeline | grep pipeline-token | cut -d ' ' -f 1 | head -n 1)
08. SA_PASSWORD=$(oc get secret -n openshift-pipeline ${SA_SECRET} -o jsonpath='{.data.token}' | base64 -d)
09. oc adm policy add-cluster-role-to-user system:image-builder system:serviceaccount:openshift-pipeline:pipeline
10. docker login ${REGISTRY} -u unused -p ${SA_PASSWORD}
11. docker pull docker.io/library/hello-world
12. docker tag docker.io/library/hello-world ${REGISTRY}/openshift-pipeline/helloworld
13. docker push ${REGISTRY}/openshift-pipeline/helloworld
14. oc new-project demo-project
15. oc policy add-role-to-user system:image-puller system:serviceaccount:demo-project:default -n openshift-pipeline
16. oc new-app --image-stream=openshift-pipeline/helloworld:latest
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
To create ssh secret:
oc create secret generic sshsecret \
  --from-file=ssh-privatekey=$HOME/.ssh/id_rsa
To create SSH-based authentication secret with .gitconfig file:
oc create secret generic sshsecret \
  --from-file=ssh-privatekey=$HOME/.ssh/id_rsa \
  --from-file=.gitconfig=</path/to/file>
To create secret that combines .gitconfig file and CA certificate:
oc create secret generic sshsecret \
  --from-file=ca.crt=<path/to/certificate> \
  --from-file=.gitconfig=</path/to/file>
To create basic authentication secret with CA certificate file:
oc create secret generic <secret_name> \
  --from-literal=username=<user_name> \
  --from-literal=password=<password> \
  --from-file=ca.crt=<path/to/certificate>
To create basic authentication secret with .gitconfig file and CA certificate file:
oc create secret generic <secret_name> \
  --from-literal=username=<user_name> \
  --from-literal=password=<password> \
  --from-file=.gitconfig=</path/to/file> \
  --from-file=ca.crt=<path/to/certificate>
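A hedged example of consuming the sshsecret created above in a build (the repository URL and build name are hypothetical):
oc new-build git@github.com:example/private-app.git \
  --name=private-app --source-secret=sshsecret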
Examine the cluster quota defined for the environment:
$ oc describe AppliedClusterResourceQuota
OpenShift Cheat Sheet 5
Set the default storage-class
oc patch storageclass glusterfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Change Default response timeout for a specific route:
oc annotate route <route_name> --overwrite
haproxy.router.openshift.io/timeout=10s
Add a nodeSelector on RC or DC
oc patch dc|rc <dc_name> -p "spec:
template:
spec:
nodeSelector:
region: infra"
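A quick way to verify the patch was applied (dc_name as above):
oc get dc <dc_name> -o jsonpath='{.spec.template.spec.nodeSelector}'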
Binary Builds
oc new-build --binary=true --name=ola2 --image-stream=redhat-openjdk18-openshift --to='mycustom-jdk8:1.0'
oc start-build ola2 --from-file=./target/ola.jar --follow
oc new-app
Turn off/on DC triggers to do a batch of changes without spamming many deployments
oc rollout pause dc <dc name>
oc rollout resume dc <dc name>
Get a route URL using OC
http://$(oc get route nexus3 --template='{{ .spec.host }}')
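The same expression can be used to call the exposed service directly, for example:
curl -s http://$(oc get route nexus3 --template='{{ .spec.host }}')/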
Maven can automatically store artifacts using -DaltDeploymentRepository
parameter for deploy task:
mvn deploy -DskipTests=true 
-DaltDeploymentRepository=
nexus::default::http://nexus3.nexus.svc.cluster.local:8081/repository/releases
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
Creates a service to point to an external service addr (DNS or IP)
oc create service externalname myservice 
--external-name myhost.example.com
Patching a DeploymentConfig from the CLI
This example removes a config attribute using JSON path
oc patch dc/mysql --type=json 
-p='[{"op":"remove", "path": "/spec/strategy/rollingParams"}]'
This example changes an existing attribute value using JSON format
oc patch dc/mysql --patch \
'{"spec":{"strategy":{"type":"Recreate"}}}'
Creating a Custom template by exporting existing resources
oc export is,bc,dc,svc,route --as-template > mytemplate.yml
Process a template, create a new binary build to customize something and then change the DeploymentConfig to use the new image...
oc process openshift/datagrid72-basic | oc create -f -
oc new-build --name=customdg -i openshift/jboss-datagrid72-openshift:1.0 --binary=true --to='customdg:1.0'
oc set triggers dc/datagrid-app --from-image=openshift/jboss-datagrid72-openshift:1.0 --remove
oc set triggers dc/datagrid-app --from-image=customdg:1.0 -c datagrid-app
List only parameters of a given template file definition
oc process -f mytemplate.yaml --parameters
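To instantiate the template with explicit parameter values and create the resulting objects (PARAM1=value1 is only a placeholder):
oc process -f mytemplate.yaml -p PARAM1=value1 | oc create -f -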
Copy file content from a specific image to local file system
docker run registry.access.redhat.com/jboss-datagrid-7/datagrid72-openshift:1.0
/bin/sh -c 'cat /opt/datagrid/standalone/configuration/clustered-openshift.xml' >
clustered-openshift.xml
OpenShift Cheat Sheet 6
Configure Liveness/Readiness probes on DCs
oc set probe dc cotd1 --liveness -- echo ok
oc set probe dc/cotd1 --readiness --get-url=http://:8080/index.php --initial-delay-seconds=2
Create a new JOB
oc run pi --image=perl --replicas=1 --restart=OnFailure 
--command -- perl -Mbignum=bpi -wle 'print bpi(2000)'
CRON JOB
oc run pi --image=perl --schedule='*/1 * * * *' 
--restart=OnFailure --labels parent="cronjobpi" 
--command -- perl -Mbignum=bpi -wle 'print bpi(2000)'
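To check that the cron job fires and inspect its runs (job names are generated per run, so the name below is a placeholder):
oc get cronjobs
oc get jobs
oc logs job/<generated-job-name>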
A/B Deployments - Split route traffic between services
oc expose service cotd1 --name='abcotd' -l name='cotd'
oc set route-backends abcotd --adjust cotd2=+20%
oc set route-backends abcotd cotd1=50 cotd2=50
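On recent oc clients, running the command without weight arguments should simply print the current backends and weights, which is a quick way to confirm the split:
oc set route-backends abcotd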
To pull an image directly from the Red Hat official Docker registry
docker pull registry.access.redhat.com/jboss-eap-6/eap64-openshift
To validate an OpenShift/Kubernetes resource definition (JSON/YAML file) in order to find malformed/syntax problems
oc create --dry-run --validate -f openshift/template/tomcat6-docker-buildconfig.yaml
To get the current user's Bearer auth token
oc whoami -t
To test Master API
curl -k -H "Authorization: Bearer <api_token>"
https://<master_host>:8443/api/v1/namespaces/<projcet_name>/pods/https:<po
d_name>:8778/proxy/jolokia/
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
To update a DeploymentConfig in order to change the Docker Image used by a
specific container
oc project <project>
oc get is
# creates an ImageStream from a Remote Docker Registry image
oc import-image <image name> --from=docker.io/<imagerepo>/<imagename> --all --confirm
oc get istag
OC_EDITOR="vim" oc edit dc/<your_dc>
spec:
  containers:
  - image: docker.io/openshiftdemos/gogs@sha256:<the new image digest from the ImageStream>
    imagePullPolicy: Always
BuildConfig with Source pull secrets
oc secrets new-basicauth gogs-basicauth --username=<your gogs login> --password=<gogs pwd>
oc set build-secret --source bc/tasks gogs-basicauth
Adding a volume in a given DeploymentConfig
oc set volume dc/myAppDC --add --overwrite --name....
Create a configmap file and mount as a volume on DC
oc create configmap myconfigfile --from-file=./configfile.txt
oc set volumes dc/printenv --add --overwrite=true --name=config-volume --mount-path=/data -t configmap --configmap-name=myconfigfile
Create a secret via CLI
oc create secret generic mysec --from-literal=app_user=superuser --from-literal=app_password=topsecret
oc env dc/printenv --from=secret/mysec
oc set volume dc/printenv --add --name=db-config-volume --mount-path=/dbconfig --secret-name=printenv-db-secret
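To confirm the values actually reached the container (printenv is the DC used above; the grep pattern is only an example, and recent oc clients accept type/name with rsh):
oc rsh dc/printenv env | grep -i app_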
OpenShift Cheat Sheet 7
To access a POD container shell
oc exec -ti `oc get pods | awk '/registry/ { print $1; }'` /bin/bash
#new way to do the same:
oc rsh <container-name>
to edit an object/resource
oc edit <object_type>/<object_name>
#eg
oc edit dc/myDeploymentConfig
Attaching a new PersistentVolumeClaim to a DeploymentConfig
oc volume dc/docker-registry 
--add --overwrite 
-t persistentVolumeClaim 
--claim-name=registry-claim 
--name=registry-storage
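To double-check the claim and the volumes the DC now mounts (running oc set volume without an action just lists them):
oc get pvc registry-claim
oc set volume dc/docker-registry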
Docker builder app creation
oc new-app --docker-image=openshift/hello-openshift:v1.0.6 -l "todelete=yes"
To create an app using a template (eap64-basic-s2i): Ticketmonster demo
oc new-app javaee6-demo
oc new-app --template=eap64-basic-s2i -p=APPLICATION_NAME=ticketmonster,SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/ticket-monster,SOURCE_REPOSITORY_REF=2.7.0.Final,CONTEXT_DIR=demo
STI app creation
oc new-app https://github.com/openshift/sinatra-example -l "todelete=yes"
oc new-app openshift/php~https://github.com/openshift/sti-php -l "todelete=yes"
To watch a build process log
oc get builds
oc logs -f builds/sti-php-1
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
# get pod memory via jmx
curl -k -H "Authorization: Bearer <api_token>"
https://<master_host>:8443/api/v1/namespaces/<projcet_name>/pods/https:<po
d_name>:8778/proxy/jolokia//read/java.lang:type=Memory/HeapMemoryUsage
| jq .
to login via CLI oc
oc login --username=tuelho --insecure-skip-tls-verify --server=https://master00-${guid}.oslab.opentlc.com:8443
### to login as Cluster Admin through master host
oc login -u system:admin -n openshift
To view the cluster roles and their associated rule sets in the cluster policy
oc describe clusterPolicy default
add a role to user
#local binding
oadm policy add-role-to-user <role> <username>
#cluster binding
oadm policy add-cluster-role-to-user <role> <username>
allow containers to run as root user inside OpenShift
oadm policy add-scc-to-user anyuid -z default
for more details consult:
https://docs.openshift.com/enterprise/3.1/admin_guide/manage_authorization_policy.html
to test a POD service locally
ip=`oc describe pod hello-openshift|grep IP:|awk '{print $2}'`
curl http://${ip}:8080
OpenShift Cheat Sheet 8
To output new-app artifacts to file, edit them, then create them using oc create:
$ oc new-app https://github.com/openshift/ruby-hello-world -o json > myapp.json
$ vi myapp.json
$ oc create -f myapp.json
To deploy together image built from source and external image:
$ oc new-app 
ruby~https://github.com/openshift/ruby-hello-world 
mysql 
--group=ruby+mysql
To export all the project's objects/resources as a single template:
$ oc export all --as-template=<template_name>
To create a new project using oadm and defining an admin user
$ oadm new-project instant-app --display-name="instant app example project" 
--description='A demonstration of an instant-app/template' 
--node-selector='region=primary' --admin=andrew
To create an app using oc CLI based on a template
$ oc new-app --template=mysql-ephemeral --param=MYSQL_USER=mysqluser,MYSQL_PASSWORD=redhat,MYSQL_DATABASE=mydb,DA
To see a list of env vars defined in a DeploymentConfig object
$ oc env dc database --list
# deploymentconfigs database, container mysql
MYSQL_USER=***
MYSQL_PASSWORD=***
MYSQL_DATABASE=***
To manage environment variables in different OSE object types.
The first command adds STORAGE with value /data; the second overwrites it with /opt.
$ oc env dc/registry STORAGE=/data
$ oc env dc/registry --overwrite STORAGE=/opt
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
To create application using Git repository at current directory:
$ oc new-app
To create application using remote Git repository and context subdirectory:
$ oc new-app https://github.com/openshift/sti-ruby.git 
--context-dir=2.0/test/puma-test-app
To create application using remote Git repository with specific branch reference:
$ oc new-app https://github.com/openshift/ruby-hello-world.git#beta4
$ oc new-app /home/user/code/myapp --strategy=docker
To create a definition generated by oc new-app command based on S2I support
$ oc new-app https://github.com/openshift/simple-openshift-sinatra-sti.git -o json | tee ~/simple-sinatra.json
To create application from MySQL image in Docker Hub:
$ oc new-app mysql
To create application from local registry:
$ oc new-app myregistry:5000/example/myimage
To create application from stored template:
$ oc create -f examples/sample-app/application-template-stibuild.json
$ oc new-app ruby-helloworld-sample
To set environment variables when creating application for database image:
$ oc new-app openshift/postgresql-92-centos7 
-e POSTGRESQL_USER=user 
-e POSTGRESQL_DATABASE=db 
-e POSTGRESQL_PASSWORD=password
To deploy two images in single pod:
$ oc new-app nginx+mysql
OpenShift Cheat Sheet 9
To create a registry with storage-volume mounted on host
oadm registry --service-account=registry 
--config=/etc/origin/master/admin.kubeconfig 
--credentials=/etc/origin/master/openshift-registry.kubeconfig 
--images='registry.access.redhat.com/openshift3/ose-${component}:${version}' 
--mount-host=<path> --selector=meuselector
To export all resources from a project/namespace as a template
oc export all --as-template=<template_name>
To create a build from a Dockerfile
# create the build
cat ./path/to/your/Dockerfile | oc new-build --name=build-from-docker --binary --strategy=docker -l app=app-from-custom-docker-build -D -
#if you need to give some input to your Docker Build process
oc start-build build-from-docker --from-dir=. --follow
# create an OSE app from the docker build image
oc new-app app-from-custom-docker-build -l app=app-from-custom-docker-build
oc expose service app-from-custom-docker-build
To copy files to/from a POD
#Ref: https://docs.openshift.org/latest/dev_guide/copy_files_to_container.html
oc rsync /home/user/source devpod1234:/src
oc rsync devpod1234:/src /home/user/source
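Recent oc clients also support a watch mode that keeps pushing local changes to the container (devpod1234 as above):
oc rsync --watch /home/user/source devpod1234:/src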
Cluster nodes CleanUp
$ oadm pod-network make-projects-global ci
Adjust Master Log Level
To adjust the openshift-master log level, edit the following line of /etc/sysconfig/atomic-openshift-master on the master VM:
OPTIONS=--loglevel=4
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
To unset environment variables in the pod templates:
$ oc env <object-selection> KEY_1- ... KEY_N- [<common-options>]
The trailing hyphen (-, U+002D) is required.
This example removes environment variables ENV1 and ENV2 from deployment
config d1:
$ oc env dc/d1 ENV1- ENV2-
This removes environment variable ENV from all replication controllers:
$ oc env rc --all ENV-
This removes environment variable ENV from container c1 of replication controller r1:
$ oc env rc r1 --containers='c1' ENV-
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
To apply some change (patch)
oc patch dc/<dc_name> \
-p '{"spec":{"template":{"spec":{"nodeSelector":{"nodeLabel":"logging-es-node-1"}}}}}'
To add volume storage
oc volume dc/<dc_name> 
--add --overwrite --name=<volume_name> 
--type=persistentVolumeClaim --claim-name=<claim_name>
To make a node unschedulable in a cluster
oadm manage node <node name> --schedulable=false
OpenShift Cheat Sheet 10
Create Definition Files for Volumes
ssh master00-$guid
mkdir /root/pvs
export volsize="5Gi"
for volume in pv{1..25}; do
cat << EOF > /root/pvs/${volume}.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${volume}
spec:
  capacity:
    storage: ${volsize}
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/export/pvs/${volume}
    server: 192.168.0.254
  persistentVolumeReclaimPolicy: Recycle
EOF
echo "Created def file for ${volume}"
done
Patch PVs definitions
for pv in $(oc get pv | awk '{print $1}' | grep pv | grep -v NAME); do
  oc patch pv $pv -p "spec:
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Recycle"
done
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
To make changes valid, restart atomic-openshift-master service:
$ sudo -i systemctl restart atomic-openshift-master.service
In node machine, to provide filtered information:
# journalctl -f -u atomic-openshift-node
Enable EAP clustering/replication
Make sure that your default service account has sufficient privileges to
communicate with the Kubernetes REST API. Add the view role to serviceaccount
for the project:
$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
OCP Internal VIP failover for Routers running on Infra nodes
oc adm ipfailover ipf-ha-router
--replicas=2 --watch-port=80 
--selector="region=infra" 
--virtual-ips="x.0.0.x" 
--iptables-chain="INPUT" 
--service-account=ipfailover --create
Use oc new-app with -o json option to bootstrap your new template definition file
oc new-app -o json openshift/hello-openshift > hello.json
Working with Templates
to list all parameters from mysql-persistent template:
$ oc process --parameters=true -n openshift mysql-persistent
Customizing resources from a preexisting Template, Example:
$ oc export -o json -n openshift mysql-ephemeral > mysql-ephemeral.json
... change the mysql-ephemeral.json file ...
$ oc process -f mysql-ephemeral.json \
-v MYSQL_DATABASE=testdb,MYSQL_USER=testuser,MYSQL_PASSWORD= \
> testdb.json
$ oc create -f testdb.json
OpenShift Cheat Sheet 11
DeploymentConfig Post-deployment (lifecycle) hook sample
oc patch dc/mysql --patch \
'{"spec":{"strategy":{"recreateParams":{"post":{"failurePolicy":"Abort","execNewPod":{"containerName":"mysql","command":["/bin/sh","-c","curl -L -s https://github.com/RedHatTraining/DO288-apps/releases/download/OCP-4.1-1/import.sh -o /tmp/import.sh&&chmod 755 /tmp/import.sh&&/tmp/import.sh"]}}}}}}'
oc CLI + bash tricks:
tail logs for all pods at once
oc get pods -o name | xargs -L 1 oc logs [--tail 1 [-c <container-name>]]
print response fields with curl
curl -s \
-w 'HTTP code: %{http_code}\nTime: %{time_total}s\n' \
"$SVC_URL"
retrieving a POD Name dynamically
INGRESS_POD=$(oc -n istio-system get pods -l istio=ingressgateway -o jsonpath='{.items..metadata.name}')
oc -n istio-system exec $INGRESS_POD -- ls /etc/istio/customer-certs
Istio
Verify the given pod uses a unique SVID (SPIFFE - Secure Production Identity
Framework for Everyone Verified Identity Document):
oc exec $POD_NAME -c istio-proxy -- 
curl -s http://127.0.0.1:15000/config_dump | 
jq -r .configs[5].dynamic_active_secrets[0].secret | 
jq -r .tls_certificate.certificate_chain.inline_bytes | 
base64 --decode | 
openssl x509 -text -noout | 
grep "X509v3 Subject" -A 1
X509v3 Subject Alternative Name: critical
URI:spiffe://cluster.local/ns/mtls/sa/POD_NAME
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
Patch a DC on OCP 4 to set env vars from a ConfigMap
oc patch -n user1 dc/events -p '{ "metadata" : { "annotations" : {
"app.openshift.io/connects-to" : "invoice-events,i
Patch a ConfigMap
oc patch configmap/myconf --patch '{"data":{"key1":"newvalue1"}}'
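To verify the patched value (key1 as in the example above):
oc get configmap myconf -o jsonpath='{.data.key1}'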
Verify if a given Service Account has a given rolebinding
oc get rolebinding -o wide -A | grep -E 'NAME|ClusterRole/view|namespace/sa_name'
Using jq utility to search/filter through a oc get json output:
#!/bin/bash
oc get service --all-namespaces -o json | jq '.items[]
| select(
.metadata.labels."discovery.3scale.net" == "true"
and .metadata.annotations."discovery.3scale.net/port"
and .metadata.annotations."discovery.3scale.net/scheme"
)
| {
"service-name": .metadata.name,
"service-namespace": .metadata.namespace,
"labels": .metadata.labels,
"annotations": .metadata.annotations
} '
Operators troubleshooting
oc get ClusterServiceVersion --all-namespaces
oc get subs -n openshift-operators
oc api-resources
oc explain <resource name>[.json attribute]
OpenShift Cheat Sheet 12
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
creating an inline JSON patch file and applying it to a resource
cat > gateway-patch.json << EOF
[{
"op": "add",
"path": "/spec/template/spec/containers/0/volumeMounts/0",
"value": {
"mountPath": "/etc/istio/customer-certs",
"name": "customer-certs",
"readOnly": true
}
},
{
"op": "add",
"path": "/spec/template/spec/volumes/0",
"value": {
"name": "customer-certs",
"secret": {
"secretName": "istio-ingressgateway-customer-certs",
"optional": true
}
}
}]
EOF
applying the patch
oc -n istio-system patch --type=json deploy istio-ingressgateway -p "$(cat gateway-patch.json)"
Wait for a resource (e.g. a POD) to be ready (meet a condition)
kubectl wait --namespace ingress-nginx 
--for=condition=ready pod 
--selector=app.kubernetes.io/component=controller 
--timeout=90s
Clouds and Tools: Cheat Sheets & Infographics
Microservice Architecture Maturity Model
Brewer: CAP (Distributed Systems)
Source: http://blog.nahurst.com/visual-guide-to-nosql-systems
PACELC Theorem
 An extension to the CAP theorem. It states that in case
of network partitioning (P) in a distributed computer
system, one has to choose between availability (A) and
consistency (C) (as per the CAP theorem), but else (E),
even when the system is running normally in the
absence of partitions, one has to choose between
latency (L) and consistency (C).
 => Eventual consistency approach in Cassandra DB
and other solutions …
PACELC Theorem – DB Ratings
DDBS: rating (P+A or P+C, E+L or E+C)
DynamoDB: P+A, E+L
Cassandra: P+A, E+L
Cosmos DB: P+A, E+L
Riak: P+A, E+L
VoltDB/H-Store: P+C, E+C
Megastore: P+C, E+C
BigTable/HBase: P+C, E+C
MongoDB: P+A, E+C
PNUTS: P+C, E+L
Hazelcast IMDG: P+A, E+L, E+C
C(A/P)S Versioning Principle:
Sacrifice
With Big Data, storing data redundantly or converting terabytes is an issue.
C(A/P)S Principle: Tradeoff between Code Amount – Availability/Performance –
Storage: One needs to be sacrificed:
1. With each new version as storage format, all old data could be eagerly migrated to
the latest version (active migration, perhaps partial service availability during
migration and perhaps loss of attributes from old versions although they might be
required for revision-safety: Low-Availability/Performance cost factor).
2. Migrate only those pieces of the data that are needed (lazily, e.g. on-access
migration or when it’s foreseeable); however, then the large ORC/Parquet files
cannot be fully migrated and thus not deleted before copying also the rest of the
data away or before migrating it (also due to block size; high storage costs,
medium programming costs with converter cascade from very old to the latest
version: Storage cost factor).
3. Program converters from source into older (replay/late arrivals) and perhaps
multiple versions of newer storage formats and out of these formats to potentially
multiple versioned destination formats (Code Amount).
How to store/employ the data model versions used and the relevant converters: High
programming costs, mitigatable through converter cascades or code generation;
complex version and release management: Wages as cost factor.
C(A/P)S Versioning Principle:
Benefits
C(A/P)S Principle: Only 2 of the 3 benefits can be
achieved: Code Amount (low) –
Availability/Performance (high) – Storage (low).
[Diagram labels: Code Amount (low); Availability/Performance (high); Storage (low); Lazy migration; Eager migration; No focus on migration: programming multiple converters, optimizations below]
C(A/P)S Versioning Principle
shown with Circles
[Venn diagram: three overlapping circles labeled Code Amount (low), Availability/Performance (high), and Storage (low). You can choose one point in this space; a combination of all properties is not possible. It is best to choose one of the three overlapping areas.]
© Thomas Pötter
Common Criteria Basic Concept
[Diagram: Assets/Functionalities (here Server/Client/Webapp) have Vulnerabilities that are exposed to Attacks, which create Risks / Damage Potential; Counter Measures (Requirements, Mitigations, Policies) reduce these to a Remaining Risk.]
© 2017 FORRESTER. REPRODUCTION PROHIBITED.
Reference architecture for container
platforms
› Container engine provides the foundational
execution environment.
› Container orchestration enables key capabilities
for enterprise adoption.
› External integration allows extensive support for
diversified use scenarios.
› Operations management streamlines operations
or maintenance processes.
› Container infrastructure allows adaptability of
operating environments.
› Container image management ensures unified
control and value co-creation.
› Container security safeguards end-to-end security.
› DevOps automation allows application life-cycle
acceleration.
OVERVIEW
Source: Vendor Landscape: Container Solutions For Cloud-Native Applications Forrester report
Service-oriented Computing
[Stack diagram comparing Private, Infrastructure as a Service, Platform as a Service, and Software as a Service across the layers Storage, Server HW, Networking, Servers, Databases, Virtualization, Runtimes, Applications, and Security & Integration. Moving from Private (you manage everything) through IaaS and PaaS to SaaS, an increasing share of the layers is managed by the vendor instead of by you (HW, virtual, DC, cloud).]
Cloud Computing Components
 Azure, Google Cloud Platf, Amazon Web Services, IBM BlueMix, OpenShift, OpenStack, ... and many others
Computing Services
Execution Models
Virtual Machines Web Sites
Cloud
Services/Apps
Containers, μ-svcs
Serverless /
Lambdas
Mobile services
Hi-Perf Computing Management, Orchestration, Monitoring
Storage & Data
Key-Value Tables Column Store Document DB
Graph DB Blobs Caching
Data Processing Map/Reduce Hadoop Zoo Reporting
Networking Virtual Network Connect Traffic Manager
Messaging
Service Bus Queue/Topic/Relay Event Hub
Multi- & Media Media Services Streaming Content Delivery
Other Services
Machine Learning
Searching /
Indexing
Maps / GIS
Gaming
Language /
Translate
Marketplace
Languages / SDK C++ .Net Java PHP Python Node.js ...
Roadmap
for
Cloud
Adoption
Use Cloud Patterns
[Chart: cloud patterns plotted by COMPLEXITY/RISK (less to more) versus REWARD/BENEFIT (less to more).]
Cloud Strategy Approach
[Diagram: CLOUD STRATEGY spanning three tracks of SaaS/PaaS/IaaS cells: New Development (leveraging all cloud paradigms, 6 cells), Hybrid Cloud (IaaS lift-and-shift; IaaS and PaaS new deployments on VMs and HW), and SaaS (business-architecture led), underpinned by CONNECTIVITY (cross-discipline team).]
Infrastructure
• Office 365
• SharePoint Online
• Exchange Online
• OneDrive Pro
Line of Business
• Dynamics CRM
• 3rd Party Solutions
• Yammer, Skype
Engineering & Operations
Enabling
• MDM - In Tune
• DevOps -TFS
Open Stack
 Cloud Lock-in
 functionality, license,
development
 OpenStack
 2010 NASA + Rackspace
Compute - Nova
Object Storage - Swift
Block Storage -
Cinder
Image Service -
Glance
Networking - Neutron
Identity - Keystone
Dashboard - Horizon
Orchestration - Heat
Workflow - Mistral
Telemetry - Ceilometer
Database - Trove
Map Reduce - Sahara
Bare Metal - Ironic
Messaging - Zaqar
Shared FS - Manila
DNS - Designate
Search - Searchlight
Key Manager -
Barbican
Clouds and Tools: Cheat Sheets & Infographics
AWS Athena Sample Architecture
PHI: protected health information
AWS Analytics Architecture Stack
Communication Services
Amazon Simple
Queue Service (SQS)
Amazon Simple
Notification Service
(SNS)
Amazon Simple
Email Service (SES)
Amazon Route 53
Amazon Virtual
Private Cloud (VPC)
Amazon Direct
Connect
Amazon Elastic Load
Balancing
Storage Services
Amazon Simple
Storage Service (S3)
Amazon Elastic Block
Store (EBS)
Amazon ElastiCache
Amazon SimpleDB
Amazon Relational
Database Service
(RDS)
Amazon CloudFront
Amazon
Import/Export
Compute Services
Amazon Elastic
Compute Cloud
(EC2)
Amazon Elastic
MapReduce
AWS Elastic
Beanstalk
AWS Cloudformation
Autoscaling
Amazon AWS Platform
Additional Services
Amazon GovCloud
Amazon Flexible
Payment Service
(FPS)
Amazon DevPay
Amazon Fulfillment
Web Service (FWS)
Amazon Mechanical
Turk
Alexa Web
Information Service
Amazon CloudWatch
Alexa Top Sites
Amazon Web Services / Elastic Cloud
Amazon Web Services / Elastic Cloud
 Compute
 Elastic Compute Cloud (EC2) - scalable virtual machines using Xen
 Elastic MapReduce (EMR)
 Lambda (LAMBDA) - compute service that runs code in response to
events
 Networking
 Route 53 - highly available and scalable DNS
 Virtual Private Cloud (VPC) - logically isolated set of EC2, VPN
connection
 AWS Direct Connect - dedicated network connections into AWS data
centers
 Elastic Load Balancing (ELB) - automatically distributes incoming traffic
 Storage and content delivery
 CloudFront - CDN
 Simple Storage Service (S3) - Web Service based storage
 Glacier - low-cost, long-term storage, redundancy, low-frequent access
times
 AWS Storage Gateway - iSCSI block storage, cloud-based backup
 Elastic Block Store (EBS) - persistent block-level storage volumes for EC2
 AWS Import/Export - accelerates moving large amounts of data in/out
AWS
 Elastic File System (EFS) - file storage service
 Database
 DynamoDB - low-latency NoSQL backed by SSDs
 ElastiCache - in-memory caching, implementation of Memcached and
Redis
 Relational Database Service (RDS) - MySQL, Oracle, SQL Server,
PostgreSQL
 Redshift - petabyte-scale data warehousing with column-based storage
 SimpleDB - run queries on structured data, "the core functionality of a
database"
 AWS Data Pipeline - data transfer between different AWS services
 Analytics
 Machine Learning
 Kinesis - real-time data processing over large, distributed data streams
 Deployment
 CloudFormation - file-based interface for provisioning other AWS
resources
 AWS Elastic Beanstalk - quick deployment and management of
applications
 AWS OpsWorks - configuration of EC2 services using Chef
 AWS CodeDeploy - automated code deployment to EC2 instances
 Management
 Identity and Access Management (IAM) - authentication service
 AWS Directory Service - connection to an existing Active Directory
 CloudWatch - monitoring for AWS cloud resources and applications
 AWS Management Console - web-based management and monitoring
 CloudHSM - data security - dedicated Hardware Security Module
(HSM)
 AWS Key Management Service (KMS) - control keys used to data
encryption
 Application services
 API Gateway - service for publishing and maintaining web service APIs
 CloudSearch - basic full-text search and indexing of textual content
 DevPay - billing and account management system
 Elastic Transcoder (ETS) - video transcoding
 Flexible Payments Service (FPS) - interface for micropayments
 Simple Email Service (SES) - bulk and transactional email sending
 Simple Queue Service (SQS) - message queue for web applications
 Simple Notification Service (SNS) - multi-protocol "push" messaging
 Simple Workflow (SWF) - workflow service for building scalable,
resilient apps
 Cognito - user identity and data synchronization service across mobile
devices
 AppStream - streaming of resource intensive applications from the cloud
 Miscellaneous
 Product Advertising API - electronic commerce
Network Architecture
Direct Connect (DX)
Building Web Scaling Apps
In Action!
Let's go back and review a real live example!
Example Application Hosting in
AWS
AWS Platform Example
Deployment
Project 1 Project 2 Project 3 Project ….
Tactical Migration
Strategy Business
Case
Application
Assessment
Risk &
Compliance
Operational
Framework
Continuous Feedback
Future
State
Cycles of
Learning
Migration Strategy –
Recommended Approach
AWS Serverless Architecture
http://serverlessarchitecture.com/2016/01/28/where-can-i-see-an-example-of-a-serverless-architecture-application/
AWS Serverless Architecture
https://stackoverflow.com/questions/38757271/i-need-feedback-on-this-partly-serverless-architecture-design
Clouds and Tools: Cheat Sheets & Infographics
Platform Services
Infrastructure Services
Web Apps
Mobile
Apps
API
Management
API Apps
Logic Apps
Notification
Hubs
Content
Delivery
Network (CDN)
Media
Services
BizTalk
Services
Hybrid
Connections
Service Bus
Storage
Queues
Hybrid
Operations
Backup
StorSimple
Azure Site
Recovery
Import/Export
SQL
Database
DocumentDB
Redis
Cache
Azure
Search
Storage
Tables
Data
Warehouse
Azure AD
Health Monitoring
AD Privileged
Identity
Management
Operational
Analytics
Cloud
Services
Batch
RemoteApp
Service
Fabric
Visual Studio
App
Insights
Azure
SDK
VS Online
Domain Services
HDInsight Machine
Learning
Stream
Analytics
Data
Factory
Event
Hubs
Mobile
Engagement
Data
Lake
IoT Hub
Data
Catalog
Security &
Management
Azure Active
Directory
Multi-Factor
Authentication
Automation
Portal
Key Vault
Store/
Marketplace
VM Image Gallery
& VM Depot
Azure AD
B2C
Scheduler
Azure Architecture
Microsoft Azure
Microsoft Azure
Virtual Machines - Provision Windows and Linux virtual machines in minutes
App Service - Create web and mobile apps for any platform and any device
SQL Database - Managed relational SQL Database-as-a-service
Storage - Durable, highly available, and massively scalable cloud storage
Cloud Services - Create highly available, infinitely scalable cloud applications and APIs
DocumentDB - Managed NoSQL document database-as-a-service
Azure Active Directory - Synchronize on-premises directories and enable single sign-on
Backup - Simple and reliable server backup to the cloud
HDInsight - Provision cloud Hadoop, Spark, R Server, HBase, and Storm clusters
RemoteApp - Deploy Windows client apps in the cloud, run on any device
Batch - Run large-scale parallel and batch compute jobs
StorSimple - Hybrid cloud storage for enterprises, reduces costs and improves data security
Visual Studio Team Services - Services for teams to share code, track work, and ship software
API Management - Publish APIs to developers, partners and employees securely and at scale
Azure IoT Hub - Connect, monitor, and control millions of IoT assets
CDN - Deliver content to end-users through a robust network of global data centers
ExpressRoute - Dedicated private network fiber connections to Azure
Site Recovery - Orchestrate protection and recovery of private clouds
Azure DNS - Host your DNS domain in Azure
Machine Learning - Powerful cloud-based predictive analytics
Service Fabric - Build and operate always-on, scalable, distributed applications
Multi-Factor Authentication - Safe access to data and apps, extra level of authentication
Visual Studio Application Insights - Detect and diagnose issues in your web apps and services
SQL Data Warehouse - Elastic data warehouse-as-a-service with enterprise-class features
Virtual Network - Provision private networks, optionally connect to on-premises datacenters
Media Services - Encode, store, and stream video and audio at scale
Stream Analytics - Real-time stream processing
Azure Active Directory Domain Services - Join Azure VM to a domain w/o domain controllers
Event Hubs - Ingest, persist, and process millions of events per second
Data Factory - Orchestrate and manage data transformation and movement
Key Vault - Safeguard and maintain control of keys and other secrets
Service Bus - Connect across private and public cloud environments
Azure Active Directory B2C - Consumer identity and access management in the cloud
Scheduler - Run your jobs on simple or complex recurring schedules
Azure DevTest Labs - Quickly create environments to deploy and test applications
Notification Hubs - Scalable, cross-platform push notification infrastructure
Automation - Simplify cloud management with process automation
Log Analytics - Collect, search and visualize machine data from on-premises and cloud
Security Center - Prevent, detect, and respond to threats with increased visibility
BizTalk Services - Seamlessly integrate the enterprise and the cloud
Traffic Manager - Route incoming traffic for high performance and availability
Redis Cache - Access to a secure, dedicated cache for your Azure applications
Search - Fully-managed search-as-a-service
Load Balancer - Deliver high availability and network performance to your applications
VPN Gateway - Establish secure, cross-premises connectivity
Application Gateway - Layer 7 Load Balancer with built-in HTTP balancing and delivery cntrl
Data Catalog - Data source discovery to get more value from existing enterprise data assets
Virtual Machine Scale Sets - Highly available, auto scalable Linux or Windows virtual machines
Power BI Embedded - Embed fully interactive, stunning data visualizations in your applications
Mobile Engagement - Increase app usage and user retention
Data Lake Store - Hyperscale repository for big data analytics workloads
Data Lake Analytics - Distributed analytics service that makes big data easy
Cognitive Services - Add smart API capabilities to enable contextual interactions
Azure Container Service - Use Docker based tools to deploy and manage containers
SQL Server Stretch Database - Dynamically stretch on-premises SQL Server databases to Azure
HockeyApp - Deploy mobile apps, collect feedback and crash reports, and monitor usage
Functions - Process events with serverless code
Logic Apps - Automate the access and use of data across clouds without writing code
Cortana Intelligence - Transform your business with big data and advanced analytics
IoT Suite - Capture and analyze untapped data to improve business results
Operations Management Suite - Manage your cloud and on-premises infrastructure
Apache Spark for Azure HDInsight - Apache Spark in the cloud for mission critical
deployments
Apache Storm for HDInsight - Real-time stream processing made easy for big data
R Server for HDInsight - Predictive modeling, machine learning, and analysis for big data
Encoding - Studio Grade encoding at cloud scale
Live and On-Demand Streaming - Deliver content to all devices with business scale
Azure Media Player - A single layer for all your playback needs
Content Protection - Securely deliver content using AES, PlayReady, Widevine, and Fairplay
Blob Storage Accounts - REST-based object storage for unstructured data
Premium Storage - Low latency and high throughput storage
Web Apps - Quickly create and deploy mission critical Web apps at scale
Mobile Apps - Build and host the backend for any mobile app
API Apps - Easily build and consume Cloud APIs
Text Analytics API - Easily evaluate sentiment and topics to understand what users want
Recommendations API - Predict and recommend items your customers want
Academic Knowledge API - Academic content in the Microsoft Academic Graph
Computer Vision API - Distill actionable information from images
Emotion API - Personalize experiences with emotion recognition
Face API - Detect, analyze, organize, and tag human faces in photos
Bing Speech API - Convert speech to text and back again to understand user intent
Web Language Model API - Predictive language models trained on web-scale data
Language Understanding Intelligent Service - Understanding commands from your users
Speaker Recognition API - Use speech to identify and authenticate individual speakers
Bing Search APIs - Web, image, video, and news search APIs for your app
Bing Autosuggest API - Give your app intelligent options for searches
Bing Spell Check API - Detect and correct spelling mistakes in your app
Media Analytics - Speech and Vision services at enterprise scale, compliance, and security
Queue Storage - Effectively scale apps according to traffic
File Storage - File shares that use the standard SMB 3.0 protocol
Tables Storage - NoSQL key-value storage using semi-structured datasets
Applications
Clients
Infrastructure
Management
Databases &
Middleware
App Frameworks
& Tools
DevOps
PaaS &
DevOps
Swarm
DC/OS
Kubernetes
Azure Container Service
[Diagram: Containers run under an Orchestrator (Docker Swarm, DC/OS, Kubernetes); Container Tooling (e.g. Docker CLI) and Service Tooling (e.g. ARM Templates) manage the service.]
Cosmos DB
Billions transactions/day
Services Powered by Service Fabric
SQL Database
2.1 million DBs
Cortana Power BI
Event Hubs
60bn events/day
IoT Hub
Millions of messages
Skype Intune Dynamics
Azure Other Clouds
On Premise
Azure Service Fabric
Any OS, Any Cloud
Dev Box
Service Fabric Programming Models & CI/CD
Other Clouds
Azure
Dev Box On Premise
.NET Core/Full
.NET/Java
Windows Azure Platform Components
Apps & Services
Services
Web Frontend
Queues
Distributed Storage
Distributed
Cache
Partitioned Data
Content Delivery
Network
Load Balancer
IIS
Web Server
VM Role
Worker Role
Web Role
Caching
Queues Access Control
Composite App
Blobs
Relational Database
Tables
Drives Service Bus
Reporting
DataSync
Virtual Network
Connect
Virtual Machine vs VM Role
Storage: a VM Role uses non-persistent storage; a Virtual Machine uses persistent storage and can easily add additional storage.
Deployment: for a VM Role you build the VHD offsite and upload it to storage; for a Virtual Machine you build the VHD directly in the cloud or build it offsite and upload it.
Networking: for a VM Role internal and input endpoints are configured through the service model; for a Virtual Machine internal endpoints are open by default, access is controlled with a firewall on the guest OS, and input endpoints are controlled through the portal, service model or API/script.
Primary use: a VM Role is for deploying applications with long or complex installation requirements into stateless PaaS applications; a Virtual Machine is for applications that require persistent storage to easily run in Windows Azure.
Persistent Disks and Highly
Durable
Base OS image for new Virtual Machines
Sys-Prepped/Generalized/Read Only
Created by uploading or by capture
Writable Disks for Virtual Machines
Created during VM creation or during
upload of existing VHDs.
Cross-premise Connectivity
IP-level connectivity
Data Synchronization
SQL Azure Data Sync
Application-layer
Connectivity & Messaging
Service Bus
Secure Machine-to-Machine
Network Connectivity
Windows Azure Connect
Secure Site-to-Site
Network Connectivity
Windows Azure Virtual Network
Microsoft Azure Example
How to Architect for High
Availability?
Azure: Hosting Choices for SQL
SQL Server in Azure VM
You access a VM with SQL Server
You manage SQL Server and Windows: High
Availability, Backups, Patching (automation
available)
You can run any SQL Server version and
edition
Full on-premises compatibility
Different VM sizes: A0 (1 core, 1GB mem,
100GB) to G5 (32 cores, 512GB mem, 32TB)
VM availability SLA: 99.95%: In practice SQL
AlwaysOn provides higher availability
(~99.99%)
Reuse on-premises infrastructure (e.g. Active
Directory)
You access a DB
DB is fully managed: High Availability, Backups, Patching
Runs latest SQL Server version, based on Enterprise edition
New paradigm of databases and modern app building
Different DB sizes: Basic (2GB, 5tps) to Premium (500GB, 735tps)
DB availability SLA: 99.99%
Azure SQL Database
What is a SQL Always On Availability Group
• SQL AlwaysOn Availability Groups feature is a HA and DR solution for SQL
• Each server keeps its own copy of the databases.
• Shared Storage is not required
• Databases are synchronized with secondary node databases.
• Supports automatic, planned and forced failover.
• Depends on the failover clustering role.
• Secondary nodes can be used as Read only Nodes and for Backups.
WITNESS
Azure: Architecture Diagram
PRIMARY
Availability Group
SECONDARY
WindowsCluster
On-Premises
SECONDARY
Azure
Primary: On-premises
Secondary: Azure – Data in azure act as a DR
Cost : Egress Traffic
Azure: Architecture Diagram
PRIMARY
Availability Group
SECONDARY
WindowsCluster
On-Premises
SECONDARY
Cloud
Primary: Azure
Secondary: On-Premises - a copy of for reporting and regulatory purposes
Cost : Egress Traffic
WITNESS
Compute
Enterprise Level Infrastructure
Storage Networking Identity Marketplace
Management portal
Windows Azure Platform
Local Development Environment
Performance
Development Tools
Compute
Windows
Azure
Compute
Windows
Azure Storage
Windows Azure
Connect
Content Delivery
Network (CDN)
AppFabric
Caching
AppFabric
Service Bus
AppFabric
Integration
AppFabric
Access Control
Windows Azure
SQL Azure
DataMarket
Applications
Marketplace
SQL Azure
Windows Azure AppFabric
Windows Azure
Microsoft Azure Services
Data
&
Storage
Web
&
Mobile
Compute
SQL
Database
App
Service
Virtual
Machines
Media
&
CDN
Media
Services
CDN
Developer
Services
DocumentDB Redis Cache
Cloud
Services Batch Service Fabric Networking
Virtual
Network ExpressRoute
Traffic
Manager
StorSimple
Search
Storage
Identity
&
Access
Azure Active
Directory
Multi-Factor Authentication
API
Management
Notification
Hubs
Mobile
Engagement
Visual Studio
Online
Application
Insights
Management
Scheduler Automation
Operational
Insights Key Vault
Analytics
&
IoT
HDInsight
Machine
Learning
Stream
Analytics Data Factory Event Hubs
Hybrid
Integration
BizTalk
Services Service Bus Backup Site Recovery
Web
App
Mobile
App
API
App
Logic App
Blobs Tables Queues Files
Marketplace
…
Data Lake
Data
Warehouse
RemoteApp DNS
Application
Gateway
Azure Blob Storage Concepts
Queues Storage
3-Tier service pattern
Front End
(Stateless
Web)
Stateless
Middle-tier
Compute
Cache
• Scale with partitioned
storage
• Increase reliability with
queues
• Reduce read latency
with caches
• Manage your own
transactions for state
consistency
• Many moving parts each
managed differently
Load Balancer
• Box
• Chatter
• Delay
• Dropbox
• Azure HD Insight
• Marketo
• Azure Media Services
• OneDrive
• SharePoint
• SQL Server
• Office 365
• Oracle
• QuickBooks
• SalesForce
• Sugar CRM
• SAP
• Azure Service Bus
• Azure Storage
• Timer / Recurrence
• Twilio
• Twitter
• IBM DB2
• Informix
• Websphere MQ
• Azure Web Jobs
• Yammer
• Dynamics CRM
• Dynamics AX
• Hybrid Connectivity
• HTTP, HTTPS
• File
• Flat File
• FTP, SFTP
• POP3/IMAP
• SMTP
• SOAP + WCF
• Batching / Debatching
• Validate
• Extract (XPath)
• Transform (+Mapper)
• Convert (XML-JSON)
• Convert (XML-FF)
• X12
• EDIFACT
• AS2
• TPMOM
• Rules Engine
Connectors
Protocols BizTalk Services
Built-in API Connectors
Azure Web Apps
 Rich monitoring and
alerting
 Traffic manager
 Custom CNAMEs
 VNET and VPN
 Backup and restore
 Many VM size and instance
options
 In production A/B testing
 Auto load-balance
 Share capacity across Web
and Mobile
 Staging slots
 Validate changes in your
staging environment
before publishing to
production
 More DevOps features
 Support for BitBucket and
Visual Studio Online;
seamless integration with
GitHub
 Web Jobs
Architecture Azure SQL DW
https://azure.microsoft.com/en-
us/documentation/articles/sql-data-
warehouse-overview-what-is
Dist_DB_1
Dist_DB_2
Dist_DB_15
…
Dist_DB_16
Dist_DB_17
Dist_DB_30
… …
…
…
Dist_DB_46
Dist_DB_47
Dist_DB_60
…
Compute Consumption
Azure Data Lake & SQL DW
Loading data not Polybase
https://blogs.msdn.microsoft.
com/sqlcat/2016/02/06/azure
-sql-data-warehouse-loading-
patterns-and-strategies/
Loading data via Polybase
https://blogs.msdn.microsoft.co
m/sqlcat/2016/02/06/azure-sql-
data-warehouse-loading-
patterns-and-strategies/
Azure Mobile App
REST
API
Offline
sync
Facebook Twitter Microsoft Google Azure
Active
Directory
Windows
iOS
Android
HTML 5/JS
Xamarin
PhoneGap
Sencha
Windows
Android
Chrome
iOS
OSX
In-App
Kindle
Backend code
SQL Mongo
Tables O365 API Apps
Offline Sync
New Data Model
TableController
DataManager
DTO
DTO
Mobile Service/App
Device
SQL Database
BYOD
MongoDB
Table Storage
Social Authentication
APP
REST API GATEWAY
Valid User ID + Token
Azure Notification Hub
 Register device handle at app launch
1. Client app retrieves handle from Platform Notification Service
2. Client sends handle to your backend
Backend registers with Notification Hub using tags to
represent logical users and groups
 Send Notification
3. Backend sends request to Notification Hub using a tag
Notification Hub manages scale
Notification Hub maps logical users/groups to device
handles
4. Notification Hub delivers notifications to matching
devices via PNS
 Maintain backend device handles
5. Notification Hub deletes expired handles when PNS
rejects them
6. Notification Hub maintains mapping between logical
users/groups and device handles
PNS
App back-end
Client app
1
2
2
4
5
6
Notification
Hub
3
4
File / application servers
• Live backups, archives, and disaster
recovery
• Dramatic cost reduction
• No changes to application environment
File / application
servers
• File share with integrated
data protection
• All-in-one primary data +
backup + live archives +
DR with de-duplication &
compression
Policies Automated
Encrypted
• SharePoint storage on
StorSimple + Azure
• StorSimple SharePoint
Database Optimizer
• Improved performance
& scalability
• Control Virtual Sprawl
• Cloud-as-a-tier
• Offload storage footprint
• VMware Storage DRS Storage
pools
• Virtual Machine Archive
• Regional VM Storage
• Storage for Tier 2 – 3
SQL Databases
• Integrated Backup,
Restore & Disaster
Recovery
StoreSimple
Archive
Data
Benefits
• Consolidates primary, archive,
backup, DR thru seamless
integration with Azure
• Cloud Snapshots
• De-duplication
• Compression
• Encryption
• Reduces enterprise
storage TCO by 60–80%
Warm
data on
SAS Local
Tier
Most
Active
Data
on SSD
ExpressRoute
Recovery
De-duplicated
De-duplicated
& compressed
De-duplicated, compressed
& encrypted
VPN
Microsoft
Azure
StorSimple Cloud Storage
Three-Tier System Architecture
Cloud Application Design
Customer Environment
Application Tier
Logic Tier
Database Tier
Isolated Virtual Network
INTERNET
Cloud Access & Firewall Layer
THREAT DETECTION: DoS/IDS
Layer
DOS/IDS Layer
DOS/IDS Layer
DOS/IDS Layer
Clients /
End Users
Microsoft Azure
443
443
Azure
Storage
SQL
Database
Azure Platform
• Logical isolation for customer environments and data
• Centralized management via SMAPI or the Azure Portal
• No internet access by default
• Intrusion detection and DoS prevention
measures
• Customer can deploy additional
DoS/IDS measures within their virtual
networks
• Penetration testing
ExpressRoute
Peer
Private fiber connections to
access compute, storage and
more using ExpressRoute
Azure Security and Compliance
Secure development, operations, and threat
mitigation practices provide a trusted
foundation
VPN
Remote Workers
Computers
Behind Firewalls
Enables connection from
customer sites and remote
workers to Azure Virtual
Networks using Site-to-Site
and Point-to-Site VPNs
Azure manages
compliance with:
• ISO 27001
• SOC1 / SOC2
• HIPAA BAA
• DPA / EU-MC
• UK G-Cloud / IL2
• PCI DSS
• FedRAMP
Azure’s certification process is ongoing
with annual updates and increasing
breadth of coverage.
Azure provides a number of options for
encryption and data protection.
Repository Build
Test Deploy App
Ops
Process tools
Service Manager
ONE
CONSISTENT
PLATFORM
ON-
PREMISES
SERVICE
PROVIDER
Microsoft Azure
System Center
Operations Manager
Microsoft ALM & DevOps
Microsoft Cloud Services Foundation Reference Model
By: Thomas W Shinder and Jim Dial
Management
and Support
Service
Operations
Infrastructure
Service Delivery
Platform
Software
Manage
and
support
Support
Provide
capability
Provide
capability
Define
Define
Define
Request
Fulfillment
Asset and
Configuration
Management
Change
Management
Incident and
Problem
Management
Release and
Deployment
Management
Access
Management
Systems
Administration
Knowledge
Management
Service
Monitoring
Configuration
Management
Service
Reporting
Network
Support
Service
Management
Fabric
Management
Deployment and
Provisioning
Authentication
Consumer and
Provider Portal
Usage and
Billing
Authorization
Data Protection
Directory
Process
Automation
Compute Storage
Network
Virtualization
ServiceLevel
Management
Financial
Management
Regulatory
Policy and
Compliance
Management
Information
Security
Management
Availability and
Continuity
Management
Capacity
Management
ServiceLifecycle
Management
Enable services
Provide
capability
Enable services
Define
Business
Relationship
Management
This diagram is updated periodically. The latest
version can be found online. Version 1
Detailed information about this diagram is
provided in the Cloud Services Foundation
Reference Model article.
http://blogs.technet.com/b/cloudsolutions/archive/2013/08/15/cloud-services-foundation-reference-architecture-reference-model.aspx
• Green subdomains contain components that represent IT
operational processes
• Blue subdomains contain technical capabilities components,
which represent the functionality that is provided by hardware
devices or software applications or both
Hybrid Cloud Scenarios
Recovery
Encrypted Backup
VPN
Windows Backup
SC Data Protection Manager
Microsoft
Azure
System Center
Virtual Machine
Manager
Recovery
plan
Health Monitor
System Center
Virtual Machine
Manager
Site A Site B
Hyper-V
Replica
Orchestrated Recovery in case of outage
Manage
Site B
System Center
Virtual Machine
Manager
Site A
Replication
Recovery
Microsoft
Azure
Microsoft
Azure
VPN
Remote Users
Admin
Hybrid Cloud Scenarios
File /
Application
Servers
• Live Backups, Archives,
and Disaster Recovery
• Dramatic Cost
Reduction
• No Changes to
Application
Environment
File /
Application
Servers
• File share with integrated
data protection
• All-in-one primary data +
backup + live archives +
DR with de-duplication &
Compression
Policies Automated
Encrypted
• SharePoint storage on
StorSimple + Azure
• StorSimple SharePoint
Database Optimizer
• Improved performance
& scalability
• Control Virtual Sprawl
• Cloud-as-a-tier
• Offload storage footprint
• VMware Storage DRS Storage
pools
• Virtual Machine Archive
• Regional VM Storage
• Storage for Tier 2 – 3
SQL Databases
• Integrated Backup,
Restore & Disaster
Recovery
StoreSimple
Archive
Data
Benefits
• Consolidates primary, archive,
backup, DR thru seamless
integration with Azure
• Cloud Snapshots
• De duplication
• Compression
• Encryption
• Reduces enterprise storage TCO
by 60–80%
Warm
data on
SAS Local
Tier
Most
Active
Data
on SSD
Encrypted Backup
Recovery
De
duplicated
De duplicated
& Compressed
De duplicated, Compressed
& Encrypted
VPN
Microsoft
Azure
Hybrid Cloud Scenarios
AvailabilitySet
Load
Balancing
Auto
Scaling
Tier1
AvailabilitySet
Tier2
Auto
Scaling
SharePoint
AvailabilitySet
Tier3
Azure
Storage
SQL
Azure
Analytics
& Reporting
VPN
VPN
Web
Site
Mobile
Service
HDInsight (Hadoop)
Storage
BLOB
Storage
Table
Storage
Queue
Virtual
Machines
VHD
Windows
Azure
Cache
Windows
Azure
CDN
Microsoft Azure AD
Notification
Hub
Users
Microsoft
Azure
SDK
Developers
On Premises
Microsoft
Azure
Connected Devices
Collect / Decode
Load
Balancing
Auto
Scaling
Worker
Roles
INGRESSNODES
Filter / Analyze/ Aggregate
ANALYTICSNODE
Auto
Scaling
Worker
Roles
Azure
Storage
Record Reporting / BI
CONSUME
Azure
Storage
SQL
Azure
Analytics
& Reporting
Microsoft
Azure
Hybrid Cloud Scenarios
Enterprise Mobility Suite
• Hybrid Identity Management
• Mobile Device Security&Management
• Mobile ApplicationManagement
• Strong Authentication& Accessbased Information
Protection
Consumer
identity providers
PCs and devices
Microsoft apps
3rd party clouds/hosting
ISV/CSV
apps
Custom
LOB apps
Encrypted Synchronization
Microsoft Azure
AD
ADFS / SAML
Multi-Factor
Authentication
Server
Multi-Factor
Authentication
Server
Corporate devices
On Premises
Applications
BYOD / Personal
devices
.NET, Java, PHP, …
• Built-in
• SDK for integration
• Strong multi Factor Authentication
• Real Time Fraud Alert
• Reporting, Logging & Auditing
• Enables compliance with NIST 800-63
Level 3, HIPAA,
PCI DSS, and other regulatory
requirements
Microsoft Azure
AD
Microsoft Azure Service Fabric
A platform for reliable, hyperscale, microservice-based applications
Microservices
Azure
Windows
Server
Linux
Hosted Clouds
Windows
Server
Linux
Service Fabric
Private Clouds
Windows
Server
Linux
High
Availability
Hyper-Scale
Hybrid
Operations
High Density Rolling
Upgrades
Stateful services
Low Latency
Fast startup &
shutdown
Container
Orchestration &
lifecycle management
Replication &
Failover
Simple
programming
models
Load balancing
Self-healing
Data Partitioning
Automated
Rollback
Health
Monitoring
Placement
Constraints
Azure Governance Architecture
CRUD
Azure Resource Manager (ARM)
Query
providing control over the cloud environment, without sacrificing developer agility
2. Policy-based Control: Real-time
enforcement, compliance assessment and
remediation at scale
3. Resource Visibility: Query, explore &
analyze cloud resources at scale
1. Environment Factory:
Deploy and update
cloud environments in a
repeatable manner
using composable
artifacts
Role-based
Access
Policy
Definitions
ARM
Templates
Management Groups
Subscriptions
Introducing Azure
Management Groups
Management Group & Subscription
Modeling Strategy
App A
Pre-Prod
Microsoft
Recommended
App B
Pre-Prod
Shared
services
(Pre-Prod)
App C
Pre-Prod
App A
Prod
App B
Prod
Shared
services
(Prod)
App D
Prod
Prod RBAC + Policy Pre-Prod RBAC + Policy
Org Management Group
Remediation
Enforcement &
Compliance
Apply policies
at scale
Turn on built-in policies
or build custom ones for all
resource types
Real-time policy evaluation and
enforcement
Periodic & on-demand compliance
evaluation
Apply policies to a Management
Group with control across your
entire organization
Apply multiple policies and &
aggregate policy states with
policy initiative
Real time remediation
Remediation on existing resources
(NEW)
Exclusion Scope
Azure Policy
VM In-Guest Policy (NEW)
State of Cloud Computing
 Perceptions
 “The end of software”
 On-demand infrastructure
 Cheaper and better
 Reality
 Hybrid world; not “all-or-nothing”
 Leverage existing IT skills and
investments
 Seamless user experiences
 Evolutionary; not revolutionary
 Drivers
 Ease-of-use, convenience
 Product effectiveness
 Simplify IT, reduce costs
> Types
• Public
• Private
• Internal
• External
• Hybrid
> Categories
• SaaS
• PaaS
• IaaS
[Stack diagram: Private (On-Premise) "IT as a Service", Infrastructure (as a Service), and Platform (as a Service) compared across Storage, Server HW, Networking, Servers, Databases, Virtualization, Runtimes, Applications, and Security & Integration; from left to right more of these layers shift from "you manage" to "managed by vendor".]
.NET Services
Windows Azure
Applications
Applications
SQL Azure
Others
Windows
Mobile
Windows
Vista/XP
Windows
Server
Fabric
Storage
Config
Compute
Application
Windows Azure
An illustration
Access Control
Service Bus
Service Bus
Registry
Endpoints
Organization Y
Organization X
Application Application
Illustrating the Service Bus
2) Discover
endpoints
1) Register
endpoints
3) Access
application
Application Models
Web Hosting
 Massive scale infrastructure
 Burst & overflow capacity
 Temporary, ad-hoc sites
Application Hosting
 Hybrid applications
 Composite applications
 Automated agents / jobs
Media Hosting & Processing
 CGI rendering
 Content transcoding
 Media streaming
Distributed Storage
 External backup and storage
High Performance Computing
 Parallel & distributed processing
 Massive modeling & simulation
 Advanced analytics
Information Sharing
 Reference data
 Common data repositories
 Knowledge discovery & mgmt
Collaborative Processes
 Multi-enterprise integration
 B2B & e-commerce
 Supply chain management
 Health & life sciences
 Domain-specific services
Kappa Architecture, in Azure,
Managed
Kappa Architecture, in Azure,
Managed
Internet-Scale Application
Architecture
Design
 Horizontal scaling
 Service-oriented composition
 Eventual consistency
 Fault tolerant (expect failures)
Security
 Claims-based authentication &
access control
 Federated identity
 Data encryption & key mgmt.
Management
 Policy-driven automation
 Aware of application lifecycles
 Handle dynamic data schema and
configuration changes
Data & Content
 De-normalization
 Logical partitioning
 Distributed in-memory cache
 Diverse data storage options
(persistent & transient, relational &
unstructured, text & binary, read &
write, etc.)
Processes
 Loosely coupled components
 Parallel & distributed processing
 Asynchronous distributed
communication
 Idempotent (handle duplicity)
 Isolation (separation of concerns)
Storage
• Relational & transactional data
• Federated databases
• Unstructured, de-normalized data
• Logical partitioning
• Persistent file & blob storage
• Encrypted storage
Connectivity
• Message queues
• Service orchestrations
• Identity federation
• Claims-based access control
• External services connectivity
Presentation
• ASP.NET C#, PHP, Java
• Distributed in-memory cache
Services
• .NET C#, Java, native code
• Distributed in-memory cache
• Asynchronous processes
• Distributed parallel processes
• Transient file storage
Internet-Scale Application
Architecture
(diagram) A user consumes public cloud services — Service Bus, Access Control, Workflows — that bridge applications running in the private cloud and the public cloud.
Application Patterns: Cloud Web Application
(diagram) A user reaches ASP.NET web roles and a web-service web role in the public cloud; Jobs run as worker roles; Table, Blob and Queue storage services hold user, application and reference data; Service Bus, Access Control and Workflow services connect back to enterprise data, enterprise web services, enterprise applications and enterprise identity in the private cloud via data, storage, identity and application services. Clients: web browser, mobile browser, Silverlight and WPF applications.
Application Patterns: Composite Services Application
(diagram) The same building blocks — ASP.NET web roles, web-service roles, worker-role jobs, Table/Blob/Queue storage, Service Bus, Access Control and Workflow services, with browser, mobile, Silverlight and WPF clients — composed as a services application that integrates enterprise data, web services, applications and identity from the private cloud.
Application Patterns: Cloud Agent Application
(diagram) Web and worker roles in the public cloud act as agents over Table/Blob/Queue storage and user/application/reference data, with Service Bus, Access Control and Workflow services linking to enterprise data, web services, applications and identity in the private cloud; clients include web and mobile browsers, Silverlight and WPF applications.
Application Patterns: B2B Integration Application
(diagram) ASP.NET web roles, web-service roles and worker-role jobs over Table/Blob/Queue storage; Service Bus, Access Control and Workflow services broker the integration between the enterprise (data, web services, applications and identity in the private cloud) and the cloud application; clients include web and mobile browsers, Silverlight and WPF applications.
Application Patterns: Grid / Parallel Computing Application
(diagram) Many ASP.NET web roles and worker-role jobs fan out over Table/Blob/Queue storage for parallel computation, with Service Bus, Access Control and Workflow services bridging to enterprise data, web services, applications and identity in the private cloud; web, mobile, Silverlight and WPF clients drive the workload.
Application Patterns: Hybrid Enterprise Application
(diagram) An application split between the public cloud (ASP.NET web roles, web-service roles, worker-role jobs, Table/Blob/Queue storage, user/application/reference data) and the private cloud (enterprise data, web services, applications, identity), joined by Service Bus, Access Control and Workflow services; clients include web and mobile browsers, Silverlight and WPF applications.
High-Level Architecture
(diagram) The hypervisor runs directly on the hardware (NIC, Disk1, Disk2, CPU). The host partition runs a Server Core host OS with drivers and the virtualization stack (VSP), built from an HV-enabled Server Core base VHD; guest partitions run Server Enterprise guest OSes with a virtualization stack (VSC) and applications, communicating with the host over the VMBUS.
Image-Based Deployment
(diagram) Host partitions boot from a shared HV-enabled Server Core base VHD with per-host differencing VHDs; guest partitions boot from a shared Server Enterprise base VHD with per-guest differencing VHDs, plus application VHDs (App1/App2/App3 packages) and a maintenance OS layered on top.
 Your services are isolated from other services
 Can access resources declared in the model only
 Local node resources – temp storage
 Network end-points
 Isolation using multiple mechanisms: managed code, restriction of privileges, firewall, virtual machine, IP filtering
 Automatic application of Windows security patches
 Rolling OS image upgrades
Windows Azure Storage Stamps
(diagram) Two storage stamps, each fronted by a load balancer (LB) and composed of Front-Ends, a Partition Layer and a Stream Layer with intra-stamp replication; the Location Service places accounts on stamps, and inter-stamp (geo) replication copies data between stamps.
Data access: access blob storage via the URL http://<account>.blob.core.windows.net/
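A small, hedged illustration of that data-access path, using the blob URL format above (account, container and blob names are placeholders; the Azure CLI example assumes credentials are configured):
# Fetch a publicly readable blob straight through the stamp's front-ends
curl -O "http://myaccount.blob.core.windows.net/photos/sunset.jpg"
# Or with the Azure CLI
az storage blob download --account-name myaccount --container-name photos --name sunset.jpg --file sunset.jpg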
Storage Stamp Architecture –
Stream Layer
 Append-only distributed file system
 All data from the Partition Layer is stored into files (extents) in the Stream layer
 An extent is replicated 3 times across different fault and upgrade domains
 With random selection for where to place replicas for fast MTTR
 Checksum all stored data
 Verified on every client read
 Scrubbed every few days
 Re-replicate on disk/node/rack failure or checksum mismatch
(diagram) Stream Layer (Distributed File System): a Paxos-replicated set of stream masters (M) coordinating the Extent Nodes (EN).
Storage Stamp Architecture –
Partition Layer
 Provide transaction semantics and strong consistency for Blobs, Tables and Queues
 Stores and reads the objects to/from extents in the Stream layer
 Provides inter-stamp (geo) replication by shipping logs to other stamps
 Scalable object index via partitioning
(diagram) Partition Layer: a Partition Master and Lock Service coordinate the Partition Servers, which store and read objects via the Stream Layer (Paxos-replicated masters over Extent Nodes).
Storage Stamp
Architecture  Stateless Servers
 Authentication + authorization
 Request routing
(diagram) Front End Layer: stateless FE servers route requests into the Partition Layer (Partition Master, Lock Service, Partition Servers), which persists data in the Stream Layer (Extent Nodes under Paxos-replicated masters).
Storage Stamp Architecture
(diagram) An incoming write request enters through the Front End Layer (FE), is routed to the owning Partition Server in the Partition Layer, is persisted across Extent Nodes in the Stream Layer, and an Ack flows back to the client.
Partition Layer – Index Range Partitioning
(diagram) The blob index is a table keyed by (Account Name, Container Name, Blob Name), sorted from aaaa/aaaa/aaaaa to zzzz/zzzz/zzzzz within a storage stamp. The Partition Master splits this key range across Partition Servers — e.g. A–H on PS1, H'–R on PS2, R'–Z on PS3 — and Front-End Servers consult the Partition Map to route each request (e.g. harry/pictures/sunset, harry/pictures/sunrise, richard/videos/tennis, richard/videos/soccer) to the owning server.
Each RangePartition – Log Structured Merge-Tree
(diagram) Writes go to a Commit Log Stream and a Metadata Log Stream; data is periodically checkpointed into Checkpoint File Tables and Blob Data streams, all stored as blocks within extents (E2, E3, ...). Reads/queries are served from the checkpointed file tables and blob data.
Stream Layer Concepts
Block
 Min unit of write/read
 Checksum
 Up to N bytes (e.g. 4MB)
Extent
 Unit of replication
 Sequence of blocks
 Size limit (e.g. 1GB)
 Sealed/unsealed
Stream
 Hierarchical namespace
 Ordered list of pointers
to extents
 Append/Concatenate
(diagram) Stream //foo/myfile.data is an ordered list of pointers to extents E1, E2, E3 and E4; each extent is a sequence of blocks.
Creating an Extent
(diagram) The Partition Layer asks the Paxos-replicated Stream Master (SM) to create a stream/extent; the SM allocates an extent replica set across Extent Nodes — EN1 as primary, EN2 and EN3 as secondaries (Secondary A, Secondary B).
Replication Flow
(diagram) The Partition Layer appends to the primary (EN1), which forwards the append to the secondaries (EN2, EN3) and acknowledges back to the Partition Layer once replicated.
Design Choices
 Multi-Data Architecture
 Use extra resources to serve mixed
workload for incremental costs
 Blob -> storage capacity
 Table -> IOps
 Queue -> memory
 Drives -> storage capacity and IOps
 Multiple data abstractions from a single stack
 Improvements at lower layers help all data
abstractions
 Simplifies hardware management
 Tradeoff: single stack is not optimized for specific
workload pattern
 Append-only System
 Greatly simplifies replication protocol and
failure handling
 Consistent and identical replicas up to the
extent’s commit length
 Keep snapshots at no extra cost
 Benefit for diagnosis and repair
 Erasure Coding
 Tradeoff: GC overhead
 Scaling Compute Separate from Storage
 Allows each to be scaled separately
 Important for multitenant environment
 Moving toward full bisection bandwidth
between compute and storage
 Tradeoff: Latency/BW to/from storage
Lessons Learned
 Automatic load balancing
 Quickly adapt to various traffic conditions
 Need to handle every type of workload thrown at the system
 Built an easily tunable and extensible language to dynamically
tune the load balancing rules
 Need to tune based on many dimensions
 CPU, Network, Memory, tps, GC load, Geo-Rep load, Size of
partitions, etc
 Achieving consistently low append latencies
 Ended up using journaling
 Efficient upgrade support
 Pressure point testing
Windows Azure Storage Summary
 Highly Available Cloud Storage with Strong Consistency
 Scalable data abstractions to build your applications
 Blobs – Files and large objects
 Tables – Massively scalable structured storage
 Queues – Reliable delivery of messages
 Drives – Durable NTFS volume for Windows Azure applications
 More information
 Windows Azure tutorial this Wednesday 26th, 17:00 at start of SOCC
 http://blogs.msdn.com/windowsazurestorage/
Methods of Machine Learning
Live Media Streaming
Clouds and Tools: Cheat Sheets & Infographics
Google Cloud Platform - Compute Engine / App Engine
 App Engine - PaaS
 Translate API
 Prediction API
 Big Query
 Compute Engine -
IaaS
 Cloud Datastore
 Cloud SQL
 Cloud Endpoints
 Cloud Storage
Google’s TPU 1.0 — looking at the technology
• Employs 8-bit integer arithmetic to save power and area
• A theme for others too — GraphCore
• Google supports this with a development environment — TensorFlow
• Publicly available
Google Cloud Platform (GCP)
 Compute
 Compute Engine - Run large-scale workloads on virtual machines
 App Engine - A platform for building scalable web apps and mobile
backends
 Container Engine - Run Docker containers powered by Kubernetes
 Container Registry - Fast, private Docker image storage on GCP
 Cloud Functions - A serverless platform for event-based microservices
 Storage and Databases
 Cloud Storage - Powerful and effective object storage with global edge-
caching
 Cloud SQL - A fully-managed, relational MySQL database
 Cloud Bigtable - A fast, managed, massively scalable NoSQL database
service
 Cloud Datastore - A managed NoSQL database for storing non-relational
data
 Persistent Disk - Reliable, high-perf block storage for virtual machine
instances
 Networking
 Cloud Virtual Network - Managed networking functionality for your
resources
 Cloud Load Balancing - High performance, scalable load balancing
 Cloud CDN - Low-latency, low-cost content delivery using global network
 Cloud Interconnect - Connect your infrastructure to Google's network edge
 Cloud DNS - Reliable, resilient, low-latency DNS
 Big Data
 BigQuery - A fast and managed data warehouse for large-scale data analytics
 Cloud Dataflow - A real-time data processing service for batch and stream data processing
 Cloud Dataproc - A managed Spark and Hadoop service
 Cloud Datalab - An interactive tool for large-scale data analysis and visualization
 Cloud Pub/Sub - Connect your services with reliable asynchronous
messaging
 Genomics - Power your science with Google Genomics
 Machine Learning
 Cloud Machine Learning Platform - Machine Learning services
 Vision API - Derive insight from images with our powerful Cloud Vision API
 Speech API - Speech to text conversion powered by machine learning
 Natural Language API - Processing text using machine learning
 Translate API - Create multilingual apps and translate text into other
languages
 Management Tools
 Stackdriver Overview - Monitoring, logging, and diagnostics GCP and AWS
 Monitoring - Monitoring for applications running on GCP and AWS
 Logging - Logging for applications running on GCP and AWS
 Error Reporting - Identify and understand your application errors
 Trace - Find performance bottlenecks in production
 Debugger - Investigate your code’s behavior in production
 Deployment Manager - Create and manage cloud resources with templates
 Cloud Console - Your integrated Google Cloud Platform management console
 Cloud Shell - Manage your infrastructure and applications from the cmd-line
 Cloud Mobile App - Manage GCP services from Android or iOS
 Billing API - management of billing for your projects in the GCP
 Cloud APIs - Programmatic interfaces for all Google Cloud Platform services
 Developer Tools
 Cloud SDK - Command-line interface for GCP products and services
 Deployment Manager - Create and manage cloud resources with templates
 Cloud Source Repositories - Fully-featured private Git repositories
 Cloud Endpoints - Create RESTful services from your code
 Cloud Tools for Android Studio - Build backend services for your Android
apps
 Cloud Tools for IntelliJ - Debug production cloud applications inside of
IntelliJ
 Cloud Tools for PowerShell - Full cloud control from Windows PowerShell
 Cloud Tools for Visual Studio - Deploy Visual Studio applications to GCP
 Google Plug In for Eclipse - Simplifies development in the Eclipse IDE
 Cloud Test Lab - On-demand app testing with the scalability of a cloud service
 Identity & Security
 Cloud Identity & Access Management - Fine-grained access control
 Cloud Resource Manager - Hierarchically manage resources by project/org
 Cloud Security Scanner - Scan your App Engine apps for common
vulnerabilities
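Most of the services above are driven from the Cloud SDK command line; a few representative, hedged examples (project, zone and resource names are placeholders):
# Compute Engine: create a VM
gcloud compute instances create my-vm --zone us-central1-a
# Container Engine (Kubernetes): create a cluster
gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 3
# App Engine: deploy the app described by app.yaml
gcloud app deploy app.yaml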
Development Runtime
(diagram) App Engine: on the local machine, the Python SDK runs the web app in a development runtime; on Google App Engine infrastructure, web apps run in sandboxed runtime environments with access to the Datastore, URL Fetch, Image Manipulation, Task Queues and Cron Jobs.
• Install/uninstall/upgrade all command-line tools related to Google Cloud Platform
• Notification when a new release of any Cloud SDK component is available
• Automation
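For instance (a minimal sketch, assuming the Cloud SDK is installed):
gcloud components list               # installed and available components
gcloud components install kubectl    # add a component
gcloud components update             # upgrade everything to the latest release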
Cloud Storage
• Protected – your data is protected at multiple physical locations
• Strong, configurable security – OAuth or simple access control on your data
• Multiple usages – serve static objects directly, or use with other Google Cloud products (Bridge)
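A small sketch with gsutil (bucket and object names are examples):
gsutil mb gs://my-example-bucket                       # create a bucket
gsutil cp logo.png gs://my-example-bucket/             # upload an object
gsutil acl ch -u AllUsers:R gs://my-example-bucket/logo.png   # serve it as a public static object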
Simple Citrix deployment on GCE
(diagram) A single-subnet virtual network hosting XD VDI Host, XA Session Host, AD Controller, SQL Server, Secure Gateway, Web Interface, Delivery Controller and License Server; users access via the Internet, connecting to the go.gcexencloud.net port 443 endpoint on the Secure Gateway.
Simple hybrid deployment
(diagram) The same single-subnet virtual network (XD VDI Host, XA Session Host, SQL Server, Secure Gateway, Web Interface, Delivery Controller, License Server, AD Controller) connected over a site-to-site VPN to the on-premise network, which keeps its own AD Controller plus company resources, applications and data.
(diagram) Scaled-out variants: each "Single Zone" contains Delivery Controllers, a License Server, an AD Controller, SQL Servers, XD VDI Hosts and XA Session Hosts, with zones linked by site-to-site VPN within a virtual network. For multi-region deployments, Secure Gateways and Web Interfaces (or StoreFront with NetScaler in GCE) listen on port 443 behind EastCitrix.CloudApp.net and WestCitrix.CloudApp.net, with Citrix.trafficmanager.net (CNAME: citrixonazure.com) balancing users between regions.
Instance types – Knowledge Worker workload cost/user

Knowledge Worker Workload - XenApp 7.6 - Windows Server 2008 R2 - Office 2010
Instance type | Units | Mem (GiB) | vCPUs | Cost/Hour | Sessions | Cost per User/h
General Purpose
f1-micro | Var | 0.6 | 1 | $0.032 | 0 | #DIV/0!
g1-small | 1.38 | 1.7 | 1 | $0.052 | 2 | $0.026
n1-standard-1 | 2.75 | 3.75 | 1 | $0.263 | 3 | $0.088
n1-standard-2 | 5.5 | 7.5 | 2 | $0.526 | 7 | $0.075
n1-standard-4 | 11 | 15 | 4 | $1.052 | 15 | $0.070
n1-standard-8 | 22 | 30 | 8 | $2.104 | 31 | $0.068
n1-standard-16 | 44 | 60 | 16 | $4.208 | 63 | $0.067
n1-standard-32 | 88 | 120 | 32 | $7.256 | 98 | $0.074
Compute Optimized
n1-highcpu-2 | 5.5 | 1.8 | 2 | $0.200 | 2 | $0.1000
n1-highcpu-4 | 11 | 3.6 | 4 | $0.480 | 6 | $0.0800
n1-highcpu-8 | 22 | 7.2 | 8 | $1.360 | 19 | $0.0716
n1-highcpu-16 | 44 | 14.4 | 16 | $2.760 | 38 | $0.0726
n1-highcpu-32 | 88 | 28.8 | 32 | $5.720 | 79 | $0.0724
Memory Optimized
n1-highmem-2 | 5.5 | 13 | 2 | $0.548 | 7 | $0.078
n1-highmem-4 | 11 | 26 | 4 | $1.096 | 15 | $0.073
n1-highmem-8 | 22 | 52 | 8 | $2.192 | 31 | $0.071
n1-highmem-16 | 44 | 104 | 16 | $4.384 | 63 | $0.070
n1-highmem-32 | 88 | 208 | 32 | $7.608 | 98 | $0.078

Knowledge Worker Workload - XenApp 7.6 - Windows Server 2012 R2 - Office 2013
Instance type | Units | Mem (GiB) | vCPUs | Cost/Hour | Sessions | Cost per User/h
General Purpose
f1-micro | Var | 0.6 | 1 | $0.032 | 1 | #DIV/0!
g1-small | 1.38 | 1.7 | 1 | $0.052 | 2 | $0.026
n1-standard-1 | 2.75 | 3.75 | 1 | $0.263 | 5 | $0.088
n1-standard-2 | 5.5 | 7.5 | 2 | $0.526 | 8 | $0.088
n1-standard-4 | 11 | 15 | 4 | $1.052 | 17 | $0.096
n1-standard-8 | 22 | 30 | 8 | $2.104 | 33 | $0.100
n1-standard-16 | 44 | 60 | 16 | $4.208 | 65 | $0.111
n1-standard-32 | 88 | 120 | 32 | $7.256 | 101 | $0.097
Compute Optimized
n1-highcpu-2 | 5.5 | 1.8 | 2 | $0.200 | 3 | $0.1000
n1-highcpu-4 | 11 | 3.6 | 4 | $0.480 | 8 | $0.0800
n1-highcpu-8 | 22 | 7.2 | 8 | $1.360 | 17 | $0.0971
n1-highcpu-16 | 44 | 14.4 | 16 | $2.760 | 34 | $0.1062
n1-highcpu-32 | 88 | 28.8 | 32 | $5.720 | 66 | $0.1192
Memory Optimized
n1-highmem-2 | 5.5 | 13 | 2 | $0.548 | 6 | $0.091
n1-highmem-4 | 11 | 26 | 4 | $1.096 | 11 | $0.100
n1-highmem-8 | 22 | 52 | 8 | $2.192 | 21 | $0.104
n1-highmem-16 | 44 | 104 | 16 | $4.384 | 38 | $0.115
n1-highmem-32 | 88 | 208 | 32 | $7.608 | 75 | $0.101
Economics of GCE
 Excel spreadsheet
 Provided as a tool to
estimate costs
 Supports two regions and
two user profiles
 Accounts for compute, network, and storage
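The cost-per-user column in the tables above is simply the hourly instance cost divided by concurrent sessions; e.g. for n1-standard-8 under the Server 2008 R2 workload:
# $2.104/hour across 31 sessions ≈ $0.068 per user-hour
printf '%.3f\n' "$(echo '2.104 / 31' | bc -l)"   # -> 0.068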
GCP Cheat Sheet
Clouds and Tools: Cheat Sheets & Infographics
Containers
Jan Balewski, NERSC Google GCE Tutorial March 2017
Docker container in 60 seconds
Virtual machine w/ containers: your image is fully isolated, computations are private.
Hardware controlled by some OS: your image is meshed with the hardware OS; your resources are capped, but computations are public — boundaries w/o privacy.
Why it Works: Separation of
Concerns……
• Docker Engine
– CLI
– Docker Daemon
– Docker Registry
• Docker Hub
– Cloud service
• Share Applications
• Automate workflows
• Assemble apps from components
• Docker images
• Docker containers
Docker Architecture……
 NOT A VHD
 NOT A FILESYSTEM
 uses a Union File System
 a read-only Layer
 do not have state
 Basically a tar file
 Has a hierarchy
• Arbitrary depth
• Fits into the Docker Registry
Docker images……
Units of software delivery (ship it!)
● run everywhere
– regardless of kernel version
– regardless of host distro
– (but container and host architecture must match*)
● run anything
– if it can run on the host, it can run in the container
– i.e., if it can run on a Linux kernel, it can run
*Unless you emulate CPU with qemu and binfmt
Docker Containers...
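On the architecture caveat above, a hedged illustration: on a host where QEMU/binfmt emulation has been registered (a setup assumption not covered on this slide), a recent Docker engine can run a foreign-architecture image explicitly:
# Host is x86_64; request an arm64 image and check what the container sees
docker run --rm --platform linux/arm64 alpine uname -m   # prints aarch64 (emulated)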
Containers before Docker……
Containers after Docker ……
Introduction to Docker
• Open Software
– Launched March 2013
– 100+ million downloads of Docker
images
• Open Contribution
– 750+ contributors
– #2 most popular project
– 137 community meet-up groups in 49
countries
• Open Design
– Contributors include IBM, Red Hat,
Google, Microsoft, VMware, AWS,
Rackspace, and others
• Open Governance
– 12 member governance advisory board
selected by the community
Enabling application development efficiency, making deployment more efficient, eliminating
vendor ‘lock-in’ with true portability
Docker is a shipping container system for code
(diagram) A multiplicity of stacks (static website, user DB, analytics DB, queue, web frontend) meets a multiplicity of hardware environments (development VM, QA server, contributor’s laptop, customer data center, production cluster, public cloud), raising two questions: do services and apps interact appropriately, and can I migrate smoothly and quickly? Docker answers with an engine that enables any payload to be encapsulated as a lightweight, portable, self-sufficient container that can be manipulated using standard operations and run consistently on virtually any hardware platform.
Docker Mission
Docker is an open platform for building distributed applications for
developers and system administrators.
Build – Ship – Run: Any App, Anywhere
Docker Containers simplifies cloud portability
A platform to build, ship, and run applications in “containers”.
Developers & SysAdmins love the flexibility and standardization of Docker
Standardization  Application portability
Package, ship, and run applications anywhere
The Docker Hub Registry has 5,000+ "Dockerized" applications
Lightweight
Containers are “light” users of system resources, smaller than VMs, start up
much faster, and have better performance
Ecosystem-friendly
A new industry standard, with a vibrant ecosystem of partners.
750+ community contributors; 50,000 third-party Docker projects on GitHub
User-friendly
Developers build with ease and ship higher-quality applications
SysAdmins deploy workloads based on business priorities and policies.
"Containers managed by Docker are effective in resource isolation. They are almost on
par with the Linux OS and hypervisors in secure operations management and
configuration governance."
Joerg Fritsch, Gartner Analyst, Security Properties of Containers Managed by Docker, January 7, 2015
Docker Containers
A technical view into the shared and layered file systems technology
 Docker uses a copy-on-write (union) filesystem
 New files(& edits) are only visible to current/above layers
 Layers allow for reuse
 More containers per host
 Faster start-up/download time – base layers are "cached"
 Images
 Tarball of layers (each layer is a tarball)
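You can see these cached, reusable layers directly (a small sketch; the image name is just an example):
docker pull ubuntu:14.04                 # layers are downloaded, or skipped if already cached
docker history ubuntu:14.04              # one row per layer, with size and creating instruction
docker save -o ubuntu.tar ubuntu:14.04   # the image as a tarball of layer tarballs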
(diagram) Example: a shared base OS/kernel filesystem layer, with Fedora and Ubuntu image layers on top, then tomcat and liberty layers, and finally containers CNTR1–CNTR4 each adding its own app layer (app1–app4).
Docker Architecture
Source: https://docs.docker.com/introduction/understanding-docker/
The Client is typically a laptop or a build server such as Jenkins.
The DOCKER_HOST could be a VM on the same laptop as the Client, or a Linux VM in a datacenter.
The Registry could be the Docker Hub or a private corporate registry.
Typical Container Lifecycle
(diagram) Client (laptop) → DOCKER_HOST (laptop) → DOCKER_HOST (Bluemix), with a Registry (Bluemix) holding the IBM-created Node.js base image:
docker pull registry.ng.bluemix.net/ibmnode:latest
git clone .../etherpad-lite
docker build -t etherpad_bluemix .
docker push registry.ng.bluemix.net/<namespace_here>/etherpad_bluemix
docker run etherpad_bluemix   (or click Start in the Bluemix Console; running instance: iblue300etherpadxxxx)
Why do Developers care about Containers?
 Demand for Increased Application Development Efficiency
• Enable Continuous Integration/Continuous Delivery
• Developer Laptops, through automated test, to production, and through scaling without modification
 DevOps Requires Improved Deployment Efficiency
• Higher Density of Compute Resources (CPU, Memory, Storage)
 Hybrid Cloud and Choice Require Portability
• Cross Cloud Deployment - move the same application across multiple clouds.
• Eliminate “lock-in”, become a “Cloud Broker”
Customer pain points | User scenarios | How this offering helps
Need resources faster | Get a working environment up and running in minutes, not hours or weeks | Users can instantiate new container instances in seconds with the consistent experience of working directly with Docker
Innovation requires agility and DevOps | Continuous delivery pipeline | IBM Containers integrates with Bluemix apps, including a continuous delivery pipeline, partnered with the fast deployments of containers
Ability to migrate workload from on-prem to off-prem infrastructure | Changes made on a developer’s local image are ready to deploy to the production cloud | Portability, as images can be developed on a local workstation, tested in a staging cloud on-prem, and finally moved to the production off-prem cloud
Environment to facilitate incremental production deployment | Business wants to deploy in a phased approach to validate the expected experience of the new version | Users can deploy new releases in a controlled manner, enabling them to monitor performance and behavior with the ability to roll back if needed
VMs
Benefits: better resource pooling; easier to scale VMs on the cloud.
Limitations: dedicated resources for each VM (more VMs = more resources); guest VM = wasted resources.
Clouds and Tools: Cheat Sheets & Infographics
Containers
Virtual Machine Versus Container……
A “container“ delivers an application with all the libraries, environments and
dependencies needed to run.
Containers
Containers vs VMs
 Containers are more lightweight.
 No need for a guest OS.
 Less resources.
 Greater portability
 Faster
• The Life of a Container
– Conception
• BUILD an Image from a Dockerfile
– Birth
• RUN (create+start) a container
– Reproduction
• COMMIT (persist) a container to a new image
• RUN a new container from an image
– Sleep
• KILL a running container
– Wake
• START a stopped container
– Death
• RM (delete) a stopped container
• Extinction
– RMI a container image (delete image)
Docker Container Lifecycle ……
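The same lifecycle end to end as shell commands (a sketch; image and container names are placeholders):
docker build -t myapp:1.0 .            # conception: image from a Dockerfile
docker run -d --name web myapp:1.0     # birth: create + start a container
docker commit web myapp:1.1            # reproduction: persist the container as a new image
docker kill web                        # sleep: kill the running container
docker start web                       # wake: start it again
docker stop web && docker rm web       # death: remove the stopped container
docker rmi myapp:1.0 myapp:1.1         # extinction: delete the images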
• Kernel Feature
• Groups of processes
• Control resource allocations
– CPU
– Memory
– Disk
– I/O
• May be nested
Linux Cgroups ……
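Docker exposes these cgroup resource controls as run flags; a small sketch (limit values are examples, and the cgroup path varies by distro and cgroup version):
docker run -d --name capped --memory 512m --cpus 1.5 --blkio-weight 300 nginx
# The limits appear as plain cgroup files on the host (cgroup v1 layout shown)
cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes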
• Kernel Feature
• Restrict your view of the system
– Mounts (CLONE_NEWNS)
– UTS (CLONE_NEWUTS)
• uname() output
– IPC (CLONE_NEWIPC)
– PID (CLONE_NEWPID)
– Networks (CLONE_NEWNET)
– User (CLONE_NEWUSER)
• Not supported in Docker yet
• Has privileged/unprivileged modes today
• May be nested
Linux Kernel Namespaces ……
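A hedged illustration of what these namespaces buy you, and how Docker lets you opt back out of them:
docker run --rm alpine ps aux    # own PID namespace: sees only its own processes
docker run --rm alpine ip addr   # own network namespace: sees only its own (veth) interfaces
# Opting into the host's namespaces removes that isolation
docker run --rm --pid=host --net=host alpine ps aux   # sees every process on the host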
Dockerfile
Build
git clone https://github.com/dockerfile/nginx.git
docker build -t="dockerfile/nginx" github.com/dockerfile/nginx
Run
docker run dockerfile/nginx
• Like a Makefile (shell script with keywords)
• Extends from a Base Image
• Results in a new Docker Image
• Imperative, not Declarative
 A Dockerfile lists the steps needed to build an image
• docker build is used to run a Dockerfile
• Can define default command for docker run, ports to expose, etc
Dockerfile ……
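A minimal, self-contained sketch of that workflow (image and tag names are examples):
# Write a small Dockerfile, build it, and run the result
cat > Dockerfile <<'EOF'
# Extends from a base image
FROM ubuntu:14.04
# Imperative build steps
RUN apt-get update && apt-get install -y curl vim
# Default port to expose and default command for docker run
EXPOSE 8080
CMD ["bash"]
EOF
docker build -t myuser/mytools:1.0 .
docker run -it --rm myuser/mytools:1.0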
Docker CLI Commands
(v1.1.2)……
Methods of building images
• Three ways
– Commit changes from a container as a new image
– Build from a Dockerfile
– Import a tarball into Docker as a standalone base layer
Building a Docker Image
(diagram) Interactive building: load a base image (disk) into a container (memory), run the installation procedure, then commit the container as a new image (disk). Building from a Dockerfile: docker build applies the installation script in the Dockerfile to the base image and produces the new image directly.
Docker Commit
• The docker commit command saves changes in a container as a new image
• Syntax:
docker commit [options] [container ID] [repository:tag]
• The repository name should be based on username/application
• You can reference the container by name instead of ID
Example – save the container with ID 984d25f537c5 as a new image in the repository johnnytu/myapplication, tagged 1.0:
docker commit 984d25f537c5 johnnytu/myapplication:1.0
Interactive building example: vim and curl
$ docker run -t -i ubuntu:14.04
root@2a896c8cdd83:/# apt-get install -y curl
root@2a896c8cdd83:/# apt-get install -y vim
root@2a896c8cdd83:/# exit
$ docker commit -m "test" 2a896c8cdd83 azab/test:1.0
Dockerfile
Intro to Dockerfile
• Provides a more effective way to build images compared to using docker commit
• Easily fits into your development workflow and your continuous integration and deployment process
A Dockerfile is a configuration file that contains instructions for building a Docker image
Building a Docker Image from a Dockerfile
Place the Dockerfile (and an optional .dockerignore) alongside the files in <source-directory>, then run:
$ docker build -t <image-name> <source-directory>
Docker APIs - Python
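The Python SDK (like every other client) talks to the Docker Engine REST API; a hedged sketch of poking that same API directly over the local Unix socket:
# List running containers straight from the Engine API (equivalent of `docker ps`)
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
# Engine/version information
curl --unix-socket /var/run/docker.sock http://localhost/version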
Docker - How it works
Images are self-sufficient
It’s possible to build a container on OS X and use it on a secure server
Clouds and Tools: Cheat Sheets & Infographics
Docker on 2 Servers
(diagram) Two servers (Srv 1, Srv 2), each with multiple compute nodes running the Docker Engine, pull images from a local registry, which in turn mirrors public Docker repositories.
Docker on the cluster – Swarm
(diagram) A Swarm manager fronts a Swarm cluster of compute nodes, each running the Docker Engine; nodes pull images from a local registry that mirrors public Docker repositories.
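A minimal sketch of standing up such a Swarm cluster with the built-in tooling (addresses, the join token and service names are placeholders):
# On the manager node
docker swarm init --advertise-addr 10.0.0.10          # prints a join token
# On each compute node
docker swarm join --token <token-from-init> 10.0.0.10:2377
# Back on the manager: run a replicated service across the cluster
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls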
Docker containers on VMs – connecting to the cluster
(diagram) A Docker container runs inside a VM alongside tools; a Stroll job runner submits work over a virtual path to the cluster scheduler and its workers (W), with job input data read from and job output data written to the project area (/cluster/..., /var/proj/data) on the Colossus/Stroll file system.
Containers Use Case – Microservices
What is a Microservices Architecture?
Application architected as a suite of small services, each running in its own
process, and communicating with lightweight mechanisms e.g. REST/HTTP
Services built around business capabilities
Each service independently deployable via automation
Minimal centralized governance
 May be written in different languages
 May use different data storage technologies
Challenges with Microservices Architecture
Cultural
 Embracing a DevOps culture
 Agility required from inception through to deployment – not just
development
 Ensuring autonomy does not preclude sharing
Technological
 Distributed systems are hard – introduce network latency, fault
tolerance, serialization, …
 Automation needed everywhere
 Keeping latency down
 Designing decoupled non-transactional systems is hard
 Service versioning
Why Microservices?
• Agility
 Services evolve independently and at difference
speeds
 Easier to adopt new technology and evolve
architecture
 Enables continuous delivery
• Resilience
 Use services boundaries for fault tolerance and
isolation
 Design for failure
• Runtime scalability
 Stateless services designed for horizontal scalability
 Services can be scaled independently
• Scalability of the development organisation
 Easier to develop services in parallel
 Smaller working set for each developer
Microservices misconceptions
• Microservices do not require Docker containers
• Docker containers do not have to be microservices
• Containers assist with portability, maintenance, and
deployment; hence a natural choice for microservices
Moving from monolithic applications to microservices
(diagram) Scaling a monolithic app vs scaling microservices.
 Package your app to run virtually anywhere, including Bluemix
• Cloud Foundry – Bluemix foundation that provides developers the ability to quickly
compose their apps without worrying about the underlying infrastructure as these services
run in secure droplet execution agent (DEA) environments. The Bluemix catalog consists of
over 100 selections.
• IBM Containers – Provides portability and consistency regardless of where your app is run—
be it on bare metal servers in Bluemix, your company's data center, or on your laptop.
Easily deploy containers from IBM’s hosted image hub or from your own private registry.
• Virtual Machines – Offers the most control over your apps and middleware. The virtual
machine contains the complete operating system and application, running on virtualized
hardware that is provided by Bluemix.
Deploying to the Cloud in a repeatable way
Summer
2015
 Same great services, no matter where your app runs
• Bluemix Public – World class enterprise PaaS in the
public cloud
• Bluemix Dedicated – Your own PaaS private cloud,
that’s securely connected to both the public Bluemix
and your own network.
• Bluemix Local – Bring cloud agility to even the most
sensitive workloads in your data center. Delivered as a
fully managed service behind your firewall.
Service
Existing services on Bluemix, they can be either public or private ones only visible within the organization. An application can be made into a
service following an on boarding process.
Application
Basic unit of deployment in Bluemix. It may include multiple services, public or private. It cannot include other application.
It's recommended to use a traditional application architecture (a.k.a. monolithic) within an application.
It's the basic unit of red/black deployment.
[Do we want this?] An application can be made into a system. Doing so, the original application will become the first app in the system.
System (We coined this)
A special kind of application that follows the MSA (microservices) architecture, or a multi-tier application architecture.
It can integrate other applications(micro services) & services.
Containers (Docker)
• Dockerfile
• A text doc that contains all the commands to build a Docker image.
• Docker Image
• The building block from which containers are launched. An image is the read-only layer that never changes. Images can be created
based on the committed containers.
• Docker Container
• A running instance, generated from a Docker image. Self-contained environment built from one or more images
• Information available at the Container level includes image from which it is generated, memory used, ip address assigned it, etc.
• Container Group
• A group of containers, which all share the same image.
• Docker Registry
• A registry server for Docker that helps hosting and delivery of repositories and images.
• Layer
• Each file system that is stacked when Docker mounts rootfs
• Repository
• Set of images on local Docker or registry server.
Terminology
Typical Docker Pull Data Flow
Typical Docker Run Data Flow
Clouds and Tools: Cheat Sheets & Infographics
Kubernetes is an ecosystem...
Source: Redmonk - http://redmonk.com/sogrady/2017/09/22/cloud-native-license-choices/
Source: Shippable.com http://blog.shippable.com/why-the-adoption-of-kubernetes-will-explode-in-2018
kubernetes won the container orchestration war...
vm vs container
(diagram) VMs: hardware → hypervisor → per-VM guest OS → libs → app. Containers: hardware → one OS → per-container libs → app (no guest OS per instance).
pets vs cattle
Pets: long lived; you name them; you care for them.
Cattle: ephemeral; brand them with numbers; well... vets are expensive.
(diagram) Build (git, cc/ld, java/jar, docker build) → Package (config, libraries, resources; helm package) → Construct and Deploy (helm install/scale, behind a load balancer).
Cloud as-is: No unified data access or security concepts
(diagram) An application written against one cloud’s API runs at the edge, in the private cloud (on premise) and on one public cloud, but other public clouds expose different APIs behind an API connector.
Multi-cloud strategy challenges:
• Complex data movement between clouds
• On any other cloud: different APIs (the application breaks) and a different security concept
Creating a Global Filesystem
(diagram) A single /mapr namespace spans edge clusters (/mapr/edge1-3), on-premise clusters (/mapr/newyork, /mapr/amsterdam) and public clouds (/mapr/azure, /mapr/gcp, /mapr/aws-eu-west), exposed through NFS, POSIX, HDFS, REST, Kafka, JSON, HBASE, SQL and S3 interfaces with HOT/WARM/COLD tiers — giving applications global access to local data.
Creating an “Ubernetes” Platform
(diagram) A global data management layer spans edge, private cloud (on premise) and public clouds; Kubernetes pods (e.g. classic ETL, image classification using TensorFlow in a Docker container) are scheduled and scaled anywhere, with the MapR Kubernetes volume driver providing storage and a single pane of glass to control jobs anywhere.
Kubernetes Architecture
Kubernetes Scaling Architecture
Kubernetes, Docker & Infrastructure
 You don’t have to worry about the
infrastructure
 The entire design of pods and services
is described with YAML files
 Nothing in deployments, pod
management, service discovery,
monitoring, etc required any
knowledge about how many servers,
IP addresses, load balancers, or
anything else with the infrastructure
 Behind the scenes, Kubernetes is
aware of all of the servers available,
load balancers, application gateways
and will configure them automatically
according to what is in the YAML files
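A small sketch of what “everything is described with YAML” looks like in practice (names and image are placeholders); Kubernetes picks the servers and wires up the load balancer itself:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
      - name: web
        image: nginx:1.21
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer          # Kubernetes configures the load balancer behind the scenes
  selector: { app: web }
  ports:
  - port: 80
    targetPort: 80
EOF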
Cloud Native Docker Container
Cloud
 Supporting a new Cloud Native DevOps
Docker model with a Scale Out
Infrastructure
 Modernizing Hundreds of Websphere
Apps on Power providing services both
to internal employees and external
clients
 Embracing Open Source Technologies
like Docker, Mongo, Redis etc.
 Cooperatively Integrating Open Source Components to deliver a complete Container Cloud Service
(diagram) Power compute node cloud (approx. hundreds of systems) running a Kubernetes container management service: web apps and data services (Mongo, Redis, SQL DBs and other open-source tooling and SW) run as Docker containers on RedHat 7.x LE Linux O/S & KVM with SDN, a registry, an operations dashboard and a registry UI; user applications (internal and external) obtain containers and data services through a self-service developer portal.
Use Case
Open Source Options for Container Cloud Orchestration on Power
Docker Swarm / Datacenter (Docker Inc)
• Strengths: built in to the Docker 1.12 engine; easy to use for small clouds
• Weaknesses: full Docker Datacenter not on Power yet
Mesos (Mesosphere)
• Strengths: good for batch and analytics; lots of apps in the catalog
• Weaknesses: less usage in web applications; requires the Marathon framework for web apps
Kubernetes (Google)
• Strengths: lots of industry usage and experience for web apps; synergy with other parts of the client's business for x86 container management
• Weaknesses: significant integration of many components for a production cloud
Kubernetes Cluster Components
(diagram) On RHEL 7 LE hardware: a Kubernetes master node (with etcd and Heapster), Kubernetes slave nodes (docker, cAdvisor, flannel, app containers), a Docker private registry node, InfluxDB, a Grafana dashboard for showing utilizations and the Kubernetes dashboard for cluster management, connected over separate data and management networks.
• Storage – provides persistent storage for Docker containers and the private registry
• Docker Private Registry – provides a central on-premise repository of dockerized images
• Heapster – provides cluster-wide monitoring from cAdvisor data across multiple Kubernetes slaves
• Kubernetes – container orchestration platform
• Etcd – provides key-value storage for Kubernetes
• RHEL – base operating system for hosting containers
• Dashboards – provide a self-service UI and monitoring views
Kubernetes Component Interaction
(diagram) Clients reach an F5 load balancer in the client environment; behind the firewall, Environment-1 holds the K8s master and slaves and Environment-2 further K8s slaves, plus a Docker private registry and the Flannel overlay network.
• An F5 virtual IP (VIP) and port is configured for the K8s master, the K8s slaves and the etcd distributed key-value store
• Any direct communication between servers in Environment-1 and Environment-2 needs to be explicitly allowed by firewall rules
• K8s master and slaves are configured to use the Flannel overlay network for pods
• Heapster/InfluxDB/Grafana is used for K8s resource monitoring
• Ingress (with Nginx) is used for exposing services to clients
Integration with Enterprise LDAP Server
(diagram) Kubernetes → Keystone → existing LDAP.
• Kubernetes uses namespaces to partition the cluster among multiple users
• Three steps to access: Authentication, Authorization, Admission Control
• Authorization defines what an authenticated user can and can't do:
– AlwaysDeny: used only for testing
– AlwaysAllow: used only for testing
– ABAC: attribute-based access control
– Webhook: calls out to an external authorization service via a REST call
• ABAC-based authorization
• Auth policies need to be created for every user and can be changed only by an API server restart
• Every user gets their own namespace
• Read/write access to their own namespace
• Read access to the default (global) namespace
• Kubernetes supports the OpenStack Keystone component for authentication
• Keystone provides LDAP/AD integration
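The per-user namespace scheme above comes down to a few kubectl operations (user, cluster and namespace names are examples; the ABAC policy file format and API-server flags are cluster-configuration details not shown here):
# Give user "alice" her own namespace and default her kubectl context to it
kubectl create namespace alice
kubectl config set-context alice-ctx --cluster=mycluster --user=alice --namespace=alice
# Verify what she is allowed to do
kubectl auth can-i create pods --namespace alice --as alice
kubectl auth can-i create pods --namespace default --as alice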
Container Architecture
(diagram) A boot node runs an Ansible-based installer and ops manager. The master node (reached via VIPs) hosts the km apiserver, km scheduler and km controller manager plus the Mesos master, MySQL, haproxy, etcd, the GUI, cfc-auth/Keystone (backed by an LDAP server), cfc-router, image-mgr, appstore, network mgr, Heapster and master mgr. Each agent node runs a Mesos agent, km agent, km proxy, Flanneld, Kube-DNS and Docker, hosting the pods.
Infrastructure Resource Aggregation
(diagram) Infrastructure management: xCAT bare-metal provisioning and a generic public cloud adapter feed cluster deployment for PaaS and BD&A workloads, with infrastructure discovery, an image registry (OS, VM, container), a SW repository, logging/metrics, alerting & policy, authentication, load balancing and DevOps tooling.
1. Simplify IT operations — discover bare metal and quickly deploy the environment on demand (bare metal, virtualization or hybrid)
2. Increase resource utilization — fine-grained, dynamic allocation of resources maximizes efficiency of servers (bare metal and VMs) sharing a common resource pool
3. Reduce administration costs — proven architecture at extreme scale, with enterprise-class infrastructure management, monitoring, reporting and security capabilities
Deliver an Agile Containerization Infrastructure in Enterprise
(diagram) IBM Spectrum Cluster Foundation orchestrates server, storage and network resources from cluster templates via xCAT: Conduct(or) Cluster #1 runs the Docker Engine with Spectrum Scale on bare-metal operating systems and can elastically scale in/out; an OpenStack (KVM) virtualization pool hosts the VMs for Conductor Cluster #2 pods; the lifecycle covers design, deploy, provisioning, monitoring & health, upgrade, scale and automation.
Benefits
• Auto-deploy a customized OpenStack to offer the virtualization pools
• Auto-deploy two container management environments, on both bare metal and virtual machines
• Easy to adjust the size of the container management environments to balance the workload
• Building up multi-tenant management based on LDAP
Kubernetes Analysis: 2 types of containers
“Dumb” (no HA, no Autoscale) = Pod Template (kind: “Pod”)
The ReplicationController template (kind: “ReplicationController”) below provides the “smart” variant:
id: redis
kind: ReplicationController
apiVersion: v1beta1
desiredState:
replicas: 1
replicaSelector:
name: redis
podTemplate:
desiredState:
manifest:
version: v1beta1
id: redis
containers:
- name: redis
image: kubernetes/redis:v1
cpu: 1000
ports:
- containerPort: 6379
volumeMounts:
- name: data
mountPath: /redis-master-data
volumes:
- name: data
source:
emptyDir: {}
labels:
name: redis
id: redis-master
kind: Pod
apiVersion: v1beta1
desiredState:
manifest:
version: v1beta1
id: redis-master
containers:
- name: master
image: kubernetes/redis:v1
cpu: 1000
ports:
- containerPort: 6379
volumeMounts:
- name: data
mountPath: /redis-master-data
env:
- key: MASTER
value: "true"
- name: sentinel
image: kubernetes/redis:v1
ports:
- containerPort: 26379
env:
- key: SENTINEL
value: "true"
volumes:
- name: data
source:
emptyDir: {}
labels:
name: redis
role: master
redis-sentinel: "true"
Approach:
• Reuse existing TOSCA normative node, capability and relationship types where possible
• Model Kubernetes types (for now), then model similar container managers like Swarm, etc. and look for common base types and properties that can be abstracted.
“Smart” (HA, Scaling) =
ReplicationController Template
Kubernetes.Pod
tosca.groups.Placement
derived_from: tosca.groups.Placement
version: <version_number>
metadata: <tosca:map(string)>
description: <description>
properties: TBD
attributes: TBD
# Allow get_property() against targets
targets:
[ tosca.nodes.Container.App.Kubernetes ]
kind: “Pod”
(a Template of type “Pod”)
id: redis-master
kind: Pod
apiVersion: v1beta1
desiredState:
manifest:
version: v1beta1 (non-numeric)
id: redis-master
containers:
-------------------------------------------------------------------------------------
-------------
- name: master (TOSCA template name)
image: kubernetes/redis:v1 (TOSCA Container.App; create
artifact of type image.Docker)
cpu: 1000 (TOSCA Container capability; num_cpus,
cpu_frequency)
ports: (TOSCA EndPoint capability)
- containerPort: 6379 (TOSCA Endpoint; port, ports)
volumeMounts: (TOSCA Attachment capability)
- name: data
mountPath: /redis-master-data (TOSCA AttachesTo Rel.;
location)
env:
- key: MASTER
value: "true" # passed as Environment vars to instance
-----------------------------------------------------------------------------------------
-------
- name: sentinel
image: kubernetes/redis:v1
ports:
- containerPort: 26379
env:
- key: SENTINEL
value: "true” # passed as Env. var.
-----------------------------------------------------------------------------------------
-------
volumes:
- name: data
source:
labels:
name: redis
role: master
redis-sentinel:
"true"
Kubernetes Analysis: Pod Modeling: TOSCA Type mapping
• A Pod is an aggregate of Docker Container Requirements of 1..N homogenous Container (topologies)
TOSCA Types for Kubernetes:
“Redis-master” Template of Kubernetes “Pod”
Type:
Kubernetes.Container
tosca.nodes.Container.App
derived_from: tosca.nodes.Container.App
metadata: <tosca:map(string)>
version: <version_number>
description: <description>
properties:
environment: <tosca:map of string>
requirements:
- host:
# hosted on kubelets
type: Container.Runtime.Kubernetes
- ports:
capability: EndPoint
properties: ports, ports, etc.
- volumes:
capability: Attachment
relationship: AttachesTo
properties: location, device
occurrences: [0, UNBOUNDED]
redis-master-pod
Kubernetes.Pod
type: tosca.groups.Placement
version: 1.0
metadata:
name: redis
role: master
redis-sentinel: true
targets:
[ master-container,
sentinel-container ]
Kubernetes Analysis: Pod Modeling: TOSCA Template Mapping: Simple “Group
Approach”:
• Using the Types defined on the previous slide the TOSCA Topology Template looks like this for “redis-master”
TOSCA Topology for Kubernetes
“:
“Redis-master” Template of Kubernetes “Pod”
Type:
master-container
Kubernetes.Container
derived_from: Kubernetes.Container
metadata: <tosca:map(string)>
version: <version_number>
description: <description>
artifacts: kubernetes/redis:v1
properties:
requirements:
- host:
properties:
num_cpus: 1000 ?
- port:
capability: EndPoint
properties:
port: 6379
- volume:
capability: Attachment
relationship: AttachesTo
properties: location, device
occurrences: [0, UNBOUNDED]
interfaces:
inputs:
MASTER: true
kind: “Pod”
(a Template of type “Pod”)
id: redis-master
kind: Pod
apiVersion: v1beta1
desiredState:
manifest:
version: v1beta1 (non-numeric)
id: redis-master
containers:
-------------------------------------------------------------------------------------
-------------
- name: master
image: kubernetes/redis:v1
cpu: 1000
ports:
- containerPort: 6379
volumeMounts:
- name: data
mountPath: /redis-master-data
env:
- key: MASTER
value: "true" # passed as Environment vars to instance
-----------------------------------------------------------------------------------------
-------
- name: sentinel
image: kubernetes/redis:v1
ports:
- containerPort: 26379
env:
- key: SENTINEL
value: "true” # passed as Env. var.
-----------------------------------------------------------------------------------------
-------
volumes:
- name: data
source:
emptyDir: {}
labels:
name: redis
role: master
redis-sentinel:
"true"
sentinel-container
Kubernetes.Contain
er
implied “InvitesTo”
Relationship
implied “InvitesTo”
Relationship
Issue: location property
lost as there is no
“AttachesTo” relationship
in the topology.
Create new Capability
Type?
derived_from:
Kubernetes.Container
...
...
...
Issue: Are there more
than 1 volumes / mount
points allowed?
Choice: or use
Docker.Runtime type to
allow use of template
on Swarm, etc.?
redis-master-pod
Kubernetes.Pod
type: tosca.groups.Placement
sources:
[ master-container,
sentinel-container ]
Membership (MemberOf) direction is wrong for management (group):
TOSCA Groups
master-container
Kubernetes.Contai
ner
sentinel-container
Kubernetes.Contain
er
implied “MemberOf”
Relationship
implied “MemberOf”
Relationship
derived_from:
Kubernetes.Container
...
...
...
tosca.capabilities.Container.Docker:
derived_from: tosca.capabilities.Container
properties:
version:
type: list
required: false
entry_schema: version
publish_all:
type: boolean
default: false
required: false
publish_ports:
type: list
entry_schema: PortSpec
required: false
expose_ports:
type: list
entry_schema: PortSpec
required: false
volumes:
type: list
entry_schema: string
required: false
However: We do not want to “buy into” Docker file as a Capability Type:
Old Style: Docker capability type
that mirrors a Dockerfile:
Instead we want to use
Endpoints (for ports) and
Attachments (for volumes)
This allows Docker, Rocket and
containers
to be modeled with other TOSCA nodes
(i.e., via ConnectsTo) and leverage
underlying Compute attached
BlockStorage
TBD: Need to show this
tosca.groups.Placement
tosca.groups.Root
derived_from: tosca.groups.Placement
version: <version_number>
metadata: <tosca:map(string)>
description: <description>
properties: TBD
attributes: TBD
# Allow get_property() against targets
targets:
[ Container.App.Docker,
Container.App.Rocket, ... ]
Kubernetes Pod reuses “Docker” Container.App type which can now reference other Container.App types like
Rocket (Rkt)
Container.App.Docker
tosca.nodes.Container.App
derived_from: tosca.nodes.Container.App
metadata: <tosca:map(string)>
version: <version_number>
description: <description>
capabilities:
Container.App:
attribute:
response_time:
properties:
environment: <tosca:map of string>
requirements:
- host:
capability: Container.Docker
type: Container.Runtime.Kubernetes
- ports:
capability: EndPoint
properties: ports, ports, etc.
- volumes:
capability: Attachment
relationship: AttachesTo
properties: location, device
occurrences: [0, UNBOUNDED]
• There is no need for a “Kubernetes” Runtime type, just use the real
Container’s built-in runtime requirement
• (don’t care to model or reference Kubelets)
• Homogeneous Pods/Containers for Kubernetes is still an issue, but
• this is a current Kubernetes limitation
• (heterogeneous is possible in future)
Policies:
• Security,
• Scaling,
• Update,
• etc.
“AppliesTo” group (members)
• i.e., targets
• Not using “BindsTo” as that implies it is coupled to an implementation
BETTER: We do not need to define Kubernetes specific Types (reuse Docker types) :
Container.App.Rocket
Container.APP
derived_from:
Kubernetes.Container
...
...
...
Event Type (new):
<event_type_name>:
derived_from: <parent_event_type>
version: <version_number>
description: <policy_description>
Policy Definition
  <policy_name>:
    type: <policy_type_name>
    description: <policy_description>
    properties: <property_definitions>
    # allowed targets for policy association
    targets: [ <list_of_valid_target_templates> ]
    triggers:
      <trigger_symbolic_name_1>:
        event: <event_type_name>
        # TODO: allow a TOSCA node filter here
        # required node (resource) to monitor
        filter:
          node: <node_template_name> | <node_type>
          # used to reference another node related to the node above via a relationship
          requirement: <requirement_name>
          # optional capability within the node to monitor
          capability: <capability_name>
        # required clause that compares an attribute of the identified node or capability against some condition
        condition: <constraint_clause>
        action:
          # a) define new TOSCA normative strategies per policy type and use them here, OR
          # b) allow domain-specific names
          <operation_name>:   # (no lifecycle)
            # TBD: do we care about validation of types? If so, we should use a TOSCA Lifecycle type
            description: <optional_description>
            inputs: <list_of_property_assignments>
            implementation: <script> | <service_name>
      <trigger_symbolic_name_2>:
        ...
      <trigger_symbolic_name_n>:
        ...
TOSCA Policy – Entities that compose a Policy (Event, Condition, Action) model:
Event: the name of a normative TOSCA Event Type.
Condition: described as a constraint on an attribute of the node (or capability) identified by the filter.
Action: describes either a) a well-known strategy or b) an implementation artifact (e.g., a script or service) to invoke, with optional property definitions as inputs (to either choice).
<filter_name>:
  properties:
    - ...
  capabilities:
    - ...
Possible TOSCA Metamodel and Normative Type additions
Node Types, Relationship Types:
  <node_type_name>:
    metadata:   # map of string; allows tags / labels for searching the instance model
    derived_from: <parent_node_type_name>
    version: <version_number>
    description: <node_type_description>
    properties:
      <property_definitions>
    attributes:
      <attribute_definitions>
    requirements:
      - <requirement_definitions>
    capabilities:
      <capability_definitions>
    interfaces:
      <interface_definitions>
    artifacts:
      <artifact_definitions>
tosca.capabilities.Container
  tosca.capabilities.Container:
    derived_from: tosca.capabilities.Root
    properties:
      num_cpus:
        type: integer
        required: false
        constraints:
          - greater_or_equal: 1
      cpu_frequency:
        type: scalar-unit.frequency
        required: false
      disk_size:
        type: scalar-unit.size
        required: false
      mem_size:
        type: scalar-unit.size
        required: false
    attributes:
      utilization:
        description: referenced by scaling policies
        type: # float (percent) | integer (percent) | scalar-percent ?
        required: no ?
        constraints:
          - in_range: [ 0, 100 ]
TOSCA Policy Definition
  my_scaling_policy:
    type: tosca.policies.scaling
    properties:   # normative TOSCA properties for scaling
      min_instances: 1
      max_instances: 10
      default_instances: 3
      increment: 1
    # target the policy at the "Pod"
    targets: [ redis-master-pod ]
    triggers:
      resize_compute:   # symbolic name
        event: tosca.events.resource.utilization
        filter:
          node: master-container
          requirement: host
          capability: Container
        condition: utilization greater_than 80%
        action:
          # map to SENLIN::ACTION::RESIZE
          RESIZE_BEST_EFFORT:   # logical operation name
            inputs:             # optional input parameters
              number: 1
            implementation: <script> | <service_name>
  ...
TOSCA Policy Mapping – Example: Senlin "scaling_out_policy_ceilometer.yaml"
(using the Kubernetes "redis" example from earlier slides, with its pod and container)
• Target: a Kubernetes Pod of the tosca.groups.Placement type.
• TODO: TOSCA needs a percent (%) data type.
• Event: a TOSCA normative event type (name) that would map to domain-specific names (e.g., OpenStack Ceilometer).
• Trigger: a symbolic name for the trigger; it could be used to reference an externalized version, but that would violate the Policy's integrity as a "security document".
• Filter: finds the attribute via the topology: a) navigate to the node (directly or via the requirement name) and optionally to the capability name; b) the condition is mapped to and registered with the target monitoring service (e.g., Ceilometer). In other words, the "node", "requirement", "capability" and "condition" keys describe the node to attach an alarm | alert | event to, expressed as a descriptive "filter".
• Action: the Senlin "Action" SENLIN:ACTION:RESIZE is combined with the BEST_EFFORT strategy into the single name RESIZE_BEST_EFFORT; optional input parameters are listed under "inputs".
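For orientation only, a rough sketch of what the Senlin side of such a mapping might look like; the field names follow Senlin's senlin.policy.scaling spec, but the concrete values are assumptions rather than the contents of the referenced file:

type: senlin.policy.scaling
version: 1.0
properties:
  event: CLUSTER_SCALE_OUT        # fired when the utilization alarm (e.g., Ceilometer/Aodh) triggers
  adjustment:
    type: CHANGE_IN_CAPACITY      # corresponds to the RESIZE action
    number: 1                     # the "number: 1" input from the TOSCA trigger
    best_effort: True             # corresponds to the BEST_EFFORT strategy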
Kubeflow Architecture
Kubeflow Architecture (continued)
Clouds and Tools: Cheat Sheets & Infographics
Memory Hierarchy: Past, Present and Future
https://blog.dellemc.com/en-us/memory-centric-architecture-vision/
New Memory Usage Paradigm
https://blog.westerndigital.com/in-memory-computing-scale-ultrastar-memory-drive/
Motivation: Memory Access Data Structures
https://www.gridgain.com/resources/papers/introducing-apache-ignite
Memory-Centric Data Process
https://www.eckerson.com/articles/diving-into-dataops-the-underbelly-of-modern-data-pipelines
Database Type Decision Tree
https://www.nuodb.com/digging-distributed-sql
Hybrid Transactional/Analytical Processing (HTAP)
https://medium.com/@ckayay/how-to-pick-the-right-database-c2539efe2589
Choosing the right IMC Technology
https://www.gridgain.com/
Uber Horovod: Main Mechanism
https://eng.uber.com/horovod/, https://www.slideshare.net/databricks/horovod-ubers-open-source-distributed-deep-learning-framework-for-tensorflow
Exchange of (averaged) gradients for distributed learning.
Apache Kafka (can be replaced by Pulsar)
https://twitter.com/PoetterThomas/status/1203472185135960066?s=20
Apache Pulsar
https://pulsar.apache.org/docs/en/concepts-architecture-overview/
Apache Pulsar: Tool Integration
https://jack-vanlightly.com/blog/2018/10/2/understanding-how-apache-pulsar-works
Apache Pulsar: DbLedgerStorage
https://jack-vanlightly.com/blog/2018/10/2/understanding-how-apache-pulsar-works
Apache Pulsar: Round-up of Concepts
https://jack-vanlightly.com/blog/2018/10/2/understanding-how-apache-pulsar-works
Memcached
https://dev.mysql.com/doc/refman/8.0/en/innodb-memcached-intro.html
Memcached Use
https://blogs.oracle.com/cloud-infrastructure/deploying-a-highly-available-memcached-cluster-on-oracle-cloud-infrastructure
Use of Redis Cache in AWS (Ichnaea)
https://ichnaea.readthedocs.io/en/latest/deploy.html
Apache Ignite
https://www.youtube.com/watch?v=eMs_2vEsbBk
Apache Ignite
https://www.slideshare.net/Codemotion/an-introduction-to-apache-ignite-mandhir-gidda-codemotion-rome-2017
Apache Ignite: In-Memory Capabilities
https://www.slideshare.net/Codemotion/an-introduction-to-apache-ignite-mandhir-gidda-codemotion-rome-2017
Ignite & GridGain based on it
https://www.youtube.com/watch?v=zVQ2clIoxIQ
GridGain Functionality / Use Cases
https://www.youtube.com/watch?v=rDX_ialHfkU
GridGain typical IMC Architecture
https://www.youtube.com/watch?v=rDX_ialHfkU
GridGain Enterprise Edition
https://www.youtube.com/watch?v=jJQ_R6ICqW4
Alluxio 2.x
https://www.alluxio.io/blog/building-a-cloud-native-analytics-mpp-database-with-alluxio/
Alluxio 2.0
https://www.alluxio.io/blog/building-a-cloud-native-analytics-mpp-database-with-alluxio/
VoltDB
https://www.voltdb.com/blog/2017/05/24/mifidii-youre-wrong/sarah-mifid/
VoltDB claims to be the only enterprise-grade data platform that meets the real-time streaming data requirements of 5G-powered applications, and it raised a $10 million Series C round in October 2019.
VoltDB
https://www.voltdb.com/product/data-architecture/oltp/
Kinetica
https://www.itbusinessedge.com/blogs/it-unmasked/kinetica-makes-case-for-in-memory-database-hosted-on-gpus.html
Hazelcast: Example Use in Digital Transformation
https://hazelcast.com/use-cases/digital-transformation/
Hazelcast IMDG Architecture
https://hazelcast.com/use-cases/digital-transformation/
Gigaspaces XAP Architecture
https://www.gigaspaces.com/products/xap/
Red Hat JBoss Data Grid
https://developers.redhat.com/blog/2017/02/20/unlock-your-red-hat-jboss-data-grid-data-with-red-hat-jboss-data-virtualization/
Red Hat JBoss Data Grid
https://www.slideshare.net/opensourcementor/jdv-big-data-summit-final
Pivotal GemFire
https://www.journaldunet.com/solutions/cloud-computing/1148801-comparatif-quatre-distributions-hadoop-au-crible/1148806-pivotal-data-suite
Corporate Memory Architecture
(Architecture diagram; key elements recovered from the slide text:)
• Internal and External Sources: CMS, GALA, external data, source systems, Fleet, WF-I IP.
• Inbound Layer: Kafka / Flume ("Flafka") for streaming, batch and CDC ingestion; Flume / Sqoop; MFT for the initial load. Additional ingestion tool: perhaps HDF/NiFi; transformation support tool: Talend or Diff-DB.
• Processing and Storage Layer ("Corporate Memory"): data integration / validation, data governance, data provisioning; process orchestration / error handling / monitoring / metadata management / security; Data Lake partitioned per legal entity (LE1, LE2, ..., LEx); common batch processing with YARN, Spark and Hive; a manager component for conversion / format decision / consistency / bitemporality; Source Data Pool on Hive/Spark + ORC/Parquet* (hybrid: attributes + JSON BLOBs), with diff records and historic corrections treated separately; Target Data Queues on Hive*; Spark SQL, DataSets + Streaming, MLlib, ...; Alluxio + Succinct on top of HDFS.
• Outbound Layer: interface for analytical applications via Hive + ORC/Parquet and REST + (Sqoop / Drill / Exhibit / SploutSQL); WebServices; SAP HANA (SAP BA, HANA native); FRDP with Core Warehouse and Data Marts; BP.
• Analytical Applications (Analytics and Reporting, Analytics Service Delivery Platform): R, SAS, SAP Design Studio, Web Intelligence (WebI), UI5, Crystal Reports.
*) Hive/Parquet considered as a complementary technology.
Graph-Based Data Management
class GraphInheritanceDB-Overview (UML class diagram; key elements recovered from the slide text:)
• GraphInheritanceDB is accessed via JDBCWrapper, CommandLine and HiveUDFs; clients include SAP, SAS, R and Sqoop.
• A Dispatcher and a QueryAnalyzer route requests to the HQLStructuralProcessor and the HQLDataProcessor.
• DiffDBCommand and its subclasses (AddNodeCommand, AddNodeAttributeCommand, UpdateNodeAttributeFromTableCommand, DeleteNodeRowCommand, DropNodeCommand, AddEdgeCommand, AddEdgeAttributeCommand, UpdateEdgeAttributeFromTableCommand, DeleteEdgeRowCommand, DropEdgeCommand, InheritsAndExtendsCommand) each expose:
  + dataOp(ArrayList<String>) : void
  + structuralHQLBeforeDataOp(ArrayList<String>) : void
  + structuralHQLAfterDataOp(ArrayList<String>) : void
• Connection (attributes: addIfNotExist : boolean, appendSemicolon : boolean, name : String; operations: connect(String), disconnect(), sendCommand(String)) is specialized by JDBCConnection, HiveCommandProcessorConnection, BeelineConnection and HCatalogConnection.
• Execution runs on Hive or Spark (parallel processing as UDFs); processing goes through a connection that ensures exactly-once semantics.
• Importers/exporters around TypeManagement: XSDImporter and XMLImporter (for XSD / XML), JSONImporter, TypeScriptOrSwaggerOrRAMLImporter (for JSON, TypeScript or Swagger), XMLExporter, XSDExporter, JSONExporter, TypeScriptOrSwaggerOrRAMLExporter, JavaExporterWithHibernateAndJAXBAnnots (using HyperJAXB, Hibernate, JAXB), HibernateHQLDriver, and Exhibit (https://github.com/jwills/exhibit); XML is handled using the Hive XML-SerDe.
• Storage back-ends: ORC, Parquet, HBase, Cassandra, ScyllaDB, PostgreSQL.
The results of the queries should be aggregated efficiently using in-memory technology.
Questions?
Understood?
Comprendes?
→ verstanden.de
→ compris.com, potentialism.net
Further Infographics:
1. https://www.pinterest.de/poetter_thomas/data-science-infographics/
2. https://www.pinterest.de/poetter_thomas/ai-artificial-intelligence-infographics/
3. https://www.pinterest.de/poetter_thomas/deep-learning-infographics/
4. https://www.pinterest.de/poetter_thomas/deep-learning-architecture-elements-architectures-/
5. https://www.pinterest.de/poetter_thomas/explainable-ai-xai-interpretable-machine-learninga/
6. https://github.com/FavioVazquez/ds-cheatsheets
  • 1. Thomas Poetter, Compris Technologies AG 2022
  • 2. Overview / Table of Contents Cheat Sheets: 1. Docker 2. Kubernetes, K8s, K3s, Minikube 3. OpenStack (IaaS) 4. OpenShift (PaaS) Infographics: 1. Overview 2. Microservices 3. AWS 4. Azure 5. GCP 6. Docker 7. Kubernetes 8. In-Memory Data Grids (IMC/IMDGs) and Databases
  • 4. Docker Cheat Sheet 1 https://phoenixnap.com/kb/list-of-docker-commands-cheat-sheet
  • 5. Docker Cheat Sheet 2 https://dockerlabs.collabnix.com/docker/cheatsheet/ docker create [options] IMAGE -a, --attach # attach stdout/err -i, --interactive # attach stdin (interactive) -t, --tty # pseudo-tty --name NAME # name your image -p, --publish 5000:5000 # port map --expose 5432 # expose a port to linked containers -P, --publish-all # publish all ports --link container:alias # linking -v, --volume `pwd`:/app # mount (absolute paths needed) -e, --env NAME=hello # env vars
  • 6. Docker Cheat Sheet 3 https://intellipaat.com/mediaFiles/2019/03/docker-cheat-sheet.jpg
  • 7. Docker Cheat Sheet 4 https://www.docker.com/wp-content/uploads/2022/03/docker-cheat-sheet.pdf
  • 8. Docker Cheat Sheet 5: Logical Docker Container Commands Create a container (without starting it): docker create [IMAGE] Rename an existing container: docker rename [CONTAINER_NAME] [NEW_CONTAINER_NAME] Run a command in a new container: docker run [IMAGE] [COMMAND] docker run --rm [IMAGE] – removes a container after it exits. docker run -td [IMAGE] – starts a container and keeps it running. docker run -it [IMAGE] – starts a container, allocates a pseudo-TTY connected to the container’s stdin, and creates an interactive bash shell in the container. docker run -it-rm [IMAGE] – creates, starts, and runs a command inside the container. Once it executes the command, the container is removed. Delete a container (if it is not running): docker rm [CONTAINER] Update the configuration of one or more containers: docker update [CONTAINER] Starting and Stopping Containers Start a container: docker start [CONTAINER] Stop a running container: docker stop [CONTAINER] Stop a running container and start it up again: docker restart [CONTAINER] Pause processes in a running container: docker pause [CONTAINER] Unpause processes in a running container: docker unpause [CONTAINER] Block a container until others stop (after which it prints their exit codes): docker wait [CONTAINER] Kill a container by sending a SIGKILL to a running container: docker kill [CONTAINER] Attach local standard input, output, and error streams to a running container: docker attach [CONTAINER]
  • 9. Docker Cheat Sheet 6: Logical Docker Image Commands Create an image from a Dockerfile: docker build [URL] docker build -t – builds an image from a Dockerfile in the current directory and tags the image Pull an image from a registry: docker pull [IMAGE] Push an image to a registry: docker push [IMAGE] Create an image from a tarball: docker import [URL/FILE] Create an image from a container: docker commit [CONTAINER] [NEW_IMAGE_NAME] Remove an image: docker rmi [IMAGE] Load an image from a tar archive or stdin: docker load [TAR_FILE/STDIN_FILE] Save an image to a tar archive, streamed to STDOUT with all parent layers, tags, and versions: docker save [IMAGE] > [TAR_FILE] Docker Commands for Container and Image Information List running containers: docker ps docker ps -a – lists both running containers and ones that have stopped List the logs from a running container: docker logs [CONTAINER] List low-level information on Docker objects: docker inspect [OBJECT_NAME/ID] List real-time events from a container: docker events [CONTAINER] Show port (or specific) mapping for a container: docker port [CONTAINER] Show running processes in a container: docker top [CONTAINER] Show live resource usage statistics of containers: docker stats [CONTAINER] Show changes to files (or directories) on a filesystem: docker diff [CONTAINER]
  • 10. Docker Cheat Sheet 7: Logical Networks List networks: docker network ls Remove one or more networks: docker network rm [NETWORK] Show information on one or more networks: docker network inspect [NETWORK] Connects a container to a network: docker network connect [NETWORK] [CONTAINER] Disconnect a container from a network: docker network disconnect [NETWORK] [CONTAINER] Docker Commands for Container and Image Information List all images that are locally stored with the docker engine: docke image ls Show the history of an image: docker history [IMAGE]
  • 11. Docker Cheat Sheet 8: Syntax -e, --env NAME[="value"] Set environment variable. If the value is omitted, the value will be taken from the current environment. --entrypoint "some/entry/point" Overwrite the default ENTRYPOINT of the image -h, --hostname ="<hostname>" Container host name --add-host =<hostname>:<ip> Add a custom host-to-IP mapping --net ="<mode>" Set the network mode for the container (default: bridge): • bridge: create a network stack on the default Docker bridge • none: no networking • container:<name|id>: reuse another container’s stack • host: use the Docker host network stack • <network-name>|<network-id>: connect to a user-defined network --group-add =<groups> Add additional groups to run as --rm Automatically remove the container when it exits --restart ="no|on-failure[:<max-retry>]|always|unless-stopped" Restart policy; default: no --name "foo" Assign a name to the container --detach-keys ="<keys>" Override the key sequence to detach a container. Default: "ctrl-p ctrl-q" $ docker create [<opts>] <image> [<command>] [<arg>...] Create a new container, but don’t run it (instead, print its id). See options for docker run $ docker start [<opts>] <container> [<container>...] Start one or more containers -a, --attach Attach container’s STDOUT and STDERR and forward all signals to the process -i, --interactive Attach container’s STDIN $ docker stop [<opts>] <container> [<container>...] Stop one or more containers by sending SIGTERM and then SIGKILL after a grace period -t, --time [=10] Number of seconds to wait before killing the container Building images $ docker build [<opts>] <path> | <URL> Build a new image from the source code at PATH -f, --file path/to/Dockerfile Path to the Dockerfile to use. Default: Dockerfile. --build-arg <varname>=<value> Name and value of a build argument defined with ARG Dockerfile instruction -t "<name>[:<tag>]" Repository names (and optionally with tags) to be applied to the resulting image --label =<label> Set metadata for an image -q, --quiet Suppress the output generated by containers --rm Remove intermediate containers after a successful build Creating, running and stopping containers $ docker run [<opts>] <image> [<command>] [<arg>...] Run a command in a new container -i, --interactive Keep STDIN open even if not attached -t, --tty Allocate a pseudo-TTY -v, --volume [<host-dir>:]<container-dir>[:<opts>] Bind mount a volume. Options are comma-separated: [ro,rw]. By default, rw is used. --device =<host-dev>:<container-dev>[:<opts>] Add a host device to the container; e.g. --device="/dev/sda:/dev/xvdc:rwm". Possible <opts> flags: r: read, w: write, m: mknod -d, --detach Detached (daemon) mode --env-file file Read in a line delimited file of environment variables
  • 12. Docker Cheat Sheet 9: Syntax --since ="<timestamp>“ Show logs since the given timestamp -t, --timestamps Show timestamps --tail ="<n>“ Output the specified number of lines at the end of logs $ docker wait <container> [<container>...] Block until a container stops, then print its exit code Saving and loading images and containers $ docker save [<opts>] <image> [<image>...] Save one or more images to a tar archive (streamed to STDOUT by default) -o, --output ="" Write to a file instead of STDOUT $ docker load [<opts>] Load image(s) from a tar archive or STDIN. Restores both images and tags -i, --input ="<tar-archive>" Read from a tar archive file, instead of STDIN. The tarball may be compressed with gzip, bzip, or xz. -q, --quiet Suppress the load progress bar $ docker export [<opts>] <container> Export the contents of a container’s filesystem as a tar archive -o, --output ="<file>" Write to a file instead of STDOUT $ docker import [<opts>] <file>|<URL>|- [<repository>[:<tag>]] Create an empty filesystem image and import the contents of the tarball into it, then optionally tag it. -c, --change =[] Apply specified Dockerfile instructions while importing the image; one of these: CMD, ENTRYPOINT, ENV, EXPOSE, ONBUILD, USER, VOLUME, WORKDIR -m, --message ="<msg>" Set commit message for imported image $ docker kill [<opts>] <container> [<container>...] Kill a runing container using SIGKILL or a specified signal -s, --signal [="KILL"] Signal to send to the container $ docker pause <container> [<container>...] Pause all processes within a container $ docker unpause <container> [<container>...] Unpause all processes within a container DOCKER CLI QUICK REFERENCE (continued) Interacting with running containers $ docker attach [<opts>] <container> Attach to a running container --no-stdin Do not attach STDIN (i.e. attach in read-only mode) --detach-keys ="<keys>" Override the key sequence to detach a container. Default: "ctrl-p ctrl-q" $ docker exec [<opts>] <container> <command> [<arg> ...] Run a process in a running container -i, --interactive Keep STDIN open even if not attached -t, --tty Allocate a pseudo-TTY -d, --detach Detached (daemon) mode $ docker top <container> [<ps options>] Display the running processes within a container. The ps options are any options you would give to the ps command $ docker cp [<opts>] <container>:<src path> <host dest path> $ docker cp [<opts>] <host src path> <container>:<dest path> Copy files/folders between a container and the local filesystem. Behaves like Linux command cp -a. It’s possible to specify - as either the host dest path or host src path, in which case you can also stream a tar archive. -L, --follow-link Follow symbol link in source path $ docker logs [<opts>] <container> Fetch the logs of a container -f, --follow Follow log output: it combines docker log and docker attach
  • 13. Docker Cheat Sheet 10: Syntax --no-trunc Don’t truncate output -q, --quiet Only display numeric IDs -f, --filter ="<filter>“ Filter output based on these conditions: • exited=<int> an exit code of <int> • label=<key> or label=<key>=<value> • status=(created|restarting|running|paused|exited|dead) • name=<string> a container’s name • id=<ID> a container’s ID • before=(<container-name>|<container-id>) • since=(<container-name>|<container-id>) • ancestor=(<image-name>[:tag]|<image-id>| image@digest) containers created from an image or a descendant • volume=(<volume-name>|<mount-point-destination>) --format ="<template>“ Pretty-print containers using a Go template, e.g. {{.ID}}. Valid placeholders: • .ID - Container ID • .Image - Image ID •.Command - Quoted command •.CreatedAt - Time when the container was created. •.RunningFor - Time since the container was started. •.Ports - Exposed ports. •.Status - Container status. •.Size - Container disk size. •.Names - Container names. •.Labels - All labels assigned to the container. •.Label - Value of a specific label for this container. For example {{.Label "com.docker.swarm.cpu"}} Communicating with a Docker Registry $ docker login [<opts>] [<server>] Log in to a Docker Registry on the specified <server>. If server is omitted, https://registry-1.docker.io is used. Credentials are stored in /.docker/config.json -u, --username ="<username>" -p, --password ="<password>" $ docker logout [<server>] Log out from a Docker Registry on the specified <server>. If server is omitted, https://registry-1.docker.io is used. $ docker push [<registry host>[:<registry port>]/]<name>[:<tag>] Push an image or a repository to a Registry $ docker pull [<opts>] [<registry host>[:<registry port>]/]<name>[:<tag>] Pull an image or a repository from a Registry -a, --all-tags Download all tagged images in the repository Listing images and containers $ docker images [<opts>] List images -a, --all Show all images (by default, intermediate image layers aren’t shown) --no-trunc Don’t truncate output -f, --filter ="<filter>“ Filter output based on these conditions: • dangling=true - unused (untagged) images • label=<key> or label=<key>=<value> --format ="<template>“ Pretty-print containers using a Go template, e.g. {{.ID}}. Valid placeholders: • .ID - Image ID • .Repository - Image repository • .Tag - Image tag • .Digest - Image digest • .CreatedSince - Time since the image was created • .CreatedAt - Time when the image was created • .Size - Image disk size $ docker ps [<opts>] List containers -a, --all Show all containers (including non-running ones)
  • 14. Docker Cheat Sheet 11: Syntax Inspecting images and containers $ docker inspect [<opts>] <container>|<image> [<container>|<image>...] Return low-level information on a container or image -f, --format ="<format>" Format the output using the given Go template. You can see the available placeholders by looking at the total output without --format -s, --size Display total file sizes if the type is container -t, --type ="<container>|<image>" Return JSON for specified type only Removing images and containers $ docker rm [<opts>] <container> [<container>...] Remove one or more containers from the host -f, --force Force the removal of a running container (uses SIGKILL) -l, --link Remove the specified link and not the underlying container -v, --volume Remove the volumes associated with the container $ docker rmi [<opts>] <image> [<image>...] Remove one or more images from the host -f, --force Force the removal of images of a running container --no-pruneDo not delete untagged parents
  • 16. Dockerfile Cheat Sheet 1: Overview https://devhints.io/dockerfile
  • 17. Dockerfile Cheat Sheet 2: Logical LABEL Adds metadata (a non-executable instruction) LABEL description="Updating the foo and bar" LABEL version="0.15" RUN Execute commands in a new layer on top of the current image and commit the results. Runs during 'build' time. Strongly consider using '&&': RUN apt-get update && update apt-get install –y php USER Sets the username or UID to use when running the image and commands USER alvin VOLUME Creates a mount point (path) to external volumes (on the native host or other containers) WORKDIR Sets the working directory for any subsequent RUN, CMD, ENTRYPOINT, COPY, and ADD commands. If it’s a relative path, it’s relative to the previous WORKDIR. WORKDIR /home/alvin WORKDIR foo # results in "/home/alvin/foo" # NOTE: I haven’t used these yet: ARG Defines a variable that users can pass at build-time to the builder using --build-arg ONBUILD Adds an instruction to be executed later, when the image is used as the base for another build STOPSIGNAL Sets the system call signal that will be sent to the container to exit Dockerfile commands/arguments # Comments begin with '#' ADD Copy new files, directories, or remote file URLs from into the filesystem of the container CMD Allowed only once; if given multiple times, only the last one takes effect. The intended command for the image. Doesn’t do anything during 'build' time. COPY Copy files or directories from a source into the filesystem of the container COPY readme.txt /home/al ENTRYPOINT TODO: A container that will run as an executable? Or, the primary command of your Docker image? ENV Set environment variables. ENV CONF_FILE=application.conf HEAP_SIZE=2G EXPOSE Tells the container runtime that the container listens on these network ports at runtime EXPOSE 5150 EXPOSE 5150 5151 FROM Sets the base image (ubuntu, openjdk:11, alpine, etc.) MAINTAINER Sets the author field of the generated images
  • 18. Dockerfile Cheat Sheet 3: Syntax ENTRYPOINT ["<executable>", "<param1>", "<param2>"] Executable form ENTRYPOINT <command param1 param2 ...> Run the command in the shell /bin/sh -c ENV<key> <value>Sets the environment variable <key> to the value <value>. This value is passed to all future RUN, ENTRYPOINT, and CMD instructions EXPOSE <port1> <port2> ... Informs Docker that the container listens on the specified network ports at runtime. Docker uses this information to interconnect containers using links and to set up port redirection on the host system LABEL ... Adds metadata to an image. A label is a key-value pair LABEL <key>=<value> <key2>=<value2> ... LABEL <key> <value> ONBUILD <instruction> Adds a trigger instruction to an image. The trigger is executed at a later time, when the image is used as the base for another build. Docker executes the trigger in the context of the downstream build, as if the trigger existed immediately after the FROM instruction in the downstream Dockerfile. RUN ... Executes any commands in a new layer on top of the current image and commits the results. There are two forms: RUN <command> Run the command in the shell /bin/sh -c RUN ["<executable>", "<param1>", "<param2>"] Executable form. The square brackets are a part of the syntax STOPSIGNAL Sets the system call signal that will be sent to the container to exit USER <user> USER <user>:<group> Sets the username or UID used for running subsequent commands. <user> can be either username or UID; <group> can be either group name or GID VOLUME ["/some/path"] Creates a mount point with the specified name and marks it as holding externally-mounted volumes from the native host or from other containers WORKDIR /path/to/workdir Sets the working directory for the RUN, CMD, ENTRYPOINT, COPY and ADD Dockerfile commands that follow. Relative paths are defined relative to the path of the previous WORKDIR instruction. Dockerfile commands/arguments # Comments begin with '#' FROM <image> FROM <image>:<tag> Sets the base image for subsequent instructions. Dockerfile must start with FROM instruction. MAINTAINER <name> Sets the Author field for the generated images ADD <src> <dest> ADD ["<src>", ... "<dest>"] Like COPY, but additionally allows <src> to be an URL, and if <src> is an archive in a recognized format, it will be unpacked. The best practice is to prefer COPY ARG <name> ARG <name>=<default value> Defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag CMD ... Provides defaults for executing container. There could be at most one CMD instruction in a Dockerfile CMD ["<executable>", "<param1>", "<param2>"] Executable form CMD ["<param1>", "<param2>"] Provide default arguments to ENTRYPOINT CMD <command args ...> Run the command in the shell /bin/sh -c COPY <src> <dest> COPY ["<src>", ... "<dest>"] Copies new files, directories or remote file URLs to the filesystem of the container at path <dest>. All new files and directories are created with mode 0755 and with the uid and gid of 0. ENTRYPOINT ... Helps you configure a container that can be run as an executable. The ENTRYPOINT instruction adds an entry command that is not overwritten when arguments are passed to docker run. This is different from the behavior of CMD. This allows arguments to be passed to the entrypoint
  • 20. Kubernetes Cheat Sheet 1 https://www.upgrad.com/blog/kubernetes-cheat-sheet/
  • 21. Kubernetes Cheat Sheet 2 https://phoenixnap.com/kb/kubectl-commands-cheat-sheet
  • 22. Kubernetes Cheat Sheet 3 https://intellipaat.com/blog/tutorial/devops-tutorial/kubernetes-cheat-sheet/
  • 23. Kubernetes Cheat Sheet 4 Commands Description kubectl get node To list down all worker nodes. kubectl delete node <node_name> Delete the given node in cluster. kubectl top node Show metrics for a given node. kubectl describe nodes | grep ALLOCATED -A 5 Describe all the nodes in verbose. kubectl get pods -o wide | grep <node_name> List all pods in the current namespace, with more details. kubectl get no -o wide List all the nodes with mode details. kubectl describe no Describe the given node in verbose. kubectl annotate node <node_name> Add an annotation for the given node. kubectl uncordon node <node_name> Mark my-node as schedulable. kubectl label node Add a label to given node Nodes Commands Description kubectl get po To list the available pods in the default namespace. kubectl describe pod <pod_name> To list the detailed description of pod. kubectl delete pod <pod_name> To delete a pod with the name. kubectl create pod <pod_name> To create a pod with the name. Kubectl get pod -n <name_space> To list all the pods in a namespace. Kubectl create pod <pod_name> -n <name_space> To create a pod with the name in a namespace. Commands Description kubectl create namespace <namespace_name> To create a namespace by the given name. kubectl get namespace To list the current namespace in a cluster. kubectl describe namespace <namespace_name> To display the detailed state of one or more namespaces. kubectl delete namespace <namespace_name> To delete a namespace. kubectl edit namespace <namespace_name> To edit and update the definition of a namespace. Namespaces Pods https://www.interviewbit.com/kubernetes-cheat-sheet/
  • 24. Kubernetes Cheat Sheet 5 Deployments Service Accounts ReplicaSets Commands Description kubectl create deployment <deployment_name> To create a new deployment. kubectl get deployment To list one or more deployments. kubectl describe deployment <deployment_name> To list a detailed state of one or more deployments. kubectl delete deployment<deployment_name> To delete a deployment. DaemonSets Command Description kubectl get ds To list out all the daemon sets. kubectl get ds -all-namespaces To list out the daemon sets in a namespace. kubectl describe ds [daemonset_name][namespace _name] To list out the detailed information for a daemon set inside a namespace. Events Commands Description kubectl get events To list down the recent events for all the resources in the system. kubectl get events --field-selector involvedObject.kind != Pod To list down all the events except the pod events. kubectl get events --field-selector type != Normal To filter out normal events from a list of events. Commands Description kubectl get replicasets To List down the ReplicaSets. kubectl describe replicasets <replicaset_name> To list down the detailed state of one or more ReplicaSets. kubectl scale --replace=[x] To scale a replica set. Commands Description kubectl get serviceaccounts To List Service Accounts. kubectl describe serviceaccounts To list the detailed state of one or more service accounts. kubectl replace serviceaccounts To replace a service account. kubectl delete serviceaccounts <name> To delete a service account. Commands Description kubectl logs <pod_name> To display the logs for a Pod with the given name. kubectl logs --since=1h <pod_name> To display the logs of last 1 hour for the pod with the given name. kubectl logs --tail-20 <pod_name> To display the most recent 20 lines of logs. kubectl logs -c <container_name> <pod_name> To display the logs for a container in a pod with the given names. kubectl logs <pod_name> pod.log To save the logs into a file named as pod.log. Logs https://www.interviewbit.com/kubernetes-cheat-sheet/
  • 25. Kubectl context and configuration kubectl config view # Show Merged kubeconfig settings. # use multiple kubeconfig files at the same time and view merged config KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view # get the password for the e2e user kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' kubectl config view -o jsonpath='{.users[].name}' # display the first user kubectl config view -o jsonpath='{.users[*].name}' # get a list of users kubectl config get-contexts # display list of contexts kubectl config current-context # display the current-context kubectl config use-context my-cluster-name # set the default context to my-cluster-name kubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig # configure the URL to a proxy server to use for requests made by this client in the kubeconfig kubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url # add a new user to your kubeconf that supports basic auth kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword # permanently save the namespace for all subsequent kubectl commands in that context. kubectl config set-context --current --namespace=ggckad-s2 # set a context utilizing a specific username and namespace. kubectl config set-context gce --user=cluster-admin --namespace=foo && kubectl config use-context gce kubectl config unset users.foo # delete user foo # short alias to set/show context/namespace (only works for bash and bash-compatible shells, current context to be set before using kn to set namespace) alias kx='f() { [ "$1" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f' alias kn='f() { [ "$1" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d" " -f6 ; } ; f' https://kubernetes.io/docs/reference/kubectl/cheatsheet/
  • 26. Kubectl Creating objects kubectl apply -f ./my-manifest.yaml # create resource(s) kubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files kubectl apply -f ./dir # create resource(s) in all manifest files in dir kubectl apply -f https://git.io/vPieo # create resource(s) from url kubectl create deployment nginx --image=nginx # start a single instance of nginx # create a Job which prints "Hello World" kubectl create job hello --image=busybox:1.28 -- echo "Hello World" # create a CronJob that prints "Hello World" every minute kubectl create cronjob hello --image=busybox:1.28 --schedule="*/1 * * * *" -- echo "Hello World" kubectl explain pods # get the documentation for pod manifests # Create multiple YAML objects from stdin cat <<EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: busybox-sleep spec: ... # Create a secret with several keys cat <<EOF | kubectl apply -f - apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: password: $(echo -n "s33msi4" | base64 -w0) username: $(echo -n "jane" | base64 -w0) EOF https://kubernetes.io/docs/reference/kubectl/cheatsheet/
  • 27. Kubectl Viewing, finding resources # Get commands with basic output kubectl get services # List all services in the namespace kubectl get pods --all-namespaces # List all pods in all namespaces kubectl get pods -o wide # List all pods in the current namespace, with more details kubectl get deployment my-dep # List a particular deployment kubectl get pods # List all pods in the namespace kubectl get pod my-pod -o yaml # Get a pod's YAML # Describe commands with verbose output kubectl describe nodes my-node kubectl describe pods my-pod # List Services Sorted by Name kubectl get services --sort-by=.metadata.name # List pods Sorted by Restart Count kubectl get pods --sort-by='.status.containerStatuses[0].restartCount' # List PersistentVolumes sorted by capacity kubectl get pv --sort-by=.spec.capacity.storage # Get the version label of all pods with label app=cassandra kubectl get pods --selector=app=cassandra -o jsonpath='{.items[*].metadata.labels.version}' # Retrieve the value of a key with dots, e.g. 'ca.crt' kubectl get configmap myconfig -o jsonpath='{.data.ca.crt}' # Retrieve a base64 encoded value with dashes instead of underscores. kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}' https://kubernetes.io/docs/reference/kubectl/cheatsheet/
  • 28. Kubectl Viewing, finding resources # Get all worker nodes (use a selector to exclude results that have a label # named 'node-role.kubernetes.io/control-plane') kubectl get node --selector='!node-role.kubernetes.io/control-plane' # Get all running pods in the namespace kubectl get pods --field-selector=status.phase=Running # Get ExternalIPs of all nodes kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' # List Names of Pods that belong to Particular RC # "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/ sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "(.key)=(.value),"')%?} echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name}) # Show labels for all pods (or any other Kubernetes object that supports labelling) kubectl get pods --show-labels # Check which nodes are ready JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True" # Output decoded secrets without external tools kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"n"}}{{$v|base64decode}}{{"nn"}}{{end}}' # List all Secrets currently in use by a pod kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq # List all containerIDs of initContainer of all pods # Helpful when cleaning up stopped containers, while avoiding removal of initContainers. kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"n"}{end}' | cut -d/ -f3 https://kubernetes.io/docs/reference/kubectl/cheatsheet/
  • 29. Kubectl Viewing, finding resources # List Events sorted by timestamp kubectl get events --sort-by=.metadata.creationTimestamp # Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied. kubectl diff -f ./my-manifest.yaml # Produce a period-delimited tree of all keys returned for nodes # Helpful when locating a key within a complex nested JSON structure kubectl get nodes -o json | jq -c 'paths|join(".")' # Produce a period-delimited tree of all keys returned for pods, etc kubectl get pods -o json | jq -c 'paths|join(".")' # Produce ENV for all pods, assuming you have a default container for the pods, default namespace and the `env` command is supported. # Helpful when running any supported command across all pods, not just `env` for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod -- env; done # Get a deployment's status subresource kubectl get deployment nginx-deployment --subresource=status https://kubernetes.io/docs/reference/kubectl/cheatsheet/
  • 30. Kubectl Updating resources kubectl set image deployment/frontend www=image:v2 # Rolling update "www" containers of "frontend" deployment, updating the image kubectl rollout history deployment/frontend # Check the history of deployments including the revision kubectl rollout undo deployment/frontend # Rollback to the previous deployment kubectl rollout undo deployment/frontend --to-revision=2 # Rollback to a specific revision kubectl rollout status -w deployment/frontend # Watch rolling update status of "frontend" deployment until completion kubectl rollout restart deployment/frontend # Rolling restart of the "frontend" deployment cat pod.json | kubectl replace -f - # Replace a pod based on the JSON passed into stdin # Force replace, delete and then re-create the resource. Will cause a service outage. kubectl replace --force -f ./pod.json # Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000 kubectl expose rc nginx --port=80 --target-port=8000 # Update a single-container pod's image version (tag) to v4 kubectl get pod mypod -o yaml | sed 's/(image: myimage):.*$/1:v4/' | kubectl replace -f - kubectl label pods my-pod new-label=awesome # Add a Label kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Add an annotation kubectl autoscale deployment foo --min=2 --max=10 # Auto scale a deployment "foo" https://kubernetes.io/docs/reference/kubectl/cheatsheet/
  • 31. Kubectl Patching resources # Partially update a node kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}' # Update a container's image using a json patch with positional arrays kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]' # Disable a deployment livenessProbe using a json patch with positional arrays kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]' # Add a new element to a positional array kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]' # Update a deployment's replica count by patching its scale subresource kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}' https://kubernetes.io/docs/reference/kubectl/cheatsheet/ Editing resources kubectl edit svc/docker-registry # Edit the service named docker-registry KUBE_EDITOR="nano" kubectl edit svc/docker-registry # Use an alternative editor Scaling resources kubectl scale --replicas=3 rs/foo # Scale a replicaset named 'foo' to 3 kubectl scale --replicas=3 -f foo.yaml # Scale a resource specified in "foo.yaml" to 3 kubectl scale --current-replicas=2 --replicas=3 deployment/mysql # If the deployment named mysql's current size is 2, scale mysql to 3 kubectl scale --replicas=5 rc/foo rc/bar rc/baz # Scale multiple replication controllers
  • 32. Kubectl Interacting with running Pods kubectl logs my-pod # dump pod logs (stdout) kubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout) kubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container kubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case) kubectl logs -l name=myLabel -c my-container # dump pod logs, with label name=myLabel (stdout) kubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container kubectl logs -f my-pod # stream pod logs (stdout) kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case) kubectl logs -f -l name=myLabel --all-containers # stream all pods logs with label name=myLabel (stdout) kubectl run -i --tty busybox --image=busybox:1.28 -- sh # Run pod as interactive shell kubectl run nginx --image=nginx -n mynamespace # Start a single instance of nginx pod in the namespace of mynamespace kubectl run nginx --image=nginx # Run pod nginx and write its spec into a file called pod.yaml --dry-run=client -o yaml > pod.yaml kubectl attach my-pod -i # Attach to Running Container kubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod kubectl exec my-pod -- ls / # Run command in existing pod (1 container case) kubectl exec --stdin --tty my-pod -- /bin/sh # Interactive shell access to a running pod (1 container case) kubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case) kubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory' https://kubernetes.io/docs/reference/kubectl/cheatsheet/ Deleting resources kubectl delete -f ./pod.json # Delete a pod using the type and name specified in pod.json kubectl delete pod unwanted --now # Delete a pod with no grace period kubectl delete pod,service baz foo # Delete pods and services with same names "baz" and "foo" kubectl delete pods,services -l name=myLabel # Delete pods and services with label name=myLabel kubectl -n my-ns delete pod,svc --all # Delete all pods and services in namespace my-ns, # Delete all pods matching the awk pattern1 or pattern2 kubectl get pods -n mynamespace --no-headers=true | awk '/pattern1|pattern2/{print $1}' | xargs kubectl delete -n mynamespace pod Copy files and directories to and from containers kubectl cp /tmp/foo_dir my-pod:/tmp/bar_dir # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace kubectl cp /tmp/foo my-pod:/tmp/bar -c my-container # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container kubectl cp /tmp/foo my-namespace/my-pod:/tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace kubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally
  • 33. Kubectl Interacting with Nodes and cluster kubectl cordon my-node # Mark my-node as unschedulable kubectl drain my-node # Drain my-node in preparation for maintenance kubectl uncordon my-node # Mark my-node as schedulable kubectl top node my-node # Show metrics for a given node kubectl cluster-info # Display addresses of the master and services kubectl cluster-info dump # Dump current cluster state to stdout kubectl cluster-info dump --output-directory=/path/to/cluster-state # Dump current cluster state to /path/to/cluster-state # View existing taints on which exist on current nodes. kubectl get nodes -o=custom-columns=NodeName:.metadata.name,TaintKey:.spec.taints[*].key,TaintValue:.spec.taints[*].value,TaintEffect:.spec.taints[*].effect # If a taint with that key and effect already exists, its value is replaced as specified. kubectl taint nodes foo dedicated=special-user:NoSchedule https://kubernetes.io/docs/reference/kubectl/cheatsheet/ Interacting with Deployments and Services kubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case) kubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case) kubectl port-forward svc/my-service 5000 # listen on local port 5000 and forward to port 5000 on Service backend kubectl port-forward svc/my-service 5000:my-service-port # listen on local port 5000 and forward to Service target port with name <my-service-port> kubectl port-forward deploy/my-deployment 5000:6000 # listen on local port 5000 and forward to port 6000 on a Pod created by <my-deployment> kubectl exec deploy/my-deployment -- ls # run command in first Pod and first container in Deployment (single- or multi-container cases) Copy files and directories to and from containers tar cf - /tmp/foo | kubectl exec -i -n my-namespace my-pod -- tar xf - -C /tmp/bar # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my- namespace kubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally
  • 35. OpenStack Cheat Sheet 1 https://www.openstack.org/software/
  • 36. OpenStack Cheat Sheet 2 https://assets.ubuntu.com/v1/8d3130a1-OpenStack.cheat.sheet.1.pdf
  • 37. OpenStack Cheat Sheet 3 https://assets.ubuntu.com/v1/8d3130a1-OpenStack.cheat.sheet.1.pdf
  • 38. OpenStack Cheat Sheet 4 https://assets.ubuntu.com/v1/8d3130a1-OpenStack.cheat.sheet.1.pdf
  • 39. OpenStack Cheat Sheet 5 (old) https://cloud.curs.pub.ro/wp-content/uploads/2014/12/Openstack_CheatSheet.pdf
  • 40. OpenStack Cheat Sheet 6
Compute (nova)
List instances, check status of instance
$ openstack server list
List images
$ openstack image list
Create a flavor named m1.tiny
$ openstack flavor create --ram 512 --disk 1 --vcpus 1 m1.tiny
List flavors
$ openstack flavor list
Boot an instance using flavor and image names (if names are unique)
$ openstack server create --image IMAGE --flavor FLAVOR INSTANCE_NAME
$ openstack server create --image cirros-0.3.5-x86_64-uec --flavor m1.tiny MyFirstInstance
Log in to the instance (from Linux)
# ip netns
# ip netns exec NETNS_NAME ssh USER@SERVER
# ip netns exec qdhcp-6021a3b4-8587-4f9c-8064-0103885dfba2 ssh cirros@10.0.0.2
Log in to the instance with a public IP address (from Mac)
$ ssh cloud-user@128.107.37.150
Show details of instance
$ openstack server show NAME
$ openstack server show MyFirstInstance
View console log of instance
$ openstack console log show MyFirstInstance
https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html
Images (glance)
List images you can access
$ openstack image list
Delete specified image
$ openstack image delete IMAGE
Describe a specific image
$ openstack image show IMAGE
Update image
$ openstack image set IMAGE
Upload kernel image
$ openstack image create "cirros-threepart-kernel" --disk-format aki --container-format aki --public --file ~/images/cirros-0.3.5-x86_64-kernel
Upload ramdisk image
$ openstack image create "cirros-threepart-ramdisk" --disk-format ari --container-format ari --public --file ~/images/cirros-0.3.5-x86_64-initramfs
Upload three-part image
$ openstack image create "cirros-threepart" --disk-format ami --container-format ami --public --property kernel_id=$KID --property ramdisk_id=$RID --file ~/images/cirros-0.3.5-x86_64-rootfs.img
Register raw image
$ openstack image create "cirros-raw" --disk-format raw --container-format bare --public --file ~/images/cirros-0.3.5-x86_64-disk.img
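A minimal end-to-end sketch that ties these compute and image commands together (the keypair, network and instance names are illustrative placeholders, and the image/flavor names are assumed to exist in the cloud):
$ openstack keypair create mykey > mykey.pem && chmod 600 mykey.pem
$ openstack image list && openstack flavor list && openstack network list      # pick names or IDs
$ openstack server create --image cirros-0.3.5-x86_64-uec --flavor m1.tiny --key-name mykey --network private MyFirstInstance
$ openstack server list                                                         # wait for status ACTIVE
$ openstack console log show MyFirstInstance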
  • 41. OpenStack Cheat Sheet 7 Resize $ openstack server resize NAME FLAVOR $ openstack server resize my-pem-server m1.small $ openstack server resize --confirm my-pem-server1 Rebuild $ openstack server rebuild NAME IMAGE $ openstack server rebuild newtinny cirros-qcow2 Reboot $ openstack server reboot NAME $ openstack server reboot newtinny Inject user data and files into an instance $ openstack server create --user-data FILE INSTANCE $ openstack server create --user-data userdata.txt --image cirros-qcow2 --flavor m1.tiny MyUserdataInstance2 Create keypair $ openstack keypair create test > test.pem $ chmod 600 test.pem Start an instance (boot) $ openstack server create --image cirros-0.3.5-x86_64 --flavor m1.small --key-name test MyFirstServer Use ssh to connect to the instance # ip netns exec qdhcp-98f09f1e-64c4-4301-a897-5067ee6d544f ssh -i test.pem cirros@10.0.0.4 https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html Set metadata on an instance $ nova meta volumeTwoImage set newmeta='my meta data' Create an instance snapshot $ openstack image create volumeTwoImage snapshotOfVolumeImage $ openstack image show snapshotOfVolumeImage Pause, suspend, stop, rescue, resize, rebuild, reboot an instance¶ Pause $ openstack server pause NAME $ openstack server pause volumeTwoImage Unpause $ openstack server unpause NAME Suspend $ openstack server suspend NAME Unsuspend $ openstack server resume NAME Stop $ openstack server stop NAME Start $ openstack server start NAME Rescue $ openstack server rescue NAME $ openstack server rescue NAME --rescue_image_ref RESCUE_IMAGE
  • 42. OpenStack Cheat Sheet 8 Attach a volume to an instance after the instance is active, and the volume is available $ openstack server add volume INSTANCE_ID VOLUME_ID $ openstack server add volume MyVolumeInstance 573e024d-5235-49ce-8332- be1576d323f8 $ openstack server add volume --device /dev/vdb MyVolumeInstance 573e024d..1576d323f8 This is not currently possible when using non-Xen hypervisors with OpenStack. Manage volumes after login into the instance List storage devices $ fdisk –l # Also other normal Unix file system commands apply Object Storage (swift)¶ Display information for the account, container, or object $ swift stat $ swift stat ACCOUNT $ swift stat CONTAINER $ swift stat OBJECT List containers $ swift list Keystone See Status of Keystone Services $ keystone service-list List All Keystone Endpoints $ keystone endpoint-list Glance List Current Glance Images $ glance image-list https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html, https://thornelabs.net/posts/openstack-commands-cheat-sheet/ Manage security groups Add rules to default security group allowing ping and SSH between instances in the default security group $ openstack security group rule create default --remote-group default --protocol icmp $ openstack security group rule create default --remote-group default --dst-port 22 Networking (neutron)¶ Create network $ openstack network create NETWORK_NAME Create a subnet $ openstack subnet create --subnet-pool SUBNET --network NETWORK SUBNET_NAME $ openstack subnet create --subnet-pool 10.0.0.0/29 --network net1 subnet1 Block Storage (cinder)¶ Used to manage volumes and volume snapshots that attach to instances. Create a new volume $ openstack volume create --size SIZE_IN_GB NAME $ openstack volume create --size 1 MyFirstVolume Boot an instance and attach to volume $ openstack server create --image cirros-qcow2 --flavor m1.tiny MyVolumeInstance List all volumes, noticing the volume status $ openstack volume list
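A short volume lifecycle sketch built from the block-storage commands above (volume and instance names are placeholders; using names instead of IDs assumes they are unique):
$ openstack volume create --size 1 MyFirstVolume
$ openstack volume list                                                    # wait for status 'available'
$ openstack server add volume --device /dev/vdb MyVolumeInstance MyFirstVolume
$ openstack server show MyVolumeInstance                                   # volumes_attached now lists the new volume
$ openstack server remove volume MyVolumeInstance MyFirstVolume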
  • 43. OpenStack Cheat Sheet 9
Create a Flavor
nova flavor-create <FLAVOR-NAME> <FLAVOR-ID> <RAM-IN-MB> <ROOT-DISK-IN-GB> <VCPU>
For example, create a new flavor called m1.custom with an ID of 6, 512 MB of RAM, 5 GB of root disk space, and 1 vCPU:
nova flavor-create m1.custom 6 512 5 1
Create Nova Security Group
This command is only used if you are using nova-network.
nova secgroup-create <NAME> <DESCRIPTION>
Add Rules to Nova Security Group
This command is only used if you are using nova-network.
nova secgroup-add-rule <NAME> <PROTOCOL> <BEGINNING-PORT> <ENDING-PORT> <SOURCE-SUBNET>
Example 1: Add a rule to the default Nova Security Group to allow SSH access to instances:
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
Example 2: Add a rule to the default Nova Security Group to allow ICMP communication to instances:
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
Apply Nova Security Group to Instance
This command is only used if you are using nova-network.
nova add-secgroup <NOVA-ID> <SECURITY-GROUP-ID>
Create Nova Floating IP Pool
This command is only used if you are using nova-network.
nova-manage floating create <SUBNET-NAME> <NAME-OF-POOL>
Create Nova Key SSH Pair
nova keypair-add --pub_key <SSH-PUBLIC-KEY-FILE-NAME> <NAME-OF-KEY>
https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html, https://thornelabs.net/posts/openstack-commands-cheat-sheet/
Upload Images to Glance
glance image-create --name <IMAGE-NAME> --is-public <true OR false> --container-format <CONTAINER-FORMAT> --disk-format <DISK-FORMAT> --copy-from <URI>
Example 1: Upload the cirros-0.3.2-x86_64 OpenStack cloud image:
glance image-create --name cirros-0.3.2-x86_64 --is-public true --container-format bare --disk-format qcow2 --copy-from http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
Example 2: Upload the ubuntu-server-12.04 OpenStack cloud image:
glance image-create --name ubuntu-server-12.04 --is-public true --container-format bare --disk-format qcow2 --copy-from http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
Nova
See Status of Nova Services
nova service-list
List Current Nova Instances
nova list
Boot an Instance
Boot an instance assigned to a particular Neutron Network:
nova boot <INSTANCE-NAME> --image <GLANCE-IMAGE-ID> --flavor <FLAVOR-ID> --security-groups <SEC-GROUP-1,SEC-GROUP-2> --key-name <SSH-KEY-NAME> --nic net-id=<NET-ID> --availability-zone <AVAILABILITY-ZONE-NAME>
Boot an instance assigned to a particular Neutron Port:
nova boot <INSTANCE-NAME> --image <GLANCE-IMAGE-ID> --flavor <FLAVOR-ID> --security-groups <SEC-GROUP-1,SEC-GROUP-2> --key-name <SSH-KEY-NAME> --nic port-id=<PORT-ID> --availability-zone <AVAILABILITY-ZONE-NAME>
  • 44. OpenStack Cheat Sheet 10 You can also use the active command line switch to force an instance back into an active state: nova reset-state --active <INSTANCE-ID> Get Direct URL to Instance Console Using novnc nova get-vnc-console <INSTANCE-ID> novnc Get Direct URL to Instance Console Using xvpvnc nova get-vnc-console <INSTANCE-ID> xvpvnc Set OpenStack Project Nova Quota The following command will set an unlimited quota for a particular OpenStack Project: nova quota-update --instances -1 --cores -1 --ram -1 --floating-ips -1 --fixed-ips -1 -- metadata-items -1 --injected-files -1 --injected-file-content-bytes -1 --injected-file- path-bytes -1 --key-pairs -1 --security-groups -1 --security-group-rules -1 --server- groups -1 --server-group-members -1 <PROJECT ID> Cinder See Status of Cinder Services cinder service-list List Current Cinder Volumes cinder list Create Cinder Volume cinder create --display-name <CINDER-IMAGE-DISPLAY-NAME> <SIZE-IN-GB> Create Cinder Volume from Glance Image cinder create --image-id <GLANCE-IMAGE-ID> --display-name <CINDER- IMAGE-DISPLAY-NAME> <SIZE-IN-GB> Create Snapshot of Cinder Volume cinder snapshot-create --display-name <SNAPSHOT-DISPLAY-NAME> <CINDER-VOLUME-ID> https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html, https://thornelabs.net/posts/openstack-commands-cheat-sheet/ Create Host Aggregate With Availability Zone nova aggregate-create <HOST-AGG-NAME> <AVAIL-ZONE-NAME> Add Compute Host to Host Aggregate nova aggregate-add-host <HOST-AGG-ID> <COMPUTE-HOST-NAME> Live Migrate an Instance If your compute hosts use shared storage: nova live-migration <INSTANCE-ID> <COMPUTE-HOST-ID> If your compute hosts do not use shared storage: nova live-migration --block-migrate <INSTANCE-ID> <COMPUTE-HOST-ID> Attach Cinder Volume to Instance Before running this command, you will need to have already created the particular Cinder Volume. nova volume-attach <INSTANCE-ID> <CINDER-VOLUME-ID> <DEVICE (use auto)> Create and Boot an Instance from a Cinder Volume Before running this command, you will need to have already created the particular Cinder Volume from a Glance Image. nova boot --flavor <FLAVOR-ID> --block_device_mapping vda=<CINDER- VOLUME-ID>:::0 <INSTANCE-NAME> Create and Boot an Instance from a Cinder Volume Snapshot Before running this command, you will have to have already created the particular Cinder Volume Snapshot: nova boot --flavor <FLAVOR-ID> --block_device_mapping vda=<CINDER- SNAPSHOT-ID>:snap::0 <INSTANCE-NAME> Reset the State of an Instance If an instance gets stuck in a delete state, the instance state can be reset and then deleted: nova reset-state <INSTANCE-ID> nova delete <INSTANCE-ID>
  • 45. OpenStack Cheat Sheet 11 Example 2: Add a rule to the default Neutron Security Group to allow ICMP communication to instances: neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol icmp default Create a Neutron Tenant Network neutron net-create <NET-NAME> neutron subnet-create --name <SUBNET-NAME> <NET-NAME> <SUBNET- CIDR> Create a Neutron Provider Network neutron net-create <NET-NAME> --provider:physical_network=<LABEL- PHYSICAL-INTERFACE> --provider:network_type=<flat or vlan> --shared -- router:external=True neutron subnet-create --name <SUBNET-NAME> <NET-NAME> <SUBNET- CIDR> --gateway <GATEWAY-IP> --allocation-pool start=<STARTING- IP>,end=<ENDING-IP> --dns-nameservers list=true <DNS-1 DNS-2> Create a Neutron Router neutron router-create <ROUTER-NAME> Set Default Gateway on a Neutron Router neutron router-gateway-set <ROUTER-NAME> <NET-NAME> Attach a Tenant Network to a Neutron Router neutron router-interface-add <ROUTER-NAME> <SUBNET-NAME> Create a Neutron Floating IP Pool If you need N number of floating IP addresses, run this command N number of times: neutron floatingip-create <NET-NAME> Assign a Neutron Floating IP Address to an Instances neutron floatingip-associate <FLOATING-IP-ID> <NEUTRON-PORT-ID> Create a Neutron Port with a Fixed IP Address neutron port-create <NET-NAME> --fixed-ip ip_address=<IP-ADDRESS> https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html, https://thornelabs.net/posts/openstack-commands-cheat-sheet/ If the Cinder Volume is not available, i.e. it is currently attached to an instance, you must pass the force flag: cinder snapshot-create --display-name <SNAPSHOT-DISPLAY-NAME> <CINDER-VOLUME-ID> --force True Neutron See Status of Neutron Services neutron agent-list List Current Neutron Networks neutron net-list List Current Neutron Subnets neutron subnet-list Rename Neutron Network neutron net-update <CURRENT-NET-NAME> --name <NEW-NET-NAME> Rename Neutron Subnet neutron subnet-update <CURRENT-SUBNET-NAME> --name <NEW-SUBNET- NAME> Create Neutron Security Group neutron security-group-create <SEC-GROUP-NAME> Add Rules to Neutron Security Group neutron security-group-rule-create --direction <ingress OR egress> --ethertype <IPv4 or IPv6> --protocol <PROTOCOL> --port-range-min <PORT-NUMBER> -- port-range-max <PORT-NUMBER> <SEC-GROUP-NAME> Example 1: Add a rule to the default Neutron Security Group to allow SSH access to instances: neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 default
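The same floating-IP workflow can also be expressed with the unified openstack client, which has largely replaced the standalone neutron CLI shown above; a minimal sketch (the router, external network, subnet, server and IP address are illustrative placeholders):
$ openstack router create myrouter
$ openstack router set myrouter --external-gateway public
$ openstack router add subnet myrouter subnet1
$ openstack floating ip create public                                # note the allocated address
$ openstack server add floating ip MyFirstInstance 203.0.113.10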
  • 47. OpenShift Cheat Sheet 1 https://cheatography.com/itservicestart-up/cheat-sheets/oc-cli-commands/pdf_bw/ https://github.com/okd- project/okd/releases
  • 48. OpenShift Cheat Sheet 2 https://cheatography.com/itservicestart-up/cheat-sheets/oc-cli-commands/pdf_bw/ https://github.com/okd-project/okd/releases
  • 49. OpenShift Cheat Sheet 3 https://cheatography.com/itservicestart-up/cheat-sheets/oc-cli-commands/pdf_bw/ https://github.com/okd-project/okd/releases
  • 50. OpenShift Cheat Sheet 4
Install pkgs using yum in a Dockerfile
# Install Runtime Environment
RUN set -x && yum clean all && REPOLIST=rhel-7-server-rpms,rhel-7-server-optional-rpms,rhel-7-server-thirdparty-oracle-java-rpms INSTALL_PKGS="tar java-1.8.0-oracle-devel" && yum -y update-minimal --disablerepo "*" --enablerepo ${REPOLIST} --setopt=tsflags=nodocs --security --sec-severity=Important --sec-severity=Critical && yum -y install --disablerepo "*" --enablerepo ${REPOLIST} --setopt=tsflags=nodocs ${INSTALL_PKGS} && yum clean all
Docker push to ocp internal registry
01. oc extract -n default secrets/registry-certificates --keys=registry.crt
02. REGISTRY=$(oc get routes -n default docker-registry -o jsonpath='{.spec.host}')
03. mkdir -p /etc/containers/certs.d/${REGISTRY}
04. mv registry.crt /etc/containers/certs.d/${REGISTRY}/
05. oc adm new-project openshift-pipeline
06. oc create -n openshift-pipeline serviceaccount pipeline
07. SA_SECRET=$(oc get secret -n openshift-pipeline | grep pipeline-token | cut -d ' ' -f 1 | head -n 1)
08. SA_PASSWORD=$(oc get secret -n openshift-pipeline ${SA_SECRET} -o jsonpath='{.data.token}' | base64 -d)
09. oc adm policy add-cluster-role-to-user system:image-builder system:serviceaccount:openshift-pipeline:pipeline
10. docker login ${REGISTRY} -u unused -p ${SA_PASSWORD}
11. docker pull docker.io/library/hello-world
12. docker tag docker.io/library/hello-world ${REGISTRY}/openshift-pipeline/helloworld
13. docker push ${REGISTRY}/openshift-pipeline/helloworld
14. oc new-project demo-project
15. oc policy add-role-to-user system:image-puller system:serviceaccount:demo-project:default -n openshift-pipeline
16. oc new-app --image-stream=openshift-pipeline/helloworld:latest
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
To create ssh secret:
oc create secret generic sshsecret --from-file=ssh-privatekey=$HOME/.ssh/id_rsa
To create SSH-based authentication secret with .gitconfig file:
oc create secret generic sshsecret --from-file=ssh-privatekey=$HOME/.ssh/id_rsa --from-file=.gitconfig=</path/to/file>
To create secret that combines .gitconfig file and CA certificate:
oc create secret generic sshsecret --from-file=ca.crt=<path/to/certificate> --from-file=.gitconfig=</path/to/file>
To create basic authentication secret with CA certificate file:
oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca.crt=<path/to/certificate>
To create basic authentication secret with .gitconfig file and CA certificate file:
oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=.gitconfig=</path/to/file> --from-file=ca.crt=<path/to/certificate>
Examine the cluster quota defined for the environment:
$ oc describe AppliedClusterResourceQuota
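Once an sshsecret like the one above exists, it can be attached to a build so OpenShift can clone a private Git repository; a minimal sketch (the repository URL and the application name private-app are illustrative placeholders):
oc new-app git@github.com:myorg/private-app.git --name=private-app --source-secret=sshsecret
# or attach the secret to an existing BuildConfig and trigger a new build
oc set build-secret --source bc/private-app sshsecret
oc start-build private-app --follow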
  • 51. OpenShift Cheat Sheet 5
Set the default storage-class
oc patch storageclass glusterfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Change default response timeout for a specific route:
oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=10s
Add a nodeSelector on an RC or DC
oc patch dc|rc <dc_name> -p "spec:
  template:
    spec:
      nodeSelector:
        region: infra"
Binary Builds
oc new-build --binary=true --name=ola2 --image-stream=redhat-openjdk18-openshift --to='mycustom-jdk8:1.0'
oc start-build ola2 --from-file=./target/ola.jar --follow
oc new-app
Turn off/on DC triggers to do a batch of changes without spamming many deployments
oc rollout pause dc <dc name>
oc rollout resume dc <dc name>
Get a route URL using OC
http://$(oc get route nexus3 --template='{{ .spec.host }}')
Maven can automatically store artifacts using the -DaltDeploymentRepository parameter for the deploy task:
mvn deploy -DskipTests=true -DaltDeploymentRepository=nexus::default::http://nexus3.nexus.svc.cluster.local:8081/repository/releases
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
Create a service that points to an external service address (DNS or IP)
oc create service externalname myservice --external-name myhost.example.com
Patching a DeploymentConfig from the CLI
this example removes a config attribute using JSON path
oc patch dc/mysql --type=json -p='[{"op":"remove", "path": "/spec/strategy/rollingParams"}]'
this example changes an existing attribute value using JSON format
oc patch dc/mysql --patch '{"spec":{"strategy":{"type":"Recreate"}}}'
Creating a custom template by exporting existing resources
oc export is,bc,dc,svc,route --as-template > mytemplate.yml
Process a template, create a new binary build to customize something and then change the DeploymentConfig to use the new image...
oc process openshift//datagrid72-basic | oc create -f -
oc new-build --name=customdg -i openshift/jboss-datagrid72-openshift:1.0 --binary=true --to='customdg:1.0'
oc set triggers dc/datagrid-app --from-image=openshift/jboss-datagrid72-openshift:1.0 --remove
oc set triggers dc/datagrid-app --from-image=customdg:1.0 -c datagrid-app
List only parameters of a given template file definition
oc process -f mytemplate.yaml --parameters
Copy file content from a specific image to the local file system
docker run registry.access.redhat.com/jboss-datagrid-7/datagrid72-openshift:1.0 /bin/sh -c 'cat /opt/datagrid/standalone/configuration/clustered-openshift.xml' > clustered-openshift.xml
  • 52. OpenShift Cheat Sheet 6
Configure Liveness/Readiness probes on DCs
oc set probe dc cotd1 --liveness -- echo ok
oc set probe dc/cotd1 --readiness --get-url=http://:8080/index.php --initial-delay-seconds=2
Create a new JOB
oc run pi --image=perl --replicas=1 --restart=OnFailure --command -- perl -Mbignum=bpi -wle 'print bpi(2000)'
CRON JOB
oc run pi --image=perl --schedule='*/1 * * * *' --restart=OnFailure --labels parent="cronjobpi" --command -- perl -Mbignum=bpi -wle 'print bpi(2000)'
A/B Deployments - Split route traffic between services
oc expose service cotd1 --name='abcotd' -l name='cotd'
oc set route-backends abcotd --adjust cotd2=+20%
oc set route-backends abcotd cotd1=50 cotd2=50
To pull an image directly from the Red Hat official Docker registry
docker pull registry.access.redhat.com/jboss-eap-6/eap64-openshift
To validate an openshift/kubernetes resource definition (json/yaml file) in order to find malformed/syntax problems
oc create --dry-run --validate -f openshift/template/tomcat6-docker-buildconfig.yaml
To get the current user's Bearer Auth Token
oc whoami -t
To test the Master API
curl -k -H "Authorization: Bearer <api_token>" https://<master_host>:8443/api/v1/namespaces/<project_name>/pods/https:<pod_name>:8778/proxy/jolokia/
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
To update a DeploymentConfig in order to change the Docker image used by a specific container
oc project <project>
oc get is
# creates an ImageStream from a Remote Docker Registry image
oc import-image <image name> --from=docker.io/<imagerepo>/<imagename> --all --confirm
oc get istag
OC_EDITOR="vim" oc edit dc/<your_dc>
spec:
  containers:
  - image: docker.io/openshiftdemos/gogs@sha256:<the new image digest from Image Stream>
    imagePullPolicy: Always
BuildConfig with Source pull secrets
oc secrets new-basicauth gogs-basicauth --username=<your gogs login> --password=<gogs pwd>
oc set build-secret --source bc/tasks gogs-basicauth
Adding a volume in a given DeploymentConfig
oc set volume dc/myAppDC --add --overwrite --name....
Create a configmap file and mount it as a volume on a DC
oc create configmap myconfigfile --from-file=./configfile.txt
oc set volumes dc/printenv --add --overwrite=true --name=config-volume --mount-path=/data -t configmap --configmap-name=myconfigfile
Create a secret via CLI
oc create secret generic mysec --from-literal=app_user=superuser --from-literal=app_password=topsecret
oc env dc/printenv --from=secret/mysec
oc set volume dc/printenv --add --name=db-config-volume --mount-path=/dbconfig --secret-name=printenv-db-secret
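To see the A/B split in effect, the route can be polled from the shell; a minimal sketch, assuming each variant's page identifies itself with a marker such as cotd1 or cotd2 (the route name and marker are illustrative):
ROUTE_HOST=$(oc get route abcotd --template='{{ .spec.host }}')
for i in $(seq 1 100); do curl -s http://$ROUTE_HOST/; done | grep -o 'cotd[12]' | sort | uniq -c   # rough request distribution across backends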
  • 53. OpenShift Cheat Sheet 7
To access a POD container shell
oc exec -ti `oc get pods | awk '/registry/ { print $1; }'` /bin/bash
#new way to do the same: oc rsh <container-name>
to edit an object/resource
oc edit <object_type>/<object_name>
#eg oc edit dc/myDeploymentConfig
Attaching a new PersistentVolumeClaim to a DeploymentConfig
oc volume dc/docker-registry --add --overwrite -t persistentVolumeClaim --claim-name=registry-claim --name=registry-storage
Docker builder app creation
oc new-app --docker-image=openshift/hello-openshift:v1.0.6 -l "todelete=yes"
To create an app using a template (eap64-basic-s2i): Ticketmonster demo
oc new-app javaee6-demo
oc new-app --template=eap64-basic-s2i -p=APPLICATION_NAME=ticketmonster,SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/ticket-monster,SOURCE_REPOSITORY_REF=2.7.0.Final,CONTEXT_DIR=demo
STI app creation
oc new-app https://github.com/openshift/sinatra-example -l "todelete=yes"
oc new-app openshift/php~https://github.com/openshift/sti-php -l "todelete=yes"
To watch a build process log
oc get builds
oc logs -f builds/sti-php-1
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
# get pod memory via jmx
curl -k -H "Authorization: Bearer <api_token>" https://<master_host>:8443/api/v1/namespaces/<project_name>/pods/https:<pod_name>:8778/proxy/jolokia/read/java.lang:type=Memory/HeapMemoryUsage | jq .
to login via CLI:
oc login --username=tuelho --insecure-skip-tls-verify --server=https://master00-${guid}.oslab.opentlc.com:8443
### to login as Cluster Admin through master host
oc login -u system:admin -n openshift
To view the cluster roles and their associated rule sets in the cluster policy
oc describe clusterPolicy default
add a role to user
#local binding
oadm policy add-role-to-user <role> <username>
#cluster binding
oadm policy add-cluster-role-to-user <role> <username>
allow containers to run as the root user inside OpenShift
oadm policy add-scc-to-user anyuid -z default
for more details consult: https://docs.openshift.com/enterprise/3.1/admin_guide/manage_authorization_policy.html
to test a POD service locally
ip=`oc describe pod hello-openshift|grep IP:|awk '{print $2}'`
curl http://${ip}:8080
  • 54. OpenShift Cheat Sheet 8
To output new-app artifacts to file, edit them, then create them using oc create:
$ oc new-app https://github.com/openshift/ruby-hello-world -o json > myapp.json
$ vi myapp.json
$ oc create -f myapp.json
To deploy an image built from source together with an external image:
$ oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql
To export all the project's objects/resources as a single template:
$ oc export all --as-template=<template_name>
To create a new project using oadm and defining an admin user
$ oadm new-project instant-app --display-name="instant app example project" --description='A demonstration of an instant-app/template' --node-selector='region=primary' --admin=andrew
To create an app using the oc CLI based on a template
$ oc new-app --template=mysql-ephemeral --param=MYSQL_USER=mysqluser,MYSQL_PASSWORD=redhat,MYSQL_DATABASE=mydb,DA
To see a list of env vars defined in a DeploymentConfig object
$ oc env dc database --list
# deploymentconfigs database, container mysql
MYSQL_USER=***
MYSQL_PASSWORD=***
MYSQL_DATABASE=***
To manage environment variables in different OpenShift object types. The first adds, with value /data. The second updates, with value /opt.
$ oc env dc/registry STORAGE=/data
$ oc env dc/registry --overwrite STORAGE=/opt
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
To create application using Git repository at current directory:
$ oc new-app
To create application using remote Git repository and context subdirectory:
$ oc new-app https://github.com/openshift/sti-ruby.git --context-dir=2.0/test/puma-test-app
To create application using remote Git repository with specific branch reference:
$ oc new-app https://github.com/openshift/ruby-hello-world.git#beta4
$ oc new-app /home/user/code/myapp --strategy=docker
To create a definition generated by the oc new-app command based on S2I support
$ oc new-app https://github.com/openshift/simple-openshift-sinatra-sti.git -o json | tee ~/simple-sinatra.json
To create application from MySQL image in Docker Hub:
$ oc new-app mysql
To create application from local registry:
$ oc new-app myregistry:5000/example/myimage
To create application from stored template:
$ oc create -f examples/sample-app/application-template-stibuild.json
$ oc new-app ruby-helloworld-sample
To set environment variables when creating application for database image:
$ oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password
To deploy two images in single pod:
$ oc new-app nginx+mysql
  • 55. OpenShift Cheat Sheet 9
To create a registry with a storage volume mounted on the host
oadm registry --service-account=registry --config=/etc/origin/master/admin.kubeconfig --credentials=/etc/origin/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --mount-host=<path> --selector=meuselector
To export all resources from a project/namespace as a template
oc export all --as-template=<template_name>
To create a build from a Dockerfile
# create the build
cat ./path/to/your/Dockerfile | oc new-build --name=build-from-docker --binary --strategy=docker -l app=app-from-custom-docker-build -D -
#if you need to give some input to your Docker Build process
oc start-build build-from-docker --from-dir=. --follow
# create an OSE app from the docker build image
oc new-app app-from-custom-docker-build -l app=app-from-custom-docker-build
oc expose service app-from-custom-docker-build
To copy files to/from a POD
#Ref: https://docs.openshift.org/latest/dev_guide/copy_files_to_container.html
oc rsync /home/user/source devpod1234:/src
oc rsync devpod1234:/src /home/user/source
Cluster nodes CleanUp
$ oadm pod-network make-projects-global ci
Adjust Master Log Level
To adjust the openshift-master log level, edit the following line of /etc/sysconfig/atomic-openshift-master on the master VM: OPTIONS=--loglevel=4
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
To unset environment variables in the pod templates:
$ oc env <object-selection> KEY_1- ... KEY_N- [<common-options>]
The trailing hyphen (-, U+2D) is required. This example removes environment variables ENV1 and ENV2 from deployment config d1:
$ oc env dc/d1 ENV1- ENV2-
This removes environment variable ENV from all replication controllers:
$ oc env rc --all ENV-
This removes environment variable ENV from container c1 in replication controller r1:
$ oc env rc r1 --containers='c1' ENV-
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
To apply some change (patch)
oc patch dc/<dc_name> -p '{"spec":{"template":{"spec":{"nodeSelector":{"nodeLabel":"logging-es-node-1"}}}}}'
To attach volume storage
oc volume dc/<dc_name> --add --overwrite --name=<volume_name> --type=persistentVolumeClaim --claim-name=<claim_name>
To make a node unschedulable in a cluster
oadm manage node <node name> --schedulable=false
  • 56. OpenShift Cheat Sheet 10
Create Definition Files for Volumes
ssh master00-$guid
mkdir /root/pvs
export volsize="5Gi"
for volume in pv{1..25}; do
cat << EOF > /root/pvs/${volume}.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${volume}
spec:
  capacity:
    storage: ${volsize}
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/export/pvs/${volume}
    server: 192.168.0.254
  persistentVolumeReclaimPolicy: Recycle
EOF
echo "Created def file for ${volume}"
done
Patch PVs definitions
for pv in $(oc get pv | awk '{print $1}' | grep pv | grep -v NAME); do
oc patch pv $pv -p "spec:
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Recycle"
done
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
To make changes valid, restart the atomic-openshift-master service:
$ sudo -i systemctl restart atomic-openshift-master.service
On a node machine, to provide filtered information:
# journalctl -f -u atomic-openshift-node
Enable EAP clustering/replication
Make sure that your default service account has sufficient privileges to communicate with the Kubernetes REST API. Add the view role to the serviceaccount for the project:
$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
OCP Internal VIP failover for Routers running on Infra nodes
oc adm ipfailover ipf-ha-router --replicas=2 --watch-port=80 --selector="region=infra" --virtual-ips="x.0.0.x" --iptables-chain="INPUT" --service-account=ipfailover --create
Use oc new-app with the -o json option to bootstrap your new template definition file
oc new-app -o json openshift/hello-openshift > hello.json
Working with Templates
to list all parameters from the mysql-persistent template:
$ oc process --parameters=true -n openshift mysql-persistent
Customizing resources from a preexisting Template, Example:
$ oc export -o json -n openshift mysql-ephemeral > mysql-ephemeral.json
... change the mysql-ephemeral.json file ...
$ oc process -f mysql-ephemeral.json -v MYSQL_DATABASE=testdb,MYSQL_USER=testuser,MYSQL_PASSWORD= > testdb.json
$ oc create -f testdb.json
  • 57. OpenShift Cheat Sheet 11
DeploymentConfig Post-deployment (lifecycle) hook sample
oc patch dc/mysql --patch '{"spec":{"strategy":{"recreateParams":{"post":{"failurePolicy": "Abort","execNewPod":{"containerName":"mysql","command":["/bin/sh","-c","curl -L -s https://github.com/RedHatTraining/DO288-apps/releases/download/OCP-4.1-1/import.sh -o /tmp/import.sh&&chmod 755 /tmp/import.sh&&/tmp/import.sh"]}}}}}}'
oc CLI + bash tricks:
tail logs for all pods at once
oc get pods -o name | xargs -L 1 oc logs [--tail 1 [-c <container-name>]]
print response fields with curl
curl -s -w 'HTTP code: %{http_code}\nTime: %{time_total}s\n' "$SVC_URL"
retrieving a POD name dynamically
INGRESS_POD=$(oc -n istio-system get pods -l istio=ingressgateway -o jsonpath='{.items..metadata.name}')
oc -n istio-system exec $INGRESS_POD -- ls /etc/istio/customer-certs
Istio
Verify the given pod uses a unique SVID (SPIFFE - Secure Production Identity Framework for Everyone Verified Identity Document):
oc exec $POD_NAME -c istio-proxy -- curl -s http://127.0.0.1:15000/config_dump | jq -r .configs[5].dynamic_active_secrets[0].secret | jq -r .tls_certificate.certificate_chain.inline_bytes | base64 --decode | openssl x509 -text -noout | grep "X509v3 Subject" -A 1
X509v3 Subject Alternative Name: critical URI:spiffe://cluster.local/ns/mtls/sa/POD_NAME
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
Patch a DC on OCP 4 to set env vars from a ConfigMap
oc patch -n user1 dc/events -p '{ "metadata" : { "annotations" : { "app.openshift.io/connects-to" : "invoice-events,i
Patch a ConfigMap
oc patch configmap/myconf --patch '{"data":{"key1":"newvalue1"}}'
Verify if a given Service Account has a given rolebinding
oc get rolebinding -o wide -A | grep -E 'NAME|ClusterRole/view|namespace/sa_name'
Using the jq utility to search/filter through an oc get json output:
#!/bin/bash
oc get service --all-namespaces -o json | jq '.items[] | select( .metadata.labels."discovery.3scale.net" == "true" and .metadata.annotations."discovery.3scale.net/port" and .metadata.annotations."discovery.3scale.net/scheme" ) | { "service-name": .metadata.name, "service-namespace": .metadata.namespace, "labels": .metadata.labels, "annotations": .metadata.annotations } '
Operators troubleshooting
oc get ClusterServiceVersion --all-namespaces
oc get subs -n openshift-operators
oc api-resources
oc explain <resource name>[.json attribute]
  • 58. OpenShift Cheat Sheet 12
https://gist.github.com/rafaeltuelho/111850b0db31106a4d12a186e1fbc53e
creating an inline json patch file and applying it to a resource
cat > gateway-patch.json << EOF
[{
  "op": "add",
  "path": "/spec/template/spec/containers/0/volumeMounts/0",
  "value": {
    "mountPath": "/etc/istio/customer-certs",
    "name": "customer-certs",
    "readOnly": true
  }
},
{
  "op": "add",
  "path": "/spec/template/spec/volumes/0",
  "value": {
    "name": "customer-certs",
    "secret": {
      "secretName": "istio-ingressgateway-customer-certs",
      "optional": true
    }
  }
}]
EOF
applying the patch
oc -n istio-system patch --type=json deploy istio-ingressgateway -p "$(cat gateway-patch.json)"
Wait for a resource (e.g. POD) to be ready (meet a condition)
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s
  • 61. Brewer: CAP (Distributed Systems) Source: http://blog.nahurst.com/visual-guide-to-nosql-systems
  • 62. PACELC Theorem  An extension to the CAP theorem. It states that in case of network partitioning (P) in a distributed computer system, one has to choose between availability (A) and consistency (C) (as per the CAP theorem), but else (E), even when the system is running normally in the absence of partitions, one has to choose between latency (L) and consistency (C).  => Eventual consistency approach in Cassandra DB and other solutions …
  • 63. PACELC Theorem – DB Ratings
DDBS             P+A   P+C   E+L   E+C
DynamoDB         Yes         Yes
Cassandra        Yes         Yes
Cosmos DB        Yes         Yes
Riak             Yes         Yes
VoltDB/H-Store         Yes         Yes
Megastore              Yes         Yes
BigTable/HBase         Yes         Yes
MongoDB          Yes               Yes
PNUTS                  Yes   Yes
Hazelcast IMDG   Yes         Yes   Yes
  • 64. C(A/P)S Versioning Principle: Sacrifice With Big Data, storing data redundantly or converting terabytes is an issue. C(A/P)S Principle: Tradeoff between Code Amount – Availability/Performance – Storage: One needs to be sacrificed: 1. With each new version as storage format, all old data could be eagerly migrated to the latest version (active migration, perhaps partial service availability during migration and perhaps loss of attributes from old versions although they might be required for revision-safety: Low-Availability/Performance cost factor). 2. Migrate only those pieces of the data that are needed (lazily, e.g. on-access migration or when it’s foreseeable); however, then the large ORC/Parquet files cannot be fully migrated and thus not deleted before copying also the rest of the data away or before migrating it (also due to block size; high storage costs, medium programming costs with converter cascade from very old to the latest version: Storage cost factor). 3. Program converters from source into older (replay/late arrivals) and perhaps multiple versions of newer storage formats and out of these formats to potentially multiple versioned destination formats (Code Amount). How to store/employ the data model versions used and the relevant converters: High programming costs, mitigatable through converter cascades or code generation; complex version and release management: Wages as cost factor.
  • 65. C(A/P)S Versioning Principle: Benefits C(A/P)S Principle: Only 2 of the 3 benefits can be achieved: Code Amount (low) – Availability/Performance (high) – Storage (low). Code Amount (low) Availability/ Performance (high) Storage (low) Lazy migration Eager migration No focus on migration: Programming Multiple Converters, Optimizations below
  • 66. C(A/P)S Versioning Principle shown with Circles You can choose 1 point in this space. A combination of all properties is not possible. It is best to choose 1 of the 3 overlapping areas. Code Amount (low) Availability/ Performance (high) Storage (low)
  • 67. © Thomas Pötter – Common Criteria Basic Concept (diagram): Assets/Functionalities (here Server/Client/Webapp), Vulnerabilities, Attacks, Counter Measures, Requirements, Mitigations, Policies, Risks / Damage Potential, Remaining Risk
  • 68. 68 © 2017 FORRESTER. REPRODUCTION PROHIBITED. Reference architecture for container platforms › Container engine provides the foundational execution environment. › Container orchestration enables key capabilities for enterprise adoption. › External integration allows extensive support for diversified use scenarios. › Operations management streamlines operations or maintenance processes. › Container infrastructure allows adaptability of operating environments. › Container image management ensures unified control and value co-creation. › Container security safeguards end-to-end security. › DevOps automation allows application life-cycle acceleration. OVERVIEW Source: Vendor Landscape: Container Solutions For Cloud-Native Applications Forrester report
  • 69. Service-oriented Computing (stack diagram): each delivery model covers the layers Storage, Server HW, Networking, Servers, Databases, Virtualization, Runtimes, Applications and Security & Integration. Private (HW / virtual / DC): you manage the entire stack. Infrastructure as a Service: the vendor manages the infrastructure layers up to virtualization, you manage the rest. Platform as a Service: the vendor manages everything except the applications. Software as a Service (cloud): the vendor manages the whole stack.
  • 70. Cloud Computing Components  Azure, Google Cloud Platf, Amazon Web Services, IBM BlueMix, OpenShift, OpenStack, ... and many others Computing Services Execution Models Virtual Machines Web Sites Cloud Services/Apps Containers, μ-svcs Serverless / Lambdas Mobile services Hi-Perf Computing Management, Orchestration, Monitoring Storage & Data Key-Value Tables Column Store Document DB Graph DB Blobs Caching Data Processing Map/Reduce Hadoop Zoo Reporting Networking Virtual Network Connect Traffic Manager Messaging Service Bus Queue/Topic/Relay Event Hub Multi- & Media Media Services Streaming Content Delivery Other Services Machine Learning Searching / Indexing Maps / GIS Gaming Language / Translate Marketplace Languages / SDK C++ .Net Java PHP Python Node.js ...
  • 72. Cloud Strategy Approach (diagram): CLOUD STRATEGY with three tracks – New Development (leveraging all cloud paradigms – 6 cells of SaaS, PaaS and IaaS), Hybrid Cloud (IaaS lift and shift; IaaS and PaaS new deployments down to VMs and HW), and SaaS (business architecture led) – tied together by CONNECTIVITY (cross-discipline team) and the underlying infrastructure. Infrastructure: Office 365, SharePoint Online, Exchange Online, OneDrive Pro. Line of Business: Dynamics CRM, 3rd Party Solutions, Yammer, Skype. Engineering & Operations Enabling: MDM - In Tune, DevOps - TFS.
  • 73. OpenStack  Cloud Lock-in  functionality, license, development  OpenStack  2010 NASA + Rackspace Compute - Nova Object Storage - Swift Block Storage - Cinder Image Service - Glance Networking - Neutron Identity - Keystone Dashboard - Horizon Orchestration - Heat Workflow - Mistral Telemetry - Ceilometer Database - Trove Map Reduce - Sahara Bare Metal - Ironic Messaging - Zaqar Shared FS - Manila DNS - Designate Search - Searchlight Key Manager - Barbican
  • 75. AWS Athena Sample Architecture PHI: protected health information
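For the query layer in such an architecture, Athena is typically driven via the AWS CLI or an SDK; a minimal sketch (database, table, result bucket and query text are illustrative placeholders, and credentials are assumed to be configured):
aws athena start-query-execution \
  --query-string "SELECT status, count(*) FROM logs.requests GROUP BY status" \
  --query-execution-context Database=logs \
  --result-configuration OutputLocation=s3://my-athena-results/        # returns a QueryExecutionId
aws athena get-query-execution --query-execution-id <QueryExecutionId> # poll until state is SUCCEEDED
aws athena get-query-results   --query-execution-id <QueryExecutionId>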
  • 78. Communication Services Amazon Simple Queue Service (SQS) Amazon Simple Notification Service (SNS) Amazon Simple Email Service (SES) Amazon Route 53 Amazon Virtual Private Cloud (VPC) Amazon Direct Connect Amazon Elastic Load Balancing Storage Services Amazon Simple Storage Service (S3) Amazon Elastic Block Store (EBS) Amazon ElastiCache Amazon SimpleDB Amazon Relational Database Service (RDS) Amazon CloudFront Amazon Import/Export Compute Services Amazon Elastic Compute Cloud (EC2) Amazon Elastic MapReduce AWS Elastic Beanstalk AWS CloudFormation Auto Scaling Amazon AWS Platform Additional Services Amazon GovCloud Amazon Flexible Payment Service (FPS) Amazon DevPay Amazon Fulfillment Web Service (FWS) Amazon Mechanical Turk Alexa Web Information Service Amazon CloudWatch Alexa Top Sites
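As a quick illustration of the messaging services listed above, SQS can be exercised end to end from the AWS CLI; a minimal sketch (the queue name and message body are illustrative placeholders):
aws sqs create-queue --queue-name demo-queue                                            # returns the QueueUrl
QUEUE_URL=$(aws sqs get-queue-url --queue-name demo-queue --query QueueUrl --output text)
aws sqs send-message    --queue-url "$QUEUE_URL" --message-body "hello"
aws sqs receive-message --queue-url "$QUEUE_URL"                                        # note the ReceiptHandle
aws sqs delete-message  --queue-url "$QUEUE_URL" --receipt-handle <ReceiptHandle>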
  • 79. Amazon Web Services / Elastic Cloud
  • 80. Amazon Web Services / Elastic Cloud  Compute  Elastic Compute Cloud (EC2) - scalable virtual machines using Xen  Elastic MapReduce (EMR)  Lambda (LAMBDA) - compute service that runs code in response to events  Networking  Route 53 - highly available and scalable DNS  Virtual Private Cloud (VPC) - logically isolated set of EC2, VPN connection  AWS Direct Connect - dedicated network connections into AWS data centers  Elastic Load Balancing (ELB) - automatically distributes incoming traffic  Storage and content delivery  CloudFront - CDN  Simple Storage Service (S3) - Web Service based storage  Glacier - low-cost, long-term storage, redundancy, low-frequent access times  AWS Storage Gateway - iSCSI block storage, cloud-based backup  Elastic Block Store (EBS) - persistent block-level storage volumes for EC2  AWS Import/Export - accelerates moving large amounts of data in/out AWS  Elastic File System (EFS) - file storage service  Database  DynamoDB - low-latency NoSQL backed by SSDs  ElastiCache - in-memory caching, implementation of Memcached and Redis  Relational Database Service (RDS) - MySQL, Oracle, SQL Server, PostgreSQL  Redshift - petabyte-scale data warehousing with column-based storage  SimpleDB - run queries on structured data, "the core functionality of a database"  AWS Data Pipeline - data transfer between different AWS services  Analytics  Machine Learning  Kinesis - real-time data processing over large, distributed data streams  Deployment  CloudFormation - file-based interface for provisioning other AWS resources  AWS Elastic Beanstalk - quick deployment and management of applications  AWS OpsWorks - configuration of EC2 services using Chef  AWS CodeDeploy - automated code deployment to EC2 instances  Management  Identity and Access Management (IAM) - authentication service  AWS Directory Service - connection to an existing Active Directory  CloudWatch - monitoring for AWS cloud resources and applications  AWS Management Console - web-based management and monitoring  CloudHSM - data security - dedicated Hardware Security Module (HSM)  AWS Key Management Service (KMS) - control keys used to data encryption  Application services  API Gateway - service for publishing and maintaining web service APIs  CloudSearch - basic full-text search and indexing of textual content  DevPay - billing and account management system  Elastic Transcoder (ETS) - video transcoding  Flexible Payments Service (FPS) - interface for micropayments  Simple Email Service (SES) - bulk and transactional email sending  Simple Queue Service (SQS) - message queue for web applications  Simple Notification Service (SNS) - multi-protocol "push" messaging  Simple Workflow (SWF) - workflow service for building scalable, resilient apps  Cognito - user identity and data synchronization service across mobile devices  AppStream - streaming of resource intensive applications from the cloud  Miscellaneous  Product Advertising API - electronic commerce
  • 84. Building Web Scaling Apps In Action! Let's go back and review a real live example!
  • 87. Project 1 Project 2 Project 3 Project …. Tactical Migration Strategy Business Case Application Assessment Risk & Compliance Operational Framework Continuous Feedback Future State Cycles of Learning Migration Strategy – Recommended Approach
  • 91. Platform Services Infrastructure Services Web Apps Mobile Apps API Management API Apps Logic Apps Notification Hubs Content Delivery Network (CDN) Media Services BizTalk Services Hybrid Connections Service Bus Storage Queues Hybrid Operations Backup StorSimple Azure Site Recovery Import/Export SQL Database DocumentDB Redis Cache Azure Search Storage Tables Data Warehouse Azure AD Health Monitoring AD Privileged Identity Management Operational Analytics Cloud Services Batch RemoteApp Service Fabric Visual Studio App Insights Azure SDK VS Online Domain Services HDInsight Machine Learning Stream Analytics Data Factory Event Hubs Mobile Engagement Data Lake IoT Hub Data Catalog Security & Management Azure Active Directory Multi-Factor Authentication Automation Portal Key Vault Store/ Marketplace VM Image Gallery & VM Depot Azure AD B2C Scheduler Azure Architecture
  • 93. Microsoft Azure Virtual Machines - Provision Windows and Linux virtual machines in minutes App Service - Create web and mobile apps for any platform and any device SQL Database - Managed relational SQL Database-as-a-service Storage - Durable, highly available, and massively scalable cloud storage Cloud Services - Create highly available, infinitely scalable cloud applications and APIs DocumentDB - Managed NoSQL document database-as-a-service Azure Active Directory - Synchronize on-premises directories and enable single sign-on Backup - Simple and reliable server backup to the cloud HDInsight - Provision cloud Hadoop, Spark, R Server, HBase, and Storm clusters RemoteApp - Deploy Windows client apps in the cloud, run on any device Batch - Run large-scale parallel and batch compute jobs StorSimple - Hybrid cloud storage for enterprises, reduces costs and improves data security Visual Studio Team Services - Services for teams to share code, track work, and ship software API Management - Publish APIs to developers, partners and employees securely and at scale Azure IoT Hub - Connect, monitor, and control millions of IoT assets CDN - Deliver content to end-users through a robust network of global data centers ExpressRoute - Dedicated private network fiber connections to Azure Site Recovery - Orchestrate protection and recovery of private clouds Azure DNS - Host your DNS domain in Azure Machine Learning - Powerful cloud-based predictive analytics Service Fabric - Build and operate always-on, scalable, distributed applications Multi-Factor Authentication - Safe access to data and apps, extra level of authentication Visual Studio Application Insights - Detect and diagnose issues in your web apps and services SQL Data Warehouse - Elastic data warehouse-as-a-service with enterprise-class features Virtual Network - Provision private networks, optionally connect to on-premises datacenters Media Services - Encode, store, and stream video and audio at scale Stream Analytics - Real-time stream processing Azure Active Directory Domain Services - Join Azure VM to a domain w/o domain controllers Event Hubs - Ingest, persist, and process millions of events per second Data Factory - Orchestrate and manage data transformation and movement Key Vault - Safeguard and maintain control of keys and other secrets Service Bus - Connect across private and public cloud environments Azure Active Directory B2C - Consumer identity and access management in the cloud Scheduler - Run your jobs on simple or complex recurring schedules Azure DevTest Labs - Quickly create environments to deploy and test applications Notification Hubs - Scalable, cross-platform push notification infrastructure Automation - Simplify cloud management with process automation Log Analytics - Collect, search and visualize machine data from on-premises and cloud Security Center - Prevent, detect, and respond to threats with increased visibility BizTalk Services - Seamlessly integrate the enterprise and the cloud Traffic Manager - Route incoming traffic for high performance and availability Redis Cache - Access to a secure, dedicated cache for your Azure applications Search - Fully-managed search-as-a-service Load Balancer - Deliver high availability and network performance to your applications VPN Gateway - Establish secure, cross-premises connectivity Application Gateway - Layer 7 Load Balancer with built-in HTTP balancing and delivery cntrl Data Catalog - Data source discovery to get more value from existing enterprise data assets Virtual 
Machine Scale Sets - Highly available, auto scalable Linux or Windows virtual machines Power BI Embedded - Embed fully interactive, stunning data visualizations in your applications Mobile Engagement - Increase app usage and user retention Data Lake Store - Hyperscale repository for big data analytics workloads Data Lake Analytics - Distributed analytics service that makes big data easy Cognitive Services - Add smart API capabilities to enable contextual interactions Azure Container Service - Use Docker based tools to deploy and manage containers SQL Server Stretch Database - Dynamically stretch on-premises SQL Server databases to Azure HockeyApp - Deploy mobile apps, collect feedback and crash reports, and monitor usage Functions - Process events with serverless code Logic Apps - Automate the access and use of data across clouds without writing code Cortana Intelligence - Transform your business with big data and advanced analytics IoT Suite - Capture and analyze untapped data to improve business results Operations Management Suite - Manage your cloud and on-premises infrastructure Apache Spark for Azure HDInsight - Apache Spark in the cloud for mission critical deployments Apache Storm for HDInsight - Real-time stream processing made easy for big data R Server for HDInsight - Predictive modeling, machine learning, and analysis for big data Encoding - Studio Grade encoding at cloud scale Live and On-Demand Streaming - Deliver content to all devices with business scale Azure Media Player - A single layer for all your playback needs Content Protection - Securely deliver content using AES, PlayReady, Widevine, and Fairplay Blob Storage Accounts - REST-based object storage for unstructured data Premium Storage - Low latency and high throughput storage Web Apps - Quickly create and deploy mission critical Web apps at scale Mobile Apps - Build and host the backend for any mobile app API Apps - Easily build and consume Cloud APIs Text Analytics API - Easily evaluate sentiment and topics to understand what users want Recommendations API - Predict and recommend items your customers want Academic Knowledge API - Academic content in the Microsoft Academic Graph Computer Vision API - Distill actionable information from images Emotion API - Personalize experiences with emotion recognition Face API - Detect, analyze, organize, and tag human faces in photos Bing Speech API - Convert speech to text and back again to understand user intent Web Language Model API - Predictive language models trained on web-scale data Language Understanding Intelligent Service - Understanding commands from your users Speaker Recognition API - Use speech to identify and authenticate individual speakers Bing Search APIs - Web, image, video, and news search APIs for your app Bing Autosuggest API - Give your app intelligent options for searches Bing Spell Check API - Detect and correct spelling mistakes in your app Media Analytics - Speech and Vision services at enterprise scale, compliance, and security Queue Storage - Effectively scale apps according to traffic File Storage - File shares that use the standard SMB 3.0 protocol Tables Storage - NoSQL key-value storage using semi-structured datasets
  • 96. Azure Container Service: Containers; Orchestrator (Docker Swarm, DC/OS, Kubernetes); Container Tooling, e.g. Docker CLI; Service Tooling, e.g. ARM Template
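Azure Container Service has since been retired in favour of Azure Kubernetes Service (AKS), but the workflow is similar; a minimal sketch with the current Azure CLI (resource group, cluster name, region and node count are illustrative placeholders):
az group create --name demo-rg --location westeurope
az aks create --resource-group demo-rg --name demo-aks --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group demo-rg --name demo-aks   # merges the cluster into your kubeconfig
kubectl get nodes                                                  # standard kubectl from here on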
  • 97. Cosmos DB Billions transactions/day Services Powered by Service Fabric SQL Database 2.1 million DBs Cortana Power BI Event Hubs 60bn events/day IoT Hub Millions of messages Skype Intune Dynamics
  • 98. Azure Service Fabric: Any OS, Any Cloud – runs in Azure, in other clouds, on premise, and on the dev box
  • 99. Service Fabric Programming Models & CI/CD: .NET Core/Full .NET/Java; deploy to Azure, other clouds, on premise, or the dev box
  • 100. Windows Azure Platform Components Apps & Services Services Web Frontend Queues Distributed Storage Distributed Cache Partitioned Data Content Delivery Network Load Balancer IIS Web Server VM Role Worker Role Web Role Caching Queues Access Control Composite App Blobs Relational Database Tables Drives Service Bus Reporting DataSync Virtual Network Connect
  • 101. Virtual Machine vs VM Role
Storage: VM Role – Non-Persistent Storage | Virtual Machine – Persistent Storage; easily add additional storage
Deployment: VM Role – Build VHD offsite and upload to storage | Virtual Machine – Build VHD directly in the cloud or build the VHD offsite and upload
Networking: VM Role – Internal and Input Endpoints configured through service model | Virtual Machine – Internal Endpoints are open by default; access control with firewall on guest OS; input endpoints controlled through portal, service model or API/Script
Primary Use: VM Role – Deploying applications with long or complex installation requirements into stateless PaaS applications | Virtual Machine – Applications that require persistent storage to easily run in Windows Azure
  • 102. Persistent Disks and Highly Durable
  • 103. Base OS image for new Virtual Machines: Sys-Prepped/Generalized/Read Only; created by uploading or by capture. Writable Disks for Virtual Machines: created during VM creation or during upload of existing VHDs.
  • 104. Cross-premise Connectivity IP-level connectivity Data Synchronization SQL Azure Data Sync Application-layer Connectivity & Messaging Service Bus Secure Machine-to-Machine Network Connectivity Windows Azure Connect Secure Site-to-Site Network Connectivity Windows Azure Virtual Network
  • 105. Microsoft Azure Example October 9, 2022 How to Architect for High Availability?
  • 107. SQL Server in Azure VM vs Azure SQL Database
SQL Server in Azure VM: You access a VM with SQL Server. You manage SQL Server and Windows: High Availability, Backups, Patching (automation available). You can run any SQL Server version and edition. Full on-premises compatibility. Different VM sizes: A0 (1 core, 1GB mem, 100GB) to G5 (32 cores, 512GB mem, 32TB). VM availability SLA: 99.95%; in practice SQL AlwaysOn provides higher availability (~99.99%). Reuse on-premises infrastructure (e.g. Active Directory).
Azure SQL Database: You access a DB. The DB is fully managed: High Availability, Backups, Patching. Runs the latest SQL Server version, based on Enterprise edition. New paradigm of databases and modern app building. Different DB sizes: Basic (2GB, 5 tps) to Premium (500GB, 735 tps). DB availability SLA: 99.99%.
  • 108. What is a SQL Always On Availability Group • SQL AlwaysOn Availability Groups feature is a HA and DR solution for SQL • Each Server keeps it’s own copy of the databases. • Shared Storage is not required • Databases are synchronized with secondary node databases. • Supports automatic, planned and forced failover. • Depends on the failover clustering role. • Secondary nodes can be used as Read only Nodes and for Backups.
  • 109. WITNESS Azure: Architecture Diagram PRIMARY Availability Group SECONDARY WindowsCluster On-Premises SECONDARY Azure Primary: On-premises Secondary: Azure – Data in azure act as a DR Cost : Egress Traffic
  • 110. Azure: Architecture Diagram PRIMARY Availability Group SECONDARY WindowsCluster On-Premises SECONDARY Cloud Primary: Azure Secondary: On-Premises - a copy of for reporting and regulatory purposes Cost : Egress Traffic WITNESS
  • 111. Windows Azure Platform: enterprise-level infrastructure (compute, storage, networking, identity, marketplace, management portal) plus a local development environment and development tools; components: Windows Azure Compute, Windows Azure Storage, Windows Azure Connect, Content Delivery Network (CDN), AppFabric Caching, AppFabric Service Bus, AppFabric Integration, AppFabric Access Control, SQL Azure, DataMarket, Applications Marketplace
  • 112. Microsoft Azure Services Data & Storage Web & Mobile Compute SQL Database App Service Virtual Machines Media & CDN Media Services CDN Developer Services DocumentDB Redis Cache Cloud Services Batch Service Fabric Networking Virtual Network ExpressRoute Traffic Manager StorSimple Search Storage Identity & Access Azure Active Directory Multi-Factor Authentication API Management Notification Hubs Mobile Engagement Visual Studio Online Application Insights Management Scheduler Automation Operational Insights Key Vault Analytics & IoT HDInsight Machine Learning Stream Analytics Data Factory Event Hubs Hybrid Integration BizTalk Services Service Bus Backup Site Recovery Web App Mobile App API App Logic App Blobs Tables Queues Files Marketplace … Data Lake Data Warehouse RemoteApp DNS Application Gateway
  • 113. Azure Blob Storage Concepts
  • 114. Queues Storage 3-Tier service pattern Front End (Stateless Web) Stateless Middle-tier Compute Cache • Scale with partitioned storage • Increase reliability with queues • Reduce read latency with caches • Manage your own transactions for state consistency • Many moving parts each managed differently Load Balancer
  • 115. • Box • Chatter • Delay • Dropbox • Azure HD Insight • Marketo • Azure Media Services • OneDrive • SharePoint • SQL Server • Office 365 • Oracle • QuickBooks • SalesForce • Sugar CRM • SAP • Azure Service Bus • Azure Storage • Timer / Recurrence • Twilio • Twitter • IBM DB2 • Informix • Websphere MQ • Azure Web Jobs • Yammer • Dynamics CRM • Dynamics AX • Hybrid Connectivity • HTTP, HTTPS • File • Flat File • FTP, SFTP • POP3/IMAP • SMTP • SOAP + WCF • Batching / Debatching • Validate • Extract (XPath) • Transform (+Mapper) • Convert (XML-JSON) • Convert (XML-FF) • X12 • EDIFACT • AS2 • TPMOM • Rules Engine Connectors Protocols BizTalk Services Built-in API Connectors
  • 116. Azure Web Apps  Rich monitoring and alerting  Traffic manager  Custom CNAMEs  VNET and VPN  Backup and restore  Many VM size and instance options  In production A/B testing  Auto load-balance  Share capacity across Web and Mobile  Staging slots  Validate changes in your staging environment before publishing to production  More DevOps features  Support for BitBucket and Visual Studio Online; seamless integration with GitHub  Web Jobs
  • 117. Architecture Azure SQL DW: data is spread across distribution databases Dist_DB_1 … Dist_DB_60 https://azure.microsoft.com/en-us/documentation/articles/sql-data-warehouse-overview-what-is
  • 119. Azure Data Lake & SQL DW
  • 120. Loading data without Polybase https://blogs.msdn.microsoft.com/sqlcat/2016/02/06/azure-sql-data-warehouse-loading-patterns-and-strategies/
  • 121. Loading data via Polybase https://blogs.msdn.microsoft.com/sqlcat/2016/02/06/azure-sql-data-warehouse-loading-patterns-and-strategies/
  • 122. Azure Mobile App REST API Offline sync Facebook Twitter Microsoft Google Azure Active Directory Windows iOS Android HTML 5/JS Xamarin PhoneGap Sencha Windows Android Chrome iOS OSX In-App Kindle Backend code SQL Mongo Tables O365 API Apps Offline Sync
  • 123. New Data Model TableController DataManager DTO DTO Mobile Service/App Device SQL Database BYOD MongoDB Table Storage
  • 125. Azure Notification Hub  Register device handle at app launch: 1. Client app retrieves handle from Platform Notification Service 2. Client sends handle to your backend; backend registers with Notification Hub using tags to represent logical users and groups  Send notification: 3. Backend sends request to Notification Hub using a tag; Notification Hub manages scale and maps logical users/groups to device handles 4. Notification Hub delivers notifications to matching devices via PNS  Maintain backend device handles: 5. Notification Hub deletes expired handles when PNS rejects them 6. Notification Hub maintains the mapping between logical users/groups and device handles
  • 126. File / application servers • Live backups, archives, and disaster recovery • Dramatic cost reduction • No changes to application environment File / application servers • File share with integrated data protection • All-in-one primary data + backup + live archives + DR with de-duplication & compression Policies Automated Encrypted • SharePoint storage on StorSimple + Azure • StorSimple SharePoint Database Optimizer • Improved performance & scalability • Control Virtual Sprawl • Cloud-as-a-tier • Offload storage footprint • VMware Storage DRS Storage pools • Virtual Machine Archive • Regional VM Storage • Storage for Tier 2 – 3 SQL Databases • Integrated Backup, Restore & Disaster Recovery StoreSimple Archive Data Benefits • Consolidates primary, archive, backup, DR thru seamless integration with Azure • Cloud Snapshots • De-duplication • Compression • Encryption • Reduces enterprise storage TCO by 60–80% Warm data on SAS Local Tier Most Active Data on SSD ExpressRoute Recovery De-duplicated De-duplicated & compressed De-duplicated, compressed & encrypted VPN Microsoft Azure StorSimple Cloud Storage
  • 129. Customer Environment Application Tier Logic Tier Database Tier Isolated Virtual Network INTERNET Cloud Access & Firewall Layer THREAT DETECTION: DoS/IDS Layer DOS/IDS Layer DOS/IDS Layer DOS/IDS Layer Clients / End Users Microsoft Azure 443 443 Azure Storage SQL Database Azure Platform • Logical isolation for customer environments and data • Centralized management via SMAPI or the Azure Portal • No internet access by default • Intrusion detection and DoS prevention measures • Customer can deploy additional DoS/IDS measures within their virtual networks • Penetration testing ExpressRoute Peer Private fiber connections to access compute, storage and more using ExpressRoute Azure Security and Compliance Secure development, operations, and threat mitigation practices provide a trusted foundation VPN Remote Workers Computers Behind Firewalls Enables connection from customer sites and remote workers to Azure Virtual Networks using Site-to-Site and Point-to-Site VPNs Azure manages compliance with: • ISO 27001 • SOC1 / SOC2 • HIPAA BAA • DPA / EU-MC • UK G-Cloud / IL2 • PCI DSS • FedRAMP Azure’s certification process is ongoing with annual updates and increasing breadth of coverage. Azure provides a number of options for encryption and data protection.
  • 130. Microsoft ALM & DevOps: repository, build, test, deploy, app ops; process tools: Service Manager, System Center Operations Manager; one consistent platform across on-premises, service provider, and Microsoft Azure
  • 131. Microsoft Cloud Services Foundation Reference Model By: Thomas W Shinder and Jim Dial Management and Support Service Operations Infrastructure Service Delivery Platform Software Manage and support Support Provide capability Provide capability Define Define Define Request Fulfillment Asset and Configuration Management Change Management Incident and Problem Management Release and Deployment Management Access Management Systems Administration Knowledge Management Service Monitoring Configuration Management Service Reporting Network Support Service Management Fabric Management Deployment and Provisioning Authentication Consumer and Provider Portal Usage and Billing Authorization Data Protection Directory Process Automation Compute Storage Network Virtualization ServiceLevel Management Financial Management Regulatory Policy and Compliance Management Information Security Management Availability and Continuity Management Capacity Management ServiceLifecycle Management Enable services Provide capability Enable services Define Business Relationship Management This diagram is updated periodically. The latest version can be found online. Version 1 Detailed information about this diagram is provided in the Cloud Services Foundation Reference Model article. http://blogs.technet.com/b/cloudsolutions/archive/2013/08/15/cloud-services-foundation-reference-architecture-reference-model.aspx • Green subdomains contain components that represent IT operational processes • Blue subdomains contain technical capabilities components, which represent the functionality that is provided by hardware devices or software applications or both
  • 132. Hybrid Cloud Scenarios Recovery Encrypted Backup VPN Windows Backup SC Data Protection Manager Microsoft Azure System Center Virtual Machine Manager Recovery plan Health Monitor System Center Virtual Machine Manager Site A Site B Hyper-V Replica Orchestrated Recovery in case of outage Manage Site B System Center Virtual Machine Manager Site A Replication Recovery Microsoft Azure Microsoft Azure VPN Remote Users Admin
  • 133. Hybrid Cloud Scenarios File / Application Servers • Live Backups, Archives, and Disaster Recovery • Dramatic Cost Reduction • No Changes to Application Environment File / Application Servers • File share with integrated data protection • All-in-one primary data + backup + live archives + DR with de-duplication & Compression Policies Automated Encrypted • SharePoint storage on StorSimple + Azure • StorSimple SharePoint Database Optimizer • Improved performance & scalability • Control Virtual Sprawl • Cloud-as-a-tier • Offload storage footprint • VMware Storage DRS Storage pools • Virtual Machine Archive • Regional VM Storage • Storage for Tier 2 – 3 SQL Databases • Integrated Backup, Restore & Disaster Recovery StoreSimple Archive Data Benefits • Consolidates primary, archive, backup, DR thru seamless integration with Azure • Cloud Snapshots • De duplication • Compression • Encryption • Reduces enterprise storage TCO by 60–80% Warm data on SAS Local Tier Most Active Data on SSD Encrypted Backup Recovery De duplicated De duplicated & Compressed De duplicated, Compressed & Encrypted VPN Microsoft Azure
  • 134. Hybrid Cloud Scenarios AvailabilitySet Load Balancing Auto Scaling Tier1 AvailabilitySet Tier2 Auto Scaling SharePoint AvailabilitySet Tier3 Azure Storage SQL Azure Analytics & Reporting VPN Web Site Mobile Service HDInsight (Hadoop) Storage BLOB Storage Table Storage Queue Virtual Machines VHD Windows Azure Cache Windows Azure CDN Microsoft Azure AD Notification Hub Users Microsoft Azure SDK Developers On Premises Microsoft Azure Connected Devices Collect / Decode Load Balancing Auto Scaling Worker Roles INGRESS NODES Filter / Analyze / Aggregate ANALYTICS NODE Auto Scaling Worker Roles Azure Storage Record Reporting / BI CONSUME Azure Storage SQL Azure Analytics & Reporting Microsoft Azure
  • 135. Hybrid Cloud Scenarios Enterprise Mobility Suite • Hybrid Identity Management • Mobile Device Security & Management • Mobile Application Management • Strong Authentication & Access-based Information Protection Consumer identity providers PCs and devices Microsoft apps 3rd party clouds/hosting ISV/CSV apps Custom LOB apps Encrypted Synchronization Microsoft Azure AD ADFS / SAML Multi-Factor Authentication Server Corporate devices On Premises Applications BYOD / Personal devices .NET, Java, PHP, … • Built-in • SDK for integration • Strong Multi-Factor Authentication • Real-Time Fraud Alert • Reporting, Logging & Auditing • Enables compliance with NIST 800-63 Level 3, HIPAA, PCI DSS, and other regulatory requirements Microsoft Azure AD
  • 136. Microsoft Azure Service Fabric A platform for reliable, hyperscale, microservice-based applications Microservices Azure Windows Server Linux Hosted Clouds Windows Server Linux Service Fabric Private Clouds Windows Server Linux High Availability Hyper-Scale Hybrid Operations High Density Rolling Upgrades Stateful services Low Latency Fast startup & shutdown Container Orchestration & lifecycle management Replication & Failover Simple programming models Load balancing Self-healing Data Partitioning Automated Rollback Health Monitoring Placement Constraints
  • 137. Azure Governance Architecture: Azure Resource Manager (ARM) handles CRUD and query, providing control over the cloud environment without sacrificing developer agility. 1. Environment Factory: deploy and update cloud environments in a repeatable manner using composable artifacts. 2. Policy-based Control: real-time enforcement, compliance assessment, and remediation at scale. 3. Resource Visibility: query, explore & analyze cloud resources at scale. Building blocks: role-based access, policy definitions, ARM templates, management groups, subscriptions
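As a rough sketch of the "Environment Factory" idea, an ARM template can be deployed repeatably with the Azure CLI; the resource group, location, template file, and parameter names below are illustrative placeholders, not part of the original deck:

# Deploy (or re-deploy) an environment described by an ARM template
az group create --name demo-rg --location westeurope
az deployment group create \
  --resource-group demo-rg \
  --template-file environment.json \
  --parameters environmentName=dev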
  • 139. Management Group & Subscription Modeling Strategy App A Pre-Prod Microsoft Recommended App B Pre-Prod Shared services (Pre-Prod) App C Pre-Prod App A Prod App B Prod Shared services (Prod) App D Prod Prod RBAC + Policy Pre-Prod RBAC + Policy Org Management Group
  • 140. Remediation Enforcement & Compliance Apply policies at scale Turn on built-in policies or build custom ones for all resource types Real-time policy evaluation and enforcement Periodic & on-demand compliance evaluation Apply policies to a Management Group for control across your entire organization Apply multiple policies and aggregate policy states with a policy initiative Real-time remediation Remediation on existing resources (NEW) Exclusion Scope Azure Policy VM In-Guest Policy (NEW)
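A minimal, hedged sketch of applying a built-in policy at scale with the Azure CLI; the policy chosen, the locations, and the assignment name are examples rather than prescriptions from the deck:

# Find the built-in "Allowed locations" policy and assign it at the current subscription scope
POLICY_ID=$(az policy definition list \
  --query "[?displayName=='Allowed locations'].name | [0]" -o tsv)
az policy assignment create \
  --name allowed-locations \
  --policy "$POLICY_ID" \
  --params '{"listOfAllowedLocations":{"value":["westeurope","northeurope"]}}'
# Summarize compliance state for that assignment
az policy state summarize --policy-assignment allowed-locations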
  • 141. State of Cloud Computing  Perceptions  “The end of software”  On-demand infrastructure  Cheaper and better  Reality  Hybrid world; not “all-or-nothing”  Leverage existing IT skills and investments  Seamless user experiences  Evolutionary; not revolutionary  Drivers  Ease-of-use, convenience  Product effectiveness  Simplify IT, reduce costs > Types • Public • Private • Internal • External • Hybrid > Categories • SaaS • PaaS • IaaS
  • 142. Private (on-premise), IT as a Service, Infrastructure (as a Service), Platform (as a Service): the same stack (networking, storage, server HW, virtualization, servers, databases, runtimes, security & integration, applications) appears in each model; moving from on-premise through IaaS to PaaS shifts more of the stack from "you manage" to "managed by vendor"
  • 143. .NET Services Windows Azure Applications Applications SQL Azure Others Windows Mobile Windows Vista/XP Windows Server Fabric Storage Config Compute Application Windows Azure An illustration
  • 144. Illustrating the Service Bus: Access Control plus the Service Bus registry and endpoints sit between the Organization X and Organization Y applications: 1) register endpoints, 2) discover endpoints, 3) access application
  • 145. Application Models Web Hosting  Massive scale infrastructure  Burst & overflow capacity  Temporary, ad-hoc sites Application Hosting  Hybrid applications  Composite applications  Automated agents / jobs Media Hosting & Processing  CGI rendering  Content transcoding  Media streaming Distributed Storage  External backup and storage High Performance Computing  Parallel & distributed processing  Massive modeling & simulation  Advanced analytics Information Sharing  Reference data  Common data repositories  Knowledge discovery & mgmt Collaborative Processes  Multi-enterprise integration  B2B & e-commerce  Supply chain management  Health & life sciences  Domain-specific services
  • 146. Kappa Architecture, in Azure, Managed
  • 147. Kappa Architecture, in Azure, Managed
  • 148. Internet-Scale Application Architecture Design  Horizontal scaling  Service-oriented composition  Eventual consistency  Fault tolerant (expect failures) Security  Claims-based authentication & access control  Federated identity  Data encryption & key mgmt. Management  Policy-driven automation  Aware of application lifecycles  Handle dynamic data schema and configuration changes Data & Content  De-normalization  Logical partitioning  Distributed in-memory cache  Diverse data storage options (persistent & transient, relational & unstructured, text & binary, read & write, etc.) Processes  Loosely coupled components  Parallel & distributed processing  Asynchronous distributed communication  Idempotent (handle duplicity)  Isolation (separation of concerns)
  • 149. Storage • Relational & transactional data • Federated databases • Unstructured, de-normalized data • Logical partitioning • Persistent file & blob storage • Encrypted storage Connectivity • Message queues • Service orchestrations • Identity federation • Claims-based access control • External services connectivity Presentation • ASP.NET C#, PHP, Java • Distributed in-memory cache Services • .NET C#, Java, native code • Distributed in-memory cache • Asynchronous processes • Distributed parallel processes • Transient file storage Internet-Scale Application Architecture SERVICE BUS ACCESS CONTROL WORK FLOWS
  • 150. Application pattern: Cloud Web Application. User clients (web browser, mobile browser, Silverlight, WPF) reach ASP.NET web roles and worker-role jobs in the public cloud, backed by table/blob/queue storage services, Service Bus, Access Control Service, and Workflow Service, with user, application, and reference data; the private cloud contributes enterprise data, web services, applications, and identity via data, storage, identity, and application services
  • 151. Application pattern: Composite Services Application. The same public cloud building blocks (web/worker roles, storage services, Service Bus, Access Control Service, Workflow Service) are composed with private cloud enterprise data, web services, applications, and identity
  • 152. Application pattern: Cloud Agent Application. Client applications (web/mobile browsers, Silverlight, WPF) work with cloud web/worker roles, storage services, Service Bus, Access Control Service, and Workflow Service, while the private cloud supplies enterprise data, applications, and identity
  • 153. Application pattern: B2B Integration Application. Cloud web/worker roles, storage services, Service Bus, Access Control Service, and Workflow Service integrate enterprise data, web services, applications, and identity across organizations
  • 154. Application pattern: Grid / Parallel Computing Application. Worker-role jobs and web roles in the public cloud use table/blob/queue storage, Service Bus, Access Control Service, and Workflow Service, drawing on enterprise data, web services, applications, and identity from the private cloud
  • 155. Application pattern: Hybrid Enterprise Application. Public cloud web/worker roles, storage services, Service Bus, Access Control Service, and Workflow Service are combined with private cloud enterprise data, web services, applications, and identity services
  • 156. High-Level Architecture Hypervisor Guest Partition Host Partition Guest Partition Hardware Virtualization Stack (VSP) Drivers Host OS Server Core Applications Applications Virtualization Stack (VSC) Guest OS Server Enterprise Virtualization Stack (VSC) Guest OS Server Enterprise NIC NIC Disk1 Disk1 VMBUS VMBUS VMBUS Disk2 Disk2 CPU CPU
  • 157. Image-Based Deployment: the host partition uses an HV-enabled Server Core base VHD plus a host-partition differencing VHD; guest partitions use differencing VHDs layered on Server Core / Server Enterprise base VHDs, with application VHDs (App1/App2/App3 packages) and a maintenance OS
  • 158.  Your services are isolated from other services  Can access resources declared in model only  Local node resources – temp storage  Network end-points  Isolation using multiple mechanisms  Automatic application of Windows security patches  Rolling OS image upgrades Managed code Restriction of privileges Firewall Virtual Machine IP filtering
  • 159. Windows Azure Storage Stamps Storage Stamp LB Storage Location Service Access blob storage via the URL: http://<account>.blob.core.windows.net/ Data access Partition Layer Partition Layer Front-Ends Front-Ends Stream Layer Stream Layer Intra-stamp replication Storage Stamp LB Partition Layer Partition Layer Front-Ends Front-Ends Stream Layer Stream Layer Intra-stamp replication Inter-stamp (Geo) replication
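The account-scoped URL scheme above can be exercised directly; in this sketch the account, container, and blob names are placeholders:

# Anonymous read of a blob in a public container
curl -O https://mystorageacct.blob.core.windows.net/public-container/report.pdf

# Authenticated download with the Azure CLI
az storage blob download \
  --account-name mystorageacct \
  --container-name public-container \
  --name report.pdf \
  --file report.pdf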
  • 160. Storage Stamp Architecture – Stream Layer  Append-only distributed file system  All data from the Partition Layer is stored into files (extents) in the Stream layer  An extent is replicated 3 times across different fault and upgrade domains  With random selection for where to place replicas for fast MTTR  Checksum all stored data  Verified on every client read  Scrubbed every few days  Re-replicate on disk/node/rack failure or checksum mismatch M Extent Nodes (EN) Paxos M M Stream Layer (Distributed File System)
  • 161. Storage Stamp Architecture – Partition Layer  Provide transaction semantics and strong consistency for Blobs, Tables and Queues  Stores and reads the objects to/from extents in the Stream layer  Provides inter-stamp (geo) replication by shipping logs to other stamps  Scalable object index via partitioning M Extent Nodes (EN) Paxos M M Partition Server Partition Server Partition Server Partition Server Partition Master Lock Service Partition Layer Stream Layer
  • 162. Storage Stamp Architecture  Stateless Servers  Authentication + authorization  Request routing M Extent Nodes (EN) Paxos Front End Layer FE M M Partition Server Partition Server Partition Server Partition Server Partition Master FE FE FE FE Lock Service Partition Layer Stream Layer
  • 163. Storage Stamp Architecture M Extent Nodes (EN) Paxos Front End Layer FE Incoming Write Request M M Partition Server Partition Server Partition Server Partition Server Partition Master FE FE FE FE Lock Service Ack Partition Layer Stream Layer
  • 164. Partition Layer – Index Range Partitioning: the blob index (Account Name, Container Name, Blob Name) is split into contiguous key ranges (e.g. A–H on PS1, H'–R on PS2, R'–Z on PS3); the Partition Master assigns ranges to Partition Servers, and Front-End servers use the partition map to route each request (e.g. harry/pictures/sunset, richard/videos/tennis, richard/videos/soccer) to the right Partition Server
  • 165. Each RangePartition – Log Structured Merge-Tree Checkpoint File Table Checkpoint File Table Checkpoint File Table Blob Data Blob Data Blob Data Commit Log Stream Metadata log Stream Writes Read/Query
  • 166. Stream Layer Concepts. Block: min unit of write/read, checksummed, up to N bytes (e.g. 4MB). Extent: unit of replication, a sequence of blocks with a size limit (e.g. 1GB), sealed/unsealed. Stream: hierarchical namespace, an ordered list of pointers to extents, supports append/concatenate; e.g. stream //foo/myfile.data is the list [Ptr E1, Ptr E2, Ptr E3, Ptr E4]
  • 167. Creating an Extent SM SM Stream Master Paxos Partition Layer EN 1 EN 2 EN 3 EN Create Stream/Extent Allocate Extent replica set Primary Secondary A Secondary B EN1 Primary EN2, EN3 Secondary
  • 168. Replication Flow SM SM SM Paxos Partition Layer EN 1 EN 2 EN 3 EN Append Primary Secondary A Secondary B Ack EN1 Primary EN2, EN3 Secondary
  • 169. Design Choices  Multi-Data Architecture  Use extra resources to serve mixed workload for incremental costs  Blob -> storage capacity  Table -> IOps  Queue -> memory  Drives -> storage capacity and IOps  Multiple data abstractions from a single stack  Improvements at lower layers help all data abstractions  Simplifies hardware management  Tradeoff: single stack is not optimized for specific workload pattern  Append-only System  Greatly simplifies replication protocol and failure handling  Consistent and identical replicas up to the extent’s commit length  Keep snapshots at no extra cost  Benefit for diagnosis and repair  Erasure Coding  Tradeoff: GC overhead  Scaling Compute Separate from Storage  Allows each to be scaled separately  Important for multitenant environment  Moving toward full bisection bandwidth between compute and storage  Tradeoff: Latency/BW to/from storage
  • 170. Lessons Learned  Automatic load balancing  Quickly adapt to various traffic conditions  Need to handle every type of workload thrown at the system  Built an easily tunable and extensible language to dynamically tune the load balancing rules  Need to tune based on many dimensions  CPU, Network, Memory, tps, GC load, Geo-Rep load, Size of partitions, etc  Achieving consistently low append latencies  Ended up using journaling  Efficient upgrade support  Pressure point testing
  • 171. Windows Azure Storage Summary  Highly Available Cloud Storage with Strong Consistency  Scalable data abstractions to build your applications  Blobs – Files and large objects  Tables – Massively scalable structured storage  Queues – Reliable delivery of messages  Drives – Durable NTFS volume for Windows Azure applications  More information  Windows Azure tutorial this Wednesday 26th, 17:00 at start of SOCC  http://blogs.msdn.com/windowsazurestorage/
  • 172. Methods of Machine Learning
  • 175. Google Cloud Platform - Compute Engine / App Engine  App Engine - PaaS  Translate API  Prediction API  Big Query  Compute Engine - IaaS  Cloud Datastore  Cloud SQL  Cloud Endpoints  Cloud Storage
  • 176. Google’s TPU 1.0—looking at the Technology • Employs 8-bit integer arithmetic to save power and area • A theme for others too—GraphCore • Google supports this with a development environment—TensorFlow • Publicly available
  • 177. Google Cloud Platform (GCP)  Compute  Compute Engine - Run large-scale workloads on virtual machines  App Engine - A platform for building scalable web apps and mobile backends  Container Engine - Run Docker containers powered by Kubernetes  Container Registry - Fast, private Docker image storage on GCP  Cloud Functions - A serverless platform for event-based microservices  Storage and Databases  Cloud Storage - Powerful and effective object storage with global edge- caching  Cloud SQL - A fully-managed, relational MySQL database  Cloud Bigtable - A fast, managed, massively scalable NoSQL database service  Cloud Datastore - A managed NoSQL database for storing non-relational data  Persistent Disk - Reliable, high-perf block storage for virtual machine instances  Networking  Cloud Virtual Network - Managed networking functionality for your resources  Cloud Load Balancing - High performance, scalable load balancing  Cloud CDN - Low-latency, low-cost content delivery using global network  Cloud Interconnect - Connect your infrastructure to Google's network edge  Cloud DNS - Reliable, resilient, low-latency DNS  Big Data  BigQuery - A fast and managed data warehouse for large-scale data analytics  Cloud Dataflow - A rt data processing service for batch and stream data proc  Cloud Dataproc - A managed Spark and Hadoop service  Cloud Datalab - An interactive tool for large-scale data analysis and visual  Cloud Pub/Sub - Connect your services with reliable asynchronous messaging  Genomics - Power your science with Google Genomics  Machine Learning  Cloud Machine Learning Platform - Machine Learning services  Vision API - Derive insight from images with our powerful Cloud Vision API  Speech API - Speech to text conversion powered by machine learning  Natural Language API - Processing text using machine learning  Translate API - Create multilingual apps and translate text into other languages  Management Tools  Stackdriver Overview - Monitoring, logging, and diagnostics GCP and AWS  Monitoring - Monitoring for applications running on GCP and AWS  Logging - Logging for applications running on GCP and AWS  Error Reporting - Identify and understand your application errors  Trace - Find performance bottlenecks in production  Debugger - Investigate your code’s behavior in production  Deployment Manager - Create and manage cloud resources with templates  Cloud Console - Your integrated Google Cloud Platform management console  Cloud Shell - Manage your infrastructure and applications from the cmd-line  Cloud Mobile App - Manage GCP services from Android or iOS  Billing API - management of billing for your projects in the GCP  Cloud APIs - Programmatic interfaces for all Google Cloud Platform services  Developer Tools  Cloud SDK - Command-line interface for GCP products and services  Deployment Manager - Create and manage cloud resources with templates  Cloud Source Repositories - Fully-featured private Git repositories  Cloud Endpoints - Create RESTful services from your code  Cloud Tools for Android Studio - Build backend services for your Android apps  Cloud Tools for IntelliJ - Debug production cloud applications inside of IntelliJ  Cloud Tools for PowerShell - Full cloud control from Windows PowerShell  Cloud Tools for Visual Studio - Deploy Visual Studio applications to GCP  Google Plug In for Eclipse - Simplifies development in the Eclipse IDE  Cloud Test Lab - On-demand app testing with the scalability of a cloud service  Identity & 
Security  Cloud Identity & Access Management - Fine-grained access control  Cloud Resource Manager - Hierarchically manage resources by project/org  Cloud Security Scanner - Scan your App Engine apps for common vulnerabilities
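Most of the services in this catalog are scriptable through the Cloud SDK; as a hedged illustration (instance, cluster, zone, and machine-type names are placeholders, not from the deck):

# Compute Engine: create and connect to a VM
gcloud compute instances create demo-vm --zone=europe-west1-b --machine-type=n1-standard-1
gcloud compute ssh demo-vm --zone=europe-west1-b

# Container Engine / GKE: create a cluster and fetch kubectl credentials
gcloud container clusters create demo-cluster --zone=europe-west1-b --num-nodes=3
gcloud container clusters get-credentials demo-cluster --zone=europe-west1-b

# BigQuery: run a query from the command line
bq query --use_legacy_sql=false 'SELECT 1 AS answer'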
  • 178. Development Runtime Local Machine Python SDK Google AppEngine Infrastructure Sandboxed Runtime Environment Sandboxed Runtime Environment Data Store Url Fetch Image Manipulation Task Queue Cron Jobs Web App Web App Web App Web App
  • 179. Install/uninstall/upgrade all command-line tools related to Google Cloud Platform Notification for new releases of any Cloud SDK component Automation
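Component management is done through the gcloud CLI itself; a minimal sketch (kubectl is just one example component):

gcloud components list              # show installed and available Cloud SDK components
gcloud components install kubectl   # add a command-line tool as an SDK component
gcloud components update            # upgrade all installed components to the latest release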
  • 180. Cloud Storage Protected Your data is protected at multiple physical locations Strong, configurable security OAuth or simple access control on your data Multiple usages + Serve static objects directly + Use with other Google Cloud products (Bridge)
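A hedged sketch of those usages with gsutil; the bucket and object names are illustrative:

gsutil mb -l EU gs://example-static-assets                        # create a bucket
gsutil cp index.html gs://example-static-assets/                  # upload an object
gsutil iam ch allUsers:objectViewer gs://example-static-assets    # serve static objects publicly
gsutil ls -l gs://example-static-assets                           # list objects with sizes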
  • 181. Simple Citrix deployment on GCE Virtual Network XD VDI Host XA Session Host AD Controller Single Subnet SQL Server Secure Gateway Web Interface Delivery Controller License Server User Access via Internet Connect via go.gcexencloud.net port 443 endpoint on Secure Gateway
  • 182. Simple hybrid deployment On-Premise Network AD Controller Virtual Network XD VDI Host XA Session Host Single Subnet SQL Server Secure Gateway Web Interface Delivery Controller License Server AD Controller Site-to-Site VPN Company resources and Applications Data
  • 183. Single Zone Delivery Controller License Server AD Controller Delivery Controller SQL Server SQL Server XD VDI Host XA Session Host XD VDI Host Virtual Network Single Zone Delivery Controller License Server AD Controller Delivery Controller SQL Server SQL Server XD VDI Host XA Session Host XD VDI Host Site-to-Site VPN
  • 184. Virtual Network Single Zone Secure Gateway Secure Gateway Web Interface Web Interface Delivery Controller License Server AD Controller Delivery Controller SQL Server SQL Server XD VDI Host XA Session Host XD VDI Host 443 443 443 EastCitrix.CloudApp.net Virtual Network Single Zone Secure Gateway Secure Gateway Web Interface Web Interface Delivery Controller License Server AD Controller Delivery Controller SQL Server SQL Server XD VDI Host XA Session Host XD VDI Host 443 443 443 WestCitrix.CloudApp.net Citrix.trafficmanager.net CNAME: citrixonazure.com StoreFront StoreFront StoreFront StoreFront Netscaler in GCE Netscaler in GCE
  • 185. Instance types – Knowledge Workers workload cost/users: a pricing table comparing GCE instance types (general purpose f1-micro, g1-small, n1-standard-1…32; compute-optimized n1-highcpu-2…32; memory-optimized n1-highmem-2…32) by compute units, memory (GiB), vCPUs, cost/hour, concurrent knowledge-worker sessions, and cost per user/hour, for two XenApp 7.6 workloads (Windows Server 2008 R2 + Office 2010 and Windows Server 2012 R2 + Office 2013); cost per user/hour ranges from roughly $0.026 (g1-small) to about $0.12 on the larger instance types
  • 186. Economics of GCE  Excel spreadsheet  Provided as a tool to estimate costs  Supports two regions and two user profiles  Accounts for compute, network, and storage
  • 190. Jan Balewski, NERSC Google GCE Tutorial March 2017 Docker container in 60 seconds 190 Virtual Machine w/ containers Your image is fully isolated, computations are private Hardware controlled by some OS Your image is meshed with hardware OS, your resources are capped, but computations are public Boundaries w/o privacy
  • 191. Why it Works: Separation of Concerns……
  • 192. • Docker Engine – CLI – Docker Daemon – Docker Registry • Docker Hub – Cloud service • Share Applications • Automate workflows • Assemble apps from components • Docker images • Docker containers Docker Architecture……
  • 193.  NOT A VHD  NOT A FILESYSTEM  uses a Union File System  a read-only Layer  do not have state  Basically a tar file  Has a hierarchy • Arbitrary depth • Fits into the Docker Registry Docker images……
  • 194. Units of software delivery (ship it!) ● run everywhere – regardless of kernel version – regardless of host distro – (but container and host architecture must match*) ● run anything – if it can run on the host, it can run in the container – i.e., if it can run on a Linux kernel, it can run *Unless you emulate CPU with qemu and binfmt Docker Containers...
  • 197. Introduction to Docker • Open Software – Launched March 2013 – 100+ million downloads of Docker images • Open Contribution – 750+ contributors – #2 most popular project – 137 community meet-up groups in 49 countries • Open Design – Contributors include IBM, Red Hat, Google, Microsoft, VMware, AWS, Rackspace, and others • Open Governance – 12 member governance advisory board selected by the community 197 Enabling application development efficiency, making deployment more efficient, eliminating vendor ‘lock-in’ with true portability
  • 198. 198 Docker is a shipping container system for code Multiplicity of Stacks Multiplicity of hardware environments QA server Development VM Contributor’s laptop Customer Data Center Production Cluster Public Cloud Static website User DB Analytics DB Queue Web frontend Do services and apps interact appropriately? Can I migrate smoothly and quickly …that can be manipulated using standard operations and run consistently on virtually any hardware platform An engine that enables any payload to be encapsulated as a lightweight, portable, self- sufficient container…
  • 199. Docker Mission Docker is an open platform for building distributed applications for developers and system administrators. Build Ship Run Anywhere Any App 199
  • 200. Docker Containers simplifies cloud portability 200 A platform to build, ship, and run applications in “containers”. Developers & SysAdmins love the flexibility and standardization of Docker Standardization  Application portability Package, ship, and run applications anywhere The Docker Hub Registry has 5,000+ "Dockerized" applications Lightweight Containers are “light” users of system resources, smaller than VMs, start up much faster, and have better performance Ecosystem-friendly A new industry standard, with a vibrant ecosystem of partners. 750+ community contributors; 50,000 third-party Docker projects on GitHub User-friendly Developers build with ease and ship higher-quality applications SysAdmins deploy workloads based on business priorities and policies. "Containers managed by Docker are effective in resource isolation. They are almost on par with the Linux OS and hypervisors in secure operations management and configuration governance." Joerg Fritsch, Gartner Analyst, Security Properties of Containers Managed by Docker, January 7, 2015
  • 201. Docker Containers A technical view into the shared and layered file systems technology  Docker uses a copy-on-write (union) filesystem  New files(& edits) are only visible to current/above layers  Layers allow for reuse  More containers per host  Faster start-up/download time – base layers are "cached"  Images  Tarball of layers (each layer is a tarball) 201 Filesystem Base OS / Kernel Fedora Ubuntu tomcat tomcat liberty CNTR1 CNTR2 CNTR3 CNTR4 app1 app2 app4 app3 Layer Layer Layer
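One way to see the shared, layered (copy-on-write) filesystem in practice is to inspect an image's layers and watch the layer cache being reused; the image name below is only an example:

docker pull nginx:alpine           # downloads only the layers not already cached locally
docker history nginx:alpine        # one row per read-only layer: creating instruction and size
docker image inspect nginx:alpine --format '{{json .RootFS.Layers}}'   # layer digests shared across images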
  • 202. Docker Architecture 202 Source: https://docs.docker.com/introduction/understanding-docker/ The Client is typically a laptop or a build server such as Jenkins The DOCKER_HOST could be a VM on the same laptop as the Client, or a Linux VM in a Datacenter. Registry could be the Docker hub, a private corporate registry.
  • 203. Typical Container Lifecycle Client (Laptop) DOCKER_HOST (Laptop) DOCKER_HOST (Bluemix) Registry (Bluemix) Node.js (IBM Created) docker pull registry.ng.bluemix.net/ibmnode:latest git clone .../etherpad-lite docker build -t etherpad_bluemix . docker push registry.ng.bluemix.net/<namespace_here>/etherpad_bluemix docker run etherpad_bluemix (or click Start in Bluemix Console)
  • 204. Why do Developers care about Containers?  Demand for Increased Application Development Efficiency • Enable Continuous Integration/Continuous Delivery • Developer Laptops, through automated test, to production, and through scaling without modification  DevOps Requires Improved Deployment Efficiency • Higher Density of Compute Resources (CPU, Memory, Storage)  Hybrid Cloud and Choice Require Portability • Cross Cloud Deployment - move the same application across multiple clouds. • Eliminate “lock-in”, become a “Cloud Broker” 204 Customer pain points User scenarios How this offering helps Need resources faster Get a working environment up and running in minutes, not hours or weeks Users can instantiate new container instances in seconds with the consistent experience working directly with Docker Innovation requires agility and DevOps Continuous delivery pipeline IBM Containers integrates with Bluemix apps including a continuous delivery pipeline, partnered with the fast deployments of containers Ability to migrate workload from on-prem to off-prem infrastructure Changes made on developer’s local image is ready to deploy to production cloud Portability as images can be developed on a local workstation, tested in a staging cloud on-prem, and finally to the production off-prem cloud Environment to facilitate incremental production deployment Business wants to deploy in a phased approach to validate the expected experience of the new version Users can deploy new releases in a controlled manner enabling them to monitor the performance and behavior with the ability to roll back if needed
  • 205. VMs Benefits Better resource pooling Easier to scale VMs on the cloud. Limitations Dedicated resources for each VM (more VMs = more resources). Guest VM = wasted resources.
  • 208. Virtual Machine Versus Container……
  • 209. Virtual Machine Versus Container……
  • 210. Virtual Machine Versus Container…… A “container“ delivers an application with all the libraries, environments and dependencies needed to run.
  • 211. Containers Containers vs VMs  Containers are more lightweight.  No need for a guest OS.  Less resources.  Greater portability  Faster
  • 212. • The Life of a Container – Conception • BUILD an Image from a Dockerfile – Birth • RUN (create+start) a container – Reproduction • COMMIT (persist) a container to a new image • RUN a new container from an image – Sleep • KILL a running container – Wake • START a stopped container – Death • RM (delete) a stopped container • Extinction – RMI a container image (delete image) Docker Container Lifecycle ……
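The lifecycle above maps directly onto the CLI; a minimal sketch using hypothetical image and container names:

docker build -t myapp:1.0 .          # conception: build an image from a Dockerfile
docker run -d --name web myapp:1.0   # birth: create + start a container
docker commit web myapp:1.1          # reproduction: persist the container as a new image
docker kill web                      # sleep: stop the running container
docker start web                     # wake: start it again
docker rm -f web                     # death: remove the container
docker rmi myapp:1.0 myapp:1.1       # extinction: remove the images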
  • 213. • Kernel Feature • Groups of processes • Control resource allocations – CPU – Memory – Disk – I/O • May be nested Linux Cgroups ……
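Docker exposes these cgroup resource controls as flags on docker run; a minimal sketch (image name and limits are illustrative, and the path shown assumes a cgroup v1 layout, which differs under cgroup v2):

# CPU, memory, and block-I/O limits are enforced through cgroups
docker run -d --name capped \
  --cpus 1.5 --memory 512m --memory-swap 512m --blkio-weight 300 nginx:alpine

# Inspect the memory limit the kernel actually applied
CID=$(docker inspect --format '{{.Id}}' capped)
cat /sys/fs/cgroup/memory/docker/$CID/memory.limit_in_bytes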
  • 214. • Kernel Feature • Restrict your view of the system – Mounts (CLONE_NEWNS) – UTS (CLONE_NEWUTS) • uname() output – IPC (CLONE_NEWIPC) – PID (CLONE_NEWPID) – Networks (CLONE_NEWNET) – User (CLONE_NEWUSER) • Not supported in Docker yet • Has privileged/unprivileged modes today • May be nested Linux Kernel Namespaces ……
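The same namespace primitives can be exercised directly with util-linux, independently of Docker; a small sketch:

# Create new UTS, PID, network, and mount namespaces and start a shell inside them
sudo unshare --uts --pid --net --mount --fork --mount-proc bash

# Inside the new namespaces:
hostname demo-container    # changes the hostname only in this UTS namespace
ps aux                     # the PID namespace shows only this shell and its children
ip addr                    # the new network namespace starts with just a loopback device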
  • 215. Dockerfile 215 Build git clone https://github.com/dockerfile/nginx.git docker build -t="dockerfile/nginx" github.com/dockerfile/nginx Run docker run dockerfile/nginx
  • 216. • Like a Makefile (shell script with keywords) • Extends from a Base Image • Results in a new Docker Image • Imperative, not Declarative  A Docker file lists the steps needed to build an images • docker build is used to run a Docker file • Can define default command for docker run, ports to expose, etc Dockerfile ……
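A minimal sketch of such a Dockerfile and the build/run steps; the base image, installed package, port, and tag are illustrative choices, not prescriptions from the deck:

# Write a minimal Dockerfile, then build and run it
cat > Dockerfile <<'EOF'
# Base image to extend
FROM ubuntu:22.04
# Install a dependency in a single layer and clean the apt cache
RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*
# Port the application would listen on (documentation for -P/--publish)
EXPOSE 8080
# Default command used by docker run when none is given
CMD ["curl", "--version"]
EOF

docker build -t example/dockerfile-demo:1.0 .
docker run --rm example/dockerfile-demo:1.0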
  • 218. Methods of building images • Three ways – Commit changes from a container as a new image – Build from a Dockerfile – Import a tarball into Docker as a standalone base layer 38
  • 219. Building a Docker Image Base Image (Disk) Container (Memory) New Image (Disk) Dockerfile Load Commit Run Run the Installation procedure Base Image (Disk) New Image (Disk) Build Installation script Interactive building Building from a Docker File
  • 220. Docker Commit • The docker commit command saves changes in a container as a new image • Syntax: docker commit [options] [containerID] [repository:tag] • Repository name should be based on username/application • Can reference the container with the container name instead of the ID • Example: save the container with ID 984d25f537c5 as a new image in the repository johnnytu/myapplication, tagged 1.0: docker commit 984d25f537c5 johnnytu/myapplication:1.0
  • 221. Interactive building Example: vim and curl $ docker run -t -i ubuntu:14.04 root@2a896c8cdd83:/# apt-get install -y curl root@2a896c8cdd83:/# apt-get install -y vim root@2a896c8cdd83:/# exit $ docker commit -m "test" 2a896c8cdd83 azab/test:1.0
  • 222. Dockerfile Intro to Dockerfile • Provides a more effective way to build images compared to using docker commit • Easily fits into your development workflow and your continuous integration and deployment process • A Dockerfile is a configuration file that contains instructions for building a Docker image
  • 223. Building a Docker Image from a Dockerfile Dockerfile .dockerignore files <source-directory> $ docker build -t <image-name> <source-directory>
  • 224. Docker APIs - Python
  • 225. Docker - How it works Images are self-sufficient: it is possible to build a container image on OS X and run it on a secure server
  • 227. Docker on 2 Servers: each compute node (Srv 1, Srv 2) runs a Docker Engine and pulls images from the public Docker repositories or from a local registry
  • 228. Docker on the cluster – Swarm: the compute nodes' Docker Engines form a Swarm cluster managed by a Swarm manager (itself running a Docker Engine) that schedules Docker containers; images are pulled from the Docker repositories or a local registry; VM tools provision the nodes
  • 229. Docker containers on VMs Connecting to the Cluster Stroll Job Runner Container Virtual path Scheduler W W W W W W Job Input data Job Output data /cluster/ Project Area /var/proj/data VM Colossus Stroll File-system
  • 230. Containers Use Case – Microservices What is a Microservices Architecture? Application architected as a suite of small services, each running in its own process, and communicating with lightweight mechanisms e.g. REST/HTTP Services built around business capabilities Each service independently deployable via automation Minimal centralized governance  May be written in different languages  May use different data storage technologies Challenges with Microservices Architecture Cultural  Embracing a DevOps culture  Agility required from inception through to deployment – not just development  Ensuring autonomy does not preclude sharing Technological  Distributed systems are hard – introduce network latency, fault tolerance, serialization, …  Automation needed everywhere  Keeping latency down  Designing decoupled non-transactional systems is hard  Service versioning Why Microservices? • Agility  Services evolve independently and at difference speeds  Easier to adopt new technology and evolve architecture  Enables continuous delivery • Resilience  Use services boundaries for fault tolerance and isolation  Design for failure • Runtime scalability  Stateless services designed for horizontal scalability  Services can be scaled independently • Scalability of the development organisation  Easier to develop services in parallel  Smaller working set for each developer Microservices misconceptions • Microservices do not require Docker containers • Docker containers do not have to be microservices • Containers assist with portability, maintenance, and deployment; hence a natural choice for microservices
  • 231. Moving from monolithic applications to microservices 231 Monolithic app Microservices Scaling Scaling
  • 232.  Package your app to run virtually anywhere, including Bluemix • Cloud Foundry – Bluemix foundation that provides developers the ability to quickly compose their apps without worrying about the underlying infrastructure as these services run in secure droplet execution agent (DEA) environments. The Bluemix catalog consists of over 100 selections. • IBM Containers – Provides portability and consistency regardless of where your app is run— be it on bare metal servers in Bluemix, your company's data center, or on your laptop. Easily deploy containers from IBM’s hosted image hub or from your own private registry. • Virtual Machines – Offers the most control over your apps and middleware. The virtual machine contains the complete operating system and application, running on virtualized hardware that is provided by Bluemix. Deploying to the Cloud in a repeatable way Summer 2015  Same great services, no matter where your app runs • Bluemix Public – World class enterprise PaaS in the public cloud • Bluemix Dedicated – Your own PaaS private cloud, that’s securely connected to both the public Bluemix and your own network. • Bluemix Local – Bring cloud agility to even the most sensitive workloads in your data center. Delivered as a fully managed service behind your firewall. 232
  • 233. Service Existing services on Bluemix, they can be either public or private ones only visible within the organization. An application can be made into a service following an on boarding process. Application Basic unit of deployment in Bluemix. It may include multiple services, public or private. It cannot include other application. It's recommended to use traditional application architecture(a.k.a monolithic) on an application. It's the basic unit of red/black deployment. [Do we want this?] An application can be made into a system. Doing so, the original application will become the first app in the system. System (We coined this) An special kind of application that follows the MSA architecture(or multi-tier application architecture). It can integrate other applications(micro services) & services. Containers (Docker) • Dockerfile • A text doc that contains all the commands to build a Docker image. • Docker Image • The building block from which containers are launched. An image is the read-only layer that never changes. Images can be created based on the committed containers. • Docker Container • An running instance, generated from an Docker image. Self-contained environment built from one or more images • Information available at the Container level includes image from which it is generated, memory used, ip address assigned it, etc. • Container Group • A group of containers, which all share the same image. • Docker Registry • A registry server for Docker that helps hosting and delivery of repositories and images. • Layer • Each file system that is stacked when Docker mounts rootfs • Repository • Set of images on local Docker or registry server. 233 Terminology
  • 234. Typical Docker Pull Data Flow
  • 235. Typical Docker Run Data Flow
  • 237. Kubernetes is an ecosystem... Source: Redmonk - http://redmonk.com/sogrady/2017/09/22/cloud-native-license-choices/
  • 240. pets vs cattle - long lived - name them - care for them - ephemeral - brand them with #’s - well..vets are expensive
  • 241. A typical container delivery pipeline: Build (git, cc/ld, java/jar) → Package (docker build with resources, config, libraries) → Construct (helm package) → Deploy (helm install/scale behind a load balancer). A command-level sketch follows below.
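The sketch below maps the pipeline stages to commands; chart, image and registry names are hypothetical, while the commands themselves are standard Docker/Helm/kubectl usage:
 # Build: compile the sources into an artifact (jar, binary, ...)
 mvn package
 # Package: bake the artifact, config and libraries into an image, then into a Helm chart
 docker build -t registry.example.com/myapp:1.0 .
 docker push registry.example.com/myapp:1.0
 helm package ./myapp-chart            # bundles the chart into a versioned .tgz
 # Construct & Deploy: install the chart, scale it, put a load balancer in front
 helm install myapp ./myapp-chart
 kubectl scale deployment myapp --replicas=3
 kubectl expose deployment myapp --type=LoadBalancer --port=80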
  • 242. Cloud as-is: no unified data access or security concepts (diagram: one application, via an API connector, spans edge, private cloud, on-premise and multiple public clouds, each with its own API)
Multi-cloud strategy:
 • Complex data movement between clouds
 • On any other cloud: different APIs, so the application breaks
 • Different security concepts
  • 243. Creating a Global Filesystem /mapr/edge1 /mapr/edge2 /mapr/edge3 /mapr/newyork /mapr/amsterdam /mapr/azure /mapr/gcp /mapr/aws-eu-west NFS POSIX HDFS REST HOT WARM COLD /mapr Kafka JSON HBASE SQL S3 Application ✓ Global access to local data
  • 244. Creating an “Ubernetes” Platform Application GLOBAL DATA MANAGEMENT Edge Private Cloud On Premise Public Cloud Public Cloud Public Cloud Pod Pod Pod Image Classification using Tensorflow in a Docker container Classic ETL Scheduling & Scaling MapR Kubernetes Volume Driver Single pane of glass to control jobs anywhere
  • 247. Kubernetes, Docker & Infrastructure
 You don't have to worry about the infrastructure
 The entire design of pods and services is described in YAML files
 Nothing in deployments, pod management, service discovery, monitoring, etc. requires any knowledge about how many servers there are, their IP addresses, load balancers, or anything else about the infrastructure
 Behind the scenes, Kubernetes is aware of all of the available servers, load balancers and application gateways, and will configure them automatically according to what is in the YAML files (see the sketch below)
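As a minimal illustration of that declarative model, a Deployment plus Service sketch; names, image and ports are hypothetical:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # Kubernetes keeps three pods running somewhere in the cluster
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer             # the platform wires up the load balancer automatically
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
# kubectl apply -f web.yaml – no server names, IPs or balancer configuration appear anywhere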
  • 248. Cloud Native Docker Container Cloud  Supporting a new Cloud Native DevOps Docker model with a Scale Out Infrastructure  Modernizing Hundreds of Websphere Apps on Power providing services both to internal employees and external clients  Embracing Open Source Technologies like Docker, Mongo, Redis etc.  Cooperatively Integrating Open Source Components to deliver a complete Container Cloud Service Power Compute Node Cloud Approx 100’s of Systems Kubernetes Container Management Service Web Apps Web Apps Web Apps Web Apps Web Apps Web Apps Web Apps Open Source Tooling and SW Mongo Redis etc SQL DB’s Data Services User Applications (Internal and External) Self Service Developer Portal to Get Containers and Data Services … Docker Containers LE Linux O/S & KVM RedHat 7.x LE Linux O/S & KVM SDN Registry Operations Dashboard Registry UI 248 Use Case
  • 249. Open Source Options for Container Cloud Orchestration on Power
Docker Swarm / Docker Datacenter (Docker Inc)
 • Strengths: built in to the Docker 1.12 engine; easy to use for small clouds
 • Weaknesses: full Docker Datacenter not available on Power yet
Mesos (Mesosphere)
 • Strengths: good for batch and analytics; lots of apps in the catalog
 • Weaknesses: less usage for web applications; requires the Marathon framework for web apps
Kubernetes (Google)
 • Strengths: lots of industry usage and experience for web apps; synergy with other parts of the client's business for x86 container management
 • Weaknesses: significant integration of many components needed for a production cloud
  • 250. Kubernetes Cluster Components (diagram: RHEL 7 LE hardware nodes running a Kubernetes master with Heapster and etcd, Kubernetes slaves with docker, cAdvisor, flannel and app containers, plus a Docker private registry, InfluxDB, Grafana and the Kubernetes Dashboard, connected via separate data and management networks)
 • Storage – Provides persistent storage for Docker containers and the private registry
 • Docker Private Registry – Provides a central on-premise repository of dockerized images
 • Heapster – Provides cluster-wide monitoring by aggregating cAdvisor data from the Kubernetes slaves
 • Kubernetes – Container orchestration platform
 • Etcd – Provides key-value storage for Kubernetes
 • RHEL – Base operating system for hosting containers
 • Dashboards – Provide a self-service UI and monitoring views (Grafana for utilization, Kubernetes Dashboard for cluster management)
  • 252. Client Environment (diagram: clients reach an F5 load balancer in front of the K8s master and K8s slaves split across Environment-1 and Environment-2, with a firewall, Docker private registry and Flannel overlay)
 • An F5 Virtual IP (VIP) and port is configured for the K8s master, the K8s slaves, and the etcd distributed key-value store
 • Any direct communication between servers in Environment-1 and Environment-2 needs to be explicitly allowed by firewall rules
 • K8s master and slaves are configured to use the Flannel overlay network for pods
 • Heapster/InfluxDB/Grafana is used for K8s resource monitoring
 • Ingress (with Nginx) is used for exposing services to clients (see the sketch below)
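A minimal Ingress sketch for the NGINX-based exposure mentioned above; host, service name and path are hypothetical, and current clusters use networking.k8s.io/v1 (older ones used extensions/v1beta1):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx        # route through the NGINX ingress controller
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web            # the Service in front of the application pods
            port:
              number: 80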
  • 253. Integration with an Enterprise LDAP Server (via Keystone and the existing LDAP)
 • Kubernetes uses namespaces to partition the cluster among multiple users
 • Three steps to access: authentication, authorization, admission control
 • Authorization defines what an authenticated user can and cannot do:
   – AlwaysDeny: used only for testing
   – AlwaysAllow: used only for testing
   – ABAC: attribute-based access control
   – Webhook: calls out to an external authorization service via a REST call
 • ABAC-based authorization:
   – Auth policies need to be created for every user and can only be changed with an API server restart
   – Every user gets their own namespace
   – Read/write access to their own namespace
   – Read access to the default (global) namespace
 • Kubernetes supports the OpenStack Keystone component for authentication
 • Keystone provides LDAP/AD integration
(A policy-file sketch follows below.)
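A sketch of the per-user ABAC policy file described above – one JSON policy object per line, loaded by the API server via --authorization-mode=ABAC and --authorization-policy-file; the user and namespace names are hypothetical:
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "team-alice", "resource": "*", "apiGroup": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "default", "resource": "*", "apiGroup": "*", "readonly": true}}
Because this file is only re-read when the API server restarts, policy changes require a restart – the limitation noted on the slide.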
  • 254. Container Architecture (diagram)
 • Boot node: Ansible-based installer and ops manager
 • Master node: km ctrl manager, km apiserver, km scheduler, master mgr, Mesos master, MySQL, haproxy, etcd, GUI, cfc-auth, cfc-router, Keystone, Image-mgr, appstore, network mgr, Heapster, Kube-DNS, a VIP, and the LDAP server
 • Agent nodes: Mesos agent, km proxy, km agent, Flanneld, and Docker running the pods
  • 255. Infrastructure Resource Aggregation (diagram: xCAT bare-metal and generic public cloud adapters feeding cluster deployment for PaaS and BD&A, with infrastructure discovery, an image registry for OS/VM/container images, a SW repository, logging/metrics, alert & policy, authentication, load balancing, DevOps and infrastructure management)
 1. Simplify IT operations – Discover bare metal and quickly deploy the environment on demand (bare metal, virtualization or hybrid)
 2. Increase resource utilization – Fine-grained, dynamic allocation of resources maximizes the efficiency of servers (bare metal and VMs) sharing a common resource pool
 3. Reduce administration costs – Proven architecture at extreme scale, with enterprise-class infrastructure management, monitoring, reporting, and security capabilities
  • 256. Deliver an Agile Containerization Infrastructure in the Enterprise (diagram: IBM Spectrum Cluster Foundation orchestrating servers, storage and network via xCAT and cluster templates; Cluster#1 runs the Docker engine on bare metal with Spectrum Scale, Cluster#2 runs pods on OpenStack/KVM virtualization pools; lifecycle: design, deploy, monitor & health, upgrade, scale, with elastic scale in/out)
Benefits
 • Auto-deploy a customized OpenStack to offer the virtualization pools
 • Auto-deploy two container management environments, on both bare metal and virtual machines
 • Easy to adjust the size of the container management environments to balance the workload, and full …
 • Build up multi-tenant management based on LDAP
  • 257. Kubernetes Analysis: 2 types of containers
Approach:
 • Reuse existing TOSCA normative node, capability and relationship types where possible
 • Model Kubernetes types (for now), then model similar container managers like Swarm, etc. and look for common base types and properties that can be abstracted
"Dumb" (no HA, no autoscale) = Pod Template – kind: "Pod" (i.e. type):
 id: redis-master
 kind: Pod
 apiVersion: v1beta1
 desiredState:
   manifest:
     version: v1beta1
     id: redis-master
     containers:
     - name: master
       image: kubernetes/redis:v1
       cpu: 1000
       ports:
       - containerPort: 6379
       volumeMounts:
       - name: data
         mountPath: /redis-master-data
       env:
       - key: MASTER
         value: "true"
     - name: sentinel
       image: kubernetes/redis:v1
       ports:
       - containerPort: 26379
       env:
       - key: SENTINEL
         value: "true"
     volumes:
     - name: data
       source:
         emptyDir: {}
 labels:
   name: redis
   role: master
   redis-sentinel: "true"
"Smart" (HA, scaling) = ReplicationController Template – kind: "ReplicationController" (i.e. type):
 id: redis
 kind: ReplicationController
 apiVersion: v1beta1
 desiredState:
   replicas: 1
   replicaSelector:
     name: redis
   podTemplate:
     desiredState:
       manifest:
         version: v1beta1
         id: redis
         containers:
         - name: redis
           image: kubernetes/redis:v1
           cpu: 1000
           ports:
           - containerPort: 6379
           volumeMounts:
           - name: data
             mountPath: /redis-master-data
         volumes:
         - name: data
           source:
             emptyDir: {}
     labels:
       name: redis
  • 258. Kubernetes.Pod tosca.groups.Placement derived_from: tosca.groups.Placement version: <version_number> metadata: <tosca:map(string)> description: <description> properties: TBD attributes: TBD # Allow get_property() against targets targets: [ tosca.nodes.Container.App.Kubernetes ] kind: “Pod” (a Template of type “Pod”) id: redis-master kind: Pod apiVersion: v1beta1 desiredState: manifest: version: v1beta1 (non-numeric) id: redis-master containers: ------------------------------------------------------------------------------------- ------------- - name: master (TOSCA template name) image: kubernetes/redis:v1 (TOSCA Container.App; create artifact of type image.Docker) cpu: 1000 (TOSCA Container capability; num_cpus, cpu_frequency) ports: (TOSCA EndPoint capability) - containerPort: 6379 (TOSCA Endpoint; port, ports) volumeMounts: (TOSCA Attachment capability) - name: data mountPath: /redis-master-data (TOSCA AttachesTo Rel.; location) env: - key: MASTER value: "true” # passed as Envirronment vars to instance ----------------------------------------------------------------------------------------- ------- - name: sentinel image: kubernetes/redis:v1 ports: - containerPort: 26379 env: - key: SENTINEL value: "true” # passed as Env. var. ----------------------------------------------------------------------------------------- ------- volumes: - name: data source: labels: name: redis role: master redis-sentinel: "true" Kubernetes Analysis: Pod Modeling: TOSCA Type mapping • A Pod is an aggregate of Docker Container Requirements of 1..N homogenous Container (topologies) TOSCA Types for Kubernetes: “Redis-master” Template of Kubernetes “Pod” Type: Kubernetes.Container tosca.nodes.Container.App derived_from: tosca.nodes.Container.App metadata: <tosca:map(string)> version: <version_number> description: <description> properties: environment: <tosca:map of string> requirements: - host: # hosted on kubelets type: Container.Runtime.Kubernetes - ports: capability: EndPoint properties: ports, ports, etc. - volumes: capability: Attachment relationship: AttachesTo properties: location, device occurrences: [0, UNBOUNDED]
  • 259. redis-master-pod Kubernetes.Pod type: tosca.groups.Placement version: 1.0 metadata: name: redis role: master redis-sentinel: true targets: [ master-container, sentinel-container ] Kubernetes Analysis: Pod Modeling: TOSCA Template Mapping: Simple “Group Approach”: • Using the Types defined on the previous slide the TOSCA Topology Template looks like this for “redis-master” TOSCA Topology for Kubernetes “: “Redis-master” Template of Kubernetes “Pod” Type: master-container Kubernetes.Container derived_from: Kubernetes.Container metadata: <tosca:map(string)> version: <version_number> description: <description> artifacts: kubernetes/redis:v1 properties: requirements: - host: properties: num_cpus: 1000 ? - port: capability: EndPoint properties: port: 6379 - volume: capability: Attachment relationship: AttachesTo properties: location, device occurrences: [0, UNBOUNDED] interfaces: inputs: MASTER: true kind: “Pod” (a Template of type “Pod”) id: redis-master kind: Pod apiVersion: v1beta1 desiredState: manifest: version: v1beta1 (non-numeric) id: redis-master containers: ------------------------------------------------------------------------------------- ------------- - name: master image: kubernetes/redis:v1 cpu: 1000 ports: - containerPort: 6379 volumeMounts: - name: data mountPath: /redis-master-data env: - key: MASTER value: "true” # passed as Envirronment vars to instance ----------------------------------------------------------------------------------------- ------- - name: sentinel image: kubernetes/redis:v1 ports: - containerPort: 26379 env: - key: SENTINEL value: "true” # passed as Env. var. ----------------------------------------------------------------------------------------- ------- volumes: - name: data source: emptyDir: {} labels: name: redis role: master redis-sentinel: "true" sentinel-container Kubernetes.Contain er implied “InvitesTo” Relationship implied “InvitesTo” Relationship Issue: location property lost as there is no “AttachesTo” relationship in the topology. Create new Capability Type? derived_from: Kubernetes.Container ... ... ... Issue: Are there more than 1 volumes / mount points allowed? Choice: or use Docker.Runtime type to allow use of template on Swarm, etc.?
  • 260. TOSCA Groups: membership (MemberOf) direction is wrong for management (group)
 redis-master-pod (Kubernetes.Pod): type: tosca.groups.Placement, sources: [ master-container, sentinel-container ]
 master-container (Kubernetes.Container) and sentinel-container (Kubernetes.Container, derived_from: Kubernetes.Container ...) each have an implied "MemberOf" relationship to the group
  • 261. tosca.capabilities.Container.Docker: derived_from: tosca.capabilities.Container properties: version: type: list required: false entry_schema: version publish_all: type: boolean default: false required: false publish_ports: type: list entry_schema: PortSpec required: false expose_ports: type: list entry_schema: PortSpec required: false volumes: type: list entry_schema: string required: false However: We do not want to “buy into” Docker file as a Capability Type: Old Style: Docker capability type that mirrors a Dockerfile: Instead we want to use Endpoints (for ports) and Attachments (for volumes) This allows Docker, Rocket and containers to be modeled with other TOSCA nodes (i.e., via ConnectsTo) and leverage underlying Compute attached BlockStorage TBD: Need to show this
  • 262. tosca.groups.Placement tosca.groups.Root derived_from: tosca.groups.Placement version: <version_number> metadata: <tosca:map(string)> description: <description> properties: TBD attributes: TBD # Allow get_property() against targets targets: [ Container.App.Docker, Container.App.Rocket, ... ] Kubernetes Pod reuses “Docker” Container.App type which can now reference other Container.App types like Rocket (Rkt) Container.App.Docker tosca.nodes.Container.App derived_from: tosca.nodes.Container.App metadata: <tosca:map(string)> version: <version_number> description: <description> capabilities: Container.App: attribute: response_time: properties: environment: <tosca:map of string> requirements: - host: capability: Container.Docker type: Container.Runtime.Kubernetes - ports: capability: EndPoint properties: ports, ports, etc. - volumes: capability: Attachment relationship: AttachesTo properties: location, device occurrences: [0, UNBOUNDED] • There is no need for a “Kubernetes” Runtime type, just use the real Container’s built-in runtime requirement • (don’t care to model or reference Kubelets) • Homogenous Pods/Containers for Kubernetes is still an issue, but • this is a current Kubernetes limitation • (heterogonous is possible in future) Policies: • Security, • Scaling, • Update, • etc. “AppliesTo” group (members) • i.e., targets • Not using “BindsTo” as that implies it is coupled to an implementation BETTER: We do not need to define Kubernetes specific Types (reuse Docker types) : Container.App.Rocket Container.APP derived_from: Kubernetes.Container ... ... ...
  • 263. Event Type (new): <event_type_name>: derived_from: <parent_event_type> version: <version_number> description: <policy_description> Policy Definition <policy_name>: type: <policy_type_name> description: <policy_description> properties: <property_definitions> # allowed targets for policy association targets: [ <list_of_valid_target_templates> ] * triggers: <trigger_symbolic_name_1>: event: <event_type_name> # TODO: Allow a TOSCA node filter here # required node (resource) to monitor filter: node: <node_template_name> <node_type> # Used to reference another node related to # the node above via a relationship requirement: <requirement_name> # optional capability within node to monitor capability: <capability_name> # required clause that compares an attribute # with the identified node or capability # for some condition condition: <constraint_clause> action: # a) Define new TOSCA normative strategies # per-policy type and use here OR # b) allow domain-specific names <operation_name>: # (no lifecycle) # TBD: Do we care about validation of types? # If so, we should use a TOSCA Lifecycle type description: <optional description> inputs: <list of property assignments > implementation: <script> | <service_name> <trigger_symbolic_name_2>: ... <trigger_symbolic_name_n>: Event name of a normative TOSCA Event Type Condition described as a constraint of an attribute of the node (or capability) identified) by the filter. Action Describes either: a)a well-known strategy b)an implementation artifact (e.g., scripts, service) to invoke with optional property definitions as inputs (to either choice) TOSCA Policy – Entities that compose Policy (Event, Condition, Action) model <filter_name properties: - - - capabilities - - -
  • 264. Possible TOSCA Metamodel and Normative Type additions NodeType, Rel. Types <node_type_name>: metadata: description: > allow tags / labels for search of instance model type: map of string derived_from: <parent_node_type_name> version: <version_number> description: <node_type_description> properties: <property_definitions> attributes: <attribute_definitions> requirements: - <requirement_definitions> capabilities: <capability_definitions> interfaces: <interface_definitions> artifacts: <artifact_definitions> tosca.capabilities.Container tosca.capabilities.Container: derived_from: tosca.capabilities.Root properties: num_cpus: type: integer required: false constraints: - greater_or_equal: 1 cpu_frequency: type: scalar-unit.frequency required: false disk_size: type: scalar-unit.size required: false mem_size: type: scalar-unit.size required: false attributes: utilization: description: referenced by scaling policies type: # float (percent) | integer (percent) | # scalar-percent ? required: no ? constraints: - in_range: [ 0, 100 ]
  • 265. TOSCA Policy Mapping – Example: Senlin "scaling_out_policy_ceilometer.yaml", using the Kubernetes "redis" example from the earlier slides (and its pod and container)
TOSCA Policy Definition:
 my_scaling_policy:
   type: tosca.policies.scaling
   properties: # normative TOSCA properties for scaling
     min_instances: 1
     max_instances: 10
     default_instances: 3
     increment: 1
   # target the policy at the "Pod"
   targets: [ redis-master-pod ]
   triggers:
     resize_compute: # symbolic name
       event: tosca.events.resource.utilization
       filter:
         node: master-container
         requirement: host
         capability: Container
       condition: utilization greater_than 80%
       action:
         # map to SENLIN::ACTION::RESIZE
         RESIZE_BEST_EFFORT: # logical operation name
           inputs: # optional input parameters
             number: 1
           implementation: <script> | <service_name>
Annotations:
 • The target is a Kubernetes Pod of the tosca.groups.Placement type
 • TODO: TOSCA needs a percentage data type
 • event is a TOSCA normative event type (name) that would map to domain-specific names (e.g., OpenStack Ceilometer)
 • resize_compute is a symbolic name for the trigger (it could be used to reference an externalized version; however, this would violate a policy's integrity as a "security document")
 • The filter finds the attribute via the topology: a) navigate to the node (directly or via the requirement name) and optionally the capability name; b) the condition is mapped and registered with the target monitoring service (e.g., Ceilometer); i.e., the "node", "requirement", "capability" and "condition" keys describe, as a declarative "filter", the node to attach an alarm/alert/event to
 • Note: we combined the Senlin "Action" SENLIN:ACTION:RESIZE with the strategy BEST_EFFORT to get one name
 • inputs lists the optional input parameters
  • 266. Kubeflow Architecture (diagram)
  • 267. Kubeflow Architecture (diagram, continued)
  • 269. Memory Hierarchy: Past, Present and Future https://blog.dellemc.com/en-us/memory-centric-architecture-vision/
  • 270. New Memory Usage Paradigm https://blog.westerndigital.com/in-memory-computing-scale-ultrastar-memory-drive/
  • 271. Motivation: Memory Access Data Structures https://www.gridgain.com/resources/papers/introducing-apache-ignite
  • 273. Database Type Decision Tree https://www.nuodb.com/digging-distributed-sql
  • 275. Choosing the right IMC Technology https://www.gridgain.com/
  • 276. Uber Horovod: Main Mechanism – exchange of (averaged) gradients for distributed learning. https://eng.uber.com/horovod/, https://www.slideshare.net/databricks/horovod-ubers-open-source-distributed-deep-learning-framework-for-tensorflow
  • 277. Apache Kafka (can be replaced by Pulsar) https://twitter.com/PoetterThomas/status/1203472185135960066?s=20
  • 279. Apache Pulsar: Tool Integration https://jack-vanlightly.com/blog/2018/10/2/understanding- how-apache-pulsar-works
  • 281. Apache Pulsar: Round-up of Concepts https://jack-vanlightly.com/blog/2018/10/2/understanding- how-apache-pulsar-works
  • 284. Use of Redis Cache in AWS (Ichnaea) https://ichnaea.readthedocs.io/en/latest/deploy.html
  • 287. Apache Ignite: In-Memory Capabilities https://www.slideshare.net/Codemotion/an-introduction-to-apache-ignite-mandhir-gidda-codemotion-rome-2017
  • 288. Ignite & GridGain based on it https://www.youtube.com/watch?v=zVQ2clIoxIQ
  • 289. GridGain Functionality / Use Cases https://www.youtube.com/watch?v=rDX_ialHfkU
  • 290. GridGain typical IMC Architecture https://www.youtube.com/watch?v=rDX_ialHfkU
  • 294. VoltDB https://www.voltdb.com/blog/2017/05/24/mifidii-youre-wrong/sarah-mifid/ VoltDB claims to be the only enterprise-grade data platform that meets the real-time streaming data requirements of 5G-powered applications; it received US$10 million in Series C funding in October 2019.
  • 297. Hazelcast: Example Use in Digital Transformation https://hazelcast.com/use-cases/digital-transformation/
  • 300. Red Hat JBoss Data Grid https://developers.redhat.com/blog/2017/02/20/unlock-your-red-hat-jboss- data-grid-data-with-red-hat-jboss-data-virtualization/
  • 301. Red Hat JBoss Data Grid https://www.slideshare.net/opensourcementor/jdv-big-data-summit-final
  • 303. Corporate Memory Architecture (diagram)
 • Internal and external sources: source systems (CMS, GALA, Fleet, WF-I, IP, external data), ingested via Kafka / Flume (Flafka), streaming, batch and CDC; Flume / Sqoop / MFT for the initial load; an Initial Load Manager handles conversion, format decisions, consistency and bitemporality; additional ingestion tool: perhaps HDF/NiFi; transformation support tool: Talend or Diff-DB
 • Inbound layer: data integration / validation, data governance, data provisioning
 • Processing and storage layer (Corporate Memory data lake, LE1 / LE2 / LEx …): source data pool in Hive/Spark + ORC/Parquet* (hybrid: attributes + JSON BLOBs, diff records, historic corrections treated separately); common batch processing with YARN, Spark, Hive; Spark SQL, DataSets + Streaming, MLlib, …; Alluxio+Succinct on HDFS; process orchestration / error handling / monitoring / metadata management / security
 • Outbound layer: BP interface for analytical applications via Hive+ORC/Parquet, REST + (Sqoop / Drill / Exhibit / SploutSQL); target data queues (Hive*)
 • Analytical applications, analytics and reporting: R, SAS, SAP Design Studio, Web I, UI5, Crystal Reports, web services, SAP HANA, SAP BA HANA Native, FRDP core warehouse, data marts, analytics service delivery platform
 *) Hive/Parquet considered as a complementary technology
  • 304. Graph-Based Data Management (UML class diagram: GraphInheritanceDB overview)
 • Entry points: JDBCWrapper, CommandLine and HiveUDFs, plus clients such as SAP, SAS, R and Sqoop, feeding a Dispatcher and QueryAnalyzer
 • Processors: HQLStructuralProcessor and HQLDataProcessor execute the commands (Hive or Spark parallel processing as UDFs)
 • Command classes, each with dataOp(ArrayList<String>), structuralHQLBeforeDataOp(ArrayList<String>) and structuralHQLAfterDataOp(ArrayList<String>): DiffDBCommand, AddNodeCommand, AddNodeAttributeCommand, UpdateNodeAttributeFromTableCommand, DeleteNodeRowCommand, DropNodeCommand, AddEdgeCommand, AddEdgeAttributeCommand, UpdateEdgeAttributeFromTableCommand, DeleteEdgeRowCommand, DropEdgeCommand, InheritsAndExtendsCommand
 • Connection classes, each with connect(String), disconnect() and sendCommand(String) (Connection also has addIfNotExist, appendSemicolon and name fields): Connection, JDBCConnection, HiveCommandProcessorConnection, BeelineConnection, HCatalogConnection – processing runs via a connection that ensures exactly-once semantics
 • Importers/exporters and type management: XSDImporter, XMLImporter, JSONImporter, TypeScriptOrSwaggerOrRAMLImporter, XSDExporter, XMLExporter, JSONExporter, TypeScriptOrSwaggerOrRAMLExporter, JavaExporterWithHibernateAndJAXBAnnots (using HyperJAXB, Hibernate, JAXB), TypeManagement, HibernateHQLDriver
 • Supported formats and stores: XSD (XML Schema Doc), XML (using the Hive XML-SerDe), JSON, TypeScript or Swagger/RAML, ORC, Parquet, HBase, Cassandra, ScyllaDB, PostgreSQL, Exhibit (https://github.com/jwills/exhibit)
 • The results of the queries should be aggregated efficiently using in-memory technology.
  • 305. Questions? Understood? Comprendes?  verstanden.de  compris.com, potentialism.net Further Infographics: 1. https://www.pinterest.de/poetter_thomas/data-science-infographics/ 2. https://www.pinterest.de/poetter_thomas/ai-artificial-intelligence-infographics/ 3. https://www.pinterest.de/poetter_thomas/deep-learning-infographics/ 4. https://www.pinterest.de/poetter_thomas/deep-learning-architecture-elements- architectures-/ 5. https://www.pinterest.de/poetter_thomas/explainable-ai-xai-interpretable- machine-learninga/ 6. https://github.com/FavioVazquez/ds-cheatsheets