Install Docker & set up a Kubernetes cluster offline (using kubeadm) in an air-gapped environment.

My cluster details are as per the table below: one CentOS machine (online) and two RHEL machines, both of which sit inside the air-gapped environment.

(Cluster details table: the online CentOS machine is dockeronline; the offline RHEL machines are the master node k8s-masternode and the worker node k8s-workernode.)

DOCKER SETUP - run Steps 1 to 5 on the online machine.

Server1 - Online Machine (dockeronline)

Step 1: Set up the EPEL yum repository; this is required because some of Docker's dependency packages are only available there.

[root@dockeronline ~]# yum install epel-release.noarch -y

Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirror.titansi.com.my
 * extras: mirror.titansi.com.my
 * updates: mirror.titansi.com.my
base                                                        | 3.6 kB  00:00:00
extras                                                      | 2.9 kB  00:00:00
updates                                                     | 2.9 kB  00:00:00
(1/4): base/7/x86_64/group_gz                               | 153 kB  00:00:00
(2/4): extras/7/x86_64/primary_db                           | 232 kB  00:00:00
(3/4): updates/7/x86_64/primary_db                          | 7.1 MB  00:00:01
(4/4): base/7/x86_64/primary_db                             | 6.1 MB  00:00:02
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-11 will be installed
--> Finished Dependency Resolution


Dependencies Resolved


==================================================================================
 Package              Arch            Version         Repository        Size
==================================================================================
Installing:
 epel-release          noarch          7-11            extras            15 k


Transaction Summary
==================================================================================
Install  1 Package


Total download size: 15 k
Installed size: 24 k
Downloading packages:
warning: /var/cache/yum/x86_64/7/extras/packages/epel-release-7-11.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for epel-release-7-11.noarch.rpm is not installed
epel-release-7-11.noarch.rpm                                                                     |  15 kB  00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-9.2009.0.el7.centos.x86_64 (@anaconda)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : epel-release-7-11.noarch                                                                             1/1
  Verifying  : epel-release-7-11.noarch                                                                             1/1


Installed:
  epel-release.noarch 0:7-11


Complete!

Step 2: Set up the Docker repository

[root@dockeronline ~]# yum-config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

Note: If yum-config-manager is not installed on your server, install it by running the command below:

[root@dockeronline ~]# yum install yum-utils -y

Step 3: Set up the nightly repository for Docker (optional; the stable repository added in Step 2 is enabled by default)

[root@dockeronline ~]# yum-config-manager --enable docker-ce-nightly

Loaded plugins: fastestmirror
============================================================= repo: docker-ce-nightly ==============================================================
[docker-ce-nightly]
async = True
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/7
baseurl = https://download.docker.com/linux/centos/7/x86_64/nightly
cache = 0
cachedir = /var/cache/yum/x86_64/7/docker-ce-nightly
check_config_file_age = True
compare_providers_priority = 80
cost = 1000
deltarpm_metadata_percentage = 100
deltarpm_percentage =
enabled = 1
enablegroups = True
exclude =
failovermethod = priority
ftp_disable_epsv = False
gpgcadir = /var/lib/yum/repos/x86_64/7/docker-ce-nightly/gpgcadir
gpgcakey =
gpgcheck = True
gpgdir = /var/lib/yum/repos/x86_64/7/docker-ce-nightly/gpgdir
gpgkey = https://download.docker.com/linux/centos/gpg
hdrdir = /var/cache/yum/x86_64/7/docker-ce-nightly/headers
http_caching = all
includepkgs =
ip_resolve =
keepalive = True
keepcache = False
mddownloadpolicy = sqlite
mdpolicy = group:small
mediaid =
metadata_expire = 21600
metadata_expire_filter = read-only:present
metalink =
minrate = 0
mirrorlist =
mirrorlist_expire = 86400
name = Docker CE Nightly - x86_64
old_base_cache_dir =
password =
persistdir = /var/lib/yum/repos/x86_64/7/docker-ce-nightly
pkgdir = /var/cache/yum/x86_64/7/docker-ce-nightly/packages
proxy = False
proxy_dict =
proxy_password =
proxy_username =
repo_gpgcheck = False
retries = 10
skip_if_unavailable = False
ssl_check_cert_permissions = True
sslcacert =
sslclientcert =
sslclientkey =
sslverify = True
throttle = 0
timeout = 30.0
ui_id = docker-ce-nightly/7/x86_64
ui_repoid_vars = releasever, basearch
username =

Step 4: Run the following to build the cache for the yum package manager.

[root@dockeronline ~]# yum makecache fast

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink                                         |  26 kB  00:00:00
 * base: mirror.titansi.com.my
 * epel: d2lzkl7pfhq30w.cloudfront.net
 * extras: mirror.titansi.com.my
 * updates: mirror.titansi.com.my
base                                                         | 3.6 kB  00:00:00
docker-ce-nightly                                            | 3.5 kB  00:00:00
docker-ce-stable                                             | 3.5 kB  00:00:00
extras                                                       | 2.9 kB  00:00:00
updates                                                      | 2.9 kB  00:00:00
(1/4): docker-ce-stable/7/x86_64/primary_db                  |  60 kB  00:00:00
(2/4): docker-ce-stable/7/x86_64/updateinfo                  |   55 B  00:00:00
(3/4): docker-ce-nightly/7/x86_64/primary_db                 | 168 kB  00:00:00
(4/4): docker-ce-nightly/7/x86_64/updateinfo                 |   55 B  00:00:00
Metadata Cache Created


Step 5: Create a Docker directory and download the Docker packages into it
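The prompt in the next command shows it being run from inside the Docker directory; if you haven't created it yet, do so first:

[root@dockeronline ~]# mkdir Docker
[root@dockeronline ~]# cd Docker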

[root@dockeronline Docker]# yumdownloader --resolve docker-ce

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.titansi.com.my
 * epel: d2lzkl7pfhq30w.cloudfront.net
 * extras: mirror.titansi.com.my
 * updates: mirror.titansi.com.my
--> Running transaction check
---> Package docker-ce.x86_64 3:20.10.6-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Processing Dependency: containerd.io >= 1.4.1 for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Processing Dependency: libseccomp >= 2.3 for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Processing Dependency: docker-ce-cli for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Processing Dependency: docker-ce-rootless-extras for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Processing Dependency: libcgroup for package: 3:docker-ce-20.10.6-3.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.119.2-1.911c772.el7_8 will be installed
--> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.119.2-1.911c772.el7_8.noarch
---> Package containerd.io.x86_64 0:1.4.4-3.1.el7 will be installed
---> Package docker-ce-cli.x86_64 1:20.10.6-3.el7 will be installed
--> Processing Dependency: docker-scan-plugin(x86-64) for package: 1:docker-ce-cli-20.10.6-3.el7.x86_64
---> Package docker-ce-rootless-extras.x86_64 0:20.10.6-3.el7 will be installed
--> Processing Dependency: fuse-overlayfs >= 0.7 for package: docker-ce-rootless-extras-20.10.6-3.el7.x86_64
--> Processing Dependency: slirp4netns >= 0.4 for package: docker-ce-rootless-extras-20.10.6-3.el7.x86_64
---> Package libcgroup.x86_64 0:0.41-21.el7 will be installed
---> Package libseccomp.x86_64 0:2.3.1-4.el7 will be installed
--> Running transaction check
---> Package docker-scan-plugin.x86_64 0:0.7.0-3.el7 will be installed
---> Package fuse-overlayfs.x86_64 0:0.7.2-6.el7_8 will be installed
--> Processing Dependency: libfuse3.so.3(FUSE_3.2)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3(FUSE_3.0)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3()(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
---> Package policycoreutils-python.x86_64 0:2.5-34.el7 will be installed
--> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
---> Package slirp4netns.x86_64 0:0.4.3-4.el7_8 will be installed
--> Running transaction check
---> Package audit-libs-python.x86_64 0:2.8.5-4.el7 will be installed
---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
---> Package fuse3-libs.x86_64 0:3.6.1-4.el7 will be installed
---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
--> Finished Dependency Resolution
(1/17): audit-libs-python-2.8.5-4.el7.x86_64.rpm              |  76 kB  00:00:00
(2/17): checkpolicy-2.5-8.el7.x86_64.rpm                      | 295 kB  00:00:00
(3/17): container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm  |  40 kB  00:00:00
warning: /root/docker/containerd.io-1.4.4-3.1.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for containerd.io-1.4.4-3.1.el7.x86_64.rpm is not installed
(4/17): containerd.io-1.4.4-3.1.el7.x86_64.rpm                |  33 MB  00:00:04
Public key for docker-ce-cli-20.10.6-3.el7.x86_64.rpm is not installed
(5/17): docker-ce-cli-20.10.6-3.el7.x86_64.rpm                |  33 MB  00:00:05
(6/17): docker-ce-20.10.6-3.el7.x86_64.rpm                    |  27 MB  00:00:06
(7/17): fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm               |  54 kB  00:00:00
(8/17): libseccomp-2.3.1-4.el7.x86_64.rpm                     |  56 kB  00:00:00
(9/17): docker-scan-plugin-0.7.0-3.el7.x86_64.rpm             | 4.2 MB  00:00:00
(10/17): docker-ce-rootless-extras-20.10.6-3.el7.x86_64.rpm   | 9.2 MB  00:00:01
(11/17): libsemanage-python-2.5-14.el7.x86_64.rpm             | 113 kB  00:00:00
(12/17): fuse3-libs-3.6.1-4.el7.x86_64.rpm                    |  82 kB  00:00:00
(13/17): libcgroup-0.41-21.el7.x86_64.rpm                     |  66 kB  00:00:00
(14/17): python-IPy-0.75-6.el7.noarch.rpm                     |  32 kB  00:00:00
(15/17): slirp4netns-0.4.3-4.el7_8.x86_64.rpm                 |  81 kB  00:00:00
(16/17): policycoreutils-python-2.5-34.el7.x86_64.rpm         | 457 kB  00:00:00
(17/17): setools-libs-3.3.8-4.el7.x86_64.rpm                  | 620 kB  00:00:00



Step 6: Make a tarball of the Docker directory and transfer it to both offline machines.

[root@dockeronline ~]# tar cvzf Docker.tar.gz Docker

[root@dockeronline ~]# ls -l
total 110748
drwxr-xr-x. 2 root root      4096 Apr 26 15:45 Docker
-rw-r--r--. 1 root root 113396501 Apr 26 15:48 Docker.tar.gz
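Copy the tarball to both offline machines. In a strictly air-gapped environment this is typically done via approved removable media; if an intermediate transfer path over SSH is available, a minimal sketch (assuming the file lands in /root) would be:

[root@dockeronline ~]# scp Docker.tar.gz root@k8s-masternode:/root/
[root@dockeronline ~]# scp Docker.tar.gz root@k8s-workernode:/root/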

Install Docker on the Offline machines

Step 1: Locate the tar file transferred to the offline machine, extract it, and install the downloaded packages on both machines.
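A minimal sketch of the extraction, assuming the tarball was copied to /root:

[root@k8s-masternode ~]# tar xvzf Docker.tar.gz
[root@k8s-masternode ~]# cd Docker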

[root@k8s-masternode Docker]# rpm -ivh --replacepkgs --replacefiles *.rpm

warning: audit-libs-python-2.8.5-4.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
warning: containerd.io-1.4.4-3.1.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:libseccomp-2.3.1-4.el7           ################################# [  6%]
   2:docker-scan-plugin-0:0.7.0-3.el7 ################################# [ 12%]
   3:docker-ce-cli-1:20.10.6-3.el7    ################################# [ 18%]
   4:libcgroup-0.41-21.el7            ################################# [ 24%]
   5:slirp4netns-0.4.3-4.el7_8        ################################# [ 29%]
   6:setools-libs-3.3.8-4.el7         ################################# [ 35%]
   7:python-IPy-0.75-6.el7            ################################# [ 41%]
   8:libsemanage-python-2.5-14.el7    ################################# [ 47%]
   9:fuse3-libs-3.6.1-4.el7           ################################# [ 53%]
  10:fuse-overlayfs-0.7.2-6.el7_8     ################################# [ 59%]
  11:checkpolicy-2.5-8.el7            ################################# [ 65%]
  12:audit-libs-python-2.8.5-4.el7    ################################# [ 71%]
  13:policycoreutils-python-2.5-34.el7################################# [ 76%]
  14:container-selinux-2:2.119.2-1.911################################# [ 82%]
  15:containerd.io-1.4.4-3.1.el7      ################################# [ 88%]
  16:docker-ce-rootless-extras-0:20.10################################# [ 94%]
  17:docker-ce-3:20.10.6-3.el7        ################################# [100%]

Step 2: Check the installed Docker version

[root@k8s-masternode Docker]# docker version

Client: Docker Engine - Community
 Version:           20.10.6
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        370c289
 Built:             Fri Apr  9 22:45:33 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true


Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:43:57 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Step 3: Start and enable Docker on both offline machines

[root@k8s-masternode Docker]# systemctl start docker.service


[root@k8s-masternode Docker]# systemctl enable docker.service

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.


[root@k8s-masternode Docker]# systemctl status docker

● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-04-26 13:46:25 +08; 25s ago
     Docs: https://docs.docker.com
 Main PID: 1637 (dockerd)
    Tasks: 8
   Memory: 42.6M
   CGroup: /system.slice/docker.service
           └─1637 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

You can verify your Docker setup by pulling an image from Docker Hub on the online machine (dockeronline) and transferring it to any offline machine in your cluster.
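For example, on the online machine (nginx here is just an illustration; any public image works):

[root@dockeronline ~]# docker pull nginx
[root@dockeronline ~]# docker save nginx > nginx.tar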

Step 4: Load the image on the offline machine

[root@k8s-masternode ~]# docker load < nginx.tar

7e718b9c0c8c: Loading layer [==================================================>]  72.52MB/72.52MB
4dc529e519c4: Loading layer [==================================================>]  64.81MB/64.81MB
23c959acc3d0: Loading layer [==================================================>]  3.072kB/3.072kB
15aac1be5f02: Loading layer [==================================================>]  4.096kB/4.096kB
974e9faf62f1: Loading layer [==================================================>]  3.584kB/3.584kB
64ee8c6d0de0: Loading layer [==================================================>]  7.168kB/7.168kB
Loaded image: nginx:latest
346fddbbb0ff: Loading layer [==================================================>]  72.52MB/72.52MB
2ba086d0a00c: Loading layer [==================================================>]   64.8MB/64.8MB
66f88fdd699b: Loading layer [==================================================>]  3.072kB/3.072kB
903ae422d007: Loading layer [==================================================>]  4.096kB/4.096kB
db765d5bf9f8: Loading layer [==================================================>]  3.584kB/3.584kB
1914a564711c: Loading layer [==================================================>]  7.168kB/7.168kB
Loaded image ID: sha256:62d49f9bab67f7c70ac3395855bf01389eb3175b374e621f6f191bf31b54cd5b
Loaded image ID: sha256:7ce4f91ef623b9672ec12302c4a710629cd542617c1ebc616a48d06e2a84656a

Step 5: Verify the image loaded successfully

[root@k8s-masternode ~]# docker images

REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
nginx        latest    62d49f9bab67   12 days ago   133MB

Step 6: Verify the image by running a container.

[root@k8s-masternode ~]# docker run -d nginx

b46987068d8ecb8833e81315c150bcbcb63b341fe22424b6121962c82692d13a

[root@localhost ~]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS     NAMES
b46987068d8e   nginx     "/docker-entrypoint.…"   2 seconds ago   Up 2 seconds   80/tcp    beautiful_elbakyan


KUBERNETES SETUP - run Steps 1 to 3 on the online machine.

Online Machine

Step 1: Set up the k8s yum repository

[root@dockeronline ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Step 2: Create a directory and download the k8s packages

[root@dockeronline ~]# mkdir kube

[root@dockeronline ~]# cd kube

[root@dockeronline kube]# yumdownloader --assumeyes --destdir=<your_rpm_dir> --resolve yum-utils kubeadm-1.18.* kubelet-1.18.* kubectl-1.18.* ebtables

Here <your_rpm_dir> is your download destination (e.g. the kube directory just created). Note that the 1.18.* wildcards resolve to the latest matching patch release, which is how the transcript below ends up with kubeadm/kubelet 1.18.4 but kubectl 1.18.17; pin exact versions (e.g. kubeadm-1.18.4-0) if you want all components to match.

Step 3: Make a tarball and transfer it to the offline machine.

[root@dockeronline ~]# tar cvzf kube.tar.gz kube/

Offline Machine

Step 4: Locate the transferred tarball on the offline machine, extract it, and install the k8s utilities on each offline machine (all nodes):
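A minimal sketch of the extraction, assuming kube.tar.gz was copied to /root:

[root@k8s-masternode ~]# tar xvzf kube.tar.gz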

[root@k8s-masternode ~]# yum install -y --cacheonly --disablerepo=* kube/*.rpm


Loaded plugins: product-id, search-disabled-repos, subscription-manager
Examining kube/14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm: cri-tools-1.13.0-0.x86_64
Marking kube/14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm to be installed
Examining kube/445fcfae78f0ae899ca95cd0749b36a012f7c08c9865b415b2ec7d5de4ff601e-kubeadm-1.18.4-0.x86_64.rpm: kubeadm-1.18.4-0.x86_64
Marking kube/445fcfae78f0ae899ca95cd0749b36a012f7c08c9865b415b2ec7d5de4ff601e-kubeadm-1.18.4-0.x86_64.rpm to be installed
Examining kube/conntrack-tools-1.4.4-7.el7.x86_64.rpm: conntrack-tools-1.4.4-7.el7.x86_64
Marking kube/conntrack-tools-1.4.4-7.el7.x86_64.rpm to be installed
Examining kube/d8c1a189293d125745a094b96d107211404f9e3eda1c7a169dde994e792ba30a-kubectl-1.18.17-0.x86_64.rpm: kubectl-1.18.17-0.x86_64
Marking kube/d8c1a189293d125745a094b96d107211404f9e3eda1c7a169dde994e792ba30a-kubectl-1.18.17-0.x86_64.rpm to be installed
Examining kube/ebtables-2.0.10-16.el7.x86_64.rpm: ebtables-2.0.10-16.el7.x86_64
kube/ebtables-2.0.10-16.el7.x86_64.rpm: does not update installed package.
Examining kube/fd1282837aaaaf53178fd1e21d9a3c18ba40db6263e397642ec176998516e904-kubelet-1.18.4-0.x86_64.rpm: kubelet-1.18.4-0.x86_64
Marking kube/fd1282837aaaaf53178fd1e21d9a3c18ba40db6263e397642ec176998516e904-kubelet-1.18.4-0.x86_64.rpm to be installed
Examining kube/libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm: libnetfilter_cthelper-1.0.0-11.el7.x86_64
Marking kube/libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm to be installed
Examining kube/libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm: libnetfilter_cttimeout-1.0.0-7.el7.x86_64
Marking kube/libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm to be installed
Examining kube/libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm: libnetfilter_queue-1.0.2-2.el7_2.x86_64
Marking kube/libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm to be installed
Examining kube/socat-1.7.3.2-2.el7.x86_64.rpm: socat-1.7.3.2-2.el7.x86_64
Marking kube/socat-1.7.3.2-2.el7.x86_64.rpm to be installed
Examining kube/yum-utils-1.1.31-54.el7_8.noarch.rpm: yum-utils-1.1.31-54.el7_8.noarch
Marking kube/yum-utils-1.1.31-54.el7_8.noarch.rpm as an update to yum-utils-1.1.31-52.el7.noarch
Resolving Dependencies
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-7.el7 will be installed
---> Package cri-tools.x86_64 0:1.13.0-0 will be installed
---> Package kubeadm.x86_64 0:1.18.4-0 will be installed
---> Package kubectl.x86_64 0:1.18.17-0 will be installed
---> Package kubelet.x86_64 0:1.18.4-0 will be installed
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
---> Package yum-utils.noarch 0:1.1.31-52.el7 will be updated
---> Package yum-utils.noarch 0:1.1.31-54.el7_8 will be an update
--> Finished Dependency Resolution


Dependencies Resolved


=================================================================================================================================================================
 Package                    Arch       Version             Repository                                                                                       Size
=================================================================================================================================================================
Installing:
 conntrack-tools            x86_64     1.4.4-7.el7         /conntrack-tools-1.4.4-7.el7.x86_64                                                             550 k
 cri-tools                  x86_64     1.13.0-0            /14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64      21 M
 kubeadm                    x86_64     1.18.4-0            /445fcfae78f0ae899ca95cd0749b36a012f7c08c9865b415b2ec7d5de4ff601e-kubeadm-1.18.4-0.x86_64        38 M
 kubectl                    x86_64     1.18.17-0           /d8c1a189293d125745a094b96d107211404f9e3eda1c7a169dde994e792ba30a-kubectl-1.18.17-0.x86_64       42 M
 kubelet                    x86_64     1.18.4-0            /fd1282837aaaaf53178fd1e21d9a3c18ba40db6263e397642ec176998516e904-kubelet-1.18.4-0.x86_64       162 M
 libnetfilter_cthelper      x86_64     1.0.0-11.el7        /libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                       35 k
 libnetfilter_cttimeout     x86_64     1.0.0-7.el7         /libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                       39 k
 libnetfilter_queue         x86_64     1.0.2-2.el7_2       /libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                         45 k
 socat                      x86_64     1.7.3.2-2.el7       /socat-1.7.3.2-2.el7.x86_64                                                                     1.1 M
Updating:
 yum-utils                  noarch     1.1.31-54.el7_8     /yum-utils-1.1.31-54.el7_8.noarch                                                               337 k


Transaction Summary
=================================================================================================================================================================
Install  9 Packages
Upgrade  1 Package


Total size: 265 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
** Found 3 pre-existing rpmdb problem(s), 'yum check' output follows:
libseccomp-2.3.1-4.el7.x86_64 is a duplicate with libseccomp-2.3.1-3.el7.x86_64
policycoreutils-2.5-34.el7.x86_64 is a duplicate with policycoreutils-2.5-33.el7.x86_64
policycoreutils-python-2.5-34.el7.x86_64 is a duplicate with policycoreutils-python-2.5-33.el7.x86_64
  Installing : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                                                    1/11
  Installing : socat-1.7.3.2-2.el7.x86_64                                                                                                                   2/11
  Installing : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                                                    3/11
  Installing : kubectl-1.18.17-0.x86_64                                                                                                                     4/11
  Installing : cri-tools-1.13.0-0.x86_64                                                                                                                    5/11
  Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                                      6/11
  Installing : conntrack-tools-1.4.4-7.el7.x86_64                                                                                                           7/11
  Installing : kubelet-1.18.4-0.x86_64                                                                                                                      8/11
  Installing : kubeadm-1.18.4-0.x86_64                                                                                                                      9/11
  Updating   : yum-utils-1.1.31-54.el7_8.noarch                                                                                                            10/11
  Cleanup    : yum-utils-1.1.31-52.el7.noarch                                                                                                              11/11
Loaded plugins: product-id, subscription-manager
  Verifying  : kubelet-1.18.4-0.x86_64                                                                                                                      1/11
  Verifying  : kubeadm-1.18.4-0.x86_64                                                                                                                      2/11
  Verifying  : conntrack-tools-1.4.4-7.el7.x86_64                                                                                                           3/11
  Verifying  : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                                      4/11
  Verifying  : yum-utils-1.1.31-54.el7_8.noarch                                                                                                             5/11
  Verifying  : cri-tools-1.13.0-0.x86_64                                                                                                                    6/11
  Verifying  : kubectl-1.18.17-0.x86_64                                                                                                                     7/11
  Verifying  : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                                                    8/11
  Verifying  : socat-1.7.3.2-2.el7.x86_64                                                                                                                   9/11
  Verifying  : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                                                   10/11
  Verifying  : yum-utils-1.1.31-52.el7.noarch                                                                                                              11/11


Installed:
  conntrack-tools.x86_64 0:1.4.4-7.el7                  cri-tools.x86_64 0:1.13.0-0                         kubeadm.x86_64 0:1.18.4-0
  kubectl.x86_64 0:1.18.17-0                            kubelet.x86_64 0:1.18.4-0                           libnetfilter_cthelper.x86_64 0:1.0.0-11.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7           libnetfilter_queue.x86_64 0:1.0.2-2.el7_2           socat.x86_64 0:1.7.3.2-2.el7


Updated:
  yum-utils.noarch 0:1.1.31-54.el7_8


Complete!

Step 5: Check the list of images required for the k8s setup; the same versions of these images need to be downloaded.

[root@k8s-masternode ~]# kubeadm config images list


W0409 17:35:23.592481    3551 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0409 17:35:23.592681    3551 version.go:103] falling back to the local client version: v1.18.4
W0409 17:35:23.592833    3551 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.4
k8s.gcr.io/kube-controller-manager:v1.18.4
k8s.gcr.io/kube-scheduler:v1.18.4
k8s.gcr.io/kube-proxy:v1.18.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Step 6 [Online]: Pull the k8s images on the online machine and transfer them to the offline machines (all nodes)

[root@dockeronline ~]# docker pull k8s.gcr.io/kube-apiserver:v1.18.4  
[root@dockeronline ~]# docker save k8s.gcr.io/kube-apiserver:v1.18.4 > kube-apiserver_v1.18.4.tar

[root@dockeronline ~]# docker pull k8s.gcr.io/kube-controller-manager:v1.18.4  
[root@dockeronline ~]# docker save k8s.gcr.io/kube-controller-manager:v1.18.4 > kube-controller-manager_v1.18.4.tar

[root@dockeronline ~]# docker pull k8s.gcr.io/kube-scheduler:v1.18.4  
[root@dockeronline ~]# docker save k8s.gcr.io/kube-scheduler:v1.18.4 > kube-scheduler_v1.18.4.tar

[root@dockeronline ~]# docker pull k8s.gcr.io/kube-proxy:v1.18.4  
[root@dockeronline ~]# docker save k8s.gcr.io/kube-proxy:v1.18.4 > kube-proxy_v1.18.4.tar

[root@dockeronline ~]# docker pull k8s.gcr.io/pause:3.2  
[root@dockeronline ~]# docker save k8s.gcr.io/pause:3.2 > pause_3.2.tar

[root@dockeronline ~]# docker pull k8s.gcr.io/etcd:3.4.3-0  
[root@dockeronline ~]# docker save k8s.gcr.io/etcd:3.4.3-0 > etcd_3.4.3-0.tar

[root@dockeronline ~]# docker pull k8s.gcr.io/coredns:1.6.7  
[root@dockeronline ~]# docker save k8s.gcr.io/coredns:1.6.7 > coredns_1.6.7.tar
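The seven pull/save pairs above can equally be scripted as one loop; this sketch derives the same tar file names used in this article from the image tags:

[root@dockeronline ~]# for image in kube-apiserver:v1.18.4 kube-controller-manager:v1.18.4 \
      kube-scheduler:v1.18.4 kube-proxy:v1.18.4 pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
    docker pull "k8s.gcr.io/${image}"
    # e.g. kube-apiserver:v1.18.4 -> kube-apiserver_v1.18.4.tar
    docker save "k8s.gcr.io/${image}" > "$(echo "${image}" | tr ':' '_').tar"
done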

Step 7 [Offline]: Load the k8s images on each offline machine (all nodes).

[root@k8s-masternode ~]# docker load < kube-apiserver_v1.18.4.tar
[root@k8s-masternode ~]# docker load < kube-controller-manager_v1.18.4.tar
[root@k8s-masternode ~]# docker load < kube-scheduler_v1.18.4.tar
[root@k8s-masternode ~]# docker load < kube-proxy_v1.18.4.tar
[root@k8s-masternode ~]# docker load < pause_3.2.tar
[root@k8s-masternode ~]# docker load < etcd_3.4.3-0.tar
[root@k8s-masternode ~]# docker load < coredns_1.6.7.tar
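Alternatively, load everything in the transfer directory with one loop (assuming the directory contains only these image tarballs):

[root@k8s-masternode ~]# for f in *.tar; do docker load < "$f"; done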


Sample output:
82a5cde9d9a9: Loading layer [==================================================>]  53.87MB/53.87MB
974f6952f60e: Loading layer [==================================================>]  120.7MB/120.7MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.18.4
[root@lxsptintapp02t kubeimages]# docker load < kube-controller-manager_v1.18.4.tar
6dad22d72181: Loading layer [==================================================>]  110.1MB/110.1MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.18.4
[root@lxsptintapp02t kubeimages]# docker load < kube-scheduler_v1.18.4.tar
d7d8de739211: Loading layer [==================================================>]  42.95MB/42.95MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.18.4
[root@lxsptintapp02t kubeimages]# docker load < kube-proxy_v1.18.4.tar
a2b38eae1b39: Loading layer [==================================================>]  21.62MB/21.62MB
f378e9487360: Loading layer [==================================================>]  5.168MB/5.168MB
a35a0b8b55f5: Loading layer [==================================================>]  4.608kB/4.608kB
dea351e760ec: Loading layer [==================================================>]  8.192kB/8.192kB
d57a645c2b0c: Loading layer [==================================================>]  8.704kB/8.704kB
529f435daf70: Loading layer [==================================================>]  38.39MB/38.39MB
Loaded image: k8s.gcr.io/kube-proxy:v1.18.4
[root@lxsptintapp02t kubeimages]# docker load < pause_3.2.tar
ba0dae6243cc: Loading layer [==================================================>]  684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause:3.2
[root@lxsptintapp02t kubeimages]# docker load < etcd_3.4.3-0.tar
fe9a8b4f1dcc: Loading layer [==================================================>]  43.87MB/43.87MB
ce04b89b7def: Loading layer [==================================================>]  224.9MB/224.9MB
1b2bc745b46f: Loading layer [==================================================>]  21.22MB/21.22MB
Loaded image: k8s.gcr.io/etcd:3.4.3-0
[root@lxsptintapp02t kubeimages]# docker load < coredns_1.6.7.tar
225df95e717c: Loading layer [==================================================>]  336.4kB/336.4kB
c965b38a6629: Loading layer [==================================================>]  43.58MB/43.58MB
Loaded image: k8s.gcr.io/coredns:1.6.7

Download and Install the Kubernetes Network (Flannel)

Step 8 [Online]: Download the networking manifest.

[root@dockeronline ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Step 9 [Online]: Open the kube-flannel.yml file and search for the line mentioning the flannel image version. For example, in the following string the version is v0.13.1-rc2:

image: quay.io/coreos/flannel:v0.13.1-rc2
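A quick way to pull that line out of the manifest (a sketch; the image may be listed more than once, so take the first match):

[root@dockeronline ~]# grep -m1 'image:' kube-flannel.yml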

Step 10 [Online]: Pull the image and transfer it to the offline machines (all nodes)

[root@dockeronline ~]# docker pull quay.io/coreos/flannel:v0.13.1-rc2
[root@dockeronline ~]# docker save quay.io/coreos/flannel:v0.13.1-rc2 > flannel_v0.13.1-rc2.tar

Step 11 [Offline]: Load the k8s networking image on each offline machine (each node):

[root@k8s-masternode ~]# docker load < flannel_v0.13.1-rc2.tar

50644c29ef5a: Loading layer [===================>]  5.845MB/5.845MB
0be670d27a91: Loading layer [===================>]  11.42MB/11.42MB
90679e912622: Loading layer [===================>]  2.267MB/2.267MB
6db5e246b16d: Loading layer [===================>]  45.69MB/45.69MB
97320fed8db7: Loading layer [===================>]   5.12kB/5.12kB
8a984b390686: Loading layer [===================>]  9.216kB/9.216kB
3b729894a01f: Loading layer [===================>]   7.68kB/7.68kB
Loaded image: quay.io/coreos/flannel:v0.13.1-rc2


Download and Install Ingress Controller (NGINX)

Step 12 [Online]: Execute the following commands to download and save the image:

[root@dockeronline ~]# docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
[root@dockeronline ~]# docker save quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0 > nginx-ingress-controller_0.30.0.tar

Step 13 [Offline]: Load the NGINX ingress controller image on each node:

[root@k8s-masternode ~]# docker load < nginx-ingress-controller_0.30.0.tar

5216338b40a7: Loading layer [===========================>]  5.857MB/5.857MB
0ea7bac8e332: Loading layer [===========================>]  161.4MB/161.4MB
1f269229f946: Loading layer [===========================>]  6.144kB/6.144kB
2c1f73db0ddc: Loading layer [===========================>]  30.43MB/30.43MB
f0788cc6217b: Loading layer [===========================>]  20.85MB/20.85MB
1ba6bdbc843e: Loading layer [===========================>]  4.096kB/4.096kB
cb684eb76ff9: Loading layer [===========================>]  626.2kB/626.2kB
6d2e5bdfa0f8: Loading layer [===========================>]  50.15MB/50.15MB
e21a860ccea4: Loading layer [===========================>]  6.656kB/6.656kB
00d33fdc8470: Loading layer [===========================>]  37.23MB/37.23MB
3d228bf43dae: Loading layer [===========================>]  20.89MB/20.89MB
6f6e07efc7e7: Loading layer [===========================>]  6.656kB/6.656kB
Loaded image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0

Deploy the cluster

To deploy the k8s cluster, perform Steps 1-5 on each offline machine, then complete the subsequent steps as indicated:

Step 1: Log in with root access

Step 2: Execute the following commands to turn off swap for the k8s installation:

[root@k8s-masternode ~]# swapoff -a

[root@k8s-masternode ~]# sed -e '/swap/s/^/#/g' -i /etc/fstab
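Optionally confirm swap is now off; the Swap line of free should show all zeros:

[root@k8s-masternode ~]# free -h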

Step 3: Execute the commands below to ensure SELinux is in permissive mode.

[root@k8s-masternode ~]# setenforce 0

[root@k8s-masternode ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
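Verify the runtime mode (getenforce should now report Permissive):

[root@k8s-masternode ~]# getenforce
Permissive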

Step 4: Ensure that the sysctl option net.bridge.bridge-nf-call-iptables is set to 1 by executing the following commands:

[root@k8s-masternode ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

[root@k8s-masternode ~]# modprobe br_netfilter

[root@k8s-masternode ~]# sysctl --system


* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
net.ipv4.ip_forward = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
net.ipv4.ip_forward = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

Step 5: Allow the k8s service ports through the firewall.

Run the below set of commands on the master node only:

[root@k8s-masternode ~]# firewall-cmd --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp

[root@k8s-masternode ~]# firewall-cmd --reload

[root@k8s-masternode ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

[root@k8s-masternode ~]# echo 1 > /proc/sys/net/ipv4/ip_forward

Run the below set of commands on the worker nodes only:

[root@k8s-workernode ~]# firewall-cmd --permanent --add-port={10250,30000-32767}/tcp

[root@k8s-workernode ~]# firewall-cmd --reload

[root@k8s-workernode ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

[root@k8s-workernode ~]# echo 1 > /proc/sys/net/ipv4/ip_forward


Master Node only: Create a cluster, deploy the Flannel network, and schedule pods

Step 6: Execute the following command to initialize the k8s cluster. The Kubernetes version should be v1.18.4, matching the images we downloaded earlier; the pod network CIDR is 10.244.0.0/16 (the range the Flannel manifest expects by default); and the advertise address should be the IP of the master node.

[root@k8s-masternode ~]# kubeadm init --kubernetes-version=1.18.4 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.200.136.114 --v=5


I0409 17:41:13.379422    5750 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
W0409 17:41:13.379717    5750 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.4
[preflight] Running pre-flight checks
I0409 17:41:13.379874    5750 checks.go:577] validating Kubernetes and kubeadm version
I0409 17:41:13.379893    5750 checks.go:166] validating if the firewall is enabled and active
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
I0409 17:41:13.388451    5750 checks.go:201] validating availability of port 6443
I0409 17:41:13.389164    5750 checks.go:201] validating availability of port 10259
I0409 17:41:13.389211    5750 checks.go:201] validating availability of port 10257
I0409 17:41:13.389246    5750 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0409 17:41:13.389258    5750 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0409 17:41:13.389272    5750 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0409 17:41:13.389281    5750 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0409 17:41:13.389294    5750 checks.go:432] validating if the connectivity type is via proxy or direct
I0409 17:41:13.389330    5750 checks.go:471] validating http connectivity to first IP address in the CIDR
I0409 17:41:13.389351    5750 checks.go:471] validating http connectivity to first IP address in the CIDR
I0409 17:41:13.389364    5750 checks.go:102] validating the container runtime
I0409 17:41:13.485735    5750 checks.go:128] validating if the service is enabled and active
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0409 17:41:13.587809    5750 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0409 17:41:13.587854    5750 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0409 17:41:13.587883    5750 checks.go:649] validating whether swap is enabled or not
I0409 17:41:13.587914    5750 checks.go:376] validating the presence of executable conntrack
I0409 17:41:13.587931    5750 checks.go:376] validating the presence of executable ip
I0409 17:41:13.587939    5750 checks.go:376] validating the presence of executable iptables
I0409 17:41:13.587958    5750 checks.go:376] validating the presence of executable mount
I0409 17:41:13.587971    5750 checks.go:376] validating the presence of executable nsenter
I0409 17:41:13.587982    5750 checks.go:376] validating the presence of executable ebtables
I0409 17:41:13.587990    5750 checks.go:376] validating the presence of executable ethtool
I0409 17:41:13.587997    5750 checks.go:376] validating the presence of executable socat
I0409 17:41:13.588007    5750 checks.go:376] validating the presence of executable tc
I0409 17:41:13.588016    5750 checks.go:376] validating the presence of executable touch
I0409 17:41:13.588027    5750 checks.go:520] running all checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
I0409 17:41:13.685214    5750 checks.go:406] checking whether the given node name is reachable using net.LookupHost
        [WARNING Hostname]: hostname "lxsptintapp02t" could not be reached
        [WARNING Hostname]: hostname "lxsptintapp02t": lookup lxsptintapp02t on 10.108.3.74:53: no such host
I0409 17:41:13.688273    5750 checks.go:618] validating kubelet version
I0409 17:41:13.735598    5750 checks.go:128] validating if the service is enabled and active
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0409 17:41:13.742962    5750 checks.go:201] validating availability of port 10250
I0409 17:41:13.743026    5750 checks.go:201] validating availability of port 2379
I0409 17:41:13.743053    5750 checks.go:201] validating availability of port 2380
I0409 17:41:13.743081    5750 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0409 17:41:13.785494    5750 checks.go:838] image exists: k8s.gcr.io/kube-apiserver:v1.18.4
I0409 17:41:13.829456    5750 checks.go:838] image exists: k8s.gcr.io/kube-controller-manager:v1.18.4
I0409 17:41:13.872297    5750 checks.go:838] image exists: k8s.gcr.io/kube-scheduler:v1.18.4
I0409 17:41:13.914578    5750 checks.go:838] image exists: k8s.gcr.io/kube-proxy:v1.18.4
I0409 17:41:13.957013    5750 checks.go:838] image exists: k8s.gcr.io/pause:3.2
I0409 17:41:13.998531    5750 checks.go:838] image exists: k8s.gcr.io/etcd:3.4.3-0
I0409 17:41:14.041160    5750 checks.go:838] image exists: k8s.gcr.io/coredns:1.6.7
I0409 17:41:14.041191    5750 kubelet.go:64] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0409 17:41:14.263384    5750 certs.go:103] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [lxsptintapp02t kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.200.136.114]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0409 17:41:14.995324    5750 certs.go:103] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I0409 17:41:15.270858    5750 certs.go:103] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [lxsptintapp02t localhost] and IPs [10.200.136.114 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [lxsptintapp02t localhost] and IPs [10.200.136.114 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0409 17:41:16.131902    5750 certs.go:69] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0409 17:41:16.405389    5750 kubeconfig.go:79] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0409 17:41:16.582579    5750 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0409 17:41:16.748916    5750 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0409 17:41:16.965282    5750 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0409 17:41:17.042250    5750 manifests.go:91] [control-plane] getting StaticPodSpecs
I0409 17:41:17.042895    5750 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0409 17:41:17.042904    5750 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0409 17:41:17.042908    5750 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0409 17:41:17.051192    5750 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0409 17:41:17.051217    5750 manifests.go:91] [control-plane] getting StaticPodSpecs
W0409 17:41:17.051297    5750 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0409 17:41:17.051496    5750 manifests.go:104] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0409 17:41:17.051503    5750 manifests.go:104] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0409 17:41:17.051508    5750 manifests.go:104] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0409 17:41:17.051513    5750 manifests.go:104] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0409 17:41:17.051518    5750 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0409 17:41:17.052169    5750 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0409 17:41:17.052185    5750 manifests.go:91] [control-plane] getting StaticPodSpecs
W0409 17:41:17.052227    5750 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0409 17:41:17.052383    5750 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0409 17:41:17.052902    5750 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0409 17:41:17.053832    5750 local.go:72] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0409 17:41:17.053848    5750 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.502630 seconds
I0409 17:41:36.558006    5750 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0409 17:41:36.566975    5750 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
I0409 17:41:36.572757    5750 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
I0409 17:41:36.572775    5750 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "lxsptintapp02t" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node lxsptintapp02t as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node lxsptintapp02t as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: q0tzkt.oj44m1cpqfdwalx7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0409 17:41:37.601869    5750 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig
I0409 17:41:37.602662    5750 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0409 17:41:37.602905    5750 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0409 17:41:37.604665    5750 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0409 17:41:37.608618    5750 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0409 17:41:37.609355    5750 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
I0409 17:41:37.981320    5750 request.go:557] Throttling request took 65.198838ms, request: POST:https://10.200.136.114:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
[addons] Applied essential addon: kube-proxy


Your Kubernetes control-plane has initialized successfully!


To start using your cluster, you need to run the following as a regular user:


  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config


You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/


Then you can join any number of worker nodes by running the following on each as root:


kubeadm join 10.200.136.114:6443 --token q0tzkt.oj44m1cpqfdwalx7 \
    --discovery-token-ca-cert-hash sha256:e7058ec440e3509c6b0702836d993dec3257a565bbdd70abde098b9d9caad32d


Step 7: To start using your cluster, run the following as a regular user:

[kubeadmin@k8s-masternode ~]$ mkdir -p $HOME/.kube

[kubeadmin@k8s-masternode ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[kubeadmin@k8s-masternode ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 8: Enable and start kubelet.service (run as the root user)

[root@k8s-masternode ~]# systemctl enable kubelet.service

[root@k8s-masternode ~]# systemctl start kubelet.service

Step 9: Execute the following command to verify the node; its status should be NotReady since we haven't deployed the network yet.

[root@k8s-masternode ~]# kubectl get nodes
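Illustrative output at this stage (the hostname and age below are placeholders, not from the original run):

NAME             STATUS     ROLES    AGE   VERSION
k8s-masternode   NotReady   master   1m    v1.18.4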

Step 10: Execute the following command to configure kubectl to manage the cluster.

[root@k8s-masternode ~]# grep -q "KUBECONFIG" ~/.bashrc || {
    echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
    . ~/.bashrc
}

Deploy the Flannel network on the Master node only

Step 11: Execute the following command to deploy the Flannel network (the kube-flannel.yml downloaded in Step 8 must first be transferred to the master node):

[root@k8s-masternode ~]# kubectl apply -f kube-flannel.yml

Step 12: Execute the following command to ensure that all pods required to run the k8s cluster are in the Running state.

[root@k8s-masternode ~]# kubectl get pods --all-namespaces


Join the Worker nodes to the cluster.

Step 13: Run this on the worker nodes only. Execute the following command as the root user; it was printed in the output when we initialized the k8s cluster in Step 6. The same command can be used to join any number of worker nodes by running it as root on each node.

[root@k8s-workernode ~]# kubeadm join 10.200.136.114:6443 --token q0tzkt.oj44m1cpqfdwalx7 \
    --discovery-token-ca-cert-hash sha256:e7058ec440e3509c6b0702836d993dec3257a565bbdd70abde098b9d9caad32d

W0409 23:43:23.274182   28132 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.


Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Step 14: Enable and start kubelet.service.

[root@k8s-workernode ~]# systemctl enable kubelet.service


[root@k8s-workernode ~]# systemctl start kubelet.service

Step 15: Verify node status by running the below command on the Master node. The nodes should have Ready status, similar to the following output:

[root@k8s-masternode ~]# kubectl get nodes

NAME             STATUS    ROLES     AGE       VERSION
gcxi-doc-kube0   Ready     master    3m        v1.18.1
gcxi-doc-kube1   Ready     <none>    22s       v1.18.1


That's it, we have successfully deployed a k8s cluster. Please message me if you have any questions. Thanks!
