
[k8s] Setting Up a Kubernetes Cluster on Ubuntu 22.04

by ganyga 2024. 2. 28.

Lab Environment

  • Mac M1 - VMware Fusion Player
  • CPU/Memory/Disk are fixed when the VM is created, so if anything may need to change later, it is best to size the disk generously
  • If disk space runs out, disk LVM expansion will be covered in a later post
Node Name | OS           | CPU | Memory | Disk | NIC IP       | Admin Account
master    | Ubuntu 22.04 | 2   | 4GB    | 20GB | 172.16.133.4 | root/qwe123
worker1   | Ubuntu 22.04 | 2   | 4GB    | 20GB | 172.16.133.5 | root/qwe123

Ubuntu 22.04 Basic Setup

Allowing root login over SSH

sudo passwd root
qwe123 # set the root password

vi /etc/ssh/sshd_config
PermitRootLogin yes # uncomment and change this (line 33)

systemctl restart sshd # apply the change

Setting the hostname and editing /etc/hosts

# change the hostname
hostnamectl hostname master

# apply (start a new shell so the change shows up in the prompt)
su

# verify
root@master:~# hostname
master

root@master:~# hostnamectl status
 Static hostname: master
       Icon name: computer-vm
         Chassis: vm
      Machine ID: 7a82c369cd7d4cf796d1e6ccc2069e34
         Boot ID: 6a913b6af6d74b33923106df27f46713
  Virtualization: vmware
Operating System: Ubuntu 22.04.4 LTS
          Kernel: Linux 5.15.0-97-generic
    Architecture: arm64
 Hardware Vendor: VMware, Inc.
  Hardware Model: VMware20,1
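Do the same on the worker VM, using the worker1 name that appears in the command outputs later in this post:

# run on the worker VM
hostnamectl hostname worker1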

# add master and worker entries to the hosts file on each node
root@master:~# vi /etc/hosts

root@master:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 master # added automatically; on the worker this line maps its own hostname to loopback

(snip)

172.16.133.4 master
172.16.133.5 worker1

Verifying SSH access as root

ssh root@172.16.133.4

 

Installing network tools and setting up the NTP server

# install network utilities
root@master:~# apt install net-tools

# NTP server setup
root@master:~# apt update
root@master:~# apt upgrade

root@master:~# apt install ntp

root@master:~# systemctl enable ntp
Synchronizing state of ntp.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable ntp

root@master:~# systemctl status ntp
● ntp.service - Network Time Service
     Loaded: loaded (/lib/systemd/system/ntp.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-02-28 04:14:59 UTC; 41s ago
       Docs: man:ntpd(8)
   Main PID: 20049 (ntpd)
      Tasks: 2 (limit: 4524)
     Memory: 1.3M
        CPU: 31ms
     CGroup: /system.slice/ntp.service
             └─20049 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 113:119
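
Optionally, ntpq (installed together with the ntp package) can confirm the daemon is actually syncing with upstream peers; the peer list shown will vary by environment:

# list NTP peers and their sync state
root@master:~# ntpq -p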

Installing Docker

https://docs.docker.com/engine/install/ubuntu/

 


Docker Engine depends on containerd, since containerd is Docker's container runtime.

# remove previously installed packages that could conflict with Docker Engine
root@master:~# for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

# set up Docker's apt repository
# add Docker's official GPG key
root@master:~# apt-get update
root@master:~# apt-get install ca-certificates curl
root@master:~# install -m 0755 -d /etc/apt/keyrings
root@master:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
root@master:~# chmod a+r /etc/apt/keyrings/docker.asc

# add the repository to apt sources
root@master:~# echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# verify
root@master:~# cat /etc/apt/sources.list.d/docker.list
deb [arch=arm64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu   jammy stable

root@master:~# apt-get update

# install the Docker packages
root@master:~# apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# check the Docker version
root@master:~# docker version
Client: Docker Engine - Community
 Version:           25.0.3
 API version:       1.44
 Go version:        go1.21.6
 Git commit:        4debf41
 Built:             Tue Feb  6 21:13:11 2024
 OS/Arch:           linux/arm64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          25.0.3
  API version:      1.44 (minimum version 1.24)
  Go version:       go1.21.6
  Git commit:       f417435
  Built:            Tue Feb  6 21:13:11 2024
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.6.28
  GitCommit:        ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
  
  
root@master:~# systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-02-28 04:26:27 UTC; 1min 15s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 21508 (dockerd)
      Tasks: 8
     Memory: 26.9M
        CPU: 205ms
     CGroup: /system.slice/docker.service
             └─21508 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
             
root@master:~# systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
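
As an optional end-to-end sanity check (the same smoke test the Docker docs suggest), run the hello-world image:

# pulls and runs a tiny test container, then exits
root@master:~# docker run hello-world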

 


Installing Kubernetes

https://kubernetes.io/docs/setup/production-environment/container-runtimes/

 


Installing and configuring the prerequisites

# forward IPv4 and let iptables see bridged traffic
root@master:~# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

root@master:~# modprobe overlay
root@master:~# modprobe br_netfilter

# verify
root@master:~# lsmod | grep overlay
overlay               155648  0
root@master:~# lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                352256  1 br_netfilter


# set the required sysctl parameters; these persist across reboots
root@master:~# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# verify
root@master:~# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1


# apply the sysctl parameters without rebooting
root@master:~# sysctl --system

# disable swap
root@master:~# swapoff -a
root@master:~# free
               total        used        free      shared  buff/cache   available
Mem:         4005380      293932     1993924        1352     1717524     3531088
Swap:              0           0           0
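
Note that swapoff -a only lasts until the next reboot. To keep swap off permanently, the swap entry in /etc/fstab also has to be disabled; one common approach (check your fstab first, since this blindly comments out every line containing " swap ") is:

# comment out swap entries so swap stays off across reboots
root@master:~# sed -i '/ swap / s/^/#/' /etc/fstab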

 

Container Runtime Setup - containerd

  • Docker Engine depends on containerd, since containerd is Docker's container runtime
  • For Kubernetes to use containerd, CRI support must be enabled; the CRI integration plugin is disabled by default, so it has to be turned on
  • That means cri must not appear in the disabled_plugins list in /etc/containerd/config.toml
  • If you change that file, containerd must be restarted
# default config.toml; note that cri is listed in disabled_plugins

root@master:~# cat /etc/containerd/config.toml
#   Copyright 2018-2022 Docker Inc.

#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at

#       http://www.apache.org/licenses/LICENSE-2.0

#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.

disabled_plugins = ["cri"]

#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0

#[grpc]
#  address = "/run/containerd/containerd.sock"
#  uid = 0
#  gid = 0

#[debug]
#  address = "/run/containerd/debug.sock"
#  uid = 0
#  gid = 0
#  level = "info"

# generate the containerd default configuration
root@master:~# containerd config default | tee /etc/containerd/config.toml

 

Configuring the containerd systemd cgroup driver

  • On Linux, control groups are used to constrain the resources allocated to processes
  • Both the kubelet and the container runtime it works with need to interact with control groups in order to
    • manage Pod and container resources
    • set requests and limits for resources such as CPU and memory
  • To interact with control groups, the kubelet and the container runtime must each use a cgroup driver
  • Kubernetes supports two cgroup drivers: cgroupfs and systemd
  • The kubelet and the container runtime must use the same cgroup driver, configured identically
  • kubelet and kubeadm default to systemd, so set containerd's cgroup driver to systemd as well
# set runc in /etc/containerd/config.toml to use the systemd cgroup driver
root@master:~# vi /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
... (snip)
SystemdCgroup = true

SystemdCgroup appears in other sections of the file as well, so make sure you change it under plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options specifically!
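
If you prefer a non-interactive edit, grep first to see where the key appears; in the config generated by containerd config default it typically occurs only once, under the runc options block, in which case a simple sed is safe:

# locate the key, flip it, then confirm
root@master:~# grep -n SystemdCgroup /etc/containerd/config.toml
root@master:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
root@master:~# grep SystemdCgroup /etc/containerd/config.toml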

 

Applying the containerd configuration changes (/etc/containerd/config.toml)

root@master:~# systemctl restart containerd
root@master:~# systemctl enable containerd

root@master:~# systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-02-28 05:50:56 UTC; 4s ago
       Docs: https://containerd.io
    Process: 21940 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 21941 (containerd)
      Tasks: 8
     Memory: 11.7M
        CPU: 65ms
     CGroup: /system.slice/containerd.service
             └─21941 /usr/bin/containerd

Feb 28 05:50:56 master containerd[21941]: time="2024-02-28T05:50:56.827292159Z" level=info msg="Start subscribing containerd event"
Feb 28 05:50:56 master containerd[21941]: time="2024-02-28T05:50:56.827344393Z" level=info msg="Start recovering state"
Feb 28 05:50:56 master containerd[21941]: time="2024-02-28T05:50:56.827435771Z" level=info msg="Start event monitor"
Feb 28 05:50:56 master containerd[21941]: time="2024-02-28T05:50:56.827464952Z" level=info msg="Start snapshots syncer"
Feb 28 05:50:56 master containerd[21941]: time="2024-02-28T05:50:56.827476166Z" level=info msg="Start cni network conf syncer for default"
Feb 28 05:50:56 master containerd[21941]: time="2024-02-28T05:50:56.827482085Z" level=info msg="Start streaming server"
Feb 28 05:50:56 master containerd[21941]: time="2024-02-28T05:50:56.828062287Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 28 05:50:56 master containerd[21941]: time="2024-02-28T05:50:56.828116313Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 28 05:50:56 master containerd[21941]: time="2024-02-28T05:50:56.828164962Z" level=info msg="containerd successfully booted in 0.022052s"
Feb 28 05:50:56 master systemd[1]: Started containerd container runtime.

 

Installing kubeadm, kubelet, kubectl

  • kubeadm: the command that bootstraps the cluster
  • kubelet: the component that runs on every machine in the cluster and handles tasks such as starting Pods and containers
  • kubectl: the command-line utility for talking to the cluster

https://kubernetes.io/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

 


 

# install the packages needed to use the Kubernetes apt repository
# (note: the legacy apt.kubernetes.io repository used below has since been deprecated in favor of pkgs.k8s.io)
root@master:~# apt-get update
root@master:~# apt-get install -y apt-transport-https ca-certificates curl

# download the Kubernetes GPG key
root@master:~# curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg  https://dl.k8s.io/apt/doc/apt-key.gpg

root@master:~# echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main

# verify
root@master:~# cat /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main

# update apt
root@master:~# apt-get update
Hit:1 https://download.docker.com/linux/ubuntu jammy InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports jammy InRelease
Get:4 http://ports.ubuntu.com/ubuntu-ports jammy-updates InRelease [119 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 Packages [68.5 kB]
Hit:6 http://ports.ubuntu.com/ubuntu-ports jammy-backports InRelease
Get:7 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease [110 kB]
Fetched 298 kB in 3s (110 kB/s)
Reading package lists... Done

 

Checking the installable kubeadm, kubelet, kubectl versions, then installing a specific version (1.28.0-00)

root@master:~# apt-cache policy kubeadm
kubeadm:
  Installed: (none)
  Candidate: 1.28.2-00
  Version table:
     1.28.2-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.28.1-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.28.0-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.27.6-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages        
(snip)

root@master:~# apt-cache policy kubelet
kubelet:
  Installed: (none)
  Candidate: 1.28.2-00
  Version table:
     1.28.2-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.28.1-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.28.0-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.27.6-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.27.5-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
(snip)
 
root@master:~# apt-cache policy kubectl
kubectl:
  Installed: (none)
  Candidate: 1.28.2-00
  Version table:
     1.28.2-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.28.1-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.28.0-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.27.6-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.27.5-00 500
        500 https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
     1.27.4-00 500
(snip)
 
# install version 1.28.0-00
root@master:~# apt -y install kubelet=1.28.0-00 kubeadm=1.28.0-00 kubectl=1.28.0-00

# pin (hold) the packages at this version
root@master:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.

root@master:~# systemctl daemon-reload
root@master:~# systemctl enable --now kubelet
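
A quick client-side check that the pinned 1.28.0 versions are what actually got installed:

root@master:~# kubeadm version -o short
root@master:~# kubelet --version
root@master:~# kubectl version --client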

 

kubelet stays in activating (auto-restart) instead of reaching the active state

root@master:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Wed 2024-02-28 07:58:57 UTC; 2s ago
       Docs: https://kubernetes.io/docs/home/
    Process: 24450 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/>
   Main PID: 24450 (code=exited, status=1/FAILURE)
        CPU: 60ms

Feb 28 07:58:57 master systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 28 07:58:57 master systemd[1]: kubelet.service: Failed with result 'exit-code'.

 

Viewing the logs with journalctl

  • The error says that /var/lib/kubelet/config.yaml does not exist
  • This is expected: /var/lib/kubelet/config.yaml is only generated after kubeadm init!
root@master:~# journalctl -exu kubelet -n 10

Feb 28 08:01:51 master kubelet[24573]: E0228 08:01:51.289090   24573 run.go:74] 
"command failed" err="failed to load kubelet config file, 
path: /var/lib/kubelet/config.yaml, 
error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, 
error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", 
error: open /var/lib/kubelet/config.yaml: no such file or directory"

Feb 28 08:01:51 master systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE

 

Apply everything up to this point identically on both master and worker. This would be a good place for a snapshot, but VMware Fusion Player does not provide a snapshot feature.


Kubernetes Bootstrapping (kubeadm) Setup

https://v1-28.docs.kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/

 


kubeadm init

# pod-network-cidr is the CIDR range you want Pods to use; apiserver-advertise-address is the master node's IP
root@master:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.16.133.4

I0228 08:38:39.354789   25974 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0228 08:39:18.671099   25974 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 172.16.133.4]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [172.16.133.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [172.16.133.4 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502358 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

[bootstrap-token] Using token: 41p2t2.ae5patchsi4y7oo3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.133.4:6443 --token 41p2t2.ae5patchsi4y7oo3 \
	--discovery-token-ca-cert-hash sha256:97d1755c549fa30c867cbfb178fdc515f2e727a5804e503b8b5ca70a289c712d

 

Running the commands needed to use the Kubernetes cluster

root@master:~# mkdir -p $HOME/.kube
root@master:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@master:~# export KUBECONFIG=/etc/kubernetes/admin.conf
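
Note that the export only lasts for the current shell session; to keep it across logins for root, one option is to append it to the shell profile:

root@master:~# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc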

kubeadm join (joining the cluster)

# run on the worker1 node that should join the cluster
root@worker1:~# kubeadm join 172.16.133.4:6443 --token 41p2t2.ae5patchsi4y7oo3 \
        --discovery-token-ca-cert-hash sha256:97d1755c549fa30c867cbfb178fdc515f2e727a5804e503b8b5ca70a289c712d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
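
The bootstrap token printed by kubeadm init expires after 24 hours by default. If you add another worker later, generate a fresh join command on the master:

root@master:~# kubeadm token create --print-join-command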

 

Checking on the master that the node has joined the cluster

The nodes' STATUS is NotReady because no CNI (Container Network Interface) plugin has been installed yet.

root@master:~# kubectl get nodes -o wide
NAME      STATUS     ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master    NotReady   control-plane   8m5s    v1.28.0   172.16.133.4   <none>        Ubuntu 22.04.4 LTS   5.15.0-97-generic   containerd://1.6.28
worker1   NotReady   <none>          2m40s   v1.28.0   172.16.133.5   <none>        Ubuntu 22.04.4 LTS   5.15.0-97-generic   containerd://1.6.28

Configuring the CNI (Container Network Interface)

Installing Cilium

https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/

 


root@master:~# CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
root@master:~# CLI_ARCH=amd64 

# on non-amd64 machines (aarch64), switch to arm64
root@master:~# if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi

root@master:~# curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 36.2M  100 36.2M    0     0  11.3M      0  0:00:03  0:00:03 --:--:-- 19.1M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    92  100    92    0     0    140      0 --:--:-- --:--:-- --:--:--   140

root@master:~# sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
cilium-linux-arm64.tar.gz: OK

root@master:~# tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
cilium

root@master:~# rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}


# install cilium
root@master:~# cilium install --version 1.15.1
ℹ️  Using Cilium version 1.15.1
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has been installed

# watch the cilium pods go from Pending to Running; this takes a while, so give it about 5 minutes
# CoreDNS also changes from Pending to Running
root@master:~# watch kubectl get pods -A
Every 2.0s: kubectl get pods -A                                                                                               master: Wed Feb 28 08:59:26 2024

NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   cilium-56hx2                      1/1     Running   0          7m9s
kube-system   cilium-7fzbc                      1/1     Running   0          7m9s
kube-system   cilium-operator-dd95cc587-v5j4s   1/1     Running   0          7m9s
kube-system   coredns-5dd5756b68-46p7j          1/1     Running   0          19m
kube-system   coredns-5dd5756b68-6sqp4          1/1     Running   0          19m
kube-system   etcd-master                       1/1     Running   0          19m
kube-system   kube-apiserver-master             1/1     Running   0          19m
kube-system   kube-controller-manager-master    1/1     Running   0          19m
kube-system   kube-proxy-68qn6                  1/1     Running   0          14m
kube-system   kube-proxy-nv5dq                  1/1     Running   0          19m
kube-system   kube-scheduler-master             1/1     Running   0          19m
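
The cilium CLI can also report deployment health directly; with --wait it blocks until every component is ready:

root@master:~# cilium status --wait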

 

After the CNI installation completes, confirm the node STATUS changes from NotReady to Ready

root@master:~# kubectl get nodes
NAME      STATUS   ROLES           AGE   VERSION
master    Ready    control-plane   21m   v1.28.0
worker1   Ready    <none>          16m   v1.28.0

root@master:~# kubectl get nodes -o wide
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master    Ready    control-plane   22m   v1.28.0   172.16.133.4   <none>        Ubuntu 22.04.4 LTS   5.15.0-97-generic   containerd://1.6.28
worker1   Ready    <none>          16m   v1.28.0   172.16.133.5   <none>        Ubuntu 22.04.4 LTS   5.15.0-97-generic   containerd://1.6.28

Enabling kubectl autocompletion on Linux

https://kubernetes.io/ko/docs/tasks/tools/included/optional-kubectl-configs-bash-linux/

 


root@master:~# echo 'source <(kubectl completion bash)' >>~/.bashrc
root@master:~# echo 'alias k=kubectl' >>~/.bashrc
root@master:~# echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
root@master:~# exec bash
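
After exec bash reloads the shell, the alias and completion are active:

# tab completion now works for both kubectl and k
root@master:~# k get nodes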