Setting up a k8s cluster on Ubuntu 22.04 Server


Environment preparation

Install openssh-server

sudo apt install openssh-server

Set the hostnames

# set the hostname to kmaster
hostnamectl set-hostname kmaster --static
# set the other machine's hostname to kworker1
hostnamectl set-hostname kworker1 --static
# check that the hostname took effect
hostname

Configure the IP-to-hostname mappings

sudo vim /etc/hosts

10.0.2.101 kmaster
10.0.2.102 kworker1
10.0.2.103 kworker2
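If you'd rather not open an editor, appending the entries with tee works too; a minimal sketch using the example IPs above:

cat <<EOF | sudo tee -a /etc/hosts
10.0.2.101 kmaster
10.0.2.102 kworker1
10.0.2.103 kworker2
EOF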

Disable the firewall

sudo systemctl stop ufw.service
sudo systemctl disable ufw.service   # disable at boot
sudo systemctl status ufw.service
sudo ufw disable


Network test

Ping each machine from the others to confirm they can reach one another.
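For example, from kmaster (assuming the /etc/hosts entries above are in place):

ping -c 3 kworker1
ping -c 3 kworker2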

Disable swap

systemctl stop swap.target
systemctl status swap.target
systemctl disable swap.target   # disable at boot
systemctl stop swap.img.swap
systemctl status swap.img.swap


To disable the swap partition permanently, run sudo vim /etc/fstab and comment out the line containing swap.
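A non-interactive alternative (a sketch, assuming the swap entry in /etc/fstab contains the word "swap"):

sudo swapoff -a                          # turn off all swap immediately
sudo sed -i '/swap/ s/^/#/' /etc/fstab   # comment out the swap line; double-check the file afterwards
free -h                                  # the Swap row should now show 0B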

Adjust kernel parameters

apt install -y ipvsadm
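Note that installing ipvsadm alone does not switch kube-proxy to IPVS mode; the IPVS kernel modules also have to be loaded. A sketch of loading the commonly needed modules:

sudo tee /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# load them now without a reboot
sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack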

Load the following kernel modules:

sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

Configure the following network parameters:

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF


Run the following command to apply the changes:

sudo sysctl --system

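You can spot-check that the settings are active:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# each should print its name followed by "= 1"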

Install containerd

Install dependencies

sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates


Add the Docker repo

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"


Install the containerd package

sudo apt update
sudo apt install -y containerd.io


Configure containerd to use systemd as the cgroup driver

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

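To confirm the edit took effect, grep the config; the flag should now read true:

grep SystemdCgroup /etc/containerd/config.toml
# expected output: SystemdCgroup = true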

Restart containerd and enable it at boot

sudo systemctl restart containerd
sudo systemctl enable containerd


Install the Kubernetes components

Add the apt repo. (Note: the legacy apt.kubernetes.io / packages.cloud.google.com repositories have since been deprecated in favor of pkgs.k8s.io, so these commands may need adjusting on newer setups.)

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"


Alternatively, use the Alibaba Cloud Kubernetes mirror:

apt install -y curl
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt update


Install kubectl, kubeadm & kubelet

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl


apt-mark is used to mark or unmark packages as automatically installed. The hold option marks a package as held, preventing it from being automatically installed, upgraded, or removed. Here it is used mainly to keep kubelet and the other components from being upgraded automatically.
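You can verify the holds, and release them later when you do want to upgrade:

apt-mark showhold                              # should list kubeadm, kubectl, kubelet
sudo apt-mark unhold kubelet kubeadm kubectl   # only when an upgrade is intended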

Initialize the master node

This step is done on the master node; run the following command to initialize the whole k8s cluster.

sudo kubeadm init --control-plane-endpoint=10.0.2.101


When you see output like the following, the master node has been initialized successfully.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.0.2.101:6443 --token kc2iuw.8c2kqbqvqxtainsc \
	--discovery-token-ca-cert-hash sha256:711735efd1985ea74444bc08c614ba4c3906f8c9d501564bd9813853bfe4fbbe \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.101:6443 --token kc2iuw.8c2kqbqvqxtainsc \
	--discovery-token-ca-cert-hash sha256:711735efd1985ea74444bc08c614ba4c3906f8c9d501564bd9813853bfe4fbbe

Next, follow the instructions in the output to finish the setup:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Or, if running as root, set KUBECONFIG globally instead:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

After completing the steps above, you can check the current cluster status with:

kubectl cluster-info
kubectl get nodes

----
root@kmaster:~# kubectl get nodes
NAME      STATUS     ROLES           AGE   VERSION
kmaster   NotReady   control-plane   19m   v1.28.2


Join the worker nodes

At the bottom of the master node's initialization output is the join command for worker nodes; copy it and run it on each worker. The following is just an example, replace it with your actual command:

kubeadm join 10.0.2.101:6443 --token kc2iuw.8c2kqbqvqxtainsc \
	--discovery-token-ca-cert-hash sha256:711735efd1985ea74444bc08c614ba4c3906f8c9d501564bd9813853bfe4fbbe
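The token in that output expires after 24 hours by default; if it has expired, generate a fresh join command on the master:

kubeadm token create --print-join-command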

Run the following on each worker node:

mkdir -p ~/.kube
cp /etc/kubernetes/kubelet.conf ~/.kube/config

After the workers join, you can check the node status again. Since no network plugin has been installed yet, all nodes are in the NotReady state; the next step installs one.

kubectl get nodes

kmaster    NotReady   control-plane   23m   v1.28.2
kworker1   NotReady   <none>          36s   v1.28.2
kworker2   NotReady   <none>          7s    v1.28.2

Configure the cluster network

wget https://calico-v3-25.netlify.app/archive/v3.25/manifests/calico.yaml
kubectl apply -f calico.yaml

The output looks like this:

poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

Now check that all the base components in the kube-system namespace are running: kubectl get pods -n kube-system

root@kmaster:~# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-658d97c59c-fvg5t   1/1     Running   0          13m
calico-node-54wpj                          1/1     Running   0          13m
calico-node-qhgw5                          1/1     Running   0          13m
calico-node-vhj4f                          1/1     Running   0          13m
coredns-5dd5756b68-csrqp                   1/1     Running   0          58m
coredns-5dd5756b68-lskrl                   1/1     Running   0          58m
etcd-kmaster                               1/1     Running   0          58m
kube-apiserver-kmaster                     1/1     Running   0          58m
kube-controller-manager-kmaster            1/1     Running   0          58m
kube-proxy-f4gh4                           1/1     Running   0          58m
kube-proxy-lrr56                           1/1     Running   0          34m
kube-proxy-rnctl                           1/1     Running   0          35m
kube-scheduler-kmaster                     1/1     Running   0          58m

Once all pods are ready, check the node status with kubectl get nodes

root@kmaster:~# kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
kmaster    Ready    control-plane   54m   v1.28.2
kworker1   Ready    <none>          31m   v1.28.2
kworker2   Ready    <none>          30m   v1.28.2

At this point, the whole k8s cluster is up.

Testing

Deploy an nginx app to test the cluster:

kubectl create deployment nginx-app --image=nginx --replicas=2

Check the nginx deployment status:

kubectl get deployment nginx-app
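You can also confirm the two replicas were scheduled onto the workers (kubectl create deployment labels the pods with app=nginx-app):

kubectl get pods -l app=nginx-app -o wide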

Expose the deployment as a NodePort service (this opens the same port on every node, so the service can be reached externally via any node's IP plus that port):

kubectl expose deployment nginx-app --type=NodePort --port=80
root@kmaster:~# kubectl get svc nginx-app
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-app   NodePort   10.104.15.155   <none>        80:32732/TCP   10s
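The node port (32732 here) is assigned randomly from the NodePort range; to read it programmatically:

kubectl get svc nginx-app -o jsonpath='{.spec.ports[0].nodePort}'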

Access test:

curl http://kmaster:32732
curl http://kworker1:32732
curl http://kworker2:32732


Copyright: 那棵树看起来生气了

Original link: https://dengyb.com/archives/149.html (please credit the source and include this link when reposting)