Installing Kubernetes 1.29 on Ubuntu 22.04


This post documents installing the latest Kubernetes 1.29 on Ubuntu 22.04, following the official guides throughout.

Install the container runtime

Docker support (dockershim) was deprecated in Kubernetes 1.20 and removed in 1.24, so later releases no longer carry a Docker dependency. containerd is used as the container runtime instead; Kubernetes talks to it through the CRI, and containerd is OCI-compliant, so in theory any CRI-compliant runtime can serve as Kubernetes' container runtime. Although dockershim is gone, containerd itself originated at Docker, and the installation instructions are on the Docker website; I use the method recommended there.
This installs the latest containerd, which matches the latest Kubernetes. If you need a specific Kubernetes version, it is better to download a matching containerd release from GitHub.

sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y containerd.io cri-tools
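
To confirm the runtime came up, and optionally to pin a specific containerd.io version instead of always taking the latest, something like the following should work (the version string is a placeholder you would copy from the madison output, not a value from this post):

containerd --version
systemctl status containerd --no-pager

# optional: list the available containerd.io versions and pin one
apt-cache madison containerd.io
sudo apt-get install -y containerd.io=<version-from-madison-output>   # placeholder version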

Install the Kubernetes components

Installing Kubernetes essentially means installing the following three tools:

  • kubeadm: the tool used to bootstrap a Kubernetes cluster; it installs the required components and initializes the cluster.
  • kubelet: the agent that runs on each cluster node; it registers with the API server, carries out the tasks the control plane assigns to the node, and manages Pods and their containers.
  • kubectl: the Kubernetes command-line client, used to issue commands against the cluster and inspect resources.

Because downloads from the official Kubernetes repository are slow here, I use the Alibaba Cloud mirror instead.

apt-get update && apt-get install -y apt-transport-https
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
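
The official kubeadm installation guide also suggests holding these packages so an unattended apt upgrade does not move them to an incompatible version:

sudo apt-mark hold kubelet kubeadm kubectl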

Verify the installation

kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.7", GitCommit:"c8dcb00be9961ec36d141d2e4103f85f92bcf291", GitTreeState:"clean", BuildDate:"2024-02-14T10:39:01Z", GoVersion:"go1.21.7", Compiler:"gc", Platform:"linux/amd64"}

Configure the Linux kernel

Disable the swap partition

sudo swapoff -a

Persist the change in /etc/fstab so it survives a reboot:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
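
To double-check that swap is actually off (a quick sanity check that is not part of the original steps):

swapon --show   # no output means swap is disabled
free -h         # the Swap line should read 0B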

Configure kernel modules for containerd

vim /etc/modules-load.d/containerd.conf

Write the following content:

overlay
br_netfilter

Load the modules:

sudo modprobe overlay
sudo modprobe br_netfilter
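
To confirm both modules are loaded (again, just a verification step added here):

lsmod | grep -E 'overlay|br_netfilter'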

Configure Kubernetes networking parameters

vim /etc/sysctl.d/kubernetes.conf

Write the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Apply the settings so they take effect (no reboot needed):

sudo sysctl --system
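
To verify the values took effect, assuming the file above was written as shown:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward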

Install the Kubernetes cluster

Generate the default kubeadm configuration:

kubeadm config print init-defaults > kubeadm.conf

Modify the following five places:

  • localAPIEndpoint.advertiseAddress: the master node's IP;
  • nodeRegistration.name: the current node's name;
  • imageRepository: a domestic mirror, registry.cn-hangzhou.aliyuncs.com/google_containers;
  • podSubnet: the Pod network CIDR. It must be 192.168.0.0/16 here because the Calico CNI manifests installed below assume that range; adjust it yourself if you use a different plugin;
  • kubernetesVersion: 1.29.2, since kubeadm reports 1.29.2 while the generated default is 1.29.0.

The modified configuration looks like this:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.28.14.94 # IP of the current host
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: ser
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # changed to a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.29.2 # match the installed kubeadm/kubelet version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 192.168.0.0/16 # fixed subnet required by the Calico CNI plugin
scheduler: {}
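
Before pulling anything, you can preview which images the config resolves to; kubeadm config images list is a standard subcommand, and this is just an optional sanity check:

kubeadm config images list --config=kubeadm.conf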

I pull the images first, one step at a time:

kubeadm config images pull --config=kubeadm.conf

An error appears:

root@syf-CN15S:~# kubeadm config images pull --config=kubeadm.conf
W0323 23:12:47.468488 54593 initconfiguration.go:312] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta3", Kind:"ClusterConfiguration"}: strict decoding error: unknown field "networking.pod-network-cidr"
failed to pull image "registry.lank8s.cn/kube-apiserver:v1.29.0": output: time="2024-03-23T23:12:47+08:00" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

This happens because the CRI tooling is not yet configured to use containerd as the runtime; the fix is documented on GitHub.

vim /etc/crictl.yaml

Add the following content:

runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 2
debug: true
pull-image-on-create: false

Modify the default containerd configuration

Since containerd has no usable configuration yet, generate the default one with:

containerd config default > /etc/containerd/config.toml

Change the sandbox image to a domestic mirror:

[plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9" # only this line needs to change

Likewise change the cgroup setting; if it stays at false, etcd will keep restarting in a loop:

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = false   # change this to true
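
If you prefer to make the two edits non-interactively, a sed sketch like the following should work against the default config.toml generated above (verify the result before restarting containerd):

sudo sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep -nE 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml   # check both edits landed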

Configure a registry mirror for containerd; otherwise pulls from the default docker.io are very slow.
Create the new directory /etc/containerd/certs.d/docker.io/ with the structure shown below; docker.io here stands for the default official Docker registry:

/etc/containerd/
├── certs.d
│   ├── docker.io
│   │   └── hosts.toml
└── config.toml

Edit hosts.toml (containerd expects this exact filename):

vim /etc/containerd/certs.d/docker.io/hosts.toml

Add the following content:

 [host."https://xxxxx.mirror.aliyuncs.com"] # docker 仓库加速地址
  capabilities = ["pull", "resolve","push"]
  skip_verify = true
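
Note that, per the containerd hosts.md documentation linked in the references, the certs.d directory is only consulted when config_path is set in config.toml; in the generated default it is empty. The registry section should look roughly like this:

[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"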

Other configuration options are covered on GitHub (see the references at the end).
Restart containerd so the configuration takes effect:
重启,让配置生效

systemctl restart containerd
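
Before rerunning the image pull, it is worth confirming that the CRI endpoint now responds; crictl version and crictl info are standard crictl subcommands, and this check is my own addition rather than part of the original steps:

sudo crictl version
sudo crictl info | grep -A2 RuntimeReady   # the RuntimeReady condition should have status true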

Rerun the pull command. Output like the following means the images were pulled successfully:

root@syf-CN15S:~# kubeadm config images pull --config=kubeadm.conf
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.29.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.29.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.29.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.29.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.11.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.10-0

Run the initialization command:

kubeadm init --config=kubeadm.conf

Seeing the following output means the cluster was set up successfully:

.... (output omitted)

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

Following the installer's hint, run these commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then run the command below to verify that Kubernetes is working:

kubectl get node

If, like me, your node's status is not Ready, something is still wrong:

NAME        STATUS     ROLES           AGE     VERSION
syf-cn15s   NotReady   control-plane   5m14s   v1.29.3

Start by checking kubelet and whether it is running normally:

systemctl status kubelet

The log shows:

3月 24 00:09:59 syf-CN15S kubelet[59351]: E0324 00:09:59.243528   59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
3月 24 00:10:04 syf-CN15S kubelet[59351]: E0324 00:10:04.244612   59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
3月 24 00:10:09 syf-CN15S kubelet[59351]: E0324 00:10:09.246425   59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
3月 24 00:10:14 syf-CN15S kubelet[59351]: E0324 00:10:14.248467   59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
3月 24 00:10:19 syf-CN15S kubelet[59351]: E0324 00:10:19.250277   59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
3月 24 00:10:24 syf-CN15S kubelet[59351]: E0324 00:10:24.251682   59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
3月 24 00:10:29 syf-CN15S kubelet[59351]: E0324 00:10:29.253199   59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
3月 24 00:10:34 syf-CN15S kubelet[59351]: E0324 00:10:34.255030   59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
3月 24 00:10:39 syf-CN15S kubelet[59351]: E0324 00:10:39.256634   59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ>
3月 24 00:10:44 syf-CN15S kubelet[59351]: E0324 00:10:44.258525   59351 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Networ

This is caused by the CNI plugin not being installed yet.

Install a CNI plugin

I use Calico as the Kubernetes CNI network plugin, following the steps from the official tutorial.
Download the YAML files first. This step matters: if the installation fails later, you can remove everything again with kubectl delete against the same files.

wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

Wait a while for Calico to finish installing, and watch the pods with:

watch kubectl get pods -n calico-system

Seeing the following means the installation is complete:

NAME                                     READY   STATUS    RESTARTS   AGE
calico-kube-controllers-df447b98-44n5j   1/1     Running   0          10m
calico-node-8jrzf                        1/1     Running   0          10m
calico-typha-78f4459ccc-xgh4h            1/1     Running   0          10m
csi-node-driver-ggjql                    2/2     Running   0          10m

Check the cluster status again:

root@syf-CN15S:~# kubectl get node
NAME        STATUS   ROLES           AGE   VERSION
syf-cn15s   Ready    control-plane   37m   v1.29.3

It is running normally now.

If you run a single-node cluster, you also need to run the following command so that the master node can act as a worker node:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
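
To confirm the taint is gone and the node will accept regular workloads, something like this works (kubectl describe node is standard; the node name is the one used throughout this post):

kubectl describe node syf-cn15s | grep -i taints   # should report <none>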

Troubleshooting approach

When I hit a problem, I first check the kubelet status and logs:

sudo systemctl status kubelet

View the detailed logs:

sudo journalctl -u kubelet.service

Once you have narrowed the problem down to a specific container, and while the cluster is not installed well enough for kubectl to be usable, you can inspect it with containerd's client tool crictl, whose usage is much like docker's.
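
To find the container ID in the first place, list all containers including exited ones (crictl ps is standard; this step is implied but not shown in the original):

sudo crictl ps -a
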
View a container's logs:

sudo crictl logs <container-id>

Exec into a container:

sudo crictl exec -it <container-id> /bin/bash


References
https://phoenixnap.com/kb/install-kubernetes-on-ubuntu
https://docs.docker.com/engine/install/ubuntu/
https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.12873639.article-detail.8.73cd4579RZH8Z4
https://github.com/containerd/containerd/blob/main/docs/hosts.md
