Setting up Kubernetes 1.28 on Ubuntu 20.04/22.04 (Alibaba Cloud Hong Kong, public network, single node)
This guide covers host preparation, installing containerd, installing kubeadm/kubelet/kubectl with apt, initializing the master control plane, joining worker nodes, deploying the CNI network, and testing the cluster.
Edited 2024-06-19 16:28:16
Host preparation: perform the following steps as appropriate for your environment
1. Disable the firewall
# Check the current ufw status: "inactive" means the firewall is off, "active" means it is on
ufw status
# Disable the firewall
ufw disable
2. Disable SELinux
# Ubuntu does not install SELinux by default; if the selinux command and config file are missing, SELinux is not installed and this step can be skipped
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
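Since Ubuntu normally ships without SELinux, the sed command above can be wrapped in a guard so it becomes a no-op when the config file is absent (a minimal sketch; the SELINUX_CFG override is only for illustration):

```shell
# Only touch the SELinux config if it actually exists (on Ubuntu it usually does not).
selinux_cfg="${SELINUX_CFG:-/etc/selinux/config}"
if [ -f "$selinux_cfg" ]; then
    sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' "$selinux_cfg"
    echo "SELinux disabled in $selinux_cfg"
else
    echo "SELinux not installed, nothing to do"
fi
```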
3. Disable the swap partition (required; the Kubernetes documentation demands it)
# Note: ideally, do not create a swap partition when installing the VM in the first place
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
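The sed pattern above comments out every /etc/fstab line that mentions swap. You can dry-run it on a throwaway copy first (the demo file and its contents below are illustrative):

```shell
# Dry-run the swap-commenting sed on a synthetic fstab copy before touching the real file.
printf '/dev/sda2 none swap sw 0 0\nUUID=abcd / ext4 defaults 0 1\n' > /tmp/fstab.demo
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
# The swap line is now commented out; the ext4 root line is untouched.
```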
4. Set hostnames
Set one hostname for each host you will use; for a 3-machine setup:
cat >> /etc/hosts <<EOF
192.168.1.2 master
192.168.1.3 node1
192.168.1.4 node2
EOF
# Then run the matching command on each machine:
hostnamectl set-hostname master && bash
hostnamectl set-hostname node1 && bash
hostnamectl set-hostname node2 && bash
5. Synchronize time
# Check the time zone and current time
date
# If the time zone is wrong, set it to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Install chrony to sync time over the network (on Ubuntu the service unit is named chrony)
apt install chrony -y && systemctl enable --now chrony
6. Pass bridged IPv4 traffic to the iptables chains
# Bridged traffic bypasses iptables unless the kernel's bridge netfilter hands it over, so kube-proxy's rules would miss it. Create /etc/sysctl.d/k8s.conf (the file does not exist by default). Note the net.bridge.* keys only take effect once the br_netfilter module is loaded; the containerd step below loads it.
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system
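After running sysctl --system, it is worth sanity-checking that the expected keys actually made it into the conf file. A sketch, demonstrated on a demo copy under /tmp (point conf at /etc/sysctl.d/k8s.conf on the real host):

```shell
# Verify the required sysctl keys are present in the conf file (demo copy).
conf=/tmp/k8s.conf.demo
cat > "$conf" <<EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward; do
    grep -q "^${key}=1" "$conf" && echo "${key} ok"
done
```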
7. Set up passwordless SSH between the servers (each of the 3 machines to the others)
Skip this if you only have one machine
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.1.3
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.1.4
ssh node1
ssh node2
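The key generation can also be done non-interactively. A sketch that creates a throwaway key pair under a temp directory (the path is illustrative; on the real hosts, ssh-copy-id expects the default /root/.ssh/id_rsa):

```shell
# Generate an RSA key pair non-interactively into a temp dir (demo only).
dir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$dir/id_rsa" -q
ls "$dir"
# id_rsa (private key) and id_rsa.pub (public key) are now present.
```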
Step 1. Install containerd
# Download and unpack containerd
wget -c https://github.com/containerd/containerd/releases/download/v1.6.8/containerd-1.6.8-linux-amd64.tar.gz
tar -xzvf containerd-1.6.8-linux-amd64.tar.gz
# This extracts a bin directory containing the containerd executables
mv bin/* /usr/local/bin/
# Manage containerd with systemd
wget -P /usr/lib/systemd/system/ https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
# If wget fails, open the URL in a browser, copy the unit file contents, and create the file manually
vim /usr/lib/systemd/system/containerd.service
systemctl daemon-reload && systemctl enable --now containerd
systemctl status containerd
# Install runc
# runc is the low-level container runtime; it implements the commands needed to run containers (init, run, create, ps, ...)
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.1/runc.amd64 && \
    install -m 755 runc.amd64 /usr/local/sbin/runc
# Install the CNI plugins
wget -c https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
# Following the official install steps, create a directory for the CNI plugins
mkdir -p /opt/cni/bin
tar -xzvf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/
# Modify the containerd configuration, because containerd pulls images from the upstream Kubernetes registry by default
# Create a directory for containerd's configuration file
mkdir -p /etc/containerd
# Dump the default containerd configuration to a file
containerd config default | sudo tee /etc/containerd/config.toml
# Edit the configuration file
vim /etc/containerd/config.toml
# Search for sandbox_image and change the original k8s.gcr.io/pause:3.6 to "registry.k8s.io/pause:3.9"
sandbox_image = "registry.k8s.io/pause:3.9"
# Search for SystemdCgroup and change false to true
SystemdCgroup = true
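The two vim edits above can also be scripted with sed. A sketch, demonstrated on a synthetic copy so the real file stays untouched (run the same sed lines against /etc/containerd/config.toml on the real host and inspect the result before restarting containerd):

```shell
# Demo: apply the sandbox_image and SystemdCgroup edits with sed on a copy.
demo=/tmp/config.toml.demo
printf 'sandbox_image = "k8s.gcr.io/pause:3.6"\nSystemdCgroup = false\n' > "$demo"
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.k8s.io/pause:3.9"#' "$demo"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$demo"
grep -E 'sandbox_image|SystemdCgroup' "$demo"
```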
# Load the kernel modules containerd needs
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Restart containerd
systemctl restart containerd
systemctl status containerd
Step 2. Configure the Alibaba Cloud apt source for Kubernetes (run on all node servers)
# Switch to the Aliyun Ubuntu mirror, shown here for Ubuntu 20.04 (focal); for 22.04 replace focal with jammy (optional)
cp /etc/apt/sources.list /etc/apt/sources.list.backup
cat > /etc/apt/sources.list <<EOF
deb https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
EOF
apt update
apt install apt-transport-https ca-certificates -y
apt install vim lsof net-tools zip unzip tree wget curl bash-completion pciutils gcc make lrzsz tcpdump bind9-utils -y
# Append the Alibaba Cloud Kubernetes repo to the end of the apt sources list
echo 'deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main' >> /etc/apt/sources.list
# Add the repository signing key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Refresh the package lists
apt update
Step 3. Install kubeadm, kubelet, and kubectl with apt (run on all nodes)
# Install kubeadm, kubelet, and kubectl on all 3 machines
# List the kubeadm versions available from apt; here we install 1.28.0 (omitting the version installs the latest)
apt-cache madison kubeadm
#在所有节点上安装kubeadm、kubelet、kubectl
apt install -y kubelet=1.28.0-00 kubeadm=1.28.0-00 kubectl=1.28.0-00
Next, mark the packages as held back so apt will not upgrade them automatically
apt-mark hold kubelet kubeadm kubectl
# Enable kubelet at boot (do not start it yet; it cannot run until kubeadm init brings it up while initializing the master)
systemctl enable kubelet
Step 4. Initialize the master node control plane
# List the required images; you can pull them in advance
kubeadm config images list --kubernetes-version=v1.28.0
kubeadm config images pull --kubernetes-version=v1.28.0
# Pulled images are stored in containerd's k8s.io namespace by default
# kubeadm init --help shows detailed parameter usage
# Generate a default configuration file to edit:
kubeadm config print init-defaults > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
vim kubeadm-config.yaml
# 10.76.139.53 is the server's public IP
# Without the public IP in the certificate SANs, external UI tools such as Lens cannot connect
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.2
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
---
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
  certSANs:
  - 192.168.1.2
  - 10.76.139.53
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
    serverCertSANs:
    - 192.168.1.2
    - 10.76.139.53
    peerCertSANs:
    - 192.168.1.2
    - 10.76.139.53
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
kubeadm init --config kubeadm-config.yaml
# If preflight reports errors you know are safe to ignore, append:
--ignore-preflight-errors=all
# When kubeadm init succeeds, it prints the following:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.2:6443 --token nrefdp.2mtcwkkshizkj1qa \
    --discovery-token-ca-cert-hash sha256:564dbb8ec1993f3e38f3b757c324ad6190950156f30f89f7f7d4b244d2b29ec7
# Copy and run the commands from the printed instructions:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Or, if working as root:
export KUBECONFIG=/etc/kubernetes/admin.conf
Step 5. Join the node(s) to the k8s cluster
After the master is initialized in Step 4, kubeadm prints the command to run on each node to join it to the cluster; copy it to the node(s) and execute it.
Note: the token in the kubeadm join command is only valid for 24 hours; once it expires, run kubeadm token create --print-join-command to generate a new one.
kubeadm token create --print-join-command
# Run on the node(s)
kubeadm join 192.168.1.2:6443 --token nrefdp.2mtcwkkshizkj1qa \
    --discovery-token-ca-cert-hash sha256:564dbb8ec1993f3e38f3b757c324ad6190950156f30f89f7f7d4b244d2b29ec7
# A regenerated join command carries a new token, e.g.:
kubeadm join 192.168.1.2:6443 --token dy8xgo.ab6bp2gbhkgapzpr \
    --discovery-token-ca-cert-hash sha256:e9b4645904143b59bdf220ef3d832deff1fc0ede8dfde72b2ee224725ceff08f
Step 6. Deploy the container network (CNI plugin)
# Download the Calico manifest
wget https://docs.projectcalico.org/manifests/calico.yaml
# If the download fails, append "199.232.68.133 raw.githubusercontent.com" to the end of /etc/hosts
# Edit the file, find the two lines below, uncomment them, and set the value to the pod CIDR you configured
vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# Apply the manifest (assuming the images pull without problems)
kubectl apply -f calico.yaml
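The CALICO_IPV4POOL_CIDR value must match the podSubnet set in kubeadm-config.yaml. A quick consistency check, sketched on demo files (substitute the real kubeadm-config.yaml and calico.yaml paths):

```shell
# Demo: confirm the Calico pool CIDR matches the kubeadm pod subnet.
printf '  podSubnet: 10.244.0.0/16\n' > /tmp/kubeadm-config.demo
printf '  value: "10.244.0.0/16"\n' > /tmp/calico.demo
pod_cidr=$(awk '/podSubnet/ {print $2}' /tmp/kubeadm-config.demo)
grep -q "$pod_cidr" /tmp/calico.demo && echo "CIDR match: $pod_cidr"
```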
Step 7. Configure kubectl command auto-completion
# Point crictl at containerd as the container runtime
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
# Configure kubectl auto-completion; doing this on the master node is enough
apt install -y bash-completion
echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc
kubectl get pods -n kube-system
After about two minutes, once the Calico pods come up for the first time, the output looks like this:
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-658d97c59c-5kr2x   1/1     Running   0          42s
calico-node-z8h6b                          1/1     Running   0          43s
coredns-5dd5756b68-hch9c                   1/1     Running   0          4h3m
coredns-5dd5756b68-znlth                   1/1     Running   0          4h3m
etcd-master                                1/1     Running   17         4h3m
kube-apiserver-master                      1/1     Running   3          4h3m
kube-controller-manager-master             1/1     Running   27         4h3m
kube-proxy-vzfd8                           1/1     Running   0          4h3m
kube-scheduler-master                      1/1     Running   27         4h3m
At this point the basic Kubernetes 1.28 setup is complete.
If there is a problem, use kubectl describe pod <podname> -n <namespace> to find out what is wrong with the pod.
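Instead of eyeballing the table, the health check can be scripted by counting pods whose STATUS column is not Running. A sketch on captured output (on a live cluster, pipe kubectl get pods -n kube-system --no-headers into the awk instead of the demo file):

```shell
# Demo: count kube-system pods that are not in the Running state.
cat > /tmp/pods.demo <<'EOF'
calico-node-z8h6b          1/1   Running   0    43s
coredns-5dd5756b68-hch9c   1/1   Running   0    4h3m
etcd-master                1/1   Running   17   4h3m
EOF
awk '$3 != "Running" {bad++} END {print bad+0, "pods not Running"}' /tmp/pods.demo
```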
Step 8. Test the k8s cluster (create a pod in k8s and verify it runs normally)
# Create an httpd deployment as a test
kubectl create deployment httpd --image=httpd
# Expose the service on port 80; other ports may be blocked by the firewall
kubectl expose deployment httpd --port=80 --type=NodePort
# Check that the pod is Running and note the NodePort of service/httpd
kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/httpd-757fb56c8d-w42l5   1/1     Running   0          39s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/httpd        NodePort    10.109.29.1   <none>        80:32569/TCP   42s     # external port 32569
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        3h22m
# Test in a browser: either the master IP or a node IP works, on port 32569
http://192.168.1.2:32569/
It works!   # success