Three machines, 2 CPU / 4 GB RAM each:

```
192.168.56.71 node1
192.168.56.72 node2
192.168.56.73 node3
```
Reference:
https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/
```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Run the following commands to refresh the yum cache:
```
yum clean all
yum makecache
yum repolist
```
If you get a list like the one below, the repository is configured correctly:
```
[root@MiWiFi-R1CM-srv yum.repos.d]# yum repolist
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
源标识                     源名称                          状态
base/7/x86_64              CentOS-7 - Base - 163.com       10,019
docker-ce-stable/x86_64    Docker CE Stable - x86_64           28
extras/7/x86_64            CentOS-7 - Extras - 163.com        321
kubernetes                 Kubernetes                         299
updates/7/x86_64           CentOS-7 - Updates - 163.com       628
repolist: 11,295
```
### 4. Install kubeadm
```
yum install -y kubeadm
```
Yum then installs the latest kubeadm for us automatically (1.18.2 when I installed), pulling in four packages in total: kubelet, kubeadm, kubectl, and kubernetes-cni.

- kubeadm: a one-command deployment tool for Kubernetes clusters; it simplifies installation by running the cluster's core components and add-ons as pods.
- kubelet: the node agent that runs on every node; the cluster operates the containers on each node through the kubelet. Because it needs direct access to host resources, it does not run inside a pod but is installed on the system as a service.
- kubectl: the Kubernetes command-line tool; it connects to the api-server to perform all kinds of cluster operations.
- kubernetes-cni: Kubernetes' virtual network device; it creates a virtual cni0 bridge on the host to carry pod-to-pod traffic, similar in role to docker0.
After installation, run:
```
[root@node1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:54:15Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
```
You can see that version 1.18.2 is installed.
The pods will use the network segment 10.16.0.0/16.

```
kubeadm init --kubernetes-version=v1.18.2 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.16.0.0/16 \
  --apiserver-advertise-address=192.168.56.71
```
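As an aside, the same init options can also be captured in a kubeadm config file and passed with `kubeadm init --config kubeadm.yaml`. This is only a sketch (field names per the `v1beta2` config API that ships with kubeadm 1.18), not something the steps above require:

```yaml
# kubeadm.yaml - declarative equivalent of the kubeadm init flags above
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.71   # --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2                               # --kubernetes-version
imageRepository: registry.aliyuncs.com/google_containers # --image-repository
networking:
  podSubnet: 10.16.0.0/16                                # --pod-network-cidr
```

A config file is easier to keep in version control and to reuse across rebuilds than a long flag list.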
On success you will see:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.71:6443 --token vji3ft.jlfs2w3fsqvcm5rt \
    --discovery-token-ca-cert-hash sha256:640d7f9ccd1fa36eb9f92c091983cde5a5300b6233353f602f5b373bbad92d95
```
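The `sha256:` value in the join command is just a hash of the cluster CA's public key, so if you lose the output it can be recomputed at any time. A sketch using a throwaway CA for demonstration (on a real master you would point at `/etc/kubernetes/pki/ca.crt` instead):

```shell
# Generate a throwaway CA purely to demonstrate the hash computation;
# on the master, replace /tmp/demo-ca.crt with /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -subj "/CN=kubernetes"

# Extract the public key, DER-encode it, and take its SHA-256 digest.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$hash"
```

This is the same computation kubeadm performs when it prints `--discovery-token-ca-cert-hash`.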
At this point the kube-system pods look like this (coredns stays Pending until a network add-on is installed):

```
[root@node1 ~]# kubectl get pods -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-769jq        0/1     Pending   0          21m
coredns-7ff77c879f-d72rh        0/1     Pending   0          21m
etcd-node1                      1/1     Running   0          21m
kube-apiserver-node1            1/1     Running   0          21m
kube-controller-manager-node1   1/1     Running   0          21m
kube-proxy-frxlt                1/1     Running   0          14m
kube-proxy-kpf8f                1/1     Running   0          21m
kube-scheduler-node1            1/1     Running   0          21m
```
Without further ado, run these in order:
```
[root@node1 ~]# mkdir -p $HOME/.kube
[root@node1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
If you join the nodes without installing a network add-on, the node status looks like this:
```
[root@node1 ~]# kubectl get node
NAME    STATUS     ROLES    AGE     VERSION
node1   NotReady   master   11m     v1.18.2
node2   NotReady   <none>   4m45s   v1.18.2
```
For the network add-on we use Calico:
```
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
sed -i -e "s?192.168.0.0/16?10.16.0.0/16?g" calico.yaml
kubectl apply -f calico.yaml
```
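The sed line only rewrites Calico's default pool CIDR (192.168.0.0/16) to match the `--pod-network-cidr` we passed to kubeadm init; the `?` delimiter avoids having to escape the slashes in the CIDR. A minimal demonstration on a stand-in snippet of calico.yaml:

```shell
# Stand-in for the CALICO_IPV4POOL_CIDR section of the real calico.yaml.
cat > /tmp/calico-snippet.yaml <<'EOF'
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
EOF

# Same substitution as in the tutorial, using '?' as the sed delimiter.
sed -i -e "s?192.168.0.0/16?10.16.0.0/16?g" /tmp/calico-snippet.yaml

cat /tmp/calico-snippet.yaml
```

If the two CIDRs disagree, Calico allocates pod IPs from a range the cluster does not route, so this substitution matters.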
Check again, and you can see coredns has gone from Pending to Running:
```
[root@node1 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-77c5fc8d7f-bcdhs   1/1     Running   0          2m32s
calico-node-cmpl6                          1/1     Running   0          2m33s
calico-node-x4sqc                          1/1     Running   0          2m33s
coredns-7ff77c879f-769jq                   1/1     Running   0          39m
coredns-7ff77c879f-d72rh                   1/1     Running   0          39m
etcd-node1                                 1/1     Running   0          39m
kube-apiserver-node1                       1/1     Running   0          39m
kube-controller-manager-node1              1/1     Running   0          39m
kube-proxy-frxlt                           1/1     Running   0          33m
kube-proxy-kpf8f                           1/1     Running   0          39m
kube-scheduler-node1                       1/1     Running   0          39m
```
And the nodes have gone from NotReady to Ready:
```
[root@node1 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   41m   v1.18.2
node2   Ready    <none>   34m   v1.18.2
```
The control-plane components report healthy:

```
[root@node1 ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
Problem: after running the join on a worker node, kubectl commands fail:
```
[root@node3 ~]# kubectl get node
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
```
Solution: copy /etc/kubernetes/admin.conf from the master to the node, then run:

```
export KUBECONFIG=/etc/kubernetes/admin.conf
# To make it permanent:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
```

```
[root@node3 ~]# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   60m   v1.18.2
node2   Ready    <none>   54m   v1.18.2
node3   Ready    <none>   15m   v1.18.2
```

Problem solved.
Run this on all three nodes:
```
[root@node2 ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
```
Problem: after a reboot the master fails to come up normally, with `node "node1" not found` and `Unable to register node "node1" with API server` in the kubelet logs:
```
May 12 16:12:29 node1 kubelet[6726]: I0512 16:12:29.851144    6726 kubelet_node_status.go:70] Attempting to register node node1
May 12 16:12:29 node1 kubelet[6726]: E0512 16:12:29.911997    6726 kubelet.go:2267] node "node1" not found
May 12 16:12:30 node1 kubelet[6726]: E0512 16:12:30.012553    6726 kubelet.go:2267] node "node1" not found
May 12 16:12:30 node1 kubelet[6726]: E0512 16:12:30.017299    6726 kubelet_node_status.go:92] Unable to register node "node1" with API server: Post https://192.168.56.71:6443/api/v1/nodes: dia...nection refused
May 12 16:12:30 node1 kubelet[6726]: E0512 16:12:30.112649    6726 kubelet.go:2267] node "node1" not found
```
Cause:

Before v1.8, with RBAC enabled the apiserver bound the system:nodes group to the system:node ClusterRole by default. Since v1.8 this binding is no longer created by default and has to be added manually; otherwise the kubelet reports authorization errors after starting, and `kubectl get nodes` shows the node never reaching Ready.
Solution:

```
# List the roles and role bindings in the system
kubectl get clusterrolebinding
kubectl get clusterrole

# Inspect the details of the system:node role binding
kubectl describe clusterrolebindings system:node

# Create the role binding, granting the ClusterRole across the whole cluster (all namespaces)
kubectl create clusterrolebinding kubelet-node-clusterbinding \
  --clusterrole=system:node --group=system:nodes
kubectl describe clusterrolebindings kubelet-node-clusterbinding
```
You can see:
```
[root@node1 ~]# kubectl describe clusterrolebindings kubelet-node-clusterbinding
Name:         kubelet-node-clusterbinding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  system:node
Subjects:
  Kind   Name          Namespace
  ----   ----          ---------
  Group  system:nodes
```
After a reboot it takes effect, and the problem is solved.
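For reference, the imperative `kubectl create clusterrolebinding` used above can also be expressed as a manifest and applied with `kubectl apply -f`; a sketch of the equivalent object:

```yaml
# Equivalent declarative form of the clusterrolebinding created above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-node-clusterbinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node        # the built-in node role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes       # the group all kubelet certificates belong to
```

Keeping it as a manifest makes the fix repeatable if the cluster is ever rebuilt.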