Deploying a Kubernetes Cluster on CentOS 7.x
Environment preparation: the cluster uses a multi-master, highly available deployment with 7 hosts in total: 3 masters, 3 workers (named "slaver" here), and 1 client.
Hostname        | OS Version | IP              | Spec             | Notes
----------------|------------|-----------------|------------------|---------------
region-master-1 | 7.6.1160   | 192.168.199.130 | 2 CPUs, 4 GB RAM |
region-master-2 | 7.6.1160   | 192.168.199.131 | 2 CPUs, 4 GB RAM |
region-master-3 | 7.6.1160   | 192.168.199.132 | 2 CPUs, 4 GB RAM |
region-slaver-1 | 7.6.1160   | 192.168.199.180 | 2 CPUs, 4 GB RAM |
region-slaver-2 | 7.6.1160   | 192.168.199.181 | 2 CPUs, 4 GB RAM |
region-slaver-3 | 7.6.1160   | 192.168.199.182 | 2 CPUs, 4 GB RAM |
region-vip      | 7.6.1160   | 192.168.199.188 | 2 CPUs, 4 GB RAM | keepalived VIP
region-client   | 7.6.1160   | 192.168.199.160 | 2 CPUs, 4 GB RAM |
Run the following steps on every master and slaver node.
Configure the operating system
Disable the firewall and SELinux, and switch the yum repos to the Aliyun mirrors (the mirror commands appear in the Kubernetes installation section below).
$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0
$ vim /etc/selinux/config
SELINUX=disabled
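A quick check (optional): getenforce should report Permissive right after setenforce 0, and Disabled after the next reboot.
$ getenforce
Permissive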
Configure the hostname
Set the hostname:
[root@localhost ~]# hostnamectl set-hostname region-master-1
[root@localhost ~]# more /etc/hostname
Log out and log back in; the prompt then shows the new hostname region-master-1.
Update the hosts file:
[root@region-master-1 ~]# cat >> /etc/hosts << EOF
192.168.199.130 region-master-1
192.168.199.131 region-master-2
192.168.199.132 region-master-3
192.168.199.180 region-slaver-1
192.168.199.181 region-slaver-2
192.168.199.182 region-slaver-3
EOF
Disable swap
Temporarily:
[root@region-master-1 ~]# swapoff -a
To disable it permanently, also edit /etc/fstab and comment out the swap line:
[root@region-master-1 ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab
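To confirm swap is fully off, free should report all zeros in the Swap row (expected output sketched below):
[root@region-master-1 ~]# free -m | grep -i swap
Swap:             0           0           0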
Kernel parameter changes
This guide uses flannel for the Kubernetes network, which requires the kernel parameter bridge-nf-call-iptables=1; setting that parameter requires the br_netfilter kernel module.
Load the br_netfilter module
Check whether br_netfilter is loaded:
[root@region-master-1 ~]# lsmod |grep br_netfilter
If br_netfilter is missing, run the commands below to add it; if it is already present, skip this step.
Temporarily load br_netfilter:
[root@region-master-1 ~]# modprobe br_netfilter
This does not survive a reboot.
Permanently load br_netfilter:
[root@region-master-1 ~]# cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
    [ -x $file ] && $file
done
EOF
[root@region-master-1 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@region-master-1 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules
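After the next reboot, lsmod listing br_netfilter again confirms the persistence setup worked (a quick optional check):
[root@region-master-1 ~]# lsmod | grep br_netfilter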
Temporarily set the kernel parameter:
[root@region-master-1 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@region-master-1 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1
Permanently set the kernel parameters:
[root@region-master-1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@region-master-1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Add the Kubernetes yum repo:
[root@region-master-1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Refresh the yum cache:
[root@region-master-1 ~]# yum clean all
[root@region-master-1 ~]# yum -y makecache
Passwordless SSH
Configure passwordless login from region-master-1 to region-master-2 and region-master-3. This step runs only on region-master-1.
Generate a key pair:
[root@region-master-1 ~]# ssh-keygen -t rsa
Copy the key to region-master-2 and region-master-3:
[root@region-master-1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.199.131
[root@region-master-1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.199.132
Test passwordless login:
[root@region-master-1 ~]# ssh 192.168.199.131
[root@region-master-1 ~]# ssh region-master-3
region-master-1 can now log in to region-master-2 and region-master-3 without a password.
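A one-line loop (optional) confirms both logins at once; each ssh should print the remote hostname without asking for a password:
[root@region-master-1 ~]# for h in region-master-2 region-master-3; do ssh $h hostname; done
region-master-2
region-master-3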
Install Docker
Run this section on both the control plane and worker nodes.
Install dependencies:
[root@region-master-1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repo:
[root@region-master-1 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE
List the available Docker versions:
[root@region-master-1 ~]# yum list docker-ce --showduplicates | sort -r
Install Docker:
[root@region-master-1 ~]# yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y
Start Docker and enable it at boot:
[root@region-master-1 ~]# systemctl start docker
[root@region-master-1 ~]# systemctl enable docker
Command completion
Install bash-completion:
[root@region-master-1 ~]# yum -y install bash-completion
Load bash-completion:
[root@region-master-1 ~]# source /etc/profile.d/bash_completion.sh
Image acceleration
Docker Hub's servers are overseas, so image pulls can be slow; a registry mirror speeds them up. The main options are Docker's official China registry mirror, the Aliyun accelerator, and the DaoCloud accelerator. This guide uses the Aliyun accelerator.
Log in to the Aliyun container registry console at https://cr.console.aliyun.com (register an Aliyun account first if you do not have one).
Configure the accelerator
Create the daemon.json file:
[root@region-master-1 ~]# mkdir -p /etc/docker
[root@region-master-1 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF
Restart the service:
[root@region-master-1 ~]# systemctl daemon-reload
[root@region-master-1 ~]# systemctl restart docker
The accelerator is now configured.
Verify:
[root@region-master-1 ~]# docker --version
[root@region-master-1 ~]# docker run hello-world
Querying the Docker version and running the hello-world container confirms that Docker installed successfully.
Change the cgroup driver
Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:
[root@region-master-1 ~]# more /etc/docker/daemon.json
{
"registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
Reload Docker:
[root@region-master-1 ~]# systemctl daemon-reload
[root@region-master-1 ~]# systemctl restart docker
Changing the cgroup driver removes this kubeadm warning: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
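To confirm the driver change took effect:
[root@region-master-1 ~]# docker info | grep -i cgroup
Cgroup Driver: systemd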
Install keepalived
Run this section on all control plane nodes.
Install keepalived:
[root@region-master-1 ~]# yum -y install keepalived
keepalived configuration on region-master-1:
[root@region-master-1 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id region-master-1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.199.188
    }
}
keepalived configuration on region-master-2:
[root@region-master-2 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id region-master-2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.199.188
    }
}
keepalived configuration on region-master-3:
[root@region-master-3 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id region-master-3
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.199.188
    }
}
Start keepalived
Start the keepalived service on all control plane nodes and enable it at boot:
[root@region-master-1 ~]# service keepalived start
[root@region-master-1 ~]# systemctl enable keepalived
Check the VIP:
[root@region-master-1 ~]# ip a
The VIP is on region-master-1.
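On the backup nodes the same address should be absent; an empty grep result confirms only one node holds the VIP at a time:
[root@region-master-2 ~]# ip a | grep 192.168.199.188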
Install Kubernetes components
Run this section on both the control plane and worker nodes.
List the available versions:
[root@region-master-1 ~]# yum list kubelet --showduplicates | sort -r
This guide installs kubelet 1.16.4, which supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09.
Install kubelet, kubeadm, and kubectl:
[root@region-master-1 ~]# yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4
# Switch the CentOS 7 base repo to the Aliyun mirror (if the installs fail because the default repos are unreachable)
yum install wget -y
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Refresh the repo cache
yum clean all
yum makecache
Start kubelet
Start kubelet and enable it at boot:
[root@region-master-1 ~]# systemctl enable kubelet && systemctl start kubelet
kubectl command completion:
[root@region-master-1 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@region-master-1 ~]# source .bash_profile
Download the images
The image download script: almost all of the Kubernetes components and Docker images are hosted on Google's own servers, which may not be directly reachable. The workaround used here is to pull the images from an Aliyun registry and then re-tag them back to the default names. This guide pulls the images by running the image.sh script below.
[root@region-master-1 ~]# more image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.16.4
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
docker pull $url/$imagename
docker tag $url/$imagename k8s.gcr.io/$imagename
docker rmi -f $url/$imagename
done
url is the Aliyun registry address, and version is the Kubernetes version being installed.
Run image.sh to download the images for the specified version:
[root@region-master-1 ~]# ./image.sh
[root@region-master-1 ~]# docker images
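Optionally, compare the pulled images against the list kubeadm expects for this version (for v1.16.4 the list should look roughly like the following):
[root@region-master-1 ~]# kubeadm config images list --kubernetes-version=v1.16.4
k8s.gcr.io/kube-apiserver:v1.16.4
k8s.gcr.io/kube-controller-manager:v1.16.4
k8s.gcr.io/kube-scheduler:v1.16.4
k8s.gcr.io/kube-proxy:v1.16.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2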
Initialize the first master
Run this section on region-master-1 only.
kubeadm-config.yaml:
[root@region-master-1 ~]# more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
certSANs: # list the hostname, IP, and VIP of every kube-apiserver node
- region-master-1
- region-master-2
- region-master-3
- region-slaver-1
- region-slaver-2
- region-slaver-3
- 192.168.199.130
- 192.168.199.131
- 192.168.199.132
- 192.168.199.180
- 192.168.199.181
- 192.168.199.182
- 192.168.199.188
controlPlaneEndpoint: "192.168.199.188:6443"
networking:
podSubnet: "192.168.0.0/16"
kubeadm-config.yaml is the configuration file used for initialization.
Initialize the master:
[root@region-master-1 ~]# kubeadm init --config=kubeadm-config.yaml
Record the kubeadm join commands from the output; they are needed later to add the worker nodes and the other master nodes to the cluster.
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.199.188:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.199.188:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
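If the join command was not recorded, or the token has expired (bootstrap tokens are valid for 24 hours by default), a fresh worker join command can be generated on region-master-1:
[root@region-master-1 ~]# kubeadm token create --print-join-command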
If initialization fails:
Run kubeadm reset and then initialize again:
[root@region-master-1 ~]# kubeadm reset
[root@region-master-1 ~]# rm -rf $HOME/.kube/config
Load the environment variables:
[root@region-master-1 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@region-master-1 ~]# source .bash_profile
All operations in this guide run as root; for a non-root user, run the following instead:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Install the flannel network
Create the flannel network on region-master-1:
[root@region-master-1 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Network issues may cause this to fail; either retry, or download the kube-flannel.yml file from the link at the end of the original article and apply it locally.
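Once applied, the flannel DaemonSet pods should reach Running (pod name suffixes vary) and the master should turn Ready shortly afterwards:
[root@region-master-1 ~]# kubectl get pods -n kube-system | grep flannel
[root@region-master-1 ~]# kubectl get nodes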
Join the other masters to the cluster
Certificate distribution from region-master-1:
On region-master-1, run the cert-main-master.sh script to distribute the certificates to region-master-2 and region-master-3:
[root@region-master-1 ~]# ll|grep cert-main-master.sh
-rwxr--r-- 1 root root 638 Jan  2 15:23 cert-main-master.sh
[root@region-master-1 ~]# more cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="192.168.199.131 192.168.199.132"
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
# Quote this line if you are using external etcd
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
Move the certificates into place on region-master-2:
On region-master-2, run the cert-other-master.sh script to move the certificates to their expected directories:
[root@region-master-2 ~]# pwd
/root
[root@region-master-2 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root 484 Jan  2 15:29 cert-other-master.sh
[root@region-master-2 ~]# more cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@region-master-2 ~]# ./cert-other-master.sh
Move the certificates into place on region-master-3:
Run cert-other-master.sh on region-master-3 as well:
[root@region-master-3 ~]# pwd
/root
[root@region-master-3 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root 484 Jan  2 15:31 cert-other-master.sh
[root@region-master-3 ~]# ./cert-other-master.sh
Join region-master-2 to the cluster:
kubeadm join 192.168.199.188:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
--control-plane
Join region-master-3 to the cluster:
kubeadm join 192.168.199.188:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
--control-plane
Load the environment variables
On region-master-2 and region-master-3:
[root@region-master-2 ~]# scp region-master-1:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@region-master-2 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@region-master-2 ~]# source .bash_profile
[root@region-master-3 ~]# scp region-master-1:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@region-master-3 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@region-master-3 ~]# source .bash_profile
This step makes kubectl usable on region-master-2 and region-master-3 as well.
Check the cluster nodes and system pods:
[root@region-master-1 ~]# kubectl get nodes
[root@region-master-1 ~]# kubectl get po -o wide -n kube-system
All master nodes are Ready and all system components are healthy.
Join the slaver nodes to the cluster
Join region-slaver-1:
kubeadm join 192.168.199.188:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
This is the worker join command produced by the master initialization.
Join region-slaver-2:
kubeadm join 192.168.199.188:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
Join region-slaver-3:
kubeadm join 192.168.199.188:6443 --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
Check the cluster nodes:
[root@region-master-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
region-master-1 Ready master 44m v1.16.4
region-master-2 Ready master 33m v1.16.4
region-master-3 Ready master 23m v1.16.4
region-slaver-1 Ready <none> 11m v1.16.4
region-slaver-2 Ready <none> 7m50s v1.16.4
region-slaver-3 Ready <none> 3m4s v1.16.4
Configure the client
Add the Kubernetes yum repo:
[root@client ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Refresh the yum cache:
[root@client ~]# yum clean all
[root@client ~]# yum -y makecache
Install kubectl:
[root@client ~]# yum install -y kubectl-1.16.4
Command completion
Install bash-completion:
[root@client ~]# yum -y install bash-completion
Load bash-completion:
[root@client ~]# source /etc/profile.d/bash_completion.sh
Copy admin.conf:
[root@client ~]# mkdir -p /etc/kubernetes
[root@client ~]# scp 192.168.199.130:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@client ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@client ~]# source .bash_profile
Load the environment variables (these commands run on the client):
[root@client ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@client ~]# source .bash_profile
Test kubectl:
[root@client ~]# kubectl get nodes
[root@client ~]# kubectl get cs
[root@client ~]# kubectl get po -o wide -n kube-system
Deploy the Dashboard
This entire section runs on the client.
Download the yaml:
[root@client ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
If the connection times out, retry a few times; recommended.yaml can also be downloaded from the link at the end of the original article.
Configure the yaml
2.1 Change the image registry:
[root@client ~]# sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml
The default image registry is unreachable here, so the images are switched to the Aliyun mirror.
2.2 External access:
[root@client ~]# sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
This configures a NodePort; the Dashboard is then reachable externally at https://NodeIp:NodePort, here port 30001.
2.3 Add an admin account:
[root@client ~]# cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF
Deploy and access
Deploy the Dashboard:
[root@client ~]# kubectl apply -f recommended.yaml
Check the status:
[root@client ~]# kubectl get all -n kubernetes-dashboard
View the login token:
[root@client ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin
The token is:
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikd0NHZ5X3RHZW5pNDR6WEdldmlQUWlFM3IxbGM3aEIwWW1IRUdZU1ZKdWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNms1ZjYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjk1NDE0ODEtMTUyZS00YWUxLTg2OGUtN2JmMWU5NTg3MzNjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.LAe7N8Q6XR3d0W8w-r3ylOKOQHyMg5UDfGOdUkko_tqzUKUtxWQHRBQkowGYg9wDn-nU9E-rkdV9coPnsnEGjRSekWLIDkSVBPcjvEd0CVRxLcRxP6AaysRescHz689rfoujyVhB4JUfw1RFp085g7yiLbaoLP6kWZjpxtUhFu-MKh1NOp7w4rT66oFKFR-_5UbU3FoetAFBmHuZ935i5afs8WbNzIkM6u9YDIztMY3RYLm9Zs4KxgpAmqUmBSlXFZNW2qg6hxBqDijW_1bc0V7qJNt_GXzPs2Jm1trZR6UU1C2NAJVmYBu9dcHYtTCgxxkWKwR0Qd2bApEUIJ5Wug
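An equivalent one-liner that prints only the token value (a sketch, assuming the generated secret name begins with dashboard-admin-token):
[root@client ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d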
Access
Open Firefox and browse to:
https://VIP:30001
Accept the risk warning (the Dashboard serves a self-signed certificate).
Cluster high availability test
Most of this section runs on the client; the VIP checks run on the master nodes.
Check which nodes host the components. The apiserver node is found via the VIP address; the scheduler and controller-manager leaders are found via their leader-elect annotations:
[root@region-master-1 ~]# ip a|grep 188
inet 192.168.199.188/32 scope global ens160
[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"region-master-1_6caf8003-052f-451d-8dce-4516825213ad","leaseDurationSeconds":15,"acquireTime":"2020-01-02T09:36:23Z","renewTime":"2020-01-03T07:57:55Z","leaderTransitions":2}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"region-master-1_720d65f9-e425-4058-95d7-e5478ac951f7","leaseDurationSeconds":15,"acquireTime":"2020-01-02T09:36:20Z","renewTime":"2020-01-03T07:58:03Z","leaderTransitions":2}'
Shut down region-master-1
Power off region-master-1:
[root@region-master-1 ~]# init 0
Check the components
The VIP has floated to region-master-2:
[root@region-master-2 ~]# ip a|grep 188
inet 192.168.199.188/32 scope global ens160
The controller-manager and scheduler leaders have migrated as well:
[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"region-master-2_b3353e8f-a02f-4322-bf17-2f596cd25ba5","leaseDurationSeconds":15,"acquireTime":"2020-01-03T08:04:42Z","renewTime":"2020-01-03T08:06:36Z","leaderTransitions":3}'
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"region-master-3_e0a2ec66-c415-44ae-871c-18c73258dc8f","leaseDurationSeconds":15,"acquireTime":"2020-01-03T08:04:56Z","renewTime":"2020-01-03T08:06:45Z","leaderTransitions":3}'
Cluster functional test
Query the nodes:
[root@client ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
region-master-1 NotReady master 22h v1.16.4
region-master-2 Ready master 22h v1.16.4
region-master-3 Ready master 22h v1.16.4
region-slaver-1 Ready <none> 22h v1.16.4
region-slaver-2 Ready <none> 22h v1.16.4
region-slaver-3 Ready <none> 22h v1.16.4
region-master-1 is NotReady.
Create pods (via a Deployment):
[root@client ~]# more nginx-master.yaml
apiVersion: apps/v1          # the manifest uses the apps/v1 Kubernetes API
kind: Deployment             # resource type is Deployment
metadata:                    # metadata for this resource
  name: nginx-master         # Deployment name
spec:                        # Deployment spec
  selector:
    matchLabels:
      app: nginx
  replicas: 3                # three replicas
  template:                  # Pod template
    metadata:                # Pod metadata
      labels:                # labels
        app: nginx           # label key app, value nginx
    spec:                    # Pod spec
      containers:
      - name: nginx          # container name
        image: nginx:latest  # container image
[root@client ~]# kubectl apply -f nginx-master.yaml
deployment.apps/nginx-master created
[root@client ~]# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-master-75b7bfdb6b-lnsfh 1/1 Running 0 4m44s 10.244.5.6 region-slaver-3 <none> <none>
nginx-master-75b7bfdb6b-vxfg7 1/1 Running 0 4m44s 10.244.3.3 region-slaver-1 <none> <none>
nginx-master-75b7bfdb6b-wt9kc 1/1 Running 0 4m44s 10.244.4.5 region-slaver-2 <none> <none>
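As an optional extra check, expose the deployment as a NodePort service and curl it from the client to confirm service networking also survives the failover (a sketch; the allocated port varies):
[root@client ~]# kubectl expose deployment nginx-master --port=80 --type=NodePort
service/nginx-master exposed
[root@client ~]# NODEPORT=$(kubectl get svc nginx-master -o jsonpath='{.spec.ports[0].nodePort}')
[root@client ~]# curl -sI http://192.168.199.180:$NODEPORT | head -1
HTTP/1.1 200 OK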
Conclusion: when a single master node goes down, the VIP fails over and all cluster functions remain unaffected.
Shut down region-master-2
With region-master-1 still powered off, shut down region-master-2 as well to test whether the cluster can still serve requests.
Shut down region-master-2:
[root@region-master-2 ~]# init 0
Check the VIP:
[root@region-master-3 ~]# ip a|grep 188
inet 192.168.199.188/32 scope global ens160
The VIP has floated to the only remaining master, region-master-3.
Cluster functional test:
[root@client ~]# kubectl get nodes
Error from server: etcdserver: request timed out
[root@client ~]# kubectl get nodes
The connection to the server 192.168.199.188:6443 was refused - did you specify the right host or port?
With two of the three etcd members down, etcd loses quorum, so the entire Kubernetes cluster can no longer serve requests.