Installing CentOS 7 and Kubernetes with VirtualBox

In "Installing CentOS 7 and Docker with VirtualBox", we covered the installation of CentOS 7 and Docker in detail. Building on that setup, we will now install Kubernetes.

Configuring the yum repository

First, create "/etc/yum.repos.d/kubernetes.repo" and configure a yum repository that uses Aliyun's mirror:

vi /etc/yum.repos.d/kubernetes.repo

Save the following content to the file:

[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
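Note that "gpgcheck=0" turns off RPM signature verification entirely. If you prefer to keep verification on, Aliyun also mirrors the signing keys; a sketch of such a variant is below (the gpgkey URLs follow Aliyun's usual mirror layout and should be verified before use):

```ini
; Variant with signature checking enabled (the gpgkey URLs are
; assumptions based on Aliyun's mirror layout; verify before use)
[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```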

Then generate the metadata cache for the newly added repository:

yum makecache fast

The output is:

Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.163.com
base                                         | 3.6 kB  00:00:00
docker-ce-stable                             | 3.5 kB  00:00:00
extras                                       | 2.9 kB  00:00:00
kubernetes                                   | 1.4 kB  00:00:00
updates                                      | 2.9 kB  00:00:00
(1/2): kubernetes/primary                    | 104 kB  00:00:00
(2/2): updates/7/x86_64/primary_db           |  13 MB  00:00:01
kubernetes                                                  766/766
Metadata Cache Created

Disabling the swap partition

First, turn off swap with the following command:

swapoff -a

Then edit "/etc/fstab" to keep swap from being mounted automatically at boot:

vi /etc/fstab

Comment out the swap entry:

#
# /etc/fstab
# Created by anaconda on Fri Jan 14 07:23:37 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=7b5615ef-15cd-46ec-b2d9-555dad0f9f22 /boot xfs     defaults        0 0
# /dev/mapper/centos-swap swap                  swap    defaults        0 0
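Instead of editing the file by hand, the swap entry can also be commented out mechanically. A minimal sketch: a filter that comments every fstab line containing a whitespace-delimited "swap" field, which you can pipe "/etc/fstab" through, inspect, and then write back.

```shell
# Given fstab content on stdin, emit it with swap entries commented out.
# Lines that are already comments are left untouched.
comment_swap() {
    sed '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/'
}

# Example with a sample fstab line:
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' | comment_swap
# → #/dev/mapper/centos-swap swap swap defaults 0 0
```

Review the result before overwriting "/etc/fstab"; a mistake there can make the system unbootable.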

Disabling the firewall

Configuring Kubernetes requires the firewall to be turned off:

systemctl stop firewalld

We also prevent the firewall from starting automatically at boot:

systemctl disable firewalld

Disabling SELinux

First, run the following command to stop SELinux from enforcing its policy (this switches it to permissive mode for the current session):

setenforce 0

Then edit the configuration file "/etc/sysconfig/selinux":

vi /etc/sysconfig/selinux

Set "SELINUX" to "disabled":

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Run the following command to check the SELinux status:

getenforce

The output is:

Permissive

This shows SELinux is no longer enforcing its policy; the "disabled" setting takes full effect after the next reboot.

Setting Docker's cgroup driver to systemd

Next, we need to edit "/etc/docker/daemon.json":

mkdir -p /etc/docker
vi /etc/docker/daemon.json

Save the following content to the file:

{
    "exec-opts": [
        "native.cgroupdriver=systemd"
    ]
}

Then restart the Docker service so the configuration takes effect:

systemctl restart docker

We can check the active cgroup driver with the following command:

docker info | grep "Cgroup Driver"

The output is:

Cgroup Driver: systemd

Installing Kubernetes

Before installing Kubernetes, we first reboot the operating system to make sure all of the configuration changes above take effect:

reboot

First, install the essential Kubernetes tools:

yum install kubectl kubelet kubeadm

Make a note of the Kubernetes version that gets installed, for example:

=====================================================================
 Package                  Arch     Version        Repository   Size
=====================================================================
Installing:
 kubeadm                  x86_64   1.23.3-0       kubernetes   9.0 M
 kubectl                  x86_64   1.23.3-0       kubernetes   9.5 M
 kubelet                  x86_64   1.23.3-0       kubernetes    21 M
Installing for dependencies:
 conntrack-tools          x86_64   1.4.4-7.el7    base         187 k
 cri-tools                x86_64   1.19.0-0       kubernetes   5.7 M
 kubernetes-cni           x86_64   0.8.7-0        kubernetes    19 M
 libnetfilter_cthelper    x86_64   1.0.0-11.el7   base          18 k
 libnetfilter_cttimeout   x86_64   1.0.0-7.el7    base          18 k
 libnetfilter_queue       x86_64   1.0.2-2.el7_2  base          23 k
 socat                    x86_64   1.7.3.2-2.el7  base         290 k

Transaction Summary
=====================================================================

This shows that the installed Kubernetes version is "1.23.3".

Then enable the kubelet service to start at boot:

systemctl enable kubelet

The output is:

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Note that we have only enabled the service at boot; we have not started it. In fact it cannot start yet: the cluster must be initialized first. Initialization depends on a set of container images, and pulling them from "k8s.gcr.io" is usually problematic inside mainland China, so we first pull the images locally from Aliyun:

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers

If a pull fails with "connection refused", simply re-run the command. On success, the output is as follows:

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.3
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
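Re-running by hand works, but a small retry helper can smooth over transient failures automatically. A sketch (the helper name `retry` and the limits are our own choices):

```shell
# retry CMD ARGS...: re-run CMD until it succeeds, up to 5 attempts,
# pausing one second between tries.
retry() {
    attempt=1
    until "$@"; do
        if [ "$attempt" -ge 5 ]; then
            echo "retry: giving up after $attempt attempts" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep 1
    done
}

# Usage sketch (assumes kubeadm is installed):
# retry kubeadm config images pull \
#     --image-repository=registry.aliyuncs.com/google_containers
```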

Run the following command so that the bridge applies the layer-3 rules configured in iptables when doing layer-2 forwarding:

sysctl -w net.bridge.bridge-nf-call-iptables=1

The result is:

net.bridge.bridge-nf-call-iptables = 1
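A value set with `sysctl -w` takes effect immediately but is lost on reboot. To make it persistent, the same key can be placed in a file under "/etc/sysctl.d/" (a sketch; the file name "k8s.conf" is our own choice, any name ending in ".conf" works):

```
# /etc/sysctl.d/k8s.conf (file name is our own choice)
net.bridge.bridge-nf-call-iptables = 1
```

Running `sysctl --system` afterwards reloads all such files without a reboot.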

Now we need to initialize the cluster. Run the following command, but review the parameters first and adjust them to match your own network environment:

kubeadm init --kubernetes-version=1.23.3 \
    --apiserver-advertise-address=192.168.3.22 \
    --service-cidr=10.10.0.0/16 \
    --pod-network-cidr=172.16.0.1/20 \
    --image-repository=registry.aliyuncs.com/google_containers
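The same flags can also be expressed as a kubeadm configuration file, which is easier to keep under version control. A sketch, assuming the v1beta3 config API that ships with kubeadm 1.23; the file name is hypothetical, and the address and CIDRs simply mirror the command above, so adapt them to your network:

```yaml
# kubeadm-config.yaml (hypothetical file name)
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.3.22
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.23.3
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.10.0.0/16
  podSubnet: 172.16.0.1/20
```

It would then be applied with `kubeadm init --config kubeadm-config.yaml`.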

When initialization completes, the end of the output contains the following information:

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.22:6443 --token eetyo2.3awnz0oboqk70i6t \
        --discovery-token-ca-cert-hash sha256:3073b1ce3b691d87106f868a9d50244ac4acd6ab36f4278eb85b0fb1d880bfd9

Following the hint in the output: since I am working directly as the root user, I run:

export KUBECONFIG=/etc/kubernetes/admin.conf

However, this setting is lost when the operating system reboots and would have to be repeated, so instead we edit "/etc/profile" and add the environment variable there:

vi /etc/profile

Then append the environment variable at the end of the file:

export KUBECONFIG=/etc/kubernetes/admin.conf

If you are not installing as root, run the following commands instead:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now let's check the status of the kubelet service:

systemctl status kubelet

The output is:

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2022-01-28 12:11:20 EST; 3min 51s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 3894 (kubelet)
    Tasks: 15
   Memory: 45.1M
   CGroup: /system.slice/kubelet.service
           └─3894 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-in...

The Kubernetes base services are now installed. We can check the node status with the following command:

kubectl get nodes

The output is:

NAME                    STATUS     ROLES                  AGE     VERSION
localhost.localdomain   NotReady   control-plane,master   5m26s   v1.23.3

So although the services are installed, the master node is not yet in a usable state. Let's look at the running state of the pods:

kubectl get pod --all-namespaces

The output is:

NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-2lbrs                         0/1     Pending   0          7m16s
kube-system   coredns-6d8c4cb4d-jr528                         0/1     Pending   0          7m16s
kube-system   etcd-localhost.localdomain                      1/1     Running   0          7m30s
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          7m31s
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          7m30s
kube-system   kube-proxy-5wql5                                1/1     Running   0          7m16s
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          7m30s

We can see that the two pods named "coredns-xxx" are stuck in the "Pending" state and have not started. We need to install a network add-on; here we choose Calico to provide networking:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

The output is:

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

Now let's check the node status again:

kubectl get nodes

The output is:

NAME                    STATUS   ROLES                  AGE   VERSION
localhost.localdomain   Ready    control-plane,master   14m   v1.23.3

The node is now in the Ready state. Let's check the pod status again:

kubectl get pods --all-namespaces

The output is:

NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-85b5b5888d-tlnfb        1/1     Running   0          2m58s
kube-system   calico-node-f4ldm                               1/1     Running   0          2m58s
kube-system   coredns-6d8c4cb4d-2lbrs                         1/1     Running   0          14m
kube-system   coredns-6d8c4cb4d-jr528                         1/1     Running   0          14m
kube-system   etcd-localhost.localdomain                      1/1     Running   0          14m
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          14m
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          14m
kube-system   kube-proxy-5wql5                                1/1     Running   0          14m
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          14m

All pods are running normally; the Kubernetes installation is complete.

Allowing the master node to accept workloads

By default, Kubernetes does not schedule application pods onto the master node, which is inconvenient when we are learning or just trying out Kubernetes. We can make the master node accept workloads by removing its taint, so that pods can be scheduled to, deployed on, and run on the master node.

First, check whether the master node currently accepts workloads:

kubectl describe node $(kubectl get nodes --all-namespaces | grep master | awk '{print $1}') | grep Taints | awk -F ':' '{print $3}'

The result is:

NoSchedule
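To see what the pipeline above is doing, the awk step can be tried on a sample "Taints" line of the kind printed by `kubectl describe node` (the sample line itself is illustrative):

```shell
# A sample "Taints" line as printed by `kubectl describe node`.
line='Taints:             node-role.kubernetes.io/master:NoSchedule'

# Splitting on ':' yields "Taints", then the taint key, then the
# effect, so the effect is the third field.
printf '%s\n' "$line" | awk -F ':' '{print $3}'
# → NoSchedule
```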

This shows that the master node does not yet accept workloads. Run the following command to remove the taint and allow scheduling:

kubectl taint nodes $(kubectl get nodes --all-namespaces | grep master | awk '{print $1}') node-role.kubernetes.io/master-

The result is:

node/localhost.localdomain untainted

Now let's check again whether the node accepts workloads:

kubectl describe node $(kubectl get nodes --all-namespaces | grep master | awk '{print $1}') | grep Taints | awk -F ':' '{print $3}'

The output is an empty line, which means pods can now be scheduled onto the master node.

To make Kubernetes easier to work with, we will install the Dashboard in a follow-up article to provide access through a web page.
