2022/12/08
## Environment

```shell
[root@master1 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@master1 ~]# uname -a
Linux master1 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```

Cluster layout:
| Role         | IP          | Spec                  |
| ------------ | ----------- | --------------------- |
| k8s-master01 | 172.16.0.11 | 4 cores, 4 GB RAM, VM |
| k8s-master02 | 172.16.0.12 | 4 cores, 4 GB RAM, VM |
| k8s-master03 | 172.16.0.13 | 4 cores, 4 GB RAM, VM |
| k8s-node01   | 172.16.0.14 | 4 cores, 4 GB RAM, VM |
| k8s-node02   | 172.16.0.15 | 4 cores, 4 GB RAM, VM |
## 1. Basic configuration (all servers)

Set the hostname on each machine:

```shell
hostnamectl set-hostname k8s-master01   # on master1
hostnamectl set-hostname k8s-master02   # on master2
hostnamectl set-hostname k8s-master03   # on master3
hostnamectl set-hostname k8s-node01     # on node1
hostnamectl set-hostname k8s-node02     # on node2
```

Add the host-name bindings:
```shell
cat >> /etc/hosts << EOF
172.16.0.11 cluster-endpoint
172.16.0.11 k8s-master01
172.16.0.12 k8s-master02
172.16.0.13 k8s-master03
172.16.0.14 k8s-node01
172.16.0.15 k8s-node02
EOF
```

Note that the entries must match the hostnames set above (`k8s-node01`, not `k8s-node1`); otherwise kubeadm's preflight check will warn that the hostname cannot be resolved.

Basic system settings:
```shell
# 1. Put SELinux in permissive mode
setenforce 0 && sed -i "s/^SELINUX=enforcing$/SELINUX=permissive/" /etc/selinux/config

# 2. Turn off the swap partition
swapoff -a && sed -ri "s/.*swap.*/#&/" /etc/fstab

# 3. Let iptables see bridged traffic (the standard br_netfilter settings
#    from the Kubernetes install docs)
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```

## 2. Install Docker (all servers)

```shell
# 1. Remove any old Docker packages
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine

# 2. Install dependencies and add the repo
yum install -y yum-utils
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 3. Install Docker
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6

# 4. Start Docker now and enable it at boot
systemctl enable docker --now

# 5. Configure a registry mirror (this is my personal Aliyun mirror
#    accelerator; you are welcome to use it too)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-"EOF"
{
  "registry-mirrors": ["https://sx15mtuf.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker
```

## 3. Install kubelet, kubeadm and kubectl (all servers)
Add the Aliyun Kubernetes yum repo, then install the three packages:

```shell
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
systemctl enable --now kubelet
```

## 4. Pull the images every machine needs (all servers)
```shell
tee ./images.sh <<-"EOF"
#!/bin/bash
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh
```

## 5. Initialize the master node
Initialize the first master. Note: run this on k8s-master01, and change the IP on the second line to your own master1's address.

```shell
kubeadm init \
  --apiserver-advertise-address=172.16.0.11 \
  --control-plane-endpoint=cluster-endpoint \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16
```

On success it prints:

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join cluster-endpoint:6443 --token 5j3ocl.g8axi1p6ihpdpinf \
    --discovery-token-ca-cert-hash sha256:2955f32471251dd1f0a6dcce29f025fd2a9042e03276cff4332de8ad7f15a5ce \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join cluster-endpoint:6443 --token 5j3ocl.g8axi1p6ihpdpinf \
    --discovery-token-ca-cert-hash sha256:2955f32471251dd1f0a6dcce29f025fd2a9042e03276cff4332de8ad7f15a5ce
```

Run the following on k8s-master01:

```shell
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Check the cluster state:

```shell
[root@k8s-master01 ~]# kubectl get nodes    # list all cluster nodes
[root@k8s-master01 ~]# kubectl get pods -A  # list all running pods
```

## 6. Install the network component (only on k8s-master01)
```shell
[root@k8s-master01 ~]# curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
[root@k8s-master01 ~]# kubectl apply -f calico.yaml
```

## 7. Join worker nodes (using k8s-node01 as the example)
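Before joining, note that the bootstrap token printed by `kubeadm init` expires 24 hours after the init. If it has expired, run `kubeadm token create --print-join-command` on a control-plane node to get a fresh join command. As a minimal sketch, the join command is just three parts assembled together; the endpoint, token, and CA hash below are the example values from the init output above:

```shell
#!/bin/sh
# Sketch: assemble a worker join command from its three parts. The values are
# the example ones from the kubeadm init output above; in practice a fresh
# token comes from `kubeadm token create --print-join-command`.
make_join_cmd() {
  endpoint=$1; token=$2; ca_hash=$3
  printf 'kubeadm join %s --token %s --discovery-token-ca-cert-hash sha256:%s\n' \
    "$endpoint" "$token" "$ca_hash"
}

JOIN_CMD=$(make_join_cmd cluster-endpoint:6443 5j3ocl.g8axi1p6ihpdpinf \
  2955f32471251dd1f0a6dcce29f025fd2a9042e03276cff4332de8ad7f15a5ce)
echo "$JOIN_CMD"
```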
Join the node:

```shell
[root@k8s-node01 ~]# kubeadm join cluster-endpoint:6443 --token 5j3ocl.g8axi1p6ihpdpinf \
> --discovery-token-ca-cert-hash sha256:2955f32471251dd1f0a6dcce29f025fd2a9042e03276cff4332de8ad7f15a5ce
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
	[WARNING Hostname]: hostname "k8s-node01" could not be reached
	[WARNING Hostname]: hostname "k8s-node01": lookup k8s-node01 on 172.16.0.2:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with "kubectl -n kube-system get cm kubeadm-config -o yaml"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run "kubectl get nodes" on the control-plane to see this node join the cluster.
```

Check all nodes from a master:

```shell
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master01   Ready    control-plane,master   39m     v1.20.9
k8s-node01     Ready    <none>                 3m12s   v1.20.9
```

## 8. Join master nodes (using k8s-master02 as the example)
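The preflight output above warns that Docker uses the "cgroupfs" cgroup driver while "systemd" is recommended. One way to silence that warning is to add `"exec-opts": ["native.cgroupdriver=systemd"]` to the daemon.json created in the Docker install step (the mirror URL is the one configured there). A sketch that stages the file in a temp path for review before you copy it into place:

```shell
#!/bin/sh
# Sketch: daemon.json with the systemd cgroup driver added. Staged in a temp
# file for review; copy to /etc/docker/daemon.json and restart Docker to apply.
DAEMON_JSON=$(mktemp)
cat > "$DAEMON_JSON" <<'EOF'
{
  "registry-mirrors": ["https://sx15mtuf.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat "$DAEMON_JSON"
# To apply:
#   cp "$DAEMON_JSON" /etc/docker/daemon.json
#   systemctl daemon-reload && systemctl restart docker
```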
Joining directly fails with a certificate error:

```shell
[root@k8s-master02 ~]# kubeadm join cluster-endpoint:6443 --token 5j3ocl.g8axi1p6ihpdpinf --discovery-token-ca-cert-hash sha256:2955f32471251dd1f0a6dcce29f025fd2a9042e03276cff4332de8ad7f15a5ce --control-plane
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with "kubectl -n kube-system get cm kubeadm-config -o yaml"
error execution phase preflight: One or more conditions for hosting a new control plane instance is not satisfied.

failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.

To see the stack trace of this error execute with --v=5 or higher
```

First copy the shared Kubernetes certificates over from k8s-master01:

```shell
mkdir -p /etc/kubernetes/pki/etcd
scp root@k8s-master01:/etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/ca.key /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/sa.key /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/sa.pub /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/front-proxy-ca.key /etc/kubernetes/pki/
scp root@k8s-master01:/etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/
scp root@k8s-master01:/etc/kubernetes/pki/etcd/ca.key /etc/kubernetes/pki/etcd/
scp root@k8s-master01:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
```

Run the join again and it succeeds:

```
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run "kubectl get nodes" to see this node join the cluster.
```

Set up kubectl on the new master:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Check the whole cluster:

```shell
[root@k8s-master02 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   19h   v1.20.9
k8s-master02   Ready    control-plane,master   19m   v1.20.9
k8s-master03   Ready    control-plane,master   12m   v1.20.9
k8s-node01     Ready    <none>                 18h   v1.20.9
k8s-node02     Ready    <none>                 18h   v1.20.9
```
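The nine scp commands used when joining a master can be generated with a short loop instead of being typed by hand. A sketch that prints the commands for review; pipe the output to `sh` on the joining master to actually run them (k8s-master01 is the source node, as above):

```shell
#!/bin/sh
# Sketch: print the certificate-copy commands from the master-join section.
# Review the output, then pipe it to sh on the new master to execute it.
gen_cert_copy_cmds() {
  src=root@k8s-master01
  echo "mkdir -p /etc/kubernetes/pki/etcd"
  for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
    echo "scp $src:/etc/kubernetes/pki/$f /etc/kubernetes/pki/"
  done
  for f in ca.crt ca.key; do
    echo "scp $src:/etc/kubernetes/pki/etcd/$f /etc/kubernetes/pki/etcd/"
  done
  echo "scp $src:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf"
}

gen_cert_copy_cmds
```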
標(biāo)簽: 可以使用 環(huán)境變量 開機(jī)啟動