Notes on Troubleshooting Common Problems in a Kubernetes Cluster

2022-12-12 17:01:45 Source: 51CTO Blog

Foreword


I am studying K8s, so I am organizing these study notes. The theory comes from Chapter 11 of 《Kubernetes权威指南:从Docker到Kubernetes实践全接触》 (The Definitive Guide to Kubernetes), 4th edition.

The art of every age strives to give language to the sacred, silent longing within us. -- Hermann Hesse, Peter Camenzind




Because there is no concrete demo, the article is somewhat abstract, closer to a set of guiding principles, and can be a dry read. So here is the practical part up front: a list of websites for researching problems. I may add concrete cases later. If you are pressed for time while trying to solve a problem, search for a description of it on the platforms below.

Websites for researching problems

Monitoring, logging, and debugging tasks in the Kubernetes documentation: https://kubernetes.io/docs/tasks/debug-application-cluster/

The official Kubernetes forum: https://discuss.kubernetes.io/ (access may require a proxy in some regions)

The Kubernetes issue list on GitHub: https://github.com/kubernetes/kubernetes/issues

Kubernetes questions on StackOverflow: https://stackoverflow.com/questions/tagged/kubernetes

The Kubernetes Slack workspace: https://kubernetes.slack.com/ (requires a Google account)

Troubleshooting common problems in a Kubernetes cluster

To track down problems with containerized applications running in a Kubernetes cluster, we commonly use the following troubleshooting methods.

Look at the current runtime information of the Kubernetes objects involved, especially the Events associated with them. Events record the subject, the first and most recent occurrence time, the occurrence count, and the reason for each event, which is very valuable when diagnosing a failure. By inspecting an object's runtime data we can also spot obvious problems such as wrong parameters, broken associations between objects, or abnormal state. Because many kinds of Kubernetes objects are interrelated, this step may involve examining several related objects.

For problems with a service or a container, it may be necessary to go inside the container to diagnose the fault; in that case, inspect the container's logs to locate the specific problem.

For some complex problems, such as cluster-wide issues like Pod scheduling, it may be necessary to combine the Kubernetes service logs from every node in the cluster: for example, collect the kube-apiserver, kube-scheduler, and kube-controller-manager logs on the Master, plus the kubelet and kube-proxy logs on each Node.

Viewing system Events

After a Pod is created in a Kubernetes cluster, it can be listed with the kubectl get pods command, but that command shows only limited information. Kubernetes provides the kubectl describe pod command to view the details of a Pod, for example:

The kubectl describe pod command shows the configuration the Pod was created with, its current state, and the most recent Events related to the Pod; the event information is very useful for troubleshooting.

If a Pod stays in the Pending state, kubectl describe will reveal the specific reason:

- No Node is available for scheduling, for example because of a Pod port conflict or because of Taints.
- Resource quota management is enabled, but the target node does not have enough resources.
- The image download failed, and so on.
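The checks above can be sketched as a small offline helper. This is only an illustration: the event messages and the mapping below are hypothetical examples of the text kubectl describe typically prints, not an official API or an exhaustive list.

```python
# Offline sketch: map common scheduling/image-pull event text to a likely cause.
# The needle strings are hypothetical examples of messages seen in
# `kubectl describe pod` Events, not an official list.

CAUSES = [
    ("Insufficient cpu", "not enough CPU on any schedulable node"),
    ("Insufficient memory", "not enough memory on any schedulable node"),
    ("didn't tolerate", "node Taints block scheduling"),
    ("port is already allocated", "hostPort conflict on the node"),
    ("ErrImagePull", "image download failed"),
    ("ImagePullBackOff", "image download failed"),
]

def classify(event_message: str) -> str:
    """Return a likely cause for a Pending Pod given one event message."""
    for needle, cause in CAUSES:
        if needle in event_message:
            return cause
    return "unknown - inspect `kubectl describe pod` output manually"

msg = "0/3 nodes are available: 3 node(s) didn't tolerate taint ..."
print(classify(msg))  # node Taints block scheduling
```

A helper like this is no substitute for reading the events, but it captures the triage order used in the rest of this section.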

Viewing the details of a Pod

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl describe pods etcd-vms81.liruilongs.github.io -n kube-system
# Basic information recorded when the Pod was created
Name:                 etcd-vms81.liruilongs.github.io
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 vms81.liruilongs.github.io/192.168.26.81
Start Time:           Tue, 25 Jan 2022 21:54:20 +0800
Labels:               component=etcd
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.26.81:2379
                      kubernetes.io/config.hash: 1502584f9ab841720212d4341d723ba2
                      kubernetes.io/config.mirror: 1502584f9ab841720212d4341d723ba2
                      kubernetes.io/config.seen: 2021-12-13T00:01:04.834825537+08:00
                      kubernetes.io/config.source: file
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running    # the Pod's current running state
IP:                   192.168.26.81
IPs:
  IP:           192.168.26.81
Controlled By:  Node/vms81.liruilongs.github.io
Containers:
  etcd:    # basic information about the container
    Container ID:  docker://20d99a98a4c2590e8726916932790200ba1cf93c48f3c84ca1298ffdcaa4f28a
    Image:         registry.aliyuncs.com/google_containers/etcd:3.5.0-0
    Image ID:      docker-pullable://registry.aliyuncs.com/google_containers/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d
    Port:
    Host Port:
    Command:    # the container's startup parameters
      etcd
      --advertise-client-urls=https://192.168.26.81:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --initial-advertise-peer-urls=https://192.168.26.81:2380
      --initial-cluster=vms81.liruilongs.github.io=https://192.168.26.81:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379,https://192.168.26.81:2379
      --listen-metrics-urls=http://127.0.0.1:2381
      --listen-peer-urls=https://192.168.26.81:2380
      --name=vms81.liruilongs.github.io
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    State:          Running
      Started:      Tue, 25 Jan 2022 21:54:20 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 24 Jan 2022 08:35:16 +0800
      Finished:     Tue, 25 Jan 2022 21:53:56 +0800
    Ready:          True
    Restart Count:  128
    Requests:    # the resources involved
      cpu:        100m
      memory:     100Mi
    Liveness:     http-get http://127.0.0.1:2381/health delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get http://127.0.0.1:2381/health delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:
    Mounts:
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
      /var/lib/etcd from etcd-data (rw)
Conditions:    # after startup the Pod performs a series of self-checks:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:    # host volumes mapped into the Pod; defined here as shared host paths
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:
Tolerations:       :NoExecute op=Exists
Events:
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

Viewing the Nodes in the cluster and their details

[root@liruilong k8s]# kubectl get nodes
NAME        STATUS    AGE
127.0.0.1   Ready     2d
[root@liruilong k8s]# kubectl describe node 127.0.0.1
# Basic Node information: name, labels, creation time, etc.
Name:                   127.0.0.1
Role:
Labels:                 beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/hostname=127.0.0.1
Taints:
CreationTimestamp:      Fri, 27 Aug 2021 00:07:09 +0800
Phase:
# The Node's current running state. After startup, a Node performs a series of self-checks:
# e.g. whether the disk is full (if so, it is marked OutOfDisk=True),
# then whether memory is insufficient (if so, MemoryPressure=True);
# if everything is normal, the Node is set to Ready (Ready=True).
# Ready means the Node is healthy and the Master may schedule new work (such as Pods) onto it.
Conditions:
  Type                  Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----                  ------  -----------------                       ------------------                      ------                          -------
  OutOfDisk             False   Sun, 29 Aug 2021 23:05:53 +0800         Sat, 28 Aug 2021 00:30:35 +0800         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  MemoryPressure        False   Sun, 29 Aug 2021 23:05:53 +0800         Fri, 27 Aug 2021 00:07:09 +0800         KubeletHasSufficientMemory      kubelet has sufficient memory available
  DiskPressure          False   Sun, 29 Aug 2021 23:05:53 +0800         Fri, 27 Aug 2021 00:07:09 +0800         KubeletHasNoDiskPressure        kubelet has no disk pressure
  Ready                 True    Sun, 29 Aug 2021 23:05:53 +0800         Sat, 28 Aug 2021 00:30:35 +0800         KubeletReady                    kubelet is posting ready status
# The Node's host addresses and hostname.
Addresses:              127.0.0.1,127.0.0.1,127.0.0.1
# Total resources on the Node: the system resources available, including CPU, memory,
# and the maximum number of schedulable Pods. Note that Kubernetes already has
# experimental support for GPU resource allocation (alpha.kubernetes.io/nvidia-gpu=0).
Capacity:
 alpha.kubernetes.io/nvidia-gpu:        0
 cpu:                                   1
 memory:                                1882012Ki
 pods:                                  110
# Allocatable resources: the amount of resources currently available for allocation on the Node.
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:        0
 cpu:                                   1
 memory:                                1882012Ki
 pods:                                  110
# Host system information: the host's unique UUID, Linux kernel version, OS type and version,
# Kubernetes version, kubelet and kube-proxy versions, etc.
System Info:
 Machine ID:                    963c2c41b08343f7b063dddac6b2e486
 System UUID:                   EB90EDC4-404C-410B-800F-3C65816C0E2D
 Boot ID:                       4a9349b0-ce4b-4b4a-8766-c5c4256bb80b
 Kernel Version:                3.10.0-1160.15.2.el7.x86_64
 OS Image:                      CentOS Linux 7 (Core)
 Operating System:              linux
 Architecture:                  amd64
 Container Runtime Version:     docker://1.13.1
 Kubelet Version:               v1.5.2
 Kube-Proxy Version:            v1.5.2
ExternalID:                     127.0.0.1
# Summary of the Pods currently running on the Node
Non-terminated Pods:            (3 in total)
  Namespace                     Name                    CPU Requests    CPU Limits      Memory Requests Memory Limits
  ---------                     ----                    ------------    ----------      --------------- -------------
  default                       mysql-2cpt9             0 (0%)          0 (0%)          0 (0%)          0 (0%)
  default                       myweb-53r32             0 (0%)          0 (0%)          0 (0%)          0 (0%)
  default                       myweb-609w4             0 (0%)          0 (0%)          0 (0%)          0 (0%)
# Summary of allocated resources, e.g. minimum requested and maximum allowed usage
# as a percentage of the system total.
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ------------  ----------      --------------- -------------
  0 (0%)        0 (0%)          0 (0%)          0 (0%)
# Events related to the Node.
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  4h            27m             3       {kubelet 127.0.0.1}                     Warning         MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "myweb-609w4_default(01d719dd-08b1-11ec-9d6a-00163e1220cb)". Falling back to DNSDefault policy.
  25m           25m             1       {kubelet 127.0.0.1}                     Warning         MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "mysql-2cpt9_default(1c9353ba-08d7-11ec-9d6a-00163e1220cb)". Falling back to DNSDefault policy.
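The node self-checks described in the comments above (OutOfDisk, MemoryPressure, DiskPressure, Ready) reduce to a simple predicate. A minimal sketch over a plain dict of condition values; the helper itself is illustrative and not part of any Kubernetes client library:

```python
def node_healthy(conditions: dict) -> bool:
    """True when the Node passed its self-checks and can accept new Pods:
    all pressure-type conditions are "False" and Ready is "True"."""
    pressure = ("OutOfDisk", "MemoryPressure", "DiskPressure")
    if any(conditions.get(c) == "True" for c in pressure):
        return False
    return conditions.get("Ready") == "True"

# The condition values from the describe output above:
conds = {"OutOfDisk": "False", "MemoryPressure": "False",
         "DiskPressure": "False", "Ready": "True"}
print(node_healthy(conds))  # True
```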

Viewing container logs

To examine the logs produced by the application inside a container, use the kubectl logs <pod-name> command.

Here we print the logs of the etcd database and look for anomalies by filtering for the error keyword:

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl logs etcd-vms81.liruilongs.github.io -n kube-system | grep -i error | head -5
{"level":"info","ts":"2022-01-25T13:54:33.191Z","caller":"wal/repair.go:96","msg":"repaired","path":"/var/lib/etcd/member/wal/0000000000000014-0000000000185aba.wal","error":"unexpected EOF"}
{"level":"info","ts":"2022-01-25T13:54:33.192Z","caller":"etcdserver/storage.go:109","msg":"repaired WAL","error":"unexpected EOF"}
{"level":"warn","ts":"2022-01-25T13:54:33.884Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:53950","server-name":"","error":"EOF"}
{"level":"warn","ts":"2022-01-25T13:54:33.885Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:53948","server-name":"","error":"EOF"}
{"level":"warn","ts":"2022-01-28T03:00:37.549Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"628.230855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"","error":"context canceled"}
┌──[root@vms81.liruilongs.github.io]-[~]
└─$
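Because etcd emits one JSON object per log line, the grep above can also be done structurally. A small sketch that parses each line and keeps entries whose error field is set; the first sample line is copied from the output above, the second is a made-up non-error line:

```python
import json

def error_entries(lines):
    """Collect (level, msg, error) from JSON log lines that carry an error field."""
    out = []
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip any non-JSON lines (banners, tracebacks, etc.)
        if entry.get("error"):
            out.append((entry.get("level"), entry.get("msg"), entry["error"]))
    return out

sample = [
    '{"level":"warn","ts":"2022-01-25T13:54:33.884Z","msg":"rejected connection","error":"EOF"}',
    '{"level":"info","ts":"2022-01-25T13:54:40.000Z","msg":"serving client traffic"}',
]
print(error_entries(sample))  # [('warn', 'rejected connection', 'EOF')]
```

Parsing beats grep when you want to filter on level or timestamp ranges rather than raw substrings.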

Viewing Kubernetes service logs

If Kubernetes was installed on a Linux system and its services are managed with systemd, the systemd journal takes over the log output of the service programs. In such an environment, system service logs can be examined with systemctl status or the journalctl tool. For example:

Check the information printed when the service starts; from it you can determine which configuration files the service loaded and how its startup parameters are configured.

┌──[root@vms81.liruilongs.github.io]-[~]
└─$systemctl status kubelet.service -l
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since 二 2022-01-25 21:53:35 CST; 6 days ago
     Docs: https://kubernetes.io/docs/
 Main PID: 1014 (kubelet)
   Memory: 208.2M
   CGroup: /system.slice/kubelet.service
           └─1014 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.5
2月 01 17:47:14 vms81.liruilongs.github.io kubelet[1014]: W0201 17:47:14.258523    1014 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1b874bfdef201d69db10b200b8f47d5.slice/docker-c20fa960cfebd38172e123a5d87ecd499518bf22381f7aaa62d57131e7eb1aae.scope": unable to determine device info for dir: /var/lib/docker/overlay2/07d7695f2c479fbd0b654016345fcbacd0838276fb57f8291f993ed6799fae8d/diff: stat failed on /var/lib/docker/overlay2/07d7695f2c479fbd0b654016345fcbacd0838276fb57f8291f993ed6799fae8d/diff with error: no such file or directory, continuing to push stats
...

Use journalctl to view the service logs; here we look for lines containing the error keyword in the kubelet service log:

┌──[root@vms81.liruilongs.github.io]-[~]
└─$journalctl -u kubelet.service | grep -i error | head -2
1月 25 21:53:55 vms81.liruilongs.github.io kubelet[1014]: I0125 21:53:55.865441    1014 docker_service.go:264] "Docker Info" dockerInfo=&{ID:HN3K:C6LG:QGV7:N2CG:VELF:CJ6T:HFR5:EEKH:HLPO:CDEU:GN3E:QAJJ Containers:32 ContainersRunning:11 ContainersPaused:0 ContainersStopped:21 Images:32 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:39 SystemTime:2022-01-25T21:53:55.833509372+08:00 LoggingDriver:json-file CgroupDriver:systemd CgroupVersion:1 NEventsListener:0 KernelVersion:3.10.0-693.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSVersion:7 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000a8f960 NCPU:2 MemTotal:4126896128 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vms81.liruilongs.github.io Labels:[] ExperimentalBuild:false ServerVersion:20.10.9 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:} io.containerd.runtime.v1.linux:{Path:runc Args:[] Shim:} runc:{Path:runc Args:[] Shim:}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:5b46e404f6b9f661a205e28d59c982d3634148f8 Expected:5b46e404f6b9f661a205e28d59c982d3634148f8} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: DefaultAddressPools:[]
1月 25 21:53:56 vms81.liruilongs.github.io kubelet[1014]: E0125 21:53:56.293100    1014 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://192.168.26.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/vms81.liruilongs.github.io?timeout=10s": dial tcp 192.168.26.81:6443: connect: connection refused
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

If systemd is not used to capture the standard output of the Kubernetes services, the log directory can be specified through the log-related startup parameters. For the static control-plane Pods, these startup parameters can be found by inspecting the pod definition.

Viewing the startup parameters of kube-controller-manager and its authentication-related configuration files

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl describe pod kube-controller-manager-vms81.liruilongs.github.io -n kube-system | grep -i -A 20 command
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --cluster-cidr=10.244.0.0/16
      --cluster-name=kubernetes
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --port=0
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --service-cluster-ip-range=10.96.0.0/12
      --use-service-account-credentials=true
    State:          Running
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl describe pod kube-controller-manager-vms81.liruilongs.github.io -n kube-system | grep kubeconfig
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
  kubeconfig:
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl describe pod kube-controller-manager-vms81.liruilongs.github.io -n kube-system | grep -i -A 20 Volumes
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-pki:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pki
    HostPathType:  DirectoryOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

For problems with Pod resource objects, such as a Pod that cannot be created, a Pod that stops right after starting, or Pod replicas that cannot be scaled up: first determine which node the Pod is on, then log in to that node and search the kubelet log for the Pod's complete record to troubleshoot.

For problems related to Pod scaling or to RCs, the key clues are most likely found in the kube-controller-manager and kube-scheduler logs.

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl logs kube-scheduler-vms81.liruilongs.github.io -n kube-system
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl logs kube-controller-manager-vms81.liruilongs.github.io -n kube-system

kube-proxy is often overlooked, because even if it stops unexpectedly, Pods still report a normal status, yet access to certain Services fails. Such errors are usually closely related to the kube-proxy service on each node. When you run into them, first check the kube-proxy logs, and also check the firewall; pay special attention to any suspicious manually added firewall rules.

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl logs kube-proxy-tbwz5 -n kube-system

Common problems

A Pod stays in the Pending state because the pause image cannot be downloaded.

The Pod is created successfully, but its RESTARTS count keeps growing: the container's startup command does not stay in the foreground.
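This failure mode shows up as a climbing RESTARTS count or CrashLoopBackOff: if the startup command daemonizes, the container's main process (PID 1) exits and kubelet restarts the container. A minimal hypothetical manifest illustrating the fix (the Pod name is a placeholder, not from the original article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-foreground        # hypothetical example Pod
spec:
  containers:
  - name: nginx
    image: nginx
    # BAD (restart loop): a command that daemonizes, e.g.
    #   command: ["sh", "-c", "nginx"]
    # nginx forks into the background, PID 1 exits, kubelet restarts the container.
    # GOOD: keep the main process in the foreground:
    command: ["nginx", "-g", "daemon off;"]
```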

A Service cannot be reached by its service name

Inside a Kubernetes cluster, running microservices should be accessed by service name wherever possible, but sometimes this fails. Because service access involves DNS resolution of the service name, load distribution by the kube-proxy component, and the state of the backend Pod list, the problem can be examined from the following angles.

1. Check whether the Service's backend Endpoints are normal

Use the kubectl get endpoints command to list a Service's backend Endpoints. If the list is empty, possible reasons include:

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get svc
NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
kube-dns                            ClusterIP   10.96.0.10                     53/UDP,53/TCP,9153/TCP         50d
liruilong-kube-prometheus-kubelet   ClusterIP   None                           10250/TCP,10255/TCP,4194/TCP   16d
metrics-server                      ClusterIP   10.111.104.173                 443/TCP                        50d
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get endpoints
NAME                                ENDPOINTS                                                                 AGE
kube-dns                            10.244.88.66:53,10.244.88.67:53,10.244.88.66:53 + 3 more...               50d
liruilong-kube-prometheus-kubelet   192.168.26.81:10250,192.168.26.82:10250,192.168.26.83:10250 + 6 more...   16d
metrics-server                                                                                                50d
┌──[root@vms81.liruilongs.github.io]-[~]
└─$
- The Service's Label Selector does not match the Pods' Labels, so no Pod can serve as a backend.
- The backend Pods never reach the Ready state (check the Pod status further with kubectl get pods).
- The Service's targetPort differs from the Pods' containerPort: the port the container exposes is not the port the Service forwards to, so targetPort must be set to the container's port.
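These three checks can be expressed as a tiny decision function. The Service and Pod dicts below are hypothetical stand-ins for what `kubectl get svc/pod -o yaml` would show; the sketch only illustrates the order of the checks:

```python
def diagnose(service: dict, pod: dict) -> str:
    """Explain why a Pod would (not) appear behind a Service's Endpoints."""
    selector = service["selector"]
    if not all(pod["labels"].get(k) == v for k, v in selector.items()):
        return "label selector does not match the Pod's labels"
    if not pod["ready"]:
        return "Pod has not reached the Ready state"
    if pod["containerPort"] != service["targetPort"]:
        return "Service targetPort != container's containerPort"
    return "Pod would appear in the Endpoints list"

# hypothetical objects for illustration
service = {"selector": {"app": "myweb"}, "port": 80, "targetPort": 8080}
pod_ok  = {"labels": {"app": "myweb"}, "ready": True,  "containerPort": 8080}
pod_bad = {"labels": {"app": "web"},   "ready": True,  "containerPort": 8080}
print(diagnose(service, pod_ok))   # Pod would appear in the Endpoints list
print(diagnose(service, pod_bad))  # label selector does not match the Pod's labels
```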

2. Check whether the Service name can be correctly resolved to its ClusterIP

This can be checked by pinging the Service's DNS name from a client container. If the ping resolves to the Service's ClusterIP, the DNS service is resolving the Service name correctly; if the ClusterIP cannot be obtained, the cluster's DNS service is probably not working.
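The DNS name being resolved follows the standard Service naming scheme <service>.<namespace>.svc.<cluster-domain>. A trivial helper for building it; the default cluster domain cluster.local is assumed here:

```python
def service_fqdn(name: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    """Build the in-cluster DNS name for a Service."""
    return f"{name}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("kube-dns", "kube-system"))
# kube-dns.kube-system.svc.cluster.local
```

Within the same namespace the short name alone also resolves, thanks to the search domains in the Pod's /etc/resolv.conf.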

3. Check whether kube-proxy's forwarding rules are correct

The kube-proxy service can run in either IPVS or iptables load-distribution mode.

In IPVS mode, use the ipvsadm tool to inspect the IPVS rules on each Node and verify that the rules for the Service's ClusterIP are set correctly. In iptables mode, inspect the iptables rules on each Node and verify the same thing.
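For the IPVS case, a quick sanity check is whether the Service's ClusterIP:port appears as a virtual server with at least one real server behind it. The sketch below parses text shaped like `ipvsadm -Ln` output; the sample text is made up for illustration:

```python
# Hypothetical excerpt in the shape of `ipvsadm -Ln` output.
SAMPLE = """\
TCP  10.96.0.10:53 rr
  -> 10.244.88.66:53    Masq  1  0  0
  -> 10.244.88.67:53    Masq  1  0  0
TCP  10.111.104.173:443 rr
"""

def backends(ipvs_text: str, vip_port: str):
    """Real-server addresses listed under the given virtual server."""
    in_section, servers = False, []
    for line in ipvs_text.splitlines():
        if line.startswith(("TCP", "UDP")):
            in_section = vip_port in line       # enter/leave a virtual-server block
        elif in_section and line.strip().startswith("->"):
            servers.append(line.split()[1])     # the real server ip:port
    return servers

print(backends(SAMPLE, "10.96.0.10:53"))        # two backends behind the VIP
print(backends(SAMPLE, "10.111.104.173:443"))   # [] -> VIP exists but has no backends
```

A VIP with an empty backend list matches the empty-Endpoints symptom discussed in check 1.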

Getting help

Websites and communities

Monitoring, logging, and debugging tasks in the Kubernetes documentation: https://kubernetes.io/docs/tasks/debug-application-cluster/

The official Kubernetes forum: https://discuss.kubernetes.io/ (access may require a proxy in some regions)

The Kubernetes issue list on GitHub: https://github.com/kubernetes/kubernetes/issues

Kubernetes questions on StackOverflow: https://stackoverflow.com/questions/tagged/kubernetes

The Kubernetes Slack workspace: https://kubernetes.slack.com/ (requires a Google account)

