
19 Everyday K8S Troubleshooting Cases! (k8srancher)

off999 2025-04-01 21:15

Problem 1: K8S cluster service access fails?

curl: (60) Peer's Certificate issuer is not recognized.

More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.

Cause analysis: the certificate is not recognized, typically because it is a custom/self-signed certificate or has expired.

Solution: renew the certificate.
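
A quick way to confirm an expired or self-signed certificate is to inspect what the endpoint actually serves (openssl assumed to be installed; replace the placeholder host and port with the real service address):

echo | openssl s_client -connect <service-host>:<port> 2>/dev/null | openssl x509 -noout -dates -issuer

On kubeadm-managed clusters, kubeadm certs check-expiration (available in newer kubeadm releases) also lists control-plane certificate expiry.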

Problem 2: K8S cluster service access fails?

curl: (7) Failed connect to 10.103.22.158:3000; Connection refused

Cause analysis: the port mapping is wrong; the service itself is running normally but cannot serve requests.

Solution: delete the svc and expose the port again.

kubectl delete svc nginx-deployment
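
The Service can then be re-created with the correct port mapping, for example (the port numbers below are assumptions based on the error above; adjust them to the real container port):

kubectl expose deployment nginx-deployment --port=3000 --target-port=80 --type=NodePort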

Problem 3: Exposing a K8S cluster service fails?

Error from server (AlreadyExists): services "nginx-deployment" already exists

Cause analysis: the container's service has already been exposed (a Service with that name already exists).

Solution: delete the existing svc and expose the port again.

Problem 4: The service provided by the K8S cluster cannot be reached from outside the cluster?

Cause analysis: the Service type is ClusterIP, so the service is not exposed outside the cluster.

Solution: change the Service type to NodePort; the service can then be reached through any node of the K8S cluster.

kubectl edit svc nginx-deployment
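
Instead of editing interactively, the type can also be switched with a one-line patch (same effect, assuming the Service is named nginx-deployment):

kubectl patch svc nginx-deployment -p '{"spec":{"type":"NodePort"}}'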

Problem 5: Pod status is ErrImagePull?

readiness-httpget-pod 0/1 ErrImagePull 0 10s

Cause analysis: the image cannot be pulled;

Warning Failed 59m (x4 over 61m) kubelet, k8s-node01 Error: ErrImagePull

Solution: switch to an image that can be pulled.

Problem 6: After creating a pod with init containers, its status is abnormal?

NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/2 0 20s

Cause analysis: the logs show the pod stuck in initialization; the pod's detailed description then pinpoints the reason the creation fails: the init containers have not finished running.

Error from server (BadRequest): container "myapp-container" in pod "myapp-pod" is waiting to start: PodInitializing

waiting for myservice

Server: 10.96.0.10
Address: 10.96.0.10:53

** server can't find myservice.default.svc.cluster.local: NXDOMAIN

*** Can't find myservice.svc.cluster.local: No answer
*** Can't find myservice.cluster.local: No answer
*** Can't find myservice.default.svc.cluster.local: No answer
*** Can't find myservice.svc.cluster.local: No answer
*** Can't find myservice.cluster.local: No answer

Solution: create the corresponding Service so that its name is registered with the cluster's CoreDNS; CoreDNS can then resolve the domain name that the pod's init container is waiting for.
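
A minimal sketch of such a Service (the port values are placeholders; what matters for the init container is simply that a Service named myservice exists and resolves in DNS):

apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376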

kubectl apply -f myservice.yaml

NAME READY STATUS RESTARTS AGE

myapp-pod 0/1 Init:1/2 0 27m
myapp-pod 0/1 PodInitializing 0 28m
myapp-pod 1/1 Running 0 28m

Problem 7: Pod with a liveness/readiness probe keeps entering CrashLoopBackOff?

readiness-httpget-pod 0/1 CrashLoopBackOff 1 13s
readiness-httpget-pod 0/1 Completed 2 20s
readiness-httpget-pod 0/1 CrashLoopBackOff 2 31s
readiness-httpget-pod 0/1 Completed 3 42s
readiness-httpget-pod 0/1 CrashLoopBackOff 3 53s

Cause analysis: an image problem causes the container to keep failing after restarts.

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 56m kubelet, k8s-node01 Pulling image "hub.atguigu.com/library/mylandmarktech/myapp:v1"
Normal Pulled 56m kubelet, k8s-node01 Successfully pulled image "hub.atguigu.com/library/mylandmarktech/myapp:v1"
Normal Created 56m (x3 over 56m) kubelet, k8s-node01 Created container readiness-httpget-container
Normal Started 56m (x3 over 56m) kubelet, k8s-node01 Started container readiness-httpget-container
Normal Pulled 56m (x2 over 56m) kubelet, k8s-node01 Container image "hub.atguigu.com/library/mylandmarktech/myapp:v1" already present on machine
Warning Unhealthy 56m kubelet, k8s-node01 Readiness probe failed: Get http://10.244.2.22:80/index1.html: dial tcp 10.244.2.22:80: connect: connection refused
Warning BackOff 56m (x4 over 56m) kubelet, k8s-node01 Back-off restarting failed container
Normal Scheduled 50s default-scheduler Successfully assigned default/readiness-httpget-pod to k8s-node01

Solution: switch to a correct image.
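
For reference, the readiness probe implied by the events above would sit in the pod spec roughly like this (path and port are taken from the probe URL in the events; the other values are assumptions):

containers:
- name: readiness-httpget-container
  image: hub.atguigu.com/library/mylandmarktech/myapp:v1
  readinessProbe:
    httpGet:
      path: /index1.html
      port: 80
    initialDelaySeconds: 1
    periodSeconds: 3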

Problem 8: Pod creation fails?

readiness-httpget-pod 0/1 Pending 0 0s
readiness-httpget-pod 0/1 Pending 0 0s
readiness-httpget-pod 0/1 ContainerCreating 0 0s
readiness-httpget-pod 0/1 Error 0 2s
readiness-httpget-pod 0/1 Error 1 3s
readiness-httpget-pod 0/1 CrashLoopBackOff 1 4s
readiness-httpget-pod 0/1 Error 2 15s
readiness-httpget-pod 0/1 CrashLoopBackOff 2 26s
readiness-httpget-pod 0/1 Error 3 37s
readiness-httpget-pod 0/1 CrashLoopBackOff 3 52s
readiness-httpget-pod 0/1 Error 4 82s

Cause analysis: an image problem prevents the container from starting (the application inside crashes on startup, as the log below shows).

[root@k8s-master01 ~]# kubectl logs readiness-httpget-pod
url.js:106
throw new errors.TypeError('ERR_INVALID_ARG_TYPE', 'url', 'string', url);
^
TypeError [ERR_INVALID_ARG_TYPE]: The "url" argument must be of type string. Received type undefined
at Url.parse (url.js:106:11)
at Object.urlParse [as parse] (url.js:100:13)
at module.exports (/myapp/node_modules/mongodb/lib/url_parser.js:17:23)
at connect (/myapp/node_modules/mongodb/lib/mongo_client.js:159:16)
at Function.MongoClient.connect (/myapp/node_modules/mongodb/lib/mongo_client.js:110:3)
at Object. (/myapp/app.js:12:13)
at Module._compile (module.js:641:30)
at Object.Module._extensions..js (module.js:652:10)
at Module.load (module.js:560:32)
at tryModuleLoad (module.js:503:12)
at Function.Module._load (module.js:495:3)
at Function.Module.runMain (module.js:682:10)
at startup (bootstrap_node.js:191:16)
at bootstrap_node.js:613:3
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 58m (x5 over 59m) kubelet, k8s-node01 Container image "hub.atguigu.com/library/myapp:v1" already present on machine
Normal Created 58m (x5 over 59m) kubelet, k8s-node01 Created container readiness-httpget-container
Normal Started 58m (x5 over 59m) kubelet, k8s-node01 Started container readiness-httpget-container
Warning BackOff 57m (x10 over 59m) kubelet, k8s-node01 Back-off restarting failed container
Normal Scheduled 3m35s default-scheduler Successfully assigned default/readiness-httpget-pod to k8s-node01

Solution: switch images.

Problem 9: Pod never reaches the Ready state?

readiness-httpget-pod 0/1 Running 0 116s

Cause analysis: the command run against the pod fails; the resource requested by the probe cannot be found.

Error from server (NotFound): pods "pod" not found
2021/06/11 07:10:14 [error] 30#30: *1 open() "/usr/share/nginx/html/index1.html" failed (2: No such file or directory), client: 10.244.2.1, server: localhost, request: "GET /index1.html HTTP/1.1", host: "10.244.2.25:80"
10.244.2.1 - - [11/Jun/2021:07:10:14 +0000] "GET /index1.html HTTP/1.1" 404 153 "-" "kube-probe/1.15" "-"
10.244.2.1 - - [11/Jun/2021:07:10:17 +0000] "GET /index1.html HTTP/1.1" 404 153 "-" "kube-probe/1.15" "-"
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 64m kubelet, k8s-node01 Container image "hub.atguigu.com/library/nginx" already present on machine
Normal Created 64m kubelet, k8s-node01 Created container readiness-httpget-container
Normal Started 64m kubelet, k8s-node01 Started container readiness-httpget-container
Warning Unhealthy 59m (x101 over 64m) kubelet, k8s-node01 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Scheduled 8m16s default-scheduler Successfully assigned default/readiness-httpget-pod to k8s-node01

Solution: exec into the container and create the resource that the YAML (probe) expects.
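
A hedged example of doing that for the 404 above (the file path comes from the nginx error log; the file content is arbitrary):

kubectl exec readiness-httpget-pod -- /bin/sh -c 'echo ok > /usr/share/nginx/html/index1.html'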

Problem 10: Pod creation fails?

error: error validating "myregistry-secret.yml": error validating data: ValidationError(Pod.spec.imagePullSecrets[0]): invalid type for io.k8s.api.core.v1.LocalObjectReference: got "string", expected "map"; if you choose to ignore these errors, turn validation off with --validate=false

Cause analysis: the YAML file content is wrong --- full-width (Chinese) characters were used;

Solution: fix the myregistrykey entry.
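
For reference, imagePullSecrets must be a list of maps, each carrying a name key written with ordinary half-width characters; a sketch of the relevant fragment (secret name taken from the solution above):

spec:
  imagePullSecrets:
  - name: myregistrykey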

Problem 11: The kube-flannel-ds-amd64-ndsf7 plugin pod's status is Init:0/1?

Troubleshooting: kubectl -n kube-system describe pod kube-flannel-ds-amd64-ndsf7 # check the pod's description;

Cause analysis: the k8s-slave1 node failed to pull the image.

Solution: log in to k8s-slave1, restart the docker service, and pull the image manually.

Then, on the k8s-master node, reinstall the plugin:

kubectl create -f kube-flannel.yml;kubectl get nodes

Problem 12: Creating a service gives status ErrImagePull?

Troubleshooting: kubectl describe pod test-nginx

Cause analysis: the image name used for the pull is wrong.

Solution: delete the failing pod and pull the image again with the correct name;

kubectl delete pod test-nginx;kubectl run test-nginx --image=10.0.0.81:5000/nginx:alpine

Problem 13: Cannot exec into a specific container?

Error from server (BadRequest): container volume-test-container is not valid for pod volume-test-pod

Cause analysis: the containers field is duplicated in the YAML, so the pod does not actually include that container.

Solution: remove the redundant containers field from the YAML and recreate the pod.
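
The key point is that all containers must sit under a single containers list; a sketch (image and command values are assumptions):

spec:
  containers:
  - name: volume-test-container
    image: busybox
    command: ["sleep", "3600"]
  - name: volume-test-sidecar
    image: busybox
    command: ["sleep", "3600"]

After recreating the pod, the desired container can be entered with kubectl exec -it volume-test-pod -c volume-test-container -- /bin/sh.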

Problem 14: Creating a PV fails?

persistentvolume/nfspv1 unchanged
persistentvolume/nfspv01 created
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolume\",\"metadata\":{\"annotations\":{},\"name\":\"nfspv01\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"capacity\":{\"storage\":\"5Gi\"},\"nfs\":{\"path\":\"/nfs2\",\"server\":\"192.168.66.100\"},\"persistentVolumeReclaimPolicy\":\"Retain\",\"storageClassName\":\"nfs\"}}\n"}},"spec":{"nfs":{"path":"/nfs2"}}}
to:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "nfspv01", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolume\",\"metadata\":{\"annotations\":{},\"name\":\"nfspv01\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"capacity\":{\"storage\":\"5Gi\"},\"nfs\":{\"path\":\"/nfs1\",\"server\":\"192.168.66.100\"},\"persistentVolumeReclaimPolicy\":\"Retain\",\"storageClassName\":\"nfs\"}}\n"] "creationTimestamp":"2021-06-25T01:54:24Z" "finalizers":["kubernetes.io/pv-protection"] "name":"nfspv01" "resourceVersion":"325674" "selfLink":"/api/v1/persistentvolumes/nfspv01" "uid":"89cb1d15-8012-47f0-aee6-6507bb624387"] "spec":map["accessModes":["ReadWriteOnce"] "capacity":map["storage":"5Gi"] "nfs":map["path":"/nfs1" "server":"192.168.66.100"] "persistentVolumeReclaimPolicy":"Retain" "storageClassName":"nfs" "volumeMode":"Filesystem"] "status":map["phase":"Available"]]}
for: "PV.yml": PersistentVolume "nfspv01" is invalid: spec.persistentvolumesource: Forbidden: is immutable after creation

Cause analysis: the PV's name field is duplicated --- the manifest reuses the name of an existing PV while changing its NFS path, and a PV's volume source is immutable after creation.

Solution: change the PV's name field.
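
A sketch of a corrected manifest that uses a new PV name instead of patching the existing one (the name nfspv02 is hypothetical; the path and server come from the patch shown above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv02
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs2
    server: 192.168.66.100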

Problem 15: Pod cannot mount a PVC?

Cause analysis: the PVC stays unbound, so the pod cannot mount it.

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 60s default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 2 times)

The PVC's accessModes do not match any usable PV. Since only PVs larger than 1G with accessModes RWO can be mounted, only one pod is created successfully; the second pod stays Pending, and when the pods are created in sequence the third pod never gets created at all;

Solution: change the accessModes in the PVC YAML, or the accessModes of the PV, so that they match.
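
A sketch of a PVC whose accessModes and size request can bind to a matching PV such as the nfs ones above (all values are assumptions for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi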

Problem 16: After a pod uses a PV, its content cannot be accessed?

Cause analysis: the NFS volume contains no files, or its permissions are wrong.

Solution: create files in the NFS volume and grant the proper permissions.
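
For example, on the NFS server (the export path follows the PV above; the file name, content and mode are assumptions):

mkdir -p /nfs2
echo "hello from nfs" > /nfs2/index.html
chmod -R 755 /nfs2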

Problem 17: Viewing node status fails?

Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)

Cause analysis: the heapster service is not installed.

Solution: install the Prometheus monitoring component.

Problem 18: Pod stays in the Pending state?

Cause analysis: a pod had already been deployed with the same image, so no node was left that could be scheduled; the events show that no node matched the node selector.

Events:

Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 9s (x13 over 14m) default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.

Solution: delete all the old pods, then deploy the pod again.
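
To see what the scheduler is matching against, the node labels can be inspected and, if needed, a label added (the disktype=ssd label below is purely a hypothetical example):

kubectl get nodes --show-labels
kubectl label nodes k8s-node01 disktype=ssd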

Problem 19: Installing a chart with helm fails?

[root@k8s-master01 hello-world]# helm install
Error: This command needs 1 argument: chart name
[root@k8s-master01 hello-world]# helm install ./
Error: no Chart.yaml exists in directory "/root/hello-world"
Cause analysis: the chart file name is wrong (Helm expects Chart.yaml, with a capital C).

Solution: mv chart.yaml Chart.yaml

Problem 20: helm fails to upgrade a release?

[root@k8s-master01 hello-world]# helm upgrade joyous-wasp ./
UPGRADE FAILED
ROLLING BACK
Error: render error in "hello-world/templates/deployment.yaml": template: hello-world/templates/deployment.yaml:14:35: executing "hello-world/templates/deployment.yaml" at <.values.image.reposi...>: can't evaluate field image in type interface {}
Error: UPGRADE FAILED: render error in "hello-world/templates/deployment.yaml": template: hello-world/templates/deployment.yaml:14:35: executing "hello-world/templates/deployment.yaml" at <.values.image.reposi...>: can't evaluate field image in type interface {}

Cause analysis: a syntax error in the chart's YAML --- the template expression .values.image.repository cannot be evaluated (Helm's built-in values object is .Values, with a capital V, and the referenced key must exist in values.yaml).

Solution: fix the YAML so the reference resolves.
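
A sketch of a consistent template/values pair (names are assumptions; the point is that the path referenced as .Values.image.repository must actually exist in values.yaml):

# values.yaml
image:
  repository: hub.atguigu.com/library/myapp
  tag: v1

# templates/deployment.yaml (image line only)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"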

Problem 21: etcd fails to start?

[root@k8s-master01 ~]# systemctl enable --now etcd
Created symlink from /etc/systemd/system/etcd3.service to /usr/lib/systemd/system/etcd.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Job for etcd.service failed because a timeout was exceeded. See "systemctl status etcd.service" and "journalctl -xe" for details.

Cause analysis: the authentication failure could stem from certificates, configuration, or ports. The configuration matched the etcd version and the certificate generation process was valid; in the end the cause turned out to be that the port was already occupied, which made authentication fail.

[root@k8s-master01 ~]# systemctl status etcd
● etcd.service - Etcd.service
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: activating (start) since Wed 2021-07-14 09:53:03 CST; 1min 6s ago
Docs: https://coreos.com/etcd/docs/latest/
Main PID: 39692 (etcd)
CGroup: /system.slice/etcd.service
└─39692 /usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46168" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46166" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46170" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46172" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46176" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46174" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46178" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:09 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46180" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:10 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46182" (error "remote error: tls: bad certificate", ServerName "")
Jul 14 09:54:10 k8s-master01 etcd[39692]: rejected connection from "192.168.0.108:46186" (error "remote error: tls: bad certificate", ServerName "") 



Solution: kill the process occupying port 2379 and restart etcd.
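
One way to locate and stop whatever is holding the port (assuming a systemd-managed etcd; <pid> is the process id reported by the first command):

ss -lntp | grep 2379
kill <pid>
systemctl restart etcd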

Problem 22: A Service reverse-proxying an external name fails on cross-domain access?

Connecting to externalname (183.232.231.172:80)
wget: server returned error: HTTP/1.1 403 Forbidden

Cause analysis: the pod's cross-domain request is blocked by Baidu (403 Forbidden);

Solution: adjust the access policy (details omitted).

References:

https://www.cnblogs.com/chalon/p/14415252.html

https://mp.weixin.qq.com/s/2tK-w7MhzxqyoMv9C38tHA
