This article explains how to resolve failing systemctl status probes in Kubernetes pods, a problem many people run into in practice. Read on to see how to handle it.

In the glusterd container used by Heketi, a systemctl-based probe checks whether the glusterfs service is available, and this probe kept failing.
Investigation showed that on Ubuntu 18.04, the output of systemctl status glusterd.service is not what the Kubernetes livenessProbe expects, so the probe hung and timed out.
In short, systemctl status glusterd.service does not reliably report the real state of the service inside the container: it can hang, time out, and return an error status code.
The following commands report the real state of the service correctly:
systemctl is-active --quiet glusterd.service; echo $?;
or, equivalently:
systemctl is-active sshd >/dev/null 2>&1 && echo 0 || echo 1
Output: 0 when the service is active; a non-zero error code otherwise (systemctl is-active typically exits with 3 for an inactive unit).
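For a quick sanity check on a test node, you can compare the two commands while the service is stopped and then started (a sketch; it assumes glusterd is installed as a systemd unit on the host):

systemctl stop glusterd.service
systemctl status glusterd.service; echo $?        # verbose human-oriented report; exits 3 for a stopped unit
systemctl is-active glusterd.service; echo $?     # prints "inactive" and exits 3
systemctl start glusterd.service
systemctl is-active --quiet glusterd.service; echo $?   # prints nothing and exits 0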
The probe sections then look like this. Note that a Kubernetes exec probe judges success by the command's exit status, not by its printed output, so the trailing echo $? is dropped inside the manifest: with it, the shell would always exit 0 (the exit status of echo) and the probe could never fail. The echo $? above is only for inspecting the code interactively.

livenessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - systemctl is-active --quiet glusterd.service
  failureThreshold: 3
  initialDelaySeconds: 60
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 3
readinessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - systemctl is-active --quiet glusterd.service
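Before wiring this into the manifest, the command can be verified inside a running pod, since the probe executes exactly the same thing (a sketch; the pod name is hypothetical):

kubectl -n gluster exec glusterfs-daemon-x7k2p -- /bin/bash -c 'systemctl is-active --quiet glusterd.service; echo $?'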
The modified Kubernetes YAML manifest looks like this:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: glusterfs-daemon
  namespace: gluster
  labels:
    k8s-app: glusterfs-node
spec:
  selector:
    matchLabels:
      name: glusterfs-daemon
  template:
    metadata:
      labels:
        name: glusterfs-daemon
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: IfNotPresent
        name: glusterfs
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - systemctl is-active --quiet glusterd.service
          failureThreshold: 3
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - systemctl is-active --quiet glusterd.service
          failureThreshold: 3
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        resources: {}
        securityContext:
          capabilities: {}
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/heketi
          name: glusterfs-heketi
        - mountPath: /run
          name: glusterfs-run
        - mountPath: /run/lvm
          name: glusterfs-lvm
        - mountPath: /etc/glusterfs
          name: glusterfs-etc
        - mountPath: /var/log/glusterfs
          name: glusterfs-logs
        - mountPath: /var/lib/glusterd
          name: glusterfs-config
        - mountPath: /dev
          name: glusterfs-dev
        - mountPath: /sys/fs/cgroup
          name: glusterfs-cgroup
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /var/lib/heketi
          type: ""
        name: glusterfs-heketi
      - emptyDir: {}
        name: glusterfs-run
      - hostPath:
          path: /run/lvm
          type: ""
        name: glusterfs-lvm
      - hostPath:
          path: /etc/glusterfs
          type: ""
        name: glusterfs-etc
      - hostPath:
          path: /var/log/glusterfs
          type: ""
        name: glusterfs-logs
      - hostPath:
          path: /var/lib/glusterd
          type: ""
        name: glusterfs-config
      - hostPath:
          path: /dev
          type: ""
        name: glusterfs-dev
      - hostPath:
          path: /sys/fs/cgroup
          type: ""
        name: glusterfs-cgroup

Note that the systemd version, and therefore the options it supports, may differ across Linux distributions; run systemctl help to see the help for the version you have.
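To roll the change out and confirm that the probes behave, something like the following should work (a sketch; the manifest file name glusterfs-daemonset.yaml is an assumption):

kubectl apply -f glusterfs-daemonset.yaml
kubectl -n gluster get pods -o wide          # pods should become Ready once glusterd is active
kubectl -n gluster describe pod <pod-name>   # the Events section records any liveness/readiness failures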
That covers how to resolve systemctl status probe failures in Kubernetes pods. Thanks for reading.