
etcd Cluster Backup and Restore

Common kubectl commands and parameters:
https://kubernetes.io/zh-cn/docs/reference/kubectl/quick-reference/
Configure kubectl command completion:

apt install bash-completion
kubectl completion bash > /data/k8s-completion.sh
echo "source /data/k8s-completion.sh  " >> /etc/profile
source /etc/profile
Dump an existing object's YAML:
kubectl get deployments.apps -n myserver myserver-nginx-deployment -o yaml > nginx-deployment.yaml
Describe a pod:
kubectl describe pod myserver-nginx-deployment-597d966577-hd5fq -n myserver
Exec into a specific container:
kubectl exec -it  myserver-nginx-deployment-597d966577-hd5fq -c myserver-nginx-container -n myserver -- sh
Tail a pod's stdout logs:
kubectl logs --tail 100 -f -n myserver myserver-nginx-deployment-597d966577-br99n
Scale the replica count manually:
kubectl scale --replicas=2 deployment -n myserver myserver-nginx-deployment
List resources and their short names:
kubectl api-resources 
Difference between cordon and drain:
drain evicts the pods on a node so they are rescheduled onto other nodes, and marks the node SchedulingDisabled so nothing new can be scheduled on it.
cordon only marks the node SchedulingDisabled; the pods already running on it are unaffected.
kubectl drain 172.31.7.113
kubectl cordon 172.31.7.113
kubectl uncordon 172.31.7.113
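The cordon/drain/uncordon commands above are typically run as one maintenance cycle. A minimal sketch, assuming the usual drain flags for DaemonSet and emptyDir handling; the `node_maintenance` helper name and the overridable kubectl path are illustrative, not part of kubectl itself:

```shell
#!/bin/bash
# Sketch: take a node out of service, perform maintenance, bring it back.
# The kubectl binary is parameterized so the flow can be dry-run or tested.
node_maintenance() {
  local node=$1 kubectl=${2:-kubectl}
  "$kubectl" cordon "$node" || return 1    # SchedulingDisabled; running pods stay
  "$kubectl" drain "$node" --ignore-daemonsets --delete-emptydir-data || return 1
  # ... perform OS or hardware maintenance here ...
  "$kubectl" uncordon "$node"              # make the node schedulable again
}
```

Usage against this cluster would be `node_maintenance 172.31.7.113`.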

About etcd:
etcd is an open-source project started by the CoreOS team in June 2013 with the goal of building a highly available distributed key-value database. Internally etcd uses the Raft protocol as its consensus algorithm, and it is implemented in Go.
Official site: https://etcd.io/
GitHub: https://github.com/etcd-io/etcd
Hardware recommendations: https://etcd.io/docs/v3.7/op-guide/hardware/
Maintenance guide: https://etcd.io/docs/v3.7/op-guide/maintenance/
etcd has the following properties:
Fully replicated: every node in the cluster holds the complete data set
Highly available: etcd avoids single points of failure from hardware or network problems
Consistent: every read returns the most recent write across the cluster
Simple: a well-defined, user-facing API (gRPC)
Secure: automatic TLS with optional client-certificate authentication
Fast: benchmarked at 10,000 writes per second
Reliable: uses the Raft algorithm to keep the replicated store correctly distributed

How etcd works
Election overview:
etcd elects cluster roles with the Raft algorithm; Consul, InfluxDB, and Kafka (KRaft) also use Raft.
The Raft algorithm was proposed in 2013 by Diego Ongaro and John Ousterhout of Stanford University. Before Raft, Paxos was the best-known distributed consensus algorithm, but Paxos is complex in both theory and implementation, so the two authors designed Raft to address its problems: hard to understand, hard to implement, and limited extensibility.
Raft's design goals are better readability, usability, and reliability, so that engineers can actually understand and implement it. It keeps some of Paxos's core ideas but simplifies and reworks them, aiming to make the algorithm as simple as possible to understand and maintain while still providing strong consistency guarantees and high availability.
Node roles: every node in the cluster is in exactly one of three states: Leader, Follower, or Candidate.
follower: a replica (comparable to a Redis Cluster slave node)
candidate: a node currently standing in an election
leader: the primary node (comparable to a Redis Cluster master node)
After startup, the nodes vote among themselves based on the term ID and elect a single leader. The term ID is an integer that starts at 0; in Raft, a term represents one leader's period in office. Whenever a node becomes leader a new term begins, and each node increments its term ID to distinguish the new round from the previous one.
The Leader handles client write requests and replicates the log to the Follower nodes; Followers handle only read requests.
The Leader sends periodic heartbeats (every 100 ms by default) to maintain its leadership; the election timeout defaults to 1000 ms.
--heartbeat-interval=100 --election-timeout=500   # keep identical on all nodes; see https://etcd.io/docs/v3.4/tuning/#time-parameters
Initial election:
1. When a new cluster starts, every etcd node has term ID 0; each node increments its own term by 1 and enters the Candidate state.
2. Each candidate sends vote requests (RequestVote) to the other candidates, voting for itself by default.
3. The candidates receive each other's vote requests (A receives B's and C's, B receives A's and C's, C receives A's and B's), then compare term IDs (on the first round, first come first served) and check whether the requester's log is newer than their own. If it is, the node grants its vote and replies with a response carrying its own latest log information. If C's log is the newest, C receives the votes of A, B, and C and wins unanimously; if B is down, C wins with the votes of A and C, which is more than half.
4. C sends leader heartbeats to the other nodes to maintain its leadership (heartbeat-interval, 100 ms by default).
5. The other nodes switch to the Follower role and replicate data from the leader.
6. If the election times out (election-timeout), a new election is held; if two leaders emerge, only the one holding more than half of the cluster's votes is valid.
Subsequent elections:
When a follower receives nothing from the leader within the allotted time (etcd failure, host crash, or network timeout), it switches to the candidate state, sends vote requests (carrying its own term ID and log state) to the other nodes, and waits for their responses. If its log is the most up to date, it gains a majority of the votes and becomes the new leader.
The new leader increments its term ID and announces it to the other nodes.
If the old leader recovers and finds a new leader already in place, it rejoins as a follower and updates its term ID to match the leader's; within one term, all nodes share the same term ID.
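The "more than half the votes" rule in elections and etcd's failure-tolerance limit both reduce to the same majority-quorum arithmetic. A tiny sketch (the `quorum` helper name is illustrative):

```shell
# Majority quorum: the smallest number of members that is more than half.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # prints 2 -> a 3-node cluster tolerates 1 failure
quorum 5   # prints 3 -> a 5-node cluster tolerates 2 failures
```

Note that a 4-node cluster needs 3 votes for quorum, so it tolerates only 1 failure, the same as 3 nodes; this is why etcd clusters use odd member counts.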

Configuration tuning:
--max-request-bytes=10485760      # request size limit (a single key defaults to 1.5 MiB max; the official recommendation is not to exceed 10 MiB)
--quota-backend-bytes=8589934592  # storage size limit (backend database quota, 2 GB by default; values above 8 GB produce a startup warning)
Cluster defragmentation (spinning disks):
/usr/local/bin/etcdctl defrag --cluster --endpoints=https://172.31.7.101:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem
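`defrag --cluster` defragments every member in one shot. Since defragmentation blocks a member while it runs, an alternative is to defragment members one at a time, pausing between them. A hedged sketch (the `defrag_members` helper name, the endpoint list, and the overridable etcdctl path are assumptions):

```shell
#!/bin/bash
# Sketch: defragment one member at a time instead of the whole cluster at once.
defrag_members() {
  local etcdctl=$1; shift
  local ip
  for ip in "$@"; do
    "$etcdctl" defrag --endpoints=https://${ip}:2379 \
      --cacert=/etc/kubernetes/ssl/ca.pem \
      --cert=/etc/kubernetes/ssl/etcd.pem \
      --key=/etc/kubernetes/ssl/etcd-key.pem || return 1
  done
}

# defrag_members /usr/local/bin/etcdctl 172.31.7.101 172.31.7.102 172.31.7.111
```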
Enabling file logging:
cat /etc/systemd/system/etcd.service
--log-outputs=/var/log/etcd.log \   # write logs to the given path
Listing members:
etcd has several API versions. v1 is deprecated; etcd v2 and v3 are essentially two independent applications sharing the same Raft code, with different interfaces and different storage, and their data is mutually isolated. In other words, after upgrading from etcd v2 to v3, the old v2 data is still reachable only through the v2 API, and data created through the v3 API is reachable only through the v3 API.
root@k8s-master1:~# etcdctl --help
root@k8s-master1:~# etcdctl member --help
NAME:
	member - Membership related commands

USAGE:
	etcdctl member <subcommand> [flags]

API VERSION:
	3.6

COMMANDS:
	add           Adds a member into the cluster
	list          Lists all members in the cluster
	promote       Promotes a non-voting member in the cluster
	remove        Removes a member from the cluster
	update        Updates a member in the cluster
root@k8s-master1:~# etcdctl member list
173b4ca8edae5591, started, etcd-172.31.7.102, https://172.31.7.102:2380, https://172.31.7.102:2379, false
7b4ab66ca42ffa36, started, etcd-172.31.7.111, https://172.31.7.111:2380, https://172.31.7.111:2379, false
f79b382adf834dc1, started, etcd-172.31.7.101, https://172.31.7.101:2380, https://172.31.7.101:2379, false
root@k8s-master1:~# export NODE_IPS="172.31.7.101 172.31.7.102 172.31.7.111"
root@k8s-master1:~# /usr/local/bin/etcdctl --write-out=table member list --endpoints=https://172.31.7.101:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem
+------------------+---------+-------------------+---------------------------+---------------------------+------------+
|        ID        | STATUS  |       NAME        |        PEER ADDRS         |       CLIENT ADDRS        | IS LEARNER |
+------------------+---------+-------------------+---------------------------+---------------------------+------------+
| 173b4ca8edae5591 | started | etcd-172.31.7.102 | https://172.31.7.102:2380 | https://172.31.7.102:2379 |      false |
| 7b4ab66ca42ffa36 | started | etcd-172.31.7.111 | https://172.31.7.111:2380 | https://172.31.7.111:2379 |      false |
| f79b382adf834dc1 | started | etcd-172.31.7.101 | https://172.31.7.101:2380 | https://172.31.7.101:2379 |      false |
+------------------+---------+-------------------+---------------------------+---------------------------+------------+
Verify endpoint health (note: etcdctl defaults to the v3 API since version 3.4, so setting ETCDCTL_API=3 is no longer needed; etcd 3.6 logs the "unrecognized environment variable" warnings seen below):
root@k8s-master1:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
{"level":"warn","ts":"2026-04-25T20:09:03.940844+0800","caller":"flags/flag.go:94","msg":"unrecognized environment variable","environment-variable":"ETCDCTL_API=3"}
https://172.31.7.101:2379 is healthy: successfully committed proposal: took = 4.75665ms
{"level":"warn","ts":"2026-04-25T20:09:03.961118+0800","caller":"flags/flag.go:94","msg":"unrecognized environment variable","environment-variable":"ETCDCTL_API=3"}
https://172.31.7.102:2379 is healthy: successfully committed proposal: took = 5.232051ms
{"level":"warn","ts":"2026-04-25T20:09:03.983920+0800","caller":"flags/flag.go:94","msg":"unrecognized environment variable","environment-variable":"ETCDCTL_API=3"}
https://172.31.7.111:2379 is healthy: successfully committed proposal: took = 5.505931ms
Detailed endpoint status:
root@k8s-master1:~# for ip in ${NODE_IPS}; do /usr/local/bin/etcdctl --write-out=table endpoint status --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem; done
+---------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
|         ENDPOINT          |        ID        | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA  | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED |
+---------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
| https://172.31.7.101:2379 | f79b382adf834dc1 |   3.6.4 |           3.6.0 |  6.0 MB | 2.0 MB |                   68% | 8.6 GB |     false |      false |        14 |     142183 |             142183 |        |                          |             false |
+---------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
+---------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
|         ENDPOINT          |        ID        | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA  | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED |
+---------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
| https://172.31.7.102:2379 | 173b4ca8edae5591 |   3.6.4 |           3.6.0 |  6.0 MB | 1.9 MB |                   68% | 8.6 GB |     false |      false |        14 |     142183 |             142183 |        |                          |             false |
+---------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
+---------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
|         ENDPOINT          |        ID        | VERSION | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA  | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED |
+---------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
| https://172.31.7.111:2379 | 7b4ab66ca42ffa36 |   3.6.4 |           3.6.0 |  6.0 MB | 2.0 MB |                   68% | 8.6 GB |      true |      false |        14 |     142183 |             142183 |        |                          |             false |
+---------------------------+------------------+---------+-----------------+---------+--------+-----------------------+--------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+

Browsing etcd data:

List all keys by path:
root@k8s-master1:~# etcdctl get / --prefix --keys-only
Pod-related keys:
root@k8s-master1:~# etcdctl get / --prefix --keys-only | grep pod
/registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler
/registry/clusterrolebindings/system:controller:pod-garbage-collector
Namespace keys:
root@k8s-master1:~# etcdctl get / --prefix --keys-only | grep namespaces
/registry/namespaces/default
/registry/namespaces/kube-node-lease
/registry/namespaces/kube-public
/registry/namespaces/kube-system
/registry/namespaces/kuboard
/registry/namespaces/myserver
Deployment-controller keys:
root@k8s-master1:~# etcdctl get / --prefix --keys-only | grep deployment
/registry/clusterrolebindings/system:controller:deployment-controller
/registry/clusterroles/system:controller:deployment-controller
Calico component keys:
root@k8s-master1:~# etcdctl get / --prefix --keys-only | grep calico
/registry/apiextensions.k8s.io/customresourcedefinitions/bgpconfigurations.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/bgpfilters.crd.projectcalico.org
/registry/apiextensions.k8s.io/customresourcedefinitions/bgppeers.crd.projectcalico.org

etcd create/read/update/delete:

Create:
root@k8s-master1:~# /usr/local/bin/etcdctl put /name "tom"
OK
Read:
root@k8s-master1:~# /usr/local/bin/etcdctl get /name
/name
tom
Update (a put to an existing key simply overwrites it):
root@k8s-master1:~# /usr/local/bin/etcdctl put /name "jack"
OK
Verify the update:
root@k8s-master1:~# /usr/local/bin/etcdctl get /name
/name
jack
Delete:
root@k8s-master1:~# /usr/local/bin/etcdctl del /name
1
Verify the deletion:
root@k8s-master1:~# /usr/local/bin/etcdctl get /name

The etcd watch mechanism:
etcd continuously watches data and actively notifies clients when it changes. The etcd v3 watch mechanism can watch a single key or a key range.

Watch a key on etcd node1 (the key does not need to exist yet; it can be created later):
/usr/local/bin/etcdctl watch /data
Modify the data on etcd node2 and verify that etcd node1 sees the change:
root@k8s-master2:~# /usr/local/bin/etcdctl put /data "data v1"
OK
root@k8s-master2:~# /usr/local/bin/etcdctl put /data "data v2"
OK
Output on etcd node1:
root@k8s-master1:~# /usr/local/bin/etcdctl watch /data
PUT
/data
data v1
PUT
/data
data v2

Backup and restore with the etcd v3 API:

WAL stands for write-ahead log: before the real write is executed, it is first recorded in a log. The wal directory stores these write-ahead logs, whose main purpose is recording the entire history of data changes; in etcd, every modification must be written to the WAL before it is committed.
Taking a v3 snapshot:
root@k8s-master1:/data# etcdctl snapshot save snapshot.db
{"level":"info","ts":"2026-04-25T20:29:33.172615+0800","caller":"snapshot/v3_snapshot.go:83","msg":"created temporary db file","path":"snapshot.db.part"}
{"level":"info","ts":"2026-04-25T20:29:33.173734+0800","logger":"client","caller":"v3@v3.6.4/maintenance.go:236","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2026-04-25T20:29:33.182464+0800","caller":"snapshot/v3_snapshot.go:96","msg":"fetching snapshot","endpoint":"127.0.0.1:2379"}
{"level":"info","ts":"2026-04-25T20:29:33.224878+0800","logger":"client","caller":"v3@v3.6.4/maintenance.go:302","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2026-04-25T20:29:33.231109+0800","caller":"snapshot/v3_snapshot.go:111","msg":"fetched snapshot","endpoint":"127.0.0.1:2379","size":"6.0 MB","took":"58.309548ms","etcd-version":"3.6.0"}
{"level":"info","ts":"2026-04-25T20:29:33.231253+0800","caller":"snapshot/v3_snapshot.go:121","msg":"saved","path":"snapshot.db"}
Snapshot saved at snapshot.db
Server version 3.6.0
root@k8s-master1:/data# ll
total 5908
drwxr-xr-x  2 root root      50 Apr 25 20:29 ./
drwxr-xr-x 23 root root    4096 Apr 23 08:37 ../
-rw-r--r--  1 root root   16801 Apr 23 08:37 k8s-completion.sh
-rw-------  1 root root 6021152 Apr 25 20:29 snapshot.db
Restoring from a v3 snapshot:
Inspect the snapshot: etcdutl snapshot status snapshot.db
Restore it into an empty directory: etcdutl snapshot restore ./snapshot.db --data-dir=/var/lib/etcd-data
Then move the old data directory aside as a backup and rename the restored directory to the data directory.
Default data directory: --data-dir=/var/lib/etcd
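The restore-then-swap sequence described above can be scripted. A hedged sketch (the `restore_etcd_data` helper name, the backup suffix, and the overridable etcdutl path are assumptions; etcd must be stopped before running it):

```shell
#!/bin/bash
# Sketch: restore a snapshot into a fresh directory, then swap it into place.
restore_etcd_data() {
  local snap=$1 data=${2:-/var/lib/etcd} etcdutl=${3:-etcdutl}
  local fresh="${data}.restore.$$"
  "$etcdutl" snapshot restore "$snap" --data-dir="$fresh" || return 1
  # keep the old data dir as a backup, then move the restored dir into place
  if [ -d "$data" ]; then mv "$data" "${data}.bak.$(date +%s)"; fi
  mv "$fresh" "$data"
}
```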

Automated single-node backups:

mkdir /data/etcd-backup-dir/ -p
cat etcd-backup.sh
#!/bin/bash
source /etc/profile
# DATE must be assigned on its own line: in `DATE=$(date) cmd .../${DATE}.db`
# the ${DATE} argument expands before the temporary assignment takes effect.
DATE=$(date +%Y-%m-%d_%H-%M-%S)
/usr/local/bin/etcdctl snapshot save /data/etcd-backup-dir/etcd-snapshot-${DATE}.db
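Snapshots accumulate quickly under a scheduled job. A hedged sketch of the same backup with simple count-based retention (the `etcd_backup` helper name, the 7-copy limit, and the overridable etcdctl path are assumptions):

```shell
#!/bin/bash
# Sketch: snapshot backup that keeps only the newest 7 copies.
etcd_backup() {
  local dir=$1 etcdctl=${2:-/usr/local/bin/etcdctl}
  local stamp
  stamp=$(date +%Y-%m-%d_%H-%M-%S)
  mkdir -p "$dir"
  "$etcdctl" snapshot save "${dir}/etcd-snapshot-${stamp}.db" || return 1
  # delete everything but the 7 most recent snapshots
  ls -1t "${dir}"/etcd-snapshot-*.db 2>/dev/null | tail -n +8 | xargs -r rm -f
}

# crontab example (path assumed): back up every hour
# 0 * * * * /data/etcd-backup.sh
```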

Cluster-level v3 backup and restore (kubeasz):

Edit the file: vim roles/cluster-restore/tasks/main.yml
Change etcdctl to etcdutl; current etcd releases no longer support restoring snapshots with etcdctl.
- name: etcd 数据恢复
  shell: "cd /etcd_backup && \
         ETCDCTL_API=3 {{ bin_dir }}/etcdutl snapshot restore snapshot.db \
         --name etcd-{{ inventory_hostname }} \
Download the new release and copy the etcdutl binary to every node:
tar -xf etcd-v3.6.4-linux-amd64.tar.gz
scp etcdutl 172.31.7.101:/usr/local/bin/
scp etcdutl 172.31.7.102:/usr/local/bin/
scp etcdutl 172.31.7.111:/usr/local/bin/
Cluster backup:
root@k8s-harbor1:/etc/kubeasz# ./ezctl backup k8s-cluster1
List the backups:
root@k8s-harbor1:/etc/kubeasz# ll clusters/k8s-cluster1/backup/
total 11772
drwxr-xr-x 2 root root      57 Apr 25 20:57 ./
drwxr-xr-x 5 root root    4096 Apr 23 01:10 ../
-rw------- 1 root root 6021152 Apr 25 20:57 snapshot.db
-rw------- 1 root root 6021152 Apr 25 20:57 snapshot_202604252057.db
Check the deployment:
root@k8s-harbor1:/etc/kubeasz# kubectl get deployment -n myserver
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
myserver-nginx-deployment   2/2     2            2           4m30s
Delete the deployment:
root@k8s-harbor1:/etc/kubeasz# kubectl delete deployment -n myserver myserver-nginx-deployment 
deployment.apps "myserver-nginx-deployment" deleted from myserver namespace

The API server is unavailable while data is being restored, so run the restore during off-peak hours or only in a genuine emergency.
Select the snapshot file to restore:
root@k8s-harbor1:/etc/kubeasz# grep db_to_restore ./roles/ -R
./roles/cluster-restore/defaults/main.yml:db_to_restore: "snapshot.db"
./roles/cluster-restore/tasks/main.yml:    src: "{{ cluster_dir }}/backup/{{ db_to_restore }}"
Restore the cluster:
root@k8s-harbor1:/etc/kubeasz# ./ezctl restore k8s-cluster1
ansible-playbook -i clusters/k8s-cluster1/hosts -e @clusters/k8s-cluster1/config.yml playbooks/95.restore.yml
Verify that the deployment came back:
root@k8s-harbor1:/etc/kubeasz# kubectl get deployments -n myserver 
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
myserver-nginx-deployment   2/2     2            2           4m30s
root@k8s-harbor1:/etc/kubeasz# kubectl get pod -n myserver 
NAME                                         READY   STATUS    RESTARTS   AGE
myserver-nginx-deployment-597d966577-5glkp   1/1     Running   0          5m18s
myserver-nginx-deployment-597d966577-nn66q   1/1     Running   0          5m18s

etcd data recovery workflow:
When more than half of the etcd nodes are down (for example two out of three), the whole cluster is down and the data must be restored from backup. The recovery workflow is:

  1. Restore the server operating systems
  2. Redeploy the etcd cluster
  3. Stop kube-apiserver/controller-manager/scheduler/kubelet/kube-proxy
  4. Stop the etcd cluster
  5. Restore the same backup snapshot on every etcd node
  6. Start all nodes and verify the etcd cluster
  7. Start kube-apiserver/controller-manager/scheduler/kubelet/kube-proxy
  8. Verify the Kubernetes master state and pod data

Adding and removing etcd cluster nodes (kubeasz subcommands):
add-etcd
del-etcd

CoreDNS overview:
The Kubernetes DNS component has had three incarnations: skydns, kube-dns, and coredns. Versions before 1.3 used skydns, versions up to and including 1.17 used kube-dns, and from 1.18 onward coredns is the main choice.
The DNS component resolves a Kubernetes service name to the corresponding IP address.
https://github.com/coredns/coredns
https://coredns.io
https://coredns.io/plugins/
CoreDNS name resolution:
https://github.com/coredns/deployment/tree/master/kubernetes

root@k8s-harbor1:/etc/kubeasz# kubectl get deployment coredns -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           3d2h
root@k8s-harbor1:/etc/kubeasz# kubectl get pod  -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-695bf6cc9d-jmm76   1/1     Running   0          13m
calico-node-6txxx                          1/1     Running   0          13m
calico-node-fz8b6                          1/1     Running   0          13m
calico-node-j6kwn                          1/1     Running   0          13m
calico-node-rzj5s                          1/1     Running   0          13m
calico-node-tvds4                          1/1     Running   0          13m
calico-node-z8lbg                          1/1     Running   0          13m
coredns-68cf8f8659-hdm22                   1/1     Running   0          3d2h
coredns-68cf8f8659-k68kb                   1/1     Running   0          3d2h

CoreDNS plugin configuration:
errors: errors are written to stdout.
health: reports CoreDNS health at http://localhost:8080/health.
ready: listens on port 8181; once all plugins are ready, requests to this endpoint return 200 OK.
kubernetes: answers DNS queries for Kubernetes service names and returns the records to the client.
prometheus: exposes CoreDNS metrics in Prometheus key-value format at http://localhost:9153/metrics.
forward: queries for any name outside the Kubernetes cluster are forwarded to a predefined upstream server (from /etc/resolv.conf, or an IP such as 8.8.8.8).
cache: enables caching of service resolution; the value is in seconds.
loop: detects resolution loops (for example, CoreDNS forwarding to an internal DNS server that forwards back to CoreDNS); if a loop is found, the CoreDNS process is halted (Kubernetes then recreates it).
reload: watches the Corefile for changes; after editing the ConfigMap, the new configuration is gracefully reloaded, by default about 2 minutes later.
loadbalance: round-robins DNS answers when a name has multiple records.
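Tied together, the plugins above appear in a Corefile roughly like this (a minimal sketch; the `cluster.local` domain and the exact option lines are typical defaults, not this cluster's actual ConfigMap):

```
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```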

root@k8s-master1:~/nginx-tomcat-case# curl http://10.200.45.1:9153/metrics
root@k8s-harbor1:/etc/kubeasz/20260111/1.coredns# kubectl exec -it -n myserver myserver-nginx-deployment-597d966577-5glkp -- sh
/ # ping www.baidu.com
PING www.baidu.com (39.156.70.239): 56 data bytes
64 bytes from 39.156.70.239: seq=0 ttl=127 time=42.477 ms
64 bytes from 39.156.70.239: seq=1 ttl=127 time=39.701 ms
