
Deploying a ZooKeeper Cluster on k8s

1 ZooKeeper cluster based on the Deployment controller

Build a ZooKeeper cluster using PVs and PVCs as backend storage.

1.1 Preparing the JDK image

Reference:
Building the base image in layers

1.2 Preparing the zookeeper image

https://www.apache.org/dist/zookeeper # official download site for ZooKeeper release packages

1.2.1 Build the zookeeper image

# bash build-command.sh v1.0.0
[+] Building 77.5s (14/14) FINISHED
 => [internal] load build definition from Dockerfile                       0.1s
 => => transferring dockerfile: 1.75kB                                     0.0s
 => [internal] load metadata for registry.cn-hangzhou.aliyuncs.com/myhubregistry/slim_java:8   1.4s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 2B                                            0.0s
 => [1/9] FROM registry.cn-hangzhou.aliyuncs.com/myhubregistry/slim_java:8@sha256:817d0af5d4f16c29509b8397784f5d4ec3accb1bfde4e474244ed3be7f41a604   10.9s
 => => resolve registry.cn-hangzhou.aliyuncs.com/myhubregistry/slim_java:8@sha256:817d0af5d4f16c29509b8397784f5d4ec3accb1bfde4e474244ed3be7f41a604    0.3s
 => => sha256:6d987f6f42797d81a318c40d442369ba3dc124883a0964d40b0c8f4f7561d913 1.99MB / 1.99MB    2.4s
 => => sha256:7141511c4dad1bb64345a3cd38e009b1bcd876bba3e92be040ab2602e9de7d1e 3.19MB / 3.19MB    2.6s
 => => sha256:fd529fe251b34db45de24e46ae4d8f57c5b8bbcfb1b8d8c6fb7fa9fcdca8905e 27.33MB / 27.33MB  8.0s
 => => extracting sha256:6d987f6f42797d81a318c40d442369ba3dc124883a0964d40b0c8f4f7561d913         2.8s
 => => extracting sha256:7141511c4dad1bb64345a3cd38e009b1bcd876bba3e92be040ab2602e9de7d1e         2.1s
 => => extracting sha256:fd529fe251b34db45de24e46ae4d8f57c5b8bbcfb1b8d8c6fb7fa9fcdca8905e         2.1s
 => [internal] load build context                                          7.6s
 => => transferring context: 37.75MB                                       7.6s
 => [2/9] ADD repositories /etc/apk/repositories                           0.4s
 => [3/9] COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz                         0.9s
 => [4/9] COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc                 0.1s
 => [5/9] COPY KEYS /tmp/KEYS                                              0.1s
 => [6/9] RUN apk add --no-cache --virtual .build-deps ca-certificates gnupg tar wget && apk     11.4s
 => [7/9] COPY conf /zookeeper/conf/                                       0.3s
 => [8/9] COPY bin/zkReady.sh /zookeeper/bin/                              0.2s
 => [9/9] COPY entrypoint.sh /                                             0.2s
 => exporting to docker image format                                      49.2s
 => => exporting layers                                                   12.6s
 => => exporting manifest sha256:61d8d055f95c20381d8e59f40bcfe795ec6ac74cbd11b24f1ae80a630048f3f0   0.0s
 => => exporting config sha256:95699ff8bec91bbdd94a037193093ddb6e38c9fb7a28eafd95b79029bdaa9266     0.0s
 => => sending tarball                                                    36.4s
Loaded image: harbor.zhou-kai.com/myserver/zookeeper:v1.0.0
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.v2+json, sha256:61d8d055f95c20381d8e59f40bcfe795ec6ac74cbd11b24f1ae80a630048f3f0)
manifest-sha256:61d8d055f95c20381d8e59f40bcfe795ec6ac74cbd11b24f1ae80a630048f3f0: waiting        |--------------------------------------|
layer-sha256:5b3494ca878a264634547be25406f5ecd626ee91880a6b732a2e7731fd5b0b01:    waiting        |--------------------------------------|
config-sha256:95699ff8bec91bbdd94a037193093ddb6e38c9fb7a28eafd95b79029bdaa9266:   waiting        |--------------------------------------|
layer-sha256:6d987f6f42797d81a318c40d442369ba3dc124883a0964d40b0c8f4f7561d913:    waiting        |--------------------------------------|
layer-sha256:7141511c4dad1bb64345a3cd38e009b1bcd876bba3e92be040ab2602e9de7d1e:    waiting        |--------------------------------------|
layer-sha256:fd529fe251b34db45de24e46ae4d8f57c5b8bbcfb1b8d8c6fb7fa9fcdca8905e:    waiting        |--------------------------------------|
layer-sha256:6826761dea31511dd38c58f03228250f1835cce01068d48637a2ba309e1c21c3:    waiting        |--------------------------------------|
layer-sha256:0ce55cbfdd8e5b1faa84949bdb0b30b6e95b856fd45d13c2e4a8e45682d73431:    waiting        |--------------------------------------|
layer-sha256:c06c2c54f43957815d4d625fe73d078649c4085a31e549b4a21e68e93dc3546a:    waiting        |--------------------------------------|
layer-sha256:28685a0dfccd845c0a339cb99fd5a506a24f3fc7fddbcf97244a0a6c5480ba50:    waiting        |--------------------------------------|
layer-sha256:4fab7cb2c6bafb02dec3e1d4ded3b3fdbf94df96c26a4bb1a420c5cd207d77ef:    waiting        |--------------------------------------|
layer-sha256:726868ffc48327e9ba8f033e1ae6b33dc7e3d58e66cb5bcb27f5d9a2939ef924:    waiting        |--------------------------------------|
layer-sha256:c30a8fd0c141000021fb740f9a165d18e9eba11375c25685c7cbddc5941477b4:    waiting        |--------------------------------------|
elapsed: 0.1 s                                                                    total:   0.0 B (0.0 B/s)
ERRO[0000] server "harbor.zhou-kai.com" does not seem to support HTTPS  error="failed to do request: Head \"https://harbor.zhou-kai.com/v2/myserver/zookeeper/blobs/sha256:726868ffc48327e9ba8f033e1ae6b33dc7e3d58e66cb5bcb27f5d9a2939ef924\": dial tcp 172.31.7.104:443: connect: connection refused"
INFO[0000] Hint: you may want to try --insecure-registry to allow plain HTTP (if you are in a trusted network)
FATA[0000] failed to do request: Head "https://harbor.zhou-kai.com/v2/myserver/zookeeper/blobs/sha256:726868ffc48327e9ba8f033e1ae6b33dc7e3d58e66cb5bcb27f5d9a2939ef924": dial tcp 172.31.7.104:443: connect: connection refused
root@master01:/opt/k8s-data/dockerfile/web/myserver/zookeeper# bash build-command.sh v1.0.0
[+] Building 37.8s (14/14) FINISHED
 => [internal] load build definition from Dockerfile                       0.0s
 => => transferring dockerfile: 1.75kB                                     0.0s
 => [internal] load metadata for registry.cn-hangzhou.aliyuncs.com/myhubregistry/slim_java:8   0.9s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 2B                                            0.0s
 => [1/9] FROM registry.cn-hangzhou.aliyuncs.com/myhubregistry/slim_java:8@sha256:817d0af5d4f16c29509b8397784f5d4ec3accb1bfde4e474244ed3be7f41a604    0.1s
 => => resolve registry.cn-hangzhou.aliyuncs.com/myhubregistry/slim_java:8@sha256:817d0af5d4f16c29509b8397784f5d4ec3accb1bfde4e474244ed3be7f41a604    0.1s
 => [internal] load build context                                          0.0s
 => => transferring context: 336B                                          0.0s
 => CACHED [2/9] ADD repositories /etc/apk/repositories                    0.0s
 => CACHED [3/9] COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz                  0.0s
 => CACHED [4/9] COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc          0.0s
 => CACHED [5/9] COPY KEYS /tmp/KEYS                                       0.0s
 => CACHED [6/9] RUN apk add --no-cache --virtual .build-deps ca-certificates gnupg tar wget &&   0.0s
 => CACHED [7/9] COPY conf /zookeeper/conf/                                0.0s
 => CACHED [8/9] COPY bin/zkReady.sh /zookeeper/bin/                       0.0s
 => CACHED [9/9] COPY entrypoint.sh /                                      0.0s
 => exporting to docker image format                                      36.1s
 => => exporting layers                                                    0.0s
 => => exporting manifest sha256:61d8d055f95c20381d8e59f40bcfe795ec6ac74cbd11b24f1ae80a630048f3f0   0.0s
 => => exporting config sha256:95699ff8bec91bbdd94a037193093ddb6e38c9fb7a28eafd95b79029bdaa9266     0.0s
 => => sending tarball                                                    36.0s
INFO[0000] pushing as a reduced-platform image (application/vnd.docker.distribution.manifest.v2+json, sha256:61d8d055f95c20381d8e59f40bcfe795ec6ac74cbd11b24f1ae80a630048f3f0)
manifest-sha256:61d8d055f95c20381d8e59f40bcfe795ec6ac74cbd11b24f1ae80a630048f3f0: done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:5b3494ca878a264634547be25406f5ecd626ee91880a6b732a2e7731fd5b0b01:    done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:95699ff8bec91bbdd94a037193093ddb6e38c9fb7a28eafd95b79029bdaa9266:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:6d987f6f42797d81a318c40d442369ba3dc124883a0964d40b0c8f4f7561d913:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:7141511c4dad1bb64345a3cd38e009b1bcd876bba3e92be040ab2602e9de7d1e:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:fd529fe251b34db45de24e46ae4d8f57c5b8bbcfb1b8d8c6fb7fa9fcdca8905e:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:6826761dea31511dd38c58f03228250f1835cce01068d48637a2ba309e1c21c3:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:0ce55cbfdd8e5b1faa84949bdb0b30b6e95b856fd45d13c2e4a8e45682d73431:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c06c2c54f43957815d4d625fe73d078649c4085a31e549b4a21e68e93dc3546a:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:28685a0dfccd845c0a339cb99fd5a506a24f3fc7fddbcf97244a0a6c5480ba50:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:4fab7cb2c6bafb02dec3e1d4ded3b3fdbf94df96c26a4bb1a420c5cd207d77ef:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:726868ffc48327e9ba8f033e1ae6b33dc7e3d58e66cb5bcb27f5d9a2939ef924:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c30a8fd0c141000021fb740f9a165d18e9eba11375c25685c7cbddc5941477b4:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 4.9 s                                                                    total:  101.0  (20.6 MiB/s)
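The first push failed because nerdctl attempted HTTPS against a registry that only serves plain HTTP; the log's own hint points at `--insecure-registry`, and the retry above succeeds once that is in place. A minimal sketch of a push wrapper following that hint (the wrapper function is hypothetical; only the `nerdctl push --insecure-registry` invocation comes from the hint in the log):

```python
import subprocess

def push_image(image: str, insecure: bool = False, dry_run: bool = True):
    """Assemble (and optionally run) a nerdctl push command.

    Set insecure=True for registries reachable only over plain HTTP,
    per the hint printed in the error output above.
    """
    cmd = ["nerdctl", "push"]
    if insecure:
        cmd.append("--insecure-registry")
    cmd.append(image)
    if dry_run:
        return cmd  # return the command for inspection instead of executing it
    return subprocess.run(cmd, check=True)

print(push_image("harbor.zhou-kai.com/myserver/zookeeper:v1.0.0", insecure=True))
```

Only use this on a trusted network; for production registries the proper fix is to serve the Harbor instance over TLS.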

1.2.2 Test the zookeeper image

# nerdctl run -it --rm -p 2181:2181 harbor.zhou-kai.com/myserver/zookeeper:v1.0.0
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
2026-02-22 10:11:16,681 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /zookeeper/bin/../conf/zoo.cfg
2026-02-22 10:11:16,700 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2026-02-22 10:11:16,702 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
2026-02-22 10:11:16,712 [myid:] - WARN  [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running  in standalone mode
2026-02-22 10:11:16,714 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2026-02-22 10:11:16,748 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /zookeeper/bin/../conf/zoo.cfg
2026-02-22 10:11:16,754 [myid:] - INFO  [main:ZooKeeperServerMain@98] - Starting server
2026-02-22 10:11:16,889 [myid:] - INFO  [main:Environment@100] - Server environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2026-02-22 10:11:16,897 [myid:] - INFO  [main:Environment@100] - Server environment:host.name=1473b261c4d7
2026-02-22 10:11:16,899 [myid:] - INFO  [main:Environment@100] - Server environment:java.version=1.8.0_144
2026-02-22 10:11:16,904 [myid:] - INFO  [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
2026-02-22 10:11:16,901 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2026-02-22 10:11:16,908 [myid:] - INFO  [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-8-oracle
2026-02-22 10:11:16,913 [myid:] - INFO  [main:Environment@100] - Server environment:java.class.path=/zookeeper/bin/../zookeeper-server/target/classes:/zookeeper/bin/../build/classes:/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/zookeeper/bin/../build/lib/*.jar:/zookeeper/bin/../lib/slf4j-log4j12-1.7.25.jar:/zookeeper/bin/../lib/slf4j-api-1.7.25.jar:/zookeeper/bin/../lib/netty-3.10.6.Final.jar:/zookeeper/bin/../lib/log4j-1.2.17.jar:/zookeeper/bin/../lib/jline-0.9.94.jar:/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/zookeeper/bin/../zookeeper-3.4.14.jar:/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/zookeeper/bin/../conf:
2026-02-22 10:11:16,914 [myid:] - INFO  [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2026-02-22 10:11:16,917 [myid:] - INFO  [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
2026-02-22 10:11:16,930 [myid:] - INFO  [main:Environment@100] - Server environment:java.compiler=<NA>
2026-02-22 10:11:16,932 [myid:] - INFO  [main:Environment@100] - Server environment:os.name=Linux
2026-02-22 10:11:16,934 [myid:] - INFO  [main:Environment@100] - Server environment:os.arch=amd64
2026-02-22 10:11:16,937 [myid:] - INFO  [main:Environment@100] - Server environment:os.version=6.8.0-90-generic
2026-02-22 10:11:16,939 [myid:] - INFO  [main:Environment@100] - Server environment:user.name=root
2026-02-22 10:11:16,946 [myid:] - INFO  [main:Environment@100] - Server environment:user.home=/root
2026-02-22 10:11:16,947 [myid:] - INFO  [main:Environment@100] - Server environment:user.dir=/zookeeper
2026-02-22 10:11:16,963 [myid:] - INFO  [main:ZooKeeperServer@836] - tickTime set to 2000
2026-02-22 10:11:16,968 [myid:] - INFO  [main:ZooKeeperServer@845] - minSessionTimeout set to -1
2026-02-22 10:11:16,973 [myid:] - INFO  [main:ZooKeeperServer@854] - maxSessionTimeout set to -1
2026-02-22 10:11:17,010 [myid:] - INFO  [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
2026-02-22 10:11:17,033 [myid:] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181

1.2.3 Test client connections to zookeeper

test-zookeeper-client-connection
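Besides a GUI client, the connection can be checked from a script with ZooKeeper's four-letter-word commands over the published 2181 port. A small sketch, assuming the `-p 2181:2181` port mapping from the run above (the helper names are my own):

```python
import socket

def four_letter(cmd: str, host: str = "127.0.0.1", port: int = 2181) -> str:
    """Send a ZooKeeper four-letter-word command such as 'ruok' or 'srvr'."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(cmd.encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:  # server closes the connection after answering
                break
            chunks.append(data)
    return b"".join(chunks).decode()

def parse_mode(srvr_output: str) -> str:
    """Extract the server role (standalone/leader/follower) from 'srvr' output."""
    for line in srvr_output.splitlines():
        if line.startswith("Mode:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

# Against the container started above, four_letter("ruok") should answer "imok",
# and parse_mode(four_letter("srvr")) should report "standalone".
```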

1.3 Running the zookeeper service on k8s

1.3.1 Prepare the yaml files

yaml directory structure

# tree
.
├── pv
│   ├── zookeeper-persistentvolume.yaml
│   └── zookeeper-persistentvolumeclaim.yaml
└── zookeeper.yaml
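In the Deployment approach, each ZooKeeper instance gets its own Deployment and Service so that every peer has a stable DNS name for the `server.N` entries in zoo.cfg. A sketch of how those entries line up with the Services created in 1.3.7 (the exact hostnames baked into the image's zoo.cfg are an assumption; the Service names and namespace match this article):

```python
def ensemble_entries(replicas: int = 3, namespace: str = "myserver",
                     cluster_domain: str = "cluster.local") -> list:
    """zoo.cfg server entries, one per per-instance Service (zookeeper1..N).

    2888 is the quorum/peer port, 3888 the leader-election port.
    """
    return [
        f"server.{i}=zookeeper{i}.{namespace}.svc.{cluster_domain}:2888:3888"
        for i in range(1, replicas + 1)
    ]

for entry in ensemble_entries():
    print(entry)
```

Because the Service names are fixed, the entries stay valid even when a pod is rescheduled and changes IP.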

1.3.2 Create the PVs

# kubectl apply -f zookeeper-persistentvolume.yaml
persistentvolume/zookeeper-datadir-pv-1 created
persistentvolume/zookeeper-datadir-pv-2 created
persistentvolume/zookeeper-datadir-pv-3 created

1.3.3 Create the PVCs

# kubectl apply -f zookeeper-persistentvolumeclaim.yaml
persistentvolumeclaim/zookeeper-datadir-pvc-1 created
persistentvolumeclaim/zookeeper-datadir-pvc-2 created
persistentvolumeclaim/zookeeper-datadir-pvc-3 created

1.3.4 Verify the PVs

root@master01:/opt/k8s-data/yaml/myserver/zookeeper/1.deployment/pv# kubectl -n myserver get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
zookeeper-datadir-pv-1                     20Gi       RWO            Retain           Bound    myserver/zookeeper-datadir-pvc-1      nfs            <unset>                          7s
zookeeper-datadir-pv-2                     20Gi       RWO            Retain           Bound    myserver/zookeeper-datadir-pvc-2      nfs            <unset>                          7s
zookeeper-datadir-pv-3                     20Gi       RWO            Retain           Bound    myserver/zookeeper-datadir-pvc-3      nfs            <unset>  

1.3.5 Verify the PVCs

# kubectl -n myserver get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
jenkins-datadir-pvc          Bound    jenkins-datadir-pv                         100Gi      RWO                           <unset>                 10d
jenkins-root-data-pvc        Bound    jenkins-root-datadir-pv                    100Gi      RWO                           <unset>                 10d
myserver-myapp-dynamic-pvc   Bound    pvc-f459d3a3-3c9f-4ee1-beba-3bf58031cc89   500Mi      RWX            nfs            <unset>                 36d
zookeeper-datadir-pvc-1      Bound    zookeeper-datadir-pv-1                     20Gi       RWO            nfs            <unset>                 3m56s
zookeeper-datadir-pvc-2      Bound    zookeeper-datadir-pv-2                     20Gi       RWO            nfs            <unset>                 3m56s
zookeeper-datadir-pvc-3      Bound    zookeeper-datadir-pv-3                     20Gi       RWO            nfs            <unset>                 3m56s
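Rather than eyeballing the table, the Bound check can be scripted by parsing the `kubectl get pvc` output. A sketch (the column layout assumed here is the one printed above; feeding it live output via `subprocess` is left as a comment):

```python
def unbound_pvcs(pvc_table: str, prefix: str = "zookeeper-") -> list:
    """Return names of PVCs matching prefix whose STATUS column is not 'Bound'."""
    bad = []
    for line in pvc_table.splitlines():
        cols = line.split()
        # cols[0] is NAME, cols[1] is STATUS in `kubectl get pvc` output
        if len(cols) >= 2 and cols[0].startswith(prefix) and cols[1] != "Bound":
            bad.append(cols[0])
    return bad

# e.g.:
#   import subprocess
#   table = subprocess.run(["kubectl", "-n", "myserver", "get", "pvc"],
#                          capture_output=True, text=True).stdout
#   assert unbound_pvcs(table) == []
```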

1.3.6 Verify the volumes in kuboard

kuboard-verify-volumes

1.3.7 Run the zookeeper cluster

# kubectl apply -f zookeeper.yaml
service/zookeeper created
service/zookeeper1 created
service/zookeeper2 created
service/zookeeper3 created
deployment.apps/zookeeper1 created
deployment.apps/zookeeper2 created
deployment.apps/zookeeper3 created
root@master01:/opt/k8s-data/yaml/myserver/zookeeper/1.deployment#

1.3.8 Verify the zookeeper cluster

# kubectl -n myserver get pod
NAME                          READY   STATUS    RESTARTS          AGE
dns-debug                     1/1     Running   278 (7m32s ago)   13d
zookeeper1-5d97dc94c8-kxjpg   1/1     Running   0                 6s
zookeeper2-7c548484bc-bzwmt   1/1     Running   0                 6s
zookeeper3-884c48747-ktmj2    1/1     Running   0                 6s
# kubectl -n myserver exec -it zookeeper1-5d97dc94c8-kxjpg -- bash
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower
bash-4.3# exit
exit
# kubectl -n myserver exec -it zookeeper2-7c548484bc-bzwmt -- bash
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: leader
bash-4.3# exit
exit
# kubectl -n myserver exec -it zookeeper3-884c48747-ktmj2 -- bash
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower

Delete the leader pod and verify that the remaining two pods automatically elect a new leader.

# kubectl -n myserver delete pod zookeeper2-7c548484bc-bzwmt
pod "zookeeper2-7c548484bc-bzwmt" deleted from myserver namespace
root@master01:/opt/k8s-data/yaml/myserver/zookeeper/1.deployment# kubectl -n myserver get pod
NAME                          READY   STATUS    RESTARTS        AGE
dns-debug                     1/1     Running   278 (28m ago)   13d
zookeeper1-5d97dc94c8-kxjpg   1/1     Running   0               21m
zookeeper2-7c548484bc-tfbdk   1/1     Running   0               3s
zookeeper3-884c48747-ktmj2    1/1     Running   0               21m
root@master01:/opt/k8s-data/yaml/myserver/zookeeper/1.deployment# kubectl -n myserver exec -it zookeeper1-5d97dc94c8-kxjpg -- bash
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower
bash-4.3# exit
exit
root@master01:/opt/k8s-data/yaml/myserver/zookeeper/1.deployment# kubectl -n myserver exec -it zookeeper3-884c48747-ktmj2 -- bash
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: leader
bash-4.3# exit
exit
root@master01:/opt/k8s-data/yaml/myserver/zookeeper/1.deployment# kubectl -n myserver exec -it zookeeper2-7c548484bc-tfbdk -- bash
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower

A new leader was elected automatically, and the deleted pod rejoined the zookeeper cluster as a follower after being recreated.
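This behavior follows from ZooKeeper's majority-quorum rule: a leader can be elected as long as strictly more than half of the ensemble is alive. The arithmetic for this experiment:

```python
def quorum(ensemble_size: int) -> int:
    """Minimum number of live servers needed to elect a leader and serve writes."""
    return ensemble_size // 2 + 1

# 3-node ensemble: the quorum is 2, so losing one pod (even the leader)
# still leaves enough survivors for a fresh election.
survivors_after_one_loss = 3 - 1
assert survivors_after_one_loss >= quorum(3)
```

The same math explains why a 3-node ensemble tolerates only one failure while a 5-node ensemble tolerates two.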

1.3.9 Verify access to the zookeeper cluster from outside the cluster

test-zookeeper-client-connection

2 ZooKeeper cluster based on the StatefulSet controller

2.1 Create the zookeeper cluster

# zk-cluster-StatefulSet.yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-headless
  namespace: myserver
  labels:
    app: zookeeper
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-service
  namespace: myserver
  labels:
    app: zookeeper
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zookeeper
---
#apiVersion: policy/v1beta1
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zookeeper-pdb
  namespace: myserver
spec:
  selector:
    matchLabels:
      app: zookeeper
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
  namespace: myserver
spec:
  selector:
    matchLabels:
      app: zookeeper
  serviceName: zookeeper-headless
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zookeeper
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "harbor.zhou-kai.com/myserver/zookeeper:v3.4.14"
        resources:
          #limits:
          #  memory: "1024Mi"
          #  cpu: "0.5"
          requests:
            memory: "1024Mi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start.sh \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        livenessProbe:
          tcpSocket:
            port: 2181
          initialDelaySeconds: 20
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          tcpSocket:
            port: 2181
          initialDelaySeconds: 20
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        #readinessProbe:
        #  exec:
        #    command:
        #    - sh
        #    - -c
        #    - "ready_live.sh 2181"
        #  initialDelaySeconds: 10
        #  timeoutSeconds: 5
        #livenessProbe:
        #  exec:
        #    command:
        #    - sh
        #    - -c
        #    - "ready_live.sh 2181"
        #  initialDelaySeconds: 10
        #  timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs"
      resources:
        requests:
          storage: 20Gi

# kubectl apply -f zk-cluster-StatefulSet.yaml
Warning: spec.SessionAffinity is ignored for headless services
service/zookeeper-headless created
service/zookeeper-service created
poddisruptionbudget.policy/zookeeper-pdb created
statefulset.apps/zookeeper created
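Unlike the Deployment approach, a single StatefulSet suffices here: the headless Service gives every replica a stable, ordinal DNS identity that `start.sh --servers=3` can rely on for peer discovery. Assuming the default `cluster.local` cluster domain, the peer addresses would look like:

```python
def peer_addresses(replicas: int = 3, statefulset: str = "zookeeper",
                   headless_svc: str = "zookeeper-headless",
                   namespace: str = "myserver",
                   cluster_domain: str = "cluster.local") -> list:
    """Stable per-pod DNS names provided by the headless Service.

    Pattern: <pod>.<headless-service>.<namespace>.svc.<cluster-domain>,
    where pods are named <statefulset>-0 .. <statefulset>-(N-1).
    """
    return [
        f"{statefulset}-{i}.{headless_svc}.{namespace}.svc.{cluster_domain}"
        for i in range(replicas)
    ]

for addr in peer_addresses():
    print(addr)
```

These names survive pod rescheduling, which is exactly what ZooKeeper's static `server.N` membership list needs.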

2.2 Verify the zookeeper cluster

# kubectl -n myserver exec -it zookeeper-0 -- bash
zookeeper@zookeeper-0:/$ /opt/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
zookeeper@zookeeper-0:/$ exit
exit
# kubectl -n myserver exec -it zookeeper-1 -- bash
zookeeper@zookeeper-1:/$ /zookeeper/bin/zkServer.sh status
zookeeper@zookeeper-1:/$ /opt/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
zookeeper@zookeeper-1:/$ exit
exit
# kubectl -n myserver exec -it zookeeper-2 -- bash
zookeeper@zookeeper-2:/$ /opt/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader

Delete the leader pod and verify that a new leader is elected automatically.

# kubectl -n myserver delete pod zookeeper-2
pod "zookeeper-2" deleted from myserver namespace
# kubectl -n myserver exec -it zookeeper-0 -- bash /opt/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader
root@master01:/opt/k8s-data/yaml/myserver/zookeeper/2.StatefulSet# kubectl -n myserver exec -it zookeeper-1 -- bash /opt/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
root@master01:/opt/k8s-data/yaml/myserver/zookeeper/2.StatefulSet# kubectl -n myserver exec -it zookeeper-2 -- bash /opt/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower

The cluster keeps running normally: a new leader, zookeeper-0, was elected automatically, and the deleted pod rejoined the zookeeper cluster as a follower after being recreated.

