
Installing ECK on Kubernetes with YAML

Kubernetes is currently the most popular container orchestration technology, and more and more applications are being migrated to it. The existing Kubernetes resource objects such as ReplicaSet, Deployment, and Service already cover the basic needs of stateless applications, including automatic scaling and load balancing. Stateful, distributed applications, however, usually come with their own modeling conventions, for example Prometheus, Etcd, Zookeeper, and Elasticsearch. Deploying such applications often requires domain-specific knowledge, and scaling or upgrading them raises questions such as how to keep the service available throughout. Kubernetes Operators were created to simplify the deployment of stateful, distributed applications.

A Kubernetes Operator is an application-specific controller that extends the Kubernetes API through CRDs (Custom Resource Definitions). It can create, configure, and manage a particular stateful application without requiring you to work directly with low-level Kubernetes resource objects such as Pods, Deployments, and Services.

Elastic Cloud on Kubernetes (ECK) is one such Kubernetes Operator. It manages the components of the Elastic Stack, such as Elasticsearch, Kibana, APM, and Beats. For example, by defining a single CRD object of kind Elasticsearch, ECK can quickly stand up an entire Elasticsearch cluster for us.

Install the Elastic custom resource definitions with kubectl create

[root@k8s-192-168-1-140 ~]# kubectl create -f https://download.elastic.co/downloads/eck/3.2.0/crds.yaml
customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearchautoscalers.autoscaling.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/stackconfigpolicies.stackconfigpolicy.k8s.elastic.co created
[root@k8s-192-168-1-140 ~]# 

Install the operator and its RBAC rules with kubectl apply

[root@k8s-192-168-1-140 ~]# kubectl apply -f https://download.elastic.co/downloads/eck/3.2.0/operator.yaml
namespace/elastic-system created
serviceaccount/elastic-operator created
secret/elastic-webhook-server-cert created
configmap/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator-view created
clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
service/elastic-webhook-server created
statefulset.apps/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created
[root@k8s-192-168-1-140 ~]# 

Check that the operator has started

[root@k8s-192-168-1-140 ~]# kubectl get -n elastic-system pods
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   0          8m38s
[root@k8s-192-168-1-140 ~]# 

Deploy an Elasticsearch cluster

The operator automatically creates and manages the Kubernetes resources needed to reach the desired state of the Elasticsearch cluster. It may take a few minutes for all resources to be created and for the cluster to become ready.

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
EOF

Storage capacity

By default, a 1Gi volume is claimed at creation time; the desired capacity can be declared in the manifest instead.

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
    - name: default
      count: 1
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
            storageClassName: nfs-storage
      config:
        node.store.allow_mmap: false
EOF

Check the deployment status

[root@k8s-192-168-1-140 ~]# 
[root@k8s-192-168-1-140 ~]# kubectl get pod -A
NAMESPACE        NAME                                       READY   STATUS    RESTARTS            AGE
default          nginx-66686b6766-tdwt2                     1/1     Running   2 (<invalid> ago)   81d
default          quickstart-es-default-0                    0/1     Pending   0                   2m1s
elastic-system   elastic-operator-0                         1/1     Running   0                   12m
kube-system      calico-kube-controllers-78dcb7b647-8f2ph   1/1     Running   2 (<invalid> ago)   81d
kube-system      calico-node-hpwvr                          1/1     Running   2 (<invalid> ago)   81d
kube-system      coredns-6746f4cb74-bhkv8                   1/1     Running   2 (<invalid> ago)   81d
kube-system      metrics-server-55c56cb875-bwbpr            1/1     Running   2 (<invalid> ago)   81d
kube-system      node-local-dns-nz4q7                       1/1     Running   2 (<invalid> ago)   81d
[root@k8s-192-168-1-140 ~]# 

Check the logs

[root@k8s-192-168-1-140 ~]# kubectl logs -f quickstart-es-default-0
Defaulted container "elasticsearch" out of: elasticsearch, elastic-internal-init-filesystem (init), elastic-internal-suspend (init)
[root@k8s-192-168-1-140 ~]# 

Install NFS for dynamic provisioning

[root@k8s-192-168-1-140 ~]# yum  install  nfs-utils -y
[root@k8s-192-168-1-140 ~]# mkdir /nfs
[root@k8s-192-168-1-140 ~]#  vim /etc/exports
/nfs *(rw,sync,no_root_squash,no_subtree_check)
[root@k8s-192-168-1-140 ~]# systemctl restart rpcbind
[root@k8s-192-168-1-140 ~]# systemctl restart nfs-server
[root@k8s-192-168-1-140 ~]# systemctl  enable  rpcbind
[root@k8s-192-168-1-140 ~]# systemctl  enable  nfs-server
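
The options in the /etc/exports line above determine how clients may use the share. Annotated, with option meanings per the standard exports(5) semantics:

```
# /etc/exports: one export per line, <path> <clients>(<options>)
#   rw                allow read and write access
#   sync              commit writes to disk before replying to the client
#   no_root_squash    do not remap root to an anonymous user; pods that
#                     write as root need this
#   no_subtree_check  disable subtree checking
/nfs *(rw,sync,no_root_squash,no_subtree_check)
```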

Write the NFS provisioner manifests for Kubernetes

[root@k8s-192-168-1-140 ~]#  vim nfs-storage.yaml
[root@k8s-192-168-1-140 ~]# 
[root@k8s-192-168-1-140 ~]#  cat nfs-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  # whether to archive the PV contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/chenby/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.1.140  # address of your NFS server
            - name: NFS_PATH
              value: /nfs/          # directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.140
            path: /nfs/
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Apply the manifest

[root@k8s-192-168-1-140 ~]#  kubectl apply -f nfs-storage.yaml
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@k8s-192-168-1-140 ~]# 

Check the storage

[root@k8s-192-168-1-140 ~]# kubectl get storageclasses.storage.k8s.io
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  6h7m
[root@k8s-192-168-1-140 ~]# kubectl get pvc
NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
elasticsearch-data-quickstart-es-default-0   Bound    pvc-2df832aa-1c54-4af6-8384-5e5c5f167445   5Gi        RWO            nfs-storage    <unset>                 39s
[root@k8s-192-168-1-140 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-2df832aa-1c54-4af6-8384-5e5c5f167445   5Gi        RWO            Delete           Bound    default/elasticsearch-data-quickstart-es-default-0   nfs-storage    <unset>                          43s
[root@k8s-192-168-1-140 ~]# 

Check the Elasticsearch services

[root@k8s-192-168-1-140 ~]# kubectl get service
NAME                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes                    ClusterIP   10.68.0.1      <none>        443/TCP        81d
nginx                         NodePort    10.68.148.53   <none>        80:30330/TCP   81d
quickstart-es-default         ClusterIP   None           <none>        9200/TCP       74s
quickstart-es-http            ClusterIP   10.68.66.232   <none>        9200/TCP       75s
quickstart-es-internal-http   ClusterIP   10.68.121.73   <none>        9200/TCP       75s
quickstart-es-transport       ClusterIP   None           <none>        9300/TCP       75s
[root@k8s-192-168-1-140 ~]# 

Retrieve the Elasticsearch password

[root@k8s-192-168-1-140 ~]# PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
[root@k8s-192-168-1-140 ~]# 
[root@k8s-192-168-1-140 ~]# kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
V3VPqwQMURTSg6zFYvVIsH13
[root@k8s-192-168-1-140 ~]# 
[root@k8s-192-168-1-140 ~]# 
[root@k8s-192-168-1-140 ~]# curl -u "elastic:$PASSWORD" -k "https://10.68.66.232:9200"
{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "JNqGubnmSeao_LO-JmypHg",
  "version" : {
    "number" : "9.2.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ed771e6976fac1a085affabd45433234a4babeaf",
    "build_date" : "2025-11-27T08:06:51.614397514Z",
    "build_snapshot" : false,
    "lucene_version" : "10.3.2",
    "minimum_wire_compatibility_version" : "8.19.0",
    "minimum_index_compatibility_version" : "8.0.0"
  },
  "tagline" : "You Know, for Search"
}
[root@k8s-192-168-1-140 ~]# 
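
The go-template filter base64decode works because Kubernetes stores Secret values base64-encoded. As a purely local sketch (no cluster needed), the round trip below uses the example password shown above to illustrate what the Secret object actually stores:

```shell
# Kubernetes keeps Secret data base64-encoded in the API object;
# clients decode it to recover the plain value.
# The password below is the example value from this walkthrough.
ENCODED=$(printf '%s' 'V3VPqwQMURTSg6zFYvVIsH13' | base64)
echo "$ENCODED"                                 # the form stored in the Secret
printf '%s' "$ENCODED" | base64 --decode; echo  # the plain password again
```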

Install Kibana

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 9.2.2
  count: 1
  elasticsearchRef:
    name: quickstart
EOF

Check the service status and password

[root@k8s-192-168-1-140 ~]# kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart   red              9.2.2     16s
[root@k8s-192-168-1-140 ~]# # Retrieve the password
[root@k8s-192-168-1-140 ~]# kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
V3VPqwQMURTSg6zFYvVIsH13
[root@k8s-192-168-1-140 ~]# kubectl get service 
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes                    ClusterIP   10.68.0.1       <none>        443/TCP        81d
nginx                         NodePort    10.68.148.53    <none>        80:30330/TCP   81d
quickstart-es-default         ClusterIP   None            <none>        9200/TCP       2m47s
quickstart-es-http            ClusterIP   10.68.66.232    <none>        9200/TCP       2m48s
quickstart-es-internal-http   ClusterIP   10.68.121.73    <none>        9200/TCP       2m48s
quickstart-es-transport       ClusterIP   None            <none>        9300/TCP       2m48s
quickstart-kb-http            ClusterIP   10.68.103.103   <none>        5601/TCP       24s
[root@k8s-192-168-1-140 ~]# kubectl get service quickstart-kb-http
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
quickstart-kb-http   ClusterIP   10.68.103.103   <none>        5601/TCP   2m39s
[root@k8s-192-168-1-140 ~]# 

Enable access

[root@k8s-192-168-1-140 ~]# kubectl port-forward --address 0.0.0.0 service/quickstart-kb-http 5601
Forwarding from 0.0.0.0:5601 -> 5601

# Login URL
https://192.168.1.140:5601/login
# User:
elastic
# Password:
V3VPqwQMURTSg6zFYvVIsH13

Change the number of Elasticsearch nodes

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
    - name: default
      count: 3
      config:
        node.store.allow_mmap: false
EOF

Check the status

[root@k8s-192-168-1-140 ~]# kubectl get pod -A
NAMESPACE        NAME                                       READY   STATUS    RESTARTS            AGE
default          nfs-client-provisioner-58d465c998-w8mfq    1/1     Running   0                   5h38m
default          nginx-66686b6766-tdwt2                     1/1     Running   2 (<invalid> ago)   81d
default          quickstart-es-default-0                    1/1     Running   0                   5h51m
default          quickstart-kb-57cc78f6b8-b8fpf             1/1     Running   0                   5h24m
elastic-system   elastic-operator-0                         1/1     Running   0                   6h2m
kube-system      calico-kube-controllers-78dcb7b647-8f2ph   1/1     Running   2 (<invalid> ago)   81d
kube-system      calico-node-hpwvr                          1/1     Running   2 (<invalid> ago)   81d
kube-system      coredns-6746f4cb74-bhkv8                   1/1     Running   2 (<invalid> ago)   81d
kube-system      metrics-server-55c56cb875-bwbpr            1/1     Running   2 (<invalid> ago)   81d
kube-system      node-local-dns-nz4q7                       1/1     Running   2 (<invalid> ago)   81d
[root@k8s-192-168-1-140 ~]# kubectl get pod -A -w
NAMESPACE        NAME                                       READY   STATUS    RESTARTS            AGE
default          nfs-client-provisioner-58d465c998-w8mfq    1/1     Running   0                   5h38m
default          nginx-66686b6766-tdwt2                     1/1     Running   2 (<invalid> ago)   81d
default          quickstart-es-default-0                    1/1     Running   0                   5h51m
default          quickstart-kb-57cc78f6b8-b8fpf             1/1     Running   0                   5h24m
elastic-system   elastic-operator-0                         1/1     Running   0                   6h2m
kube-system      calico-kube-controllers-78dcb7b647-8f2ph   1/1     Running   2 (<invalid> ago)   81d
kube-system      calico-node-hpwvr                          1/1     Running   2 (<invalid> ago)   81d
kube-system      coredns-6746f4cb74-bhkv8                   1/1     Running   2 (<invalid> ago)   81d
kube-system      metrics-server-55c56cb875-bwbpr            1/1     Running   2 (<invalid> ago)   81d
kube-system      node-local-dns-nz4q7                       1/1     Running   2 (<invalid> ago)   81d
default          quickstart-es-default-1                    0/1     Pending   0                   0s
default          quickstart-es-default-1                    0/1     Pending   0                   0s
default          quickstart-es-default-1                    0/1     Pending   0                   0s
default          quickstart-es-default-1                    0/1     Init:0/2   0                   0s
default          quickstart-es-default-1                    0/1     Init:0/2   0                   1s
default          quickstart-es-default-0                    1/1     Running    0                   5h51m
default          quickstart-es-default-1                    0/1     Init:0/2   0                   1s
default          quickstart-es-default-0                    1/1     Running    0                   5h51m
default          quickstart-es-default-1                    0/1     Init:0/2   0                   2s
default          quickstart-es-default-1                    0/1     Init:1/2   0                   3s
default          quickstart-es-default-1                    0/1     PodInitializing   0                   4s
default          quickstart-es-default-1                    0/1     Running           0                   5s
default          quickstart-es-default-1                    1/1     Running           0                   26s
default          quickstart-es-default-2                    0/1     Pending           0                   0s
default          quickstart-es-default-2                    0/1     Pending           0                   0s
default          quickstart-es-default-2                    0/1     Pending           0                   0s
default          quickstart-es-default-2                    0/1     Init:0/2          0                   0s
default          quickstart-es-default-2                    0/1     Init:0/2          0                   1s
default          quickstart-es-default-0                    1/1     Running           0                   5h51m
default          quickstart-es-default-1                    1/1     Running           0                   30s
default          quickstart-es-default-2                    0/1     Init:0/2          0                   2s
default          quickstart-es-default-1                    1/1     Running           0                   30s
default          quickstart-es-default-2                    0/1     Init:0/2          0                   2s
default          quickstart-es-default-0                    1/1     Running           0                   5h51m
default          quickstart-es-default-2                    0/1     Init:1/2          0                   3s
default          quickstart-es-default-2                    0/1     PodInitializing   0                   4s
default          quickstart-es-default-2                    0/1     Running           0                   5s
default          quickstart-es-default-2                    1/1     Running           0                   33s
^C[root@k8s-192-168-1-140 ~]# 
[root@k8s-192-168-1-140 ~]# 
[root@k8s-192-168-1-140 ~]# 
[root@k8s-192-168-1-140 ~]# kubectl get pod -A -w
NAMESPACE        NAME                                       READY   STATUS    RESTARTS            AGE
default          nfs-client-provisioner-58d465c998-w8mfq    1/1     Running   0                   5h40m
default          nginx-66686b6766-tdwt2                     1/1     Running   2 (<invalid> ago)   81d
default          quickstart-es-default-0                    1/1     Running   0                   5h53m
default          quickstart-es-default-1                    1/1     Running   0                   2m19s
default          quickstart-es-default-2                    1/1     Running   0                   111s
default          quickstart-kb-57cc78f6b8-b8fpf             1/1     Running   0                   5h26m
elastic-system   elastic-operator-0                         1/1     Running   0                   6h4m
kube-system      calico-kube-controllers-78dcb7b647-8f2ph   1/1     Running   2 (<invalid> ago)   81d
kube-system      calico-node-hpwvr                          1/1     Running   2 (<invalid> ago)   81d
kube-system      coredns-6746f4cb74-bhkv8                   1/1     Running   2 (<invalid> ago)   81d
kube-system      metrics-server-55c56cb875-bwbpr            1/1     Running   2 (<invalid> ago)   81d
kube-system      node-local-dns-nz4q7                       1/1     Running   2 (<invalid> ago)   81d

Uninstall


# Delete all Elastic resources in all namespaces
kubectl get namespaces --no-headers -o custom-columns=:metadata.name \
  | xargs -n1 kubectl delete elastic --all -n

# Delete the operator
kubectl delete -f https://download.elastic.co/downloads/eck/3.2.0/operator.yaml
kubectl delete -f https://download.elastic.co/downloads/eck/3.2.0/crds.yaml
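
The first cleanup command fans the namespace list out into one kubectl delete per namespace. A dry-run sketch of the same xargs pattern, with echo substituted for kubectl so it runs without a cluster (the namespace names are illustrative):

```shell
# Print the per-namespace commands the xargs pipeline would run,
# instead of executing them:
printf 'default\nelastic-system\n' \
  | xargs -n1 echo kubectl delete elastic --all -n
# prints one delete command per namespace
```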

About

https://www.oiox.cn/

https://www.oiox.cn/index.php/start-page.html

CSDN, GitHub, Zhihu, OSCHINA, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, and my personal blog

Search for 《小陈运维》 anywhere on the web.

Articles are mainly published on the WeChat official account 《Linux运维交流社区》.

