
k8s cluster optimization

GhostXE 2024-06-14 17:18:23
Summary: k8s cluster optimization


1. etcd optimization and troubleshooting

etcd is the node that stores the Kubernetes cluster's data; it is a leader-based distributed key-value store.

Official documentation: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/#resource-requirements

1. Optimization

1. Node count

Use an odd number of members; even for large clusters, three nodes is usually enough. With 2 or 4 members, split-brain can occur.

Causes of split-brain: network partitions, lost nodes, human error.

  1. Network partition: congestion on the links the etcd members use to talk to each other causes etcd heartbeat (keep-alive) messages to time out, triggering a new leader election and potentially splitting the cluster.
  2. Lost node: the server hosting the leader fails and drops out of the cluster, which can lead to split-brain.
  3. Human error: an operator takes a node offline, or a misconfiguration of etcd forces a node offline, which can also lead to split-brain.

2. Resources

Resources break down into three areas: network, hardware, and storage.

  1. Network: keep the etcd network unobstructed and in-cluster calls healthy; if necessary, use QoS (e.g. WRR queuing) to forward etcd traffic with priority.
  2. Hardware: the etcd documentation gives example CPU and memory configurations: https://etcd.io/docs/v3.6/op-guide/hardware/#example-hardware-configurations.
  3. Storage: etcd nodes should use fast SSDs, or NVMe (M.2) disks where necessary.

3. Security hardening

etcd nodes can be hardened by serving access over HTTPS and by restricting access with firewall rules.

  1. HTTPS: configure HTTPS so that only the Kubernetes API server can reach the etcd cluster, and use TLS authentication for hardening.
  2. Firewall rules: configure access rules with iptables to protect the etcd cluster (see the sketch after this list).
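
A minimal sketch of such rules, assuming etcd serves clients on 2379 and peers on 2380, and that 10.0.0.11-13 are the kube-apiserver nodes; all addresses here are examples and must be adapted to your environment:

# allow etcd client traffic (2379) only from the kube-apiserver nodes
iptables -A INPUT -p tcp --dport 2379 -s 10.0.0.11 -j ACCEPT
iptables -A INPUT -p tcp --dport 2379 -s 10.0.0.12 -j ACCEPT
iptables -A INPUT -p tcp --dport 2379 -s 10.0.0.13 -j ACCEPT
# allow etcd peer traffic (2380) only from the etcd subnet
iptables -A INPUT -p tcp --dport 2380 -s 10.0.0.0/24 -j ACCEPT
# drop everything else aimed at the etcd ports
iptables -A INPUT -p tcp --dport 2379 -j DROP
iptables -A INPUT -p tcp --dport 2380 -j DROP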

4. Parameter tuning

--max-request-bytes=10485760  # request size limit in bytes (10485760 = 10 MiB); the official guidance is not to exceed 10 MB. The default limit for a single request/key is 1.5 MiB.


--quota-backend-bytes=8589934592  # storage size limit: backend database quota in bytes (8589934592 = 8 GiB); the default is 2 GiB, and going beyond 8 GiB raises an alarm.

ETCDCTL_API=3 /usr/local/bin/etcdctl defrag --cluster --endpoints=https://10.0.0.21:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem  # run defragmentation from any node of the HA cluster; if the cluster is secured with certificates, they must be passed explicitly.
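
When the backend quota is close to being exceeded (or a NOSPACE alarm has fired), the usual sequence is compaction, defragmentation, then clearing the alarm. A hedged sketch reusing the endpoint and certificate paths from the defrag example above:

# shared connection flags (example endpoint and certificate paths)
FLAGS="--endpoints=https://10.0.0.21:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem"

# current revision of the keyspace
rev=$(ETCDCTL_API=3 etcdctl $FLAGS endpoint status --write-out=json | grep -o '"revision":[0-9]*' | grep -o '[0-9]*' | head -1)

# compact history up to that revision, defragment, then clear any alarm
ETCDCTL_API=3 etcdctl $FLAGS compaction "$rev"
ETCDCTL_API=3 etcdctl $FLAGS defrag
ETCDCTL_API=3 etcdctl $FLAGS alarm disarm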





2. Troubleshooting and resolution

1. How to resolve split-brain

1. Prevention
  1. Keep the network reliable and avoid congestion on the etcd node links.
  2. Backup and restore: back up the etcd cluster on a schedule, and restore from the most recent backup when something goes wrong (a scheduled-backup sketch follows below).
  3. Monitoring: watch etcd heartbeats and member health to keep the cluster robust.
2. Handling an actual split-brain:
  1. Diagnose the cause: network, a lost member, or a disk problem.
  2. Pause the etcd service and stop writes.
  3. After stopping the cluster, add a new member to keep an odd member count, wait for the cluster to start and then import the most recent backup; alternatively, pick the machine with the newest data copy from the stopped cluster as the leader and let it sync the data.
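
A minimal sketch of the scheduled backup mentioned in the prevention list, assuming the certificate paths used elsewhere in this article and a local /data/etcd-backup directory (in cron, % must be escaped):

# /etc/cron.d/etcd-backup: take a snapshot every day at 02:00
0 2 * * * root ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://10.0.0.21:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem snapshot save /data/etcd-backup/snapshot-$(date +\%F).db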
2. How to resolve high disk IO

Once disk IO is already too high, first back up the cluster to several locations, then stop the cluster and move the data directory onto an SSD or NVMe (M.2) disk.
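
Whether the disk really is the bottleneck can be confirmed from etcd's own metrics before migrating; a sketch, assuming the /metrics endpoint is served on the client port with the same certificates:

# WAL fsync and backend commit latency; sustained high values point to a slow disk
curl -s --cacert /etc/kubernetes/ssl/ca.pem --cert /etc/kubernetes/ssl/etcd.pem --key /etc/kubernetes/ssl/etcd-key.pem https://10.0.0.21:2379/metrics | grep -E 'etcd_disk_(wal_fsync|backend_commit)_duration_seconds_(sum|count)'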

3. How to resolve network congestion

Test connectivity with ping; if the latency is too high, ask the network team to optimize, specifically:

  1. Upgrade or re-provision the links used by the etcd servers to reduce latency.
  2. Configure QoS for the etcd servers so that etcd traffic is forwarded with priority.
  3. Configure iptables on the etcd servers so that only the Kubernetes servers can reach them.
  4. Configure secure HTTPS so that only the kube-apiserver talks to etcd.
4. Server replacement

Servers that cannot be repaired, or that need to be decommissioned, must be removed from the cluster.


etcdctl --endpoints=http://10.0.0.1,http://10.0.0.2 member list



8211f1d0f64f3269, started, member1, http://10.0.0.1:2380, http://10.0.0.1:2379
91bc3c398fb3c146, started, member2, http://10.0.0.2:2380, http://10.0.0.2:2379
fd422379fda50e48, started, member3, http://10.0.0.3:2380, http://10.0.0.3:2379


#remove the failed member
etcdctl member remove 8211f1d0f64f3269
#output
Removed member 8211f1d0f64f3269 from cluster


#add a new member
etcdctl member add member4 --peer-urls=http://10.0.0.4:2380
#output
Member 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4



#restore the cluster; this requires a snapshot file
ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshotdb

#take a snapshot; $ENDPOINT is the etcd endpoint address and snapshotdb is the output file
ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb

#verify the snapshot
ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb 


#check etcd endpoint health

export NODE_IPS="10.0.0.31 10.0.0.32 10.0.0.33"
for ip in ${NODE_IPS}; do
  ETCDCTL_API=3 etcdctl \
  --endpoints=https://${ip}:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  endpoint health; done

for ip in ${NODE_IPS}; do
  ETCDCTL_API=3 etcdctl \
  --endpoints=https://${ip}:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  --write-out=table endpoint status; done



3. etcd upgrades

#upgrade guide: https://etcd.io/docs/v3.5/upgrades/upgrade_3_5/

4. etcd election rules

etcd has the notions of leader, candidate, and follower, plus a term ID (starts at 0 and is incremented by 1 each time the leader changes).

termID: etcd uses Raft, a consensus algorithm; the term ID marks the election round, and all members must run with consistent configuration.

Node count: usually an odd number (1, 3, 5). In practice 5 is less common; testing shows write amplification with 5 members, because etcd replicates every write to all members, lowering cluster performance.

Startup order: start the members at the same time whenever possible; it makes the election easier.

Leader heartbeat interval: 100 ms.

etcd election rules:

  1. On cluster startup every member is a follower; when no leader is detected, members compare term IDs and the larger term wins.
  2. If the term IDs are equal, the members compare log freshness (the log stores the election messages); a member normally votes for itself first and then syncs with the other members, and whoever has the more up-to-date log wins.
  3. Once a leader is elected, the other members become followers and the leader sends periodic heartbeats.
  4. The other members switch to followers and sync data from the leader.
  5. If the heartbeat times out, a new election is held.

Leader failure:

If the followers stop receiving the leader's heartbeat messages for too long, they re-elect a leader.

A follower becomes a candidate; term IDs and log freshness are compared, and the member with the larger term ID, or the more recently updated log, becomes the new leader.

The new leader increments its term ID and announces it to the other members.

If the old leader rejoins the cluster, it becomes a follower; no new election is triggered.
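
For planned maintenance on the current leader, leadership can also be handed over explicitly instead of waiting for the heartbeat to time out. A hedged example, using a member ID taken from the member list output shown earlier; the move-leader command must be sent to the current leader's endpoint:

# check which member is the leader and note the target member ID
ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.21:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem --write-out=table endpoint status

# transfer leadership to that member before taking the current leader down
ETCDCTL_API=3 etcdctl --endpoints=https://10.0.0.21:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem move-leader fd422379fda50e48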

5. etcd CRUD commands

(1) Read: get

ETCDCTL_API=3 etcdctl  get / --prefix  --keys-only  #list all keys; ETCDCTL_API=3 selects API version 3

ETCDCTL_API=3
#list keys related to pods
root@k8setcd31:~# ETCDCTL_API=3 etcdctl  get / --prefix  --keys-only   | grep pod 
/calico/ipam/v2/handle/k8s-pod-network.34c7c4df3b9fa2b41d5621460ea61b0693c1760ddaa3888718bc25c48c9d6c0e
/calico/ipam/v2/handle/k8s-pod-network.3b8fef06e5dbdd4627496ac68f9c2b531a6e1e7d07e3a9526c99f25eb7798580
/calico/ipam/v2/handle/k8s-pod-network.3e2af890abe3de2d2ea6586ed630714de725b54dcb3887a88b0e798ce9db4d6e
/calico/ipam/v2/handle/k8s-pod-network.8a70d0ceabf90f0bff6c64a49e1e55f73c6ee1ab5b24df0c72600fe10b68a98d
/calico/ipam/v2/handle/k8s-pod-network.944d15792b1fe67f571266c0d3610ed6c83bb1f655acbe0a589e180a8a328041
/calico/ipam/v2/handle/k8s-pod-network.a16da6f69aef7f207d4f4fe6ef7838baab33c3972b90832625ab60474d7a512f
/calico/ipam/v2/handle/k8s-pod-network.a6561d6222fc36384faa943dc04fe725fea8432abf7015c8a46bd0bff3c6f5b6
/calico/ipam/v2/handle/k8s-pod-network.af12ee887e3dfb1131d95401e7df730ad0342e945904e7d9d74853580d5f37ab
/calico/ipam/v2/handle/k8s-pod-network.c7f5dd18d1901c1c320dd4961734024f44c303efbe236fc8255b209570b7ed79
/calico/ipam/v2/handle/k8s-pod-network.f794d8795d6402de32556d8baf55021bdf97fcd11ac14debe203ac9a9195c05e
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.horizontal-pod-autoscaler
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.pod-garbage-collector
/registry/apiextensions.k8s.io/customresourcedefinitions/podmonitors.monitoring.coreos.com
/registry/clusterrolebindings/system:controller:horizontal-pod-autoscaler
/registry/clusterrolebindings/system:controller:pod-garbage-collector
/registry/clusterroles/system:controller:horizontal-pod-autoscaler
/registry/clusterroles/system:controller:pod-garbage-collector
/registry/poddisruptionbudgets/kube-system/calico-kube-controllers
/registry/pods/kube-system/calico-kube-controllers-5d45cfb97b-x6rft
/registry/pods/kube-system/calico-node-ftj5z
/registry/pods/kube-system/calico-node-hdbkv
/registry/pods/kube-system/calico-node-jjv5z
/registry/pods/kube-system/calico-node-l7psx
/registry/pods/kube-system/calico-node-v5l4l
/registry/pods/kube-system/calico-node-vz6mw
/registry/pods/kube-system/coredns-566564f9fd-2qxnv
/registry/pods/kube-system/coredns-566564f9fd-9qxmp
/registry/pods/kube-system/snapshot-controller-0
/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-5fdf8ff74f-h8h8x
/registry/pods/kubernetes-dashboard/kubernetes-dashboard-56cdd85c55-wkb7d
/registry/pods/kuboard/kuboard-v3-55b8c7dbd7-lmmnl
/registry/pods/myapp/myapp-nginx-deployment-7454547d57-bblgk
/registry/pods/myapp/myapp-nginx-deployment-7454547d57-jxnpk
/registry/pods/myapp/myapp-nginx-deployment-7454547d57-nqjm5
/registry/pods/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7
/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler
/registry/serviceaccounts/kube-system/pod-garbage-collector




List keys related to deployments

root@k8setcd31:~# ETCDCTL_API=3 etcdctl  get / --prefix  --keys-only   | grep deployment
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.deployment-controller
/calico/resources/v3/projectcalico.org/workloadendpoints/myapp/k8s--worker--01--21-k8s-myapp--nginx--deployment--7454547d57--jxnpk-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/myapp/k8s--worker--01--21-k8s-myapp--tomcat--app1--deployment--6d9d8885db--v59n7-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/myapp/k8s--worker--22-k8s-myapp--nginx--deployment--7454547d57--bblgk-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/myapp/k8s--worker--23-k8s-myapp--nginx--deployment--7454547d57--nqjm5-eth0
/registry/clusterrolebindings/system:controller:deployment-controller
/registry/clusterroles/system:controller:deployment-controller
/registry/deployments/kube-system/calico-kube-controllers
/registry/deployments/kube-system/coredns
/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper
/registry/deployments/kubernetes-dashboard/kubernetes-dashboard
/registry/deployments/kuboard/kuboard-v3
/registry/deployments/myapp/myapp-nginx-deployment
/registry/deployments/myapp/myapp-tomcat-app1-deployment
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c912993658714
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c912d61094e5a
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c9133f030d3ee
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c9137388bccb4
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c9137efe3dfc7
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c9137eff0dfd0
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c9138108718af
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c91381087821b
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c913b06e00cfe
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c91450986be75
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c914e99243f15
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c914e9a504b4e
/registry/events/myapp/myapp-nginx-deployment-7454547d57-bblgk.175c914e9cc4259e
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c91299a32af87
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c912d6ed984fb
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c913476b17237
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c9136f4f77631
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c9137ac0cc703
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c9137ac0d040c
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c9137d96f7c13
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c9137d96ff28f
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c913b0fe445fe
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c9144245cfb1c
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c914fcb5d3c66
/registry/events/myapp/myapp-nginx-deployment-7454547d57-jxnpk.175c914fccbaee2a
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c91299a7de5ef
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c912d6e7570e0
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c9133ed1ba4a5
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c9137a17ff8fa
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c91385813809e
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c91385814047d
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c91387afaf11a
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c91387afb121d
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c913b7619a7f4
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c91448a3ccd54
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c9150a7b3dee4
/registry/events/myapp/myapp-nginx-deployment-7454547d57-nqjm5.175c9150a8b5c549
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c91299a361622
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c912d612647f4
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c9134758b3238
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c913826609c70
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c91386229531d
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c9138622997a2
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c91388cd91c24
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c91388cd9362f
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c913bfed30115
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c9144d73166b7
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c91507d4b506d
/registry/events/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7.175c91507dfecbba
/registry/pods/myapp/myapp-nginx-deployment-7454547d57-bblgk
/registry/pods/myapp/myapp-nginx-deployment-7454547d57-jxnpk
/registry/pods/myapp/myapp-nginx-deployment-7454547d57-nqjm5
/registry/pods/myapp/myapp-tomcat-app1-deployment-6d9d8885db-v59n7
/registry/replicasets/myapp/myapp-nginx-deployment-7454547d57
/registry/replicasets/myapp/myapp-tomcat-app1-deployment-6d9d8885db
/registry/serviceaccounts/kube-system/deployment-controller

List keys related to namespaces

root@k8setcd31:~# ETCDCTL_API=3 etcdctl  get / --prefix  --keys-only   | grep namespace
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.namespace-controller
/registry/clusterrolebindings/system:controller:namespace-controller
/registry/clusterroles/system:controller:namespace-controller
/registry/namespaces/default
/registry/namespaces/kube-node-lease
/registry/namespaces/kube-public
/registry/namespaces/kube-system
/registry/namespaces/kubernetes-dashboard
/registry/namespaces/kuboard
/registry/namespaces/myapp
/registry/serviceaccounts/kube-system/namespace-controller

List keys related to calico

root@k8setcd31:~# ETCDCTL_API=3 etcdctl  get / --prefix  --keys-only   | grep calico
/calico/ipam/v2/assignment/ipv4/block/172.16.124.64-26
/calico/ipam/v2/assignment/ipv4/block/172.16.140.192-26
/calico/ipam/v2/assignment/ipv4/block/172.16.221.64-26
/calico/ipam/v2/assignment/ipv4/block/172.16.227.128-26
/calico/ipam/v2/assignment/ipv4/block/172.16.39.64-26
/calico/ipam/v2/assignment/ipv4/block/172.16.76.64-26
/calico/ipam/v2/config
/calico/ipam/v2/handle/ipip-tunnel-addr-k8s-master-01-11
/calico/ipam/v2/handle/ipip-tunnel-addr-k8s-master-02-12
/calico/ipam/v2/handle/ipip-tunnel-addr-k8s-master-03-13
/calico/ipam/v2/handle/ipip-tunnel-addr-k8s-worker-01-21
/calico/ipam/v2/handle/ipip-tunnel-addr-k8s-worker-22
/calico/ipam/v2/handle/ipip-tunnel-addr-k8s-worker-23
/calico/ipam/v2/handle/k8s-pod-network.34c7c4df3b9fa2b41d5621460ea61b0693c1760ddaa3888718bc25c48c9d6c0e
/calico/ipam/v2/handle/k8s-pod-network.3b8fef06e5dbdd4627496ac68f9c2b531a6e1e7d07e3a9526c99f25eb7798580
/calico/ipam/v2/handle/k8s-pod-network.3e2af890abe3de2d2ea6586ed630714de725b54dcb3887a88b0e798ce9db4d6e
/calico/ipam/v2/handle/k8s-pod-network.8a70d0ceabf90f0bff6c64a49e1e55f73c6ee1ab5b24df0c72600fe10b68a98d
/calico/ipam/v2/handle/k8s-pod-network.944d15792b1fe67f571266c0d3610ed6c83bb1f655acbe0a589e180a8a328041
/calico/ipam/v2/handle/k8s-pod-network.a16da6f69aef7f207d4f4fe6ef7838baab33c3972b90832625ab60474d7a512f
/calico/ipam/v2/handle/k8s-pod-network.a6561d6222fc36384faa943dc04fe725fea8432abf7015c8a46bd0bff3c6f5b6
/calico/ipam/v2/handle/k8s-pod-network.af12ee887e3dfb1131d95401e7df730ad0342e945904e7d9d74853580d5f37ab
/calico/ipam/v2/handle/k8s-pod-network.c7f5dd18d1901c1c320dd4961734024f44c303efbe236fc8255b209570b7ed79
/calico/ipam/v2/handle/k8s-pod-network.f794d8795d6402de32556d8baf55021bdf97fcd11ac14debe203ac9a9195c05e
/calico/ipam/v2/host/k8s-master-01-11/ipv4/block/172.16.140.192-26
/calico/ipam/v2/host/k8s-master-02-12/ipv4/block/172.16.39.64-26
/calico/ipam/v2/host/k8s-master-03-13/ipv4/block/172.16.227.128-26
/calico/ipam/v2/host/k8s-worker-01-21/ipv4/block/172.16.76.64-26
/calico/ipam/v2/host/k8s-worker-22/ipv4/block/172.16.221.64-26
/calico/ipam/v2/host/k8s-worker-23/ipv4/block/172.16.124.64-26
/calico/resources/v3/projectcalico.org/clusterinformations/default
/calico/resources/v3/projectcalico.org/felixconfigurations/default
/calico/resources/v3/projectcalico.org/felixconfigurations/node.k8s-master-01-11
/calico/resources/v3/projectcalico.org/felixconfigurations/node.k8s-master-02-12
/calico/resources/v3/projectcalico.org/felixconfigurations/node.k8s-master-03-13
/calico/resources/v3/projectcalico.org/felixconfigurations/node.k8s-worker-01-21
/calico/resources/v3/projectcalico.org/felixconfigurations/node.k8s-worker-22
/calico/resources/v3/projectcalico.org/felixconfigurations/node.k8s-worker-23
/calico/resources/v3/projectcalico.org/ippools/default-ipv4-ippool
/calico/resources/v3/projectcalico.org/kubecontrollersconfigurations/default
/calico/resources/v3/projectcalico.org/nodes/k8s-master-01-11
/calico/resources/v3/projectcalico.org/nodes/k8s-master-02-12
/calico/resources/v3/projectcalico.org/nodes/k8s-master-03-13
/calico/resources/v3/projectcalico.org/nodes/k8s-worker-01-21
/calico/resources/v3/projectcalico.org/nodes/k8s-worker-22
/calico/resources/v3/projectcalico.org/nodes/k8s-worker-23
/calico/resources/v3/projectcalico.org/profiles/kns.default
/calico/resources/v3/projectcalico.org/profiles/kns.kube-node-lease
/calico/resources/v3/projectcalico.org/profiles/kns.kube-public
/calico/resources/v3/projectcalico.org/profiles/kns.kube-system
/calico/resources/v3/projectcalico.org/profiles/kns.kubernetes-dashboard
/calico/resources/v3/projectcalico.org/profiles/kns.kuboard
/calico/resources/v3/projectcalico.org/profiles/kns.myapp
/calico/resources/v3/projectcalico.org/profiles/ksa.default.default
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-node-lease.default
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-public.default
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.attachdetach-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.calico-kube-controllers
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.calico-node
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.certificate-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.clusterrole-aggregation-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.coredns
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.cronjob-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.daemon-set-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.default
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.deployment-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.disruption-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.endpoint-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.endpointslice-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.endpointslicemirroring-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.ephemeral-volume-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.expand-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.generic-garbage-collector
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.horizontal-pod-autoscaler
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.job-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.namespace-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.node-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.persistent-volume-binder
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.pod-garbage-collector
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.pv-protection-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.pvc-protection-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.replicaset-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.replication-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.resourcequota-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.root-ca-cert-publisher
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.service-account-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.service-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.snapshot-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.statefulset-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.ttl-after-finished-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.ttl-controller
/calico/resources/v3/projectcalico.org/profiles/ksa.kubernetes-dashboard.admin-user
/calico/resources/v3/projectcalico.org/profiles/ksa.kubernetes-dashboard.default
/calico/resources/v3/projectcalico.org/profiles/ksa.kubernetes-dashboard.kubernetes-dashboard
/calico/resources/v3/projectcalico.org/profiles/ksa.kuboard.default
/calico/resources/v3/projectcalico.org/profiles/ksa.kuboard.kuboard-admin
/calico/resources/v3/projectcalico.org/profiles/ksa.kuboard.kuboard-viewer
/calico/resources/v3/projectcalico.org/profiles/ksa.myapp.default
/calico/resources/v3/projectcalico.org/workloadendpoints/kube-system/k8s--worker--01--21-k8s-coredns--566564f9fd--2qxnv-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/kube-system/k8s--worker--01--21-k8s-snapshot--controller--0-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/kube-system/k8s--worker--22-k8s-coredns--566564f9fd--9qxmp-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/kubernetes-dashboard/k8s--worker--23-k8s-dashboard--metrics--scraper--5fdf8ff74f--h8h8x-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/kubernetes-dashboard/k8s--worker--23-k8s-kubernetes--dashboard--56cdd85c55--wkb7d-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/kuboard/k8s--worker--22-k8s-kuboard--v3--55b8c7dbd7--lmmnl-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/myapp/k8s--worker--01--21-k8s-myapp--nginx--deployment--7454547d57--jxnpk-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/myapp/k8s--worker--01--21-k8s-myapp--tomcat--app1--deployment--6d9d8885db--v59n7-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/myapp/k8s--worker--22-k8s-myapp--nginx--deployment--7454547d57--bblgk-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/myapp/k8s--worker--23-k8s-myapp--nginx--deployment--7454547d57--nqjm5-eth0
/registry/clusterrolebindings/calico-kube-controllers
/registry/clusterrolebindings/calico-node
/registry/clusterroles/calico-kube-controllers
/registry/clusterroles/calico-node
/registry/configmaps/kube-system/calico-config
/registry/controllerrevisions/kube-system/calico-node-5864644c86
/registry/controllerrevisions/kube-system/calico-node-64466497
/registry/controllerrevisions/kube-system/calico-node-74b5b78bf8
/registry/controllerrevisions/kube-system/calico-node-76c66bdd96
/registry/controllerrevisions/kube-system/calico-node-7df94b4c75
/registry/controllerrevisions/kube-system/calico-node-7fcccdfd7d
/registry/daemonsets/kube-system/calico-node
/registry/deployments/kube-system/calico-kube-controllers
/registry/events/kube-system/calico-kube-controllers-5d45cfb97b-x6rft.175c912993b97b98
/registry/events/kube-system/calico-kube-controllers-5d45cfb97b-x6rft.175c9129c2b4e6a3
/registry/events/kube-system/calico-kube-controllers-5d45cfb97b-x6rft.175c9129d0affc68
/registry/events/kube-system/calico-kube-controllers-5d45cfb97b-x6rft.175c9129d943c8eb
/registry/events/kube-system/calico-kube-controllers-5d45cfb97b-x6rft.175c912b628c9422
/registry/events/kube-system/calico-kube-controllers-5d45cfb97b-x6rft.175c912e029179a1
/registry/events/kube-system/calico-node-ftj5z.175c9129ee2a7c4f
/registry/events/kube-system/calico-node-ftj5z.175c912a2497accc
/registry/events/kube-system/calico-node-ftj5z.175c912a2aa08832
/registry/events/kube-system/calico-node-ftj5z.175c912a3a1d5b7a
/registry/events/kube-system/calico-node-ftj5z.175c912ebcc35534
/registry/events/kube-system/calico-node-ftj5z.175c91393ba6a4bd
/registry/events/kube-system/calico-node-ftj5z.175c91393c96845f
/registry/events/kube-system/calico-node-ftj5z.175c9139404a6e53
/registry/events/kube-system/calico-node-ftj5z.175c913977335938
/registry/events/kube-system/calico-node-ftj5z.175c913978cdd231
/registry/events/kube-system/calico-node-ftj5z.175c91397b2e1c06
/registry/events/kube-system/calico-node-ftj5z.175c9139b68cc8ea
/registry/events/kube-system/calico-node-ftj5z.175c9139f15fb34d
/registry/events/kube-system/calico-node-hdbkv.175c91299b18ef41
/registry/events/kube-system/calico-node-hdbkv.175c9129c145d496
/registry/events/kube-system/calico-node-hdbkv.175c9129c48057a9
/registry/events/kube-system/calico-node-hdbkv.175c9129d6e05652
/registry/events/kube-system/calico-node-hdbkv.175c912e42261e1b
/registry/events/kube-system/calico-node-hdbkv.175c913365cf3b78
/registry/events/kube-system/calico-node-hdbkv.175c913366fb1c49
/registry/events/kube-system/calico-node-hdbkv.175c913369f4d492
/registry/events/kube-system/calico-node-hdbkv.175c9133a1fefc26
/registry/events/kube-system/calico-node-hdbkv.175c9133a2bfecf7
/registry/events/kube-system/calico-node-hdbkv.175c9133a6d071e5
/registry/events/kube-system/calico-node-hdbkv.175c9133e1726d3f
/registry/events/kube-system/calico-node-hdbkv.175c9133fe7077b5
/registry/events/kube-system/calico-node-hdbkv.175c91341c267958
/registry/events/kube-system/calico-node-hdbkv.175c91364eea94eb
/registry/events/kube-system/calico-node-hdbkv.175c9138a31cb849
/registry/events/kube-system/calico-node-jjv5z.175c9128ef11a7c3
/registry/events/kube-system/calico-node-jjv5z.175c9129251ad43f
/registry/events/kube-system/calico-node-jjv5z.175c912927e32a7b
/registry/events/kube-system/calico-node-jjv5z.175c912935cc61b5
/registry/events/kube-system/calico-node-jjv5z.175c912ade8dbd8e
/registry/events/kube-system/calico-node-jjv5z.175c912ae02ca07e
/registry/events/kube-system/calico-node-jjv5z.175c912ae49a04ee
/registry/events/kube-system/calico-node-jjv5z.175c912b1a1e40b2
/registry/events/kube-system/calico-node-jjv5z.175c912b1aeb4006
/registry/events/kube-system/calico-node-jjv5z.175c912b1d9364b9
/registry/events/kube-system/calico-node-jjv5z.175c912b594ab61d
/registry/events/kube-system/calico-node-jjv5z.175c912b9712f6ba
/registry/events/kube-system/calico-node-jjv5z.175c912d9c9dda5a
/registry/events/kube-system/calico-node-jjv5z.175c912ff059cbd5
/registry/events/kube-system/calico-node-l7psx.175c912b12991242
/registry/events/kube-system/calico-node-l7psx.175c912b1566ee8e
/registry/events/kube-system/calico-node-l7psx.175c912b19b7a40e
/registry/events/kube-system/calico-node-l7psx.175c912b4e30b792
/registry/events/kube-system/calico-node-l7psx.175c912b4ec7913f
/registry/events/kube-system/calico-node-l7psx.175c912b51630eda
/registry/events/kube-system/calico-node-l7psx.175c912b961ad0c3
/registry/events/kube-system/calico-node-l7psx.175c912bc89ebb67
/registry/events/kube-system/calico-node-l7psx.175c912ddc0e63f0
/registry/events/kube-system/calico-node-l7psx.175c91302f85c085
/registry/events/kube-system/calico-node-l7psx.175cab5aadffeb88
/registry/events/kube-system/calico-node-l7psx.175cab5ad6da3a08
/registry/events/kube-system/calico-node-l7psx.175cab5ada90009c
/registry/events/kube-system/calico-node-l7psx.175cab5aedb6b36e
/registry/events/kube-system/calico-node-v5l4l.175c91299b72c4f2
/registry/events/kube-system/calico-node-v5l4l.175c9129c7ccdf80
/registry/events/kube-system/calico-node-v5l4l.175c9129c94557cd
/registry/events/kube-system/calico-node-v5l4l.175c9129d1339504
/registry/events/kube-system/calico-node-v5l4l.175c912e358be745
/registry/events/kube-system/calico-node-v5l4l.175c913357ec84be
/registry/events/kube-system/calico-node-v5l4l.175c9133592d2b55
/registry/events/kube-system/calico-node-v5l4l.175c91335cab684d
/registry/events/kube-system/calico-node-v5l4l.175c9133945f25cb
/registry/events/kube-system/calico-node-v5l4l.175c91339545ec68
/registry/events/kube-system/calico-node-v5l4l.175c9133993cd429
/registry/events/kube-system/calico-node-v5l4l.175c9133d4d6b82e
/registry/events/kube-system/calico-node-v5l4l.175c91340ebd3a32
/registry/events/kube-system/calico-node-v5l4l.175c913552ce2529
/registry/events/kube-system/calico-node-v5l4l.175c9137ae74bfb7
/registry/events/kube-system/calico-node-v5l4l.175c9139fb6a6775
/registry/events/kube-system/calico-node-vz6mw.175c912994116322
/registry/events/kube-system/calico-node-vz6mw.175c9129c1847542
/registry/events/kube-system/calico-node-vz6mw.175c9129c9b921b8
/registry/events/kube-system/calico-node-vz6mw.175c9129d32b1e9a
/registry/events/kube-system/calico-node-vz6mw.175c912e4716ac4d
/registry/events/kube-system/calico-node-vz6mw.175c91332dd3c8fd
/registry/events/kube-system/calico-node-vz6mw.175c91332eb6c8e7
/registry/events/kube-system/calico-node-vz6mw.175c9133325c3aab
/registry/events/kube-system/calico-node-vz6mw.175c91336a0dbab8
/registry/events/kube-system/calico-node-vz6mw.175c91336aef8f16
/registry/events/kube-system/calico-node-vz6mw.175c91336f07d25a
/registry/events/kube-system/calico-node-vz6mw.175c9133ac17d3e1
/registry/events/kube-system/calico-node-vz6mw.175c9133e828a012
/registry/events/kube-system/calico-node-vz6mw.175c91352957b958
/registry/events/kube-system/calico-node-vz6mw.175c91377ebd588c
/registry/events/kube-system/calico-node-vz6mw.175c9139d31c9d40
/registry/poddisruptionbudgets/kube-system/calico-kube-controllers
/registry/pods/kube-system/calico-kube-controllers-5d45cfb97b-x6rft
/registry/pods/kube-system/calico-node-ftj5z
/registry/pods/kube-system/calico-node-hdbkv
/registry/pods/kube-system/calico-node-jjv5z
/registry/pods/kube-system/calico-node-l7psx
/registry/pods/kube-system/calico-node-v5l4l
/registry/pods/kube-system/calico-node-vz6mw
/registry/replicasets/kube-system/calico-kube-controllers-5d45cfb97b
/registry/replicasets/kube-system/calico-kube-controllers-7b66574b5
/registry/secrets/kube-system/calico-etcd-secrets
/registry/serviceaccounts/kube-system/calico-kube-controllers
/registry/serviceaccounts/kube-system/calico-node

Change the grep pattern to match whatever content you need to inspect.

(2) Create: put

root@k8setcd31:~# ETCDCTL_API=3 etcdctl  put /name "lalalal"
OK
root@k8setcd31:~# 

(3) Delete: del

root@k8setcd31:~# ETCDCTL_API=3 etcdctl  put /name "lalalal"
OK
root@k8setcd31:~# ETCDCTL_API=3 etcdctl  get /name  
/name
lalalal
root@k8setcd31:~# ETCDCTL_API=3 etcdctl  del /name  
1
root@k8setcd31:~# ETCDCTL_API=3 etcdctl  get /name  


(4) Update

Updating a key is simply overwriting it with a second put.

root@k8setcd31:~# ETCDCTL_API=3 etcdctl  get /name  
/name
liu
root@k8setcd31:~# ETCDCTL_API=3 etcdctl  put /name "wang"
OK
root@k8setcd31:~# ETCDCTL_API=3 etcdctl  get /name  
/name
wang
root@k8setcd31:~# 

2. CoreDNS

Kubernetes creates DNS records for Services and Pods, so you can access a Service through a consistent DNS name instead of an IP address.

Query scoping:

A DNS query may return different results depending on the namespace of the Pod issuing it. A query without a namespace qualifier is resolved within the Pod's own namespace; to reach Pods/Services in another namespace, include that namespace in the DNS query.

The official scaling recommendations for CoreDNS: https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md

1. Optimization

In large Kubernetes clusters, CoreDNS memory usage is driven mainly by the number of Pods and Services in the cluster. Other factors include the size of the populated DNS answer cache and the query rate (QPS) received per CoreDNS instance.

CoreDNS memory formula: MB required (default settings) = (Pods + Services) / 1000 + 54

  1. Plugin tuning: the autopath plugin shortens the lookup path and improves query performance for names outside the cluster, at the cost of extra load on the Kubernetes API server, because CoreDNS must watch Pods for changes. Memory formula with autopath: MB required = (Pods + Services) / 250 + 56.
  2. Capacity: a 1 CPU / 2 GiB CoreDNS instance can generally serve roughly 1000-1500 Pods; beyond that, increase the replica count via Deployment.spec.replicas (see the sketch after this list).
  3. DNS cache: if DNS caching is enabled, remember to set the LOCAL_DNS_CACHE value to the CoreDNS ClusterIP.
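
A quick sketch of applying the formula and scaling the Deployment; the counts and the replica number here are examples only:

# count pods and services, then apply: MB = (Pods + Services) / 1000 + 54
pods=$(kubectl get pods -A --no-headers | wc -l)
svcs=$(kubectl get svc -A --no-headers | wc -l)
echo "estimated memory per CoreDNS instance: $(( (pods + svcs) / 1000 + 54 )) MB"

# add replicas when one instance is no longer enough (example: 3 replicas)
kubectl -n kube-system scale deployment coredns --replicas=3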

2. Resolution flow:

Inside the cluster:

Same namespace
  1. When a service is defined in YAML with a namespace and service_name, the cluster's default DNS configuration is written into the pod (its /etc/resolv.conf) at startup; that is the Kubernetes in-cluster default domain resolution.
  2. Once the service is running, a pod can access a domain in its own namespace without specifying the namespace. Resolution goes through CoreDNS and is 1:1: each service_name maps to a cluster IP, the mapping is stored in etcd, CoreDNS resolves against that data, and the answer is returned via the kube-apiserver to the pod, which then accesses the resolved address.
Different namespaces
  1. As above, the service is defined with a namespace and service_name, and the default in-cluster DNS configuration is injected into the pod at startup.
  2. Once the service is running, a pod accessing a domain in another namespace must specify that namespace. CoreDNS resolves the name against the data in etcd, and the answer is returned via the kube-apiserver to the pod, which then accesses the resolved address.

All in-cluster domain lookups are forwarded to the CoreDNS service; its address is normally the second IP of the service address range. Every Service is created with a service type, such as ClusterIP or NodePort.
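
A hedged illustration of the namespace rules, using the myapp-nginx-service defined later in this article; the pod and service names are examples, and the image is assumed to ship nslookup:

# same namespace: no qualifier needed
kubectl -n myapp exec -it myapp-nginx-deployment-7454547d57-bblgk -- nslookup myapp-nginx-service

# another namespace: qualify with the namespace (or the full cluster suffix)
kubectl -n myapp exec -it myapp-nginx-deployment-7454547d57-bblgk -- nslookup kube-dns.kube-system.svc.cluster.local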

Access to domains outside the cluster

  1. A pod (e.g. nginx) needs to reach Baidu and pings baidu.com.
  2. The request is first captured by kube-dns (the CoreDNS service).
  3. The query is forwarded to the CoreDNS deployment and load-balanced to one CoreDNS pod.
  4. That CoreDNS pod goes through the api-server to the Kubernetes cluster services.
  5. The cluster fetches the resolution result from the etcd database.
  6. etcd returns the result along the same path, and the nginx pod eventually gets the IP address for baidu.com.
  7. The result is kept in the DNS cache, so the next lookup is faster.

Summary:

Kubernetes resolves names through CoreDNS, but the resolution data is stored in etcd and returned to the pod via the kube-apiserver.

3. Controllers

RC-ReplicationController

Controls the replica count, making sure a specific number of pods is running at all times, i.e. that a certain number of pods is always available.

Official documentation: https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicationcontroller/

According to the documentation, an RC determines the pod count by matching labels, as in this YAML:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: www.ghostxin.online/application/nginx:1.22 
        ports:
        - containerPort: 80

With the replica count set to three, create it and inspect the result:

NAMESPACE              NAME                                         READY   STATUS    RESTARTS      AGE     LABELS
default                nginx-72z7n                                  1/1     Running   0             44s     app=nginx
default                nginx-p4bxx                                  1/1     Running   0             44s     app=nginx
default                nginx-qs89n                                  1/1     Running   0             44s     app=nginx
kube-system            calico-kube-controllers-5d45cfb97b-x6rft     1/1     Running   1 (51m ago)   4d17h   k8s-app=calico-kube-controllers,pod-template-hash=5d45cfb97b
kube-system            calico-node-ftj5z                            1/1     Running   1 (51m ago)   4d16h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            calico-node-hdbkv                            1/1     Running   1 (51m ago)   4d16h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            calico-node-jjv5z                            1/1     Running   1 (51m ago)   4d16h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            calico-node-l7psx                            1/1     Running   1 (51m ago)   4d15h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            calico-node-v5l4l                            1/1     Running   1 (51m ago)   4d15h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            calico-node-vz6mw                            1/1     Running   1 (51m ago)   4d15h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            coredns-566564f9fd-2qxnv                     1/1     Running   1 (51m ago)   4d16h   k8s-app=kube-dns,pod-template-hash=566564f9fd
kube-system            coredns-566564f9fd-9qxmp                     1/1     Running   1 (51m ago)   4d16h   k8s-app=kube-dns,pod-template-hash=566564f9fd
kube-system            snapshot-controller-0                        1/1     Running   1 (51m ago)   4d11h   app=snapshot-controller,controller-revision-hash=snapshot-controller-7d87fc7c78,statefulset.kubernetes.io/pod-name=snapshot-controller-0
kubernetes-dashboard   dashboard-metrics-scraper-5fdf8ff74f-h8h8x   1/1     Running   1 (51m ago)   4d12h   k8s-app=dashboard-metrics-scraper,pod-template-hash=5fdf8ff74f
kubernetes-dashboard   kubernetes-dashboard-56cdd85c55-wkb7d        1/1     Running   1 (51m ago)   4d12h   k8s-app=kubernetes-dashboard,pod-template-hash=56cdd85c55
kuboard                kuboard-v3-55b8c7dbd7-lmmnl                  1/1     Running   1 (51m ago)   4d12h   k8s.kuboard.cn/name=kuboard-v3,pod-template-hash=55b8c7dbd7


root@k8s-master-01-11:/data/k8s_yaml/app# kubectl describe replicationcontrollers/nginx 
Name:         nginx
Namespace:    default
Selector:     app=nginx
Labels:       app=nginx
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        www.ghostxin.online/application/nginx:1.22
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  8m8s  replication-controller  Created pod: nginx-qs89n
  Normal  SuccessfulCreate  8m8s  replication-controller  Created pod: nginx-72z7n
  Normal  SuccessfulCreate  8m8s  replication-controller  Created pod: nginx-p4bxx

The RC tracks the number of live pods by exact label matching (app=nginx).

RS-ReplicaSet

A ReplicaSet is defined by a set of fields that identify the pods it manages. Like RC it matches pods by label, but it additionally supports set-based selectors (matchExpressions with In / NotIn operators).

yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: guestbook  
    tier: nginx  #label to match
spec:
  # adjust the replica count to your actual needs
  replicas: 3
  selector:
    matchLabels:
      tier: nginx
  template:
    metadata:
      labels:
        tier: nginx
    spec:
      containers:
      - name: nginx
        image:  www.ghostxin.online/application/nginx:1.22 
        
        
        

Check the ReplicaSet:

root@k8s-master-01-11:/data/k8s_yaml/app# kubectl  get rs
NAME    DESIRED   CURRENT   READY   AGE
nginx   3         3         3       9s
root@k8s-master-01-11:/data/k8s_yaml/app# 


root@k8s-master-01-11:/data/k8s_yaml/app# kubectl describe rs/nginx
Name:         nginx
Namespace:    default
Selector:     tier=nginx
Labels:       app=guestbook
              tier=nginx
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  tier=nginx
  Containers:
   nginx:
    Image:        www.ghostxin.online/application/nginx:1.22
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  72s   replicaset-controller  Created pod: nginx-n7bz4
  Normal  SuccessfulCreate  72s   replicaset-controller  Created pod: nginx-fcrsz
  Normal  SuccessfulCreate  72s   replicaset-controller  Created pod: nginx-hjhrd


Here you can see the labels tier=nginx and app=guestbook; the ReplicaSet matches pods exactly on its selector (tier=nginx).

deployment

The stateless-workload replica controller, with rolling update and rollback support; the third generation of pod replica controllers, mainly used to run stateless replicas.

YAML file

root@k8s-master-01-11:/data/k8s_yaml/app# cat nginx.yaml 
kind: Deployment #kind: Deployment replica controller
apiVersion: apps/v1 #API version
metadata: #metadata
  labels: #labels
    app: myapp-nginx-deployment-label #app name
  name: myapp-nginx-deployment #deployment name
  namespace: myapp #namespace name
spec: #pod spec
  replicas: 1 #replica count
  selector: #selector
    matchLabels: #match pods by label
      app: myapp-nginx-selector #label name
  template: #pod template
    metadata: #metadata
      labels: #labels
        app: myapp-nginx-selector #label name
    spec: #container spec
      containers:  #container runtime configuration
      - name: myapp-nginx-container #name
        image: www.ghostxin.online/application/nginx:latest #image name
        imagePullPolicy: Always #image pull policy: always pull
        ports: #container port configuration
        - containerPort: 80 #port inside the container
          protocol: TCP #protocol
          name: http #name
        - containerPort: 443 #container port
          protocol: TCP #protocol
          name: https #name


---
kind: Service #resource type: Service
apiVersion: v1 #version
metadata: #metadata
  labels: #labels
    app: myapp-nginx-service-label #app name
  name: myapp-nginx-service #service name
  namespace: myapp #namespace name
spec: #spec
  type: NodePort #service type
  ports: #ports
  - name: http 
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30443
  selector: #selector matching the pods
    app: myapp-nginx-selector #app label to match

Replica-controller-related commands:

#list all pods with their labels
root@k8s-master-01-11:/data/k8s_yaml/app# kubectl  get  pod -A  --show-labels=true  
NAMESPACE              NAME                                         READY   STATUS    RESTARTS      AGE     LABELS
kube-system            calico-kube-controllers-5d45cfb97b-x6rft     1/1     Running   1 (98m ago)   4d17h   k8s-app=calico-kube-controllers,pod-template-hash=5d45cfb97b
kube-system            calico-node-ftj5z                            1/1     Running   1 (99m ago)   4d16h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            calico-node-hdbkv                            1/1     Running   1 (98m ago)   4d16h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            calico-node-jjv5z                            1/1     Running   1 (99m ago)   4d16h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            calico-node-l7psx                            1/1     Running   1 (99m ago)   4d16h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            calico-node-v5l4l                            1/1     Running   1 (99m ago)   4d16h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            calico-node-vz6mw                            1/1     Running   1 (99m ago)   4d16h   controller-revision-hash=64466497,k8s-app=calico-node,pod-template-generation=6
kube-system            coredns-566564f9fd-2qxnv                     1/1     Running   1 (99m ago)   4d16h   k8s-app=kube-dns,pod-template-hash=566564f9fd
kube-system            coredns-566564f9fd-9qxmp                     1/1     Running   1 (98m ago)   4d16h   k8s-app=kube-dns,pod-template-hash=566564f9fd
kube-system            snapshot-controller-0                        1/1     Running   1 (99m ago)   4d12h   app=snapshot-controller,controller-revision-hash=snapshot-controller-7d87fc7c78,statefulset.kubernetes.io/pod-name=snapshot-controller-0
kubernetes-dashboard   dashboard-metrics-scraper-5fdf8ff74f-h8h8x   1/1     Running   1 (99m ago)   4d13h   k8s-app=dashboard-metrics-scraper,pod-template-hash=5fdf8ff74f
kubernetes-dashboard   kubernetes-dashboard-56cdd85c55-wkb7d        1/1     Running   1 (99m ago)   4d13h   k8s-app=kubernetes-dashboard,pod-template-hash=56cdd85c55
kuboard                kuboard-v3-55b8c7dbd7-lmmnl                  1/1     Running   1 (98m ago)   4d13h   k8s.kuboard.cn/name=kuboard-v3,pod-template-hash=55b8c7dbd7
myapp                  myapp-nginx-deployment-7454547d57-9vvvm      1/1     Running   0             2m31s   app=myapp-nginx-selector,pod-template-hash=7454547d57
myapp                  myapp-nginx-deployment-7454547d57-np5s5      1/1     Running   0             7s      app=myapp-nginx-selector,pod-template-hash=7454547d57
myapp                  myapp-nginx-deployment-7454547d57-qtp4m      1/1     Running   0             7s      app=myapp-nginx-selector,pod-template-hash=7454547d57


#list all services
root@k8s-master-01-11:/data/k8s_yaml/app# kubectl  get svc -o wide  -A 
NAMESPACE              NAME                        TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                                        AGE     SELECTOR
default                kubernetes                  ClusterIP   192.168.0.1       <none>        443/TCP                                        4d18h   <none>
kube-system            kube-dns                    ClusterIP   192.168.0.2       <none>        53/UDP,53/TCP,9153/TCP                         4d16h   k8s-app=kube-dns
kube-system            kubelet                     ClusterIP   None              <none>        10250/TCP,10255/TCP,4194/TCP                   4d11h   <none>
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   192.168.62.93     <none>        8000/TCP                                       4d13h   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard   kubernetes-dashboard        NodePort    192.168.156.107   <none>        443:30000/TCP                                  4d13h   k8s-app=kubernetes-dashboard
kuboard                kuboard-v3                  NodePort    192.168.204.219   <none>        80:30888/TCP,10081:30081/TCP,10081:30081/UDP   4d13h   k8s.kuboard.cn/name=kuboard-v3
myapp                  myapp-nginx-service         NodePort    192.168.167.254   <none>        80:30080/TCP,443:30443/TCP                     2m42s   app=myapp-nginx-selector

#list all deployments
root@k8s-master-01-11:/data/k8s_yaml/app# kubectl  get deployment -A 
NAMESPACE              NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system            calico-kube-controllers     1/1     1            1           4d18h
kube-system            coredns                     2/2     2            2           4d16h
kubernetes-dashboard   dashboard-metrics-scraper   1/1     1            1           4d13h
kubernetes-dashboard   kubernetes-dashboard        1/1     1            1           4d13h
kuboard                kuboard-v3                  1/1     1            1           4d13h
myapp                  myapp-nginx-deployment      3/3     3            3           3m29s



#inspect the deployment configuration; everything we configured earlier is visible here
kubectl  edit deployment -n myapp myapp-nginx-deployment 

root@k8s-master-01-11:/data/k8s_yaml/app# kubectl  edit deployment -n myapp myapp-nginx-deployment
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"myapp-nginx-deployment-label"},"name":"myapp-nginx-deployment","namespace":"myapp"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"myapp-nginx-selector"}},"template":{"metadata":{"labels":{"app":"myapp-nginx-selector"}},"spec":{"containers":[{"image":"www.ghostxin.online/application/nginx:latest","imagePullPolicy":"Always","name":"myapp-nginx-container","ports":[{"containerPort":80,"name":"http","protocol":"TCP"},{"containerPort":443,"name":"https","protocol":"TCP"}]}]}}}}
  creationTimestamp: "2023-04-26T02:20:56Z"
  generation: 2
  labels:
    app: myapp-nginx-deployment-label
  name: myapp-nginx-deployment
  namespace: myapp
  resourceVersion: "58243"
  uid: 12128e86-d8f2-463e-ae66-657591a32c68
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myapp-nginx-selector



  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myapp-nginx-selector
    spec:
      containers:
      - image: www.ghostxin.online/application/nginx:latest
        imagePullPolicy: Always
        name: myapp-nginx-container
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2023-04-26T02:20:56Z"
    lastUpdateTime: "2023-04-26T02:20:57Z"
    message: ReplicaSet "myapp-nginx-deployment-7454547d57" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2023-04-26T02:23:23Z"
    lastUpdateTime: "2023-04-26T02:23:23Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 2
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3   

4. Service

1. Purpose

When a pod is created it is assigned an IP from the pod network's address pool, and that pod IP changes whenever the pod is rebuilt. Service exists to solve this: pod restarts or rebuilds do not interrupt access, because the Service is bound to its pods through labels and endpoints.

2. How it works

Binding labels to endpoints relies on kube-proxy watching the API server. Whenever a Service resource changes (i.e. kube-apiserver modifies the Service), kube-proxy is triggered to update its load-balancing rules toward the matching pods, so traffic to the Service keeps being scheduled correctly.

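The label-to-endpoint binding can be observed directly; for example, with the myapp service used later in this article:

# the Service selector, and the endpoints kube-proxy programs its rules toward
kubectl -n myapp get svc myapp-nginx-service -o wide
kubectl -n myapp get endpoints myapp-nginx-service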

3. Types

1. ClusterIP

The cluster-IP type, intended for use inside the cluster; it can only be reached from within the cluster, not from outside.

YAML example:

root@k8s-master-01-11:/data/k8s_yaml/app# cat nginx_cluster_ip.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myapp-nginx-deployment-label
  name: myapp-nginx-deployment
  namespace: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-nginx-selector
  template:
    metadata:
      labels:
        app: myapp-nginx-selector
    spec:
      containers:
      - name: myapp-nginx-container
        image: www.ghostxin.online/application/nginx:latest
        imagePullPolicy: Always

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myapp-nginx-service-label
  name: myapp-nginx-service
  namespace: myapp
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: myapp-nginx-selector 

#check the basic cluster configuration

#service summary
root@k8s-master-01-11:/data/k8s_yaml/app# kubectl  get svc -A 
NAMESPACE              NAME                        TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)                                        AGE
default                kubernetes                  ClusterIP   192.168.0.1       <none>        443/TCP                                        4d19h
kube-system            kube-dns                    ClusterIP   192.168.0.2       <none>        53/UDP,53/TCP,9153/TCP                         4d17h
kube-system            kubelet                     ClusterIP   None              <none>        10250/TCP,10255/TCP,4194/TCP                   4d12h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   192.168.62.93     <none>        8000/TCP                                       4d13h
kubernetes-dashboard   kubernetes-dashboard        NodePort    192.168.156.107   <none>        443:30000/TCP                                  4d13h
kuboard                kuboard-v3                  NodePort    192.168.204.219   <none>        80:30888/TCP,10081:30081/TCP,10081:30081/UDP   4d13h
myapp                  myapp-nginx-service         ClusterIP   192.168.52.1      <none>        80/TCP,443/TCP                                 25s


#list all pods
root@k8s-master-01-11:/data/k8s_yaml/app# kubectl  get pod -A -o wide 
NAMESPACE              NAME                                         READY   STATUS    RESTARTS       AGE     IP               NODE               NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-5d45cfb97b-x6rft     1/1     Running   1 (131m ago)   4d18h   10.0.0.22        k8s-worker-02-22   <none>           <none>
kube-system            calico-node-ftj5z                            1/1     Running   1 (131m ago)   4d17h   10.0.0.13        10.0.0.13          <none>           <none>
kube-system            calico-node-hdbkv                            1/1     Running   1 (131m ago)   4d17h   10.0.0.22        k8s-worker-02-22   <none>           <none>
kube-system            calico-node-jjv5z                            1/1     Running   1 (131m ago)   4d17h   10.0.0.12        k8s-master-02-12   <none>           <none>
kube-system            calico-node-l7psx                            1/1     Running   1 (131m ago)   4d17h   10.0.0.11        k8s-master-01-11   <none>           <none>
kube-system            calico-node-v5l4l                            1/1     Running   1 (131m ago)   4d17h   10.0.0.23        10.0.0.23          <none>           <none>
kube-system            calico-node-vz6mw                            1/1     Running   1 (131m ago)   4d17h   10.0.0.21        k8s-worker-01-21   <none>           <none>
kube-system            coredns-566564f9fd-2qxnv                     1/1     Running   1 (131m ago)   4d17h   172.16.76.83     k8s-worker-01-21   <none>           <none>
kube-system            coredns-566564f9fd-9qxmp                     1/1     Running   1 (131m ago)   4d17h   172.16.221.81    k8s-worker-02-22   <none>           <none>
kube-system            snapshot-controller-0                        1/1     Running   1 (131m ago)   4d12h   172.16.76.80     k8s-worker-01-21   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-5fdf8ff74f-h8h8x   1/1     Running   1 (131m ago)   4d13h   172.16.124.84    10.0.0.23          <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-56cdd85c55-wkb7d        1/1     Running   1 (131m ago)   4d13h   172.16.124.89    10.0.0.23          <none>           <none>
kuboard                kuboard-v3-55b8c7dbd7-lmmnl                  1/1     Running   1 (131m ago)   4d13h   172.16.221.86    k8s-worker-02-22   <none>           <none>
myapp                  myapp-nginx-deployment-66984fdbd6-6cqqq      1/1     Running   0              53s     172.16.221.98    k8s-worker-02-22   <none>           <none>
myapp                  myapp-nginx-deployment-66984fdbd6-764cg      1/1     Running   0              53s     172.16.124.104   10.0.0.23          <none>           <none>
myapp                  myapp-nginx-deployment-66984fdbd6-th578      1/1     Running   0              53s     172.16.76.99     k8s-worker-01-21   <none>           <none>
root@k8s-master-01-11:/data/k8s_yaml/app# 



#show pod details
root@k8s-master-01-11:/data/k8s_yaml/app# kubectl  describe pod -n myapp                  myapp-nginx-deployment-66984fdbd6-th578   
Name:             myapp-nginx-deployment-66984fdbd6-th578
Namespace:        myapp
Priority:         0
Service Account:  default
Node:             k8s-worker-01-21/10.0.0.21
Start Time:       Wed, 26 Apr 2023 02:55:08 +0000
Labels:           app=myapp-nginx-selector
                  pod-template-hash=66984fdbd6
Annotations:      <none>
Status:           Running
IP:               172.16.76.99
IPs:
  IP:           172.16.76.99
Controlled By:  ReplicaSet/myapp-nginx-deployment-66984fdbd6
Containers:
  myapp-nginx-container:
    Container ID:   containerd://af9f329b75edca6c3f946fb234731c744dd54d2829ac13de2d9bee3ab26f15db
    Image:          www.ghostxin.online/application/nginx:latest
    Image ID:       www.ghostxin.online/application/nginx@sha256:2d7084857d5435dbb3468426444a790a256409885ab17c0d3272e8460e909d3c
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 26 Apr 2023 02:55:09 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sshmk (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-sshmk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3m8s  default-scheduler  Successfully assigned myapp/myapp-nginx-deployment-66984fdbd6-th578 to k8s-worker-01-21
  Normal  Pulling    3m8s  kubelet            Pulling image "www.ghostxin.online/application/nginx:latest"
  Normal  Pulled     3m8s  kubelet            Successfully pulled image "www.ghostxin.online/application/nginx:latest" in 75.692186ms (75.702727ms including waiting)
  Normal  Created    3m8s  kubelet            Created container myapp-nginx-container
  Normal  Started    3m8s  kubelet            Started container myapp-nginx-container

ClusterIP access from within the cluster

Works with no problems.
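
In place of the original screenshot, a minimal in-cluster test; the ClusterIP below is the one shown above, and curlimages/curl is just a convenient throwaway client image:

# request the service by ClusterIP and by DNS name from a temporary pod
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -sI http://192.168.52.1
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -sI http://myapp-nginx-service.myapp.svc.cluster.local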


2. NodePort

A Service type that lets machines outside the cluster reach services running inside the cluster.

YAML

root@k8s-master-01-11:/data/k8s_yaml/app# cat nginx.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myapp-nginx-deployment-label
  name: myapp-nginx-deployment
  namespace: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-nginx-selector
  template:
    metadata:
      labels:
        app: myapp-nginx-selector
    spec:
      containers:
      - name: myapp-nginx-container
        image: www.ghostxin.online/application/nginx:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https


---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myapp-nginx-service-label
  name: myapp-nginx-service
  namespace: myapp
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30443
  selector:
    app: myapp-nginx-selector
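
A quick way to confirm the NodePort Service works, assuming the manifest above has been applied; 10.0.0.21 is one of the worker nodes in this cluster, but any node IP should answer on the same port:

#check that the Service was assigned the expected node ports
kubectl get svc -n myapp myapp-nginx-service -o wide

#hit the NodePort from outside the cluster
curl -I http://10.0.0.21:30080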

Access test

Browsing to any node IP on nodePort 30080 (or 30443 for HTTPS) returns the nginx page (screenshot omitted).

3.LoadBalancer

Used with a public-cloud load balancer (for example Alibaba Cloud SLB): the cloud provider provisions an external load balancer that forwards traffic, typically round-robin, to the node ports. A minimal sketch follows.
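
A minimal sketch of a LoadBalancer Service, assuming the cluster runs on a cloud provider (or has something like MetalLB) that can allocate the external address; the Service name below is hypothetical and the selector simply mirrors the nginx example above:

kubectl apply -f - <<'EOF'
kind: Service
apiVersion: v1
metadata:
  name: myapp-nginx-lb          # hypothetical name, for illustration only
  namespace: myapp
spec:
  type: LoadBalancer            # the cloud controller provisions the SLB/ELB and fills in EXTERNAL-IP
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: myapp-nginx-selector
EOF

#EXTERNAL-IP stays <pending> until the cloud provider assigns an address
kubectl get svc -n myapp myapp-nginx-lb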

4.Access flow for NodePort

(flow diagram omitted)

  1. k8s exposes the service externally through a NodePort Service, and users reach it over the public network.
  2. The user visits the site's domain name; DNS resolves to the company's public NAT address, which forwards the request to the firewall, and the firewall determines that a DMZ server provides the service.
  3. The firewall forwards to the VIP of the haproxy load balancer, which is configured to forward to the k8s master servers (masters run the control-plane components, workers run the pods).
  4. The request arrives on the exposed NodePort 30080; the Service matches pods by label, and kube-proxy schedules the request to one of them.
  5. With multiple replicas, kube-proxy running in IPVS mode balances across them; the available IPVS schedulers include rr, lc, dh, sh, sed and nq (a quick check is sketched below).
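
A quick, hedged way to confirm the proxy mode and see how IPVS spreads traffic across the replicas; it assumes kube-proxy runs in IPVS mode and that ipvsadm is installed on the node (the ConfigMap shown applies to kubeadm-style installs, binary installs keep the mode in the kube-proxy config file instead):

#confirm the kube-proxy mode (kubeadm-style clusters keep it in a ConfigMap)
kubectl get configmap kube-proxy -n kube-system -o yaml | grep -i "mode:"

#on any node: list the IPVS virtual server for the NodePort and its real servers (the pod IPs)
ipvsadm -Ln | grep -A 3 ":30080"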

5.Volume - storage volumes

Official docs: https://kubernetes.io/zh-cn/docs/concepts/storage/persistent-volumes/

A Kubernetes volume decouples the data a container needs from the container itself and stores it in a designated location.

Volumes fall into two families: local storage and network storage.

1.Local storage volumes

(1) emptyDir - local ephemeral storage

Ephemeral storage on the node: the volume is created when the pod is created and removed when the pod is deleted. A minimal sketch follows.
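
A minimal emptyDir sketch (the pod name is made up for illustration); the volume exists only for the pod's lifetime:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo            # hypothetical name, for illustration only
  namespace: myapp
spec:
  containers:
  - name: nginx
    image: www.ghostxin.online/application/nginx:latest
    volumeMounts:
    - name: cache
      mountPath: /cache          # anything written here disappears with the pod
  volumes:
  - name: cache
    emptyDir: {}
EOF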

(2) hostPath - local storage volume

Similar to mounting with docker -v: the data lives on the node and is not deleted when the pod is deleted. A minimal sketch follows.
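
A minimal hostPath sketch (illustrative names); the directory lives on whichever node the pod is scheduled to and survives pod deletion:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo            # hypothetical name, for illustration only
  namespace: myapp
spec:
  containers:
  - name: nginx
    image: www.ghostxin.online/application/nginx:latest
    volumeMounts:
    - name: host-data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: host-data
    hostPath:
      path: /data/hostpath       # created on the node if it does not exist
      type: DirectoryOrCreate
EOF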

Network storage volumes

(1) Network storage

Data is kept on shared network storage such as NFS or Ceph.

(2) Cloud storage

Data is kept on cloud products, for example Alibaba Cloud OSS.

2.Storage is further divided into static and dynamic provisioning

(1) Static provisioning

The administrator creates PVs and PVCs by hand: a PV wraps a piece of backend storage, and a PVC carves space out of it and hands it to a pod.

Note: a PVC request must not exceed the capacity of the PV it binds to.

(2) Dynamic provisioning

A StorageClass is created first; when a pod references a PVC, the PV is provisioned automatically and no manual PV configuration is needed.

3.PV and PVC

(1) Introduction

PV: PersistentVolume, a persistent volume.

PVC: PersistentVolumeClaim, a claim on a persistent volume; it reserves space on a statically provisioned PV, much like a lock.

(1) They decouple pods from storage, so storage can be managed and resized without modifying the pod.

(2) Unlike a plain NFS mount, a PVC can control the capacity it receives and its access permissions.

(3) PV and PVC have been supported since early Kubernetes releases (1.0).

Rough diagram of how a PV and a PVC bind (diagram omitted).

(2) PV parameters and usage

#use kubectl explain to inspect the individual fields of the PV spec
root@ubuntuharbor50:/opt/harbor/harbor# kubectl  explain   PersistentVolume.spec
KIND:     PersistentVolume
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     spec defines a specification of a persistent volume owned by the cluster.
     Provisioned by an administrator. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistent-volumes

     PersistentVolumeSpec is the specification of a persistent volume.

FIELDS:
   accessModes	<[]string> # access modes
     accessModes contains all ways the volume can be mounted. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes

   awsElasticBlockStore	<Object>
     awsElasticBlockStore represents an AWS Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

   azureDisk	<Object>
     azureDisk represents an Azure Data Disk mount on the host and bind mount to
     the pod.

   azureFile	<Object>
     azureFile represents an Azure File Service mount on the host and bind mount
     to the pod.

   capacity	<map[string]string> #大小
     capacity is the description of the persistent volume's resources and
     capacity. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity

   cephfs	<Object>
     cephFS represents a Ceph FS mount on the host that shares a pod's lifetime

   cinder	<Object>
     cinder represents a cinder volume attached and mounted on kubelets host
     machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

   claimRef	<Object>
     claimRef is part of a bi-directional binding between PersistentVolume and
     PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName
     is the authoritative bind between PV and PVC. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding

   csi	<Object>
     csi represents storage that is handled by an external CSI driver (Beta
     feature).

   fc	<Object>
     fc represents a Fibre Channel resource that is attached to a kubelet's host
     machine and then exposed to the pod.

   flexVolume	<Object>
     flexVolume represents a generic volume resource that is
     provisioned/attached using an exec based plugin.

   flocker	<Object>
     flocker represents a Flocker volume attached to a kubelet's host machine
     and exposed to the pod for its usage. This depends on the Flocker control
     service being running

   gcePersistentDisk	<Object>
     gcePersistentDisk represents a GCE Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. Provisioned by an
     admin. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

   glusterfs	<Object>
     glusterfs represents a Glusterfs volume that is attached to a host and
     exposed to the pod. Provisioned by an admin. More info:
     https://examples.k8s.io/volumes/glusterfs/README.md

   hostPath	<Object>
     hostPath represents a directory on the host. Provisioned by a developer or
     tester. This is useful for single-node development and testing only!
     On-host storage is not supported in any way and WILL NOT WORK in a
     multi-node cluster. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

   iscsi	<Object>
     iscsi represents an ISCSI Disk resource that is attached to a kubelet's
     host machine and then exposed to the pod. Provisioned by an admin.

   local	<Object>
     local represents directly-attached storage with node affinity

   mountOptions	<[]string> # mount options, for fine-grained control (e.g. read-only)
     mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not
     validated - mount will simply fail if one is invalid. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options

   nfs	<Object>
     nfs represents an NFS mount on the host. Provisioned by an admin. More
     info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

   nodeAffinity	<Object>
     nodeAffinity defines constraints that limit what nodes this volume can be
     accessed from. This field influences the scheduling of pods that use this
     volume.

   persistentVolumeReclaimPolicy	<string> #回收策略
     persistentVolumeReclaimPolicy defines what happens to a persistent volume
     when released from its claim. Valid options are Retain (default for
     manually created PersistentVolumes), Delete (default for dynamically
     provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be
     supported by the volume plugin underlying this PersistentVolume. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming

     Possible enum values:
     - `"Delete"` means the volume will be deleted from Kubernetes on release
     from its claim. The volume plugin must support Deletion.
     - `"Recycle"` means the volume will be recycled back into the pool of
     unbound persistent volumes on release from its claim. The volume plugin
     must support Recycling.
     - `"Retain"` means the volume will be left in its current phase (Released)
     for manual reclamation by the administrator. The default policy is Retain.

   photonPersistentDisk	<Object>
     photonPersistentDisk represents a PhotonController persistent disk attached
     and mounted on kubelets host machine

   portworxVolume	<Object>
     portworxVolume represents a portworx volume attached and mounted on
     kubelets host machine

   quobyte	<Object>
     quobyte represents a Quobyte mount on the host that shares a pod's lifetime

   rbd	<Object>
     rbd represents a Rados Block Device mount on the host that shares a pod's
     lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md

   scaleIO	<Object>
     scaleIO represents a ScaleIO persistent volume attached and mounted on
     Kubernetes nodes.

   storageClassName	<string>
     storageClassName is the name of StorageClass to which this persistent
     volume belongs. Empty value means that this volume does not belong to any
     StorageClass.

   storageos	<Object>
     storageOS represents a StorageOS volume that is attached to the kubelet's
     host machine and mounted into the pod More info:
     https://examples.k8s.io/volumes/storageos/README.md

   volumeMode	<string> #卷类型
     volumeMode defines if a volume is intended to be used with a formatted
     filesystem or to remain in raw block state. Value of Filesystem is implied
     when not included in spec.

   vsphereVolume	<Object>
     vsphereVolume represents a vSphere volume attached and mounted on kubelets
     host machine

root@ubuntuharbor50:/opt/harbor/harbor# 



Check the PV capacity field

root@ubuntuharbor50:/opt/harbor/harbor# kubectl  explain   PersistentVolume.spec.capacity 
KIND:     PersistentVolume
VERSION:  v1

FIELD:    capacity <map[string]string>

DESCRIPTION:
     capacity is the description of the persistent volume's resources and
     capacity. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity

     Quantity is a fixed-point representation of a number. It provides
     convenient marshaling/unmarshaling in JSON and YAML, in addition to
     String() and AsInt64() accessors.

     The serialization format is:

     ```<quantity> ::= <signedNumber><suffix>

     (Note that <suffix> may be empty, from the "" case in <decimalSI>.)

     <digit> ::= 0 | 1 | ... | 9 <digits> ::= <digit> | <digit><digits> <number>
     ::= <digits> | <digits>.<digits> | <digits>. | .<digits> <sign> ::= "+" |
     "-" <signedNumber> ::= <number> | <sign><number> <suffix> ::= <binarySI> |
     <decimalExponent> | <decimalSI> <binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei

     (International System of units; See:
     http://physics.nist.gov/cuu/Units/binary.html)

     <decimalSI> ::= m | "" | k | M | G | T | P | E

     (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)

     <decimalExponent> ::= "e" <signedNumber> | "E" <signedNumber> ```

     No matter which of the three exponent forms is used, no quantity may
     represent a number greater than 2^63-1 in magnitude, nor may it have more
     than 3 decimal places. Numbers larger or more precise will be capped or
     rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the
     future if we require larger or smaller quantities.

     When a Quantity is parsed from a string, it will remember the type of
     suffix it had, and will use the same type again when it is serialized.

     Before serializing, Quantity will be put in "canonical form". This means
     that Exponent/suffix will be adjusted up or down (with a corresponding
     increase or decrease in Mantissa) such that:

     - No precision is lost - No fractional digits will be emitted - The
     exponent (or suffix) is as large as possible.

     The sign will be omitted unless the number is negative.

     Examples:

     - 1.5 will be serialized as "1500m" - 1.5Gi will be serialized as "1536Mi"

     Note that the quantity will NEVER be internally represented by a floating
     point number. That is the whole point of this exercise.

     Non-canonical values will still parse as long as they are well formed, but
     will be re-emitted in their canonical form. (So always use canonical form,
     or don't diff.)

     This format is intended to make it difficult to use these numbers without
     writing some sort of special handling code in the hopes that that will
     cause implementors to also use a fixed point implementation.

Check the access modes (accessModes)

root@ubuntuharbor50:/opt/harbor/harbor# kubectl explain PersistentVolume.spec.accessModes
KIND:     PersistentVolume
VERSION:  v1

FIELD:    accessModes <[]string>

DESCRIPTION:
     accessModes contains all ways the volume can be mounted. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes

Check the reclaim policy

root@ubuntuharbor50:/opt/harbor/harbor# kubectl explain PersistentVolume.spec.persistentVolumeReclaimPolicy
KIND:     PersistentVolume
VERSION:  v1

FIELD:    persistentVolumeReclaimPolicy <string>

DESCRIPTION:
     persistentVolumeReclaimPolicy defines what happens to a persistent volume
     when released from its claim. Valid options are Retain (default for
     manually created PersistentVolumes), Delete (default for dynamically
     provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be
     supported by the volume plugin underlying this PersistentVolume. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming

     Possible enum values:
     - `"Delete"` means the volume will be deleted from Kubernetes on release #自动删除存储卷
     from its claim. The volume plugin must support Deletion.
     - `"Recycle"` means the volume will be recycled back into the pool of#空间回收,删除存储卷上的所有数据
     unbound persistent volumes on release from its claim. The volume plugin
     must support Recycling.
     - `"Retain"` means the volume will be left in its current phase (Released)#删除PVC后手动删除数据
     for manual reclamation by the administrator. The default policy is Retain.
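
As a hedged example of working with the reclaim policy, an existing PV can be switched to Retain with kubectl patch; the PV name below is the one created later in this article:

#change the reclaim policy of an existing PV to Retain
kubectl patch pv myapp-server-static-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

#verify
kubectl get pv myapp-server-static-pv -o jsonpath='{.spec.persistentVolumeReclaimPolicy}{"\n"}'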

Check volumeMode (the volume type)

root@ubuntuharbor50:/opt/harbor/harbor# kubectl explain PersistentVolume.spec.volumeMode
KIND:     PersistentVolume
VERSION:  v1

FIELD:    volumeMode <string>

DESCRIPTION:
     volumeMode defines if a volume is intended to be used with a formatted
     filesystem or to remain in raw block state. Value of Filesystem is implied

Check mountOptions, used for mount-level access control

root@ubuntuharbor50:/opt/harbor/harbor# kubectl explain PersistentVolume.spec.mountOptions
KIND:     PersistentVolume
VERSION:  v1

FIELD:    mountOptions <[]string>

DESCRIPTION:
     mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not
     validated - mount will simply fail if one is invalid. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options
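
For illustration, a hedged sketch of a PV that uses mountOptions to mount the NFS export read-only; the PV name is made up, while the server and path reuse the NFS export configured later in this article:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-readonly-pv          # hypothetical name, for illustration only
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadOnlyMany
  mountOptions:                  # passed straight to mount; an invalid option makes the mount fail
    - ro
    - noatime
  nfs:
    path: /data/testdata
    server: 10.0.0.50
EOF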

(3) PVC parameters and usage

#use kubectl explain to inspect the individual PVC fields
root@ubuntuharbor50:/opt/harbor/harbor# kubectl explain PersistentVolumeClaim.spec
KIND:     PersistentVolumeClaim
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     spec defines the desired characteristics of a volume requested by a pod
     author. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

     PersistentVolumeClaimSpec describes the common attributes of storage
     devices and allows a Source for provider-specific attributes

FIELDS:
   accessModes	<[]string>
     accessModes contains the desired access modes the volume should have. More
     info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1

   dataSource	<Object>
     dataSource field can be used to specify either: * An existing
     VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An
     existing PVC (PersistentVolumeClaim) If the provisioner or an external
     controller can support the specified data source, it will create a new
     volume based on the contents of the specified data source. When the
     AnyVolumeDataSource feature gate is enabled, dataSource contents will be
     copied to dataSourceRef, and dataSourceRef contents will be copied to
     dataSource when dataSourceRef.namespace is not specified. If the namespace
     is specified, then dataSourceRef will not be copied to dataSource.

   dataSourceRef	<Object>
     dataSourceRef specifies the object from which to populate the volume with
     data, if a non-empty volume is desired. This may be any object from a
     non-empty API group (non core object) or a PersistentVolumeClaim object.
     When this field is specified, volume binding will only succeed if the type
     of the specified object matches some installed volume populator or dynamic
     provisioner. This field will replace the functionality of the dataSource
     field and as such if both fields are non-empty, they must have the same
     value. For backwards compatibility, when namespace isn't specified in
     dataSourceRef, both fields (dataSource and dataSourceRef) will be set to
     the same value automatically if one of them is empty and the other is
     non-empty. When namespace is specified in dataSourceRef, dataSource isn't
     set to the same value and must be empty. There are three important
     differences between dataSource and dataSourceRef: * While dataSource only
     allows two specific types of objects, dataSourceRef allows any non-core
     object, as well as PersistentVolumeClaim objects.
     * While dataSource ignores disallowed values (dropping them), dataSourceRef
     preserves all values, and generates an error if a disallowed value is
     specified.
     * While dataSource only allows local objects, dataSourceRef allows objects
     in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource
     feature gate to be enabled. (Alpha) Using the namespace field of
     dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to
     be enabled.

   resources	<Object>
     resources represents the minimum resources the volume should have. If
     RecoverVolumeExpansionFailure feature is enabled users are allowed to
     specify resource requirements that are lower than previous value but must
     still be higher than capacity recorded in the status field of the claim.
     More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources

   selector	<Object>
     selector is a label query over volumes to consider for binding.

   storageClassName	<string>
     storageClassName is the name of the StorageClass required by the claim.
     More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1

   volumeMode	<string>
     volumeMode defines what type of volume is required by the claim. Value of
     Filesystem is implied when not included in claim spec.

   volumeName	<string>
     volumeName is the binding reference to the PersistentVolume backing this
     claim.

Check the PVC accessModes

root@ubuntuharbor50:/opt/harbor/harbor# kubectl explain PersistentVolumeClaim.spec.accessModes
KIND:     PersistentVolumeClaim
VERSION:  v1

FIELD:    accessModes <[]string>

DESCRIPTION:
     accessModes contains the desired access modes the volume should have. More
     info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1

Check the PVC resources (requested size)

root@ubuntuharbor50:/opt/harbor/harbor# kubectl explain PersistentVolumeClaim.spec.resources
KIND:     PersistentVolumeClaim
VERSION:  v1

RESOURCE: resources <Object>

DESCRIPTION:
     resources represents the minimum resources the volume should have. If
     RecoverVolumeExpansionFailure feature is enabled users are allowed to
     specify resource requirements that are lower than previous value but must
     still be higher than capacity recorded in the status field of the claim.
     More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources

     ResourceRequirements describes the compute resource requirements.

FIELDS:
   claims	<[]Object>
     Claims lists the names of resources, defined in spec.resourceClaims, that
     are used by this container.

     This is an alpha field and requires enabling the DynamicResourceAllocation
     feature gate.

     This field is immutable. It can only be set for containers.

   limits	<map[string]string> #限制大小
     Limits describes the maximum amount of compute resources allowed. More
     info:
     https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

   requests	<map[string]string>#请求大小
     Requests describes the minimum amount of compute resources required. If
     Requests is omitted for a container, it defaults to Limits if that is
     explicitly specified, otherwise to an implementation-defined value. More
     info:
     https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

Check the label selector used to match a PV

root@ubuntuharbor50:/opt/harbor/harbor# kubectl explain PersistentVolumeClaim.spec.selector
KIND:     PersistentVolumeClaim
VERSION:  v1

RESOURCE: selector <Object>

DESCRIPTION:
     selector is a label query over volumes to consider for binding.

     A label selector is a label query over a set of resources. The result of
     matchLabels and matchExpressions are ANDed. An empty label selector matches
     all objects. A null label selector matches no objects.

FIELDS:
   matchExpressions	<[]Object> #匹配标签与运算,匹配多个标签
     matchExpressions is a list of label selector requirements. The requirements
     are ANDed.

   matchLabels	<map[string]string> # match labels
     matchLabels is a map of {key,value} pairs. A single {key,value} in the
     matchLabels map is equivalent to an element of matchExpressions, whose key
     field is "key", the operator is "In", and the values array contains only
     "value". The requirements are ANDed.

Check volumeName (the name of the bound PV)

root@ubuntuharbor50:/opt/harbor/harbor# kubectl explain PersistentVolumeClaim.spec.volumeName
KIND:     PersistentVolumeClaim
VERSION:  v1

FIELD:    volumeName <string>

DESCRIPTION:
     volumeName is the binding reference to the PersistentVolume backing this
     claim.

Check volumeMode (the volume type)

root@ubuntuharbor50:/opt/harbor/harbor# kubectl explain PersistentVolumeClaim.spec.volumeMode
KIND:     PersistentVolumeClaim
VERSION:  v1

FIELD:    volumeMode <string>

DESCRIPTION:
     volumeMode defines what type of volume is required by the claim. Value of
     Filesystem is implied when not included in claim spec.

4.Hands-on practice

(1) NFS-based static storage

Set up NFS

Install NFS

apt install nfs-kernel-server nfs-common -y

Configure the NFS exports

root@ubuntuharbor50:/opt/harbor/harbor# cat /etc/exports 
# /etc/exports: the access control list for filesystems which may be exported
#		to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
 /data   *(rw,no_root_squash)

Reload the NFS exports (without restarting the service)

root@ubuntuharbor50:/opt/harbor/harbor# exportfs  -av 
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "*:/data".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exporting *:/data
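
Before creating the PV it is worth confirming that the export is visible from a worker node; a quick check, assuming nfs-common is installed there:

#list the exports offered by the NFS server
showmount -e 10.0.0.50

#optional manual mount test from a worker node
mount -t nfs 10.0.0.50:/data /mnt && ls /mnt && umount /mnt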
Create the PV

PV YAML file

apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-server-static-pv
  namespace: myapp
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:                                                                                                                                       
    path: /data/testdata
    server: 10.0.0.50

Basic PV information

root@ubuntuharbor50:/data/k8s# kubectl  get pv -n myapp 
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
myapp-server-static-pv   10Gi       RWO            Retain           Available                                   8s

Detailed PV information

root@ubuntuharbor50:/data/k8s# kubectl  describe  pv -n myapp 
Name:            myapp-server-static-pv
Labels:          <none>
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    
Status:          Available
Claim:           
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:         
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.50
    Path:      /data/testdata
    ReadOnly:  false
Events:        <none>
Create the PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-server-static-pvc
  namespace: myapp
spec:
  volumeName: myapp-server-static-pv    #一定要匹配PV的名称                                                                                                                                       
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Check the PVC

root@ubuntuharbor50:/data/k8s# kubectl get   pvc -n myapp 
NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myapp-server-static-pvc   Bound    myapp-server-static-pv   10Gi       RWO                           13s

Detailed PVC information

root@ubuntuharbor50:/data/k8s# kubectl  describe  pvc -n myapp 
Name:          myapp-server-static-pvc
Namespace:     myapp
StorageClass:  
Status:        Bound
Volume:        myapp-server-static-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>
Deploy the application that uses the PVC
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp 
  name: myserver-myapp-deployment-name
  namespace: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: www.ghostxin.online/application/nginx@sha256:cf4ffe24f08a167176c84f2779c9fc35c2f7ce417b411978e384cbe63525b420
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myapp-server-static-pvc 

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myapp
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 38888
  selector:
    app: myserver-myapp-frontend

Check the pods

root@k8s-master-01-11:~# kubectl  get pod  -o wide -A 
NAMESPACE              NAME                                              READY   STATUS    RESTARTS            AGE   IP               NODE               NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-5d45cfb97b-x6rft          1/1     Running   2 (130m ago)        15d   10.0.0.22        k8s-worker-02-22   <none>           <none>
kube-system            calico-node-ftj5z                                 1/1     Running   2 (<invalid> ago)   15d   10.0.0.13        10.0.0.13          <none>           <none>
kube-system            calico-node-hdbkv                                 1/1     Running   2 (130m ago)        15d   10.0.0.22        k8s-worker-02-22   <none>           <none>
kube-system            calico-node-jjv5z                                 1/1     Running   2 (130m ago)        15d   10.0.0.12        k8s-master-02-12   <none>           <none>
kube-system            calico-node-l7psx                                 1/1     Running   2 (<invalid> ago)   15d   10.0.0.11        k8s-master-01-11   <none>           <none>
kube-system            calico-node-v5l4l                                 1/1     Running   2 (130m ago)        15d   10.0.0.23        10.0.0.23          <none>           <none>
kube-system            calico-node-vz6mw                                 1/1     Running   2 (130m ago)        15d   10.0.0.21        k8s-worker-01-21   <none>           <none>
kube-system            coredns-566564f9fd-2qxnv                          1/1     Running   2 (130m ago)        15d   172.16.76.102    k8s-worker-01-21   <none>           <none>
kube-system            coredns-566564f9fd-9qxmp                          1/1     Running   2 (130m ago)        15d   172.16.221.100   k8s-worker-02-22   <none>           <none>
kube-system            snapshot-controller-0                             1/1     Running   2 (130m ago)        15d   172.16.76.103    k8s-worker-01-21   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-5fdf8ff74f-h8h8x        1/1     Running   2 (130m ago)        15d   172.16.124.107   10.0.0.23          <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-56cdd85c55-wkb7d             1/1     Running   2 (130m ago)        15d   172.16.124.106   10.0.0.23          <none>           <none>
kuboard                kuboard-v3-55b8c7dbd7-lmmnl                       1/1     Running   2 (130m ago)        15d   172.16.221.102   k8s-worker-02-22   <none>           <none>
myapp                  myapp-nginx-deployment-7454547d57-bblgk           1/1     Running   1 (130m ago)        10d   172.16.221.101   k8s-worker-02-22   <none>           <none>
myapp                  myapp-nginx-deployment-7454547d57-jxnpk           1/1     Running   1 (130m ago)        10d   172.16.76.104    k8s-worker-01-21   <none>           <none>
myapp                  myapp-nginx-deployment-7454547d57-nqjm5           1/1     Running   1 (130m ago)        10d   172.16.124.108   10.0.0.23          <none>           <none>
myapp                  myapp-tomcat-app1-deployment-6d9d8885db-v59n7     1/1     Running   1 (130m ago)        10d   172.16.76.105    k8s-worker-01-21   <none>           <none>
myapp                  myserver-myapp-deployment-name-79c564df85-dqnnw   1/1     Running   0                   20m   172.16.124.109   10.0.0.23          <none>           <none>
myapp                  myserver-myapp-deployment-name-79c564df85-kc5wz   1/1     Running   0                   20m   172.16.76.106    k8s-worker-01-21   <none>           <none>
myapp                  myserver-myapp-deployment-name-79c564df85-vms4z   1/1     Running   0                   20m   172.16.221.103   k8s-worker-02-22   <none>           <none>



#exec into the container to download an image
root@k8s-master-01-11:~# kubectl  exec -it -n myapp                  myserver-myapp-deployment-name-79c564df85-dqnnw bash 

#download an image file

root@myserver-myapp-deployment-name-79c564df85-dqnnw:/usr/share/nginx/html/statics# wget https://www.magedu.com/wp-content/uploads/2022/01/2022012003114993.jpg 
--2023-05-06 15:41:28--  https://www.magedu.com/wp-content/uploads/2022/01/2022012003114993.jpg
Resolving www.magedu.com (www.magedu.com)... 140.143.156.192
Connecting to www.magedu.com (www.magedu.com)|140.143.156.192|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 141386 (138K) [image/jpeg]
Saving to: '2022012003114993.jpg'

2022012003114993.jpg                      100%[====================================================================================>] 138.07K  --.-KB/s    in 0.1s    

2023-05-06 15:41:29 (1.36 MB/s) - '2022012003114993.jpg' saved [141386/141386]
root@myserver-myapp-deployment-name-79c564df85-dqnnw:/usr/share/nginx/html/statics# ls 
2022012003114993.jpg
root@myserver-myapp-deployment-name-79c564df85-dqnnw:/usr/share/nginx/html/statics# 

Files in the shared directory on the NFS server

root@ubuntuharbor50:/data/testdata# pwd 
/data/testdata
root@ubuntuharbor50:/data/testdata# ls 
2022012003114993.jpg
root@ubuntuharbor50:/data/testdata# 
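
The same file can now be fetched from outside the cluster through the NodePort Service defined above (nodePort 38888); a quick check against one of the nodes:

curl -I http://10.0.0.21:38888/statics/2022012003114993.jpg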

Check access

The image is served through the NodePort Service (screenshot omitted).

(2) NFS-based dynamic storage

RBAC file
root@ubuntuharbor50:/data/k8s/dynamic# cat 1-rbac.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
StorageClass YAML file
root@ubuntuharbor50:/data/k8s/dynamic# cat 2-storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
reclaimPolicy: Retain #PV的删除策略,默认为delete,删除PV后立即删除NFS server的数据
mountOptions:
  #- vers=4.1 #containerd有部分参数异常
  #- noresvport #告知NFS客户端在重新建立网络连接时,使用新的传输控制协议源端口
  - noatime #访问文件时不更新文件inode中的时间戳,高并发环境可提高性能
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true"  #archive (keep) the data on the NFS server when the PVC is deleted; false removes the data
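
After applying the StorageClass, a quick check that it is registered; marking it as the default class is optional and shown only as a hedged extra:

#the PROVISIONER column must match the deployment's PROVISIONER_NAME
kubectl get storageclass

#optional: make it the default class for PVCs that omit storageClassName
kubectl patch storageclass managed-nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'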
NFS provisioner Deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: #部署策略
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner 
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 
          image: www.ghostxin.online/application/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.0.0.50
            - name: NFS_PATH
              value: /data/k8s-dynamic
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.50
            path: /data/k8s-dynamic
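
Once the provisioner Deployment is applied, its pod status and logs show whether it can reach the NFS export; a minimal check:

#the provisioner pod should be Running in the nfs namespace
kubectl get pod -n nfs -l app=nfs-client-provisioner

#recent log lines should show it watching for claims of class managed-nfs-storage
kubectl logs -n nfs deploy/nfs-client-provisioner --tail=20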
PVC file
# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myapp
spec:
  storageClassName: managed-nfs-storage #调用的storageclass 名称
  accessModes:
    - ReadWriteMany #访问权限
  resources:
    requests:
      storage: 1Gi #空间大小
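
After the PVC above is applied, the provisioner should create a matching PV automatically; a quick way to watch the binding (the PV name is generated, so the output will vary):

#the PVC should move from Pending to Bound once the PV has been provisioned
kubectl get pvc -n myapp myserver-myapp-dynamic-pvc

#the dynamically created PV carries the StorageClass name
kubectl get pv | grep managed-nfs-storage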
Application YAML file
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp 
  name: myserver-myapp-deployment-name
  namespace: myapp
spec:
  replicas: 1 
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image:  www.ghostxin.online/application/nginx@sha256:cf4ffe24f08a167176c84f2779c9fc35c2f7ce417b411978e384cbe63525b420
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc  #匹配PVC标签

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myapp
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 39999
  selector:
    app: myserver-myapp-frontend
Check the application pods

All pods are up and running.

root@ubuntuharbor50:/data/k8s/dynamic# kubectl  get pod -A 
NAMESPACE              NAME                                              READY   STATUS    RESTARTS            AGE
kube-system            calico-kube-controllers-5d45cfb97b-x6rft          1/1     Running   2 (147m ago)        15d
kube-system            calico-node-ftj5z                                 1/1     Running   2 (<invalid> ago)   15d
kube-system            calico-node-hdbkv                                 1/1     Running   2 (147m ago)        15d
kube-system            calico-node-jjv5z                                 1/1     Running   2 (147m ago)        15d
kube-system            calico-node-l7psx                                 1/1     Running   2 (<invalid> ago)   15d
kube-system            calico-node-v5l4l                                 1/1     Running   2 (147m ago)        15d
kube-system            calico-node-vz6mw                                 1/1     Running   2 (147m ago)        15d
kube-system            coredns-566564f9fd-2qxnv                          1/1     Running   2 (147m ago)        15d
kube-system            coredns-566564f9fd-9qxmp                          1/1     Running   2 (147m ago)        15d
kube-system            snapshot-controller-0                             1/1     Running   2 (147m ago)        15d
kubernetes-dashboard   dashboard-metrics-scraper-5fdf8ff74f-h8h8x        1/1     Running   2 (147m ago)        15d
kubernetes-dashboard   kubernetes-dashboard-56cdd85c55-wkb7d             1/1     Running   2 (147m ago)        15d
kuboard                kuboard-v3-55b8c7dbd7-lmmnl                       1/1     Running   2 (147m ago)        15d
myapp                  myapp-nginx-deployment-7454547d57-bblgk           1/1     Running   1 (147m ago)        10d
myapp                  myapp-nginx-deployment-7454547d57-jxnpk           1/1     Running   1 (147m ago)        10d
myapp                  myapp-nginx-deployment-7454547d57-nqjm5           1/1     Running   1 (147m ago)        10d
myapp                  myapp-tomcat-app1-deployment-6d9d8885db-v59n7     1/1     Running   1 (147m ago)        10d
myapp                  myserver-myapp-deployment-name-5fc55f9544-n5qvw   1/1     Running   0                   94s
nfs                    nfs-client-provisioner-845678b754-gscx4           1/1     Running   0                   95s
root@ubuntuharbor50:/data/k8s/dynamic# 
root@ubuntuharbor50:/data/k8s/dynamic# kubectl  describe  pod  -n nfs   nfs-client-provisioner-845678b754-gscx4 
Name:             nfs-client-provisioner-845678b754-gscx4
Namespace:        nfs
Priority:         0
Service Account:  nfs-client-provisioner
Node:             10.0.0.23/10.0.0.23
Start Time:       Sat, 06 May 2023 15:58:46 +0000
Labels:           app=nfs-client-provisioner
                  pod-template-hash=845678b754
Annotations:      <none>
Status:           Running
IP:               172.16.124.111
IPs:
  IP:           172.16.124.111
Controlled By:  ReplicaSet/nfs-client-provisioner-845678b754
Containers:
  nfs-client-provisioner:
    Container ID:   containerd://456d6f128e2f5ed8e33bd1e8b33ac728bc61965f8d5a49c4e44a0fc76ac1604f
    Image:          www.ghostxin.online/application/nfs-subdir-external-provisioner:v4.0.2
    Image ID:       www.ghostxin.online/application/nfs-subdir-external-provisioner@sha256:ce8203164e7413cfed1bcd1fdfcbd44aa2f7dfaac79d9ae2dab324a49013588b
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 06 May 2023 15:58:47 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  k8s-sigs.io/nfs-subdir-external-provisioner
      NFS_SERVER:        10.0.0.50
      NFS_PATH:          /data/k8s-dynamic
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9nphp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    10.0.0.50
    Path:      /data/k8s-dynamic
    ReadOnly:  false
  kube-api-access-9nphp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  22s   default-scheduler  Successfully assigned nfs/nfs-client-provisioner-845678b754-gscx4 to 10.0.0.23
  Normal  Pulling    21s   kubelet            Pulling image "www.ghostxin.online/application/nfs-subdir-external-provisioner:v4.0.2"
  Normal  Pulled     20s   kubelet            Successfully pulled image "www.ghostxin.online/application/nfs-subdir-external-provisioner:v4.0.2" in 539.647499ms (539.654457ms including waiting)
  Normal  Created    20s   kubelet            Created container nfs-client-provisioner
  Normal  Started    20s   kubelet            Started container nfs-client-provisioner

Exec into the container

root@ubuntuharbor50:/data/k8s/dynamic# kubectl  exec -it -n myapp                  myserver-myapp-deployment-name-5fc55f9544-n5qvw bash 
root@myserver-myapp-deployment-name-5fc55f9544-n5qvw:/# apt update && apt install wget -y
root@myserver-myapp-deployment-name-5fc55f9544-n5qvw:/usr/share/nginx/html/statics# ls 
2022012003114993.jpg
root@myserver-myapp-deployment-name-5fc55f9544-n5qvw:/usr/share/nginx/html/statics# pwd 
/usr/share/nginx/html/statics
root@myserver-myapp-deployment-name-5fc55f9544-n5qvw:/usr/share/nginx/html/statics# 

Access test

The image stored on the dynamically provisioned volume is reachable through nodePort 39999 (screenshot omitted).

Check the dynamically created mount on the NFS server

A per-PVC directory has appeared under the export path, so dynamic provisioning succeeded.

root@ubuntuharbor50:/data/k8s-dynamic/myapp-myserver-myapp-dynamic-pvc-pvc-c4f8b918-aab4-4d10-93e4-fe22ab4dc12c# pwd 
/data/k8s-dynamic/myapp-myserver-myapp-dynamic-pvc-pvc-c4f8b918-aab4-4d10-93e4-fe22ab4dc12c
root@ubuntuharbor50:/data/k8s-dynamic/myapp-myserver-myapp-dynamic-pvc-pvc-c4f8b918-aab4-4d10-93e4-fe22ab4dc12c# ls 
2022012003114993.jpg
root@ubuntuharbor50:/data/k8s-dynamic/myapp-myserver-myapp-dynamic-pvc-pvc-c4f8b918-aab4-4d10-93e4-fe22ab4dc12c# 
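
Because the StorageClass sets archiveOnDelete: "true", deleting the claim should archive the data on the NFS server instead of removing it; a hedged way to confirm this in a test environment:

#remove the application first so nothing is still using the claim, then delete the claim itself
kubectl delete deployment -n myapp myserver-myapp-deployment-name
kubectl delete pvc -n myapp myserver-myapp-dynamic-pvc

#on the NFS server the per-PVC directory is renamed with an "archived-" prefix rather than deleted
ls /data/k8s-dynamic/ | grep -i archived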