Kubebuilder Hello World
Abstract: building a first kubebuilder program from scratch.
0. Environment & Introduction
0.1 Environment
My machine: a Mac (amd64). arm users should first read the Quick Start section of the official kubebuilder documentation before installing Kubebuilder.
0.2 What is kubebuilder?
(Omitted for now; a dedicated post analyzing it will come later.)
1. Installing Kubebuilder
1.1 Prerequisites
You need Go, Docker, and minikube (kind works too) for creating a cluster. Install them if you don't have them, although by the time you are learning kubebuilder you probably already know Kubernetes reasonably well, so these are most likely already in place.
Note that the cluster environment I use here is minikube.
1.2 Install kubebuilder & kustomize
brew install kubebuilder
brew install kustomize
2. Project Initialization
2.1 Create and enter a directory
mkdir Helo
cd Helo
2.2 Initialize with kubebuilder
kubebuilder init --domain xxx.domain --repo Helo
--domain xxx.domain is the project's domain; --repo Helo is the repository (Go module) path.
If initialization succeeds you will see the following output (you can now run kubebuilder create api):
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.14.1
Update dependencies:
$ go mod tidy
Next: define a resource with:
$ kubebuilder create api
Use the tree command to look at the current directory layout ⬇️
tree
.
├── Dockerfile
├── Makefile
├── PROJECT
├── README.md
├── config
│ ├── default
│ │ ├── kustomization.yaml
│ │ ├── manager_auth_proxy_patch.yaml
│ │ └── manager_config_patch.yaml
│ ├── manager
│ │ ├── kustomization.yaml
│ │ └── manager.yaml
│ ├── prometheus
│ │ ├── kustomization.yaml
│ │ └── monitor.yaml
│ └── rbac
│ ├── auth_proxy_client_clusterrole.yaml
│ ├── auth_proxy_role.yaml
│ ├── auth_proxy_role_binding.yaml
│ ├── auth_proxy_service.yaml
│ ├── kustomization.yaml
│ ├── leader_election_role.yaml
│ ├── leader_election_role_binding.yaml
│ ├── role_binding.yaml
│ └── service_account.yaml
├── go.mod
├── go.sum
├── hack
│ └── boilerplate.go.txt
└── main.go
7 directories, 24 files
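The scaffolded main.go is the entry point of the controller manager; sections 4 and 5 below build on it. A trimmed sketch of what it typically contains (illustrative only; the exact scaffold depends on the kubebuilder and controller-runtime versions):

package main

import (
	"flag"
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

var scheme = runtime.NewScheme()

func main() {
	// Default ports: metrics on :8080, health probes on :8081 (changed in section 4 below).
	var metricsAddr, probeAddr string
	var enableLeaderElection bool
	flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
	flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
	flag.BoolVar(&enableLeaderElection, "leader-elect", false, "Enable leader election for controller manager.")
	flag.Parse()

	// Register the built-in Kubernetes types; `kubebuilder create api` also registers our own group here.
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))

	// The manager wires together the client, cache, metrics server and health probes.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:                 scheme,
		MetricsBindAddress:     metricsAddr,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
	})
	if err != nil {
		os.Exit(1)
	}

	// Reconcilers are registered with the manager here (added by `kubebuilder create api`).

	_ = mgr.AddHealthzCheck("healthz", healthz.Ping)
	_ = mgr.AddReadyzCheck("readyz", healthz.Ping)

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}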
3. Creating an API
Run the create-API command (the names are up to you). Some terminology first:
- An API Group is a collection of related API functionality
- Each Group has one or more Versions (GV)
- Each GV contains any number of API types, called Kinds; the same Kind may differ between Versions
kubebuilder create api --group apps --version v1 --kind Test
If creation succeeds you will see the output below ⬇️ (it asks us to generate the manifests):
kubebuilder create api --group apps --version v1 --kind Test
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
api/v1/test_types.go
controllers/test_controller.go
Update dependencies:
$ go mod tidy
Running make:
$ make generate
mkdir -p /Users/levi/wrksp/Helo/bin
test -s /Users/levi/wrksp/Helo/bin/controller-gen && /Users/levi/wrksp/Helo/bin/controller-gen --version | grep -q v0.11.1 ||
GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/Users/levi/wrksp/Helo/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
$ make manifests
Running tree again shows that the directory structure has changed after creating the API. The notable additions are ⬇️
- api/v1
- config/crd
- config/samples (a sample YAML for an instance of the Test kind)
.
├── Dockerfile
├── Makefile
├── PROJECT
├── README.md
├── api
│ └── v1
│ ├── groupversion_info.go
│ ├── test_types.go
│ └── zz_generated.deepcopy.go
├── bin
│ └── controller-gen
├── config
│ ├── crd
│ │ ├── kustomization.yaml
│ │ ├── kustomizeconfig.yaml
│ │ └── patches
│ │ ├── cainjection_in_tests.yaml
│ │ └── webhook_in_tests.yaml
│ ├── default
│ │ ├── kustomization.yaml
│ │ ├── manager_auth_proxy_patch.yaml
│ │ └── manager_config_patch.yaml
│ ├── manager
│ │ ├── kustomization.yaml
│ │ └── manager.yaml
│ ├── prometheus
│ │ ├── kustomization.yaml
│ │ └── monitor.yaml
│ ├── rbac
│ │ ├── auth_proxy_client_clusterrole.yaml
│ │ ├── auth_proxy_role.yaml
│ │ ├── auth_proxy_role_binding.yaml
│ │ ├── auth_proxy_service.yaml
│ │ ├── kustomization.yaml
│ │ ├── leader_election_role.yaml
│ │ ├── leader_election_role_binding.yaml
│ │ ├── role_binding.yaml
│ │ ├── service_account.yaml
│ │ ├── test_editor_role.yaml
│ │ └── test_viewer_role.yaml
│ └── samples
│ └── apps_v1_test.yaml
├── controllers
│ ├── suite_test.go
│ └── test_controller.go
├── go.mod
├── go.sum
├── hack
│ └── boilerplate.go.txt
└── main.go
14 directories, 37 files
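Among these, api/v1/test_types.go defines the Go types behind the Test kind, and api/v1/groupversion_info.go defines the GroupVersion apps.xxx.domain/v1 plus the SchemeBuilder that registers them. A trimmed sketch of test_types.go, assuming the default scaffold (the Foo field is the placeholder kubebuilder generates; make generate keeps zz_generated.deepcopy.go in sync with these types):

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// TestSpec defines the desired state of Test.
type TestSpec struct {
	// Foo is the placeholder field generated by the scaffold; replace it with real fields.
	Foo string `json:"foo,omitempty"`
}

// TestStatus defines the observed state of Test.
type TestStatus struct {
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// Test is the Schema for the tests API.
type Test struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   TestSpec   `json:"spec,omitempty"`
	Status TestStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// TestList contains a list of Test objects.
type TestList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Test `json:"items"`
}

func init() {
	// SchemeBuilder comes from groupversion_info.go in the same package.
	SchemeBuilder.Register(&Test{}, &TestList{})
}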
4. Fixing a Port Conflict (skip if you don't have one)
First run go run ./main.go to check for a port conflict. On my machine it fails with error listening on :8080: listen tcp :8080: bind: address already in use:
go run ./main.go
2023-04-17T10:22:01+08:00 INFO controller-runtime.metrics Metrics server is starting to listen {"addr": ":8080"}
2023-04-17T10:22:01+08:00 ERROR controller-runtime.metrics metrics server failed to listen. You may want to disable the metrics server or use another port if it is due to conflicts {"error": "error listening on :8080: listen tcp :8080: bind: address already in use"}
sigs.k8s.io/controller-runtime/pkg/metrics.NewListener
/Users/levi/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.1/pkg/metrics/listener.go:48
sigs.k8s.io/controller-runtime/pkg/manager.New
/Users/levi/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.1/pkg/manager/manager.go:407
main.main
/Users/levi/wrksp/Helo/main.go:68
runtime.main
/usr/local/Cellar/go/1.20.3/libexec/src/runtime/proc.go:250
2023-04-17T10:22:01+08:00 ERROR setup unable to start manager {"error": "error listening on :8080: listen tcp :8080: bind: address already in use"}
main.main
/Users/levi/wrksp/Helo/main.go:88
runtime.main
/usr/local/Cellar/go/1.20.3/libexec/src/runtime/proc.go:250
exit status 1
Check the port with lsof; the listener is main (actually another kubebuilder instance).
lsof -i:8080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
main 4304 levi 7u IPv6 0xc681612e5637de5 0t0 TCP *:http-alt (LISTEN)
I don't want to kill the existing process to free 8080, so instead I change the project's default 8080 to 8280 (use lsof beforehand to find any port that is free).
The places to modify are:
(1) In main.go, change the metrics-bind-address default from 8080 to 8280. The original line is:
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
(2) In Helo/config/default/manager_auth_proxy_patch.yaml, change the port in upstream and --metrics-bind-address to 8280 as well:
- "--upstream=http://127.0.0.1:8080/"
- "--metrics-bind-address=127.0.0.1:8080"
If 8081 (the health-probe port) is also taken, change it in the same way; I replaced 8081 with 8281. The resulting flag defaults are sketched below.
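For reference, a minimal sketch of the two flag defaults in main.go after the change (assuming the ports 8280 and 8281 chosen above):

// Ports 8280/8281 are the free ports picked above; any unused ports work.
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8280", "The address the metric endpoint binds to.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8281", "The address the probe endpoint binds to.")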
Once the port conflict is sorted out, continue with the steps below.
5. Installing the CRD
5.1 Point kubectl at the cluster
Run the minikube update-context command:
# Create the cluster
minikube start
# Point the kubectl context at the cluster
minikube update-context
## Output:
"minikube" context has been updated to point to 127.0.0.1:50336
The current context is "minikube"
5.2 Install the CRD into the cluster
# Install the CRD into the cluster (make sure you are in the minikube context first)
make install
# Run the controller
make run
# Sample output
test -s /Users/levi/wrksp/Helo/bin/controller-gen && /Users/levi/wrksp/Helo/bin/controller-gen --version | grep -q v0.11.1 ||
GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/Users/levi/wrksp/Helo/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/Users/levi/wrksp/Helo/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go run ./main.go
2023-04-17T15:11:52+08:00 INFO controller-runtime.metrics Metrics server is starting to listen {"addr": ":8280"}
2023-04-17T15:11:52+08:00 INFO setup starting manager
2023-04-17T15:11:52+08:00 INFO Starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8280"}
2023-04-17T15:11:52+08:00 INFO Starting server {"kind": "health probe", "addr": "[::]:8281"}
2023-04-17T15:11:52+08:00 INFO Starting EventSource {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test", "source": "kind source: *v1.Test"}
2023-04-17T15:11:52+08:00 INFO Starting Controller {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test"}
2023-04-17T15:11:52+08:00 INFO Starting workers {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test", "worker count": 1}
Note that the controller keeps running; pressing ctrl + c will stop it.
6. Creating a Resource Instance
6.1 Recap so far
So far we have defined a custom resource (a CRD for the Test kind). Next we instantiate that custom resource (create a resource object of that kind).
After installing the CRD, kubebuilder has already generated a sample manifest for this kind in ./config/samples/apps_v1_test.yaml.
Its content is:
apiVersion: apps.xxx.domain/v1
kind: Test
metadata:
  labels:
    app.kubernetes.io/name: test
    app.kubernetes.io/instance: test-sample
    app.kubernetes.io/part-of: helo
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: helo
  name: test-sample
spec:
  # TODO(user): Add fields here
6.2 Create the instance
Open a new terminal, cd Helo, point kubectl at the cluster with minikube update-context, then create the instance:
kubectl apply -f config/samples/
## Output on success
test.apps.xxx.domain/test-sample created
List instances of the custom Test kind (test-sample):
# List Test resource instances
kubectl get Test
## Output
NAME AGE
test-sample 15s
6.3 Edit the instance
kubectl edit Test test-sample
My edit adds the following at the bottom (mind the indentation):
spec:
  foo: bar
The result is:
test.apps.xxx.domain/test-sample edited
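The scaffolded Reconcile in controllers/test_controller.go is still empty, so this edit has no visible effect yet. As a minimal sketch (assuming the scaffold's default Foo field; this is illustrative, not the generated code), a reconciler that reads the instance we just edited could look like this:

package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	logf "sigs.k8s.io/controller-runtime/pkg/log"

	appsv1 "Helo/api/v1"
)

// TestReconciler reconciles a Test object.
type TestReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

func (r *TestReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := logf.FromContext(ctx)

	// Fetch the Test instance that triggered this reconcile (e.g. test-sample).
	var test appsv1.Test
	if err := r.Get(ctx, req.NamespacedName, &test); err != nil {
		// The instance may have been deleted; ignore NotFound errors.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	log.Info("reconciling Test", "name", req.Name, "spec.foo", test.Spec.Foo)
	return ctrl.Result{}, nil
}

func (r *TestReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.Test{}).
		Complete(r)
}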
7. Deleting the Instance, Stopping the Controller
- Delete the instance:
kubectl delete -f config/samples/
## Output
test.apps.xxx.domain "test-sample" deleted
- Stop the controller: just press ctrl + c.
- Uninstall the CRD: make uninstall. There is no need to do this now; keep it installed for the experiments that follow.
8. Building the Image & Deploying
Note: this section mainly follows 《kubebuilder实战之二:初次体验kubebuilder》.
The controller so far has been running outside the cluster; in production it usually runs inside Kubernetes. (All of the following commands are run inside the ./Helo/ directory.)
8.1 Build the image
# docker build and push (2513686675 is my Docker Hub account, levitest is the image name, 001 is the tag; a version-style tag such as v0.0.1 is recommended)
make docker-build docker-push IMG=2513686675/levitest:001
## Output
make docker-build docker-push IMG=2513686675/levitest:001
test -s /Users/levi/wrksp/Helo/bin/controller-gen && /Users/levi/wrksp/Helo/bin/controller-gen --version | grep -q v0.11.1 ||
GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/Users/levi/wrksp/Helo/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/Users/levi/wrksp/Helo/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
test -s /Users/levi/wrksp/Helo/bin/setup-envtest || GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
KUBEBUILDER_ASSETS="/Users/levi/wrksp/Helo/bin/k8s/1.26.0-darwin-amd64" go test ./... -coverprofile cover.out
? Helo [no test files]
? Helo/api/v1 [no test files]
ok Helo/controllers 1.568s coverage: 0.0% of statements
docker build -t 2513686675/levitest:001 .
[+] Building 2.6s (18/18) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 4.92kB 0.0s
=> [internal] load metadata for gcr.io/distroless/static:nonroot 1.1s
=> [internal] load metadata for docker.io/library/golang:1.19 2.5s
=> [auth] library/golang:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 3.57kB 0.0s
=> [builder 1/9] FROM docker.io/library/golang:1.19@sha256:9f2dd04486e84eec72d945b077d568976981d9afed8b4e2aeb08f7ab739292b3 0.0s
=> [stage-1 1/3] FROM gcr.io/distroless/static:nonroot@sha256:149531e38c7e4554d4a6725d7d70593ef9f9881358809463800669ac89f3b0ec 0.0s
=> CACHED [builder 2/9] WORKDIR /workspace 0.0s
=> CACHED [builder 3/9] COPY go.mod go.mod 0.0s
=> CACHED [builder 4/9] COPY go.sum go.sum 0.0s
=> CACHED [builder 5/9] RUN go mod download 0.0s
=> CACHED [builder 6/9] COPY main.go main.go 0.0s
=> CACHED [builder 7/9] COPY api/ api/ 0.0s
=> CACHED [builder 8/9] COPY controllers/ controllers/ 0.0s
=> CACHED [builder 9/9] RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go 0.0s
=> CACHED [stage-1 2/3] COPY --from=builder /workspace/manager . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:f3aefc63f93dc5193e6e3e0c168b8c78ad2769e0a79ad018a3faaf34cb20198e 0.0s
=> => naming to docker.io/2513686675/levitest:001 0.0s
docker push 2513686675/levitest:001
The push refers to repository [docker.io/2513686675/levitest]
da2a60d3ec35: Pushed
4cb10dd2545b: Pushed
d2d7ec0f6756: Pushed
1a73b54f556b: Pushed
e624a5370eca: Pushed
d52f02c6501c: Pushed
ff5700ec5418: Pushed
399826b51fcf: Pushed
6fbdf253bbc2: Pushed
d0157aa0c95a: Pushed
001: digest: sha256:efe1d1e17537b90f84cf005b95b2c3b7065d9f6dc4c5761c6d89103963b95507 size: 2402
8.2 Deploy
Deploy the controller image to Kubernetes:
make deploy IMG=2513686675/levitest:001
Troubleshooting
Unable to connect to the server: net/http: TLS handshake timeout
make deploy IMG=2513686675/levitest:001
## Deploy output with the error (Unable to connect to the server: net/http: TLS handshake timeout)
test -s /Users/levi/wrksp/Helo/bin/controller-gen && /Users/levi/wrksp/Helo/bin/controller-gen --version | grep -q v0.11.1 ||
GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/Users/levi/wrksp/Helo/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
test -s /Users/levi/wrksp/Helo/bin/kustomize || { curl -Ss "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash -s -- 3.8.7 /Users/levi/wrksp/Helo/bin; }
cd config/manager && /Users/levi/wrksp/Helo/bin/kustomize edit set image controller=2513686675/levitest:001
/Users/levi/wrksp/Helo/bin/kustomize build config/default | kubectl apply -f -
Unable to connect to the server: net/http: TLS handshake timeout
make: *** [deploy] Error 1
This is likely a proxy problem; clear the proxy environment variables:
unset http_proxy
unset https_proxy
Output of a successful deploy:
make deploy IMG=2513686675/levitest:001
## Controller image deployed to k8s successfully
test -s /Users/levi/wrksp/Helo/bin/controller-gen && /Users/levi/wrksp/Helo/bin/controller-gen --version | grep -q v0.11.1 ||
GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/Users/levi/wrksp/Helo/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
test -s /Users/levi/wrksp/Helo/bin/kustomize || { curl -Ss "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash -s -- 3.8.7 /Users/levi/wrksp/Helo/bin; }
cd config/manager && /Users/levi/wrksp/Helo/bin/kustomize edit set image controller=2513686675/levitest:001
/Users/levi/wrksp/Helo/bin/kustomize build config/default | kubectl apply -f -
namespace/helo-system created
customresourcedefinition.apiextensions.k8s.io/tests.apps.xxx.domain unchanged
serviceaccount/helo-controller-manager created
role.rbac.authorization.k8s.io/helo-leader-election-role created
clusterrole.rbac.authorization.k8s.io/helo-manager-role created
clusterrole.rbac.authorization.k8s.io/helo-metrics-reader created
clusterrole.rbac.authorization.k8s.io/helo-proxy-role created
rolebinding.rbac.authorization.k8s.io/helo-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/helo-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/helo-proxy-rolebinding created
service/helo-controller-manager-metrics-service created
deployment.apps/helo-controller-manager created
8.3 Inspect the Deployment + Further Explanation
Looking at the pods, helo-controller-manager-655547d5dc-5rv59 has two containers (READY 2/2):
# List all pods
kubectl get pod --all-namespaces
## Output
NAMESPACE NAME READY STATUS RESTARTS AGE
default command-demo 0/1 Completed 0 6h13m
helo-system helo-controller-manager-655547d5dc-5rv59 2/2 Running 0 4m29s
kube-system coredns-787d4945fb-85skf 1/1 Running 4 (8h ago) 3d4h
kube-system etcd-minikube 1/1 Running 5 (8h ago) 3d4h
kube-system kube-apiserver-minikube 1/1 Running 5 3d4h
kube-system kube-controller-manager-minikube 1/1 Running 4 (8h ago) 3d4h
kube-system kube-proxy-v4gdc 1/1 Running 6 (8h ago) 3d4h
kube-system kube-scheduler-minikube 1/1 Running 6 (8h ago) 3d4h
kube-system storage-provisioner 1/1 Running 11 (8h ago) 3d4h
kubernetes-dashboard dashboard-metrics-scraper-5c6664855-w4bp2 1/1 Running 0 6h27m
kubernetes-dashboard kubernetes-dashboard-55c4cbbc7c-xg6t5 1/1 Running 0 6h27m
Use describe to see the details (the namespace is not default, so specify it with -n). The Containers field lists two containers: kube-rbac-proxy and manager.
kubectl describe pod helo-controller-manager-655547d5dc-5rv59 -n helo-system
## Output
kubectl describe pod helo-controller-manager-655547d5dc-5rv59 -n helo-system
Name: helo-controller-manager-655547d5dc-5rv59
Namespace: helo-system
Priority: 0
Service Account: helo-controller-manager
Node: minikube/192.168.58.2
Start Time: Mon, 17 Apr 2023 19:54:30 +0800
Labels: control-plane=controller-manager
pod-template-hash=655547d5dc
Annotations: kubectl.kubernetes.io/default-container: manager
Status: Running
IP: 10.244.0.10
IPs:
IP: 10.244.0.10
Controlled By: ReplicaSet/helo-controller-manager-655547d5dc
Containers:
kube-rbac-proxy:
Container ID: docker://a86bf7bbf28b6c56f7287915e8e9aacabe28e3cb94989998538963ea3229d8ab
Image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
Image ID: docker-pullable://gcr.io/kubebuilder/kube-rbac-proxy@sha256:d4883d7c622683b3319b5e6b3a7edfbf2594c18060131a8bf64504805f875522
Port: 8443/TCP
Host Port: 0/TCP
Args:
--secure-listen-address=0.0.0.0:8443
--upstream=http://127.0.0.1:8280/
--logtostderr=true
--v=0
State: Running
Started: Mon, 17 Apr 2023 19:54:46 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 500m
memory: 128Mi
Requests:
cpu: 5m
memory: 64Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spmqg (ro)
manager:
Container ID: docker://6c8bda780cfdbd91a1cf550bf9dc5c0c92bcb45cc12f65071c059590fbb0955d
Image: 2513686675/levitest:001
Image ID: docker-pullable://2513686675/levitest@sha256:efe1d1e17537b90f84cf005b95b2c3b7065d9f6dc4c5761c6d89103963b95507
Port: <none>
Host Port: <none>
Command:
/manager
Args:
--health-probe-bind-address=:8281
--metrics-bind-address=127.0.0.1:8280
--leader-elect
State: Running
Started: Mon, 17 Apr 2023 19:55:08 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 500m
memory: 128Mi
Requests:
cpu: 10m
memory: 64Mi
Liveness: http-get http://:8281/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
Readiness: http-get http://:8281/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spmqg (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-spmqg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16m default-scheduler Successfully assigned helo-system/helo-controller-manager-655547d5dc-5rv59 to minikube
Normal Pulling 16m kubelet Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1"
Normal Pulled 16m kubelet Successfully pulled image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1" in 14.99332536s (14.993403934s including waiting)
Normal Created 16m kubelet Created container kube-rbac-proxy
Normal Started 16m kubelet Started container kube-rbac-proxy
Normal Pulling 16m kubelet Pulling image "2513686675/levitest:001"
Normal Pulled 15m kubelet Successfully pulled image "2513686675/levitest:001" in 21.712144409s (21.71215601s including waiting)
Normal Created 15m kubelet Created container manager
Normal Started 15m kubelet Started container manager
View the logs:
kubectl logs -f helo-controller-manager-655547d5dc-5rv59 -n helo-system -c manager
## Output
2023-04-17T11:55:08Z INFO controller-runtime.metrics Metrics server is starting to listen {"addr": "127.0.0.1:8280"}
2023-04-17T11:55:08Z INFO setup starting manager
2023-04-17T11:55:08Z INFO Starting server {"path": "/metrics", "kind": "metrics", "addr": "127.0.0.1:8280"}
2023-04-17T11:55:08Z INFO Starting server {"kind": "health probe", "addr": "[::]:8281"}
I0417 11:55:08.632800 1 leaderelection.go:248] attempting to acquire leader lease helo-system/9b9dc273.xxx.domain...
I0417 11:55:08.642969 1 leaderelection.go:258] successfully acquired lease helo-system/9b9dc273.xxx.domain
2023-04-17T11:55:08Z DEBUG events helo-controller-manager-655547d5dc-5rv59_41a3e337-336f-4613-9d0e-75aea8923543 became leader {"type": "Normal", "object": {"kind":"Lease","namespace":"helo-system","name":"9b9dc273.xxx.domain","uid":"0c373ec9-47ab-45b0-bc4b-dad462f8f174","apiVersion":"coordination.k8s.io/v1","resourceVersion":"38253"}, "reason": "LeaderElection"}
2023-04-17T11:55:08Z INFO Starting EventSource {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test", "source": "kind source: *v1.Test"}
2023-04-17T11:55:08Z INFO Starting Controller {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test"}
2023-04-17T11:55:08Z INFO Starting workers {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test", "worker count": 1}
9. Uninstalling
9.1 uninstall
Remove the CRD from the cluster:
make uninstall
9.2 undeploy
Remove the deployment from the cluster:
make undeploy
10. References
kubebuilder实战 (parts 1 and 2): highly recommended!