Details
- Type: Bug
- Status: Done
- Priority: Medium
- Resolution: Fixed
- Affects Version: 2.28.0
- Component: Server Integrations
Description
User impact:
The user gets an error when trying to re-register a Kubernetes cluster.
Steps to reproduce:
1. Go to DBaaS and register a Kubernetes cluster.
2. Unregister it.
3. Register the same cluster again.
Actual result:
An error is shown when registering the cluster again: the StartMonitoring call fails while applying the VictoriaMetrics operator CRDs (see the log below).
Expected result:
No error.
Workaround:
The re-registered cluster is actually there on page refresh.
Alternatively, download https://raw.githubusercontent.com/percona-platform/dbaas-controller/main/deploy/victoriametrics/crds/crd.yaml and run `kubectl delete -f crd.yaml`.
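For reference, the workaround as shell commands (fetching the file with curl is just one way to get it; use whatever download method you prefer):
```sh
# Fetch the VictoriaMetrics operator CRD manifest referenced above
curl -LO https://raw.githubusercontent.com/percona-platform/dbaas-controller/main/deploy/victoriametrics/crds/crd.yaml

# Delete the leftover CRDs from the previous registration so re-registration succeeds
kubectl delete -f crd.yaml
```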
Details:
Works fine in 2.27.0
dbaas-controller.log:
time="May 4 08:27:42.965714738" level=error msg="RPC /percona.platform.dbaas.controller.v1beta1.KubernetesClusterAPI/StartMonitoring done in 59.765894966s with unexpected error: signal: killed\ncmd: /opt/dbaas-tools/bin/kubectl-1.16 --kubeconfig=/tmp/dbaas-controller-kubeconfig-1499591440 apply -f -\nstderr: \ncannot apply file: \"deploy/victoriametrics/crds/crd.yaml\"\ngithub.com/percona-platform/dbaas-controller/service/k8sclient.(*K8sClient).CreateVMOperator\n\t/home/builder/rpm/BUILD/dbaas-controller-3433a39836c0920dc9675b8e376bfccb9c867bef/service/k8sclient/k8sclient.go:1919\ngithub.com/percona-platform/dbaas-controller/service/cluster.KubernetesClusterService.StartMonitoring\n\t/home/builder/rpm/BUILD/dbaas-controller-3433a39836c0920dc9675b8e376bfccb9c867bef/service/cluster/kubernetes_cluster.go:140\ngithub.com/percona-platform/dbaas-api/gen/controller._KubernetesClusterAPI_StartMonitoring_Handler.func1\n\t/home/builder/go/pkg/mod/github.com/percona-platform/[email protected]/gen/controller/kubernetes_cluster_api.pb.go:730\ngithub.com/grpc-ecosystem/go-grpc-middleware/validator.UnaryServerInterceptor.func1\n\t/home/builder/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/validator/validator.go:28\ngithub.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1\n\t/home/builder/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25\ngithub.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).UnaryServerInterceptor.func1\n\t/home/builder/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/server_metrics.go:107\ngithub.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1\n\t/home/builder/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25\ngithub.com/percona-platform/dbaas-controller/utils/servers.unaryLoggingInterceptor.func1.1\n\t/home/builder/rpm/BUILD/dbaas-controller-3433a39836c0920dc9675b8e376bfccb9c867bef/utils/servers/logging_interceptor.go:89\ngithub.com/percona-platform/dbaas-controller/utils/servers.logGRPCRequest\n\t/home/builder/rpm/BUILD/dbaas-controller-3433a39836c0920dc9675b8e376bfccb9c867bef/utils/servers/logging_interceptor.go:70\ngithub.com/percona-platform/dbaas-controller/utils/servers.unaryLoggingInterceptor.func1\n\t/home/builder/rpm/BUILD/dbaas-controller-3433a39836c0920dc9675b8e376bfccb9c867bef/utils/servers/logging_interceptor.go:87\ngithub.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1\n\t/home/builder/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:25\ngithub.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1\n\t/home/builder/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34\ngithub.com/percona-platform/dbaas-api/gen/controller._KubernetesClusterAPI_StartMonitoring_Handler\n\t/home/builder/go/pkg/mod/github.com/percona-platform/[email protected]/gen/controller/kubernetes_cluster_api.pb.go:732\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/home/builder/go/pkg/mod/google.golang.org/[email protected]/server.go:1286\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/home/builder/go/pkg/mod/google.golang.org/[email protected]/server.go:1609\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/home/builder/go/pkg/mod/google.golang.org/[email protected]/server.go:934\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571" request=edc65d03-cb83-11ec-9d41-0242ac110002
Suggested implementation:
There are two options to fix this problem, and we should probably do both:
- Check whether the VM Operator is already installed before installing it (see the sketch after this list).
- Remove the VM Operator once the user unregisters the Kubernetes cluster.
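A minimal sketch of the first option. The deployment name `vm-operator` and the manifest path are assumptions for illustration, not the actual names used by dbaas-controller:
```sh
# Assumed names: "vm-operator" deployment and the manifest path are placeholders.
if kubectl get deployment vm-operator >/dev/null 2>&1; then
  echo "VM Operator already installed, skipping installation"
else
  kubectl apply -f deploy/victoriametrics/crds/crd.yaml
fi
```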
Scenario: registering a new Kubernetes cluster
When a new Kubernetes cluster is being registered
Then the following labels should be added to the vm-operator's pod to identify it for operations like delete:
- app.kubernetes.io/name: <cluster-name>
- app.kubernetes.io/part-of: pmm
- app.kubernetes.io/managed-by: ?????
- app.kubernetes.io/created-by: pmm?
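As an illustration only, the labels could be applied to the operator's Deployment object like this; the deployment name `vm-operator` is an assumption, and the managed-by/created-by values are left out because they are still open questions above:
```sh
# Label the operator Deployment; CLUSTER_NAME is the name the cluster was registered under.
# The managed-by / created-by values are still to be decided (see "?????" above).
kubectl label deployment vm-operator \
  app.kubernetes.io/name="$CLUSTER_NAME" \
  app.kubernetes.io/part-of=pmm
```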
Scenario: unregistering a Kubernetes cluster
When a Kubernetes cluster is being unregistered
Then remove the vm-operator where the label app.kubernetes.io/name matches the cluster name
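A corresponding sketch for the unregister path, assuming the label from the previous scenario was set on the operator's Deployment:
```sh
# Remove the VM Operator deployment that belongs to the cluster being unregistered,
# matching on the label added at registration time.
kubectl delete deployment -l app.kubernetes.io/name="$CLUSTER_NAME"
```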
Attachments
Issue Links
- is cloned by PMM-10217 DBaaS: Delete VM operator upon unregister (To Do)