Details
Type: Bug
Status: Done
Priority: Medium
Resolution: Fixed
Fix Version: 1.13.0
Description
If the Replica Set is exposed via ClusterIP, then after scaling down to 1 the remaining node shows as Secondary and does not automatically transition to Primary.
How to reproduce:
deploy/cr.yaml configuration:
replsets:
  ...
  expose:
    enabled: true
    exposeType: ClusterIP
  ...
sharding:
  enabled: false
The cluster is ready:
$ kubectl get perconaservermongodb.psmdb.percona.com
NAME              ENDPOINT                                      STATUS   AGE
my-cluster-name   my-cluster-name-rs0.mongo.svc.cluster.local   ready    106s
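Since expose.enabled is true with exposeType ClusterIP, the operator also creates a dedicated ClusterIP Service for each replica set member. As a quick sanity check (the mongo namespace is inferred from the endpoint above), they can be listed with:
# List the per-pod ClusterIP services created for the exposed replica set
$ kubectl get svc -n mongo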
Scaling down to 1:
allowUnsafeConfigurations: true
...
replsets:
- name: rs0
  size: 1
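Assuming the change above is made in deploy/cr.yaml, applying it triggers the scale-down:
# Apply the updated custom resource to scale the replica set down to 1
$ kubectl apply -f deploy/cr.yaml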
The cluster scales down to 1:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-cluster-name-rs0-0 2/2 Running 1 (2m3s ago) 3m13s
percona-server-mongodb-operator-8d76677f7-rdnxd 1/1 Running 0 39m
But the cluster is stuck in the initializing state:
$ kubectl get perconaservermongodb.psmdb.percona.com
NAME              ENDPOINT                                      STATUS         AGE
my-cluster-name   my-cluster-name-rs0.mongo.svc.cluster.local   initializing   3m37s
$ kubectl get perconaservermongodb.psmdb.percona.com
NAME              ENDPOINT                                      STATUS         AGE
my-cluster-name   my-cluster-name-rs0.mongo.svc.cluster.local   initializing   10m
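To see why reconciliation stalls, the operator logs can be inspected (the pod name is taken from the kubectl get pods output above):
# Check the operator's reconcile loop for errors while the CR stays in "initializing"
$ kubectl logs percona-server-mongodb-operator-8d76677f7-rdnxd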
The remaining node shows as Secondary:
$ kubectl exec -it my-cluster-name-rs0-0 -c mongod -- mongo -u clusterAdmin -p clusterAdmin123456 --eval "db.runCommand('ismaster')"
...
"ismaster" : false,
"secondary" : true,
"tags" : {
        "serviceName" : "my-cluster-name",
        "podName" : "my-cluster-name-rs0-0"
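A possible manual workaround, sketched here under the assumption that the replica set config still lists the removed members (this is not the operator's fix): force-reconfigure the surviving node into a single-member set so it can elect itself Primary. The survivor is assumed to be members[0]; verify the member states first.
# Inspect member states; removed pods typically show as unreachable
$ kubectl exec -it my-cluster-name-rs0-0 -c mongod -- mongo -u clusterAdmin -p clusterAdmin123456 --eval "rs.status().members.forEach(function (m) { print(m.name, m.stateStr); })"
# Force-reconfigure to a single voting member (assumes the survivor is members[0]);
# a forced reconfig is permitted on a Secondary when no Primary exists
$ kubectl exec -it my-cluster-name-rs0-0 -c mongod -- mongo -u clusterAdmin -p clusterAdmin123456 --eval "cfg = rs.conf(); cfg.members = [cfg.members[0]]; rs.reconfig(cfg, { force: true })"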