Percona Operator for MongoDB
K8SPSMDB-846

Scaling down results in node as Secondary

Details

    • Type: Bug
    • Status: Done
    • Priority: Medium
    • Resolution: Fixed
    • Affects Version: 1.13.0
    • Fix Version: 1.14.0

    Description

      If the Replica Set is exposed via ClusterIP, scaling down to 1 leaves the remaining node as Secondary; it does not automatically transition to Primary.

      How to reproduce:

      deploy/cr.yaml configuration:

        replsets:
      ...
          expose:
            enabled: true
            exposeType: ClusterIP
      ...
        sharding:
          enabled: false
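For context, the excerpt above corresponds to a minimal deploy/cr.yaml along these lines (a sketch; the apiVersion, metadata, and initial size are assumptions, not taken from the report):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name
spec:
  replsets:
  - name: rs0
    size: 3
    # Expose each replica set member through its own ClusterIP service.
    expose:
      enabled: true
      exposeType: ClusterIP
  # Plain replica set: no mongos or config server replica set.
  sharding:
    enabled: false
```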


      The cluster is ready:

      $ kubectl get perconaservermongodb.psmdb.percona.com
      NAME              ENDPOINT                                      STATUS   AGE
      my-cluster-name   my-cluster-name-rs0.mongo.svc.cluster.local   ready    106s


      Scaling down to 1:

        allowUnsafeConfigurations: true
      ...
        replsets:
        - name: rs0
          size: 1
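Equivalently (a sketch, not from the report), the same scale-down can be applied without editing the file, using the CRD short name psmdb and a merge patch; note that this merge patch replaces the whole replsets list:

```shell
# Allow unsafe configurations (a single-member replica set), then scale rs0 to 1.
# The cluster name "my-cluster-name" is taken from the report's output.
kubectl patch psmdb my-cluster-name --type=merge -p '{
  "spec": {
    "allowUnsafeConfigurations": true,
    "replsets": [{"name": "rs0", "size": 1}]
  }
}'
```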
      


      The cluster scales down to 1:

      $ kubectl get pods
      NAME                                              READY   STATUS    RESTARTS       AGE
      my-cluster-name-rs0-0                             2/2     Running   1 (2m3s ago)   3m13s
      percona-server-mongodb-operator-8d76677f7-rdnxd   1/1     Running   0              39m


      But the cluster is stuck in the initializing state:

      $ kubectl get perconaservermongodb.psmdb.percona.com
      NAME              ENDPOINT                                      STATUS         AGE
      my-cluster-name   my-cluster-name-rs0.mongo.svc.cluster.local   initializing   3m37s
      $ kubectl get perconaservermongodb.psmdb.percona.com
      NAME              ENDPOINT                                      STATUS         AGE
      my-cluster-name   my-cluster-name-rs0.mongo.svc.cluster.local   initializing   10m
      


      The remaining node shows as Secondary:

      $ kubectl exec -it my-cluster-name-rs0-0 -c mongod -- mongo -u clusterAdmin -p clusterAdmin123456 --eval "db.runCommand('ismaster')"
      ...
          "ismaster" : false,
          "secondary" : true,
          "tags" : {
              "serviceName" : "my-cluster-name",
              "podName" : "my-cluster-name-rs0-0"


      People

        Assignee: Unassigned
        Reporter: Juan Arruti