Percona Operator for MongoDB / K8SPSMDB-746

PSA setup will not work automatically with current main branch

Details

    • Type: Bug
    • Status: Done
    • Priority: Medium
    • Resolution: Fixed
    • Affects Version: 1.12.0
    • Fix Version: 1.13.0

    Description

      Currently, when deploying the operator based on the perconalab/percona-server-mongodb-operator:main image, it fails to automatically initialize a PSA (Primary-Secondary-Arbiter) cluster. This is the custom resource I am trying to use:

      apiVersion: psmdb.percona.com/v1-13-0
      kind: PerconaServerMongoDB
      metadata: 
        name: percona-cluster
        namespace: percona
        finalizers: 
          - delete-psmdb-pods-in-order
          - delete-psmdb-pvc
      spec: 
        crVersion: 1.13.0
        image: percona/percona-server-mongodb:5.0.7-6
        imagePullPolicy: Always
        allowUnsafeConfigurations: true
        updateStrategy: SmartUpdate
        upgradeOptions: 
          versionServiceEndpoint: https://check.percona.com
          apply: disabled
          schedule: "0 2 * * *"
          setFCV: false
        secrets: 
          users: percona-cluster-secrets
          encryptionKey: percona-cluster-encryption-key
        pmm: 
          enabled: true
          image: percona/pmm-client:2.27.0
          serverHost: monitoring-service
        replsets: 
          - name: rs0
            size: 2
            tolerations: 
            - key: "node-role.kubernetes.io/mongodb"
              operator: "Exists"
              effect: "NoExecute"
            expose: 
              enabled: true
              exposeType: ClusterIP
            resources: 
              limits: 
                cpu: "8"
                memory: "64G"
              requests: 
                cpu: "6"
                memory: "48G"
            volumeSpec: 
              persistentVolumeClaim: 
                storageClassName: mongodb-storage
                resources: 
                  requests: 
                    storage: 200Gi
            nonvoting: 
              enabled: false
              size: 0
            arbiter: 
              enabled: true
              size: 1
              volumeSpec: 
                emptyDir: {}
              affinity: 
                antiAffinityTopologyKey: "kubernetes.io/hostname"
        backup: 
          enabled: false
          image: perconalab/percona-server-mongodb-operator:main-backup
          serviceAccountName: percona-server-mongodb-operator
          pitr: 
            enabled: false
      

      This will fail with the following error message:

      2022-08-18T09:52:32.098Z	ERROR	controller_psmdb	failed to reconcile cluster	{"Request.Namespace": "percona", "Request.Name": "percona-cluster", "replset": "rs0", "error": "fix tags: write mongo config: replSetReconfig: (NewReplicaSetConfigurationIncompatible) Rejecting reconfig where the new config has a PSA topology and the secondary is electable, but the old config contains only one writable node. Refer to https://docs.mongodb.com/manual/reference/method/rs.reconfigForPSASet/ for next steps on reconfiguring a PSA set.", "errorVerbose": "(NewReplicaSetConfigurationIncompatible) Rejecting reconfig where the new config has a PSA topology and the secondary is electable, but the old config contains only one writable node. Refer to https://docs.mongodb.com/manual/reference/method/rs.reconfigForPSASet/ for next steps on reconfiguring a PSA set.\nreplSetReconfig\ngithub.com/percona/percona-server-mongodb-operator/pkg/psmdb/mongo.WriteConfig\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/psmdb/mongo/mongo.go:246\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileCluster\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/mgo.go:292\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:495\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\n
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571\nfix tags: write mongo config\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileCluster\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/mgo.go:293\ngithub.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:495\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571"}
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
      	/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:114
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
      	/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:311
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
      	/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
      	/go/src/github.com/percona/percona-server-mongodb-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227
      

      I found out that the operator tries to set the secondary node's votes and priority to 1 and 2, respectively, in a single reconfig. MongoDB refuses to apply both changes simultaneously. When I set the votes first and then the priority in a separate reconfig, everything works automatically and the custom resource reaches the "ready" status.

      Could the operator apply these changes one by one as well? Or is there a better way to go about this?
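For reference, the two-step sequence described above can be sketched as follows. This only models the replica set config document locally (hostnames and member indexes are made up for illustration); in practice each step would be a separate replSetReconfig command sent to the primary, as described in the rs.reconfigForPSASet() page linked in the error message.

```javascript
// Illustrative sketch of the two-step PSA reconfig (hostnames/indexes are
// hypothetical). We model the replica set config document to show the order
// in which the votes and priority fields must change.
const cfg = {
  _id: "rs0",
  version: 1,
  members: [
    { _id: 0, host: "mongo-0:27017", votes: 1, priority: 1 },                    // primary
    { _id: 1, host: "mongo-1:27017", votes: 0, priority: 0 },                    // new secondary
    { _id: 2, host: "mongo-2:27017", votes: 1, priority: 0, arbiterOnly: true }, // arbiter
  ],
};

// Step 1: grant the secondary a vote while it stays at priority 0, so it is
// not yet electable. MongoDB accepts this reconfig.
cfg.members[1].votes = 1;
cfg.version += 1; // each replSetReconfig bumps the config version
// -> db.adminCommand({ replSetReconfig: cfg })

// Step 2: in a separate reconfig, raise the priority to make the node electable.
cfg.members[1].priority = 2;
cfg.version += 1;
// -> db.adminCommand({ replSetReconfig: cfg })
```

Doing both field changes in one reconfig appears to be exactly what triggers the NewReplicaSetConfigurationIncompatible error above.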
       


          People

            ege.gunes Ege Gunes
            pkeuter Peter Keuter
            Votes: 0
            Watchers: 2
