  Percona XtraBackup / PXB-703

LP #1372531: intermittent assertion failure in compact backup prepare

    Details

    • Type: Bug
    • Status: On Hold
    • Priority: High
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None

      Description

      Reported in Launchpad by David Bennett, last updated 27-04-2015 06:06:39

      While running SST MTR tests against production builds, the XtraBackup prepare stage intermittently fails with a SIGABRT while rebuilding an index during a PXC SST transfer that uses XtraBackup, causing the SST transfer to fail.

      ==== Platform: CentOS 5 x86_64

      ==== Binaries (from Jenkins production builds):

      Percona-XtraDB-Cluster-5.6.20-rel68.0-25.7.886.Linux.x86_64.tar.gz (from Jenkins)
      percona-xtrabackup-2.2.4-5022-debug-Linux-x64_64.tar.gz (revno 5022)
      (PXB compiled with -DCMAKE_CXX_FLAGS=-m64 -DCMAKE_C_FLAGS=-m64 -DWITH_DEBUG=1)
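
      For reference, a debug build with those flags would look roughly like the following out-of-tree CMake invocation (a sketch only; the source directory name and any other Jenkins job parameters are illustrative, not taken from this report):

      # Sketch of a debug build using the flags quoted above (paths illustrative)
      cmake ../percona-xtrabackup-2.2.4 -DCMAKE_C_FLAGS=-m64 -DCMAKE_CXX_FLAGS=-m64 -DWITH_DEBUG=1
      make -j4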

      ==== MTR test: t/xb_galera_sst_advanced.sh

      ==== Configuration

      1. parallel+compact+progressfile+time (see the backup command sketch after this block)

        [xtrabackup]
        parallel=4
        compact

        [sst]
        time=1
        streamfmt=xbstream
        progress=/tmp/progress2-conf6.log
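
      The [xtrabackup] section is read by innobackupex itself through the defaults file, so with this configuration the donor-side backup is effectively taken with options equivalent to the following (a sketch; the defaults-file path and stream target are illustrative, not taken from the test logs):

      # Approximate donor-side backup invocation for the configuration above
      innobackupex --defaults-file=/path/to/my.cnf --parallel=4 --compact \
          --stream=xbstream /tmp/backup-target > backup.xbstream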

      ==== Pertinent MTR test output (xb_galera_sst_advancedconf6)

      ...
      2014-09-21 22:06:54: run.sh: Made 29 attempts to connect to server
      2014-09-21 22:06:55: run.sh: Made 30 attempts to connect to server
      2014-09-21 22:06:56: run.sh: Server process PID=2749 died.
      2014-09-21 22:06:56: run.sh: Can't start the server. Server log (if exists):
      2014-09-21 22:06:56: run.sh: ----------------
      2014-09-21 22:06:56: run.sh: Error log for server with id: 1
      ...
      WSREP_SST: [INFO] Evaluating innobackupex --defaults-file=/home/dbennett/work/2/Percona-XtraDB-Cluster-5.6.20-rel68.0-25.7.886.Linux.x86_64/percona-xtradb-cluster-tests/var/w1/var901/my.cnf --apply-log $rebuildcmd ${DATA} &>/home/dbennett/logs/innobackup.14092122061411351585.prepare.log (20140921 22:06:43.525)
      WSREP_SST: [ERROR] Cleanup after exit with status:1 (20140921 22:06:46.433)
      WSREP_SST: [INFO] Removing the sst_in_progress file (20140921 22:06:46.435)
      2014-09-21 22:06:46 2749 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup --role 'joiner' --address '127.0.0.1:19320' --auth 'root:password' --datadir '/home/dbennett/work/2/Percona-XtraDB-Cluster-5.6.20-rel68.0-25.7.886.Linux.x86_64/percona-xtradb-cluster-tests/var/w1/var901/data/' --defaults-file '/home/dbennett/work/2/Percona-XtraDB-Cluster-5.6.20-rel68.0-25.7.886.Linux.x86_64/percona-xtradb-cluster-tests/var/w1/var901/my.cnf' --parent '2749' '' : 1 (Operation not permitted)
      2014-09-21 22:06:46 2749 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
      2014-09-21 22:06:46 2749 [ERROR] WSREP: SST failed: 1 (Operation not permitted)
      2014-09-21 22:06:46 2749 [ERROR] Aborting
      ...
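
      The prepare step quoted above is where the crash occurs; for a compact backup wsrep_sst_xtrabackup expands $rebuildcmd to --rebuild-indexes, so the failure can presumably be reproduced outside the SST harness with a command of this shape (the data directory path is illustrative):

      # Sketch of the failing prepare/rebuild step, run directly on the joiner's copy of the backup
      # ($rebuildcmd in the log expands to --rebuild-indexes for compact backups)
      innobackupex --defaults-file=/path/to/my.cnf --apply-log --rebuild-indexes /path/to/sst/datadir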

      ==== Pertinent xtrabackup stderr log output

      ...
      [01] Checking if there are indexes to rebuild in table sbtest/sbtest1 (space id: 6)
      [01] Found index k_1
      [01] Rebuilding 1 index(es).
      2014-09-21 22:06:46 7ff4f37fe940 InnoDB: Assertion failure in thread 140690033994048 in file ut0byte.ic line 109
      InnoDB: Failing assertion: ptr
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: http://dev.mysql.com/doc/refman/5.6/en/forcing-innodb-recovery.html
      InnoDB: about forcing recovery.
      02:06:46 UTC - xtrabackup got signal 6 ;
      This could be because you hit a bug or data is corrupted.
      This error can also be caused by malfunctioning hardware.
      We will try our best to scrape up some info that will hopefully help
      diagnose the problem, but since we have already crashed,
      something is definitely wrong and this may fail.

      Thread pointer: 0x0
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0 thread_stack 0x10000
      xtrabackup(my_print_stacktrace+0x32) [0xba05cd]
      xtrabackup(handle_fatal_signal+0x335) [0xb57021]
      /lib64/libpthread.so.0 [0x7ff507fd3ca0]
      /lib64/libc.so.6(gsignal+0x35) [0x7ff50684ffc5]
      /lib64/libc.so.6(abort+0x110) [0x7ff506851a70]
      xtrabackup [0x7708ed]
      xtrabackup [0x77091f]
      xtrabackup [0x771add]
      xtrabackup [0x77c7d0]
      xtrabackup(row_merge_build_indexes(trx_t*, dict_table_t*, dict_table_t*, bool, dict_index_t*, unsigned long const, unsigned long, TABLE*, dtuple_t const*, unsigned long const*, unsigned long, ib_sequence_t&)+0x419) [0x77dad5]
      xtrabackup [0x5f2a46]
      xtrabackup [0x5f2d86]
      /lib64/libpthread.so.0 [0x7ff507fcb83d]
      /lib64/libc.so.6(clone+0x6d) [0x7ff5068f4fcd]
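
      Several frames between abort() and row_merge_build_indexes() above are printed without symbols; since this is a debug build, they should map back to source lines with addr2line run against the same xtrabackup binary (addresses copied from the backtrace above):

      # Resolve the unsymbolized frames from the backtrace against the debug binary
      addr2line -C -f -e ./xtrabackup 0x7708ed 0x77091f 0x771add 0x77c7d0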

    People

    • Assignee: Unassigned
    • Reporter: lpjirasync lpjirasync (Inactive)
    • Votes: 0
    • Watchers: 1