Percona Server for MySQL / PS-6150

Executing SHOW ENGINE INNODB STATUS to show locked mutexes could cause a server exit.


    • Type: Bug
    • Status: Done
    • Priority: Low
    • Resolution: Fixed
    • Affects Version/s: 5.7.27-30, 8.0.18-9
    • Fix Version/s: 5.7.29-32, 8.0.19-10
    • Labels: None


      MySQL crashed with the stack trace below:


      2019-11-22T12:26:55.759278Z 8748327 [Note] Start binlog_dump to master_thread_id(8748327) slave_server(10027034), pos(, 4)
      12:27:30 UTC - mysqld got signal 11 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
      Attempting to collect some information that could help diagnose the problem.
      As this is a crash and something is definitely wrong, the information
      collection process might fail.
      Please help us make Percona Server better by reporting any
      bugs at https://bugs.percona.com/
      key_buffer_size=8388608
      It is possible that mysqld could use up to 
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1202862 K  bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
      Thread pointer: 0x7f16b0271410
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 7f1639d30e80 thread_stack 0x40000
      /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f1cc67e5d0f]
      Trying to get some variables.
      Some pointers may be invalid and cause the dump to abort.
      Query (7f16b017a928): SHOW ENGINE INNODB STATUS
      Connection ID (thread ID): 8748658
      Status: NOT_KILLED
      You may download the Percona Server operations manual by visiting
      http://www.percona.com/software/percona-server/. You may find information
      in the manual which will help you identify the cause of the crash.


      And this is the affected piece of code:


      /** String representation of the filename and line number where the
      latch was created
      @param[in]      id              Latch ID
      @param[in]      created         Filename and line number where it was created
      @return the string representation */
      std::string sync_mutex_to_string(latch_id_t id, const std::string &created) {
        std::ostringstream msg;
        msg << "Mutex " << sync_latch_get_name(id) << " "
            << "created " << created;
        return (msg.str());
      }

      It seems that the crash is caused by the lack of mutex protection around the created variable. The string is passed by reference, but another thread can destroy the underlying object; by the time this function reads it, the memory is gone, which triggers the SIGSEGV (signal 11).
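      A minimal sketch of the safer pattern described above (names and the latch registry are illustrative, not the actual InnoDB fix): the reader copies the "created" string while holding the same mutex that guards the writer, so the formatting code only ever touches memory it owns.

      ```cpp
      #include <iostream>
      #include <map>
      #include <mutex>
      #include <sstream>
      #include <string>

      std::mutex latch_meta_mutex;               // guards latch_created below
      std::map<int, std::string> latch_created;  // latch id -> "file:line"

      // Writer side: records where a latch was created.
      void register_latch(int id, const std::string &created) {
        std::lock_guard<std::mutex> guard(latch_meta_mutex);
        latch_created[id] = created;
      }

      // Reader side: copy the string under the lock, then format the copy,
      // so no other thread can invalidate it mid-read.
      std::string sync_mutex_to_string(int id) {
        std::string created_copy;
        {
          std::lock_guard<std::mutex> guard(latch_meta_mutex);
          created_copy = latch_created[id];
        }
        std::ostringstream msg;
        msg << "Mutex latch_" << id << " created " << created_copy;
        return msg.str();
      }

      int main() {
        register_latch(42, "sync0debug.cc:123");
        std::cout << sync_mutex_to_string(42) << std::endl;
        return 0;
      }
      ```

      The key design choice is taking the copy inside a short critical section rather than holding the lock while streaming into the ostringstream, which keeps the lock hold time minimal.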

      It seems to happen only for a very specific latch, since I cannot reproduce it even with the debug build:

      master [localhost] {msandbox} ((none)) > select @@version;
      +------------------------------+
      | @@version                    |
      +------------------------------+
      | 5.7.27-30-debug-log-valgrind |
      +------------------------------+
      1 row in set (0.00 sec)



        Assignee: Kamil Holubicki (kamil.holubicki)
        Reporter: Vinicius Grippa (vinicius.grippa)
        Votes: 3
        Watchers: 12



        Time Tracking

          Original Estimate: Not Specified
          Remaining Estimate: 0 minutes
          Time Spent: 3d 4h 10m
