Sunday, February 28, 2021

5 Years of Working for MariaDB Corporation

March 1, 2016, was my first day of working for MariaDB Corporation. It means that I have worked for them for full 5 years in a row! It's the longest period I've ever spent working in the same company. I worked for more than 7 years in the MySQL Bugs Verification team of MySQL Support, but formally it was for 3 different companies over that period: MySQL AB, Sun and Oracle. So tonight, after 2 weekend on-call shifts in a row, I want to summarize what I lost and what I won over these 5 years.

I never planned to leave Percona after just a bit more than 3 years, but I had to do something to either fix the direction of their services business development after April 2015, or at least to make some points that would be noticed and remembered. I made them by January 26, 2016, and then had to move on.

So, what I lost after leaving Percona:

  • In Percona I worked in the best Support team in the industry at the time!
  • In Percona I was proud to be working for a company that did the right things, for its business around open source software, for the Community and for customers.
  • In Percona I was involved in decision making, I influenced the bug fixing process and priorities, and was a kind of authority on everything I cared to state or do.
  • I had a really good salary, regular bonuses, longer vacations or otherwise properly compensated extra working time, and all the opportunities to become a public person in the MySQL Community.
  • I spent a lot of time working with Peter Zaitsev and really brilliant engineers from Development, QA and Support.
  • After a few initial months of getting used to the amount and style of work, and then until April 14 or so, 2015, it was a really comfortable place for me to work at and do the things I like and am good at.

I lost most of the above when I left. No more decision making of any kind (it was my decision to avoid that while joining MariaDB, to begin with). No more bug processing or prioritizing. No more Percona Live conferences till 2019, when I finally managed to clarify the problems I had with my (cancelled) participation in Percona Live Europe 2015. Nobody asked me to blog about anything from 2016 until the beginning of 2020 (when the MariaDB Foundation got really interested in my public performances). Joining MariaDB Corporation made me a suspicious member of the MySQL Community and eventually forced me to leave Planet MySQL, where my posts were not appreciated.

It takes just one short tweet from me sharing a MySQL bug number to have some of these bugs immediately hidden from the Community and made private. It looks like people suspect I have some secret agenda and mission from MariaDB Corporation, while I have none related to MySQL, neither to the software nor to bugs in it - I do this in my own time and based on my own views that are not influenced by anyone...

Now what I gained from joining the MariaDB Support team:

  • I still work in the best Support team in the industry, with new (to me) brilliant people, some of my former colleagues in MySQL and some of those I worked with in Percona and managed to get back into my team now in MariaDB Corporation.
  • I work harder than ever, at least twice as much as I did in Percona (at least speaking about real life customer issues). The issues are more interesting, complex and challenging in general, and cover everything from MySQL 4.0.x to NDB cluster, MySQL 8.0.x and all versions of MariaDB Server, MaxScale and Connectors, and Galera clusters, with everything MySQL-related that Percona releases in between! This work has been properly compensated in recent years.
  • Yes, I do work on Galera issues a lot, read megabytes of logs and make sense of everything Galera. Who could have imagined, back in 2015, that I'd go that way?
  • I work closely and directly with the best developers, from Codership's Galera team and MariaDB Foundation developers to Sergey Golubchik, Marko Mäkelä, Vladislav Vaintroub, Elena Stepanova, the good old MySQL Optimizer team (now MariaDB's), and Monty himself, and more... We chat, we talk and discuss technical topics almost every day! I do some work they ask for, I build more things from source than ever. It's really great and had almost never been the case before I joined MariaDB. I love this part of the corporate culture here.
  • My blog that you read now is way more popular than ever before 2016. At good times in 2017 I had more than 1000 reads per day, for weeks and months.
  • I present at conferences way more often than I ever did while in Percona, from FOSDEM to Percona Live and everything MariaDB.
  • My influence and impact on the MySQL Community increased. I was declared MySQL Community Contributor of the Year 2019. As often happens, it's easier to make an impact when you are an outsider. They can not ignore you, even if that's only because you are considered an "asshole" and an "enemy" with a "corporate agenda", for whatever reasons.

So far I do not regret the decision I made in favor of MariaDB back in 2016, even though it forced me to put up with or ignore many things I don't like at my current company. I am sorry that back in 2010 Monty and Kay were just 10 days too late to get me to the SkySQL of those times. I had already signed the contract with Oracle, and there, until 2012, I really was mostly wasting my time, unfortunately...

That switch formally happened on March 1, 2016. It was a good decision.

Just a few random notes at the end of a hard 7-day week of work. I hope you will not blame me too much for them. I also hope I'll still have my job tomorrow :)

Sunday, February 14, 2021

What about the "Fun with Bugs" series on YouTube?

My "Fun with Bugs" series of blog posts about interesting or badly handled MySQL bug reports ended more than 7 months ago. The time had come for that. But I honestly miss that media for bitching about something wrong in MySQL once in a while...

The year of the COVID-19 pandemic, with video conferences replacing normal offline ones, forced me to start recording videos, and I came to like the process so much that I now work on my wife's channel content as a hobby. I've created my own channel as well, for uploading some draft/bad/long/extended videos recorded in the process of working on my online talks:


Being fluent enough with recording videos at home (using different software tools and cameras) and publishing them on YouTube, I now wonder if it makes sense to turn this activity into a regular and MySQL-focused one. My next talk(s) will be submitted to Percona Live 2021, but that means they may go live only in May, and I'd like to be on screens earlier than that.

So, I wonder: should I maybe share regular video recordings, let's say once a week every Tuesday, up to 5 minutes long at most, and devoted to some MySQL-related topic? What topics would you like me to cover? Would you mind if it were a five-minute talk about a recent MySQL bug report, or a few of them, either interesting in general or badly handled by my former Oracle colleagues? Something else to better spend megabytes of video on? Leave it to younger, more attractive and more experienced speakers? Keep writing here, or stop bitching about the bugs once and for all?

I am waiting for your comments on this blog post and in the social media where I'll share it, till March 5, 2021. Then I'll decide how to proceed with this regular YouTube videos idea.

Friday, February 5, 2021

On Upcoming FOSDEM 2021

FOSDEM 2021 starts tomorrow. This time I have 3 talks to present. Here they are, in the order of appearance, with links to my related blog posts:

  1. "Upgrading to a newer major version of MariaDB" - it should start at 10:30 on Saturday in the MariaDB devroom and is mostly devoted to mysql_upgrade. Related blog posts are:
  2. "Monitoring MariaDB Server with bpftrace on Linux" - it should start at 12:40 on Sunday in the Monitoring and Observability devroom and is devoted to bpftrace basics. A lot more information is provided in recent blog posts:
  3. "Linux /proc filesystem for MySQL DBAs" - this talk should start at 15:00 on Sunday in MySQL devroom and is devoted to a totally different (comparing to dynamic tracing I am so big fan of) way to get insights about MySQL internal working, waits etc - sampling of files in /proc file system. I had written 3 blog posts on the topic:

Slides are uploaded to the talks' pages and will be shared via SlideShare. Draft, longer versions of the talks that I recorded but the FOSDEM system did not accept will also be shared on my YouTube channel soon.

The usual view on my way to the FOSDEM ULB site... I miss Brussels!

Other talks in these devrooms that I consider interesting (I will not be able to attend them all live though):

See you there! FOSDEM was a real driver of my more or less advanced performance studies this year.

Saturday, January 30, 2021

Playing with recent bpftrace and MariaDB 10.5 on Fedora - Part V, Proper Way To Summarize Time Spent per Stack

In my previous post in this series I was tracing the __lll_lock_wait() function and measuring the time spent in it per unique MariaDB server stack trace. I got results that did not look totally wrong or useless and even presented them as a flame graph, but it took a long time, and during all that time, for minutes, the sysbench test TPS and QPS results were notably smaller. So, it was useful tracing, but surely not a low impact one.

I already stated before that the approach used is surely not optimal. There should be some better way to summarize the stack traces without such a performance impact. Today I continued to work on this problem. First of all, I finally tried to measure exactly how much time I had to wait for the results from the bpftrace command:

[openxs@fc31 ~]$ time sudo ./lll_lock_wait.bt 2>/dev/null >/tmp/lll_lock_perf_stacks1.txt

real    9m51.904s
user    8m25.576s
sys     1m8.057s
[openxs@fc31 ~]$ ls -l /tmp/lll_lock_perf_stacks1.txt
-rw-rw-r--. 1 openxs openxs 1564291 Jan 30 11:58 /tmp/lll_lock_perf_stacks1.txt

So, the test was running for 300 seconds (5 minutes), and the monitoring results (covering about 20 seconds of tracing) came 5 minutes later! This is definitely NOT acceptable for any regular use.

I started to check things and test more. I noted in the Reference Guide (that I've read entirely a couple of times yesterday and today) that the ustack() function of bpftrace can accept a second argument, the number of frames to show in the stack. Based on the previous flame graphs, the most interesting stacks were not long, so I decided to check if 8 or even just 5 frames may still be enough for useful results while reducing the performance impact. I changed the code here:

uprobe:/lib64/libpthread.so.0:__lll_lock_wait
/comm == "mariadbd"/
{
    @start[tid] = nsecs;
    @tidstack[tid] = ustack(perf, 5);    /* I also tried 8 frames here */
}

and got these run times for 8 and 5 frames:

[openxs@fc31 ~]$ time sudo ./lll_lock_wait.bt 2>/dev/null >/tmp/lll_lock_perf_stacks1.txt

real    9m39.195s
user    8m9.188s
sys     1m8.430s

[openxs@fc31 ~]$ ls -l /tmp/lll_lock_perf_stacks1.txt
-rw-rw-r--. 1 openxs openxs 1085019 Jan 30 12:12 /tmp/lll_lock_perf_stacks1.txt

[openxs@fc31 ~]$ time sudo ./lll_lock_wait.bt 2>/dev/null >/tmp/lll_lock_perf_stacks2.txt

real    6m6.801s
user    5m5.537s
sys     0m42.226s

[openxs@fc31 ~]$ ls -l /tmp/lll_lock_perf_stacks2.txt
-rw-rw-r--. 1 openxs openxs 728072 Jan 30 12:19 /tmp/lll_lock_perf_stacks2.txt

So, reducing the stack to 5 frames made a notable difference, and the QPS drop was a bit smaller, but still, waiting for 5 minutes for the results is not an option in general.

I had not found really powerful functions to replace awk and collapse stacks inside the bpftrace program, so I decided to export them "raw", on every hit of the uretprobe, and then process them with the usual Linux tools. I'll skip the steps, tests and failures and just show the resulting new program for the task:

[openxs@fc31 ~]$ cat lll_lock_wait2.bt
#!/usr/bin/env bpftrace

BEGIN
{
/*    printf("Tracing time from __lll_lock_wait, Ctrl-C to stop\n"); */
}

interval:s:$1 { exit(); }

uprobe:/lib64/libpthread.so.0:__lll_lock_wait
/comm == "mariadbd"/
{
    @start[tid] = nsecs;
    @tidstack[tid] = ustack(perf);    
}

uretprobe:/lib64/libpthread.so.0:__lll_lock_wait
/comm == "mariadbd" && @start[tid] != 0/
{
    $now = nsecs;
    $time = $now - @start[tid];
    @futexstack[@tidstack[tid]] += $time;
    print(@futexstack);
    delete(@futexstack[@tidstack[tid]]);
/*
    printf("Thread: %u, time: %d\n", tid, $time);
*/
    delete(@start[tid]);
    delete(@tidstack[tid]);
}

END
{
    clear(@start);
    clear(@tidstack);
    clear(@futexstack);
}

The changes vs the previous version are the interval probe, the print() and delete() calls for @futexstack in the uretprobe, and clearing @futexstack in the END probe. Basically I print and then delete every stack collected, as the function call ends, along with its time, to further sum them up externally. The print() function used is the following:

print(@x[, top [, div]]) - Print the map, optionally the top entries only and with a divisor

I've also added the interval probe at the beginning, referring to the first program argument as the number of seconds to run. It just forces exit after N seconds (0 by default means immediate exit; a wrong format will be reported as an error).
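
A quick usage sketch (the 30 seconds and the output file name below are just examples I made up):

sudo ./lll_lock_wait2.bt 30 2>/dev/null >/tmp/lll_lock_30s.txt   # trace for 30 seconds, then exit
sudo ./lll_lock_wait2.bt 2>/dev/null                             # no argument: $1 is 0, immediate exit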

Now if I run it this way while the test is running:

[openxs@fc31 ~]$ time sudo ./lll_lock_wait2.bt 10 2>/dev/null | awk '
BEGIN { s = ""; }
/^@futexstack\[\]/ { s = ""; }
/^@futexstack/ { s = ""; }
/^\t/ { if (index($2, "(") > 0) {targ = substr($2, 1, index($2, "(") - 1)} else {targ = substr($2, 1, index($2, "+") - 1)} ; if (s != "") { s = s ";" targ } else { s = targ } }
/^]/ { print $2, s }
' > /tmp/collapsed_lll_lock_v2_raw.txt


real    0m10.646s
user    0m5.648s
sys     0m0.843s
[openxs@fc31 ~]$ ls -l /tmp/collapsed_lll_lock_v2_raw.txt
-rw-rw-r--. 1 openxs openxs 1566 Jan 30 13:12 /tmp/collapsed_lll_lock_v2_raw.txt
[openxs@fc31 ~]$ cat /tmp/collapsed_lll_lock_v2_raw.txt
15531 __lll_lock_wait;
1233833 __lll_lock_wait;
10638 __lll_lock_wait;buf_read_page_background;btr_cur_prefetch_siblings;btr_cur_optimistic_delete_func;row_purge_remove_sec_if_poss_leaf;row_purge_record_func;row_purge_step;que_run_threads;purge_worker_callback;tpool::task_group::execute;tpool::thread_pool_generic::worker_main;;;;
7170 __lll_lock_wait;tpool::thread_pool_generic::worker_main;;;;
273330 __lll_lock_wait;tdc_acquire_share;open_table;open_tables;open_and_lock_tables;execute_sqlcom_select;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread
1193083 __lll_lock_wait;trx_undo_report_row_operation;btr_cur_update_in_place;btr_cur_optimistic_update;row_upd_clust_rec;row_upd_clust_step;row_upd_step;row_update_for_mysql;ha_innobase::update_row;handler::ha_update_row;mysql_update;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread
183353 __lll_lock_wait;
43231 __lll_lock_wait;MDL_lock::remove_ticket;MDL_context::release_lock;ha_commit_trans;trans_commit;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread

I get very condensed output almost immediately! I can afford running it for 60 seconds without much trouble:

[openxs@fc31 ~]$ time sudo ./lll_lock_wait2.bt 60 2>/dev/null | awk '
BEGIN { s = ""; }
/^@futexstack\[\]/ { s = ""; }
/^@futexstack/ { s = ""; }
/^\t/ { if (index($2, "(") > 0) {targ = substr($2, 1, index($2, "(") - 1)} else {targ = substr($2, 1, index($2, "+") - 1)} ; if (s != "") { s = s ";" targ } else { s = targ } }
/^]/ { print $2, s }
' > /tmp/collapsed_lll_lock_v2_raw.txt


real    1m0.959s
user    0m44.146s
sys     0m6.126s


[openxs@fc31 ~]$ ls -l /tmp/collapsed_lll_lock_v2_raw.txt
-rw-rw-r--. 1 openxs openxs 12128 Jan 30 13:17 /tmp/collapsed_lll_lock_v2_raw.txt

The impact of these 60 seconds is visible in the sysbench output:

[openxs@fc31 maria10.5]$ sysbench oltp_read_write --db-driver=mysql --tables=5 --table-size=100000 --mysql-user=openxs --mysql-socket=/tmp/mariadb.sock --mysql-db=sbtest --threads=32 --report-interval=10 --time=300 run
sysbench 1.1.0-174f3aa (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 32
Report intermediate results every 10 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 10s ] thds: 32 tps: 625.68 qps: 12567.66 (r/w/o: 8800.49/2512.81/1254.36) lat (ms,95%): 282.25 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 32 tps: 891.51 qps: 17825.69 (r/w/o: 12479.31/3563.26/1783.13) lat (ms,95%): 142.39 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 32 tps: 512.09 qps: 10230.44 (r/w/o: 7163.62/2042.55/1024.27) lat (ms,95%): 287.38 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 32 tps: 369.10 qps: 7390.48 (r/w/o: 5171.39/1480.90/738.20) lat (ms,95%): 350.33 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 32 tps: 417.83 qps: 8347.73 (r/w/o: 5845.14/1667.03/835.56) lat (ms,95%): 390.30 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 32 tps: 484.03 qps: 9687.55 (r/w/o: 6782.09/1937.31/968.16) lat (ms,95%): 292.60 err/s: 0.00 reconn/s: 0.00
[ 70s ] thds: 32 tps: 431.98 qps: 8640.35 (r/w/o: 6049.38/1727.01/863.95) lat (ms,95%): 344.08 err/s: 0.00 reconn/s: 0.00
[ 80s ] thds: 32 tps: 354.60 qps: 7097.39 (r/w/o: 4968.86/1419.32/709.21) lat (ms,95%): 419.45 err/s: 0.00 reconn/s: 0.00
[ 90s ] thds: 32 tps: 380.98 qps: 7600.82 (r/w/o: 5317.17/1521.80/761.85) lat (ms,95%): 520.62 err/s: 0.00 reconn/s: 0.00
[ 100s ] thds: 32 tps: 423.01 qps: 8467.99 (r/w/o: 5928.71/1693.16/846.13) lat (ms,95%): 397.39 err/s: 0.00 reconn/s: 0.00
[ 110s ] thds: 32 tps: 475.66 qps: 9525.07 (r/w/o: 6669.22/1904.53/951.32) lat (ms,95%): 344.08 err/s: 0.00 reconn/s: 0.00
[ 120s ] thds: 32 tps: 409.13 qps: 8171.48 (r/w/o: 5718.38/1634.94/818.17) lat (ms,95%): 458.96 err/s: 0.00 reconn/s: 0.00
[ 130s ] thds: 32 tps: 190.91 qps: 3826.72 (r/w/o: 2682.19/762.62/381.91) lat (ms,95%): 450.77 err/s: 0.00 reconn/s: 0.00
[ 140s ] thds: 32 tps: 611.40 qps: 12229.99 (r/w/o: 8558.99/2448.20/1222.80) lat (ms,95%): 223.34 err/s: 0.00 reconn/s: 0.00
[ 150s ] thds: 32 tps: 581.99 qps: 11639.19 (r/w/o: 8148.53/2326.68/1163.99) lat (ms,95%): 287.38 err/s: 0.00 reconn/s: 0.00
[ 160s ] thds: 32 tps: 653.21 qps: 13058.51 (r/w/o: 9139.57/2612.52/1306.41) lat (ms,95%): 257.95 err/s: 0.00 reconn/s: 0.00
[ 170s ] thds: 32 tps: 561.87 qps: 11231.27 (r/w/o: 7860.53/2246.99/1123.75) lat (ms,95%): 320.17 err/s: 0.00 reconn/s: 0.00
[ 180s ] thds: 32 tps: 625.66 qps: 12526.32 (r/w/o: 8770.19/2504.82/1251.31) lat (ms,95%): 235.74 err/s: 0.00 reconn/s: 0.00
[ 190s ] thds: 32 tps: 554.50 qps: 11088.01 (r/w/o: 7760.61/2218.40/1109.00) lat (ms,95%): 325.98 err/s: 0.00 reconn/s: 0.00
[ 200s ] thds: 32 tps: 607.10 qps: 12143.89 (r/w/o: 8501.49/2428.20/1214.20) lat (ms,95%): 344.08 err/s: 0.00 reconn/s: 0.00
[ 210s ] thds: 32 tps: 424.17 qps: 8467.63 (r/w/o: 5925.43/1693.87/848.33) lat (ms,95%): 397.39 err/s: 0.00 reconn/s: 0.00
[ 220s ] thds: 32 tps: 466.80 qps: 9335.03 (r/w/o: 6533.32/1868.31/933.40) lat (ms,95%): 397.39 err/s: 0.00 reconn/s: 0.00
[ 230s ] thds: 32 tps: 365.83 qps: 7330.56 (r/w/o: 5133.49/1465.21/731.86) lat (ms,95%): 475.79 err/s: 0.00 reconn/s: 0.00
[ 240s ] thds: 32 tps: 411.27 qps: 8218.45 (r/w/o: 5754.25/1641.87/822.34) lat (ms,95%): 467.30 err/s: 0.00 reconn/s: 0.00
[ 250s ] thds: 32 tps: 127.10 qps: 2534.71 (r/w/o: 1772.60/507.70/254.40) lat (ms,95%): 450.77 err/s: 0.00 reconn/s: 0.00
[ 260s ] thds: 32 tps: 642.35 qps: 12856.29 (r/w/o: 8999.09/2572.50/1284.70) lat (ms,95%): 282.25 err/s: 0.00 reconn/s: 0.00
[ 270s ] thds: 32 tps: 603.80 qps: 12078.79 (r/w/o: 8456.89/2414.30/1207.60) lat (ms,95%): 314.45 err/s: 0.00 reconn/s: 0.00
[ 280s ] thds: 32 tps: 642.70 qps: 12857.60 (r/w/o: 9001.40/2570.80/1285.40) lat (ms,95%): 257.95 err/s: 0.00 reconn/s: 0.00
[ 290s ] thds: 32 tps: 716.57 qps: 14325.60 (r/w/o: 10026.98/2865.48/1433.14) lat (ms,95%): 144.97 err/s: 0.00 reconn/s: 0.00
[ 300s ] thds: 32 tps: 611.16 qps: 12219.69 (r/w/o: 8551.80/2445.66/1222.23) lat (ms,95%): 292.60 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            2124836
        write:                           607096
        other:                           303548
        total:                           3035480
    transactions:                        151774 (505.83 per sec.)
    queries:                             3035480 (10116.62 per sec.)

    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

Throughput:
    events/s (eps):                      505.8309
    time elapsed:                        300.0489s
    total number of events:              151774

Latency (ms):
         min:                                    2.17
         avg:                                   63.25
         max:                                 8775.26
         95th percentile:                      331.91
         sum:                              9599021.34

Threads fairness:
    events (avg/stddev):           4742.9375/42.76
    execution time (avg/stddev):   299.9694/0.03

but it continues only for some time after the end of monitoring, and the overall QPS is not much affected.

Now what about the top 5 stacks by wait time? Here they are (well, I had NOT cared to sum up several entries with the same stack, but you should know how to do this with awk - see the sketch after the output below - and flamegraph.pl will do it for us later anyway):

[openxs@fc31 ~]$ cat /tmp/collapsed_lll_lock_v2_raw.txt | sort -r -n -k 1,1 | head -5
340360731 __lll_lock_wait;btr_cur_optimistic_insert;row_ins_sec_index_entry_low;row_ins_sec_index_entry;row_ins_step;row_insert_for_mysql;ha_innobase::write_row;handler::ha_write_row;write_record;mysql_insert;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread
335903154 __lll_lock_wait;btr_cur_optimistic_insert;row_ins_sec_index_entry_low;row_ins_sec_index_entry;row_upd_sec_index_entry;row_upd_step;row_update_for_mysql;ha_innobase::update_row;handler::ha_update_row;mysql_update;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread
287819974 __lll_lock_wait;btr_cur_optimistic_insert;row_ins_sec_index_entry_low;row_ins_sec_index_entry;row_ins_step;row_insert_for_mysql;ha_innobase::write_row;handler::ha_write_row;write_record;mysql_insert;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread
281374237 __lll_lock_wait;btr_cur_optimistic_insert;row_ins_sec_index_entry_low;row_ins_sec_index_entry;row_ins_step;row_insert_for_mysql;ha_innobase::write_row;handler::ha_write_row;write_record;mysql_insert;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread
268817930 __lll_lock_wait;buf_page_init_for_read;buf_read_page_background;btr_cur_prefetch_siblings;btr_cur_optimistic_delete_func;row_purge_remove_sec_if_poss_leaf;row_purge_record_func;row_purge_step;que_run_threads;purge_worker_callback;tpool::task_group::execute;tpool::thread_pool_generic::worker_main;;;;
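
By the way, summing up such duplicates with awk could be as simple as this minimal sketch (the first field is nanoseconds, the second is the collapsed stack):

cat /tmp/collapsed_lll_lock_v2_raw.txt | awk '
{ if (length($2) > 0) sum[$2] += $1 }
END { for (s in sum) print sum[s], s }
' | sort -r -n -k 1,1 | head -5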

A lot of contention happens on INSERTing rows. I'd say it's expected! Now to create the flame graph we just have to re-order the output a bit:

[openxs@fc31 ~]$ cat /tmp/collapsed_lll_lock_v2_raw.txt | awk '{ if (length($2) > 0) {print $2, $1} }' | /mnt/home/openxs/git/FlameGraph/flamegraph.pl --title="Time spent in ___lll_lock_wait in MariaDB 10.5, all frames" --countname=nsecs > ~/Documents/lll_lock_v2_2.svg

Here it is:

Most mutex waits for oltp_read_write test happen on inserting rows...

Monitoring over 60 seconds allowed me to see a more realistic picture of the contention. It mostly happens on inserting rows by 32 threads into just 5 tables on a system with only 4 cores.

Now it seems we have a tool that can be used more than once in a lifetime and that provides useful information, fast.

* * *

To summarize:

  1. One should not even try to summarize all stacks in a single associative array inside bpftrace for any function that is called too often in many different contexts! There is a limit on the number of items in a map that you may hit (see the note after this list), and the impact of storing and exporting all this is too high for monitoring anything longer than a couple of seconds.
  2. Cleaning up stack traces should be done externally, by the usual Linux text processing tools that produce smaller summary output. The bpftrace program should NOT be designed to collect a lot of output for a long time and then dump it all at the end. It looks like smaller chunks exported to userspace regularly are a better approach.
  3. We can use simple command line arguments for the program that are literally substituted as parts of probe definitions. The next step would be to make a more generic tool with the binary and function name to probe as command line arguments.
  4. bpftrace is cool!
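
A note related to item 1 above: I have not tested this here, but as far as I remember bpftrace's default limit on the number of map items can be raised via the BPFTRACE_MAP_KEYS_MAX environment variable (4096 by default), for example like this:

sudo BPFTRACE_MAP_KEYS_MAX=40960 ./lll_lock_wait.bt 2>/dev/null >/tmp/lll_lock_perf_stacks.txt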

Thursday, January 28, 2021

Playing with recent bpftrace and MariaDB 10.5 on Fedora - Part IV, Creating a New Tool for Tracing Time Spent in __lll_lock_wait

So far in this series I am just trying to find out what can be done with bpftrace and how to get interesting insights or implement something I was asked about. I am not sharing "best practices" (mostly worst ones, probably, when they are my own inventions) or any final answers on what to trace given a specific MariaDB performance problem. This is still work in progress, and at a really early stage.

In the comments to my previous post you can see that I probably measured the wrong thing if the goal is to find the mutex that causes most of the contention. One of the comments suggested a different tracing idea:

"... indexed by the *callstack*, with the goal to find the place where waits happen. I think __lll_lock_wait could be a better indicator for contention, measuring uncontended pthread_mutex_locks could give some false alarms."

Today I had an hour or so again near my Fedora box with bpftrace, so I tried to modify my previous tool to store the time spent in the __lll_lock_wait function, and to sum these times up per unique stack that led to the __lll_lock_wait call (in the same inefficient way so far, as I had no chance yet to test and find anything more suitable for production use).

To remind you, __lll_lock_wait() is a low-level wrapper around the Linux futex system call. The prototype for this function is:

void __lll_lock_wait (int *futex, int private)

It is also from the libpthread.so library:

[openxs@fc31 ~]$ strings /lib64/libpthread.so.0 | grep '__lll_lock_wait'
__lll_lock_wait
__lll_lock_wait_private
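
As a side note, a recent enough bpftrace can also list matching uprobe targets directly, so something like this should work as an alternative to strings | grep:

sudo bpftrace -l 'uprobe:/lib64/libpthread.so.0:__lll_lock_wait*'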

So, my code will not require many modifications. Basic quick code to prove the concept is as simple as this:

[openxs@fc31 ~]$ cat lll_lock_wait.bt
#!/usr/bin/env bpftrace

BEGIN
{
    printf("Tracing time from __lll_lock_wait, Ctrl-C to stop\n");
}

uprobe:/lib64/libpthread.so.0:__lll_lock_wait
/comm == "mariadbd"/

{
    @start[tid] = nsecs;
    @tidstack[tid] = ustack(perf);
}

uretprobe:/lib64/libpthread.so.0:__lll_lock_wait
/comm == "mariadbd" && @start[tid] != 0/

{
    $now = nsecs;
    $time = $now - @start[tid];
    @futexstack[@tidstack[tid]] += $time;

    printf("Thread: %u, time: %d\n", tid, $time);

    delete(@start[tid]);
    delete(@tidstack[tid]);
}

END
{
    clear(@start);
    clear(@tidstack);
    clear(@futexstack);
}

The main difference is that to measure the time spent inside a single function I need both a uprobe and a uretprobe on it. Function call arguments are NOT available in the uretprobe.
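
As a side note, if the futex address were needed on return as well, it could be carried from the uprobe to the uretprobe in a map keyed by thread id, the same way the start time is. A minimal sketch (a hypothetical one-liner that I do not use further in this post):

sudo bpftrace -e '
uprobe:/lib64/libpthread.so.0:__lll_lock_wait /comm == "mariadbd"/
{
    @start[tid] = nsecs;
    @futexaddr[tid] = arg0;    /* remember the futex pointer for the return probe */
}
uretprobe:/lib64/libpthread.so.0:__lll_lock_wait /comm == "mariadbd" && @start[tid] != 0/
{
    $time = nsecs - @start[tid];
    printf("futex: %u, thread: %u, time: %d\n", @futexaddr[tid], tid, $time);
    delete(@start[tid]);
    delete(@futexaddr[tid]);
}
END
{
    clear(@start);
    clear(@futexaddr);
}'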

When I made this .bt file executable and tried to run it with the MariaDB 10.5 server started but not under any load, I got NO output from the uretprobe - no wonder, there is no contention! With the sysbench test started the result was different, a flood of output:

[openxs@fc31 ~]$ sudo ./lll_lock_wait.bt 2>/dev/null
...
Thread: 7488, time: 70520
Thread: 7494, time: 73351
Thread: 5790, time: 106635
Thread: 5790, time: 10008
Thread: 7484, time: 87016
Thread: 5790, time: 18723
^C

So, the program works to some extent and reports some time per thread (but I have no mutex address in the uretprobe - something like the sketch above could work around that). So, I modified it to remove the prints and keep the stack associative array for the output in the END probe:

#!/usr/bin/env bpftrace

BEGIN
{
/*    printf("Tracing time from __lll_lock_wait, Ctrl-C to stop\n"); */
}

uprobe:/lib64/libpthread.so.0:__lll_lock_wait
/comm == "mariadbd"/
{
    @start[tid] = nsecs;
    @tidstack[tid] = ustack(perf);    
}

uretprobe:/lib64/libpthread.so.0:__lll_lock_wait
/comm == "mariadbd" && @start[tid] != 0/
{
    $now = nsecs;
    $time = $now - @start[tid];
    @futexstack[@tidstack[tid]] += $time;
/*
    printf("Thread: %u, time: %d\n", tid, $time);
*/
    delete(@start[tid]);
    delete(@tidstack[tid]);
}

END
{
    clear(@start);
    clear(@tidstack);
/*    clear(@futexstack); */
}

Then I started the usual sysbench test for this series:

[openxs@fc31 maria10.5]$ sysbench oltp_read_write --db-driver=mysql --tables=5 --table-size=100000 --mysql-user=openxs --mysql-socket=/tmp/mariadb.sock --mysql-db=sbtest --threads=32 --report-interval=10 --time=300 run
sysbench 1.1.0-174f3aa (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 32
Report intermediate results every 10 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 10s ] thds: 32 tps: 516.06 qps: 10362.55 (r/w/o: 7260.90/2066.43/1035.22) lat (ms,95%): 390.30 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 32 tps: 550.82 qps: 11025.98 (r/w/o: 7718.97/2205.28/1101.74) lat (ms,95%): 320.17 err/s: 0.00 reconn/s: 0.00

and at this moment I started my program and kept it running for more than 10 but less than 20 seconds:

[openxs@fc31 ~]$ sudo ./lll_lock_wait.bt 2>/dev/null >/tmp/lll_lock_perf_stacks.txt
^C

I've got an immediate drop in QPS and it continued till the end of the 300 seconds test:

[ 30s ] thds: 32 tps: 397.67 qps: 7950.29 (r/w/o: 5565.37/1589.58/795.34) lat (ms,95%): 434.83 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 32 tps: 439.87 qps: 8787.15 (r/w/o: 6149.45/1757.97/879.74) lat (ms,95%): 397.39 err/s: 0.00 reconn/s: 0.00
...

    transactions:                        101900 (339.38 per sec.)
    queries:                             2038016 (6787.74 per sec.)

    ignored errors:                      1      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)
...

I had to wait some more and ended up with this big output:

[openxs@fc31 maria10.5]$ ls -l /tmp/lll_lock_perf_stacks.txt
-rw-rw-r--. 1 openxs openxs 1347279 Jan 28 14:29 /tmp/lll_lock_perf_stacks.txt

[openxs@fc31 maria10.5]$ more /tmp/lll_lock_perf_stacks.txt
Attaching 4 probes...


@futexstack[
        7fadbd0bd5e0 __lll_lock_wait+0 (/usr/lib64/libpthread-2.30.so)
        14e5f 0x14e5f ([unknown])
]: 4554

...
@futexstack[
        7fadbd0bd5e0 __lll_lock_wait+0 (/usr/lib64/libpthread-2.30.so)
        55a71e6dd31c MDL_lock::remove_ticket(LF_PINS*, MDL_lock::Ticket_list MDL
_lock::*, MDL_ticket*)+60 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        55a71e6ddca5 MDL_context::release_lock(enum_mdl_duration, MDL_ticket*)+3
7 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        55a71ebc5ef4 row_purge_step(que_thr_t*)+388 (/home/openxs/dbs/maria10.5/
bin/mariadbd)
        55a71eb874b8 que_run_threads(que_thr_t*)+2264 (/home/openxs/dbs/maria10.
5/bin/mariadbd)
        55a71ebe5773 purge_worker_callback(void*)+355 (/home/openxs/dbs/maria10.
5/bin/mariadbd)
        55a71ed2b99a tpool::task_group::execute(tpool::task*)+170 (/home/openxs/
dbs/maria10.5/bin/mariadbd)
        55a71ed2a7cf tpool::thread_pool_generic::worker_main(tpool::worker_data*
)+79 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        7fadbcac43d4 0x7fadbcac43d4 ([unknown])
        55a721f10f00 0x55a721f10f00 ([unknown])
        55a71ed2ab60 std::thread::_State_impl<std::thread::_Invoker<std::tuple<v
oid (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_gen
eric*, tpool::worker_data*> > >::~_State_impl()+0 (/home/openxs/dbs/maria10.5/bi
n/mariadbd)
        2de907894810c083 0x2de907894810c083 ([unknown])
]: 4958
...

@futexstack[
        7fadbd0bd5e0 __lll_lock_wait+0 (/usr/lib64/libpthread-2.30.so)
        55a71eba3e20 row_ins_clust_index_entry_low(unsigned long, unsigned long,
 dict_index_t*, unsigned long, dtuple_t*, unsigned long, que_thr_t*)+4144 (/home
/openxs/dbs/maria10.5/bin/mariadbd)
        55a71eba4a36 row_ins_clust_index_entry(dict_index_t*, dtuple_t*, que_thr
_t*, unsigned long)+198 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        55a71eba52b4 row_ins_step(que_thr_t*)+1956 (/home/openxs/dbs/maria10.5/b
in/mariadbd)
        55a71ebb5ea1 row_insert_for_mysql(unsigned char const*, row_prebuilt_t*,
 ins_mode_t)+865 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        55a71eafadc1 ha_innobase::write_row(unsigned char const*)+177 (/home/ope
nxs/dbs/maria10.5/bin/mariadbd)
        55a71e7e5e10 handler::ha_write_row(unsigned char const*)+464 (/home/open
xs/dbs/maria10.5/bin/mariadbd)
        55a71e598b8d write_record(THD*, TABLE*, st_copy_info*, select_result*)+4
77 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        55a71e59f6d7 mysql_insert(THD*, TABLE_LIST*, List<Item>&, List<List<Item
> >&, List<Item>&, List<Item>&, enum_duplicates, bool, select_result*)+2967 (/ho
me/openxs/dbs/maria10.5/bin/mariadbd)
        55a71e5d894a mysql_execute_command(THD*)+7722 (/home/openxs/dbs/maria10.
5/bin/mariadbd)
        55a71e5ec945 Prepared_statement::execute(String*, bool)+981 (/home/openx
s/dbs/maria10.5/bin/mariadbd)
        55a71e5ecbd5 Prepared_statement::execute_loop(String*, bool, unsigned ch
ar*, unsigned char*)+133 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        55a71e5ed8f5 mysql_stmt_execute_common(THD*, unsigned long, unsigned cha
r*, unsigned char*, unsigned long, bool, bool)+549 (/home/openxs/dbs/maria10.5/b
in/mariadbd)
        55a71e5edb3c mysqld_stmt_execute(THD*, char*, unsigned int)+44 (/home/op
enxs/dbs/maria10.5/bin/mariadbd)
        55a71e5d4ba6 dispatch_command(enum_server_command, THD*, char*, unsigned
 int, bool, bool)+9302 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        55a71e5d5f12 do_command(THD*)+274 (/home/openxs/dbs/maria10.5/bin/mariad
bd)
        55a71e6d3bf1 do_handle_one_connection(CONNECT*, bool)+1025 (/home/openxs
/dbs/maria10.5/bin/mariadbd)
        55a71e6d406d handle_one_connection+93 (/home/openxs/dbs/maria10.5/bin/ma
riadbd)
        55a71ea4abf2 pfs_spawn_thread+322 (/home/openxs/dbs/maria10.5/bin/mariad
bd)
        7fadbd0b34e2 start_thread+226 (/usr/lib64/libpthread-2.30.so)
]: 5427

...

Some stacks surely look reasonable, so I continued with collapsing them into a simpler form:

[openxs@fc31 maria10.5]$ cat /tmp/lll_lock_perf_stacks.txt | awk '
BEGIN { s = ""; }
/^@futexstack\[\]/ { s = ""; }
/^@futexstack/ { s = ""; }
/^\t/ { if (index($2, "(") > 0) {targ = substr($2, 1, index($2, "(") - 1)} else {targ = substr($2, 1, index($2, "+") - 1)} ; if (s != "") { s = s ";" targ } else { s = targ } }
/^]/ { print $2, s }
' | more

...
5380 __lll_lock_wait;
5406 __lll_lock_wait;
5421 __lll_lock_wait;
5427 __lll_lock_wait;row_ins_clust_index_entry_low;row_ins_clust_index_entry;row
_ins_step;row_insert_for_mysql;ha_innobase::write_row;handler::ha_write_row;writ
e_record;mysql_insert;mysql_execute_command;Prepared_statement::execute;Prepared
_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_
command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thre
ad;start_thread
5436 __lll_lock_wait;row_upd_clust_step;row_upd_step;row_update_for_mysql;ha_inn
obase::delete_row;handler::ha_delete_row;mysql_delete;mysql_execute_command;Prep
ared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_comm
on;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;hand
le_one_connection;pfs_spawn_thread;start_thread
5443 __lll_lock_wait;
5445 __lll_lock_wait;purge_worker_callback;tpool::task_group::execute;tpool::thr
ead_pool_generic::worker_main;;;;
5502 __lll_lock_wait;
...

The awk code is a bit different this time. I know that for flame graphs I need function calls separated by ';', so I am doing that immediately. Unresolved frames are all removed, etc. Now I have to sort this to find the top N:

[openxs@fc31 maria10.5]$ cat /tmp/lll_lock_perf_stacks.txt | awk '
BEGIN { s = ""; }
/^@futexstack\[\]/ { s = ""; }
/^@futexstack/ { s = ""; }
/^\t/ { if (index($2, "(") > 0) {targ = substr($2, 1, index($2, "(") - 1)} else {targ = substr($2, 1, index($2, "+") - 1)} ; if (s != "") { s = s ";" targ } else { s = targ } }
/^]/ { print $2, s }
' | sort -r -n -k 1,1 > /tmp/collapsed_llll_lock.txt

and then:

[openxs@fc31 maria10.5]$ cat /tmp/collapsed_llll_lock.txt | awk '{ if (length($2) > 0) {print} }' | head -5
28276454 __lll_lock_wait;timer_handler;start_thread
35842 __lll_lock_wait;MDL_map::find_or_insert;MDL_context::try_acquire_lock_impl;MDL_context::acquire_lock;open_table;open_tables;open_and_lock_tables;mysql_delete;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;;;;;;
35746 __lll_lock_wait;MDL_lock::remove_ticket;MDL_context::release_lock;MDL_context::release_locks_stored_before;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread
35675 __lll_lock_wait;row_upd_clust_step;row_upd_step;row_update_for_mysql;ha_innobase::delete_row;handler::ha_delete_row;mysql_delete;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread
35563 __lll_lock_wait;row_upd_sec_index_entry;row_upd_step;row_update_for_mysql;ha_innobase::update_row;handler::ha_update_row;mysql_update;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread

So, the above is the top 5 fully resolved stack traces where the longest time was spent inside the __lll_lock_wait function. Now let's create the flame graph. We need this output, stack first and nanoseconds spent in it next, separated by a space, piped for processing by the same flamegraph.pl program as before:

[openxs@fc31 maria10.5]$ cat /tmp/collapsed_llll_lock.txt | awk '{ if (length($2) > 0) {print $2, $1} }' | head -3
__lll_lock_wait;timer_handler;start_thread 28276454
__lll_lock_wait;MDL_map::find_or_insert;MDL_context::try_acquire_lock_impl;MDL_context::acquire_lock;open_table;open_tables;open_and_lock_tables;mysql_delete;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;;;;;; 35842
__lll_lock_wait;MDL_lock::remove_ticket;MDL_context::release_lock;MDL_context::release_locks_stored_before;mysql_execute_command;Prepared_statement::execute;Prepared_statement::execute_loop;mysql_stmt_execute_common;mysqld_stmt_execute;dispatch_command;do_command;do_handle_one_connection;handle_one_connection;pfs_spawn_thread;start_thread 35746

Now the flame graph was created as follows:

cat /tmp/collapsed_llll_lock.txt | awk '{ if (length($2) > 0) {print $2, $1} }' | /mnt/home/openxs/git/FlameGraph/flamegraph.pl --title="Time spent in ___lll_lock_wait in MariaDB 10.5" --countname=nsecs > ~/Documents/lll_lock.svg

The resulting flame graph is presented below:

Time related to MDL is highlighted, some 3% only

I highlighted the impact of MDL-related stack traces, some 3% only. The shape is totally different from the one for pthread_mutex_lock, so this shows how a different point of view may lead to different conclusions about the contention.

* * *

To summarize:

  1. I am still not any good at finding a low impact way (if any) to get stack traces with time summed up. It seems the associative array with them becomes too big if stacks are not preprocessed, and exporting it to user space at the end of the program literally takes minutes and impacts performance during all that time.
  2. It's easy to measure the time spent inside a function with a uprobe + uretprobe pair for that function.
  3. I am not sure yet about the tools to run in production, but for checking the ideas and creating prototypes bpftrace is really easy to use and flexible.
  4. It is not clear where the stack traces with all calls unresolved to symbols may come from.

Wednesday, January 27, 2021

Playing with recent bpftrace and MariaDB 10.5 on Fedora - Part III, Creating a New Tool for Tracing Mutexes

I had some free time yesterday to do some more tests on Fedora, so I got back to the old request from one of the MariaDB developers that I first mentioned in this blog a few months ago:

"ideally, collect stack traces of mariadbd, and sort them in descending order by time spent between pthread_mutex_lock and next pthread_mutex_unlock."

The original request above has not been answered yet, and I was recently reminded about it. What I did with perf and recently with bpftrace was (lame) counting of the number of samples per unique stack, while I actually had to count the time spent between acquiring and releasing specific mutexes.

From the very beginning I was sure that bpftrace should allow me to get the answer easily, and after reviewing the way existing tools are coded, yesterday I decided to finally write a real, multi-line bpftrace program, with multiple probes, myself. I wanted to fulfill the request literally, no matter how much that would "cost" for now, with bpftrace. It turned out that a couple of hours of calm vacation time is more than enough to get a draft solution.

I've started with checking the pthread_mutex_lock manual page. From it I've got the (primitive) idea of the two functions used in the process, each with a single argument, the mutex pointer/address:

int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);

Multiple threads can try to lock the same mutex, and those that find it locked will wait until an unlock eventually makes the mutex available for one of them to acquire (as decided by the scheduler). I've made the assumption (correct me if I am wrong) that the same thread that locked the mutex must unlock it eventually. Based on that I came up with the following initial lame version of a bpftrace program:

[openxs@fc31 ~]$ cat pthread_mutex.bt
#!/usr/bin/env bpftrace

BEGIN
{
    printf("Tracing time from pthread_mutex_lock to _unlock, Ctrl-C to stop\n");
}

uprobe:/lib64/libpthread.so.0:pthread_mutex_lock /comm == "mariadbd"/
{
    @start[arg0] = nsecs;
    @mutexid[arg0] = tid;
    @tidstack[tid] = ustack;
}

uprobe:/lib64/libpthread.so.0:pthread_mutex_unlock
/comm == "mariadbd" && @start[arg0] != 0 && @mutexid[arg0] == tid/
{
    $now = nsecs;
    $time = $now - @start[arg0];
    @mutexstack[@tidstack[tid]] += $time;
    printf("Mutex: %u, time: %d\n", arg0, $time);
    delete(@start[arg0]);
    delete(@mutexid[arg0]);
    delete(@tidstack[tid]);
}

END
{
    clear(@start);
    clear(@mutexid);
    clear(@tidstack);
}
/* the end */

Do you recognize the biosnoop.bt style? Yes, this is what I was inspired by... So, I've added two uprobes for the library providing the functions, both checking that the call is done from mariadbd. The first one, for lock, stores the start time for the given mutex address, the thread id that locked it, and the stack trace of the thread at the moment of locking. The second one, for unlock, computes the time difference since the same mutex was locked last, but it fires only if the unlocking thread has the same id as the locking one. Then I add this time difference to the time spent "within this stack trace", using the thread stack stored earlier as the index into the @mutexstack[] associative array. Then I print some debugging output to see what happens in the process of tracing, and remove the items that were added to the associative arrays by the first probe.

In the END probe I just clean up all the associative arrays but @mutexstack[], and, as we've seen before, its content is then just dumped to the output by bpftrace. This is what I am going to post-process later, after a quick debugging session proves my idea gives some reasonable results.

So, with MariaDB 10.5 up and running, started like this (no real tuning for anything, no wonder QPS is not high in the tests below):

./bin/mysqld_safe --no-defaults --socket=/tmp/mariadb.sock --innodb_buffer_pool_size=1G --innodb_flush_log_at_trx_commit=2 --port=3309 &

and having zero user connections, I made pthread_mutex.bt executable and started my very first bpftrace program for the very first time (OK, honestly, a few previous runs showed some syntax errors that I corrected):

[openxs@fc31 ~]$ sudo ./pthread_mutex.bt
Attaching 4 probes...
Tracing time from pthread_mutex_lock to _unlock, Ctrl-C to stop
Mutex: 652747168, time: 6598
Mutex: 629905136, time: 46594
...
Mutex: 652835840, time: 26491
Mutex: 652835712, time: 4569
^C


@mutexstack[
    __pthread_mutex_lock+0
    tpool::thread_pool_generic::worker_main(tpool::worker_data*)+79
    0x7f7e60fb53d4
    0x5562258c9080
    std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::~_State_impl()+0
    0x2de907894810c083
]: 23055
@mutexstack[
    __pthread_mutex_lock+0
    tpool::thread_pool_generic::timer_generic::execute(void*)+188
    tpool::task::execute()+50
    tpool::thread_pool_generic::worker_main(tpool::worker_data*)+79
    0x7f7e60fb53d4
    0x5562258c9080
    std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::~_State_impl()+0
    0x2de907894810c083
]: 23803
@mutexstack[
    __pthread_mutex_lock+0
    tpool::thread_pool_generic::timer_generic::execute(void*)+210
    tpool::task::execute()+50
    tpool::thread_pool_generic::worker_main(tpool::worker_data*)+79
    0x7f7e60fb53d4
    0x5562258c9080
    std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::~_State_impl()+0
    0x2de907894810c083
]: 24555
@mutexstack[
    __pthread_mutex_lock+0
    tpool::thread_pool_generic::submit_task(tpool::task*)+88
    timer_handler+326
    start_thread+226
]: 31859
@mutexstack[
    __pthread_mutex_lock+0
    srv_monitor_task+130
    tpool::thread_pool_generic::timer_generic::execute(void*)+53
    tpool::task::execute()+50
    tpool::thread_pool_generic::worker_main(tpool::worker_data*)+79
    0x7f7e60fb53d4
    0x5562258c9080
    std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::~_State_impl()+0
    0x2de907894810c083
]: 53282
@mutexstackERROR: failed to look up stack id 0 (pid 0): -1
[]: 322499

Not bad. I see mutexes are locked and (the last ERROR aside) time is summed up per stack trace as planned. Moreover, the stack traces look reasonable for 10.5 (a generic thread pool is used inside InnoDB in this version to run background tasks, are you aware of that?). Some symbols are not resolved, but what can I do about it? I'll just skip those addresses at some later stage, maybe.

I just decided to check which threads are locking the mutexes, and modified the print:

    printf("Mutex: %u, thread: %u, time: %d\n", arg0, tid, $time);

With that modification I also redirected errors to /dev/null and got this:

[openxs@fc31 ~]$ sudo ./pthread_mutex.bt 2>/dev/null
Attaching 4 probes...
Tracing time from pthread_mutex_lock to _unlock, Ctrl-C to stop
Mutex: 652835712, thread: 4476, time: 6354
Mutex: 629905136, thread: 4476, time: 37053
Mutex: 621289632, thread: 4485, time: 5254
Mutex: 621289632, thread: 4485, time: 4797
Mutex: 652835840, thread: 4485, time: 31465
Mutex: 652835712, thread: 4485, time: 4374
Mutex: 652835712, thread: 4476, time: 6048
Mutex: 629905136, thread: 4476, time: 35703
Mutex: 621289632, thread: 4485, time: 4917
Mutex: 621289632, thread: 4485, time: 4779
Mutex: 652835840, thread: 4485, time: 30316
Mutex: 652835712, thread: 4485, time: 4389
Mutex: 652835712, thread: 4476, time: 6733
Mutex: 629905136, thread: 4476, time: 40936
Mutex: 621289632, thread: 4485, time: 4719
Mutex: 621289632, thread: 4485, time: 4725
Mutex: 652835840, thread: 4485, time: 30637
Mutex: 652835712, thread: 4485, time: 4441
^C


@mutexstack[
    __pthread_mutex_lock+0
    tpool::thread_pool_generic::worker_main(tpool::worker_data*)+79
    0x7f7e60fb53d4
    0x5562258c9080
    std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::~_State_impl()+0
    0x2de907894810c083
]: 13204
@mutexstack[
    __pthread_mutex_lock+0
    tpool::thread_pool_generic::timer_generic::execute(void*)+210
    tpool::task::execute()+50
    tpool::thread_pool_generic::worker_main(tpool::worker_data*)+79
    0x7f7e60fb53d4
    0x5562258c9080
    std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::~_State_impl()+0
    0x2de907894810c083
]: 14301
@mutexstack[
    __pthread_mutex_lock+0
    tpool::thread_pool_generic::timer_generic::execute(void*)+188
    tpool::task::execute()+50
    tpool::thread_pool_generic::worker_main(tpool::worker_data*)+79
    0x7f7e60fb53d4
    0x5562258c9080
    std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::~_State_impl()+0
    0x2de907894810c083
]: 14890
@mutexstack[
    __pthread_mutex_lock+0
    tpool::thread_pool_generic::submit_task(tpool::task*)+88
    timer_handler+326
    start_thread+226
]: 19135
@mutexstack[]: 206110

I see different threads locking the same mutexes, etc. One day, given more time, I'd try to figure out what mutexes those are and what the purpose of each thread was (fortunately, in 10.5 it can be seen based on the OS thread id in performance_schema.threads - see the sketch below - or inferred from the stacks for the moment).
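
For example, mapping an OS thread id from the output above back to a server thread could look like this hedged sketch (thread 4476 is taken from the output above; the bin/mysql client path and enabled performance_schema are assumptions about my build):

bin/mysql --socket=/tmp/mariadb.sock -e "select THREAD_ID, NAME, PROCESSLIST_ID from performance_schema.threads where THREAD_OS_ID = 4476"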

I've removed the debug print (no interactive output, just the final summarized data), changed ustack to ustack(perf) (assuming I'd know better how to deal with that output format later - it was not really a good idea), and ended up with this final version of the tool:

#!/usr/bin/env bpftrace

BEGIN
{
/*
    printf("Tracing time from pthread_mutex_lock to _unlock, Ctrl-C to stop\n");
*/
}

uprobe:/lib64/libpthread.so.0:pthread_mutex_lock /comm == "mariadbd"/
{
    @start[arg0] = nsecs;
    @mutexid[arg0] = tid;
    @tidstack[tid] = ustack(perf);    
}

uprobe:/lib64/libpthread.so.0:pthread_mutex_unlock
/comm == "mariadbd" && @start[arg0] != 0 && @mutexid[arg0] == tid/
{
    $now = nsecs;
    $time = $now - @start[arg0];
    @mutexstack[@tidstack[tid]] += $time;
/*
    printf("Mutex: %u, thread: %u, time: %d\n", arg0, tid, $time);
*/
    delete(@start[arg0]);
    delete(@mutexid[arg0]);
    delete(@tidstack[tid]);
}

END
{
    clear(@start);
    clear(@mutexid);
    clear(@tidstack);
}
/* the end */

I saved the stack traces to a file in /tmp, to work on the output further outside of bpftrace. Again, probably it was not the best idea, but I am not yet fluent with string processing in bpftrace anyway, so I rely on awk, sort etc.:

[openxs@fc31 ~]$ sudo ./pthread_mutex.bt 2>/dev/null >/tmp/pthread_mutex_perf_stacks.txt
^C
[openxs@fc31 ~]$ cat /tmp/pthread_mutex_perf_stacks.txt
Attaching 4 probes...



@mutexstack[
        7f7e615a6e70 __pthread_mutex_lock+0 (/usr/lib64/libpthread-2.30.so)
        556223eb07cf tpool::thread_pool_generic::worker_main(tpool::worker_data*)+79 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        7f7e60fb53d4 0x7f7e60fb53d4 ([unknown])
        5562258c9080 0x5562258c9080 ([unknown])
        556223eb0b60 std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::~_State_impl()+0 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        2de907894810c083 0x2de907894810c083 ([unknown])
]: 21352
@mutexstack[
        7f7e615a6e70 __pthread_mutex_lock+0 (/usr/lib64/libpthread-2.30.so)
        556223eb0d62 tpool::thread_pool_generic::timer_generic::execute(void*)+210 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        556223eb1c52 tpool::task::execute()+50 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        556223eb07cf tpool::thread_pool_generic::worker_main(tpool::worker_data*)+79 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        7f7e60fb53d4 0x7f7e60fb53d4 ([unknown])
        5562258c9080 0x5562258c9080 ([unknown])
        556223eb0b60 std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::~_State_impl()+0 (/home/openxs/dbs/maria10.5/bin/mariadbd)
        2de907894810c083 0x2de907894810c083 ([unknown])
]: 22975
...

After checking what I did previously to collapse such stack traces into one line per stack, pt-pmp style, and after multiple trial runs and changes, I ended up with the following awk code:

awk '
BEGIN { s = ""; }
/^@mutexstack\[\]/ { s = ""; }
/^@mutexstack/ { s = ""; }
/^\t/ { if (index($2, "(") > 0) {targ = substr($2, 1, index($2, "(") - 1)} else {targ = substr($2, 1, index($2, "+") - 1)} ; if (s != "") { s = s "," targ } else { s = targ } }
/^]/ { print $2, s }
'

I process the lines that do not contain stack frames, those around each stack block, resetting the stack s at the beginning and printing it at the end of the block. For the stack lines I take only the function name, ignore everything else to form targ, and concatenate it to the stack already collected, with a comma (that was a wrong idea for future use) as a separator between function names. The original code that inspired all this came from pt-pmp, as far as I remember. I just adapted it to the format, better than in the previous posts.

Post-processing the output with this awk code gave me the following:

[openxs@fc31 ~]$ cat /tmp/pthread_mutex_perf_stacks.txt | awk '
> BEGIN { s = ""; }
> /^@mutexstack\[\]/ { s = ""; }
> /^@mutexstack/ { s = ""; }
> /^\t/ { if (index($2, "(") > 0) {targ = substr($2, 1, index($2, "(") - 1)} else {targ = substr($2, 1, index($2, "+") - 1)} ; if (s != "") { s = s "," targ } else { s = targ } }
> /^]/ { print $2, s }
> '
21352 __pthread_mutex_lock,tpool::thread_pool_generic::worker_main,,,,
22975 __pthread_mutex_lock,tpool::thread_pool_generic::timer_generic::execute,tpool::task::execute,tpool::thread_pool_generic::worker_main,,,,
24568 __pthread_mutex_lock,tpool::thread_pool_generic::timer_generic::execute,tpool::task::execute,tpool::thread_pool_generic::worker_main,,,,
33469 __pthread_mutex_lock,tpool::thread_pool_generic::submit_task,timer_handler,start_thread

Non-resolved addresses are removed, as are the offsets from the function start. Now sorting remains, in descending order, with the first column as the key:

[openxs@fc31 ~]$ cat /tmp/pthread_mutex_perf_stacks.txt | awk '
BEGIN { s = ""; }
/^@mutexstack\[\]/ { s = ""; }
/^@mutexstack/ { s = ""; }
/^\t/ { if (index($2, "(") > 0) {targ = substr($2, 1, index($2, "(") - 1)} else {targ = substr($2, 1, index($2, "+") - 1)} ; if (s != "") { s = s "," targ } else { s = targ } }
/^]/ { print $2, s }
' | sort -r -n -k 1,1
33469 __pthread_mutex_lock,tpool::thread_pool_generic::submit_task,timer_handler,start_thread
24568 __pthread_mutex_lock,tpool::thread_pool_generic::timer_generic::execute,tpool::task::execute,tpool::thread_pool_generic::worker_main,,,,
22975 __pthread_mutex_lock,tpool::thread_pool_generic::timer_generic::execute,tpool::task::execute,tpool::thread_pool_generic::worker_main,,,,
21352 __pthread_mutex_lock,tpool::thread_pool_generic::worker_main,,,,

That's what we have for a server without user connections. Now let me put it under a highly concurrent sysbench test load (good idea, isn't it?):

[openxs@fc31 maria10.5]$ sysbench oltp_read_write --db-driver=mysql --tables=5 --table-size=100000 --mysql-user=openxs --mysql-socket=/tmp/mariadb.sock --mysql-db=sbtest --threads=32 --report-interval=10 --time=300 run
sysbench 1.1.0-174f3aa (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 32
Report intermediate results every 10 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

[ 10s ] thds: 32 tps: 653.07 qps: 13097.92 (r/w/o: 9174.69/2613.89/1309.34) lat (ms,95%): 240.02 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 32 tps: 1025.71 qps: 20511.58 (r/w/o: 14358.12/4102.14/2051.32) lat (ms,95%): 71.83 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 32 tps: 588.21 qps: 11770.70 (r/w/o: 8238.74/2355.44/1176.52) lat (ms,95%): 235.74 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 32 tps: 306.22 qps: 6135.54 (r/w/o: 4298.14/1224.97/612.43) lat (ms,95%): 369.77 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 32 tps: 467.00 qps: 9339.64 (r/w/o: 6537.96/1867.69/933.99) lat (ms,95%): 308.84 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 32 tps: 302.19 qps: 6044.31 (r/w/o: 4230.60/1209.34/604.37) lat (ms,95%): 520.62 err/s: 0.00 reconn/s: 0.00
[ 70s ] thds: 32 tps: 324.91 qps: 6496.60 (r/w/o: 4548.67/1298.12/649.81) lat (ms,95%): 467.30 err/s: 0.00 reconn/s: 0.00
[ 80s ] thds: 32 tps: 303.58 qps: 6058.55 (r/w/o: 4238.05/1213.33/607.16) lat (ms,95%): 646.19 err/s: 0.00 reconn/s: 0.00
[ 90s ] thds: 32 tps: 258.39 qps: 5176.10 (r/w/o: 3625.73/1033.58/516.79) lat (ms,95%): 634.66 err/s: 0.00 reconn/s: 0.00
[ 100s ] thds: 32 tps: 213.72 qps: 4279.43 (r/w/o: 2995.93/856.07/427.43) lat (ms,95%): 707.07 err/s: 0.00 reconn/s: 0.00
[ 110s ] thds: 32 tps: 208.29 qps: 4144.23 (r/w/o: 2896.58/831.07/416.58) lat (ms,95%): 623.33 err/s: 0.00 reconn/s: 0.00
[ 120s ] thds: 32 tps: 456.29 qps: 9135.45 (r/w/o: 6397.03/1826.05/912.38) lat (ms,95%): 363.18 err/s: 0.00 reconn/s: 0.00
[ 130s ] thds: 32 tps: 582.21 qps: 11641.73 (r/w/o: 8148.49/2328.63/1164.61) lat (ms,95%): 277.21 err/s: 0.00 reconn/s: 0.00
[ 140s ] thds: 32 tps: 560.39 qps: 11208.17 (r/w/o: 7845.84/2241.55/1120.78) lat (ms,95%): 257.95 err/s: 0.00 reconn/s: 0.00
[ 150s ] thds: 32 tps: 338.03 qps: 6768.93 (r/w/o: 4739.47/1353.41/676.05) lat (ms,95%): 442.73 err/s: 0.00 reconn/s: 0.00
[ 160s ] thds: 32 tps: 410.20 qps: 8210.38 (r/w/o: 5748.19/1641.80/820.40) lat (ms,95%): 411.96 err/s: 0.00 reconn/s: 0.00
[ 170s ] thds: 32 tps: 480.28 qps: 9599.94 (r/w/o: 6716.68/1922.81/960.45) lat (ms,95%): 325.98 err/s: 0.00 reconn/s: 0.00
[ 180s ] thds: 32 tps: 397.62 qps: 7952.16 (r/w/o: 5568.62/1588.19/795.35) lat (ms,95%): 411.96 err/s: 0.00 reconn/s: 0.00
[ 190s ] thds: 32 tps: 338.77 qps: 6769.31 (r/w/o: 4739.09/1352.78/677.44) lat (ms,95%): 475.79 err/s: 0.00 reconn/s: 0.00
[ 200s ] thds: 32 tps: 417.81 qps: 8372.59 (r/w/o: 5857.10/1679.76/835.73) lat (ms,95%): 331.91 err/s: 0.00 reconn/s: 0.00
[ 210s ] thds: 32 tps: 267.40 qps: 5340.01 (r/w/o: 3742.10/1063.10/534.80) lat (ms,95%): 634.66 err/s: 0.00 reconn/s: 0.00
[ 220s ] thds: 32 tps: 267.70 qps: 5355.78 (r/w/o: 3748.96/1071.42/535.41) lat (ms,95%): 590.56 err/s: 0.00 reconn/s: 0.00
[ 230s ] thds: 32 tps: 243.11 qps: 4859.74 (r/w/o: 3401.70/971.83/486.21) lat (ms,95%): 733.00 err/s: 0.00 reconn/s: 0.00
[ 240s ] thds: 32 tps: 173.99 qps: 3474.97 (r/w/o: 2430.94/696.05/347.98) lat (ms,95%): 1013.60 err/s: 0.00 reconn/s: 0.00
[ 250s ] thds: 32 tps: 169.71 qps: 3403.05 (r/w/o: 2384.37/679.25/339.42) lat (ms,95%): 877.61 err/s: 0.00 reconn/s: 0.00
[ 260s ] thds: 32 tps: 407.57 qps: 8151.27 (r/w/o: 5704.23/1631.89/815.15) lat (ms,95%): 272.27 err/s: 0.00 reconn/s: 0.00
...
[ 300s ] thds: 32 tps: 382.41 qps: 7641.05 (r/w/o: 5348.01/1528.43/764.62) lat (ms,95%): 434.83 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            1663592
        write:                           475312
        other:                           237656
        total:                           2376560
    transactions:                        118828 (396.04 per sec.)
    queries:                             2376560 (7920.89 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

Throughput:
    events/s (eps):                      396.0445
    time elapsed:                        300.0370s
    total number of events:              118828

Latency (ms):
         min:                                    2.17
         avg:                                   80.79
         max:                                 5012.10
         95th percentile:                      390.30
         sum:                              9600000.68

Threads fairness:
    events (avg/stddev):           3713.3750/40.53
    execution time (avg/stddev):   300.0000/0.01

Trust me that I started my bpftrace program after the initial 20 seconds of the test run, and let it work for at most 20 seconds. But the rest of the test, the next 280 seconds, was notably affected by a visible drop in QPS! I pressed Ctrl-C but got the command prompt back much later, not even after 300 seconds... I was watching the output file grow in another shell:

[openxs@fc31 ~]$ ls -l /tmp/pthread_mutex_perf_stacks.txt
-rw-rw-r--. 1 openxs openxs 264177 Jan 26 13:26 /tmp/pthread_mutex_perf_stacks.txt
...
[openxs@fc31 ~]$ ls -l /tmp/pthread_mutex_perf_stacks.txt
-rw-rw-r--. 1 openxs openxs 281111 Jan 26 13:27 /tmp/pthread_mutex_perf_stacks.txt
...
[openxs@fc31 ~]$ ls -l /tmp/pthread_mutex_perf_stacks.txt
-rw-rw-r--. 1 openxs openxs 4116283 Jan 26 13:35 /tmp/pthread_mutex_perf_stacks.txt

So I ended up with 4M of text data exported to userland, for just 20 seconds of data collection, and with a performance drop lasting many minutes for my 32-thread test on an old 4-core system. Not that impressive, and I should probably take more care to aggregate and process the data in my bpftrace program, or maybe just dump raw stack/time entries as they are collected. I'll test and see how to improve this, as this way of collecting data is not suitable for production use on a loaded system :(
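
One small thing I can improve on the collection side right away is to bound the run time explicitly instead of relying on Ctrl-C. Assuming bpftrace reacts to the SIGINT sent by timeout the same way as to Ctrl-C (it is the same signal), something like this should stop the collection after exactly 20 seconds (it will not reduce the overhead or the amount of data, though):

[openxs@fc31 ~]$ sudo timeout -s INT 20 ./pthread_mutex.bt 2>/dev/null >/tmp/pthread_mutex_perf_stacks.txt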

Anyway, I have to process what was collected at such a cost. To remind you, the data were collected this way:

[openxs@fc31 ~]$ sudo ./pthread_mutex.bt 2>/dev/null >/tmp/pthread_mutex_perf_stacks.txt
[sudo] password for openxs:
^C

and then I applied the same awk plus sort command line as above to get the collapsed stacks, saved into /tmp/collapsed_pthread_mutex.txt.
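
I did not save that exact command line, but it was essentially the one above with the output redirected to the file used in the next steps (a reconstruction, not copied from the terminal):

[openxs@fc31 ~]$ cat /tmp/pthread_mutex_perf_stacks.txt | awk '
BEGIN { s = ""; }
/^@mutexstack\[\]/ { s = ""; }
/^@mutexstack/ { s = ""; }
/^\t/ { if (index($2, "(") > 0) {targ = substr($2, 1, index($2, "(") - 1)} else {targ = substr($2, 1, index($2, "+") - 1)} ; if (s != "") { s = s "," targ } else { s = targ } }
/^]/ { print $2, s }
' | sort -r -n -k 1,1 > /tmp/collapsed_pthread_mutex.txt

This is what I've seen as a result: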

[openxs@fc31 ~]$ cat /tmp/collapsed_pthread_mutex.txt | more
104251253 __pthread_mutex_lock,buf_flush_page,buf_flush_try_neighbors,buf_do_flush_list_batch,buf_flush_lists,buf_flush_page_cleaner,start_thread
78920938
74828263
74770599
74622438
72853129
67893142
66546439 __pthread_mutex_lock,buf_do_flush_list_batch,buf_flush_lists,buf_flush_page_cleaner,start_thread
61669188
59330217
55480213
55045396
53531941
53216338
...

I have yet to find out where those entries with non-resolved or entirely removed stacks come from and what to do with them so that they do not influence the analysis. For now I need to get rid of them as useless. This is how I did it to get the "top 5" stacks with the times (in nanoseconds) spent in them:

[openxs@fc31 ~]$ cat /tmp/collapsed_pthread_mutex.txt | awk '{ if (length($2) > 0) {print} }' | head -5
104251253 __pthread_mutex_lock,buf_flush_page,buf_flush_try_neighbors,buf_do_flush_list_batch,buf_flush_lists,buf_flush_page_cleaner,start_thread
66546439 __pthread_mutex_lock,buf_do_flush_list_batch,buf_flush_lists,buf_flush_page_cleaner,start_thread
31431176 __pthread_mutex_lock,buf_flush_try_neighbors,buf_do_flush_list_batch,buf_flush_lists,buf_flush_page_cleaner,start_thread
27100601 __pthread_mutex_lock,tpool::aio_linux::getevent_thread_routine,,,,
11730055 __pthread_mutex_lock,buf_flush_lists,buf_flush_page_cleaner,start_thread
[openxs@fc31 ~]$ cat /tmp/collapsed_pthread_mutex.txt | awk '{ if (length($2) > 0) {print} }' > /tmp/collapsed_clean_pthread_mutex.txt

I saved the output into the /tmp/collapsed_clean_pthread_mutex.txt file. The next step would be to represent the result in some nice graphical way, as a flame graph! I have the software in place:

[openxs@fc31 ~]$ ls /mnt/home/openxs/git/FlameGraph/
aix-perf.pl                 stackcollapse-gdb.pl
demos                       stackcollapse-go.pl
dev                         stackcollapse-instruments.pl
difffolded.pl               stackcollapse-java-exceptions.pl
docs                        stackcollapse-jstack.pl
example-dtrace-stacks.txt   stackcollapse-ljp.awk
example-dtrace.svg          stackcollapse-perf.pl
example-perf-stacks.txt.gz  stackcollapse-perf-sched.awk
example-perf.svg            stackcollapse.pl
files.pl                    stackcollapse-pmc.pl
flamegraph.pl               stackcollapse-recursive.pl
jmaps                       stackcollapse-sample.awk
pkgsplit-perf.pl            stackcollapse-stap.pl
range-perf.pl               stackcollapse-vsprof.pl
README.md                   stackcollapse-vtune.pl
record-test.sh              stackcollapse-xdebug.php
stackcollapse-aix.pl        test
stackcollapse-bpftrace.pl   test.sh
stackcollapse-elfutils.pl

But I quickly recalled that flamegraph.pl expects input in this kind of format, with ";" as the separator and the number in the second column, not the first:

bash;entry_SYSCALL_64_fastpath;sys_read;vfs_read;...;schedule 8

There is also a tool to collapse raw bpftrace stacks, stackcollapse-bpftrace.pl, and I have to check how it works for my case one day... Yesterday I just wanted to complete the testing as soon as possible, so I proceeded with a quick and dirty awk hack:

[openxs@fc31 ~]$ cat /tmp/collapsed_clean_pthread_mutex.txt | awk ' { gsub(",",";",$2); print "mariadbd;"$2, $1 }' | head -5
mariadbd;__pthread_mutex_lock;buf_flush_page;buf_flush_try_neighbors;buf_do_flush_list_batch;buf_flush_lists;buf_flush_page_cleaner;start_thread 104251253
mariadbd;__pthread_mutex_lock;buf_do_flush_list_batch;buf_flush_lists;buf_flush_page_cleaner;start_thread 66546439
mariadbd;__pthread_mutex_lock;buf_flush_try_neighbors;buf_do_flush_list_batch;buf_flush_lists;buf_flush_page_cleaner;start_thread 31431176
mariadbd;__pthread_mutex_lock;tpool::aio_linux::getevent_thread_routine;;;; 27100601
mariadbd;__pthread_mutex_lock;buf_flush_lists;buf_flush_page_cleaner;start_thread 11730055

This format looks acceptable, so I've generated the flame graph with the same hack:

[openxs@fc31 ~]$ cat /tmp/collapsed_clean_pthread_mutex.txt | awk ' { gsub(",",";",$2); print "mariadbd;"$2, $1 }' | /mnt/home/openxs/git/FlameGraph/flamegraph.pl --title="pthread_mutex_waits in MariaDB 10.5" --countname=nsecs > ~/Documents/mutex.svg

and here is the result, with a search for "tpool" highlighting how much of the mutex wait time is related to the thread pool of background InnoDB threads:

One can surely create a flame graph based on stacks collected by the bpftrace program, one way or the other...
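
For example, assuming stackcollapse-bpftrace.pl understands the map/stack output format of my program (I have not checked that yet), the whole post-processing could probably be reduced to a single pipeline, something like this untested sketch (still stopped with Ctrl-C, and with a hypothetical output file name):

[openxs@fc31 ~]$ sudo ./pthread_mutex.bt 2>/dev/null | /mnt/home/openxs/git/FlameGraph/stackcollapse-bpftrace.pl | /mnt/home/openxs/git/FlameGraph/flamegraph.pl --title="pthread_mutex_waits in MariaDB 10.5" --countname=nsecs > ~/Documents/mutex_bpftrace.svg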

I'll stop at this stage and maybe continue testing later this week. Stay tuned!

* * *

To summarize:

  • I am not yet sure if my logic in the programs above was correct. I have to think more about it.
  • I surely need to find another way to process the data, either by collapsing/processing stacks in my bpftrace program to make them smaller, or maybe by submitting raw stack/time data to the user level as they are collected. More tests to come...
  • It is easy to create custom bpftrace programs for collecting the data you need. I think memory allocation tracing is my next goal. Imagine a printout of memory allocated and not freed, per allocating thread... If only that had less impact on QPS than what my lame program above demonstrated :)