Showing posts sorted by date for query howto.

Sunday, March 22, 2020

Post #200 in the Blog of Former MySQL Entomologist!

I noted that since August 2012, when I started this blog (while planning to leave my job as a MySQL Entomologist in Oracle to see some real MySQL life besides bugs), I have already published 199 posts. In this post #200 I'd like to look back and pick one post per year of blogging so far that I consider really good, important or useful.

I tried my best to pick posts that are mostly not about specific MySQL bugs. These posts either caused some interesting discussions, defined my interests for a long time, or just opened new directions in my public blogging efforts. I think many of them also got less public attention than they deserve (so the most popular posts are not in this list). I also try to summarize my work-related memories and sentiments of each year.

Here is the list:
  • 2012. "New Year Wishes for Customers of Oracle's MySQL Technical Support Services".

    That year I considered my job roles in Oracle too boring and decided to quit and join Percona for real fun, practical experience and really good money! I also created a few social media accounts and this blog, as I decided to finally share the details of my past and current work in public. I had just started and published only 4 posts that year, and the one I picked was probably created during my shift on December 31 (I still like to work on weekends and public holidays, but do this less often). Basically, I wanted to encourage Oracle customers (and employees) to report the MySQL bugs they found in the public MySQL bugs database. Just what you'd expect a MySQL Entomologist to wish for, isn't it? I still wish the same today, even more! Unfortunately, I see the public MySQL bugs database is used less these days, and this makes me sad.

  • 2013. "How to Drop the Trigger".

    I had a lot of work, bugs and problems to deal with in 2013, including a hard-to-forget snowstorm in March. That year I first visited the UK, to speak about Performance Schema at Percona Live (it was probably my worst accepted talk ever) and to enjoy London and Brighton. Who could imagine the outcomes of that very first visit... I wrote a lot in public that year: 42 blog posts, and so many ramblings about bugs on Facebook that I annoyed many of my former colleagues. The blog post I picked was probably my first ever "HowTo"-style post in this blog, and that's why it is important. It described a real-life case where, due to a missing .TRG file, the DROP TRIGGER statement simply failed. One had to create the file manually to proceed, and that's easy, as it's a plain text file. Surely I listed some bugs I checked in the process, including those that could explain why the file was missing.
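    For illustration, the manual fix described in that post can be sketched as follows. Everything here is a hypothetical placeholder: the datadir path, the database, table and trigger names, and the trigger body; the field names follow the 5.x-era plain-text .TRG format from memory and must match the trigger actually defined on your table.

```shell
# Recreate a missing .TRG file by hand so that DROP TRIGGER can proceed.
# All names and the datadir path below are illustrative placeholders.
DATADIR=${DATADIR:-./datadir}      # point this at your real server datadir
mkdir -p "$DATADIR/test"
cat > "$DATADIR/test/t1.TRG" <<'EOF'
TYPE=TRIGGERS
triggers='CREATE DEFINER=`root`@`localhost` TRIGGER t1_bi BEFORE INSERT ON t1 FOR EACH ROW SET @a := 1'
sql_modes=0
definers='root@localhost'
EOF
# With the file back in place, DROP TRIGGER test.t1_bi should now succeed.
```
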

  • 2014. "On responsible bugs reporting".

    I was a less active blogger in 2014, as I was kindly asked by both Oracle and Percona officials and colleagues to reduce my extraordinary bugs-related social media activity. This made me sad, but it allowed me to concentrate better on support and other job duties in Percona. I kept mostly silent after the blog post mentioned above, where I discussed what a responsible engineer should do when she finds some simple sequence of SQL statements that, when executed by an authenticated user explicitly having all the privileges needed to execute these statements, crashes some version of her favorite MySQL fork. I was (and still am) against hiding this kind of crash, by Oracle or any other open source software vendor, and suggested reporting them in public, as I always did. Many MySQL Community members probably disagree with this approach even today. In 2014 I somehow got my (first and only) talk (about MySQL bugs reporting) accepted for Oracle OpenWorld 2014. I keep submitting multiple talks for this event every year, with no result since that time...

  • 2015. "Using gdb to understand what locks (and when) are really set by InnoDB. Part I.".

    In 2015 I started to study MySQL internals with gdb on a regular basis. The blog post I picked for this year was the first in a series of such studies that continues until today. Initially it was as simple as setting a few breakpoints and checking several lines of source code here and there, but I did not stop at that stage. As a result of those studies, in 2015 I presented interesting and really well received talks at FOSDEM and another one, about InnoDB locks and deadlocks, at Percona Live. Who could imagine that the next time I would speak at any Percona conference would be in 2019! The rest of the year was hard and sad at work, where I spent most of my time fighting for my Support team's values and colleagues against the upcoming changes, new leaders and new company approaches to services. Not so many blog posts as a result: just 12.
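    The basic technique from that series can be sketched as a small gdb command file. The breakpoint target lock_rec_lock is an InnoDB function name from the server source of that era; verify it against your version (for example with "info functions lock_rec" inside gdb) before relying on it.

```shell
# Write a gdb script that breaks on InnoDB's record-locking routine and
# prints a short backtrace every time a record lock is requested.
cat > innodb_locks.gdb <<'EOF'
break lock_rec_lock
commands
  backtrace 3
  continue
end
continue
EOF
# Intended usage against a live server (needs root and debug symbols):
#   gdb -p $(pidof mysqld) -x innodb_locks.gdb
```
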

  • 2016. "I'm Winston Wolf, I solve problems.".

    In January 2016, my internal fights in Percona influenced the content and highlights of my blog posts. One of them, my all-time favorite, was aimed at explaining how Support should work and why, based on a real-life story with a customer issue I resolved instantly, working the way I prefer! I have to note that my feature request (Bug #76030 - "Add a way to disable rowid-ordered retrieval to optimizer_switch") that led me to the idea about the root cause is still "Open" today. Even more interesting, the issue was about MariaDB. Later I lost all my fights for Support in Percona (the fights, but not the battle, as Percona today mostly operates the way I defended back then, not the way they planned or tried to enforce) and had to quit, ending up in MariaDB Corporation, where I stay to this day, happily! In general, 2016 was my most successful year as a blogger, with up to 1000 page views per day during some months. Many of the 29 blog posts published that year are worth re-reading and quoting, and became really popular, but I'd like to highlight mostly this series about MySQL Support people... Since 2016 I have been less active at conferences, due to my (now resolved) conflict with Percona on this topic as well, so FOSDEM was my only MySQL-related public appearance in 2016.

  • 2017. "perf Basics for MySQL Profiling".

    By 2017 I had found yet another type of tool of enormous value for my daily work in Support: profilers (not only pt-pmp, but also real ones), specifically perf on any modern Linux. I had found out (the hard way) that proper instrumentation of source code, while useful, as MySQL Performance Schema (and Oracle's wait events long before it) showed, is never 100% complete and ready to use in practice, unlike the proper OS-level profiling tools that Linux has had since kernel 2.6.x. Profilers helped me to solve real-life performance problems, and my findings were later summarized in this FOSDEM 2017 talk, the best one I ever made, IMHO. It was a fruitful year for blogging, and it was hard to pick one post out of 32. I continued my gdb studies of various MySQL features and ended up with a long workshop on this topic, presented in Sofia to a few attendees.
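    The perf basics from that post boil down to a couple of commands, wrapped here in a tiny helper script as a sketch: the 99 Hz sampling frequency and the report depth are arbitrary choices, and root privileges (or a relaxed perf_event_paranoid setting) are normally required to run it.

```shell
# Create a helper that samples all CPUs with call graphs for 10 seconds,
# then prints the hottest functions in plain text.
cat > profile_mysqld.sh <<'EOF'
#!/bin/sh
perf record -a -g -F 99 -- sleep 10
perf report --stdio | head -30
EOF
chmod +x profile_mysqld.sh
```
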

  • 2018. "On Some Problematic Oracle MySQL Server Features".

    In 2018 I had crazy plans to go to Oracle OpenWorld and speak there about problems with MySQL and Oracle's way of developing it. So, many of my numerous (37 in total) blog posts that year explained my views on these in detail. I picked the one above as it summarized the most problematic features of the MySQL server itself (namely the implementations of InnoDB data compression, FULLTEXT indexes, so-called "online" DDL, automatic recalculation of InnoDB "persistent" statistics, and partitioning, as well as some details on how all of these do (or do not) work well together). This was surely NOT a topic to be accepted for any MySQL conference! But posts from this series are still useful to review even today, as some of these features got zero improvements even in MySQL 8.0 (and just a few are planned or done by other vendors, with MyRocks providing better data compression and with some ongoing changes to InnoDB implemented by MariaDB Corporation). I had to cancel my FOSDEM talk that year due to family issues, and did not make any public presentations.

  • 2019. "Dynamic Tracing of MySQL Server With perf probe - Basic Example".


    The year 2019 was really successful for me. In May I got the "MySQL Community Contributor of the Year" award "for bug identification and submission", that is, for the activity that is the main topic of this blog. Obviously, many (of 26 in total) blog posts were about bugs, bug reporting and bug processing, same as my FOSDEM talk in 2019. But later I stopped exploiting the "MySQL bugs" topic and switched to my current interests (dynamic tracing and profiling with perf, BPF tools and bpftrace). So, I picked one of the key blog posts that provided additional details for my Percona Live Europe 2019 talk on the topic.
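    The general shape of such a perf probe session is sketched below. The binary path and the dispatch_command function name are illustrative assumptions; real probe candidates should be listed first with "perf probe --funcs -x /path/to/mysqld".

```shell
# Create a helper that adds a dynamic probe on a mysqld function, records
# hits for 10 seconds, prints them, then removes the probe. Run as root.
cat > trace_dispatch.sh <<'EOF'
#!/bin/sh
perf probe -x /usr/sbin/mysqld --add dispatch_command
perf record -e 'probe_mysqld:dispatch_command*' -aR -- sleep 10
perf script | head -20
perf probe --del 'probe_mysqld:*'
EOF
chmod +x trace_dispatch.sh
```
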

    I had a lot of plans for related studies and talks in 2020, but with COVID-19 breaking everything and conferences cancelled, it may happen that this blog will be the only public platform for all my MySQL-related activities this year.

I also checked many old photos, taken with a Nokia dumb phone, while working on this post. Who knows when (and if) I will ever see the West Pier in Brighton again and swim nearby...

I do not plan to stop blogging any time soon. So, stay tuned for the next 200 posts :)

Saturday, August 18, 2018

On Fine MySQL Manual

Today I am going to provide some details on the last item in my list of problems with Oracle's way of MySQL server development: the maintenance of the MySQL Manual. I stated that:
"MySQL Manual still have many details missing and is not fixed fast enough.
Moreover, it is not open source...
"
Let me explain the above:
  1. The MySQL Reference Manual is not open source. It used to be built from DocBook XML sources. Probably that's still the case, but you cannot find the source code in open repositories (please correct me if I am wrong, I tried to search...). That's because it is NOT open source. It says so clearly in the Preface:
    "This documentation is NOT distributed under a GPL license. Use of this documentation is subject to the following terms:
    ...
    Any other use, such as any dissemination of printed copies or use of this documentation, in whole or in part, in another publication, requires the prior written consent from an authorized representative of Oracle. Oracle and/or its affiliates reserve any and all rights to this documentation not expressly granted above.
    "
    It was NOT Oracle who closed the source (as far as I remember, the manual was not GPL even in 2004, when I started to care about MySQL in general). But Oracle had a chance to change the license and set up a better contribution process for MySQL Community users, with many good writers among them. They decided not to do this, so creators of forks and derived software have to duplicate efforts and rewrite everything themselves, and the only real way to help make the manual better is to report bugs.
  2. The quality of new documentation has not improved much. We, MySQL Community users, have to report bugs in the manual, as it still has details missing or documented wrongly. Let me illustrate this with the 10 or so most recent documentation requests users made (I skipped reports for features I do not care about for now, like group replication):
    • Bug #91955 - "8.0 broke plugin API for C". According to a comment from an Oracle developer in this bug, reported by Laurynas Biveinis, writing plugins in C is not supported any more... but this change is NOT properly documented.
    • Bug #91781 - "Add version matrix in "Chapter 2 Keywords and Reserved Words". A good idea in the manual that is not consistently implemented.
    • Bug #91743 - "performance_schema.data_locks not working correctly". A nice feature was added to the Performance Schema (now we can check and study InnoDB locks with SQL!), but it is still not completely and properly documented.
    • Bug #91654 - ""Handling an Unexpected Halt of a Replication Slave" Documentation is uncertain". This is also an example of improper bug handling: when I worked in Oracle, newer bugs were usually declared duplicates of older ones. Here the opposite decision was made, even though both reports were from the same user, Uday Varagani, who explicitly asked to change the decision. Obviously, documentation requests do not get any extra care compared to other kinds of bugs, quite the contrary...
    • Bug #91648 - "Numeric literals must be a number between 0 and 9999". Surely ports with numbers larger than 9999 can be used.
    • Bug #91642 - "Document how protocol version 4.0 and 4.1 differ on how strings are terminated". As noted by Rene' Cannao', comments in the code are still sometimes more useful than documentation.
    • Bug #91549 - "MDL lock queue order seems to depend on table names". Make sure to read the last comment in this nice request by Shane Bester, where Dmitry Lenev provides some details on the priorities of MDL locks. There are still cases when bugs and documentation requests document some details better than the fine MySQL Manual!
    • Bug #90997 - "Handling an Unexpected Halt of a Replication Slave" manual page is wrong". Sveta Smirnova highlighted a totally misleading statement in the manual.
    • Bug #90935 - "Modify relay_log_recovery Documentation". This simple documentation request has stayed "Open" for 3 months. Definitely, processing documentation requests is not the highest priority for Oracle engineers.
    • Bug #90680 - "dragnet logging: document how to use / find error_symbol codes". Even if a request comes from a customer or an otherwise well-known bug reporter, like Simon Mudd, and it's related to some totally new cool feature of MySQL 8, it can wait for the fix for months...
    You can make your own conclusions from the above, but I do not see any good trends in the way new documentation is created or documentation requests are processed recently. These are the same problems as 4 years ago (see more on that in a side note below).
  3. Older documentation requests sometimes get even less attention than recent ones, even though they may highlight problems with the software itself, not just the MySQL Manual. Let me illustrate this with a few bugs I reported:
    • Bug #83640 - "Locks set by DELETE statement on already deleted record". I explicitly asked to
      "...document in the manual what locks this kind of DELETE sets when it encountered a record already marked as deleted, and why"
      This still has NOT happened. Moreover, eventually both MySQL and MariaDB engineers decided that the current implementation of locking for this case is wrong and can be improved, so this report ended up as an InnoDB bug. Check the related MDEV-11215 for more details.
    • Bug #73413 - "Manual does not explain MTS implementation in details". This is one of my favorite documentation requests. I got a suggestion to explain, in detail, what I want to see documented. I tried my best, but if you want more details, read this blog.
    • Bug #72368 - "Empty/zero statistics for imported tablespace until explicit ANALYZE TABLE". This is a bug in the way persistent statistics (the feature I truly hate) are re-estimated "automatically" in InnoDB. But until the bug is fixed, I asked to document the current implementation (and the problems with it). So, where is the current implementation properly documented? Only in the bug reports, if anywhere...
    • Bug #71294 - "Manual page for P_S.FILE_INSTANCES table does not explain '~' in FILE_NAME".  They pretend they do not get the point:
      "Per Marc Alff, the Performance Schema is simply reporting what it gets from runtime. If runtime is reporting nonexistent filenames, that's a server issue.

      Recategorizing this as a server bug.
      "
    • Bug #71293 - "Manual page for P_S.FILE_INSTANCES table does not explain EVENT_NAME values". Now they are waiting for the new DOCUMENTATION column in the setup_instruments table to be filled in with something by server developers... The code is the documentation? OK, but as we know from experience (see slides 44-46 here), the chances of getting a bug in Performance Schema fixed fast are even smaller than those of seeing it properly and completely documented...
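As an aside, the performance_schema.data_locks table mentioned in Bug #91743 above can already be queried even while its documentation catches up. A sketch, with column names as in MySQL 8.0, saved here as a reusable snippet:

```shell
# Save a query that lists current InnoDB data locks via Performance Schema.
cat > show_innodb_locks.sql <<'EOF'
SELECT engine, object_schema, object_name, index_name,
       lock_type, lock_mode, lock_status, lock_data
FROM performance_schema.data_locks;
EOF
# Intended usage: mysql < show_innodb_locks.sql
```
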
There are more problems with MySQL documentation (not only the reference manual), but at the moment I consider the 3 highlighted and illustrated above the most serious.

Regent's Canal is nice. If only I knew how to operate the locks there... The MySQL Manual also misses some information about locks.
As a side note, this is not the first time I have written about the MySQL Manual. You can find some useful details in the following posts:
  • "What's wrong with MySQL Manual". In 2014, after spending a few weeks reporting up to 5 documentation bugs per day, I concluded that, essentially, there is nothing much wrong with it: it's fine, well indexed by Google and has meaningful human-readable URLs. The few problems listed were a lack of careful readers (I tried my best to fix that), limited attention to old documentation requests, some pages without much useful content, and a lack of "How to" documents. The latter I also tried to fix, to some small extent, in this blog; see the howto-tagged posts. The real fix came mostly from Percona's blog, though.
  • I have a special "missing manual" tag for blog posts that mention at least one bug in the manual.
  • I tried to use the "missing_manual" tag consistently for my own documentation requests. Last year I shared a detailed review of the bugs with that tag that were still active.
As yet another side note, I once tried to create a kind of community-driven "missing manual" project, but failed. Writing a manual from scratch is a (hard) full-time job, while my full-time job is providing support to users of MySQL, MariaDB and Percona software...

That said, I also wanted to join MySQL's documentation team in the past, but this was not possible, at least because I am not a native speaker. If anything has changed in this regard, I am still open to a job offer of this kind. My conditions for an acceptable offer from Oracle are known to all interested parties, and they include (but are not limited to) at least 4 times the salary I had before I quit (almost 6 years ago) and working for Oracle's office in Paris (because in Oracle the office you are formally employed by matters a lot for your happiness and success as an employee).

If no offer arrives in the upcoming week, I'll just continue to write my annoying but hopefully somewhat useful blog posts (until the MySQL Reference Manual becomes a true open source project ;)

Sunday, June 25, 2017

My First Steps with MariaDB ColumnStore

This is a "Howto" kind of post: some speculations and historical references aside, it will show you how to build MariaDB ColumnStore (current version 1.0.9) from GitHub sources, how to install and configure it for basic usage, and how to resolve some minor problems you may hit in the process. Keep reading and eventually you'll get to the real Howto below :)

* * *

I try not to care about any software issues besides good old MySQL, InnoDB storage engine internals, query optimization, and some MyRocks or a little bit of Galera clusters. But as I work in MariaDB Support, it is not possible to ignore other things production DBAs use and care about these days. So, recently I noted an increased number of issues related to MariaDB ColumnStore and had to step in to resolve some of them. To do this I first had to make sure this software is "supportable by me", that is, that I can build it from source on my own hardware (so that I can apply patches if needed and am not dependent on any packaging problems), install it, start and stop it, do some basic things with it, and know what to do to troubleshoot non-trivial problems.

Actually, I tried to check whether it is "supportable by me" a long time ago. First, during the company meeting in Berlin back in April 2016, I was shown some basic installation and usage steps; then, a year ago, I tried to build one of the very first public releases of ColumnStore, which ended up with MCOL-142 that was later fixed by the team. Starting from version 1.0.6 and my successful build of MariaDB ColumnStore based on the GitHub instructions, I was sure that I could support it if needed.

This week I found myself in a situation where I needed to install MariaDB ColumnStore to help a customer, but I had only my Ubuntu 14.04 netbook at hand. Based on my records, I had a few minor problems building the engine itself there, as of version 1.0.7:

...
-- Performing Test INLINE - Failed
-- Could NOT find LibXml2 (missing:  LIBXML2_LIBRARIES LIBXML2_INCLUDE_DIR)
CMake Error at CMakeLists.txt:91 (MESSAGE):
  Could not find a usable libxml2 development environment!


-- Configuring incomplete, errors occurred!
See also "/home/openxs/git/mariadb-columnstore-engine/CMakeFiles/CMakeOutput.log".
See also "/home/openxs/git/mariadb-columnstore-engine/CMakeFiles/CMakeError.log".

sudo apt-get install build-essential automake libboost-all-dev bison cmake libncurses5-dev libreadline-dev libperl-dev libssl-dev libxml2-dev libkrb5-dev flex
...
Setting up libboost-wave-dev:amd64 (1.54.0.1ubuntu1) ...
Setting up libboost-all-dev (1.54.0.1ubuntu1) ...
Setting up libperl-dev (5.18.2-2ubuntu1.1) ...
Setting up libhwloc-plugins (1.8-1ubuntu1.14.04.1) ...
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
openxs@ao756:~/git/mariadb-columnstore-engine$
openxs@ao756:~/git/mariadb-columnstore-engine$ cmake . -DSERVER_BUILD_INCLUDE_DIR=../mariadb-columnstore-server/include -DSERVER_SOURCE_ROOT_DIR=../mariadb-columnstore-server
-- Running cmake version 2.8.12.2
-- MariaDB-Columnstore 1.0.7
-- Found LibXml2: /usr/lib/x86_64-linux-gnu/libxml2.so (found version "2.9.1")
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   system
--   filesystem
--   thread
--   regex
--   date_time
SERVER_BUILD_INCLUDE_DIR = /home/openxs/git/mariadb-columnstore-engine/../mariadb-columnstore-server/include
SERVER_SOURCE_ROOT_DIR = /home/openxs/git/mariadb-columnstore-engine/../mariadb-columnstore-server
-- Configuring done
-- Generating done
-- Build files have been written to: /home/openxs/git/mariadb-columnstore-engine
openxs@ao756:~/git/mariadb-columnstore-engine$

...
[100%] Built target cpimport
[100%] Building CXX object writeengine/server/CMakeFiles/WriteEngineServer.dir/we_getfilesizes.cpp.o
/home/openxs/git/mariadb-columnstore-engine/writeengine/server/we_getfilesizes.cpp: In static member function ‘static int WriteEngine::WE_GetFileSizes::processFileName(messageqcpp::ByteStream&, std::string&, int)’:
/home/openxs/git/mariadb-columnstore-engine/writeengine/server/we_getfilesizes.cpp:275:19: warning: ‘fileSize’ may be used uninitialized in this function [-Wmaybe-uninitialized]
     bs << fileSize;
                   ^
/home/openxs/git/mariadb-columnstore-engine/writeengine/server/we_getfilesizes.cpp:276:29: warning: ‘compressedFileSize’ may be used uninitialized in this function [-Wmaybe-uninitialized]
     bs << compressedFileSize;
                             ^
Linking CXX executable WriteEngineServer
[100%] Built target WriteEngineServer
openxs@ao756:~/git/mariadb-columnstore-engine$ echo $?
0
openxs@ao756:~/git/mariadb-columnstore-engine$ make install
...
[100%] Built target cpimport
Install the project...
-- Install configuration: "RELWITHDEBINFO"
CMake Error at cmake_install.cmake:44 (FILE):
  file cannot create directory: /usr/local/mariadb/columnstore.  Maybe need
  administrative privileges.


make: *** [install] Error 1
So, first I had some missing build dependencies and had to read the manual and install them. Then both the server and the engine, as of version 1.0.7, were built successfully, but I had a problem during installation. The reason is clear: ColumnStore used to insist on installing into /usr/local/mariadb/columnstore only, while I insisted on getting it elsewhere (as I usually do for my source code builds). I failed, so I had to give up and use sudo. I got it installed and moved on at the time. If you care, the proper steps for installing into any directory as a non-root user are described in the KB now.

So, based on the previous attempts and my success on Fedora 23, I decided to get the recent 1.0.9 version from GitHub and build it on this Ubuntu 14.04 netbook. I followed the current GitHub instructions and a few of my older notes (hence some extra options and not just cmake ., but it seems these are truly optional and everything works as described):
...
openxs@ao756:~/git/mariadb-columnstore-server$ cd ../mariadb-columnstore-engine/
openxs@ao756:~/git/mariadb-columnstore-engine$ cmake . -DSERVER_BUILD_INCLUDE_DIR=../mariadb-columnstore-server/include -DSERVER_SOURCE_ROOT_DIR=../mariadb-columnstore-server
-- Running cmake version 2.8.12.2
-- MariaDB-Columnstore 1.0.9
-- Boost version: 1.54.0
-- Found the following Boost libraries:
--   system
--   filesystem
--   thread
--   regex
--   date_time
SERVER_BUILD_INCLUDE_DIR = /home/openxs/git/mariadb-columnstore-engine/../mariadb-columnstore-server/include
SERVER_SOURCE_ROOT_DIR = /home/openxs/git/mariadb-columnstore-engine/../mariadb-columnstore-server
-- Configuring done
-- Generating done
-- Build files have been written to: /home/openxs/git/mariadb-columnstore-engine
openxs@ao756:~/git/mariadb-columnstore-engine$ time make -j 2
...
[100%] Built target cpimport
[100%] Building CXX object writeengine/server/CMakeFiles/WriteEngineServer.dir/we_getfilesizes.cpp.o
/home/openxs/git/mariadb-columnstore-engine/writeengine/server/we_getfilesizes.cpp: In static member function ‘static int WriteEngine::WE_GetFileSizes::processFileName(messageqcpp::ByteStream&, std::string&, int)’:
/home/openxs/git/mariadb-columnstore-engine/writeengine/server/we_getfilesizes.cpp:275:19: warning: ‘fileSize’ may be used uninitialized in this function [-Wmaybe-uninitialized]
     bs << fileSize;
                   ^
/home/openxs/git/mariadb-columnstore-engine/writeengine/server/we_getfilesizes.cpp:276:29: warning: ‘compressedFileSize’ may be used uninitialized in this function [-Wmaybe-uninitialized]
     bs << compressedFileSize;
                             ^
Linking CXX executable WriteEngineServer
[100%] Built target WriteEngineServer

real    43m48.955s
user    80m37.430s
sys     4m17.819s

openxs@ao756:~/git/mariadb-columnstore-engine$ sudo make install
...
-- Installing: /usr/local/mariadb/columnstore/bin/cpimport
-- Set runtime path of "/usr/local/mariadb/columnstore/bin/cpimport" to "/usr/local/mariadb/columnstore/lib"

openxs@ao756:~/git/mariadb-columnstore-engine$ cd /usr/local/mariadb/columnstore/bin/
The next step is to run post-install (as root, or via sudo in my case):
openxs@ao756:/usr/local/mariadb/columnstore/bin$ sudo ./post-install
...
There is literally nothing interesting in the output, and everything should just work. Now it's time to configure settings for MariaDB ColumnStore and run the postConfigure script (again via sudo):
openxs@ao756:/usr/local/mariadb/columnstore/bin$ sudo ./postConfigure

This is the MariaDB ColumnStore System Configuration and Installation tool.
It will Configure the MariaDB ColumnStore System and will perform a Package
Installation of all of the Servers within the System that is being configured.

IMPORTANT: This tool should only be run on the Parent OAM Module
           which is a Performance Module, preferred Module #1

Prompting instructions:

        Press 'enter' to accept a value in (), if available or
        Enter one of the options within [], if available, or
        Enter a new value


===== Setup System Server Type Configuration =====

There are 2 options when configuring the System Server Type: single and multi

  'single'  - Single-Server install is used when there will only be 1 server configured
              on the system. It can also be used for production systems, if the plan is
              to stay single-server.

  'multi'   - Multi-Server install is used when you want to configure multiple servers now or
              in the future. With Multi-Server install, you can still configure just 1 server
              now and add on addition servers/modules in the future.

Select the type of System Server install [1=single, 2=multi] (2) > 1

Performing the Single Server Install.
Enter System Name (columnstore-1) >

===== Setup Storage Configuration =====


----- Setup Performance Module DBRoot Data Storage Mount Configuration -----

There are 2 options when configuring the storage: internal or external

  'internal' -    This is specified when a local disk is used for the DBRoot storage.
                  High Availability Server Failover is not Supported in this mode

  'external' -    This is specified when the DBRoot directories are mounted.
                  High Availability Server Failover is Supported in this mode.

Select the type of Data Storage [1=internal, 2=external] (1) >


Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm1' (1) >
The MariaDB ColumnStore port of '3306' is already in-use on local server
Enter a different port number > 3307


===== Performing Configuration Setup and MariaDB ColumnStore Startup =====

NOTE: Setting 'NumBlocksPct' to 50%
      Setting 'TotalUmMemory' to 25% of total memory.

Running the MariaDB ColumnStore setup scripts

post-mysqld-install Successfully Completed
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/usr/local/mariadb/columnstore/mysql/lib/mysql/mysql.sock' (2 "No such file or directory")
Error running post-mysql-install, /tmp/post-mysql-install.log
Exiting...
openxs@ao756:/usr/local/mariadb/columnstore/bin$
The process is quite clear, and I installed a simple test configuration on a single server, with internal storage (you may find interesting details on how data are stored in KB articles and in my previous post), a single Performance Module (PM) and a single DBRoot. The process and details are well described here. The tool noted that I have another running instance listening on port 3306 (it is the Percona Server that I have had on this netbook since my days working for Percona), so I picked port 3307 for the MariaDB instance. Now, it seems the post-mysqld-install part was executed successfully, and then, after waiting for some time, there was a failure while connecting to MySQL. I decided that it did not matter (I had some failures on Fedora 23 too, but eventually it worked), and as I was in a hurry, I proceeded to the KB article on mcsadmin and basic usage. I noted that ColumnStore was NOT started and tried to start it:
openxs@ao756:/usr/local/mariadb/columnstore/bin$ sudo ./mcsadmin startsystem
startsystem   Sat Jun 24 20:24:54 2017
startSystem command, 'columnstore' service is down, sending command to
start the 'columnstore' service on all modules


   System being started, please wait...........
**** startSystem Failed : check log files
openxs@ao756:/usr/local/mariadb/columnstore/bin$
So, I got my first real trouble with MariaDB ColumnStore, and I had no other option than to troubleshoot it. Let's check the content of the error log of the MariaDB server first:
openxs@ao756:/usr/local/mariadb/columnstore/bin$ tail ../mysql/db/ao756.err
tail: cannot open ‘../mysql/db/ao756.err’ for reading: Permission denied
openxs@ao756:/usr/local/mariadb/columnstore/bin$ sudo tail ../mysql/db/ao756.err
2017-06-24 20:25:14 139795698476992 [Note] InnoDB: Highest supported file format is Barracuda.
2017-06-24 20:25:14 139795698476992 [Note] InnoDB: starting tracking changed pages from LSN 1616918
2017-06-24 20:25:14 139795698476992 [Note] InnoDB: 128 rollback segment(s) are active.
2017-06-24 20:25:14 139795698476992 [Note] InnoDB: Waiting for purge to start
2017-06-24 20:25:14 139795698476992 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.35-80.0 started; log sequence number 1616918
2017-06-24 20:25:14 139795698476992 [Note] Plugin 'FEEDBACK' is disabled.
2017-06-24 20:25:14 139795698476992 [ERROR] /usr/local/mariadb/columnstore/mysql//bin/mysqld: unknown variable 'gtid_mode=ON'
2017-06-24 20:25:14 139795698476992 [ERROR] Aborting

170624 20:25:17 mysqld_safe mysqld from pid file /usr/local/mariadb/columnstore/mysql/db/ao756.pid ended
So, the problem is an incompatible setting in some my.cnf file that is read by default (this setting was used for the Percona Server I have installed from packages). After fixing the problem (I had to proceed, so I went for a dirty fix of commenting out the settings I found in /etc/my.cnf) it was easy to start ColumnStore:
openxs@ao756:/usr/local/mariadb/columnstore/bin$ sudo vi /etc/my.cnf
openxs@ao756:/usr/local/mariadb/columnstore/bin$ grep gtid /etc/my.cnf
#gtid_mode=ON
#enforce_gtid_consistency
openxs@ao756:/usr/local/mariadb/columnstore/bin$ sudo ./mcsadmin shutdownsystem

shutdownsystem   Sat Jun 24 20:42:21 2017

This command stops the processing of applications on all Modules within the MariaDB ColumnStore System

   Checking for active transactions
           Do you want to proceed: (y or n) [n]: y

   Stopping System...
   Successful stop of System

   Shutting Down System...
   Successful shutdown of System

openxs@ao756:/usr/local/mariadb/columnstore/bin$ sudo ./mcsadmin startsystem
startsystem   Sat Jun 24 20:43:15 2017
startSystem command, 'columnstore' service is down, sending command to
start the 'columnstore' service on all modules


   System being started, please wait..................
   Successful start of System

openxs@ao756:/usr/local/mariadb/columnstore/bin$ ps aux | grep column
root      6993  0.0  0.0  15336  1484 pts/2    S    20:43   0:00 /bin/bash /usr/local/mariadb/columnstore/bin/run.sh /usr/local/mariadb/columnstore/bin/ProcMon
root      6994  7.1  0.4 1214156 15808 pts/2   Sl   20:43   0:05 /usr/local/mariadb/columnstore/bin/ProcMon
root      7162  0.0  0.0   4456   788 pts/2    S    20:43   0:00 /bin/sh /usr/local/mariadb/columnstore/mysql//bin/mysqld_safe --datadir=/usr/local/mariadb/columnstore/mysql/db --pid-file=/usr/local/mariadb/columnstore/mysql/db/ao756.pid --ledir=/usr/local/mariadb/columnstore/mysql//bin
mysql     7380  0.6  4.1 1170124 160416 pts/2  Sl   20:43   0:00 /usr/local/mariadb/columnstore/mysql//bin/mysqld --basedir=/usr/local/mariadb/columnstore/mysql/ --datadir=/usr/local/mariadb/columnstore/mysql/db --plugin-dir=/usr/local/mariadb/columnstore/mysql/lib/plugin --user=mysql --log-error=/usr/local/mariadb/columnstore/mysql/db/ao756.err --pid-file=/usr/local/mariadb/columnstore/mysql/db/ao756.pid --socket=/usr/local/mariadb/columnstore/mysql/lib/mysql/mysql.sock --port=3307
root      7454  0.0  0.2 506564 10564 pts/2    Sl   20:43   0:00 /usr/local/mariadb/columnstore/bin/controllernode fg
root      7475  0.0  0.2 353284 10176 pts/2    Sl   20:43   0:00 /usr/local/mariadb/columnstore/bin/ServerMonitor
root      7502  0.0  0.4 177956 17984 pts/2    Sl   20:43   0:00 /usr/local/mariadb/columnstore/bin/workernode DBRM_Worker1 fg
openxs    7951  0.0  0.0  14652   960 pts/2    S+   20:44   0:00 grep --color=auto column
openxs@ao756:/usr/local/mariadb/columnstore/bin$
So, it seems the system started as expected and a lot of processes are running, but when I tried to create my first ColumnStore table I got a message that the storage engine is not supported! Same for InfiniDB.

I'll skip some troubleshooting steps and ideas I had, and just state that:

You should make sure that none of the MySQL configuration files that are read by default contain any incompatible settings at the moment you run the postConfigure script.
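One hedged way to check this up front is to ask mysqld which option files it reads by default and grep the usual locations for MySQL-only settings that MariaDB 10.1 rejects at startup. This is only a sketch: the file paths and the option list below are assumptions, so extend both for your own setup.

```shell
# Show the default option files mysqld would read (the line after
# "Default options" in the help output), then grep likely locations
# for MySQL-specific GTID settings. The option names are examples,
# not an exhaustive list of incompatible settings.
mysqld --verbose --help 2>/dev/null | grep -A1 'Default options' || true

for f in /etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf; do
    if [ -f "$f" ]; then
        grep -nE 'gtid_mode|enforce_gtid_consistency' "$f"
    fi
done
```

Any hit from the grep is a candidate to comment out (or move to a client-only section) before running postConfigure.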

You can surely re-run postConfigure when everything is fixed; this is the proper output for a completed, correct attempt:
openxs@ao756:/usr/local/mariadb/columnstore/bin$ sudo ./postConfigure

This is the MariaDB ColumnStore System Configuration and Installation tool.
It will Configure the MariaDB ColumnStore System and will perform a Package
Installation of all of the Servers within the System that is being configured.

IMPORTANT: This tool should only be run on the Parent OAM Module
           which is a Performance Module, preferred Module #1

Prompting instructions:

        Press 'enter' to accept a value in (), if available or
        Enter one of the options within [], if available, or
        Enter a new value


===== Setup System Server Type Configuration =====

There are 2 options when configuring the System Server Type: single and multi

  'single'  - Single-Server install is used when there will only be 1 server configured
              on the system. It can also be used for production systems, if the plan is
              to stay single-server.

  'multi'   - Multi-Server install is used when you want to configure multiple servers now or
              in the future. With Multi-Server install, you can still configure just 1 server
              now and add on addition servers/modules in the future.

Select the type of System Server install [1=single, 2=multi] (1) >

Performing the Single Server Install.
Enter System Name (columnstore-1) >

===== Setup Storage Configuration =====


----- Setup Performance Module DBRoot Data Storage Mount Configuration -----

There are 2 options when configuring the storage: internal or external

  'internal' -    This is specified when a local disk is used for the DBRoot storage.
                  High Availability Server Failover is not Supported in this mode

  'external' -    This is specified when the DBRoot directories are mounted.
                  High Availability Server Failover is Supported in this mode.

Select the type of Data Storage [1=internal, 2=external] (1) >

Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm1' (1) >


===== Performing Configuration Setup and MariaDB ColumnStore Startup =====

NOTE: Setting 'NumBlocksPct' to 50%
      Setting 'TotalUmMemory' to 25% of total memory.

Running the MariaDB ColumnStore setup scripts

post-mysqld-install Successfully Completed
post-mysql-install Successfully Completed

Starting MariaDB Columnstore Database Platform

MariaDB ColumnStore Database Platform Starting, please wait ..... DONE

System Catalog Successfull Created

MariaDB ColumnStore Install Successfully Completed, System is Active

Enter the following command to define MariaDB ColumnStore Alias Commands

. /usr/local/mariadb/columnstore/bin/columnstoreAlias

Enter 'mcsmysql' to access the MariaDB ColumnStore SQL console
Enter 'mcsadmin' to access the MariaDB ColumnStore Admin console

openxs@ao756:/usr/local/mariadb/columnstore/bin$

Note the new messages at the end. If you do NOT see them, do not blindly assume that everything is working as expected. Also note that the tool remembers previous choices.

Now I've got access to the engine:
openxs@ao756:/usr/local/mariadb/columnstore/bin$ ../mysql/bin/mysql -uroot test --host=127.0.0.1 --port=3307
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.1.23-MariaDB Columnstore 1.0.9-1
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [test]> show engines;
+--------------------+---------+--------------------------------------------------------------------------------------------------+--------------+------+------------+
| Engine             | Support | Comment                                                                                          | Transactions | XA   | Savepoints |
+--------------------+---------+--------------------------------------------------------------------------------------------------+--------------+------+------------+
| Columnstore        | YES     | Columnstore storage engine                                                                       | YES          | NO   | NO         |
| CSV                | YES     | CSV storage engine                                                                               | NO           | NO   | NO         |
| MyISAM             | YES     | MyISAM storage engine                                                                            | NO           | NO   | NO         |
| MEMORY             | YES     | Hash based, stored in memory, useful for temporary tables                                        | NO           | NO   | NO         |
| MRG_MyISAM         | YES     | Collection of identical MyISAM tables                                                            | NO           | NO   | NO         |
| InfiniDB           | YES     | Columnstore storage engine (deprecated: use columnstore)                                         | YES          | NO   | NO         |
| SEQUENCE           | YES     | Generated tables filled with sequential values                                                   | YES          | NO   | YES        |
| Aria               | YES     | Crash-safe tables with MyISAM heritage                                                           | NO           | NO   | NO         |
| InnoDB             | DEFAULT | Percona-XtraDB, Supports transactions, row-level locking, foreign keys and encryption for tables | YES          | YES  | YES        |
| PERFORMANCE_SCHEMA | YES     | Performance Schema                                                                               | NO           | NO   | NO         |
+--------------------+---------+--------------------------------------------------------------------------------------------------+--------------+------+------------+
10 rows in set (0.00 sec)

MariaDB [test]> create table taa(id int, c1 int, c2 date, c3 int) engine=columnstore;
Query OK, 0 rows affected (3.81 sec)

MariaDB [test]> insert into taa values(1,1,now(),1), (2,2,NULL,2),(3,NULL,NULL,3);
Query OK, 3 rows affected, 1 warning (1.15 sec)
Records: 3  Duplicates: 0  Warnings: 1

MariaDB [test]> show warnings\G
*************************** 1. row ***************************
  Level: Note
   Code: 1265
Message: Data truncated for column 'c2' at row 1
1 row in set (0.00 sec)

MariaDB [test]> select * from taa;
+------+------+------------+------+
| id   | c1   | c2         | c3   |
+------+------+------------+------+
|    1 |    1 | 2017-06-24 |    1 |
|    2 |    2 | NULL       |    2 |
|    3 | NULL | NULL       |    3 |
+------+------+------------+------+
3 rows in set (0.15 sec)

MariaDB [test]> create table tab like taa;
ERROR 1178 (42000): The storage engine for the table doesn't support The syntax or the data type(s) is not supported by Columnstore. Please check the Columnstore syntax guide for supported syntax or data types.
MariaDB [test]> create table tab engine=columnstore select * from taa;
ERROR 1178 (42000): The storage engine for the table doesn't support The syntax or the data type(s) is not supported by Columnstore. Please check the Columnstore syntax guide for supported syntax or data types.
MariaDB [test]> create table tab(id int, c1 int, c2 date, c3 int) engine=columnstore;
Query OK, 0 rows affected (3.40 sec)

MariaDB [test]> alter table taa engine=InnoDB;
ERROR 1030 (HY000): Got error 1815 "Unknown error 1815" from storage engine Columnstore
MariaDB [test]>
Note some limitations ColumnStore has compared to the "normal" MariaDB it is based on. Some things do not work, and some unique error messages and, surely, bugs are present. You can read about current known limitations here.

My real goal was to check if NULL values are properly loaded by the cpimport tool, so I tried:
openxs@ao756:/usr/local/mariadb/columnstore/bin$ ./cpimport -e 1 -s '\t' -n1 test tab
Locale is : C
Column delimiter : \t

Using table OID 3005 as the default JOB ID
Input file(s) will be read from : STDIN
Job description file : /usr/local/mariadb/columnstore/data/bulk/tmpjob/3005_D20170624_T211331_S295214_Job_3005.xml
I/O error : Permission denied
I/O error : Permission denied
2017-06-24 21:13:31 (16045) ERR  :  file /usr/local/mariadb/columnstore/data/bulk/tmpjob/3005_D20170624_T211331_S295214_Job_3005.xml does not exist [1055]
Error in loading job information;  The File does not exist. ; cpimport.bin is terminating.
openxs@ao756:/usr/local/mariadb/columnstore/bin$
The I/O errors above are filesystem permission problems while creating the job file, so I retried the same load under sudo:
openxs@ao756:/usr/local/mariadb/columnstore/bin$ cat /tmp/taa.txt | sudo ./cpimport -e 1 -s '\t' -n1 test tab
Locale is : C
Column delimiter : \t

Using table OID 3005 as the default JOB ID
Input file(s) will be read from : STDIN
Job description file : /usr/local/mariadb/columnstore/data/bulk/tmpjob/3005_D20170624_T211615_S997504_Job_3005.xml
Log file for this job: /usr/local/mariadb/columnstore/data/bulk/log/Job_3005.log
2017-06-24 21:16:16 (16582) INFO : successfully loaded job file /usr/local/mariadb/columnstore/data/bulk/tmpjob/3005_D20170624_T211615_S997504_Job_3005.xml
2017-06-24 21:16:16 (16582) INFO : Job file loaded, run time for this step : 0.0409038 seconds
2017-06-24 21:16:16 (16582) INFO : PreProcessing check starts
2017-06-24 21:16:16 (16582) INFO : PreProcessing check completed
2017-06-24 21:16:16 (16582) INFO : preProcess completed, run time for this step : 0.209377 seconds
2017-06-24 21:16:16 (16582) INFO : No of Read Threads Spawned = 1
2017-06-24 21:16:16 (16582) INFO : No of Parse Threads Spawned = 3
2017-06-24 21:16:16 (16582) INFO : Reading input from STDIN to import into table test.tab...
2017-06-24 21:16:16 (16582) INFO : For table test.tab: 4 rows processed and 4 rows inserted.
2017-06-24 21:16:16 (16582) WARN : Column test.tab.c2; Number of invalid dates replaced with zero value : 1
2017-06-24 21:16:17 (16582) INFO : Bulk load completed, total run time : 1.28894 seconds

openxs@ao756:/usr/local/mariadb/columnstore/bin$
The remaining problem is the warning for the date column (note 4 rows processed instead of 3): it is caused by the first line of the file, the column titles produced by the mysql command line client by default. One has to add the -N option to get proper data loaded:
openxs@ao756:/usr/local/mariadb/columnstore/bin$ ../mysql/bin/mysql -uroot test --host=127.0.0.1 --port=3307 -N -e"select * from taa" >/tmp/taa.txt
openxs@ao756:/usr/local/mariadb/columnstore/bin$ cat /tmp/taa.txt
1 1 2017-06-24 1
2 2 NULL 2
3 NULL NULL 3
openxs@ao756:/usr/local/mariadb/columnstore/bin$ cat /tmp/taa.txt | sudo ./cpimport -e 1 -s '\t' -n1 test tab
Locale is : C
Column delimiter : \t

Using table OID 3005 as the default JOB ID
Input file(s) will be read from : STDIN
Job description file : /usr/local/mariadb/columnstore/data/bulk/tmpjob/3005_D20170624_T212114_S554015_Job_3005.xml
Log file for this job: /usr/local/mariadb/columnstore/data/bulk/log/Job_3005.log
2017-06-24 21:21:14 (18107) INFO : successfully loaded job file /usr/local/mariadb/columnstore/data/bulk/tmpjob/3005_D20170624_T212114_S554015_Job_3005.xml
2017-06-24 21:21:14 (18107) INFO : Job file loaded, run time for this step : 0.0412772 seconds
2017-06-24 21:21:14 (18107) INFO : PreProcessing check starts
2017-06-24 21:21:14 (18107) INFO : PreProcessing check completed
2017-06-24 21:21:14 (18107) INFO : preProcess completed, run time for this step : 0.205652 seconds
2017-06-24 21:21:14 (18107) INFO : No of Read Threads Spawned = 1
2017-06-24 21:21:14 (18107) INFO : No of Parse Threads Spawned = 3
2017-06-24 21:21:14 (18107) INFO : Reading input from STDIN to import into table test.tab...
2017-06-24 21:21:15 (18107) INFO : For table test.tab: 3 rows processed and 3 rows inserted.
2017-06-24 21:21:15 (18107) INFO : Bulk load completed, total run time : 1.28646 seconds
openxs@ao756:/usr/local/mariadb/columnstore/bin$ ../mysql/bin/mysql -uroot test --host=127.0.0.1 --port=3307 -N -e"select * from tab"
+------+------+------------+------+
|    1 |    1 | 2017-06-24 |    1 |
|    2 |    2 | NULL       |    2 |
|    3 | NULL | NULL       |    3 |
+------+------+------------+------+
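An alternative to re-dumping with -N, assuming you already have a dump file with a header line, is to strip the first line with tail before piping it into cpimport. The sample file below is hypothetical; the commented cpimport command mirrors the one used above.

```shell
# tail -n +2 prints a file starting from its second line, dropping the
# column-title header that the mysql client writes by default.
printf 'id\tc1\n1\t10\n2\t20\n' > /tmp/taa_demo.txt   # hypothetical dump with header
tail -n +2 /tmp/taa_demo.txt
# With the real file and install paths from this post, the load would be:
# tail -n +2 /tmp/taa.txt | sudo ./cpimport -e 1 -s '\t' -n1 test tab
```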
To summarize:
  1. These days one can easily build, install and configure the current version of MariaDB ColumnStore from GitHub sources.
  2. As long as you read the KB articles and GitHub documentation carefully, there is no problem installing and configuring even a non-default environment where other MySQL versions co-exist.
  3. You should make sure that none of the configuration files that are read by default contain any incompatible settings (MySQL 5.7-specific ones may fall into that category, as does anything about GTID-based replication in MySQL).
  4. cpimport is a cool tool for parallel/fast data loads into MariaDB ColumnStore, but it does assume that the input is a CSV or tab-separated file, and it tries to load all rows.
  5. Read error messages carefully.
  6. Make sure you read this article if you want to use ColumnStore.

Sunday, January 29, 2017

perf Basics for MySQL Profiling

Oprofile was widely used for MySQL profiling on Linux in the past. But since 2010 and the 2.6.31 Linux kernel, another profiler, perf, has gained increasing popularity. It uses the performance counters subsystem in Linux (CPU hardware registers that count hardware events such as instructions executed). perf is capable of lightweight profiling. It is included in the Linux kernel, under tools/perf (so the features available depend on the kernel version), and is frequently updated and enhanced.

So, perf is probably the future of profiling on Linux, and it makes sense to discuss its basic usage for profiling MySQL servers. For detailed discussions of the features provided, numerous examples (not related to MySQL) and links, I suggest reading the great posts by Brendan Gregg, starting from this one.

I am going to use Ubuntu 14.04 for this basic Howto, with the following perf version provided by the linux-tools-common package:

openxs@ao756:~/dbs/maria10.1$ perf --version
perf version 3.13.11-ckt39
openxs@ao756:~/dbs/maria10.1$ dpkg -S `which perf`
linux-tools-common: /usr/bin/perf
On RPM-based systems the package name is just perf. I plan to discuss and show details of (other versions of) perf usage on CentOS 6.8 and Fedora 25 later.

I'll work with a bit outdated MariaDB 10.1.x (built from source) that I have up and running at hand. In the case of standard vendor packages, make sure that you have debug symbols installed (perf will inform you if some are missing anyway) to get useful results:
openxs@ao756:~$ dpkg -l | grep -i percona | grep dbg
ii  percona-server-5.7-dbg                                5.7.16-10-1.trusty                                  amd64        Debugging package for Percona Server
Also, ideally the binaries that you are profiling should be built with the -fno-omit-frame-pointer option.
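A quick, hedged way to check both points before profiling (a sketch; the binary path is an assumption, adjust it to your install):

```shell
# 'file' reports "not stripped" when the binary still carries symbols;
# a stripped binary means you need the -dbg / debuginfo package.
BIN=${BIN:-/usr/sbin/mysqld}    # assumed path; adjust to your install
if [ -f "$BIN" ]; then
    if file "$BIN" | grep -q 'not stripped'; then
        echo "symbols present"
    else
        echo "stripped: install the matching -dbg/debuginfo package"
    fi
fi
# When building from source, keep frame pointers explicitly, e.g.:
# cmake . -DCMAKE_C_FLAGS="-fno-omit-frame-pointer" -DCMAKE_CXX_FLAGS="-fno-omit-frame-pointer"
```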

As in the previous post about basic oprofile usage, I'll try to start with a primitive test case based on Bug #39630, for simplicity and to establish basic trust in the tool's outputs, but this time I also want to try profiling some concurrent load closer to real life, along the lines of Bug #83912.

The basic two(!) steps to use perf for studying some problematic period of a MySQL server are the following:
  1. Assuming that we want to get a profile of the mysqld process during some problematic period, let's say in some script, we start recording samples for, let's say, 30 seconds, as the root user or via sudo:
    openxs@ao756:~/dbs/maria10.1$ sudo perf record -p `pidof mysqld` sleep 30
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.052 MB perf.data (~2257 samples) ]
  2. Samples are collected in the perf.data file in the current directory by default:
    openxs@ao756:~/dbs/maria10.1$ ls -l perf.data
    -rw------- 1 root root 56768 січ 29 13:53 perf.data
    When collection completes we can get profiling information using the perf report command. If you want to save the result into some file or just review it, add the --stdio option; otherwise you end up in a text-based "interactive" user interface of perf that, while it can be useful, may be misleading and weird for beginners. For the 'select benchmark(500000000, 2*2)' case I've got:
    openxs@ao756:~/dbs/maria10.1$ sudo perf report --stdio
    # ========
    # captured on: Sun Jan 29 14:01:24 2017
    # hostname : ao756
    # os release : 3.13.0-100-generic
    # perf version : 3.13.11-ckt39
    # arch : x86_64
    # nrcpus online : 2
    # nrcpus avail : 2

    # cpudesc : Intel(R) Pentium(R) CPU 987 @ 1.50GHz
    # cpuid : GenuineIntel,6,42,7
    # total memory : 3861452 kB
    # cmdline : /usr/lib/linux-tools-3.13.0-100/perf record -p 4270 sleep 30
    # event : name = cycles, type = 0, config = 0x0, config1 = 0x0, config2 = 0x0, e
    # HEADER_CPU_TOPOLOGY info available, use -I to display
    # HEADER_NUMA_TOPOLOGY info available, use -I to display
    # pmu mappings: cpu = 4, software = 1, tracepoint = 2, uncore_cbox_0 = 6, uncore
    # ========
    #
    # Samples: 248  of event 'cycles'
    # Event count (approx.): 24811349
    #
    # Overhead  Command       Shared Object
    # ........  .......  ..................  .......................................
    #
        11.60%   mysqld  [kernel.kallsyms]   [k] update_blocked_averages
         8.12%   mysqld  [kernel.kallsyms]   [k] update_cfs_rq_blocked_load
         6.98%   mysqld  [kernel.kallsyms]   [k] find_busiest_group
         5.76%   mysqld  [kernel.kallsyms]   [k] sys_io_getevents
         4.40%   mysqld  libc-2.19.so        [.] memset
         3.21%   mysqld  [kernel.kallsyms]   [k] update_curr
         3.03%   mysqld  [kernel.kallsyms]   [k] __calc_delta
         2.96%   mysqld  [kernel.kallsyms]   [k] put_prev_task_fair
         2.46%   mysqld  [kernel.kallsyms]   [k] timerqueue_add
         2.12%   mysqld  [kernel.kallsyms]   [k] cpumask_next_and
     2.11%   mysqld  libpthread-2.19.so  [.] __pthread_mutex_cond_lock
     2.10%   mysqld  mysqld              [.] os_aio_linux_handle(unsigned long,
    ...
         1.09%   mysqld  mysqld              [.] buf_flush_lru_manager_thread
    ...
Now, the result above looks similar to, and as suspicious as, the result I got with operf (which is also based on perf_events internally) on the same system and MariaDB version in the previous post. Everyone wonders: where is the time spent on multiplication etc. in the above? For operf (see my comment) I had found out that system-wide profiling gives the expected results. It seems perf record -p pid also collects samples for a single CPU (the one it runs on, or the one running that process actively, or just some one), and this is a problem for multi-threaded programs like mysqld when they are executed on multi-core hardware (almost all hardware is like that these days, maybe some virtual machines aside). The solution is simple: add the -a option for perf record:
       -a, --all-cpus
           System-wide collection from all CPUs.
and do not refer to mysqld; you can filter out the related results later (for example, with sudo perf report --stdio | grep mysqld | head -10, or interactively when the --stdio option is not used). Like this:
openxs@ao756:~/dbs/maria10.1$ sudo perf record -a sleep 30
[ perf record: Woken up 15 times to write data ]
[ perf record: Captured and wrote 4.411 MB perf.data (~192717 samples) ]
openxs@ao756:~/dbs/maria10.1$ sudo perf report --stdio
# ========
# captured on: Sun Jan 29 14:27:31 2017
# hostname : ao756
# os release : 3.13.0-100-generic
# perf version : 3.13.11-ckt39
# arch : x86_64
# nrcpus online : 2
...
# Samples: 83K of event 'cycles'
# Event count (approx.): 30621323147
#
# Overhead          Command                  Shared Object
# ........  ...............  .............................  ............................................................................................................
#
    31.22%           mysqld  mysqld                         [.] Item_func_mul::int_op()
    16.57%           mysqld  mysqld                         [.] Item_func_hybrid_field_type::val_int()
    16.51%           mysqld  mysqld                         [.] Item_hybrid_func::result_type() const
    16.11%           mysqld  mysqld                         [.] Item_func_benchmark::val_int()
     9.92%           mysqld  mysqld                         [.] Item_int::val_int()
     3.27%           mysqld  mysqld                         [.] Type_handler_int_result::cmp_type() const
     3.25%           mysqld  mysqld                         [.] Type_handler_int_result::result_type() const
     0.27%      kworker/0:2  [kernel.kallsyms]              [k] aes_encrypt
     0.18%           compiz  i965_dri.so                    [.] 0x00000000000def10
     0.14%          swapper  [kernel.kallsyms]              [k] intel_idle
...
So, a system-wide profile provides information we can trust, even on multi-core systems for multi-threaded programs. If you know some other, less intrusive way to get profiles that do not miss the details from threads running on other cores, please tell me. I was NOT able to find one quickly.

By the way, the nice interactive perf top command shows the expected results as well while this primitive test runs (it is still a system-wide profile):

Samples: 17K of event 'cycles', Event count (approx.): 5061557268
 28,29%  mysqld                         [.] Item_func_mul::int_op()
 15,05%  mysqld                         [.] Item_func_benchmark::val_int()
 15,01%  mysqld                         [.] Item_hybrid_func::result_type() cons
 15,01%  mysqld                         [.] Item_func_hybrid_field_type::val_int
  9,12%  mysqld                         [.] Item_int::val_int()
  3,34%  perf                           [.] 0x00000000000564db
  3,18%  mysqld                         [.] Type_handler_int_result::cmp_type()
  2,93%  mysqld                         [.] Type_handler_int_result::result_type
  0,96%  libc-2.19.so                   [.] __strstr_sse2
  0,61%  libc-2.19.so                   [.] __GI___strcmp_ssse3
  0,24%  [kernel]                       [k] kallsyms_expand_symbol.constprop.1
  0,21%  [kernel]                       [k] module_get_kallsym
  0,20%  libc-2.19.so                   [.] _int_malloc
  0,20%  libharfbuzz.so.0.927.0         [.] 0x0000000000028051
  0,19%  i965_dri.so                    [.] 0x000000000010a9a0
  0,18%  [kernel]                       [k] format_decode
  0,16%  [kernel]                       [k] vsnprintf
  0,15%  libc-2.19.so                   [.] memchr
  0,15%  libc-2.19.so                   [.] __strchr_sse2
  0,14%  [kernel]                       [k] number.isra.1
  0,13%  [kernel]                       [k] memcpy
  0,09%  libc-2.19.so                   [.] strlen
Press '?' for help on key bindings
Press 'q' to get out of that screen.

perf record may also be used the same way as operf: run in the foreground and interrupted with Ctrl-C (the examples are from my last night's session on CentOS):
[root@centos ~]# perf record -ag
^C[ perf record: Woken up 63 times to write data ]
[ perf record: Captured and wrote 15.702 MB perf.data (226643 samples) ]
or it can be started in the background and then interrupted with a SIGINT signal sent to the process:
[root@centos ~]# perf record -ag &
[1] 2353
[root@centos ~]# mysql -uroot -e'select benchmark(500000000,2*2);' test
+--------------------------+
| benchmark(500000000,2*2) |
+--------------------------+
|                        0 |
+--------------------------+
[root@centos ~]# kill -sigint 2353
[ perf record: Woken up 90 times to write data ]
[root@centos ~]# [ perf record: Captured and wrote 22.366 MB perf.data (322362 samples) ]

[1]+  Interrupt               perf record -ag
[root@centos ~]# ps aux | grep perf
root      2358  0.0  0.0 103312   864 pts/0    S+   22:16   0:00 grep perf
You've probably noted the additional -g option in the examples above (explanation pasted from the perf help record/man perf-record output):
       -g
           Enables call-graph (stack chain/backtrace) recording.
Another option often used is -F (like -F99 or -F1000):
       -F, --freq=
           Profile at this frequency.
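Putting these options together (a sketch, guarded so it degrades gracefully when perf is not installed): sample all CPUs with call graphs at roughly 99 Hz, a commonly used frequency that avoids running in lockstep with timer interrupts, then review the report:

```shell
# Record system-wide (-a) with call graphs (-g) at ~99 Hz (-F 99) for
# 30 seconds, then print the first lines of the textual report.
if command -v perf >/dev/null 2>&1; then
    sudo perf record -a -g -F 99 sleep 30
    sudo perf report --stdio | head -30
else
    echo "perf not found (package: linux-tools-common on Ubuntu, perf on RPM systems)"
fi
```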
For select benchmark(500000000, 2*2) case -g does not show anything really unexpected or enlightening:

openxs@ao756:~/dbs/maria10.1$ sudo perf record -ag sleep 30
[sudo] password for openxs:
[ perf record: Woken up 64 times to write data ]
[ perf record: Captured and wrote 16.496 MB perf.data (~720730 samples) ]
openxs@ao756:~/dbs/maria10.1$ sudo perf report --stdio
...

# Overhead          Command                  Shared Object
# ........  ...............  .............................  ............................................................................................................
#
    31.02%           mysqld  mysqld                         [.] Item_func_mul::int_op()
                     |
                     --- Item_func_mul::int_op()
                        |
                        |--94.56%-- Item_func_hybrid_field_type::val_int()
                        |          Item_func_benchmark::val_int()
                        |          Item::send(Protocol*, String*)
                        |          Protocol::send_result_set_row(List<Item>*)
                        |          select_send::send_data(List<Item>&)
                        |          JOIN::exec_inner()
                        |          JOIN::exec()
                        |          mysql_select(THD*, Item***, TABLE_LIST*, unsigned int, List<Item>&, Item*, unsigned int, st_order*, st_order*, Item*, st_order*, unsi
                        |          handle_select(THD*, LEX*, select_result*, unsigned long)
                        |          execute_sqlcom_select(THD*, TABLE_LIST*)
                        |          mysql_execute_command(THD*)
                        |          mysql_parse(THD*, char*, unsigned int, Parser_state*)
                        |          dispatch_command(enum_server_command, THD*, char*, unsigned int)
                        |          do_command(THD*)
                        |          do_handle_one_connection(THD*)
                        |          handle_one_connection
                        |          start_thread
                        |
                         --5.44%-- Item_func_benchmark::val_int()
                                   Item::send(Protocol*, String*)
                                   Protocol::send_result_set_row(List<Item>*)
                                   select_send::send_data(List<Item>&)
                                   JOIN::exec_inner()
                                   JOIN::exec()
                                   mysql_select(THD*, Item***, TABLE_LIST*, unsigned int, List<Item>&, Item*, unsigned int, st_order*, st_order*, Item*, st_order*, unsi
                                   handle_select(THD*, LEX*, select_result*, unsigned long)
                                   execute_sqlcom_select(THD*, TABLE_LIST*)
                                   mysql_execute_command(THD*)
                                   mysql_parse(THD*, char*, unsigned int, Parser_state*)
                                   dispatch_command(enum_server_command, THD*, char*, unsigned int)
                                   do_command(THD*)
                                   do_handle_one_connection(THD*)
                                   handle_one_connection
                                   start_thread
...


But we clearly see backtraces/callgraphs.

Note that this tree starts with the on-CPU functions and works back through the ancestry. This approach is called a "callee-based call graph". It can be flipped by using -G for an "inverted call graph", or by using the caller option to -g/--call-graph instead of the callee default. The hottest (most frequent) stack trace in this perf call graph occurred in 29.33% of samples, which is the product of the overhead percentage and the top stack leaf (31.02%*94.56%, which are relative rates). perf report can also be run with "-g graph" to show absolute overhead rates.
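These reporting variants re-use the same perf.data, so they can be tried without re-recording (a sketch, guarded in case perf or the data file is absent):

```shell
# -G flips the tree into a caller-based ("inverted") call graph;
# "-g graph" prints absolute overhead rates at every stack frame.
if command -v perf >/dev/null 2>&1 && [ -e perf.data ]; then
    sudo perf report --stdio -G | head -40
    sudo perf report --stdio -g graph | head -40
else
    echo "perf or perf.data not available; run 'perf record -ag' first"
fi
```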

I'd like to conclude this blog post with profiling a system running the test load presented in Bug #83912 (which, in turn, is just one of the observations based on a very complex real-life performance problem I am still working on). There we run N+2 long-running UPDATEs on N+2 different tables on a system with N cores. While the UPDATEs run, a SELECT asking for a single record by PRIMARY KEY from a different table may sometimes run slow, from fractions of a second to many seconds.
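The load can be sketched as follows. Assumptions: the mysql client is on PATH and connects as in this post, and the UPDATE statements are illustrative placeholders; the bug report has the exact ones.

```shell
# Start N+2 = 4 long-running UPDATEs in the background, one per table,
# then time a point SELECT on t0 repeatedly to catch the slow runs.
if command -v mysql >/dev/null 2>&1; then
    for i in 1 2 3 4; do
        mysql -uroot test -e "update t$i set c1 = c1 + 1" &
    done
    for n in 1 2 3 4 5 6 7 8 9 10; do
        time mysql -uroot test -e "select * from t0 where id = 13" >/dev/null
    done
    wait
else
    echo "mysql client not found; this is only a sketch of the test load"
fi
```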

So, we have the table to select from, t0, and 4 other (N+2, with N=2 on my netbook used for testing) tables to update, t1 ... t4, created:
openxs@ao756:~/dbs/maria10.1$ bin/mysql -uroot test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 21
Server version: 10.1.18-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [test]> select count(*) from t0;
+----------+
| count(*) |
+----------+
|  2097152 |
+----------+
1 row in set (9.33 sec)

MariaDB [test]> select * from t0 where id = 13;
+----+------+----------------------+
| id | c1   | c2                   |
+----+------+----------------------+
| 13 |  269 | 0.008545639932663777 |
+----+------+----------------------+
1 row in set (0.00 sec)

MariaDB [test]> show table status like 't1'\G
*************************** 1. row ***************************
           Name: t1
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 2079632
 Avg_row_length: 139
    Data_length: 289177600
Max_data_length: 0
   Index_length: 0
      Data_free: 6291456
 Auto_increment: 2424797
    Create_time: 2017-01-29 12:31:57
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: stats_persistent=1
        Comment:
1 row in set (0.11 sec)

MariaDB [test]> explain select * from t0 where id = 13;
+------+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
| id   | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+------+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
|    1 | SIMPLE      | t0    | const | PRIMARY       | PRIMARY | 4       | const |    1 |       |
+------+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)

MariaDB [test]> explain update t1 set c2 = rand() where id between 2000 and 300000;
+------+-------------+-------+-------+---------------+---------+---------+------+--------+-------------+
| id   | select_type | table | type  | possible_keys | key     | key_len | ref  | rows   | Extra       |
+------+-------------+-------+-------+---------------+---------+---------+------+--------+-------------+
|    1 | SIMPLE      | t1    | range | PRIMARY       | PRIMARY | 4       | NULL | 339246 | Using where |
+------+-------------+-------+-------+---------------+---------+---------+------+--------+-------------+
1 row in set (0.00 sec)
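The post does not show how these tables were created, so here is a plausible sketch that generates matching DDL; the exact column definitions are my assumptions, guessed from the SELECT and SHOW TABLE STATUS output above:

```shell
# Hypothetical DDL generator for t0 ... t4; column types are guessed from the
# output above (id - int PRIMARY KEY, c1 - int, c2 - double).
# Pipe the result into: bin/mysql -uroot test
for t in t0 t1 t2 t3 t4; do
  cat <<SQL
CREATE TABLE $t (
  id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
  c1 int,
  c2 double
) ENGINE=InnoDB;
SQL
done
```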
I've started 4 (N+2) threads, each updating a large part of one of the tables:
openxs@ao756:~/dbs/maria10.1$ for i in `seq 1 4`; do bin/mysqlslap -uroot --no-drop --create-schema=test --number-of-queries=1000 --iterations=10 --concurrency=1 --query="update t$i set c2 = rand() where id between 2000 and 300000;" & done
[1] 15513
[2] 15514
[3] 15515
[4] 15516

In the SHOW PROCESSLIST we see:
...
| 26 | root | localhost | test | Query   |   10 | updating | update t2 set c2 = rand() where id between 2000 and 300000 |    0.000 |
| 27 | root | localhost | test | Query   |   32 | updating | update t1 set c2 = rand() where id between 2000 and 300000 |    0.000 |
| 28 | root | localhost | test | Query   |    5 | updating | update t4 set c2 = rand() where id between 2000 and 300000 |    0.000 |
| 29 | root | localhost | test | Query   |   11 | updating | update t3 set c2 = rand() where id between 2000 and 300000 |    0.000 |
...

Let's see with perf what these threads are doing while they stay in the "updating" status for many seconds. First without -g:
openxs@ao756:~/dbs/maria10.1$ sudo perf record -a sleep 30
[sudo] password for openxs:
[ perf record: Woken up 15 times to write data ]
[ perf record: Captured and wrote 4.260 MB perf.data (~186128 samples) ]
openxs@ao756:~/dbs/maria10.1$ sudo perf report --stdio
...
# Overhead          Command                  Shared Object
# ........  ...............  .............................  ............................................................................................................
#
    12.50%      kworker/0:1  [kernel.kallsyms]              [k] aes_encrypt
    10.54%      kworker/1:1  [kernel.kallsyms]              [k] aes_encrypt
     9.77%      kworker/1:2  [kernel.kallsyms]              [k] aes_encrypt
     4.81%      kworker/0:0  [kernel.kallsyms]              [k] aes_encrypt
     4.52%           mysqld  mysqld                         [.] buf_calc_page_new_checksum(unsigned char const*)
     3.64%           mysqld  mysqld                         [.] log_block_calc_checksum_innodb(unsigned char const*)
     2.22%           mysqld  mysqld                         [.] dtoa.constprop.6
     2.15%           mysqld  mysqld                         [.] trx_undo_report_row_operation(unsigned long, unsigned long, que_thr_t*, dict_index_t*, dtuple_t const*,
     2.10%           mysqld  mysqld                         [.] quorem
     1.78%           mysqld  mysqld                         [.] mtr_commit(mtr_t*)
     1.41%           mysqld  mysqld                         [.] ha_innobase::update_row(unsigned char const*, unsigned char*)
     1.15%           mysqld  mysqld                         [.] buf_page_optimistic_get(unsigned long, buf_block_t*, unsigned long, char const*, unsigned long, mtr_t*)

     1.15%           mysqld  mysqld                         [.] buf_page_get_gen(unsigned long, unsigned long, unsigned long, unsigned long, buf_block_t*, unsigned long
     1.11%           mysqld  mysqld                         [.] row_search_for_mysql(unsigned char*, unsigned long, row_prebuilt_t*, unsigned long, unsigned long)
     1.10%           mysqld  mysqld                         [.] multadd
...
I am not surprised to see aes_encrypt in the kernel on top of this system-wide profile, as an encrypted HDD is used on this netbook (one of Percona's security requirements for any hardware ever used for work, even personal), and the updates are surely I/O-bound with mysqld started with --no-defaults and a 128M buffer pool.

We may want to study some of the mysqld-related entries further, with call graphs generated (-g):

openxs@ao756:~/dbs/maria10.1$ sudo perf record -ag sleep 20
[ perf record: Woken up 31 times to write data ]
[ perf record: Captured and wrote 8.313 MB perf.data (~363186 samples) ]
openxs@ao756:~/dbs/maria10.1$ sudo perf report --stdio

...
     4.29%           mysqld  mysqld                         [.] buf_calc_page_new_checksum(unsigned char const*)
                     |
                     --- buf_calc_page_new_checksum(unsigned char const*)
                        |
                        |--67.15%-- buf_flush_init_for_writing(unsigned char*, void*, unsigned long)
                        |          buf_flush_page(buf_pool_t*, buf_page_t*, buf_flush_t, bool)
                        |          buf_flush_try_neighbors(unsigned long, unsigned long, buf_flush_t, unsigned long, unsigned long)
                        |          |
                        |          |--52.15%-- buf_do_flush_list_batch(buf_pool_t*, unsigned long, unsigned long)
                        |          |          buf_flush_list(unsigned long, unsigned long, unsigned long*)
                        |          |          buf_flush_page_cleaner_thread
                        |          |          start_thread
                        |          |
                        |           --47.85%-- buf_flush_LRU_list_batch(buf_pool_t*, unsigned long, bool, flush_counters_t*)
                        |                     buf_flush_batch(buf_pool_t*, buf_flush_t, unsigned long, unsigned long, bool, flush_counters_t*)
                        |                     buf_flush_LRU_tail()
                        |                     buf_flush_lru_manager_thread
                        |                     start_thread
                        |
                         --32.85%-- buf_page_is_corrupted(bool, unsigned char const*, unsigned long)
                                   buf_page_io_complete(buf_page_t*)
                                   |
                                   |--82.17%-- fil_aio_wait(unsigned long)
                                   |          io_handler_thread
                                   |          start_thread
                                   |
                                    --17.83%-- buf_read_page(unsigned long, unsigned long, unsigned long, trx_t*, buf_page_t**)
                                              buf_page_get_gen(unsigned long, unsigned long, unsigned long, unsigned long, buf_block_t*, unsigned long, char const*, uns
                                              |
                                              |--92.31%-- btr_pcur_move_to_next_page(btr_pcur_t*, mtr_t*)
                                              |          btr_pcur_move_to_next(btr_pcur_t*, mtr_t*)
                                              |          row_search_for_mysql(unsigned char*, unsigned long, row_prebuilt_t*, unsigned long, unsigned long)
                                              |          ha_innobase::index_next(unsigned char*)
                                              |          handler::ha_index_next(unsigned char*)
                                              |          handler::read_range_next()
                                              |          handler::multi_range_read_next(void**)
                                              |          Mrr_simple_index_reader::get_next(void**)
                                              |          DsMrr_impl::dsmrr_next(void**)
                                              |          QUICK_RANGE_SELECT::get_next()
                                              |          rr_quick(READ_RECORD*)
                                              |          mysql_update(THD*, TABLE_LIST*, List<Item>&, List<Item>&, Item*, unsigned int, st_order*, unsigned long long, e
                                              |          mysql_execute_command(THD*)
                                              |          mysql_parse(THD*, char*, unsigned int, Parser_state*)
                                              |          dispatch_command(enum_server_command, THD*, char*, unsigned int)
                                              |          do_command(THD*)
                                              |          do_handle_one_connection(THD*)
                                              |          handle_one_connection
                                              |          start_thread
                                              |
                                              |--5.37%-- btr_estimate_n_rows_in_range(dict_index_t*, dtuple_t const*, unsigned long, dtuple_t const*, unsigned long, trx
                                              |          ha_innobase::records_in_range(unsigned int, st_key_range*, st_key_range*)
                                              |          handler::multi_range_read_info_const(unsigned int, st_range_seq_if*, void*, unsigned int, unsigned int*, unsign
                                              |          DsMrr_impl::dsmrr_info_const(unsigned int, st_range_seq_if*, void*, unsigned int, unsigned int*, unsigned int*,
                                              |          check_quick_select(PARAM*, unsigned int, bool, SEL_ARG*, bool, unsigned int*, unsigned int*, Cost_estimate*)
                                              |          get_key_scans_params(PARAM*, SEL_TREE*, bool, bool, double)
                                              |          SQL_SELECT::test_quick_select(THD*, Bitmap<64u>, unsigned long long, unsigned long long, bool, bool, bool)
                                              |          mysql_update(THD*, TABLE_LIST*, List<Item>&, List<Item>&, Item*, unsigned int, st_order*, unsigned long long, e
                                              |          mysql_execute_command(THD*)
                                              |          mysql_parse(THD*, char*, unsigned int, Parser_state*)
                                              |          dispatch_command(enum_server_command, THD*, char*, unsigned int)
                                              |          do_command(THD*)
                                              |          do_handle_one_connection(THD*)
                                              |          handle_one_connection
                                              |          start_thread
...

     1.28%           mysqld  mysqld                         [.] ha_innobase::update_row(unsigned char const*, unsigned char*)
                     |
                     --- ha_innobase::update_row(unsigned char const*, unsigned char*)
                        |
                        |--97.71%-- handler::ha_update_row(unsigned char const*, unsigned char*)
                        |          mysql_update(THD*, TABLE_LIST*, List<Item>&, List<Item>&, Item*, unsigned int, st_order*, unsigned long long, enum_duplicates, bool,
                        |          mysql_execute_command(THD*)
                        |          mysql_parse(THD*, char*, unsigned int, Parser_state*)
                        |          dispatch_command(enum_server_command, THD*, char*, unsigned int)
                        |          do_command(THD*)
                        |          do_handle_one_connection(THD*)
                        |          handle_one_connection
                        |          start_thread
                        |
                         --2.29%-- mysql_update(THD*, TABLE_LIST*, List<Item>&, List<Item>&, Item*, unsigned int, st_order*, unsigned long long, enum_duplicates, bool,
                                   mysql_execute_command(THD*)
                                   mysql_parse(THD*, char*, unsigned int, Parser_state*)
                                   dispatch_command(enum_server_command, THD*, char*, unsigned int)
                                   do_command(THD*)
                                   do_handle_one_connection(THD*)
                                   handle_one_connection
                                   start_thread
...


Now to the interesting part (slow UPDATEs are expected when they are I/O-bound, but slow point SELECTs are not). Usually (though not always, as in the original real-life case!) the simple SELECT runs fast, but if we limit innodb_thread_concurrency to, say, 4, it may start to run slow:
MariaDB [test]> select * from t0 where id = 13;
+----+------+----------------------+
| id | c1   | c2                   |
+----+------+----------------------+
| 13 |  269 | 0.008545639932663777 |
+----+------+----------------------+
1 row in set (0.00 sec)

MariaDB [test]> select @@innodb_thread_concurrency;
+-----------------------------+
| @@innodb_thread_concurrency |
+-----------------------------+
|                           0 |
+-----------------------------+
1 row in set (0.05 sec)

MariaDB [test]> set global innodb_thread_concurrency=4;
Query OK, 0 rows affected (0.25 sec)

MariaDB [test]> select * from t0 where id = 13;
+----+------+----------------------+
| id | c1   | c2                   |
+----+------+----------------------+
| 13 |  269 | 0.008545639932663777 |
+----+------+----------------------+
1 row in set (2.92 sec)

MariaDB [test]> select * from t0 where id = 13;
+----+------+----------------------+
| id | c1   | c2                   |
+----+------+----------------------+
| 13 |  269 | 0.008545639932663777 |
+----+------+----------------------+
1 row in set (12.16 sec)

A profile with call graphs may help to find out why this happens:
openxs@ao756:~/dbs/maria10.1$ sudo perf record -ag sleep 20
[ perf record: Woken up 32 times to write data ]
[ perf record: Captured and wrote 8.503 MB perf.data (~371488 samples) ]
openxs@ao756:~/dbs/maria10.1$ sudo perf report --stdio
...
                        |--1.40%-- sys_select
                        |          system_call_fastpath
                        |          __select
                        |          |
                        |          |--71.70%-- srv_conc_enter_innodb(trx_t*)
                        |          |          ha_innobase::index_read(unsigned char*, unsigned char const*, unsigned int, ha_rkey_function)
                        |          |          handler::index_read_idx_map(unsigned char*, unsigned int, unsigned char const*, unsigned long, ha_rkey_function)
                        |          |          handler::ha_index_read_idx_map(unsigned char*, unsigned int, unsigned char const*, unsigned long, ha_rkey_function)
                        |          |          join_read_const(st_join_table*)
                        |          |          join_read_const_table(THD*, st_join_table*, st_position*) [clone .isra.301]
                        |          |          make_join_statistics(JOIN*, List<TABLE_LIST>&, st_dynamic_array*)
                        |          |          JOIN::optimize_inner()
                        |          |          JOIN::optimize()
                        |          |          mysql_select(THD*, Item***, TABLE_LIST*, unsigned int, List<Item>&, Item*, unsigned int, st_order*, st_order*, Item*, st_o
                        |          |          handle_select(THD*, LEX*, select_result*, unsigned long)
                        |          |          execute_sqlcom_select(THD*, TABLE_LIST*)
                        |          |          mysql_execute_command(THD*)
                        |          |          mysql_parse(THD*, char*, unsigned int, Parser_state*)
                        |          |          dispatch_command(enum_server_command, THD*, char*, unsigned int)
                        |          |          do_command(THD*)
                        |          |          do_handle_one_connection(THD*)
                        |          |          handle_one_connection
                        |          |          start_thread
...
The SELECT spends 71.7% of its time waiting to join the InnoDB queue (in this limited InnoDB concurrency case) while diving into the index during query optimization. This time is spent at the "statistics" stage, for a single-row SELECT by PRIMARY KEY. Now, try to conclude this without a backtrace of calls or some profiler.
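To double-check such a finding, one may record again while the slow SELECT runs and ask perf report to concentrate on the suspected symbol; the -S/--symbols option used below is an assumption to verify against your perf version's perf report --help:

```shell
# Hypothetical follow-up: record system-wide again while the slow SELECT runs,
# then show only the call graphs involving the InnoDB admission-control wait.
sudo perf record -ag sleep 20
sudo perf report --stdio -S srv_conc_enter_innodb
```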