Optimizer tracing: how to configure it

In this blog post, my colleague Jørgen Løland described a new feature of MySQL 5.6: Optimizer Tracing. I recommend reading his article, as it presents this new feature in a simple, easy-to-read manner.

The Optimizer Tracing feature can help you understand what the Optimizer is doing; it has been available since milestone 5.6.3, announced October 3rd at Oracle Open World (here is the changelog). It's good to see it mature now; I remember that Sergey Petrunia wrote the first prototype back in March 2009!

Today I will share some must-have tips for handling big traces.

First thing to know: a trace lives in main memory (internally, it is allocated on the heap, or free store, of the MySQL server). An SQL statement which gives the optimizer a lot of work (for example, by joining many tables) generates a large trace, up to gigabytes in some pathological cases! To avoid hogging memory, a trace never grows beyond the value of the @@optimizer_trace_max_mem_size session variable, which defaults to 16 kilobytes. Yes, that is a low value; it protects the innocent. I often raise it in my session to 1 megabyte, which is enough for most queries in my daily work:
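For example, raising the limit to 1 megabyte (1048576 bytes) for the current session:

mysql> SET optimizer_trace_max_mem_size = 1048576; # 1 MB

The default can be restored later with SET optimizer_trace_max_mem_size = DEFAULT;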


If a trace was truncated because it exceeded this limit, the column INFORMATION_SCHEMA.OPTIMIZER_TRACE.MISSING_BYTES_BEYOND_MAX_MEM_SIZE shows a non-zero value: the number of bytes which could not be added to the trace.
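To check whether truncation happened, one can query that column directly:

mysql> SELECT MISSING_BYTES_BEYOND_MAX_MEM_SIZE FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;

A value of 0 means the trace is complete.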

When I read a trace, I like to scroll up and down in it and search for a given optimization phase (that is, search for a keyword in the trace); to do this, I set the pager to "less" in the "mysql" command-line client (I'm using Unix):

mysql> pager less

This is very useful. I have all "less" commands at hand: I can save to a file, search for a regular expression match, forward or backward... One last benefit: when I have finished reading, quitting "less" makes the trace disappear from the terminal; it does not linger on my screen forever or clutter the terminal's display history.

When there are many tables to join, as I said above, the trace can grow a lot, because the Optimizer evaluates many possible join orders (using an algorithm known as "greedy search"): each evaluated join order is mentioned in the trace, and greedy search ends up being the largest part of it. What if I want to see the trace of what happens in the Optimizer after greedy search is complete? If I set @@optimizer_trace_max_mem_size to a low value, it will trim greedy search and everything that follows. If I set it to a high value, so that I see what I want, greedy search will be traced, which may exceed the amount of memory I can afford on this machine... It would be nice if I could tell the system: "please do not trace this and that". In my case: "please do not trace greedy search".

As an example, let's consider twenty tables, t1...t20, all similar to this one:

mysql> INSERT INTO t1 VALUES(x),(y); # x and y being some integers
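The table definition itself is not shown above; judging from the EXPLAIN output below (a single nullable INT column a with an index on it, hence key_len 5), a definition along these lines would match. This is my reconstruction, not necessarily the author's exact DDL:

mysql> CREATE TABLE t1 (a INT, KEY(a)); # reconstructed from the EXPLAIN output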

and let's run this query, after turning tracing on:
mysql> EXPLAIN SELECT 1 FROM t1  JOIN t2 ON t2.a=t1.a JOIN t3 ON t3.a=t2.a JOIN t4 ON t4.a=t3.a JOIN t5 ON t5.a=t4.a JOIN t6 ON t6.a=t5.a JOIN t7 ON t7.a=t6.a JOIN t8 ON t8.a=t7.a JOIN t9 ON t9.a=t8.a JOIN t10 ON t10.a=t9.a JOIN t11 ON t11.a=t10.a JOIN t12 ON t12.a=t11.a JOIN t13 ON t13.a=t12.a JOIN t14 ON t14.a=t13.a JOIN t15 ON t15.a=t14.a JOIN t16 ON t16.a=t15.a JOIN t17 ON t17.a=t16.a JOIN t18 ON t18.a=t17.a JOIN t19 ON t19.a=t18.a JOIN t20 ON t20.a=t19.a;
| id | select_type | table | type  | possible_keys | key  | key_len | ref        | rows | Extra                    |
|  1 | SIMPLE      | t1    | index | a             | a    | 5       | NULL       |    2 | Using where; Using index |
|  1 | SIMPLE      | t2    | ref   | a             | a    | 5       | test.t1.a  |    1 | Using where; Using index |
|  1 | SIMPLE      | t3    | ref   | a             | a    | 5       | test.t2.a  |    1 | Using where; Using index |
|  1 | SIMPLE      | t4    | ref   | a             | a    | 5       | test.t1.a  |    1 | Using index              |
|  1 | SIMPLE      | t5    | ref   | a             | a    | 5       | test.t1.a  |    1 | Using index              |
|  1 | SIMPLE      | t6    | ref   | a             | a    | 5       | test.t1.a  |    1 | Using index              |
|  1 | SIMPLE      | t7    | ref   | a             | a    | 5       | test.t1.a  |    1 | Using index              |
|  1 | SIMPLE      | t8    | ref   | a             | a    | 5       | test.t1.a  |    1 | Using index              |
|  1 | SIMPLE      | t9    | ref   | a             | a    | 5       | test.t1.a  |    1 | Using where; Using index |
|  1 | SIMPLE      | t10   | ref   | a             | a    | 5       | test.t9.a  |    1 | Using where; Using index |
|  1 | SIMPLE      | t11   | ref   | a             | a    | 5       | test.t1.a  |    1 | Using where; Using index |
|  1 | SIMPLE      | t12   | ref   | a             | a    | 5       | test.t11.a |    1 | Using where; Using index |
|  1 | SIMPLE      | t13   | ref   | a             | a    | 5       | test.t12.a |    1 | Using where; Using index |
|  1 | SIMPLE      | t14   | ref   | a             | a    | 5       | test.t12.a |    1 | Using where; Using index |
|  1 | SIMPLE      | t15   | ref   | a             | a    | 5       | test.t12.a |    1 | Using where; Using index |
|  1 | SIMPLE      | t16   | ref   | a             | a    | 5       | test.t12.a |    1 | Using where; Using index |
|  1 | SIMPLE      | t17   | ref   | a             | a    | 5       | test.t12.a |    1 | Using where; Using index |
|  1 | SIMPLE      | t18   | ref   | a             | a    | 5       | test.t11.a |    1 | Using where; Using index |
|  1 | SIMPLE      | t19   | ref   | a             | a    | 5       | test.t12.a |    1 | Using where; Using index |
|  1 | SIMPLE      | t20   | ref   | a             | a    | 5       | test.t11.a |    1 | Using where; Using index |
As an optimizer developer, it catches my eye that some tables have "Using where" and others don't. "Using where" tells me that a condition is evaluated after fetching a row from the table, but does not tell me which condition. To know more, I will look at the trace. But I'm not interested in the trace of greedy search, which likely accounts, in this 20-table example, for most of the total trace, and would just make my reading less comfortable, or even hog memory if I had more than those 20 tables, more indexes on them, or more complex conditions. So I do:
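The command elided here turns off the greedy_search flag of the @@optimizer_trace_features variable:

mysql> SET optimizer_trace_features="greedy_search=off";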

and run the EXPLAIN again. To show the resulting trace, I'm posting a screenshot of how it is displayed by the JsonView Firefox add-on. Using this add-on requires taking two precautions:
  • turning off the end_marker flag of @@optimizer_trace (this flag is good for human-readability but is not JSON-compliant)
  • sending the trace to a file without escaping of newlines (which is why I use INTO DUMPFILE instead of INTO OUTFILE):
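Concretely, and taking /tmp/trace.json as an example destination path, those two steps look like this (end_marker is a flag of the @@optimizer_trace variable, and must be set before running the statement to be traced):

mysql> SET optimizer_trace="end_marker=off";
mysql> SELECT TRACE FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE INTO DUMPFILE '/tmp/trace.json';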


Here's the trace now in Firefox:

We see how greedy_search=off has eliminated the trace of greedy search ("considered_execution_plans"), replacing it with just an ellipsis ("...")! Then there is what I wanted to see: the part after greedy search, named "attached_conditions_summary", which describes which condition is behind each "Using where" in EXPLAIN. There are equality conditions, of course. Some come directly from the "JOIN ON" clauses; some are deduced by equality propagation (if t1.a=t2.a and t2.a=t3.a then t1.a=t3.a, a new condition which we see is attached to t3). There are also "IS NOT NULL" conditions: indeed, an equality condition of the form t1.a=t2.a allows us to deduce that both t1.a and t2.a are not NULL, so if rows of t1 are retrieved first, they can be filtered with this deduced condition, a technique known as early NULL filtering.

There are other flags in @@optimizer_trace_features; we added a flag for each feature which we think can make the trace grow unreasonably large:
  • greedy search
  • repeated execution of subqueries (one execution per row)
  • repeated range optimization (known as "range checked for each record" in EXPLAIN)
  • range optimization in general.
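For reference, the corresponding flag names of @@optimizer_trace_features are greedy_search, repeated_subselect, dynamic_range and range_optimizer; they can be toggled individually or in combination, for example:

mysql> SET optimizer_trace_features="repeated_subselect=off,range_optimizer=off";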
I think I have shown enough for today. To finish on a positive note: in the Optimizer team, this tracing has already helped us debug some problems, so it has started to fulfill its goal!

For the curious: the Optimizer Trace is covered in full detail, including the above, and more, in a chapter of the "MySQL Internals" manual.


  1. I think this feature will be very helpful for DBAs as well as for developers. I'm glad that it's already helpful for the optimizer developers, too!
