Wednesday, May 21, 2014

Re-factoring some internals of prepared statements in 5.7

[ this is a re-posting of what I published on the MySQL server team blog a few days ago ]
 
When the MySQL server receives a SELECT query, the query goes through several consecutive phases:
  • parsing: SQL words are recognized, the query is split into different parts following the SQL grammar rules: a list of selected expressions, a list of tables to read, a WHERE condition, …
  • resolution: the output of the parsing stage contains names of columns and names of tables. Resolution is about making sense out of this. For example, in “WHERE foo=3”, “foo” is a column name without a table name; by applying SQL name resolution rules, we discover the table that contains “foo” (it can be complicated if subqueries or outer joins are involved).
  • optimization: finding the best way to read tables: the best order of tables, and for each table, the best way to access it (index lookup, index scan, …). The output is the so-called “plan”.
  • execution: we read tables as dictated by the plan above, and send output rows to the client.
This design hasn’t changed in many, many years. Originally, MySQL didn’t have prepared statements. So it took a query, passed it through the phases above, and then threw it away. This still happens today when using non-prepared statements.
But a prepared statement brings more requirements. MySQL has to be able to execute the query a second time (and a third, and so on, at the user’s will). Regardless of the API used (the C API or the SQL API), this usually looks like the following dialogue:

 # Assume that "t" contains 3 rows with values 1, 2, 98 in a column
 user: PREPARE stmt FROM 'SELECT * FROM t';
 MySQL: okay
 user: EXECUTE stmt;
 MySQL: 1,2,98
 user: INSERT INTO t VALUES (0);
 user: EXECUTE stmt;
 MySQL: 1,2,98,0

SQL mandates that syntax and semantic errors be reported by PREPARE, without postponing them to EXECUTE. So, PREPARE needs to do at least parsing and resolution, to spot any such error.
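For example, a reference to a nonexistent table is reported at PREPARE time already (a small sketch; assuming the default database is "test" and contains no table named "no_such_table"):

 user: PREPARE bad FROM 'SELECT * FROM no_such_table';
 MySQL: ERROR 1146 (42S02): Table 'test.no_such_table' doesn't exist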
Then there is an implementation question: when it receives "EXECUTE stmt", how can MySQL remember what “stmt” is?
The simplest, most obvious solution would be, during PREPARE, to simply store "stmt" => "SELECT * FROM t" in some session-local key/value dictionary. When "EXECUTE stmt" comes, MySQL would find the query string in the dictionary, and go through all stages (parsing, resolution, optimization, execution). And when the next "EXECUTE stmt" comes, it would do the same again.
Of course, that would not be efficient. At least parsing and resolution theoretically don’t need to be re-done every time: the query’s text doesn’t change, so only one parsing is necessary; the structure of tables (names and types of columns) doesn’t change, so only one resolution is necessary (but see footnote 1). So what PREPARE actually stores in the key/value dictionary is "stmt" => tree, where “tree” is the output of the resolution phase: an in-memory tree of C++ objects, each object representing an expression, a column of a table, a table, a left join, a boolean condition, …
With this design, only optimization and execution are repeated by each "EXECUTE stmt" (but see footnote 2). Repeating optimization is sometimes a smart thing to do: imagine that between the first and second EXECUTE, the table’s data distribution has changed significantly (perhaps due to other clients); then the plan of the first EXECUTE may no longer be the best one, and searching for a new plan makes sense. In other cases, repeating optimization is a waste, because the old plan would be good enough. As of today, MySQL always repeats optimization; making this more flexible would be an interesting enhancement for the future, but today’s situation is: we repeat optimization.
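As an illustration of repeated optimization, here is a sketch (table, data and plans are hypothetical):

 CREATE TABLE t (a INT, KEY(a));
 INSERT INTO t VALUES (1), (2), (3);
 PREPARE stmt FROM 'SELECT * FROM t WHERE a < 10';
 EXECUTE stmt;                   # a plan is chosen now, e.g. a range scan on the index
 INSERT INTO t SELECT a FROM t;  # double the table (repeat many times)
 ANALYZE TABLE t;                # refresh the index statistics
 EXECUTE stmt;                   # optimization runs again with the new statistics;
                                 # since all rows match a < 10, a table scan may now win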

Optimization does everything it can to speed up the query. Imagine the query references 10 tables in its FROM clause. To find the best order to access those 10 tables, we need to consider 10! possible orders, roughly 3,600,000. It takes time to consider that many plans. The optimizer has techniques to cut this down; let me show one. If one table is referenced like this in the WHERE clause:
WHERE t.pk=39 AND ...
then, because “t.pk” is a primary key, I know that at most one row of this table will participate in the result, and it makes sense to read it with a primary key lookup. And this read has to be done only once in this execution. So let’s put “t” first in the plan, unconditionally. This technique, known as “constant tables”, divides the number of possible orders by a factor of 10 in our example (only 9 tables left to order). Now, imagine that the WHERE clause looks like this:
WHERE t.pk=39 AND t.c1>t2.c2
As soon as I read the row of “t” with a primary key lookup (looking up 39), I get access to the value of t.c1; say it is 100. My condition can thus be simplified to:
WHERE 100>t2.c2
Notice how the AND has disappeared.
The inequality predicate “100>t2.c2” is interesting: assuming that t2.c2 is indexed, it means that a range scan is a possibility for reading t2.
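For the curious, the constant-table behavior can be observed with EXPLAIN (a minimal sketch; tables and data are hypothetical):

 CREATE TABLE t (pk INT PRIMARY KEY, c1 INT);
 CREATE TABLE t2 (c2 INT, KEY(c2));
 INSERT INTO t VALUES (39, 100);
 EXPLAIN SELECT * FROM t, t2 WHERE t.pk = 39 AND t.c1 > t2.c2;
 # "t" should show access type "const": it is read once during optimization,
 # and its c1 value (100) replaces t.c1, leaving "100 > t2.c2" for t2.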
This little example was meant to demonstrate that MySQL, in the optimization phase, applies data-based transformations to the query: transformations which depend on the content of tables, and which are valid for one single execution. Indeed, when the next execution comes, it may be that the row of “t” with t.pk=39 now has t.c1=1200, so our condition simplifications don’t hold anymore. Another way to say this is: data-based transformations must be non-permanent.
So the AND operator which was eliminated above during the optimization of the first EXECUTE must be resurrected for the optimization of the second EXECUTE.
To achieve this, at the end of resolution we rename tree to permanent_tree, then make a copy of it, which we name tree. The optimization phase has access to tree and no access to permanent_tree. Optimization operates on tree and performs data-based transformations on it. When the second EXECUTE starts, permanent_tree is fetched and copied into a new tree, on which optimization operates. For example, the AND operator always exists in permanent_tree.

What about permanent transformations? For example, consider this query:
SELECT * FROM t1 JOIN (t2 JOIN t3 ON t2.a=t3.a) ON t2.b+t3.b>t1.b
WHERE t1.b*2>15;

The result of this query is defined by the SQL standard as: take the result of the innermost join (t2,t3), filter it with t2.a=t3.a, then join the result with t1, filter with t2.b+t3.b>t1.b, then filter with t1.b*2>15. With this definition, the order in which tables are read and the order in which conditions are applied are constrained. For example, reading t2 then t1 then t3 is not possible. But if we notice that the query is equivalent to:
SELECT * FROM t1, t2, t3 WHERE t2.a=t3.a AND t2.b+t3.b>t1.b
AND t1.b*2>15;

then we have 3! = 6 possible orders for the tables. More plans to examine, but more potential for finding a good plan: it is quite possible that the best plan, the one yielding the fastest execution, is not among the ones suggested by the SQL standard’s definition.
The equivalence between the two queries is semantics-based, not data-based. Thus the transformation from the first query to the second can be permanent: it can be done once and for all, not at every EXECUTE.
Permanent transformations include:
  • JOIN-to-WHERE, as in the example above
  • outer-join-to-inner-join, when something in the WHERE clause allows us to deduce that NULL-complemented rows will not actually participate in the result
  • semi-join, which more or less merges an “IN (subquery)” predicate into the parent query
  • IN-to-EXISTS, rewriting “x IN (subquery)” to “EXISTS (modified subquery)” (see the sketch after this list).
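For instance, here is what the IN-to-EXISTS rewrite looks like when spelled out in SQL (a conceptual sketch with hypothetical tables; the server performs it on the internal tree, not on the query text):

 SELECT * FROM t1 WHERE t1.a IN (SELECT t2.b FROM t2);
 # becomes, conceptually:
 SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.b = t1.a);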
After giving all this background (phew…), I’m now ready to explain one re-factoring which I did in MySQL 5.7.4. The situation of permanent transformations in MySQL 5.6 is the following:
  1. IN-to-EXISTS : done in the resolution phase.
  2. semi-join, JOIN-to-WHERE, outer-join-to-inner-join: done in the optimization phase.
Doing permanent transformations during optimization (step (2) above) is surprising, as optimization is re-done at every EXECUTE. Fortunately, we can internally tell whether an optimization is the first one or not; if it’s not, we skip step (2).
So in the end, efficiency is guaranteed: permanent transformations are not re-done. Still, this design admittedly looks strange.

Now, putting the pieces together, on the way to more strangeness: you remember that after the resolution phase, we produce permanent_tree and save it for all future EXECUTEs. It thus contains the permanent transformations done in resolution, good. But it does not contain those done in optimization (semi-join, …), as optimization runs after permanent_tree has been produced. Still, we do want semi-join and friends to be reused by all future EXECUTEs, so they must be put into permanent_tree! So we get an even stranger design:
  1. resolution
    1. do some permanent transformations (IN-to-EXISTS) on tree
    2. rename tree to permanent_tree
    3. copy permanent_tree to a new tree (for optimization to do nonpermanent transformations on it)
  2. optimization
    1. if first optimization:
      1. do some more permanent transformations (semijoin, etc) on tree
      2. throw permanent_tree away
      3. copy tree to a new permanent_tree
    2. do nonpermanent transformations on tree
  3. execution
    1. read tables and send rows
    2. throw tree away
This has a few nasty effects:
  • permanent transformations are scattered: the boundary between phases is blurred, which is a code readability problem, and an impediment for future improvements in the Optimizer in general
  • efficiency loss: copying a tree in the resolution phase takes some time and memory; in the first optimization we throw permanent_tree away and do the copying again… Bad.
  • real bugs. Yes, because what the resolution phase believes about permanent_tree is no longer true: this object has been deleted and replaced by another one in the first optimization… so the next EXECUTE gets confused…
In MySQL 5.7.4, I have moved all permanent transformations to where they belong, so now we have the more straightforward design:
  1. resolution
    1. do all permanent transformations on tree
    2. rename tree to permanent_tree
  2. optimization
    1. copy permanent_tree to a new tree (for optimization to do nonpermanent transformations on it)
    2. do nonpermanent transformations on tree
  3. execution
    1. read tables and send rows
    2. throw tree away
If we had done this re-factoring earlier, we would have saved ourselves some headaches. But better late than never. It’s at least comforting that nowadays we have time to do re-factoring, even if it means spending several man-months on a task like this one. It really took that much time: the idea may look simple, but the devil was in the details as usual; and on top of coding time, add review time by two reviewers, and QA time to verify that I didn’t break anything… But in my team we are convinced that this is a long-term investment which will pay off. Moreover, when we do such re-factoring work, it gives us the occasion to remove the little hacks which accumulated over time to work around the root problem (which the re-factoring finally addresses). And that leads to even more code simplification.

There are a few more details in the Worklog page if you are interested (note the multiple tabs there).

That’s all for today. If you reached this line, congratulations :-)


Footnotes:
1. I hear you say: “what if an ALTER TABLE is done between PREPARE and EXECUTE?! This may change the names and types of columns!”. Yes, you are right, but there exists, somewhere in the prepared statement subsystem, a detector for this; it runs when EXECUTE starts, and if it spots a table structure change since the statement was prepared, it throws the prepared statement away (as “out-of-date”), silently re-prepares it and then proceeds with EXECUTE; the user does not notice, except if she looks at Com_stmt_reprepare in SHOW STATUS. This detector is external to the Optimizer and runs before it; so if the Optimizer is involved in EXECUTE, it can safely assume that nothing has changed since PREPARE.
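For the curious, that counter can be watched with a one-line check (the variable follows the usual Com_xxx status naming):

 SHOW SESSION STATUS LIKE 'Com_stmt_reprepare';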
2. Ok, a part of resolution is repeated too. It would be good to avoid that in the future.

Monday, November 18, 2013

WITH RECURSIVE and MySQL

If you have been using certain DBMSs, or reading recent versions of the SQL standard, you are probably aware of the so-called "WITH clause" of SQL.
Some call it Subquery Factoring. Others call it Common Table Expression.
In its simplest form, this feature is a kind of "boosted derived table".

Assume that a table T1 has three columns:

CREATE TABLE T1(
YEAR INT, # 2000, 2001, 2002 ...
MONTH INT, # 1 (January), 2 (February), ...
SALES INT # how much we sold in that month of that year
);
Now I want to know the sales trend (increase/decrease), year after year:

SELECT D1.YEAR, (CASE WHEN D1.S>D2.S THEN 'INCREASE' ELSE 'DECREASE' END) AS TREND
FROM
  (SELECT YEAR, SUM(SALES) AS S FROM T1 GROUP BY YEAR) AS D1,
  (SELECT YEAR, SUM(SALES) AS S FROM T1 GROUP BY YEAR) AS D2
WHERE D1.YEAR = D2.YEAR-1;
Both derived tables are based on the same subquery text, but usually a DBMS is not smart enough to recognize this. Thus, it will evaluate "SELECT YEAR, SUM(SALES)... GROUP BY YEAR" twice: once to fill D1, and a second time to fill D2. This limitation is sometimes stated as "it is not possible to refer to a derived table twice in the same query".
Such double evaluation can lead to a serious performance problem. Using WITH, this limitation does not exist, and the following statement evaluates the subquery only once:

WITH D AS (SELECT YEAR, SUM(SALES) AS S FROM T1 GROUP BY YEAR)
SELECT D1.YEAR, (CASE WHEN D1.S>D2.S THEN 'INCREASE' ELSE 'DECREASE' END) AS TREND
FROM
 D AS D1,
 D AS D2
WHERE D1.YEAR = D2.YEAR-1;
This already demonstrates one benefit of WITH.
In MySQL, WITH is not yet supported. But it can be emulated with a view:

CREATE VIEW D AS (SELECT YEAR, SUM(SALES) AS S FROM T1 GROUP BY YEAR);
SELECT D1.YEAR, (CASE WHEN D1.S>D2.S THEN 'INCREASE' ELSE 'DECREASE' END) AS TREND
FROM
 D AS D1,
 D AS D2
WHERE D1.YEAR = D2.YEAR-1;
DROP VIEW D;
Instead of a view, I could just as well create D as a normal table. But not as a temporary table, because in MySQL a temporary table cannot be referred to twice in the same query, as mentioned in the manual.
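You can see that limitation for yourself (a sketch; the exact error number is from my notes, take it as indicative):

 CREATE TEMPORARY TABLE D AS
   SELECT YEAR, SUM(SALES) AS S FROM T1 GROUP BY YEAR;
 SELECT D1.YEAR FROM D AS D1, D AS D2 WHERE D1.YEAR = D2.YEAR-1;
 # ERROR 1137 (HY000): Can't reopen table: 'D1'
 DROP TEMPORARY TABLE D;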

After this short introduction, showing the simplest form of WITH, I would like to turn to the more complex form of WITH: the RECURSIVE form.
According to the SQL standard, to use the recursive form you should write WITH RECURSIVE. However, some other DBMSs seem not to require the RECURSIVE word.
WITH RECURSIVE is a powerful construct. For example, it can do the same job as Oracle's CONNECT BY clause (you can check out some example conversions between both constructs).
Let's walk through an example, to understand what WITH RECURSIVE does.

Assume you have a table of employees (this is a very classical example of WITH RECURSIVE):

CREATE TABLE EMPLOYEES (
ID INT PRIMARY KEY,
NAME VARCHAR(100),
MANAGER_ID INT,
INDEX (MANAGER_ID),
FOREIGN KEY (MANAGER_ID) REFERENCES EMPLOYEES(ID)
);
INSERT INTO EMPLOYEES VALUES
(333, "Yasmina", NULL),
(198, "John", 333),
(29, "Pedro", 198),
(4610, "Sarah", 29),
(72, "Pierre", 29),
(692, "Tarek", 333);
In other words, Yasmina is CEO, John and Tarek report to her. Pedro reports to John, Sarah and Pierre report to Pedro.
In a big company, there would be thousands of rows in this table.

Now, let's say that you would like to know, for each employee, how many people are, directly and indirectly, reporting to him or her. Here is how I would do it. First, I would make a list of people who are not managers: with a subquery I get the list of all managers, and using NOT IN (subquery) I get the list of all non-managers:

SELECT ID, NAME, MANAGER_ID, 0 AS REPORTS
FROM EMPLOYEES
WHERE ID NOT IN (SELECT MANAGER_ID FROM EMPLOYEES WHERE MANAGER_ID IS NOT NULL);
Then I would insert the results into a new table named EMPLOYEES_EXTENDED; EXTENDED stands for "extended with more information", the new information being the fourth column, named REPORTS: a count of the people who report, directly or indirectly, to the employee. Since the people we have listed are not managers, they have a value of 0 in the REPORTS column.
Then, we can produce the rows for "first level" managers (the direct managers of non-managers):

SELECT M.ID, M.NAME, M.MANAGER_ID, SUM(1+E.REPORTS) AS REPORTS
FROM EMPLOYEES M JOIN EMPLOYEES_EXTENDED E ON M.ID=E.MANAGER_ID
GROUP BY M.ID, M.NAME, M.MANAGER_ID;
Explanation: for a row of M (that is, for an employee), the JOIN will produce zero or more rows, one per non-manager directly reporting to the employee.
Each such non-manager contributes to the value of REPORTS for his manager, through two numbers: 1 (the non-manager himself), and the number of direct/indirect reports of the non-manager (i.e. the value of REPORTS for the non-manager).
Then I would empty EMPLOYEES_EXTENDED, and fill it with the rows produced just above, which describe the first level managers.
Then the same query should be run again, and it would produce information about the "second level" managers. And so on.
Finally, at some point Yasmina will be the only row of EMPLOYEES_EXTENDED, and when we run the above SELECT again, the JOIN will produce no rows, because E.MANAGER_ID will be NULL (she's the CEO). We are done.

It's time for a recap: EMPLOYEES_EXTENDED has been a kind of "temporary buffer", which has successively held non-managers, first level managers, second level managers, etc. We have used recursion. The answer to the original problem is: the union of all the successive content of EMPLOYEES_EXTENDED.
Non-managers have been the start of the recursion, which is usually called "the anchor member" or "the seed". The SELECT query which moves from one step of recursion to the next one is the "recursive member". The complete statement looks like this:

WITH RECURSIVE
# The temporary buffer, also used as UNION result:
EMPLOYEES_EXTENDED
AS
(
  # The seed:
  SELECT ID, NAME, MANAGER_ID, 0 AS REPORTS
  FROM EMPLOYEES
  WHERE ID NOT IN (SELECT MANAGER_ID FROM EMPLOYEES WHERE MANAGER_ID IS NOT NULL)
UNION ALL
  # The recursive member:
  SELECT M.ID, M.NAME, M.MANAGER_ID, SUM(1+E.REPORTS) AS REPORTS
  FROM EMPLOYEES M JOIN EMPLOYEES_EXTENDED E ON M.ID=E.MANAGER_ID
  GROUP BY M.ID, M.NAME, M.MANAGER_ID
)
# what we want to do with the complete result (the UNION):
SELECT * FROM EMPLOYEES_EXTENDED;
MySQL does not yet support WITH RECURSIVE, but it is possible to code a generic stored procedure which can easily emulate it. Here is how you would call it:

CALL WITH_EMULATOR(
"EMPLOYEES_EXTENDED",
"
  SELECT ID, NAME, MANAGER_ID, 0 AS REPORTS
  FROM EMPLOYEES
  WHERE ID NOT IN (SELECT MANAGER_ID FROM EMPLOYEES WHERE MANAGER_ID IS NOT NULL)
",
"
  SELECT M.ID, M.NAME, M.MANAGER_ID, SUM(1+E.REPORTS) AS REPORTS
  FROM EMPLOYEES M JOIN EMPLOYEES_EXTENDED E ON M.ID=E.MANAGER_ID
  GROUP BY M.ID, M.NAME, M.MANAGER_ID
",
"SELECT * FROM EMPLOYEES_EXTENDED",
0,
""
);
You can recognize, as arguments of the stored procedure, every member of the WITH standard syntax: name of the temporary buffer, query for the seed, query for the recursive member, and what to do with the complete result. The last two arguments - 0 and the empty string - are details which you can ignore for now.

Here is the result returned by this stored procedure:

+------+---------+------------+---------+
| ID   | NAME    | MANAGER_ID | REPORTS |
+------+---------+------------+---------+
|   72 | Pierre  |         29 |       0 |
|  692 | Tarek   |        333 |       0 |
| 4610 | Sarah   |         29 |       0 |
|   29 | Pedro   |        198 |       2 |
|  333 | Yasmina |       NULL |       1 |
|  198 | John    |        333 |       3 |
|  333 | Yasmina |       NULL |       4 |
+------+---------+------------+---------+
7 rows in set
Notice how Pierre, Tarek and Sarah have zero reports, and Pedro has two, which looks correct... However, Yasmina appears in two rows! Odd? Yes and no. Our algorithm starts from non-managers, the "leaves" of the tree (Yasmina being the root of the tree). Then our algorithm looks at first level managers, the direct parents of leaves. Then at second level managers, and so on. But Yasmina is both a first level manager (of the non-manager Tarek) and a third level manager (of the non-managers Pierre and Sarah, through John and Pedro). That's why she appears twice in the final result: once for the tree branch which ends at the leaf Tarek, once for the tree branches which end at the leaves Pierre and Sarah. The first contributes 1 direct/indirect report. The second contributes 4 (John, Pedro, Pierre and Sarah). The right number, which we want, is the sum of the two: 5. Thus we just need to change the final query in the CALL:

CALL WITH_EMULATOR(
"EMPLOYEES_EXTENDED",
"
  SELECT ID, NAME, MANAGER_ID, 0 AS REPORTS
  FROM EMPLOYEES
  WHERE ID NOT IN (SELECT MANAGER_ID FROM EMPLOYEES WHERE MANAGER_ID IS NOT NULL)
",
"
  SELECT M.ID, M.NAME, M.MANAGER_ID, SUM(1+E.REPORTS) AS REPORTS
  FROM EMPLOYEES M JOIN EMPLOYEES_EXTENDED E ON M.ID=E.MANAGER_ID
  GROUP BY M.ID, M.NAME, M.MANAGER_ID
",
"
  SELECT ID, NAME, MANAGER_ID, SUM(REPORTS)
  FROM EMPLOYEES_EXTENDED
  GROUP BY ID, NAME, MANAGER_ID
",
0,
""
);
And here is finally the proper result:

+------+---------+------------+--------------+
| ID   | NAME    | MANAGER_ID | SUM(REPORTS) |
+------+---------+------------+--------------+
|   29 | Pedro   |        198 |            2 |
|   72 | Pierre  |         29 |            0 |
|  198 | John    |        333 |            3 |
|  333 | Yasmina |       NULL |            5 |
|  692 | Tarek   |        333 |            0 |
| 4610 | Sarah   |         29 |            0 |
+------+---------+------------+--------------+
6 rows in set
Let's finish by showing the body of the stored procedure. You will notice that it makes heavy use of dynamic SQL, thanks to prepared statements. Its body does not depend on the particular problem to solve; it's reusable as-is for other WITH RECURSIVE use cases. I have added comments inside the body, so it should be self-explanatory. If it's not, feel free to drop a comment on this post, and I will explain further.

# Usage: the standard syntax:
#   WITH RECURSIVE recursive_table AS
#    (initial_SELECT
#     UNION ALL
#     recursive_SELECT)
#   final_SELECT;
# should be translated by you to 
# CALL WITH_EMULATOR(recursive_table, initial_SELECT, recursive_SELECT,
#                    final_SELECT, 0, "").

# ALGORITHM:
# 1) we have an initial table T0 (actual name is an argument
# "recursive_table"), we fill it with result of initial_SELECT.
# 2) We have a union table U, initially empty.
# 3) Loop:
#   add rows of T0 to U,
#   run recursive_SELECT based on T0 and put result into table T1,
#   if T1 is empty
#      then leave loop,
#      else swap T0 and T1 (renaming) and empty T1
# 4) Drop T0, T1
# 5) Rename U to T0
# 6) run final select, send result to client

# This is for *one* recursive table.
# It would be possible to write a SP creating multiple recursive tables.

delimiter |

CREATE PROCEDURE WITH_EMULATOR(
recursive_table varchar(100), # name of recursive table
initial_SELECT varchar(65530), # seed a.k.a. anchor
recursive_SELECT varchar(65530), # recursive member
final_SELECT varchar(65530), # final SELECT on UNION result
max_recursion int unsigned, # safety against infinite loop, use 0 for default
create_table_options varchar(65530) # you can add CREATE-TABLE-time options
# to your recursive_table, to speed up initial/recursive/final SELECTs; example:
# "(KEY(some_column)) ENGINE=MEMORY"
)

BEGIN
  declare new_rows int unsigned;
  declare show_progress int default 0; # set to 1 to trace/debug execution
  declare recursive_table_next varchar(120);
  declare recursive_table_union varchar(120);
  declare recursive_table_tmp varchar(120);
  set recursive_table_next  = concat(recursive_table, "_next");
  set recursive_table_union = concat(recursive_table, "_union");
  set recursive_table_tmp   = concat(recursive_table, "_tmp");
  # If you need to reference recursive_table more than
  # once in recursive_SELECT, remove the TEMPORARY word.
  SET @str = # create and fill T0
    CONCAT("CREATE TEMPORARY TABLE ", recursive_table, " ",
    create_table_options, " AS ", initial_SELECT);
  PREPARE stmt FROM @str;
  EXECUTE stmt;
  SET @str = # create U
    CONCAT("CREATE TEMPORARY TABLE ", recursive_table_union, " LIKE ", recursive_table);
  PREPARE stmt FROM @str;
  EXECUTE stmt;
  SET @str = # create T1
    CONCAT("CREATE TEMPORARY TABLE ", recursive_table_next, " LIKE ", recursive_table);
  PREPARE stmt FROM @str;
  EXECUTE stmt;
  if max_recursion = 0 then
    set max_recursion = 100; # a default to protect the innocent
  end if;
  recursion: repeat
    # add T0 to U (this is always UNION ALL)
    SET @str =
      CONCAT("INSERT INTO ", recursive_table_union, " SELECT * FROM ", recursive_table);
    PREPARE stmt FROM @str;
    EXECUTE stmt;
    # we are done if max depth reached
    set max_recursion = max_recursion - 1;
    if not max_recursion then
      if show_progress then
        select concat("max recursion exceeded");
      end if;
      leave recursion;
    end if;
    # fill T1 by applying the recursive SELECT on T0
    SET @str =
      CONCAT("INSERT INTO ", recursive_table_next, " ", recursive_SELECT);
    PREPARE stmt FROM @str;
    EXECUTE stmt;
    # we are done if no rows in T1
    select row_count() into new_rows;
    if show_progress then
      select concat(new_rows, " new rows found");
    end if;
    if not new_rows then
      leave recursion;
    end if;
    # Prepare next iteration:
    # T1 becomes T0, to be the source of next run of recursive_SELECT,
    # T0 is recycled to be T1.
    SET @str =
      CONCAT("ALTER TABLE ", recursive_table, " RENAME ", recursive_table_tmp);
    PREPARE stmt FROM @str;
    EXECUTE stmt;
    # we use ALTER TABLE RENAME because RENAME TABLE does not support temp tables
    SET @str =
      CONCAT("ALTER TABLE ", recursive_table_next, " RENAME ", recursive_table);
    PREPARE stmt FROM @str;
    EXECUTE stmt;
    SET @str =
      CONCAT("ALTER TABLE ", recursive_table_tmp, " RENAME ", recursive_table_next);
    PREPARE stmt FROM @str;
    EXECUTE stmt;
    # empty T1
    SET @str =
      CONCAT("TRUNCATE TABLE ", recursive_table_next);
    PREPARE stmt FROM @str;
    EXECUTE stmt;
  until 0 end repeat;
  # eliminate T0 and T1
  SET @str =
    CONCAT("DROP TEMPORARY TABLE ", recursive_table_next, ", ", recursive_table);
  PREPARE stmt FROM @str;
  EXECUTE stmt;
  # Final (output) SELECT uses recursive_table name
  SET @str =
    CONCAT("ALTER TABLE ", recursive_table_union, " RENAME ", recursive_table);
  PREPARE stmt FROM @str;
  EXECUTE stmt;
  # Run final SELECT on UNION
  SET @str = final_SELECT;
  PREPARE stmt FROM @str;
  EXECUTE stmt;
  # No temporary tables may survive:
  SET @str =
    CONCAT("DROP TEMPORARY TABLE ", recursive_table);
  PREPARE stmt FROM @str;
  EXECUTE stmt;
  # We are done :-)
END|

delimiter ;
In the SQL Standard, WITH RECURSIVE allows some nice additional tweaks (depth-first or breadth-first ordering, cycle detection). In future posts I will show how to emulate them too.

Tuesday, February 19, 2013

Fixing awkward TIMESTAMP behaviors...

There are great features in MySQL 5.6. But not only that: we also tried to correct some old behaviors and limitations which, over the years, have proven to irritate our Community. The behavior of TIMESTAMP columns is one of them.

My colleague Martin Hansson did most of the work and summarized it well in his blog. Thanks to him, since MySQL 5.6.5, it's possible to declare more than one TIMESTAMP column with the DEFAULT CURRENT_TIMESTAMP or ON UPDATE CURRENT_TIMESTAMP attributes. And it's possible to have DATETIME columns with such attributes. Two limitations lifted!

But that is not the end of the story. TIMESTAMP was still special. Unlike other datatypes, if not declared with a NULL or NOT NULL attribute, it would automatically get NOT NULL. And the first TIMESTAMP column of the table would automatically get DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP. Many people found these behaviors odd. For them, my colleague Gopal Shankar added a new option to the server, documented here. The old behavior, now deprecated, is still available if the option is not used.
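To make the implicit behavior concrete, here is a sketch (assuming the option, which I believe is named explicit_defaults_for_timestamp, is not enabled):

 CREATE TABLE t (ts TIMESTAMP);
 SHOW CREATE TABLE t;
 # shows: `ts` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
 # With the option enabled, ts would instead be a plain nullable column,
 # with no automatic DEFAULT or ON UPDATE clauses.
 DROP TABLE t;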

But that is still not the end of the story. "CREATE TABLE ... SELECT" dealt strangely with columns having DEFAULT CURRENT_TIMESTAMP. Just look at this simple example, which shows results with version 5.6.9:

CREATE TABLE t1 (
t1a TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
t1b TIMESTAMP DEFAULT '2000-01-01 01:00:00'
);
INSERT INTO t1 VALUES ();
SELECT * FROM t1;
t1a                  t1b
2013-02-19 18:12:41  2000-01-01 01:00:00
The content of t1 is as expected (remember that "()" in INSERT means "insert columns' defaults").
Now let's create a second table, which should have four columns: first, two extra columns t2a and t2b, then two columns filled with values selected from t1:

CREATE TABLE t2 (
t2a TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
t2b TIMESTAMP DEFAULT '2000-01-01 02:00:00'
) SELECT * FROM t1;
SHOW CREATE TABLE t2;
Table Create Table
t2 CREATE TABLE `t2` (
  `t2a` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `t2b` timestamp NOT NULL DEFAULT '2000-01-01 02:00:00',
  `t1a` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `t1b` timestamp NOT NULL DEFAULT '2000-01-01 01:00:00'
) ENGINE=InnoDB DEFAULT CHARSET=latin1
See how t2.t1b inherited the DEFAULT attribute of its source column t1.t1b: DEFAULT '2000-01-01 01:00:00', as expected, and as the documentation says. But! t2.t1a did NOT inherit the DEFAULT attribute of its source column t1.t1a: it rather got the strange DEFAULT '0000-00-00 00:00:00' (year zero...). That's the first problem: constant defaults are properly inherited, but function defaults are not.
Now let's look at the content of t2:

SELECT * FROM t2;
t2a                  t2b                  t1a                  t1b
0000-00-00 00:00:00  2000-01-01 02:00:00  2013-02-19 18:12:41  2000-01-01 01:00:00
The last two columns, which have their source in t1, have the same value as their source column, which is correct.
The first two (extra) columns did not have their values specified, so their defaults should have been inserted. That's what nicely happened for t2b. But not for t2a: year zero again! That's the second problem: an extra column is not filled with its default if that default is a function default.

I grouped those two problems under the name of Bug#16163936, and fixed them. Here are the results in 5.6.10:

SHOW CREATE TABLE t2;
Table Create Table
t2 CREATE TABLE `t2` (
  `t2a` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `t2b` timestamp NOT NULL DEFAULT '2000-01-01 02:00:00',
  `t1a` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `t1b` timestamp NOT NULL DEFAULT '2000-01-01 01:00:00'
) ENGINE=InnoDB DEFAULT CHARSET=latin1
SELECT * FROM t2;
t2a                  t2b                  t1a                  t1b
2013-02-19 18:27:39  2000-01-01 02:00:00  2013-02-19 18:27:39  2000-01-01 01:00:00
All correct!

Friday, October 12, 2012

Cost-based choice between subquery materialization and EXISTS

In a previous post, I had demonstrated how subquery materialization, introduced in MySQL 5.6.5, improves the performance of certain queries, like query Q16 of DBT3. Such improvement was easily explained:
  • Subquery materialization has a high start-up cost (it needs to create and fill the temporary table).
  • But afterwards it has fast lookups (the temporary table has a hash index, contains no duplicates, and is in memory).
  • In other words, compared to EXISTS, the first evaluation of the IN predicate is slow (high start-up cost) and all following evaluations are fast (just a hash lookup).
  • In the DBT3 setup, one outer table (named "part") has 200,000 rows, so there are 200,000 evaluations of IN; thus subquery materialization wins over EXISTS, because the time it loses on the first evaluation is more than compensated by the many faster evaluations which follow.
However, if there were only a few outer rows, then subquery materialization should logically be slower than EXISTS (the compensation would not happen anymore)... MySQL 5.6.5, by blindly always choosing subquery materialization, takes the risk of making certain queries slower. There needs to be a cost-based choice between the two strategies, to pick the best one depending on the situation! That is what I have implemented in MySQL 5.6.7.

To show it in action, I will use query Q16 again. First I will run it with the normal "part" table, which has 200,000 rows. Then I will reduce this table to only 200 rows and run the query again. Each time, I will run EXPLAIN to see which subquery strategy is chosen by the optimizer. I will also, by tweaking the optimizer_switch variable, force the optimizer to use the other strategy, the one it didn't like, in order to verify that it is indeed worse.
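For reference, the tweak looks like this (a sketch; in 5.6.7 the relevant optimizer_switch flags are "materialization" and, if I name it correctly, "subquery_materialization_cost_based"):

 SET optimizer_switch='materialization=off';  # bans materialization: EXISTS is used
 EXPLAIN SELECT ... ;                         # check the chosen strategy
 SET optimizer_switch='materialization=default,subquery_materialization_cost_based=default';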

For brevity, let me jump directly to the results, obtained with a release build of MySQL 5.6.7 on my machine:

Rows in "part"   Optimizer chooses   Execution time   If I force the alternative
200,000          Materialization     550 ms           830 ms
200              EXISTS              1 ms             10 ms

We can see that in both cases the optimizer has made the right choice!

Tuesday, April 10, 2012

Faster subqueries with materialization

In a previous post, I analyzed how a query of the famous DBT3 benchmark was
optimized by MySQL. It was this query, named "Q16" in the DBT3 jargon:

select
 p_brand,
 p_type,
 p_size,
 count(distinct ps_suppkey) as supplier_cnt
from
 partsupp,
 part
where
 p_partkey = ps_partkey
 and p_brand <> 'Brand#23'
 and p_type not like 'LARGE PLATED%'
 and p_size in (43, 1, 25, 5, 35, 12, 42, 40)
 and ps_suppkey not in (
  select
   s_suppkey
  from
   supplier
  where
   s_comment like '%Customer%Complaints%'
 )
group by
 p_brand,
 p_type,
 p_size
order by
 supplier_cnt desc,
 p_brand,
 p_type,
 p_size;

Here is a brief recap of conclusions I had drawn:
  • for this query, MySQL transforms the IN condition to EXISTS and then
    evaluates it with the "unique_subquery" technique, which does an index
    lookup into the subquery's table.
  • IN is evaluated 120,000 times (once per combined row of the outer tables).
  • The total execution time of query Q16 is 0.65 seconds.
If you look at the original subquery, before IN becomes EXISTS, you will see that it's not correlated, which means that it does not mention columns of tables of the top query's FROM clause ('partsupp' and 'part'). Thus, its resultset is constant throughout the execution of the entire top query; here it is:


mysql> select
    -> s_suppkey
    -> from
    -> supplier
    -> where
    -> s_comment like '%Customer%Complaints%';
+-----------+
| s_suppkey |
+-----------+
|       358 |
|      2820 |
|      3804 |
|      9504 |
+-----------+
4 rows in set (0.00 sec)

The transformation to EXISTS, because it injects equalities like
  `partsupp`.`ps_suppkey` = `supplier`.`s_suppkey`
into the subquery's WHERE clause, makes the subquery correlated: it thus has to be executed 120,000 times, so we do the following 120,000 times:
  • an index lookup in 'supplier' (which has 10,000 rows)
  • a test of the found row(s) against the LIKE condition.

Intuitively, determining the 4-row resultset once and for all, and injecting it into the top query, should yield better performance: it is fast to evaluate
  ps_suppkey not in (358, 2820, 3804, 9504).
Starting with the just-released MySQL 5.6.5, this transformation is done automatically by the Optimizer, and is called subquery materialization. The subquery's resultset is determined once and for all and stored into an in-memory temporary table. If the temporary table has only 4 rows, as in our example, searching for a match in it can be done with a scan; but if it had more rows, a hash index would help greatly. So a hash index is always created on the temporary table's columns. Last, this index is unique: there is no point in storing duplicates, and they would make the table bigger. After this one-time setup has been completed, each evaluation of IN simply does a hash index lookup into the 4-row temporary table.
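To get an intuition of the mechanics, here is a rough manual emulation (a sketch only; the table name "mat" is mine, and the server does all of this internally and more efficiently):

 CREATE TEMPORARY TABLE mat ENGINE=MEMORY AS
   SELECT DISTINCT s_suppkey FROM supplier
   WHERE s_comment LIKE '%Customer%Complaints%';
 ALTER TABLE mat ADD UNIQUE KEY (s_suppkey);  # MEMORY indexes are hash by default
 # each evaluation of the IN predicate is then close to one hash lookup:
 #   SELECT 1 FROM mat WHERE s_suppkey = <outer value>;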

My former colleague Timour Katchaounov started developing this feature years ago, when working for MySQL/Sun. In recent months, after a round of intensive QA, we fixed the last bugs in it, in preparation for the 5.6.5 release. But the feature still had one limitation: it was applicable only if IN was placed at certain positions in the query. For example, it couldn't be used with NOT IN. And query Q16 has a NOT IN! So the Optimizer could not apply subquery materialization to it, and was thus stuck with using EXISTS. Sad!

Why it could not work with NOT IN is not very easy to explain. It has to do with NULL values, because they sometimes prevent using the hash index. To give an idea, look at this:
  (NULL, 1) NOT IN (SELECT ...)
Per the SQL standard, if the subquery's resultset contains at least one row of the form (x,1) where x is any number (or NULL), then the IN condition is neither TRUE, nor FALSE, it is UNKNOWN. So is the NOT IN condition, because it is the negation of IN, and NOT(UNKNOWN) is UNKNOWN.
Here are examples of such rows: (NULL,1), (421,1), (236,1), (5329,1), ad infinitum.
We can see that those rows will not be found by a lookup in the hash index: the index is defined on the two columns and has the usual "prefix-only" behaviour, which means that it cannot be used to search for "any value in the first column, then 1 in the second column". As soon as the search starts with "any value in the first column", a table scan is necessary: we must read each row of the temporary table and compare its second column with 1 until we find a matching row. And that:
  • will drag subquery materialization's performance down
  • will drag subquery materialization's code complexity up.
And I have not even covered all the problems here: there can be more than two columns, there can be more than one NULL in the left argument of IN, and there can also be NULLs inside the subquery's resultset.

In some lucky cases, the scan can be avoided, for example:
  SELECT * FROM table1 WHERE (a,b) IN (SELECT ...)
If (a,b) is (NULL,1), the IN will be UNKNOWN or FALSE. It will be UNKNOWN if the subquery's resultset contains some (x,1), as seen above; otherwise it will be FALSE. No matter what, it will not be TRUE, and this is all that WHERE wants to know: (a,b) can thus be rejected without doing a scan.
Now, for
  SELECT * FROM table1 WHERE (a,b) NOT IN (SELECT ...)
i.e.
  SELECT * FROM table1 WHERE NOT ((a,b) IN (SELECT ...))
things are different: if (a,b) is (NULL,1), the IN will be UNKNOWN or FALSE, as we said. So NOT IN will be UNKNOWN or TRUE. "Hmm, can you be more specific? I need to know whether it's TRUE", the WHERE evaluation code will ask. Then we have to do the scan...
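These three-valued results can be observed directly with constant subqueries (a small sketch; the numbers are arbitrary):

 SELECT (NULL, 1) IN (SELECT 421, 1);      # NULL, i.e. UNKNOWN: NULL=421 is UNKNOWN, 1=1 is TRUE
 SELECT (NULL, 1) NOT IN (SELECT 421, 1);  # NULL: NOT(UNKNOWN) is UNKNOWN
 SELECT (NULL, 1) IN (SELECT 421, 2);      # 0, i.e. FALSE: the second column rules the row out
 SELECT (NULL, 1) NOT IN (SELECT 421, 2);  # 1, i.e. TRUE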

So now you understand why subquery materialization was restricted to certain placements of IN.

What I have done recently is to lift this restriction in two simple, however common, cases:
  1. If all outer and inner expressions are not nullable, then no NULL can get in the way, so there is no problem.
  2. If there is only one outer expression (and thus there is only one inner expression), figuring out the correct TRUE/FALSE/UNKNOWN answer is immediate. Understanding why... is left as an exercise to the reader :-)
Those two cases are independent: as long as one is satisfied, subquery materialization can apply to IN, no matter where it is placed (NOT IN, etc).

It turned out to be very easy to code this: I had a working prototype in an afternoon.

Q16 happens to meet the criteria of both cases: columns 'ps_suppkey' and 's_suppkey' are declared NOT NULL (first case), and the predicate has only one outer expression and one inner expression (second case).

So nowadays MySQL can, and does, use subquery materialization for query Q16; thanks to it, the execution time is down from 0.65 seconds to 0.47 seconds, roughly a 28% improvement!

The new technique is visible in EXPLAIN. I first want to show how EXPLAIN looked with the EXISTS transformation, so I temporarily disable subquery materialization and run EXPLAIN; then I re-enable subquery materialization and run EXPLAIN again:


mysql> set optimizer_switch='materialization=off';
mysql> explain ...

+----+--------------------+----------+-----------------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+
| id | select_type        | table    | type            | possible_keys        | key          | key_len | ref                 | rows   | Extra                                        |
+----+--------------------+----------+-----------------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+
|  1 | PRIMARY            | part     | ALL             | PRIMARY              | NULL         | NULL    | NULL                | 199742 | Using where; Using temporary; Using filesort |
|  1 | PRIMARY            | partsupp | ref             | PRIMARY,i_ps_partkey | i_ps_partkey | 4       | dbt3.part.p_partkey |      2 | Using where; Using index                     |
|  2 | DEPENDENT SUBQUERY | supplier | unique_subquery | PRIMARY              | PRIMARY      | 4       | func                |      1 | Using where                                  |
+----+--------------------+----------+-----------------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+

mysql> set optimizer_switch='materialization=default'; # 'on' would work too
mysql> explain ...

+----+-------------+----------+------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+
| id | select_type | table    | type | possible_keys        | key          | key_len | ref                 | rows   | Extra                                        |
+----+-------------+----------+------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+
|  1 | PRIMARY     | part     | ALL  | PRIMARY              | NULL         | NULL    | NULL                | 199742 | Using where; Using temporary; Using filesort |
|  1 | PRIMARY     | partsupp | ref  | PRIMARY,i_ps_partkey | i_ps_partkey | 4       | dbt3.part.p_partkey |      2 | Using where; Using index                     |
|  2 | SUBQUERY    | supplier | ALL  | NULL                 | NULL         | NULL    | NULL                |  10113 | Using where                                  |
+----+-------------+----------+------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+
If you compare, the big difference is that the third line says SUBQUERY instead of DEPENDENT SUBQUERY. DEPENDENT SUBQUERY means that it has to be executed once per row of the top query. SUBQUERY means that it is executed only once.
EXPLAIN FORMAT=JSON, another new feature in MySQL 5.6.5, shows more details of materialization:

{
  "query_block": {
    "select_id": 1,
    "ordering_operation": {
      "using_temporary_table": true,
      "using_filesort": true,
      "grouping_operation": {
        "using_filesort": true,
        "nested_loop": [
          {
            "table": {
              "table_name": "part",
              "access_type": "ALL",
              "possible_keys": [
                "PRIMARY"
              ],
              "rows": 199742,
              "filtered": 100,
              "attached_condition": "((`dbt3`.`part`.`p_brand` <> 'Brand#23') and (not((`dbt3`.`part`.`p_type` like 'LARGE PLATED%'))) and (`dbt3`.`part`.`p_size` in (43,1,25,5,35,12,42,40)))"
            }
          },
          {
            "table": {
              "table_name": "partsupp",
              "access_type": "ref",
              "possible_keys": [
                "PRIMARY",
                "i_ps_partkey"
              ],
              "key": "i_ps_partkey",
              "key_length": "4",
              "ref": [
                "dbt3.part.p_partkey"
              ],
              "rows": 2,
              "filtered": 100,
              "using_index": true,
              "attached_condition": "(not(< in_optimizer >(`dbt3`.`partsupp`.`ps_suppkey`,`dbt3`.`partsupp`.`ps_suppkey` in ( < materialize > (select `dbt3`.`supplier`.`s_suppkey` from `dbt3`.`supplier` where (`dbt3`.`supplier`.`s_comment` like '%Customer%Complaints%') ), < primary_index_lookup >(`dbt3`.`partsupp`.`ps_suppkey` in < temporary table > on distinct_key where ((`dbt3`.`partsupp`.`ps_suppkey` = `materialized subselect`.`s_suppkey`)))))))",
              "attached_subqueries": [
                {
                  "using_temporary_table": true,
                  "dependent": false,
                  "cacheable": true,
                  "table": {
                    "access_type": "eq_ref",
                    "key": "< auto_key >",
                    "rows": 1
                  },
                  "query_block": {
                    "select_id": 2,
                    "table": {
                      "table_name": "supplier",
                      "access_type": "ALL",
                      "rows": 10113,
                      "filtered": 100,
                      "attached_condition": "(`dbt3`.`supplier`.`s_comment` like '%Customer%Complaints%')"
                    }
                  }
                }
              ]
            }
          }
        ]
      }
    }
  }
}
Note that the lines above are long. This output shows that:
  1. For the top query, we read a row from 'part', then one row from 'partsupp', then execute the subquery.
  2. the very first execution of the subquery materializes (<materialize>) select `dbt3`.`supplier`.`s_suppkey` from `dbt3`.`supplier` where (`dbt3`.`supplier`.`s_comment` like '%Customer%Complaints%') into a temporary table
  3. Each subquery execution does a lookup on the primary key of this temporary table (<primary_index_lookup> ... in <temporary table>)
  4. Going further down, we see how the temporary table will be filled: it will be the resultset of a table scan ("access_type": "ALL") of 'supplier' with a filtering LIKE condition.
More details on the feature's usage can be found in the manual.

This is the end of this post. I hope it shows in a good light the work we have put into 5.6. There are many other Optimizer features in this version, like EXPLAIN FORMAT=JSON and others; they are described in my colleagues' blog posts.

Thursday, February 9, 2012

Optimizer tracing used by others!

In a previous post, I had explained how to use MySQL's optimizer tracing, a new feature which appeared in MySQL 5.6.3.

As a developer, it feels really good to see others adopt my work and make something useful out of it! My colleague Dimitri Kravtchuk, who is one of our top benchmarking experts, has written a blog post where he shows how optimizer tracing helped him figure out why, under load, once in a while and randomly, a query performed badly. His investigation technique may be reusable by other people, so I encourage you to read more about it here.

Tuesday, November 29, 2011

Understanding the unique_subquery optimization

If you use the EXPLAIN SELECT statement to see how your subqueries are treated by MySQL, you may sometimes meet the "unique_subquery" optimization. Here is how the manual describes it:
"unique_subquery: this type replaces ref for some IN subqueries of the following form: value IN (SELECT primary_key FROM single_table WHERE some_expr); unique_subquery is just an index lookup function that replaces the subquery completely for better efficiency".
A few weeks ago, while reviewing a patch fixing a bug in unique_subquery, I felt an urge to simplify. I told myself that:
  • unique_subquery is an optimization for a special case of simple subqueries (single inner table, using an index, no aggregates);
  • we have a more general system around, used for more complex subqueries, naturally capable of handling simple ones too if we wanted;
  • this general system does not have the bug in question...
Then I wondered: what if we removed the unique_subquery optimization and let the general system handle this simple subquery? This would certainly simplify the code, and thus maintenance. But before removing it, of course, we should check whether unique_subquery brings a significant performance benefit.

So today I'm testing unique_subquery against the DBT3 benchmark. I grab a copy of MySQL 5.6.3, and focus on the sixteenth query of DBT3, which contains a subquery (the NOT IN below) suitable for handling by unique_subquery:


select
 p_brand,
 p_type,
 p_size,
 count(distinct ps_suppkey) as supplier_cnt
from
 partsupp,
 part
where
 p_partkey = ps_partkey
 and p_brand <> 'Brand#23'
 and p_type not like 'LARGE PLATED%'
 and p_size in (43, 1, 25, 5, 35, 12, 42, 40)
 and ps_suppkey not in (
  select
   s_suppkey
  from
   supplier
  where
   s_comment like '%Customer%Complaints%'
 )
group by
 p_brand,
 p_type,
 p_size
order by
 supplier_cnt desc,
 p_brand,
 p_type,
 p_size;

This query executes in 0.65 seconds on my Linux box, and EXPLAIN is:

+----+--------------------+----------+-----------------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+
| id | select_type        | table    | type            | possible_keys        | key          | key_len | ref                 | rows   | Extra                                        |
+----+--------------------+----------+-----------------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+
|  1 | PRIMARY            | part     | ALL             | PRIMARY              | NULL         | NULL    | NULL                | 199498 | Using where; Using temporary; Using filesort |
|  1 | PRIMARY            | partsupp | ref             | PRIMARY,i_ps_partkey | i_ps_partkey | 4       | dbt3.part.p_partkey |      2 | Using where; Using index                     |
|  2 | DEPENDENT SUBQUERY | supplier | unique_subquery | PRIMARY              | PRIMARY      | 4       | func                |      1 | Using where                                  |
+----+--------------------+----------+-----------------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+

When I disable unique_subquery (by modifying MySQL's C++ code), EXPLAIN becomes:

+----+--------------------+----------+--------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+
| id | select_type        | table    | type   | possible_keys        | key          | key_len | ref                 | rows   | Extra                                        |
+----+--------------------+----------+--------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+
|  1 | PRIMARY            | part     | ALL    | PRIMARY              | NULL         | NULL    | NULL                | 199498 | Using where; Using temporary; Using filesort |
|  1 | PRIMARY            | partsupp | ref    | PRIMARY,i_ps_partkey | i_ps_partkey | 4       | dbt3.part.p_partkey |      2 | Using where; Using index                     |
|  2 | DEPENDENT SUBQUERY | supplier | eq_ref | PRIMARY              | PRIMARY      | 4       | func                |      1 | Using where                                  |
+----+--------------------+----------+--------+----------------------+--------------+---------+---------------------+--------+----------------------------------------------+

The only change, as expected, is that "unique_subquery" becomes "eq_ref". The index used is the same (the primary key of the "supplier" table). The optimizer has the same notion of uniqueness: "unique_subquery" and "eq_ref" both denote that a single lookup is needed, as the index is UNIQUE. Same index, same number of lookups: execution could well be as fast with "eq_ref" as it was with "unique_subquery".
But... no. The query now executes in 0.80 seconds: 23% slower than with unique_subquery!

Finer-grained timing shows that the extra 0.15 seconds are indeed lost in the subquery evaluation code.

To understand this, let's follow the execution in detail, based on EXPLAIN output above.
  • First line of EXPLAIN output: we do a table scan on the "part" table ("type=ALL" means "table scan"). The "rows" column of EXPLAIN suggests that we are going to get 199,498 rows from "part".
  • Second line of EXPLAIN output: for each row from the "part" table, we do an index lookup ("ref") into the "i_ps_partkey" index of the "partsupp" table; apparently such lookup will find two rows ("rows=2").
  • At this point, we have a row made of the needed columns of "part" and of "partsupp". An upper estimate of the number of those rows is 199,498 multiplied by 2, roughly 400,000. Actually, the real number is around 120,000 (there has been filtering going on, as the "Using where" indicates).
  • Then we evaluate the WHERE clause, and thus the "NOT IN (subquery)" predicate (the "DEPENDENT SUBQUERY"): 120,000 evaluations of this predicate. And that's where the difference is.
EXPLAIN EXTENDED followed by SHOW WARNINGS shows what the predicate looks like. Let's start with the case where unique_subquery is disabled:


/* select#1 */ select `dbt3`.`part`.`p_brand` AS `p_brand`,`dbt3`.`part`.`p_type` AS `p_type`,`dbt3`.`part`.`p_size` AS `p_size`,count(distinct `dbt3`.`partsupp`.`ps_suppkey`) AS `supplier_cnt` from `dbt3`.`partsupp` join `dbt3`.`part` where ((`dbt3`.`partsupp`.`ps_partkey` = `dbt3`.`part`.`p_partkey`) and (`dbt3`.`part`.`p_brand` <> 'Brand#23') and (not((`dbt3`.`part`.`p_type` like 'LARGE PLATED%'))) and (`dbt3`.`part`.`p_size` in (43,1,25,5,35,12,42,40)) and (not(<in_optimizer>(`dbt3`.`partsupp`.`ps_suppkey`,<exists>(/* select#2 */ select 1 from `dbt3`.`supplier` where ((`dbt3`.`supplier`.`s_comment` like '%Customer%Complaints%') and (<cache>(`dbt3`.`partsupp`.`ps_suppkey`) = `dbt3`.`supplier`.`s_suppkey`))))))) group by `dbt3`.`part`.`p_brand`,`dbt3`.`part`.`p_type`,`dbt3`.`part`.`p_size` order by count(distinct `dbt3`.`partsupp`.`ps_suppkey`) desc,`dbt3`.`part`.`p_brand`,`dbt3`.`part`.`p_type`,`dbt3`.`part`.`p_size`

Above, the interesting part says that

ps_suppkey not in (
        select
            s_suppkey
        from
            supplier
        where
            s_comment like '%Customer%Complaints%'
    )
has been transformed from "IN (non-correlated subquery)" to "EXISTS (correlated subquery)", yielding this:


not exists (
        select
            1
        from
            supplier
        where
            s_comment like '%Customer%Complaints%'
                        AND s_suppkey = ps_suppkey
        )

or, more exactly (leaving out the NOT operator, for brevity):


<exists>(/* select#2 */ select 1 from `dbt3`.`supplier`
           where ((`dbt3`.`supplier`.`s_comment` like '%Customer%Complaints%')
           and (<cache>(`dbt3`.`partsupp`.`ps_suppkey`) = `dbt3`.`supplier`.`s_suppkey`)))

Evaluating this EXISTS() evaluates the new subquery. This means the whole subquery evaluation machinery: calls to JOIN::exec(), sub_select(), evaluate_join_record()... Sure, deep down it does an index lookup like unique_subquery does, but all those function calls have a cost, and so does all the logic which is lying around, ready to handle any complexity in the subquery, as this is generic subquery evaluation code ("if group_by do this", "if order_by do this", "if left_join do this": none of those if()s are entered, but deciding whether to enter them has a cost). Plus some initialization code. Plus some de-initialization code. This overhead, repeated 120,000 times, amounts to 0.15 seconds...

Now, EXPLAIN EXTENDED when unique_subquery is enabled:


/* select#1 */ select `dbt3`.`part`.`p_brand` AS `p_brand`,`dbt3`.`part`.`p_type` AS `p_type`,`dbt3`.`part`.`p_size` AS `p_size`,count(distinct `dbt3`.`partsupp`.`ps_suppkey`) AS `supplier_cnt` from `dbt3`.`partsupp` join `dbt3`.`part` where ((`dbt3`.`partsupp`.`ps_partkey` = `dbt3`.`part`.`p_partkey`) and (`dbt3`.`part`.`p_brand` <> 'Brand#23') and (not((`dbt3`.`part`.`p_type` like 'LARGE PLATED%'))) and (`dbt3`.`part`.`p_size` in (43,1,25,5,35,12,42,40)) and (not(<in_optimizer>(`dbt3`.`partsupp`.`ps_suppkey`,<exists>(<primary_index_lookup>(<cache>(`dbt3`.`partsupp`.`ps_suppkey`) in supplier on PRIMARY where ((`dbt3`.`supplier`.`s_comment` like '%Customer%Complaints%') and (<cache>(`dbt3`.`partsupp`.`ps_suppkey`) = `dbt3`.`supplier`.`s_suppkey`)))))))) group by `dbt3`.`part`.`p_brand`,`dbt3`.`part`.`p_type`,`dbt3`.`part`.`p_size` order by count(distinct `dbt3`.`partsupp`.`ps_suppkey`) desc,`dbt3`.`part`.`p_brand`,`dbt3`.`part`.`p_type`,`dbt3`.`part`.`p_size`

The optimizer has first done the same transformation (IN to EXISTS) as we saw before, then has done one more transformation, and EXISTS has become, as shown above:


<exists>(<primary_index_lookup>(<cache>(`dbt3`.`partsupp`.`ps_suppkey`)
          in supplier on PRIMARY
          where ((`dbt3`.`supplier`.`s_comment` like '%Customer%Complaints%')
          and (<cache>(`dbt3`.`partsupp`.`ps_suppkey`) = `dbt3`.`supplier`.`s_suppkey`))))

which is directly an index lookup ("<primary_index_lookup>"), followed by an additional WHERE clause. So the overhead of full-blown subquery evaluation is
avoided. And this overhead is not negligible, compared to the index lookup (assuming the relevant index pages are already in memory).

So the conclusion of my experiment is that unique_subquery is worth having. I'll have to direct my simplification urges at some other code!

Note that there also exists a similar "index_subquery" optimization, applying to non-unique indices. And it's worth having, for the same reasons.