Re-factoring some internals of prepared statements in 5.7
[ this is a re-posting of what I published on the MySQL server team blog a few days ago ]
When the MySQL server receives a SELECT query, the query goes through several consecutive phases:
- parsing: SQL words are recognized, the query is split into different parts following the SQL grammar rules: a list of selected expressions, a list of tables to read, a WHERE condition, …
- resolution: the output of the parsing stage contains names of columns and names of tables. Resolution is about making sense of these. For example, in “WHERE foo=3”, “foo” is a column name without a table name; by applying SQL name resolution rules, we discover which table contains “foo” (it can be complicated if subqueries or outer joins are involved).
- optimization: finding the best way to read tables: the best order of tables, and, for each table, the best way to access it (index lookup, index scan, …). The output is the so-called “plan”.
- execution: we read tables as dictated by the plan above, and send output rows to the client.
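There is a user-visible window into the output of the optimization phase: EXPLAIN runs parsing, resolution and optimization, then prints the chosen plan instead of executing the query. A hypothetical example, reusing the column from the resolution example above:

EXPLAIN SELECT * FROM t WHERE foo = 3;
# prints, for each table, the access method chosen by the optimizer
# (index lookup, index scan, full table scan, ...)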
But a prepared statement brings more requirements: MySQL has to be able to execute the query a second time (and a third time, and so on, at the user’s will). Whatever the API used (the C API or the SQL API), this usually looks like the following dialogue:
# Assume that "t" contains 3 rows with values 1, 2, 98 in a column user: PREPARE stmt FROM 'SELECT * FROM t'; MySQL: okay user: EXECUTE stmt; MYSQL: 1,2,98 user: INSERT INTO t VALUES (0); user: EXECUTE stmt; MYSQL: 1,2,98,0
SQL mandates that syntax and semantic errors be reported by PREPARE, without postponing them to EXECUTE. So, PREPARE needs to do at least parsing and resolution, to spot any such error.
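For example, a name error is reported by PREPARE itself, and EXECUTE is never reached (a hypothetical statement, assuming table “t” has no column named no_such_column):

PREPARE bad FROM 'SELECT no_such_column FROM t';
# ERROR 1054 (42S22): Unknown column 'no_such_column' in 'field list'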
Then there is an implementation question: when it receives “EXECUTE stmt”, how can MySQL remember what “stmt” is? The simplest, most obvious solution would be, during PREPARE, to simply store
"stmt" => "SELECT * FROM t"
in some session-local key/value dictionary. When “EXECUTE stmt” comes, MySQL would find the query string in the dictionary, and go through all stages (parsing, resolution, optimization, execution). And when the next “EXECUTE stmt” comes, it would do the same again.

Of course, that would not be efficient. At least parsing and resolution theoretically don’t need to be re-done every time: the query’s text doesn’t change, so only one parsing is necessary; the structure of tables (names and types of columns) doesn’t change, so only one resolution is necessary (but see footnote 1). So, what PREPARE actually stores in the key/value dictionary is
"stmt" => tree
where “tree” is the output of the resolution phase; it’s an in-memory tree of C++ objects, each object representing an expression, a column of a table, a table, a left join, a boolean condition, …

With this design, only optimization and execution are repeated by each “EXECUTE stmt” (but see footnote 2).
Repeating optimization is sometimes a smart thing to do: imagine that between the first and second EXECUTE, the table’s data distribution has changed significantly (perhaps due to other clients); then the plan of the first EXECUTE is maybe not the best one anymore, and searching for a new plan makes sense. In other cases, repeating optimization is a waste, because the old plan would be good enough. As of today, MySQL always repeats optimization; making this more flexible would be an interesting enhancement for the future, but today’s situation is: we repeat optimization.

Optimization does everything it can to speed up the query. Imagine the query references 10 tables in its FROM clause. To find the best order to access those 10 tables, we need to consider 10! possible orders, roughly 3,600,000. It takes time to consider that many plans. The optimizer has techniques to cut this down; let me show one. If one table is referenced like this in the WHERE clause:
WHERE t.pk=39 AND ...
then, because “t.pk” is a primary key, I know that at most one row of this table will participate in the result, and it makes sense to read it with a primary key lookup. And this read has to be done only once in this execution. So let’s put “t” first in the plan, unconditionally. This technique, known as “constant tables”, divides the number of possible orders by a factor of 10 in our example (only 9 tables left to order). Now, imagine that the WHERE clause looks like this:
WHERE t.pk=39 AND t.c1>t2.c2
As soon as I read the row of “t” with a primary key lookup (looking up 39), I get access to the value of t.c1; say it is 100. My condition can thus be simplified to:
WHERE 100>t2.c2
Notice how the AND has disappeared.
The inequality predicate “100>t2.c2” is interesting: assuming that t2.c2 is indexed, it means that a range scan is a possibility for reading t2.
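Here is how this could surface in EXPLAIN, on hypothetical tables (t with primary key pk, t2 with an index on c2); the exact output depends on the data:

EXPLAIN SELECT * FROM t, t2 WHERE t.pk=39 AND t.c1>t2.c2;
# expected, as a sketch: t is shown with access type "const"
# (one primary key lookup, done before join ordering), and t2 may
# show type "range" on the c2 index, because the condition has been
# simplified to <value of t.c1> > t2.c2.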
This little example was meant to demonstrate that MySQL, in the optimization phase, does data-based transformations on the query: transformations which depend on the content of tables, and which apply to one single execution. Indeed, when the next execution comes, it may be that the row of “t” with t.pk=39 now has t.c1=1200, so our condition simplifications don’t hold anymore. Another way to say this is: data-based transformations must be non-permanent.
So the AND operator which was killed above during the optimization of the first EXECUTE, must be resurrected for the optimization of the second EXECUTE.
To achieve this, at the end of resolution we rename tree to permanent_tree, then we make a copy of it, which we name tree. The optimization phase has access to tree, and no access to permanent_tree. The optimization operates on tree, does data-based transformations on it. When the second EXECUTE starts, permanent_tree is fetched, copied into a new tree, on which the optimization operates. For example, the AND operator always exists in permanent_tree.
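Put end to end, on the same hypothetical tables as above, the effect is observable like this: every EXECUTE starts from a fresh copy of permanent_tree, so a data change between executions is picked up by the next optimization:

PREPARE s FROM 'SELECT * FROM t, t2 WHERE t.pk=39 AND t.c1>t2.c2';
EXECUTE s;   # t.c1 is 100 in that row: condition simplifies to 100>t2.c2
UPDATE t SET c1=1200 WHERE pk=39;
EXECUTE s;   # re-optimized from permanent_tree: now 1200>t2.c2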
What about permanent transformations? For example, consider this query:
SELECT * FROM t1 JOIN (t2 JOIN t3 ON t2.a=t3.a) ON t2.b+t3.b>t1.b
WHERE t1.b*2>15;
The result of this query is defined by the SQL standard as: take the result of the innermost join (t2,t3), filter with t2.a=t3.a, then join the result with t1, filter with t2.b+t3.b>t1.b, then filter with t1.b*2>15. With this definition, the order in which tables are read, and the order in which conditions are applied, are constrained. For example, reading t2 then t1 then t3 is not possible. But if we notice that the query is equivalent to:
SELECT * FROM t1, t2, t3 WHERE t2.a=t3.a AND t2.b+t3.b>t1.b
AND t1.b*2>15;
then we have 3! = 6 possible orders for tables. More plans to examine, but more potential for finding a good plan – it is very possible that the best plan, the one yielding the fastest execution, is not among the ones suggested by the SQL standard’s definition.
The equivalence between both queries is semantics-based, not data-based. Thus, the transformation from the first to the second query can be permanent: it can be done once and for all, not at every EXECUTE.
Permanent transformations include:
- JOIN-to-WHERE, like in the example above
- outer-join-to-inner-join, when there is something in the WHERE clause which allows MySQL to deduce that NULL-complemented rows will actually not participate in the result
- semi-join, more or less merging an “IN (subquery)” predicate into the parent query
- IN-to-EXISTS, rewriting “x IN (subquery)” to “EXISTS (modified subquery)”.
They are not all done in the same phase:
- IN-to-EXISTS: done in the resolution phase.
- semi-join, JOIN-to-WHERE, outer-join-to-inner-join: done in the optimization phase.
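To make two of these concrete, here are sketches on hypothetical tables; the rewritten forms are approximations of what the optimizer builds internally, not text you would ever see:

# outer-join-to-inner-join: WHERE t2.b>0 is never true for a
# NULL-complemented row of t2, so the LEFT JOIN can safely become
# an inner join:
SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.a WHERE t2.b>0;
# is permanently rewritten to, roughly:
#   SELECT * FROM t1 JOIN t2 ON t1.a=t2.a WHERE t2.b>0;

# IN-to-EXISTS: the IN predicate becomes a correlated EXISTS:
SELECT * FROM t1 WHERE t1.a IN (SELECT t2.a FROM t2);
# is rewritten to, roughly:
#   SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.a=t1.a);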
So in the end, efficiency is guaranteed: permanent transformations are not re-done. Though, admittedly, this design looks strange.
Now, putting the pieces together, on the way to more strangeness: remember that after the resolution phase, we produce permanent_tree and save it for all future EXECUTEs. It thus contains the permanent transformations done in resolution; good. But it does not contain those done in optimization (semi-join, …), as optimization runs after permanent_tree has been produced. Still, we do want semi-join and friends to be reused by all future EXECUTEs, so they must be put in permanent_tree! So we get an even stranger design:
1. resolution
   1.1. do some permanent transformations (IN-to-EXISTS) on tree
   1.2. rename tree to permanent_tree
   1.3. copy permanent_tree to a new tree (for optimization to do nonpermanent transformations on it)
2. optimization
   2.1. if first optimization:
        - do some more permanent transformations (semi-join, etc.) on tree
        - throw permanent_tree away
        - copy tree to a new permanent_tree
   2.2. do nonpermanent transformations on tree
3. execution
   3.1. read tables and send rows
   3.2. throw tree away
This design has several problems:
- permanent transformations are scattered: the boundary between phases is blurred, which is a code readability problem, and an impediment for future improvements in the Optimizer in general
- efficiency loss: copying a tree in the resolution phase takes some time and memory; in the first optimization we throw permanent_tree away and do the copying again… Bad.
- real bugs: what the resolution phase thinks of permanent_tree is not true anymore (that object has been deleted and replaced by another one in the first optimization), so the next EXECUTE gets confused…
So, in 5.7, we re-factored this: all permanent transformations are now done in the resolution phase, which yields this simpler design:
1. resolution
   1.1. do all permanent transformations on tree
   1.2. rename tree to permanent_tree
2. optimization
   2.1. copy permanent_tree to a new tree (for optimization to do nonpermanent transformations on it)
   2.2. do nonpermanent transformations on tree
3. execution
   3.1. read tables and send rows
   3.2. throw tree away
There are a few more details in the Worklog page if you are interested (note the multiple tabs there).
That’s all for today. If you reached this line, congratulations :-)
Footnotes:
1. I hear you say: “what if an ALTER TABLE is done between PREPARE and EXECUTE?! This may change names and types of columns!”. Yes, you are right, but there exists, somewhere in the prepared statement subsystem, a detector for this; it runs when EXECUTE starts, and if it spots a table structure change since the statement was prepared, it throws away the prepared statement (as “out-of-date”), silently re-prepares it and then proceeds with EXECUTE; the user does not notice, except if she looks at Com_stmt_reprepare in SHOW STATUS. This detector is external to the Optimizer and runs before it; so if the Optimizer is involved in EXECUTE, it can safely assume that nothing has changed since PREPARE.
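The detector’s work can be observed from SQL; a hypothetical session (the column addition changes t’s structure, forcing a silent re-prepare):

PREPARE stmt FROM 'SELECT * FROM t';
ALTER TABLE t ADD COLUMN c9 INT;
EXECUTE stmt;   # still works: the statement was silently re-prepared
SHOW SESSION STATUS LIKE 'Com_stmt_reprepare';   # counter incremented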
2. Ok, a part of resolution is repeated too. It would be good to avoid that, in the future.
Thank you, this was a fascinating blog. I don't see this part of MySQL talked about often. May I ask a further question? When the prepared statement includes parameters, at what time are the values combined with the query? I have supposed it is around step 2.2 in your model above, but I'd be grateful to know the correct answer.
Hello Bill. Thanks for your attention to this blog!
Taking this example:
mysql> set @a=12;
mysql> prepare s from 'select * from t1 where a=?';
mysql> execute s using @a;
In "prepare s", during parsing, the WHERE condition is turned into a "tree": an equality between a column (Item_field) and a parameter (Item_param).
Then in "execute s", we start by telling the Item_param that it has value 12, in:
Prepared_statement::set_parameters().
Then the optimization and execution of the SELECT operate on Item_param; there is no substitution with the plain constant 12. The condition is not converted to a=12; it is rather “a = the Item_param with value 12”. However, this is equivalent, as Item_param is treated as a constant for this execution. Just: no substitution.
Yes, I understand a parameter isn't interpolated into the query syntax; that would require re-parsing the SQL during execution. I phrased my question imprecisely. So it must combine the parameter into the tree, that is, the copy tree, not the permanent tree.
I was wondering if the parameter value is taken into account for final optimization? Because if not, the presence of a parameter in a query could lead to a suboptimal execution plan. The optimization should decide to use an index partially based on the selectivity of the specific value a query searches for, right?
Yes, the parameter's value is taken into account. For example, I use this query, where there is an index on column 'b':
mysql> prepare s from 'explain select * from t1 where b>?';
mysql> set @a=800;
mysql> execute s using @a;
+----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+-----------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+-----------------------+
| 1 | SIMPLE | t1 | NULL | range | b | b | 5 | NULL | 56 | 100.00 | Using index condition |
+----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+-----------------------+
mysql> set @a=900;
mysql> execute s using @a;
+----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+-----------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+-----------------------+
| 1 | SIMPLE | t1 | NULL | range | b | b | 5 | NULL | 11 | 100.00 | Using index condition |
+----+-------------+-------+------------+-------+---------------+------+---------+------+------+----------+-----------------------+
Notice the changed "rows" column (56 vs 11). It's because the range optimizer uses the parameter's value to estimate how many rows will likely be found, which then influences the cost and choice of the access method.
You would see the same with a plain SELECT instead of EXPLAIN, but then you would have to look into the optimizer trace.
Excellent, thank you for the demo! It's always worthwhile to see it in action.