Optimizations for derived tables in MySQL 5.6 and MariaDB 5.5+

I had been involved with subquery optimizations fairly closely, but last week I was surprised to find out that MySQL 5.6 does not support derived table merging. This feature was among the subquery features in the abandoned MySQL 6.0. In MariaDB, it was finished and released as part of MariaDB 5.3/5.5. As for MySQL, neither MySQL 5.6, nor MySQL 5.7 has this feature.

So what is this “derived merge”? It’s simple to understand. When one writes complex queries, it is common to use FROM-clause subqueries as a way to structure the query:

select
  sum(o_totalprice)
from
  (select * from orders where o_orderpriority='1-URGENT') as high_prio_orders
where
  o_orderdate between '1995-01-01' and '1995-01-07'

The MySQL optimizer processes this syntax very poorly. The basic problem is that FROM-clause subqueries are always materialized exactly as specified. Conditions from outside the subquery are applied only after the materialization.

In our example, table orders has an index on o_orderdate, and there is a highly selective condition o_orderdate BETWEEN ... which one can use for reading through the index. But the condition is located outside the subquery, so it will not be used when reading the table. Instead, we will get the following plan:

+----+-------------+------------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table      | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+-------------+------------+------+---------------+------+---------+------+---------+-------------+
|  1 | PRIMARY     | <derived2> | ALL  | NULL          | NULL | NULL    | NULL | 1505799 | Using where |
|  2 | DERIVED     | orders     | ALL  | NULL          | NULL | NULL    | NULL | 1505799 | Using where |
+----+-------------+------------+------+---------------+------+---------+------+---------+-------------+

The meaning of it is:

  1. Do a full table scan of table `orders`. We expect to read 1.5M rows. Write the rows that match o_orderpriority='1-URGENT' into a temporary table.
  2. Read the temporary table back, filter the rows that match o_orderdate between ..., and compute the query result.

MySQL 5.6 has added some improvements to this (link to the manual). They are:

  • The temporary table is materialized as late as possible. This has no effect on the speed of our example query, but it may have an effect for more complex queries.
    • EXPLAIN also will not materialize the temporary table
  • The optimizer has an option to create and use an index on the temporary table.

However, the basic problem of materializing the FROM subquery before applying any other optimizations still remains.

In MariaDB, EXPLAIN will be different:

+------+-------------+--------+-------+---------------+---------------+---------+------+------+------------------------------------+
| id   | select_type | table  | type  | possible_keys | key           | key_len | ref  | rows | Extra                              |
+------+-------------+--------+-------+---------------+---------------+---------+------+------+------------------------------------+
|    1 | SIMPLE      | orders | range | i_o_orderdate | i_o_orderdate | 4       | NULL | 4358 | Using index condition; Using where |
+------+-------------+--------+-------+---------------+---------------+---------+------+------+------------------------------------+

Note that we see only one line, and the table orders is accessed through an index on o_orderdate. Running EXPLAIN EXTENDED will show why:

Message: select sum(`dbt3sf1`.`orders`.`o_totalprice`) AS `sum(o_totalprice)` from `dbt3sf1`.`orders` where ((`dbt3sf1`.`orders`.`o_orderpriority` = '1-URGENT') and (`dbt3sf1`.`orders`.`o_orderDATE` between '1995-01-01' and '1995-01-07'))

There is no FROM-clause subquery anymore. It has been merged into the upper select. This allowed the optimizer to avoid doing materialization, and also to use the condition and index on o_orderdate to construct a range access.

Query execution time for this particular example went down from 15 sec to 0.25 sec, but generally, the bigger your table, the bigger the difference can be.
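
If you want to check whether your server is actually merging derived tables, the optimizer_switch variable is the place to look. Here is a rough sketch (I’m assuming the derived_merge flag name used by MariaDB 5.3+, and re-using the orders table from the example above):

  select @@optimizer_switch like '%derived_merge=on%' as derived_merge_enabled;

  -- turning the flag off for the session should bring back the two-line
  -- materialized plan shown at the beginning of this post:
  set session optimizer_switch='derived_merge=off';
  explain
  select sum(o_totalprice)
  from (select * from orders where o_orderpriority='1-URGENT') as high_prio_orders
  where o_orderdate between '1995-01-01' and '1995-01-07';

  set session optimizer_switch='derived_merge=on';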

Slides: New subquery optimizations in MySQL 6.0 (new revision)

I’m now at MySQL Conference and have just finished my New Subquery Optimizations in MySQL 6.0 talk. If you’re eager to get the slides, here is the link to the pdf file and a mirror.

The talk has the same name as this MySQL University session but the slides have been thoroughly re-worked, and also there is new content:

  • More detailed explanation of subquery handling in 4.1/5.x
  • More reliable/justified benchmark numbers
  • Observations about subquery demographics
  • An attempt at a comparative analysis of MySQL’s subquery strategies versus PostgreSQL’s
    • And a slide #28 that explains why we’re looking at PostgreSQL all the time

Have a nice viewing. And, if you get subqueries that should be fast but aren’t, we’re now actively looking at every subquery performance bug we get and will appreciate any input.

Slides: New subquery optimizations in MySQL 6.0

A bunch of (hopefully) self-explanatory slides about new subquery optimizations in MySQL 6.0 is available here (UPDATE: here’s a working link). The slides are from this MySQL University session, so there was an audio stream but there were some difficulties with it and it is not available now.

If you miss the audio, the next opportunity to hear about this topic is the New Subquery Optimizations in MySQL 6.0 session I’m giving at the MySQL User Conference on April 15th.

If you don’t miss the audio, please check out the mentioned session anyway – we already have more content than there is in the slides, and I expect to have even more by the MySQL UC.

 

Slides and audio: How MySQL Handles ORDER BY, GROUP BY, and DISTINCT

I haven’t posted anything for a while, but that doesn’t mean that nothing was happening around the MySQL Optimizer. In fact, exactly the reverse was true: so many things were happening that I didn’t have time to blog.

I’ll try to make up for the lost coverage. Let’s start with smaller things. On Nov 1, I had a MySQL University session titled How MySQL Handles ORDER BY, GROUP BY, and DISTINCT.

The session page has slides and audio. If you’re ready to spend 1 hour on MySQL’s ways of doing ordering, you can listen to the audio. Otherwise, you can just scroll through the slides. They are reasonably self-contained and explain MySQL’s ORDER BY and GROUP BY handling in easy-to-digest colorful pictures.

How MySQL executes ORDER BY

In the last couple of weeks there has been a tide of ORDER/GROUP BY-related optimization bugs, where I was either the fixer or the reviewer. This wasn’t an easy job because there is no sane description of how GROUP BY/ORDER BY handling is supposed to work.

To figure it out, I had to write an explanation of how it works. The first part is about ORDER BY. Hopefully there will be subsequent parts that will show how GROUP BY is related to ORDER BY and how it works.

Available means to produce ordered sequences

MySQL has two methods to produce ordered streams.

The first is to use a “range”, “ref” or “index” access method over an ordered index. For versions up to 5.1, those access methods naturally return records in the index order, so we get ordering for free (the exception is the NDB engine, which needs to do a merge-sort when it gets data from several storage nodes). In MySQL 5.2, MyISAM and InnoDB have the MultiRangeRead optimization, which breaks the ordering. We have a draft of how to make it preserve the ordering, but at the moment MRR is simply disabled whenever ordering is required.
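
As a quick illustration, here is the kind of query that gets its ordering for free (a sketch that re-uses the orders table and the i_o_orderdate index from the derived-merge post above; the exact plan of course depends on your data and version):

  -- reading through i_o_orderdate returns rows already ordered by o_orderdate,
  -- so EXPLAIN should show no "Using filesort" here:
  explain select * from orders order by o_orderdate limit 10;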

The second, catch-all method is to use the filesort algorithm. In a nutshell, filesort() does a quicksort on chunks of data that fit into its memory and then uses a mergesort approach to merge the chunks. The amount of memory available to filesort() is controlled by the @@sort_buffer_size variable. If the sorted data doesn’t fit into memory (i.e. there is more than one chunk), filesort uses a temporary file to store the chunks.
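
A rough way to see whether filesort() had to go to disk is to watch the Sort_merge_passes status counter: it stays at zero while the sort fits into the sort buffer and grows when merge passes over the temporary file are needed. A sketch (the query is just an example that forces a big sort; the exact accounting may differ between versions):

  flush status;
  select * from orders order by o_totalprice limit 1000000, 1;
  show session status like 'Sort_merge_passes';

  -- give filesort() more memory and compare:
  set session sort_buffer_size = 8*1024*1024;
  flush status;
  select * from orders order by o_totalprice limit 1000000, 1;
  show session status like 'Sort_merge_passes';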

Source data for filesort() always comes from one table. If there is a need to sort data from several tables, MySQL will first collect the data into a temporary table and then invoke filesort() for that temporary table. I don’t know the true reason for this. Codewise, filesort() wants to pull its source data using something like a source.get_next_record() function, while the join and union runtime produce their output using result.put_next_record()-type calls, so maybe the temporary table is there only to resolve this push/pull mismatch and will go away once we get decent cursors.

filesort() has two modes of operation:

  1. Mode 1: the sorted elements contain all required columns of the source table. The result of the sorting is a linear sequence of output tuples; there is no need to access the source table after the sort is done.
  2. Mode 2: sort <sort_key, rowid> pairs and produce a sequence of rowids, which one can use to get the source table’s rows in the required order (but this will essentially hit the table in random order and is not very fast).

Mode 1 is used whenever possible. Mode 2 is used when Mode 1 is not applicable. This is the case when the sorted tuples have blobs or variable-length columns (TODO: check w/ evgen). Unfortunately, the EXPLAIN output provides no clue about which mode is used, so you’ll have to manually check for blobs in the output column list.
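
To get a feel for the two modes, compare the column lists of the two queries below (a sketch with a made-up table; remember that EXPLAIN will look the same in both cases):

  create table t_docs (id int, created datetime, body text);

  -- sorted tuples carry only fixed-size columns: Mode 1 is possible, the sort
  -- result can be returned without re-visiting the table
  select id, created from t_docs order by created;

  -- sorted tuples would have to carry a blob: filesort falls back to Mode 2,
  -- sorting <sort_key, rowid> pairs and fetching the rows afterwards in random order
  select id, body from t_docs order by created;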

Executing join to produce ordered stream

At the moment MySQL has three ways to run a join and produce ordered output:

  • Use an index-based access method that produces ordered output: EXPLAIN shows no mention of filesort
  • Use filesort() on the first non-constant table: EXPLAIN shows “Using filesort” in the first row
  • Put the join result into a temporary table and use filesort() on it: EXPLAIN shows “Using temporary; Using filesort” in the first row

Now I’ll cover those three methods in more detail. The first method can be used when the first non-const table in the join order has an index that matches the ORDER BY list. In this case we can use the index, and the ordering will “spread over” other tables onto the output. Here is a swimlane diagram of this process, where different columns represent different values of the ORDER BY expression:
diagram
This method is preferred over the other two as it requires no additional sorting steps.

The second method can be used when all ORDER BY elements refer to the first table in the join order. In this case, we can filesort() the first table and then proceed to execute the join:
diagram
Here filesort() may operate either in Mode 1 or in Mode 2. One may wonder why this is limited to doing filesort() after the first table. After all, we could do it after the second table as well: produce (tbl1, tbl2) record combinations, put them into a temporary table, sort, etc. That expectation is perfectly reasonable, but alas, MySQL will not even consider such query plans.

The last, catch-all method is to write the entire join output into a temporary table and then invoke filesort():
diagram
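
To see which of the three strategies you are getting, it is enough to look at the Extra column of the first row of EXPLAIN. Here is a sketch with a made-up table (the exact plans depend on statistics and version):

  create table items (a int, b int, c int, key(a));

  -- ORDER BY matches an index of the first table: no filesort expected
  explain select * from items it1, items it2 where it2.a=it1.a order by it1.a;

  -- ORDER BY refers only to the first table, but there is no usable index:
  -- expect "Using filesort" in the first row
  explain select * from items it1, items it2 where it2.a=it1.a order by it1.b;

  -- ORDER BY refers to a later table: expect "Using temporary; Using filesort"
  explain select * from items it1, items it2 where it2.a=it1.a order by it2.b;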

I have an easier time recalling those three strategies when there are pictures; hopefully they’ll help you too. That’s all for today. In the next posts I’ll cover how ordering affects join optimization and how it interoperates with GROUP BY.

Use of join buffer is now visible in EXPLAIN

UPDATE:
* s/Using join cache/Using join buffer/, changed to show the final variants of EXPLAIN output as described here
* s/join_buff_size/join_buffer_size/


Starting from 5.1.18, EXPLAIN output may show “Using join buffer”, as in this example:

mysql> explain select * from t1, t2 where t1.col < 10 and t2.col < 'bar';
+----+-------------+-------+-------+-...-+--------------------------------+
| id | select_type | table | type  |     | Extra                          |
+----+-------------+-------+-------+-...-+--------------------------------+
|  1 | SIMPLE      | t1    | range |     | Using where                    |
|  1 | SIMPLE      | t2    | range |     | Using where; Using join buffer |
+----+-------------+-------+-------+-...-+--------------------------------+

The join cache is actually not a new feature. It has been available in MySQL at least since version 4.0, and for all this time it has remained invisible and undocumented. The only thing that indicated its presence was the @@join_buffer_size server variable.

We’re trying to gradually make EXPLAIN show more information. Georgi Kodinov was fixing BUG#27531 and has used that occasion to make join buffering show up in EXPLAIN output.

If you already know how MySQL’s join buffering works, that’s all the news. If not, the remainder of this post has a hopefully readable explanation of how join buffering works and when it is used.

How join buffering works

Let’s start with regular Nested Loops Join. Suppose we have a join query

select * from t1, t2, t3 where t2.key1=t1.col1 and t3.key1<40;

and the query plan is as shown in this EXPLAIN output:

...-+-------+-------+---------------+------+---------+--------------+------+-------------+
    | table | type  | possible_keys | key  | key_len | ref          | rows | Extra       |
...-+-------+-------+---------------+------+---------+--------------+------+-------------+
    | t1    | ALL   | NULL          | NULL | NULL    | NULL         |   10 |             |
    | t2    | ref   | key1          | key1 | 5       | test.t1.col1 |    2 | Using where |
    | t3    | range | key1          | key1 | 5       | NULL         |   40 | Using where |
...-+-------+-------+---------------+------+---------+--------------+------+-------------+

When no join buffering is used, the query will be executed as follows:

  for each record t1rec in table t1
  {
    for each record t2rec in t2 such that t2rec.key1=t1rec.col1
    {
      for each record t3rec in t3 such that t3rec.key1<40
      {
        pass the (t1rec, t2rec, t3rec) row combination to output;
      }
    }
  }

Graphically the execution flow can be depicted as follows (yellow are the table scans, blue are the table rows):

nl-join-no-buffering.png

From the code and picture we see that:

  • Table t2 is scanned several times, but each scan accesses a different part of the table
  • Table t3 is scanned many times, and all of the performed scans are identical

It is apparent that the second and the third scans of table t3 bring no new information and can be removed. We do not have to re-scan t3 for every row combination from tables t1 and t2. Instead, we can accumulate a batch of such row combinations and then do one t3 scan for all of them. And this is what join buffering is.

In pseudo-code, the execution will look as follows:

  for each record t1rec in table t1
  {
    for each record t2rec in t2 such that t2rec.key1=t1rec.col1
    {
      put (t1rec, t2rec) into the buffer
      if (buffer is full)
        flush_buffer();
    }
  }
  flush_buffer();  /* process the row combinations still left in the buffer */

  flush_buffer() {
    for each record t3rec in t3 such that t3rec.key1<40
    {
      for each (t1rec, t2rec) combination in the buffer
        pass (t1rec, t2rec, t3rec) row combination to output;
    }
    empty the buffer;
  }

And graphically it will look as follows:

nl-join-buffering.png

The EXPLAIN output will be as follows:

explain select * from t1,t2,t3 where t2.key1 = t1.col1 and t3.key1<40;
...-+-------+-------+---------------+------+---------+--------------+------+--------------------------------+
    | table | type  | possible_keys | key  | key_len | ref          | rows | Extra                          |
...-+-------+-------+---------------+------+---------+--------------+------+--------------------------------+
    | t1    | ALL   | NULL          | NULL | NULL    | NULL         |   10 |                                |
    | t2    | ref   | key1          | key1 | 5       | test.t1.col1 |    2 | Using where                    |
    | t3    | range | key1          | key1 | 5       | NULL         |   40 | Using where; Using join buffer |
...-+-------+-------+---------------+------+---------+--------------+------+--------------------------------+

In this example join buffering is used for one table, but it can be used for several tables as well. MySQL uses join buffering whenever it can: access to some table tbl_x will be buffered if

  • The SELECT does not have an ORDER BY clause
  • We’re not at top level “select” of a multi-table UPDATE
  • tbl_x is accessed using an “independent” access method: ALL, index, range, or index_merge.
  • tbl_x is not inner w.r.t. some outer join

The server variable @@join_buffer_size specifies how much MySQL should allocate for each buffer. That is, if two tables use buffering, MySQL will allocate two buffers of @@join_buffer_size bytes each.
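
For example, to give every buffered table a 1MB buffer in the current session:

  set session join_buffer_size = 1024*1024;
  show variables like 'join_buffer_size';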

MySQL Optimizer and Prepared Statements

Performance-wise, the idea of Prepared Statements is that the server does certain pre-processing on the PREPARE command, and those steps are then omitted when the statement is executed. If the statement is executed several times, we get a saving of

cost_of_prepare_preprocessing * (#statement_executions - 1)
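
In SQL terms, the saving applies to a usage pattern like this (an illustrative example that re-uses the orders table from the earlier posts):

  prepare stmt from
    'select count(*) from orders where o_orderdate between ? and ?';

  set @d1 = '1995-01-01', @d2 = '1995-01-07';
  execute stmt using @d1, @d2;   -- parsing and other PREPARE-time work is not repeated
  execute stmt using @d1, @d2;   -- ... neither here nor on any further executions
  deallocate prepare stmt;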

This makes one want to move the CPU- and IO-intensive query optimization into the PREPARE phase. Unfortunately, this would make the optimizer work much worse: the optimizer’s decisions are based on external information, and much less of that information is available at the PREPARE phase. The most crucial differences are that

  • The values of the ‘?’ parameter markers are not yet known
  • The results of probes done in the queried tables cannot be relied on, because the table data may change before the EXECUTE
  • [less important] Table and index statistics may change before the EXECUTE

Those limitations cause most of MySQL’s optimizations to be re-done at every EXECUTE command. To be precise, here is a table of what is done when:

Action                                        When
-------------------------------------------   -------------
Query parse                                   PREPARE
Negation elimination                          PREPARE
Subquery re-writes                            PREPARE
Nested JOIN simplification                    First EXECUTE
OUTER->INNER JOIN conversions                 First EXECUTE
Partition pruning                             Every EXECUTE
COUNT/MIN/MAX elimination                     Every EXECUTE
Constant subexpression removal                Every EXECUTE
Equality propagation                          Every EXECUTE
Constant table detection                      Every EXECUTE
ref access analysis                           Every EXECUTE
range/index_merge analysis and optimization   Every EXECUTE
Join optimization                             Every EXECUTE

Basically, the current rule is as follows:

Optimizations that depend only on query syntax are performed either at PREPARE or at first EXECUTE. Optimizations that also depend on something else are performed at every EXECUTE.

If you’re only interested in the current state of affairs, that’s all you need to know. If you want to know what [I think] is going to happen in the future, read on.

The future

The above quoted rule is not a DBMS law of nature. In fact, it is already getting in the way of subquery rewrites (see e.g. BUG#27430, WL#3740) and some other server subsystems. We’ve recently had a discussion about other possible solutions. The smartest and most elegant solution was suggested by Timour:

  • Let every optimization record its “dependencies” – information that it depends on:
    • ref access analyzer: set of table indexes
    • constant table detector: the above + the set of tables that have 1 or 0 rows
    • range optimizer: all of the above + used values of parameter markers (if any) + results of records_in_range calls
    • … and so forth…
  • On subsequent EXECUTE, check which dependencies have changed and re-do the corresponding optimizations.

This approach would reduce the work done at the EXECUTE step to performing a set of checks. In most cases, the checks would pass and no re-optimization would be required. This approach is hard to implement, however. MySQL’s internal data structures do not naturally support “block undo”, so it is not easy to cancel and re-do an optimization.

For now we’ve settled for a less impressive solution:

  • We’ll gradually move query rewrites that depend on table DDLs into PREPARE phase
  • On EXECUTE we’ll check if DDLs of the used tables have changed. If they have, we’ll throw out the Prepared Statement and re-prepare it.

Re-preparing the statement is expensive, but it should be a very rare occurrence for most applications. This task is filed as WL#3813: Re-prepare the prepared statement if its underlying objects have changed.

How to find out if an outer join was converted to inner

After this post I got a question about how one can tell whether an outer join was converted to an inner join. You can find out by looking at the warning generated by EXPLAIN EXTENDED. If the outer join wasn’t converted, you’ll see it in the rewritten query in the warning:

mysql> explain extended select * from t1 left join (t2, t3) on t2.a= t1.a;
...
3 rows in set, 1 warning (0.00 sec)

mysql> show warnings\G
*************************** 1. row ***************************
  Level: Note
   Code: 1003
Message: select `test`.`t1`.`a` AS `a`,`test`.`t2`.`a` AS `a`,`test`.`t3`.`a`
AS `a` from `test`.`t1` left join (`test`.`t2` join `test`.`t3`) on ((`test`.`t2`.
`a` = `test`.`t1`.`a`)) where 1

In this query LEFT JOIN is not converted to inner.

Now let’s try a query where outer join will be converted:

mysql> explain extended select * from t1 left join (t2, t3) on t2.a= t1.a where t2.a < 10;
...
mysql> show warnings\G
*************************** 1. row ***************************
  Level: Note
   Code: 1003
Message: select `test`.`t1`.`a` AS `a`,`test`.`t2`.`a` AS `a`,`test`.`t3`.`a`
AS `a` from `test`.`t1` join `test`.`t2` join `test`.`t3` where ((`test`.`t1`.
`a` = `test`.`t2`.`a`) and (`test`.`t2`.`a` < 10))

Looking at the warning text we see an inner join instead of the left join, and also we see that the ON clause has been added into the WHERE.

Yes, those warning messages are hard to read; they have excessive quoting and the lines are too long. But at the moment certain kinds of query plan details are displayed only there, so skimming through the rewritten query text may pay off when you have doubts about what is going on.

Nested outer joins

Here is MySQL’s nested outer joins optimization cheat sheet:

  • Conversions:
    • RIGHT JOIN is converted to LEFT JOIN. FULL JOIN is not supported.
    • Outer joins are converted to inner joins when possible
  • Constraints on join order:

    • “Outer tables go first”
    • “No interleaving”
  • Table access rules:

    • “Inner” table access methods are constructed from parts of the ON condition. WHERE condition can’t be used to construct table accesses.
    • Parts of ON condition are checked as soon as possible
    • Parts of the WHERE condition
      – are not checked until we’ve found a row combination that matches the ON clause
      – are checked as soon as possible after that.

Or, in more detail:

Conversions

RIGHT JOIN to LEFT JOIN conversion is obvious:

  (t1 RIGHT JOIN t2 ON cond) = (t2 LEFT JOIN t1 ON cond)

Conversion from an outer to an inner join is possible when the result of the inner join will be the same. It will be the same if a row combination with a NULL-complemented row cannot pass the WHERE clause. For example, if we look at the query

  t1 LEFT JOIN t2 ON some_cond WHERE t2.a=t1.b

we’ll see that a row with t2.a IS NULL will not satisfy the WHERE condition. Hence, this outer join can be converted to inner.
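
Here is the same idea as a pair of queries (t1 and t2 are just illustrative tables, and t1.b/t2.b are made-up columns; you can verify what actually happened with the EXPLAIN EXTENDED trick from the previous post):

  -- the WHERE rejects NULL-complemented t2 rows, so the LEFT JOIN can become an inner join
  select * from t1 left join t2 on t2.a=t1.a where t2.b = 5;

  -- this WHERE accepts rows with t2.b IS NULL, so NULL-complemented rows may survive
  -- and the LEFT JOIN has to stay an outer join
  select * from t1 left join t2 on t2.a=t1.a where t2.b is null or t1.b = 5;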

Constraints on join order

Outer tables go first
    Any outer table used in the outer join’s ON clause must be before all of the inner tables.
No interleaving
    Tables contained within an outer join must form a continuous sequence in the join order. Interleaving with tables that are outside of the outer join is not allowed.

Table access rules

Now, this requires some explanation. MySQL’s nested-loops join code tries to check parts of the WHERE as soon as possible. For example, when a query

SELECT * FROM
  t1,t2, ...
WHERE
  t1.col1=c1 AND
  t2.col1=t1.col2 AND t2.col2=c3 AND
  ...

is executed using a join order of (t1, t2,…), it proceeds according to this kind of scenario:

Inner join swimlanes

We can see here that the WHERE condition is split into parts that are checked “as early as possible”.

With outer joins it is more complicated. We need to know whether we’ll have to generate a NULL-complemented row combination. We won’t have to if there was a combination of inner tables that matched the ON (but not necessarily the WHERE) clause. The solution is to switch the checking of WHERE parts on and off.

The best way to show it is with an example. Suppose we have a query

 SELECT * FROM ... ot1 LEFT JOIN (it1, it2) ON somecond WHERE ...

and suppose the join order is (ot1, it1, it2, …). The execution will proceed in this manner:

outer join swimlanes

What’s visible there? When we start scanning table it1, we check only the ON condition. We can’t check the WHERE: we could discard an it1 row that is the only row that will match the ON condition, think there will be no matches, and erroneously generate the NULL-complemented row.

After we’ve found the match for the ON condition, we go back and check all parts of the WHERE we did not check because of the above mentioned reason.

After that, the execution proceeds as if this was an inner join, with ON merged into the WHERE clause.

The diagram also shows why we can’t use parts of the WHERE clause to create table access methods: there are times when we can’t use parts of the WHERE for filtering. We can always use parts of the ON, though.

Now it should be clear where all Table Access Rules came from.

Subqueries: the new strategy for "NULL IN (SELECT …)"

I hope this is my last post about this topic. It seems we’ve resolved all of the issues and I’ll now describe the user-visible consequences.
To recall, we’re talking about subquery predicates of the form

 (oe1, ..., oeN) [NOT] IN (SELECT ie1, ..., ieN FROM ... )

that are located in a context where it matters whether the predicate’s result is NULL or FALSE. The name “oe” stands for “outer expression”, and “ie” stands for “inner expression”.

MySQL evaluates queries “from outside to inside”, i.e. we first get the values of (oe1, .. oeN) and then we run the subquery and capture the rows it produces. An apparent and very useful optimization is to “inform” the subquery that we’re looking only for rows that have “ie1=oe1”, “ie2=oe2” and so on. This is done by injecting appropriate equalities into subquery’s WHERE (or HAVING) clause. That is,

 (oe1, ..., oeN) [NOT] IN (SELECT ie1, ..., ieN FROM ... )

becomes

  EXISTS (SELECT 1 /* ie1, ..., ieN */
           FROM ... WHERE subquery_where AND
                          oe1=ie1 AND
                          oe2=ie2 AND
                          ... AND
                          oeN=ieN)

However, this conversion is only valid if we ignore possible NULL values. If some of the iek can be NULL, then we need to use oek=iek OR iek IS NULL instead. I’ve covered this case in detail here, in the “NULL problem in the right part” section.

Correct handling of cases where some oek IS NULL requires more radical changes. We’ve just made those changes and here they are:

The new strategy

According to SQL’s interpretation of NULL as “unknown value”,

  NULL IN (non-empty-list-of-some-values) = NULL
  NULL IN () = FALSE

So, when we want to evaluate

  NULL IN (SELECT ie FROM ... )

we need to run the SELECT and see if it will produce any rows. Note that we need to run the original SELECT, without any injected equalities mentioned in the previous section.

On the other hand, it is absolutely essential to have

  not_null_oe IN (SELECT ie FROM ...)

converted to

  EXISTS (SELECT 1 /* ie1 */ FROM ... WHERE ie1=not_null_oe ...)

If we don’t do this, subqueries will be terribly slow. We’ve solved this “inject or not inject” dilemma by wrapping the injected equalities into triggered conditions. A subquery

  (oe1, ..., oeN) [NOT] IN (SELECT ie1, ..., ieN FROM ... )

is converted into

  EXISTS (SELECT 1 /* ie1, ..., ieN */
           FROM ... WHERE subquery_where AND
                          trigcond(oe1=ie1) AND
                          ... AND
                          trigcond(oeN=ieN)
         )

where each trigcond(X) is a special “magic” function defined as:

  trigcond(X) := X    when the "linked" outer expression oe_i is not NULL
  trigcond(X) := TRUE when the "linked" outer expression oe_i is NULL

Equalities that are wrapped into the trigcond() function are not first-class predicates for the query optimizer. Most optimizations cannot deal with predicates that may be turned on and off at query execution time, so they assume any trigcond(X) to be an unknown function and ignore it. At the moment, triggered equalities can be used by these optimizations:

  1. Ref-optimizer: trigcond(X=Y [OR Y IS NULL]) can be used to construct ref, eq_ref or ref_or_null table accesses.
  2. Index lookup-based subquery execution engines: trigcond(X=Y) can be used to construct unique_subquery or index_subquery access.
  3. Table condition generator: if the subquery is a join of several tables, triggered condition will be checked as soon as possible.

When the optimizer uses a triggered condition to create some kind of index-lookup-based access (#1 and #2 in the above list), it must have a strategy for the case when the condition is turned off. This “Plan B” strategy is always the same: do a full table scan. In EXPLAIN, Plan B shows up as “Full scan on NULL key” in the “Extra” column:

mysql> explain select t1.col1, t1.col1 IN (select t2.key1 from t2 where t2.col2=t1.col2) from t1\G
*************************** 1. row ***************************
           id: 1
  select_type: PRIMARY
        table: t1
        ...
*************************** 2. row ***************************
           id: 2
  select_type: DEPENDENT SUBQUERY
        table: t2
         type: index_subquery
possible_keys: key1
          key: key1
      key_len: 5
          ref: func
         rows: 2
        Extra: Using where; Full scan on NULL key

And if you run EXPLAIN EXTENDED …; SHOW WARNINGS you can see the triggered condition:

*************************** 2. row ***************************
  Level: Note
   Code: 1003
Message: select `test`.`t1`.`col1` AS `col1`,<in_optimizer>(`test`.`t1`.`col1`,<exists>(<
index_lookup>(<cache>(`test`.`t1`.`col1`) in t2 on key1 checking NULL where (`test`.`t2`.
`col2` = `test`.`t1`.`col2`) having trigcond(<is_not_null_test>(`test`.`t2`.`key1`))))) AS
`t1.col1 IN (select t2.key1 from t2 where t2.col2=t1.col2)` from `test`.`t1`

Performance implications

The first apparent implication is that NULL IN (SELECT …) now may cause full table scans (slow!) where it previously did not. This is the price to pay for correct results.
For multi-table subqueries, the execution of NULL IN (SELECT …) is going to be particularly slow because the join optimizer doesn’t optimize for the case when the outer expression is NULL. It assumes that subquery evaluations with NULL on the left side are very rare, even if there are statistics that say otherwise.

On the other hand, if you have a left expression that may be NULL but actually never is, you will not lose any speed.

The practical hints are:

  • A column must be declared as NOT NULL if it really is. This is important for the other parts of the query optimizer too.
  • If you don’t really need the correct NULL/FALSE answer, you can easily avoid the slow execution path: just replace

         oe IN (SELECT ie FROM ...)

    with

         (oe IS NOT NULL) AND (oe IN (SELECT ie FROM ...))

    and NULL IN (SELECT …) will never be evaluated because MySQL stops evaluating AND parts as soon as the answer is clear (see the concrete example right after this list).
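
For instance, with the t1/t2 tables from the EXPLAIN example above, the guarded form could look like this (FALSE AND anything is FALSE, so the subquery is never run for rows where t1.col1 is NULL):

  select
    t1.col1,
    (t1.col1 is not null)
      and t1.col1 in (select t2.key1 from t2 where t2.col2=t1.col2) as subq_value
  from t1;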

The goal of this new strategy was to improve compliance, not speed. However, we intended not to make anything unnecessarily slow. If something became slower for you, please file a bug; perhaps we’ll be able to do something about it.