A proposal for a method of delivering optimizer bug fixes

Working on query optimizer bugs can be a rather frustrating experience. First, as soon as some query doesn’t run as fast as it theoretically could, people consider it a bug. On one hand that’s great: you get a constant stream of user input. On the other hand, you end up with a whole pile of “bugs” that you can’t hope to ever finish.

What’s more frustrating is that even if you manage to create a fix for an optimizer bug, chances are it won’t be allowed into the next GA (currently 5.0.70) or approaching-GA (currently 5.1.30) release (GA is our term for “stable” or “release”).

The reason behind this is that most optimizer bug fixes cause the optimizer to pick different query plans, and there’s no way to guarantee that the fix will be a change for the better for absolutely everyone. Experience shows that it is possible for a query to hit two optimizer bugs/deficiencies at once in such a way that they cancel each other out, and then to run into problems when one of the bugs is fixed. A more common scenario is that the optimizer makes the best choice but simply doesn’t have all the information. The top five unknowns are

  • data distributions
  • correlations between data columns
  • correlations between data value and physical record order
  • highly selective conditions on non-indexed columns
  • hot/cold caches

None of those can be easily checked, so we’re very conservative and have the “no query plan changes in GA versions” rule.

The problem is that our GA releases turn out not to be very frequent, so one may have to wait a looong time before a fix makes it into an official GA release. Bazaar, with its easy creation of publicly accessible branches, has improved the situation a bit, but most users want a binary, and we don’t want to end up having to maintain 2^N branches after N optimizer bug fixes.

The proposal

This aims at query optimizer (“here’s a query which uses a non-optimal plan”-type) bug fixes that affect a small amount of code in a small number of places.

  • We’ll put the fix into both GA and next-after-GA versions.
  • For next-after-GA version, just put the fix in, do not support the old behavior. That’s the only feasible long-term option, we can’t afford to support all behavior we’ve had at some point in the past.
  • For the GA version, make it possible to switch new behavior on and off. The default should be the old behavior (so we only put one “if” into the old execution path. Can one hope that *that* won’t break anything?).

The mechanism to turn on the new behavior will be a server command-line option, something like --with-bugfix=NNNN. It’s possible to turn on multiple bug fixes by specifying the option several times:

mysqld --with-bugfix=37642  --with-bugfix=13356

or, in my.cnf syntax:


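Assuming the proposed option follows the usual option-file naming conventions (a sketch, since --with-bugfix is itself only a proposal), the equivalent entries would look like:

```ini
[mysqld]
with-bugfix=37642
with-bugfix=13356
```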
The code of GA versions doesn’t change much, so it should be tolerable to have, say, twenty “if (bugfix_nnn) {…} else {…}” branches. The mysqld binary should only know the numbers of the bugs it has switchable fixes for. If it is invoked with --with-bugfix=N where N is not a bug number it knows, it should issue a warning, something like this:

[Warning] This version doesn't have ability to switch fix BUG#NNNN , see
[Warning]   http://bugs.mysql.com/check-version.php?binary_version=X.Y.Z&bug=NNNN.
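In code, the option handling could reduce to a lookup over a small per-binary table of switchable fixes. A minimal C sketch of the idea (all names here, such as enable_bugfix, are hypothetical, not actual server code):

```c
#include <stdio.h>

/* Bug numbers this binary has switchable fixes for (hypothetical list) */
static const int known_fixes[] = { 37642, 13356 };
enum { N_KNOWN = sizeof(known_fixes) / sizeof(known_fixes[0]) };
static int fix_enabled[N_KNOWN];

/* Handle one --with-bugfix=bug option: returns 0 if this binary has a
   switchable fix for the bug, -1 (after warning) if the number is unknown */
static int enable_bugfix(int bug)
{
  for (int i = 0; i < N_KNOWN; i++) {
    if (known_fixes[i] == bug) {
      fix_enabled[i] = 1;
      return 0;
    }
  }
  fprintf(stderr,
          "[Warning] This version doesn't have ability to switch fix BUG#%d\n",
          bug);
  return -1;
}
```

Each guarded spot in the old execution path then costs exactly one branch: `if (fix_enabled[i]) { /* new behavior */ } else { /* old, default */ }`.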

Visiting the printed URL takes you to the bugs database, which has information about which fixes appeared in which versions. It can therefore tell you whether your binary already has the fix for BUG#NNNN integrated into the code, or whether you need to upgrade, in which case it can tell you the first version that has the needed fix.

-end of proposal-

Any comments or feedback on this scheme are welcome. Their impact will be greater if they arrive by September 17: we’re having a developer meeting on September 17-24, and I’ll try to get this discussed and some decision made about it.

EXPLAIN CONDITIONS patch available

I’ve made a patch that makes EXPLAIN show the conditions that are attached to various points of the query plan. If you run an EXPLAIN CONDITIONS (or EXPLAIN CONDS) statement, the output will have, besides the usual EXPLAIN result set, a second result set that shows

  • Conditions attached to individual tables
  • Conditions that are applied before/after join buffering
  • Table and index conditions that were pushed down into the storage engine
  • … and so forth (I believe it prints out all possible conditions that are there)

It looks like this:

mysql> explain conds select * from City, Country where City.Country=Country.Code and City.Name like 'C%' and Country.Continent='Asia' and Country.Population>5000000;
| id | select_type | table   | type | possible_keys     | key       | key_len | ref             | rows | Extra                              |
|  1 | SIMPLE      | Country | ref  | PRIMARY,Continent | Continent | 21      | const           |    1 | Using index condition; Using where |
|  1 | SIMPLE      | City    | ref  | Country           | Country   | 3       | db.Country.CODE |   18 | Using where                        |
2 rows in set (0.01 sec)

| id | table   | cond_type       | cond                           |
|  1 | Country | pushed_idx_cond | (Country.Continent = 'Asia')   |
|  1 | Country | where           | (Country.Population > 5000000) |
|  1 | City    | where           | (City.`Name` like 'C%')        |
3 rows in set (0.01 sec)

Unlike EXPLAIN EXTENDED, EXPLAIN CONDS doesn’t use excessive quoting or prefix every column with its database name. The excessive parentheses are still there; I intend to remove them.

How you can get it:

Both the branch and the patch are made against the mysql-6.0 tree. The code has some intersection with new 6.0 features, e.g. it prints pushed index conditions, which exist in 6.0 only, so the patch can’t be automatically applied to MySQL 5.x. The conflicts should be trivial though; the downport should be a matter of removing all parts of the patch that break the compilation. If you need EXPLAIN CONDS in 5.x but can’t manage the downport, please let me know, perhaps I’ll be able to lend a hand.

Code and slides: Batched Key Access feature preview

At the moment, there are two big features in the works in the MySQL optimizer – subquery optimizations and Batched Key Access. While the former is part of MySQL 6.0 and I have written about it here in my blog and elsewhere, the latter was in nearly stealth mode until a couple of weeks ago.

That’s no longer the case:

  • Batched Key Access source code is now published as mysql-6.0-bka-preview tree.
  • Igor Babaev, the author of the feature, gave a talk about Batched Key Access at MySQL User Conference, and the slides are available here.
  • There is now a Batched Key Access page on forge.mysql.com, a semi-official feature homepage which describes how you can get the source, binaries, etc.

If you’re developing any kind of “remote” storage engine or are using MySQL in a data warehousing application where disk IO is the bottleneck, I recommend taking a look.

Slides: New subquery optimizations in MySQL 6.0 (new revision)

I’m now at MySQL Conference and have just finished my New Subquery Optimizations in MySQL 6.0 talk. If you’re eager to get the slides, here is the link to the pdf file and a mirror.

The talk has the same name as this MySQL University session but the slides have been thoroughly re-worked, and also there is new content:

  • More detailed explanation of subquery handling in 4.1/5.x
  • More reliable/justified benchmark numbers
  • Observations about subquery demographics
  • An attempt at comparative analysis of how MySQL’s subquery strategies compare to PostgreSQL’s
    • And a slide #28 that explains why we’re looking at PostgreSQL all the time

Have a nice viewing. And, if you get subqueries that should be fast but aren’t, we’re now actively looking at every subquery performance bug we get and will appreciate any input.

On possibility to have more than 64 indexes

I was asked at Sun Tech Days whether it is possible to use more than 64 indexes in MySQL. I asked my colleague, Ingo Strüwing, and found out the following: yes, it is possible to have more than 64 indexes, and no additional patches are required. You can get a 5.0 source tarball, ./configure it with --with-max-indexes=128, and you’ll get a server that can use up to 128 indexes. One needs to be cautious though:

  • The --with-max-indexes=N option looks as if you could configure the server with an arbitrary maximum number of indexes. That’s not true: if you use N>128, the server will compile but it will not even be able to start.
  • A 128-index build of the server is not widely used, so it can potentially be less stable than the regular build. My experience: I compiled the current 5.0 source with --with-max-indexes=128 and it passed all bundled tests (good), but I don’t see any tests that would use more than 64 indexes (bad). I’d recommend doing some testing before you roll out anything of value on a 128-index build.
  • Work on 128-indexes version is documented here: BUG#10932.
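Put together, the recipe above amounts to something like the following (a sketch of the standard 5.0 autotools build; the tarball name is illustrative):

```shell
tar xzf mysql-5.0.XX.tar.gz && cd mysql-5.0.XX
# N must not exceed 128: larger values compile, but the server won't start
./configure --with-max-indexes=128
make
make test   # runs the bundled suite; note it has no >64-index tests
```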

Hope this information was helpful.

Sun Tech Days St. Petersburg

I’m now at Sun Tech Days St Petersburg. MySQL got involved on very short notice, so there are no MySQL sessions or tutorials and we only have a small booth. It’s decently manned though: there are six MySQLers here in total, and we have no fewer than three people at the booth at nearly all times.

We organized some ad hoc sessions. So far, Alexey’s (aka Kaamos) talk about MySQL and DTrace has attracted the most listeners, with Kostja second with his talk about new MySQL 5.1 features.

We get lots of questions on a wide range of topics; the most popular are

  • Is Sun going to change the MySQL license (close the source)? People are pleased to learn that MySQL will remain GPL.
  • How is MySQL development going to co-exist with PostgreSQL development inside Sun?
  • What, exactly, are MySQL’s licensing terms? People are not sure whether they will be able to use the GPL or will need an OEM license, and what the OEM licensing conditions and terms are. It would be nice if the MySQL website had more details about OEM agreement terms.

We get a fair number of technical questions as well. Here’s a quick dump in no particular order:

  • Does MySQL have special optimizations for Sun Niagara processors? What are the settings one should use when running on Niagara?
  • A complaint about poor subquery performance for a query generated by Hibernate. We figured that the issue is addressed in MySQL 6.0 by WL#3740 table pullout optimization.
  • Quite a few people are interested in in-memory transactional databases. Not all of them know about MySQL Cluster.
  • Most people are not aware of the existence of MySQL Forge or MySQL University. People were interested to see slides, talks and other similar media; we’re handing out pieces of paper with MySQL Forge URLs.
  • How does one create a load-balanced and/or fault-tolerant multi-machine MySQL installation? There is MySQL Proxy, there are 3rd-party solutions, and there are also clustered JDBC drivers, but people would like to see packaged or officially supported configurations. It would also be nice to have an overview of the available solutions and their relative properties.
  • MySQL’s command-line client has poor tab completion in comparison to PostgreSQL’s. Indeed, hitting <TAB> in /usr/bin/mysql always completes from the list of tables and columns, while in /usr/bin/psql completion is context-sensitive, e.g. if you type ALTER and hit TAB you’ll get a list of available ALTER continuations.
  • Is there a way to have more than 64 indexes?
  • Does MySQL’s GIS implementation allow one to efficiently select primitives that are contained within (or have a non-empty intersection with) a given axis-aligned rectangle? The WL#3126 GIS: Precise spatial operations source tree needs some user-level documentation.
  • mysqldump and charsets: quite a few people complained about difficulties when exporting and importing databases in cp1251.
  • There needs to be some tool to assist upgrades. Several attendees said they were unable to upgrade their applications: the application stops working when moved to a newer MySQL version and they can’t figure out what went wrong. Ideally, they would like a tool that would just take their source code and make it work with a newer MySQL, but that seems to be an “AI-complete” task. As a realistic option, we might use Proxy to run all queries against both the old and the new server and report the differences.
  • MySQL cannot use index_merge over both fulltext and non-fulltext indexes; that is, it will use only one index (actually the fulltext one) for a query like
    SELECT count(*) FROM tbl
    WHERE match(ft_key) AGAINST ('pizza' IN BOOLEAN MODE) AND non_ft_key='bar'

The conference is definitely worth having a presence at, and it’s nice to have enough people that we don’t need a strict booth watch schedule: there is always some rested force at the booth. Day-wise, the first day was the busiest, starting from the very opening hours. The second day had a slower start, and even now (13:00) the crowd is not as thick. The good side of that is that it’s easier to have longer, uninterrupted discussions with interested attendees.

Correlated semi-join subqueries and PostgreSQL

The work on subquery optimizations goes on, and we’re already curious how the assortment of new 6.0 subquery optimizations compares to what is found in other DBMSes. MySQL’s new strategies are not DBMS theory breakthroughs; similar ideas have been implemented in other DBMSes. But when you take this list of strategies and put them into this kind of optimizer, the result has no direct equivalent in any other DBMS (or at least we believe so).

The first thing we did was take a look at PostgreSQL, as it is easily available and seems to have at least decent subquery handling (or even better than decent; I have not heard many complaints). And the first interesting difference was the handling of correlated subqueries. With the exception of materialization, MySQL’s new subquery strategies do not care whether the subquery is correlated or not.

For example, let’s look at table pullout. First, let’s try an uncorrelated subquery:

mysql> explain select * from tbl1 where col1 in (select primary_key from tbl2 where col2 < 20);
mysql> show warnings\G
Message: select `test2`.`tbl1`.`col1` AS `col1`,`test2`.`tbl1`.`col2` AS `col2` from (`test2`.`tbl2`) join `test2`.`tbl1` 
where ((`test2`.`tbl1`.`col1` = `test2`.`tbl2`.`primary_key`) and (`test2`.`tbl2`.`col2` < 20))

and we see that it has been converted into an inner join (note the join between tbl2 and tbl1 in the rewritten query). Now let’s make the subquery correlated:

mysql> explain select * from tbl1 where col1 in (select primary_key from tbl2 where
    -> col2 < 20 and tbl2.col2=tbl1.col2);
mysql> show warnings\G
Message: select `test2`.`tbl1`.`col1` AS `col1`,`test2`.`tbl1`.`col2` AS `col2` from (`test2`.`tbl2`) join `test2`.`tbl1`
where ((`test2`.`tbl2`.`primary_key` = `test2`.`tbl1`.`col1`) and (`test2`.`tbl2`.`col2` = `test2`.`tbl1`.`col2`) and
 (`test2`.`tbl1`.`col2` < 20))

The query execution plans are different (because of the extra equality), but we see that the subquery was converted into a join in both cases.

Now let’s try PostgreSQL. First, an uncorrelated subquery:

psergey=# explain select * from tbl1 where col1 in (select primary_key from tbl2 where col2 < 20);
   Index Scan using tbl1_col1 on tbl1  (cost=0.00..56.66 rows=600 width=8)
   ->  Index Scan using tbl2_pkey on tbl2  (cost=0.00..503.25 rows=12 width=4)
         Filter: (col2 < 20)

Ok, a decent plan. Now, let’s make the subquery correlated:

psergey=# explain select * from tbl1 where col1 in (select primary_key from tbl2 where
psergey(# col2 < 20 and col2=tbl1.col2);
         Seq Scan on tbl2  (cost=0.00..200.00 rows=1 width=4)
           Filter: ((col2 < 20) AND (col2 = $0))

and we can see that PostgreSQL falls back to the “naive” approach, where the subquery predicate is evaluated in a direct, straightforward fashion and is not used for any optimizations.

So the first impression is that MySQL kinda wins this round. But wait: if you look at the user subquery case analysis here, you’ll see that the vast majority of semi-join subqueries are uncorrelated. I wonder if the PG folks at some point made a conscious decision not to bother optimizing correlated subqueries.

For uncorrelated subqueries, it’s different. Our first impression is that PostgreSQL has a powerful tool – hash join and its subquery variants – which it uses to get decent performance in a wide variety of cases. MySQL does not have hash join (hi Timour? Kaamos?), but the new subquery optimizations can behave kinda-similarly. It is not clear yet whether they’ll be a good substitute for subquery hash join. I intend to post the analysis here once I figure it out. Stay tuned.
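The hash join PostgreSQL relies on is conceptually simple: build an in-memory hash table over one join input, then probe it with the other, so each input is scanned only once. A toy C sketch of the idea (names and sizes are illustrative; this is not either server’s actual code):

```c
/* Toy rows: join R(k, v) with S(k, v) on k */
struct row { int k; int v; };

#define NBUCKETS 16  /* toy-sized power of two */
struct bucket { int used; int k; int v; };

/* Build phase: insert every R row into the hash table (open addressing) */
static void build(struct bucket *ht, const struct row *r, int n)
{
  for (int i = 0; i < n; i++) {
    unsigned h = (unsigned)r[i].k & (NBUCKETS - 1);
    while (ht[h].used)
      h = (h + 1) & (NBUCKETS - 1);
    ht[h].used = 1;
    ht[h].k = r[i].k;
    ht[h].v = r[i].v;
  }
}

/* Probe phase: for each S row, scan its bucket chain for matching R rows;
   returns the number of joined row pairs */
static int probe(const struct bucket *ht, const struct row *s, int n)
{
  int matches = 0;
  for (int i = 0; i < n; i++) {
    unsigned h = (unsigned)s[i].k & (NBUCKETS - 1);
    while (ht[h].used) {
      if (ht[h].k == s[i].k)
        matches++;              /* a joined (R, S) row pair */
      h = (h + 1) & (NBUCKETS - 1);
    }
  }
  return matches;
}
```

A real implementation also spills to disk when the build side doesn’t fit in memory, which is what makes the strategy robust across data sizes.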

Slides: New subquery optimizations in MySQL 6.0

A bunch of (hopefully) self-explanatory slides about the new subquery optimizations in MySQL 6.0 is available here (UPDATE: here’s a working link). The slides are from this MySQL University session; there was an audio stream, but there were some difficulties with it and it is not available now.

If you miss the audio, the next opportunity to hear about this topic is the New Subquery Optimizations in MySQL 6.0 session I’m giving at the MySQL User Conference on April 15th.

If you don’t miss the audio, please check out the mentioned session anyway – we already have more content than is in the slides, and I expect to have even more by the MySQL UC.


Observations about subquery use cases

As I wrote earlier, we (me and Ranger) have done an assessment of the impact of the new 6.0 subquery optimizations. First, we did a search for performance issues in MySQL’s bug database. The raw list is here, and here is a summary chart:

A search for customer issues in MySQL’s issue tracker system produced a similar picture (raw list):

A search in the DBT-{1,2,3,4} open source benchmark suites produced a radically different result, though (raw data is here):

The samples are small, but I think one can already conclude that the benchmarks and the user cases have different kinds of subqueries. The cause of the discrepancy is not clear. Are people’s applications different from what the benchmarks simulate? If yes, what is the benchmark that simulates the apps our users are running?

A dumb grep for subqueries in random open source applications didn’t produce much either. Again, no idea why: either I was looking at the wrong applications, or applications that use subqueries don’t have their SQL statements embedded in the code (so one can’t find them with grep), or something else.

So far the outcome is that it’s nice that the new optimizations [will] capture a good fraction of real-world cases, but there is no benchmark we could run to check or demonstrate this. Well, I’d still prefer this situation over the opposite.

Evgen's CONNECT BY patch is in… PostgreSQL (oops, actually not. more details inside)

Evgen Potemkin is a developer in the query optimizer team at MySQL; we’ve been working together for about 3 years now. So it was a bit unexpected to read in the PostgreSQL news that his (original) CONNECT BY patch had made it into PostgreSQL. No, we don’t have people coding for two DBMSes at the same time (yet?); the patch was developed years before Evgen joined MySQL.

One of the questions raised by the prospect of becoming a part of Sun was whether/how we would handle being part of a company that develops another DBMS as well. I suppose this patch news sets a good start.

It turns out the patch is not in the PostgreSQL tree after all. It is just that people at http://www.postgresql.at (which is NOT just a different name for http://www.postgresql.org; that was my error) merged the patch into an 8.3 version of PostgreSQL and made the result available for download.

Sorry for creating confusion, and thanks to those who pointed this out in the comments.