Added a copy of mainline mysql-5.1 into our Buildbot

And it has failed the testsuite on every single build slave. I’ve filed BUG#45605, BUG#45630 (together with a patch), BUG#45631, and BUG#45632. There is also an rpl.rpl_innodb_bug28430 test failure, which I haven’t reported yet as I don’t have enough details about the build slave.

At the moment our setup works as follows: there is the lp:~maria-captains/maria/mysql-5.1-testing branch, which is our copy of lp:mysql-server. We periodically pull from the main tree into our copy; it’s a manual process. Buildbot watches for pushes to our copy and runs builds/tests after every push. The results are publicly available here.

UPDATE: Sun people say their pushbuild is nearly green (= all tests pass, on nearly all platforms). This is very odd, as our build slaves are nothing special – about half of them are recent Ubuntu installs on the most popular architectures.

Test failures in 5.1, different people making different fixes for the same problem

This is what’s happening at the moment: the 5.1 tree doesn’t pass the tests, both Sun and Monty Program fix that, and the fixes are different. Here is a Valgrind warning which was fixed twice: my fix made the involved mysys function do what its name implies, while Alfranio’s fix changed the replication code not to call that function anymore.

Besides the Valgrind warning, we observe failures for rpl_trigger.test and query_cache_28249.test (if you follow the links you’ll have to grep for the test name; Buildbot has some room for improvement). I get these failures in the maria-5.1-table-elimination tree. The problem is that when the failure is random (the query cache one is) or when I get it after merging from main, I cannot easily tell whether

  • the problem is in my new code
  • the problem is in MariaDB
  • the problem is in the original MySQL
    • and they are not aware, or
    • they are aware and there is no fix yet
    • they are aware and have the fix in some team tree (merges from team trees to main can happen as rarely as once a month)

I think we’ll have to take the main branch and have our Buildbot run tests for it, too. We’d like to add all publicly available trees (when analyzing random test failures, the more runs the better), but our small population of build slaves (volunteers welcome!) will not manage that many test runs.

Changed jobs, now at Monty Program AB

This isn’t news anymore – it has been over a month – but it would be odd not to mention it at all, so here it goes: at the start of May I left Sun Microsystems and joined Monty’s company.

The setup at Monty Program AB is quite similar – we have an IRC channel (#maria on FreeNode), a mailing list, bazaar trees on launchpad, Worklog, and a Buildbot installation. It’s actually more open than at Sun/MySQL. At Sun, everyone is on internal IRC, the external public can only see a subset of Worklog (the biggest problem with it is that it’s not possible to subscribe to changes), and their Buildbot-like system (it’s called PushBuild and looks like this) is not visible to the outside public.

There are actually very good reasons why an external person might want to look at pushbuild. Everyone doing development (if you count Summer of Code students and engine developers, that’s a lot of people) or just trying to use the newest features will at some point want to get the latest source from the bazaar repository. And the problem here is that the trees get broken every once in a while, and when you are pulling the sources from launchpad it is nice to know what you’re pulling. You can run the tests yourself, sure, but that takes time. And if you do take the time to run the tests, then when you see a failure you won’t know whether Sun/MySQL is aware of the problem, whether it is repeatable on any computer or you need to report your OS/compiler/configure flags/test run parameters, and so forth.

Getting back to things at Monty Program AB: my first tasks here are Table Elimination and index_merge improvements. I intend to cover them in more detail in a separate post.

Notes from Feature Request Bonanza session at Percona Performance conference

I was taking notes during the “Open Q&A: Feature Request Bonanza” session at the Percona Performance conference. The session started at 9 pm on the last day of the conference, so the room wasn’t as full as it was for other sessions, but still there was an interesting discussion. I’ve missed several requests, but more than 90% of the stuff is there.

DISCLAIMER: People are mentioned when I could both identify them (I was in the first row, which rules out those in the back) and had time to note that, so names below are contact points and not an indication of who was[n’t] there. I was also somewhat tired, so please re-check the statements with their authors if you’re going to draw any far-reaching conclusions from the below:

  • The first request was for partial index support: create and use indexes that only have records that match a certain condition.
  • Pre-allocate space after table creation. Monty: CREATE TABLE statement has MIN_ROWS parameter already, and it’s honored by MyISAM.
  • PeterZ: Besides allocation of space at table creation, it would be nice if it was possible to allocate table’s space in extents. Monty: This is possible in Maria.
  • Somebody requests to specify fill factor for InnoDB pages. Domas: Technically it’s there, can be changed in gdb.
  • X: I want to specify the parameter for each table and index. Monty: MariaDB has support for name=value table parameters. They are passed to engine.
  • Monty: The engine should mark which options it recognized and warnings should be issued for unrecognized options.
  • Jeremy Cole requests online DDL. Monty: Maria has lazy add/remove column.
  • PeterZ asks about online ALTER TABLE. There’s a discussion about instant vs. online/background ALTER TABLE and what kind of operations can be performed instantly or online.
  • Domas would like to see online OPTIMIZE. It should be a background process which one can start/stop or set to work at some limited rate, so it can run without much impact on other activity. It should also be possible to set it to behave more aggressively.
  • PeterZ asks what OPTIMIZE should do for SSD drives. Somebody answers that SSD drives have high cost per megabyte of storage, so OPTIMIZE should reclaim wasted space.
  • Antony Curtis requests columns to have default values which are functions, without the use of triggers. Monty: this will require changes in the .frm file format. They want MariaDB to stay compatible.
  • A call for new datatypes anyone would like to see added:
    • Ryan: column encryption
    • Plug-in abstract datatypes
    • Microsecond timestamp (Monty wants to add this)
    • PeterZ: blob/text data compression

    then there’s a discussion on where compression should be handled – inside storage engine or at connector level, or somewhere inside SQL runtime, etc. There are different opinions.

  • Somebody asks for more comprehensive features in general and a comprehensive set of DTrace probes in particular. There is a counterargument that it’s not possible to have static DTrace probes for every possible case and that one should use dynamic tracing. That requires knowledge of source code though.
    Monty requests a list of missing probes.
  • A request from Baron Schwartz: there are cases when MySQL could do ref access over the columns of some unique index, but uses some non-unique index instead because it covers all the needed columns while the unique index doesn’t. MySQL should use eq_ref access in such cases. He says he has run some benchmarks and there’s a 20% speed difference between ref and eq_ref, regardless of the engine.
  • Susanne asks for DROP CASCADE. Domas doesn’t want anybody to run DROP CASCADE on his servers.
  • Ryan requests that the index_merge optimizer be extended to allow a sort-intersect strategy. At the moment we have just ‘intersect’, which can produce an intersection of rowid-ordered index scans, which means that it handles equality predicates:
    SELECT ... FROM tbl WHERE t.key1='foo' AND t.key2=123
    but not range predicates:
    SELECT ... FROM tbl WHERE t.key1 LIKE 'foo%' AND t.key2 BETWEEN 123 AND 134

  • Alexey Rybak requests Bitmap indexes.
  • He also requests a fully asynchronous client library, one that would allow a client app to run many queries on many servers concurrently. It seems Drizzle has a new client library that does that; it doesn’t support the binary protocol though. Monty intends to wait until Drizzle’s client library stabilizes, then add binary protocol support to it, and then see if it could be used instead of the standard library. Someone states that the new PHP connector already supports asynchronous operation.
  • Somebody asks if there is any way to scale down the RAM footprint of embedded MySQL. He says he has severe RAM (but not disk) constraints. I express doubt that a database would work when the OS has no room for disk cache, but Monty says MyISAM is capable of operating reasonably decently in such settings.
  • Ryan requests InnoDB to have instant (auto-maintained) table checksums.
  • Domas requests “fuzzy replication”. Here I can’t make sense of my notes – it’s something about losing some of the latest transactions but recovering to some consistent state, but I can’t remember how all that relates to replication.
  • Ryan says it’s annoying that InnoDB takes everything offline when it detects corruption. He suggests that InnoDB should take offline only the corrupted table (which is feasible when one is using innodb_file_per_table option — sergeyp).
  • Monty says that Maria will shut down only the corrupted table and will automatically attempt to repair it. The audience wants the same to happen for partitions.
  • Somebody asks for something related to INSERT DELAYED. There’s a reply that that can be achieved in the application with SQL statement queuing.
  • Domas tells a story about Wikipedia having a number of tables with various counters, like the number of pages in categories. They update the counters at the end of the transaction, and in a number of cases the counter update causes a deadlock which otherwise would not occur. All changes to counters are commutative/additive/reversible actions, so it would be nice if the engine [or its extension] understood that and used that knowledge to avoid deadlocks. Antony and Ryan mention they work around this problem by storing counters in non-transactional MyISAM tables.
  • Somebody requests settable limits on how much memory a client can use. Domas mentions that the day before he demonstrated how a half-megabyte query can consume gigs of RAM without using any buffers. BUG#42946 and BUG#27863 are mentioned as other examples of how one could cause excessive memory consumption with seemingly innocent statements. Besides that, queries consume engines’ internal buffer/cache resources, and that is very difficult to account for. Monty says that the reality is that an experienced DBA is able to bring down the server.

That’s it. It will be interesting to get back to this list after a year and see whether any of it got implemented 🙂

Optimizer news: @@optimizer_switch syntax changes and backport

In short, the news is:

  • @@optimizer_switch support was backported into MySQL 5.1
  • The switch syntax was changed from ‘no_optimization_name’ to ‘optimization_name=on|off|default’.
  • Added switches for index_merge, index_merge_intersection, index_merge_union, and index_merge_sort_union optimizations.

The changes will be available in the next releases, that is, MySQL 5.1.34 and 6.0.11.
Now with more details:

New switch names

Until now, the syntax mimicked an enum column or the @@sql_mode variable. One could set the value of @@optimizer_switch to a set of keywords, e.g.

SET @@optimizer_switch='no_semijoin,no_materialization';

Presence of a no_xxx keyword meant that its optimization was disabled; its absence meant it was enabled.

As of the next MySQL 5.1/6.0, the @@optimizer_switch value is a set of on/off flags:

mysql> SELECT @@optimizer_switch;
+-------------------------------------------------------------------------------------------+
| @@optimizer_switch                                                                        |
+-------------------------------------------------------------------------------------------+
| index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on |
+-------------------------------------------------------------------------------------------+

The SET statement accepts a list of commands:
SET [GLOBAL|SESSION] optimizer_switch='command,command,...'

where each command is one of:

default                      reset all optimization settings to their defaults
optimization_name=off        disable the optimization
optimization_name=on         enable the optimization
optimization_name=default    set the optimization to its default state

The order of the commands does not matter (‘default’ will be executed first if present), and setting the same flag twice within one SET command is not allowed. Flags that are not mentioned keep their current values:

mysql> SELECT @@optimizer_switch;
+-------------------------------------------------------------------------------------------+
| @@optimizer_switch                                                                        |
+-------------------------------------------------------------------------------------------+
| index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on |
+-------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> SET optimizer_switch='index_merge_union=off,index_merge_sort_union=off';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @@optimizer_switch;
+---------------------------------------------------------------------------------------------+
| @@optimizer_switch                                                                          |
+---------------------------------------------------------------------------------------------+
| index_merge=on,index_merge_union=off,index_merge_sort_union=off,index_merge_intersection=on |
+---------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
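
For completeness, resetting everything at once looks like this (a quick hypothetical session; afterwards all flags are back at their built-in defaults):

mysql> SET optimizer_switch='default';
Query OK, 0 rows affected (0.00 sec)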

The advantages of the new way over the old one are that

  1. It is now possible to turn a given optimization on or off with a single statement (SET optimizer_switch='malfunctioning_optimization=off') which does not depend on what other optimizer flags exist and what their values are.
  2. One can easily see what optimizer switches are available in the current server.
  3. In contrast to the grand solution of WL#4046, I could code and push this within a reasonable amount of time.

In addition, the mysqld binary got an --optimizer-switch parameter which allows one to set optimizer_switch at server startup or in the my.cnf file.
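
For example, assuming the startup option takes the same value syntax as the SET statement, disabling one index_merge algorithm at startup would presumably look like:

mysqld --optimizer-switch='index_merge_intersection=off'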

The backport

This is actually the reason for making all these changes. It turns out that the optimizer can make a wrong choice when considering whether to use the index_merge optimization. This can happen for both valid (unknown data correlations) and not-so-valid (mismatch between the cost model and reality) reasons. A fix for either of these problems would be too intrusive to put into the GA version (betas and major releases were invented for a reason), and we also just don’t have one yet. So we’ve decided to provide at least some resolution for those for whom index_merge made things worse, and introduced the following switches:

@@optimizer_switch flags in MySQL 5.1
index_merge                 turns all index_merge optimizations on/off
index_merge_union,
index_merge_sort_union,
index_merge_intersection    turn the individual index_merge algorithms on/off (names as in the documentation)
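
So, for example, a user who finds that the intersection plans made things worse can switch off just that algorithm for all connections, along these lines (a hypothetical remedy; the other index_merge algorithms stay enabled):

SET GLOBAL optimizer_switch='index_merge_intersection=off';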

MySQL 6.0 has the above switches and also subquery optimization switches:

@@optimizer_switch flags in MySQL 6.0
semijoin          turns all semi-join optimizations on/off
materialization   turns materialization on/off (including semi-join materialization)
loosescan         turns the semi-join LooseScan strategy on/off (not to be confused with GROUP BY’s LooseScan)
firstmatch        turns the semi-join FirstMatch strategy on/off
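
As with the index_merge flags, a session that is hurt by a particular subquery strategy can steer the optimizer away from it; a hypothetical example:

SET SESSION optimizer_switch='loosescan=off,firstmatch=off';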

All future optimizations will be switchable as well. We’ve learned the lesson.

Sun Tech Days St. Petersburg 2009

Sun Tech Days St. Petersburg was on Wednesday and Thursday last week, and we had a MySQL booth there. Unlike last year, we’re a full part of Sun now, so we managed to get a decent-sized booth, arrange for leaflets, and Kostja gave an overall MySQL talk.

Questions at the booth (in no particular order):

  • When will Connector.NET support LINQ Entity framework? (According to Reggie Burnett: it is currently supported in Connector 6.0/Beta, which is expected to be GA soon)
  • Can Connector.Net be used with Mono? (Yes)
  • Is Workbench available for Linux (Yes)
  • When will MySQL support stored procedures? This question seems to have replaced the infamous “When will MySQL support transactions” question.
  • What is the impact of different transaction isolation levels on performance of InnoDB? (no idea. If you ran some experiments please drop a comment)
  • When will MySQL support LIMIT clause inside subqueries, in particular the
    ... WHERE IN (SELECT ... ORDER BY LIMIT n) form? (we would like to add support for this, but no plans ATM. Request taken.)
  • Does MySQL have any limitations on table/database size or number of records in the table? (I gather people are used to having limitations in free versions of SQL Server or Oracle and expect something like that in MySQL)
  • When will fulltext search support searching for different word forms? (That’s a big deal for searching in Russian texts, as the words get different suffixes depending on which grammatical case they are in.) No plans ATM. Perhaps somebody has developed a fulltext parser plugin somewhere?
  • When will InnoDB get efficient support for COUNT(*)?
  • Are there any plans to make MySQL more efficient when handling big blob columns?

We’ve got several complaints that look like bugs:

  • Multi-table DELETE with LEFT JOIN fails to delete records when foreign keys are used. We’ve got a test case, so I’ve filed this as BUG#44207.
  • There’s something wrong with Connector/Java and timezones. I’m not sure we managed to repeat the problem on our laptops; we were promised a bug report.
  • Another person complained about a sharp slowdown in join performance when table sizes exceed 1M rows. According to the reporter, all buffers are adequately sized and EXPLAIN shows that the query plan stays the same. No idea what this could be then, as the SQL layer doesn’t have any hard-coded buffer sizes.

Also there was this Java Duke guy [photo], and we figured that it would be nice to get a MySQL dolphin next year and have the developer with the biggest number of bugs in his code wear it :-).

More MySQL 6.0 news: next subquery optimizations WL pushed

Three days ago I finally managed to push the code for WL#3985 “Subquery optimization: smart choice between semi-join and materialization” into MySQL 6.0. I missed the clone-off date, so it won’t be in the upcoming MySQL 6.0.9 release; the only way to get it before the 6.0.10 release is from the lp:mysql-server/6.0 bazaar repository.

What’s new in the push

Before WL#3985, 6.0’s subquery optimization had these three deficiencies:

  1. For semi-join subqueries (see the cheatsheet for a definition), you had to choose between having the optimizer use materialization or all of the other strategies. The default behavior was not to use materialization; you could only get it by setting a server variable that disables all the other strategies.
  2. The choice among the other strategies (FirstMatch, DuplicateWeedout, LooseScan) wasn’t very intelligent: roughly speaking, the optimizer would first pick a join order as if there were only inner joins, then remember that some of them are actually semi-joins, and try to find how it could resolve the semi-joins with the picked join order.
  3. Materialization only worked in the outer-to-inner fashion. That is, given a query like
    select * from people where name in (select owner from aircraft)
    it would still scan the people table and make lookups into a temporary table of aircraft owners. It was not possible to make it scan the temptable of aircraft owners and make lookups into people.
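
For illustration, the missing inner-to-outer strategy corresponds roughly to the following hypothetical manual rewrite (owners is a made-up name), which drives the join from the materialized subquery:

-- materialize the subquery once...
CREATE TEMPORARY TABLE owners AS SELECT DISTINCT owner FROM aircraft;
-- ...then scan the temptable and make lookups into people
SELECT people.* FROM owners, people WHERE people.name = owners.owner;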

WL#3985 fully addresses #1 and #2, and partially addresses #3. That is, now

  • Semi-join subqueries can use Materialization in an inner-to-outer fashion
  • The join optimizer is aware of the existence of semi-joins and makes a fully automatic, cost-based choice between FirstMatch, DuplicateWeedout, LooseScan, and the inner-to-outer and outer-to-inner variants of Materialization.

This is expected to be a considerable improvement. The most common class of subqueries,
SELECT ... WHERE expr IN (SELECT ... w/o GROUPing/UNIONs/etc) AND ...
is now covered by a reasonably complete set of execution strategies and the optimizer is expected to have the capability to choose a good strategy for every case.

Possible gotchas, and we’re looking for input

I can’t state that the subquery optimizer does have the capability to pick a good plan, because we haven’t done many experiments with the subquery cost model yet. We intend to do some benchmarking, but we will also very much appreciate any input on how the subquery optimizer behaves on real-world queries. The code should be reasonably stable now – there are only three known problems, all of which are fairly uncommon edge cases:

  • LEFT JOINs. You may get wrong query results when the subquery or the parent select uses left joins.
  • “Grandparent” correlation. A query with a semi-join child subquery which has a semi-join grandchild subquery which refers to a column in the top-level select may produce wrong query plans/results under certain circumstances.
  • Different datatypes. You may get wrong query results for queries that have col1 IN (SELECT col2) where col1 and col2 are of different types (which should not happen too often in practice).
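
To illustrate the last item, here is a hypothetical query of the problematic shape (t1, t2 and the columns are made-up names), with t1.int_col an INT and t2.char_col a VARCHAR:

SELECT * FROM t1 WHERE t1.int_col IN (SELECT t2.char_col FROM t2);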

If you have subqueries with LEFT JOINs, please let us know as well, because so far all the LEFT JOIN+subquery cases we have were generated by the random query generator. Certain properties of the MySQL codebase make it difficult to make outer joins work with semi-joins, and if we don’t get any real-world LEFT JOIN examples, chances are we will disable subquery optimizations when there’s a LEFT JOIN in the parent select, or in the subquery, or in either.

MySQL 6.0 news: Batched Key Access is in

OK, this isn’t very timely reporting, but about two weeks ago the Batched Key Access feature was pushed into MySQL 6.0. You can get it from the bazaar repo now (bzr branch lp:mysql-server/6.0), or wait several more weeks until MySQL 6.0.9 is released and get it from there.

Batched Key Access in a nutshell

BKA is about accessing tables in batches when running nested loop joins. The benefits of batching table accesses are that

  • “Remote” engines save on number of roundtrips
  • Disk-based engines do reads in disk order instead of randomly probing the table, which is easier on the disk cache and takes advantage of prefetching
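
To make this concrete, consider a hypothetical join (t1, t2 and the columns are made-up names) where every qualifying row of t1 requires an index lookup into t2:

-- without batching, the nested-loop join probes t2's index once per t1 row,
-- in effectively random order; with BKA, a buffer of t1 rows is accumulated
-- and the t2 lookups are submitted together, ordered to suit the engine
SELECT * FROM t1, t2 WHERE t2.key_col = t1.col AND t1.other_col < 100;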

Batched Key Access only works if the storage engine in use supports it. At the moment there is support for the MyISAM, InnoDB, Maria, and Falcon engines (these are disk-based), and for NDB (this one is remote).

Documentation

At the moment there’s no manual chapter yet. There is a short introduction on the Batched_Key_Access page on the forge, and there are MySQL Conference 2008 session slides. The slides cover some benchmarking and give an idea of what kind of queries and datasets you need to get speedups with MyISAM/InnoDB. We’ve seen great speedups with NDB as well but haven’t published anything so far.

Observation

With Batched Key Access and condition pushdown, it is now feasible to create a remote table engine with decent performance. We do have a remote engine, ha_federated, but it supports neither BKA nor condition pushdown, and it is death by latency if you have queries that do not match the

SELECT * FROM table WHERE primary_key=const

pattern. I have a strong temptation to code a performance version of ha_federated myself, but I have to resist it, as there is subquery optimization work to be finished and optimizer “bugs” to be addressed.
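
For contrast, here is a hypothetical query shape that makes ha_federated suffer (local_t is a local table, federated_t a remote one; all names are made up):

-- the nested-loop join pays one network roundtrip per probed row
-- unless the key lookups are batched
SELECT * FROM local_t, federated_t WHERE federated_t.key_col = local_t.col;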

This is now a rather low-hanging fruit, any takers?

A proposal for method of delivering optimizer bug fixes

Working on query optimizer bugs can be a rather frustrating experience. First, as soon as some query doesn’t run as fast as it theoretically could, people will consider it a bug. On one hand that’s great – you get a constant stream of user input – but on the other hand you end up with a whole pile of “bugs” which you can’t hope to ever finish.

What’s more frustrating is that even if you manage to create a fix for an optimizer bug, there are chances it won’t be allowed into the next GA (currently 5.0.70) or approaching-GA (currently 5.1.30) release (GA is our term for “stable” or “release”).

The reason behind this is that most optimizer bugfixes cause the optimizer to pick different query plans, and there’s no way to guarantee that the fix will be a change for the better for absolutely everyone. Experience shows that it is possible to have a query that hits two optimizer bugs/deficiencies at once in such a way that they cancel each other out, and then to get problems when one of the bugs is fixed. A more common scenario is that the optimizer makes the best choice but just doesn’t have all the info. The top five unknowns are

  • data distributions
  • correlations between data columns
  • correlations between data value and physical record order
  • highly selective conditions on non-indexed columns
  • hot/cold caches

None of those can be easily checked, so we’re very conservative and have the “no query plan changes in GA versions” rule.

The problem is that our GA releases turn out not to be very frequent, and one may have to wait a looong time before the fix makes it into an official GA release. Bazaar, and the ease of creating publicly-accessible branches, has rectified the situation a bit, but most users want a binary, and we also don’t want to end up having to maintain 2^N branches after N optimizer bugfixes.

The proposal

This aims at query optimizer bugfixes (of the “here’s a query which uses a non-optimal plan” type) that affect a small amount of code in a small number of places.

  • We’ll put the fix into both GA and next-after-GA versions.
  • For the next-after-GA version, just put the fix in; do not support the old behavior. That’s the only feasible long-term option – we can’t afford to support every behavior we’ve had at some point in the past.
  • For the GA version, make it possible to switch the new behavior on and off, with the old behavior as the default (so we only put one “if” into the old execution path. Can one hope that *that* won’t break anything?).

The mechanism to turn on the new behavior will be a server command-line option, something like --with-bugfix=NNNN. It’s possible to turn on multiple bugfixes by using the option several times:

mysqld --with-bugfix=37642  --with-bugfix=13356

or, in my.cnf syntax:

[mysqld]
...
with-bugfix=13356
with-bugfix=27432
...

The code of GA versions doesn’t change much, so it should be tolerable to have, say, twenty “if (bugfix_nnn) {…} else {…}” branches. The mysqld binary should only know the numbers of the bugs for which it has switchable fixes. If it is invoked with --with-bugfix=N where N is not a bug number it knows, it should issue a warning, something like this:

[Warning] This version doesn't have the ability to switch fix BUG#NNNN, see
[Warning]   http://bugs.mysql.com/check-version.php?binary_version=X.Y.Z&bug=NNNN.

Visiting the printed URL gets you to the bugs database, which has information about which fixes appeared in which versions, so it can tell you whether your binary already has the fix for BUG#NNNN integrated into the code or you need to upgrade, in which case it can tell you the first version that has the needed bugfix.

-end of proposal-

Any comments or feedback on this scheme are welcome. Their impact will be greater if they arrive by September 17: we’re having a developer meeting on September 17-24, and I’ll try to get this discussed and some decision made about it.

EXPLAIN CONDITIONS patch available

I’ve made a patch that makes EXPLAIN show the conditions that are attached to various points of the query plan. If you run an EXPLAIN CONDITIONS (or EXPLAIN CONDS) statement, the output will have, besides the usual EXPLAIN resultset, a second resultset that shows

  • Conditions attached to individual tables
  • Conditions that are applied before/after join buffering
  • Table and index conditions that were pushed down into the storage engine
  • … and so forth (I believe it prints out all possible conditions that are there)

It looks like this:

mysql> explain conds select * from City, Country where City.Country=Country.Code and City.Name like 'C%' and Country.Continent='Asia' and Country.Population>5000000;
+----+-------------+---------+------+-------------------+-----------+---------+-----------------+------+------------------------------------+
| id | select_type | table   | type | possible_keys     | key       | key_len | ref             | rows | Extra                              |
+----+-------------+---------+------+-------------------+-----------+---------+-----------------+------+------------------------------------+
|  1 | SIMPLE      | Country | ref  | PRIMARY,Continent | Continent | 21      | const           |    1 | Using index condition; Using where |
|  1 | SIMPLE      | City    | ref  | Country           | Country   | 3       | db.Country.CODE |   18 | Using where                        |
+----+-------------+---------+------+-------------------+-----------+---------+-----------------+------+------------------------------------+
2 rows in set (0.01 sec)

+----+---------+-----------------+--------------------------------+
| id | table   | cond_type       | cond                           |
+----+---------+-----------------+--------------------------------+
|  1 | Country | pushed_idx_cond | (Country.Continent = 'Asia')   |
|  1 | Country | where           | (Country.Population > 5000000) |
|  1 | City    | where           | (City.`Name` like 'C%')        |
+----+---------+-----------------+--------------------------------+ 
3 rows in set (0.01 sec)

Unlike EXPLAIN EXTENDED, EXPLAIN CONDS doesn’t use excessive quoting or database prefixes before all columns. Excessive parentheses are still there; I intend to remove them.

How you can get it:

Both the branch and the patch are made against the mysql-6.0 tree. The code has some intersection with new 6.0 features, e.g. it prints pushed index conditions, which are in 6.0 only, so the patch can’t be automatically applied to MySQL 5.x. The conflicts should be trivial though; the downport should be a question of removing all parts of the patch that break the compilation. If you need EXPLAIN CONDS in 5.x but can’t manage the downport, please let me know, perhaps I’ll be able to lend a hand.