Attempt to split @@optimizer_switch

Three years ago, MySQL got the @@optimizer_switch variable. It was introduced in MySQL 5.1.34; that is, we needed it so much that we added it into an already-stable release.

In a nutshell, @@optimizer_switch held a comma-separated list of optimizer controls:

mysql> select @@optimizer_switch;
+------------------------------------------------------------------------------------------+
| @@optimizer_switch                                                                       |
+------------------------------------------------------------------------------------------+
| index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on|
+------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

One could set all of the flags at once:

mysql> set optimizer_switch='index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=off';
Query OK, 0 rows affected (0.01 sec)

or set individual flags:

mysql> set optimizer_switch='index_merge_sort_union=off';
Query OK, 0 rows affected (0.01 sec)

The reasons for putting all optimizer parameters into one variable were:

  • make them distinct from all other settings
  • make it possible to run “SET optimizer_switch=default” and reset the optimizer to its default settings (which are not necessarily all “on”)

The @@optimizer_switch solution allowed all that, and it was very useful in optimizer development and troubleshooting. However, it is becoming a victim of its own success. In the current development version of MariaDB, @@optimizer_switch has 26 flags, and we're thinking of adding at least two more before the MariaDB 5.3 release. It now looks like this:

MariaDB [test]> select @@optimizer_switch\G
*************************** 1. row ***************************
@@optimizer_switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,
index_merge_intersection=on,index_merge_sort_intersection=off,index_condition_pushdown=on,
derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=off,
in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,
subquery_cache=on,mrr=on,mrr_cost_based=off,mrr_sort_keys=on,outer_join_with_cache=off,
semijoin_with_cache=off,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,
optimize_join_buffer_size=on,table_elimination=on
1 row in set (0.00 sec)

It is rather difficult to check the value of a particular flag. Also, there is no easy way to get all of the subquery optimization settings at once (other than knowing the flag names by heart and checking each of them).
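
For example, checking a single flag today means pattern-matching against the whole string; a minimal sketch (the flag name is just an illustration):

-- check one flag by matching the whole @@optimizer_switch string
select @@optimizer_switch like '%index_merge_sort_union=on%' as flag_is_on;

There is no analogous one-liner that would return all subquery-related flags as a group.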

We at MariaDB are discussing switching from the single @@optimizer_switch variable to a set of variables like this:

optimizer.index_merge=on
optimizer.index_merge.union=on
optimizer.index_merge.sort_union=on
optimizer.index_merge.intersection=on
optimizer.index_merge.sort_intersection=off
optimizer.index_condition_pushdown=on
optimizer.join_cache.bka=on
optimizer.join_cache.hashed=on
optimizer.join_cache.incremental=on
optimizer.join_cache.optimize_buffer_size=on
optimizer.join_cache.outer_join=off
optimizer.join_cache.semijoin=off
optimizer.mrr=on
optimizer.mrr.cost_based=off
optimizer.mrr.sort_keys=on
optimizer.semijoin=on
optimizer.semijoin.firstmatch=on
optimizer.semijoin.loosescan=on
optimizer.subquery.cache=on
optimizer.subquery.in_to_exists=on
optimizer.subquery.materialization=off
optimizer.subquery.partial_match_rowid_merge=on
optimizer.subquery.partial_match_table_scan=on
optimizer.table_elimination=on
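
A hypothetical sketch of how the proposed scheme could be used; none of these variables or this dotted syntax exists today, it only illustrates the intent:

-- hypothetical: list all subquery-related optimizer settings at once
show variables like 'optimizer.subquery%';
-- hypothetical: flip a single flag without touching the others
set session optimizer.subquery.materialization='on';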

There are various opinions about how we could make the switch while remaining compatible with the current approach, and about whether we should switch at all, switch only the optimizer_switch flags, or move all server variables to dotted.name.notation, etc.
If you have an opinion, now is a good time to voice it at maria-developers@lists.launchpad.net.

MariaDB at FOSDEM: slides, booth questions, etc.

At FOSDEM 2010, MariaDB was represented by Kristian Nielsen and me. Together with Vlad Kolesnikov, we manned the joint MariaDB & PBXT stand. I gave the main-track talk titled "MariaDB: extra features that make it a better branch of MySQL" (slides), and Kristian gave a more general talk titled "Beyond MySQL GA: patches, storage engines, forks, and pre-releases" (slides) in the MySQL devroom.

There were no other MySQL-related main-track talks, so overall the MySQL ecosystem was represented by one devroom, one stand, and one talk. This is exactly as much as a certain other open-source database had, except that our stand didn't have anything to match the blue plush elephants, pencils, and mugs.

I took notes on the questions asked at the stand; here they are:

  • The most common question: So how do I upgrade from MySQL to MariaDB? People are pleased with the answer.
  • A complaint that it is hard to get data out of a corrupted InnoDB database, or to do anything at all about a corrupted database (as I understood it, the visitor wanted a kind of CHECK/REPAIR TABLE[SPACE] command for InnoDB).
  • A request to allow starting/stopping the slow query log without restarting the server. The idea is that one doesn't want to keep the slow query log turned on at all times, but wants to be able to examine performance problems when they see them, hence the need to start/stop logging without restarting the server. (UPDATE: this is already possible in MySQL/MariaDB; see the comments for details and the sketch after this list.)
  • A complaint about poor performance of stored procedures. Unfortunately, it was not feasible to figure out what exactly was slow: the stored procedure runtime itself, the cursor implementation, the queries that the stored procedure executed, or something else. I was only able to answer that MariaDB doesn't have any enhancements in stored procedure handling at this point.
  • Does MariaDB have any improvements for VIEW handling? The complaint was about poor performance because MySQL "recalculates VIEWs every time". I'm not sure what the real problem is here: either the lack of materialized VIEWs (with indexes on them?), or the known poor optimization in the case where a VIEW is not mergeable.
  • Do we have any plans to support transactional DDL statements? (no)
  • Does MariaDB have any improvements in query cache? (no)
  • Have we in MariaDB fixed a certain MySQL bug (I’ve lost the bug#) with triggers? (no)
  • Is it possible to have indexes on MariaDB's virtual columns? (Only when the column is "stored"; indexing a non-stored virtual column would essentially give functional indexes, and MariaDB doesn't support those at the moment.)

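Regarding the slow query log request above, here is a minimal sketch of the dynamic switches, assuming MySQL/MariaDB 5.1 or later:

-- turn the slow query log on and off without restarting the server
set global slow_query_log=1;
set global long_query_time=1;
-- ... reproduce and examine the performance problem ...
set global slow_query_log=0;
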
Since I was one of the three people manning the stand, I missed most of the MySQL devroom. It would be nice to learn whether there were any interesting discussions there.

Ongoing MariaDB development: filtering and rewrites in mysqlbinlog

In recent weeks the main focus of the MariaDB staff has been the MariaDB 5.1.38 release, but this doesn't mean that we've abandoned everything else for it. There are several non-release projects going on, one of which is adding binlog filtering and markup capabilities.

In order to see how the new features fit in, let's first look at the binlog filtering options that are already present in MySQL (and hence in MariaDB):

Kind                      | Master               | Slave                                   | mysqlbinlog
--------------------------+----------------------+-----------------------------------------+------------------
DB-level filtering        | --binlog-do-db,      | --replicate-do-db=db,                   | --database=dbname
                          | --binlog-ignore-db   | --replicate-ignore-db=db                |
Table-level filtering     |                      | --replicate-do-table=db.tbl,            |
                          |                      | --replicate-ignore-table=db.tbl,        |
                          |                      | --replicate-wild-do-table=pattern,      |
                          |                      | --replicate-wild-ignore-table=pattern   |
Database name rewrite     |                      | --replicate-rewrite-db="from->to"       |
Statement-verb filtering  |                      |                                         |

As long as MySQL had only statement-based replication, one could work around the blank spots in the mysqlbinlog column by post-processing mysqlbinlog output with perl/awk/etc. scripts. With row-based replication, mysqlbinlog's output contains events that look like this:

BINLOG '
vjrjShMBAAAAJwAAAPcCAAAAABIAAAAAAAAAAmQyAAJ0MgABAwAB
vjrjShkBAAAAIgAAABkDAAAQABIAAAAAAAEAAf/+AgAAAA==
'/*!*/;

which practically prevents one from doing any processing on it with perl/awk or similar tools. We received a request to fix this, and set out to add the following filtering capabilities:

Kind                      | Master               | Slave                                   | mysqlbinlog
--------------------------+----------------------+-----------------------------------------+------------------
DB-level filtering        | --binlog-do-db,      | --replicate-do-db=db,                   | --database=dbname
                          | --binlog-ignore-db   | --replicate-ignore-db=db                |
Table-level filtering     |                      | --replicate-do-table=db.tbl,            | MWL#40
                          |                      | --replicate-ignore-table=db.tbl,        |
                          |                      | --replicate-wild-do-table=pattern,      |
                          |                      | --replicate-wild-ignore-table=pattern   |
Database name rewrite     |                      | --replicate-rewrite-db="from->to"       | MWL#36
Statement-verb filtering  |                      |                                         | MWL#41

At the moment, MWL#36 has already been coded and pushed into the mariadb-5.2 tree. The rest of the tasks will hopefully follow.

Some implementation notes

MWL#36's --replicate-rewrite-db has the same limitations as the slave's --replicate-rewrite-db: cross-database updates and CREATE/DROP/ALTER DATABASE statements are not rewritten. We were lucky that the slave's replicate-rewrite-db had these limitations, so we could simply follow them. The thing is, since the slave parses the queries, it is relatively easy for it to walk the parse tree and rewrite the database name wherever necessary (and thus handle all kinds of statements). Had this been implemented on the slave, it would have been very difficult to match in mysqlbinlog, since mysqlbinlog has no SQL parser and so cannot reliably find references to the database name in SQL statement text.

It seems we won't be able to dodge this problem in MWL#40: Table-level filtering, though. According to the manual, replicate-ignore-table "tells the slave thread to not replicate any statement that updates the specified table, even if any other tables might be updated by the same statement" (source). In order to copy this behaviour, mysqlbinlog will need to be able to tell which tables are affected by each statement it processes. As written in the worklog entry, so far we see three reliable ways to do that:

  • Include MySQL’s SQL parser into mysqlbinlog
  • Have the master annotate the statements with easily-parseable information about which tables are updated in the statement.
  • Do not try to solve the problem on the mysqlbinlog side; delay it until the point where we do understand SQL. For example, let the server support an @@ignore_tables session variable and have mysqlbinlog print SET ignore_tables=... as the first statement in its output.

All three approaches have certain drawbacks. The first seems like overkill and will never be able to work for VIEWs. The second will increase the size of binary logs and won't work for un-annotated binary logs produced by legacy servers. The third requires the @@ignore_tables hack in the server and doesn't really do the filtering itself, which might be a nuisance when one does some additional processing on mysqlbinlog's output... I'm still undecided.
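
To make the third option more concrete, here is a purely hypothetical sketch of what the beginning of mysqlbinlog output could look like under that approach; neither the @@ignore_tables variable nor this output format exists at the time of writing, and the table names are made up:

SET @@session.ignore_tables='db1.audit_log,db1.cache'/*!*/;
# all events follow unfiltered; the server would be expected to skip
# statements touching the listed tables when this output is replayed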

MariaDB 5.1 feature: Table Elimination

MariaDB 5.1 beta is almost out, so it’s time to cover some of its features. The biggest optimizer feature is MWL#17 Table Elimination.

The basic idea behind table elimination is that sometimes it is possible to resolve a query without even accessing some of the tables it refers to. One can invent many kinds of such cases, but for Table Elimination we targeted only a certain class of SQL constructs that one ends up writing when querying highly normalized data.

The sample queries were drawn from Anchor Modeling, a database modeling technique that takes normalization to the extreme. The slides on the Anchor Modeling website give an in-depth explanation of the technique and its merits, but the part that's important for table elimination can be shown with an example.

Suppose the database stores information about actors, together with their names, birthdates, and ratings, where ratings can change over time.

According to anchor modeling, each attribute should go into its own table:

-- the 'anchor' table which has only synthetic primary key
create table  ac_anchor(AC_ID int primary key);

-- a table for the 'name' attribute (N stands for some suitable length):
create table ac_name(AC_ID int, ACNAM_name char(N),
                     primary key(AC_ID));

-- a table for 'birthdate' attribute:
create table ac_dob(AC_ID int,
                    ACDOB_birthdate date,
                    primary key(AC_ID));

-- a table for 'rating' attribute, which is historized:
create table ac_rating(AC_ID int,
                       ACRAT_rating int,
                       ACRAT_fromdate date,
                       primary key(AC_ID, ACRAT_fromdate));
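
For readers who want to reproduce the EXPLAIN outputs below, here is a tiny, made-up data set (this assumes char(N) above was given some concrete length, e.g. char(64); the values themselves are immaterial):

insert into ac_anchor values (1),(2);
insert into ac_name values (1,'Gary Oldman'),(2,'Some Actor');
insert into ac_dob values (1,'1958-03-21'),(2,'1970-01-01');
insert into ac_rating values (1,8,'2008-01-01'),(1,9,'2009-01-01'),
                             (2,5,'2009-01-01');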

With this approach it becomes easy to add/change/remove attributes, but that comes at the cost of added complexity in querying the data: in order to answer the simplest, select-star question of displaying actors with their current ratings, one has to write outer joins:

-- Display actors, with their names and current ratings
select
  ac_anchor.AC_ID, ACNAM_Name,  ACDOB_birthdate, ACRAT_rating
from
  ac_anchor
  left join ac_name on ac_anchor.AC_ID=ac_name.AC_ID
  left join ac_dob on ac_anchor.AC_ID=ac_dob.AC_ID
  left join ac_rating on (ac_anchor.AC_ID=ac_rating.AC_ID and
                          ac_rating.ACRAT_fromdate = 
                            (select max(sub.ACRAT_fromdate)
                             from ac_rating sub where sub.AC_ID = ac_rating.AC_ID))

Obviously, one won't want to write such a join every time they need to access an actor's properties, so they'll create a view:

create view actors as
  select  ac_anchor.AC_ID, ACNAM_Name,  ACDOB_birthdate, ACRAT_rating
  from <see the select above>

which will allow one to access the data as if it were stored in the regular way:

select ACRAT_rating from actors where ACNAM_name='Gary Oldman'

And this is where table elimination is needed.

Table elimination

The first thing the optimizer will do is merge the VIEW definition into the query, obtaining:

select ACRAT_rating
from
  ac_anchor
  left join ac_name on ac_anchor.AC_ID=ac_name.AC_ID
  left join ac_dob on ac_anchor.AC_ID=ac_dob.AC_ID
  left join ac_rating on (ac_anchor.AC_ID=ac_rating.AC_ID and
                          ac_rating.ACRAT_fromdate = 
                            (select max(sub.ACRAT_fromdate)
                             from ac_rating sub where sub.AC_ID = ac_rating.AC_ID))
where
 ACNAM_name='Gary Oldman'

Now, it's important to realize that the obtained query has a useless part: the "left join ac_dob on ..." line. Indeed,

  • left join ac_dob on ac_dob.AC_ID=... will produce exactly one record for each row of ac_anchor:
    • primary key(ac_dob.AC_ID) guarantees that there will be at most one match for any value of ac_anchor.AC_ID,
    • and if there is no match, LEFT JOIN will generate a NULL-complemented "row";
  • and we don't care what the matching record is, as table ac_dob is not used anywhere else in the query.

This means that the "… left join ac_dob on …" part can be removed from the query, and this is what the Table Elimination module does. The detection logic is rather smart; for example, it would be able to remove the "… left join ac_rating on …" part as well, together with the subquery (in the above example it won't be removed because ac_rating is used in the select list of the query). The Table Elimination module can also handle nested outer joins and multi-table outer joins.

User interface

One can check that table elimination is working by looking at the output of EXPLAIN [EXTENDED]: the eliminated tables simply do not appear there.


MySQL [test]> explain select ACRAT_rating from actors where ACNAM_name='Gary Oldman';
+----+--------------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
| id | select_type        | table     | type   | possible_keys | key     | key_len | ref                  | rows | Extra       |
+----+--------------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
|  1 | PRIMARY            | ac_anchor | index  | PRIMARY       | PRIMARY | 4       | NULL                 |    2 | Using index |
|  1 | PRIMARY            | ac_name   | eq_ref | PRIMARY       | PRIMARY | 4       | test.ac_anchor.AC_ID |    1 | Using where |
|  1 | PRIMARY            | ac_rating | ref    | PRIMARY       | PRIMARY | 4       | test.ac_anchor.AC_ID |    1 |             |
|  3 | DEPENDENT SUBQUERY | sub       | ref    | PRIMARY       | PRIMARY | 4       | test.ac_rating.AC_ID |    1 | Using index |
+----+--------------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
4 rows in set (0.01 sec)

Note that the ac_dob table is not in the output. Now let's get the birthdate instead:


MySQL [test]> explain select ACDOB_birthdate from actors where ACNAM_name='Gary Oldman';
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
| id | select_type | table     | type   | possible_keys | key     | key_len | ref                  | rows | Extra       |
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
|  1 | PRIMARY     | ac_anchor | index  | PRIMARY       | PRIMARY | 4       | NULL                 |    2 | Using index |
|  1 | PRIMARY     | ac_name   | eq_ref | PRIMARY       | PRIMARY | 4       | test.ac_anchor.AC_ID |    1 | Using where |
|  1 | PRIMARY     | ac_dob    | eq_ref | PRIMARY       | PRIMARY | 4       | test.ac_anchor.AC_ID |    1 |             |
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
3 rows in set (0.01 sec)

The ac_dob table is there, while ac_rating and the subquery are gone. Now, if we just want to check whether an actor with the given name exists:


MySQL [test]> explain select count(*) from actors where ACNAM_name='Gary Oldman';
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
| id | select_type | table     | type   | possible_keys | key     | key_len | ref                  | rows | Extra       |
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
|  1 | PRIMARY     | ac_anchor | index  | PRIMARY       | PRIMARY | 4       | NULL                 |    2 | Using index |
|  1 | PRIMARY     | ac_name   | eq_ref | PRIMARY       | PRIMARY | 4       | test.ac_anchor.AC_ID |    1 | Using where |
+----+-------------+-----------+--------+---------------+---------+---------+----------------------+------+-------------+
2 rows in set (0.01 sec)

then both the ac_dob and ac_rating tables are eliminated.

Removing tables from a query is not expected to make the query slower, and it does not cut off any optimization opportunities, so we’ve made table elimination unconditional and don’t plan on having any kind of query hints for it.

I wanted to add an @@optimizer_switch flag anyway, just in case, but Monty was against it, and eventually we agreed that @@optimizer_switch will have a table_elimination=on|off flag only in debug builds of the server.
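
So, in a debug build one can do something like the following sketch to compare plans (as said above, the flag is simply absent in release builds):

-- debug builds only: compare plans with table elimination off and on
set session optimizer_switch='table_elimination=off';
explain select ACRAT_rating from actors where ACNAM_name='Gary Oldman';
set session optimizer_switch='table_elimination=on';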

Added a copy of mainline mysql-5.1 into our Buildbot

And it has failed the testsuite on every single build slave. I've filed BUG#45605, BUG#45630 (together with a patch), BUG#45631, and BUG#45632. There is also an rpl.rpl_innodb_bug28430 test failure which I didn't report, as I don't yet have enough details about the build slave.

At the moment our setup works as follows: there is the lp:~maria-captains/maria/mysql-5.1-testing branch, which is our copy of lp:mysql-server. We periodically pull from the main tree into our copy; it's a manual process. Buildbot watches for pushes to our copy and runs builds/tests after every push. The results are publicly available here.

UPDATE: Sun people say their pushbuild is nearly green (that is, all tests pass on nearly all platforms). This is very odd, as our build slaves are nothing special: about half of them are recent Ubuntu installs on the most popular architectures.

Test failures in 5.1, different people making different fixes for the same problem

This is what's happening at the moment. The 5.1 tree doesn't pass the tests, both Sun and Monty Program fix that, and the fixes are different. Here is a Valgrind warning that was fixed twice: my fix (correct link) made the involved mysys function do what its name implies, while Alfranio's fix changed the replication code to not call that function anymore.

Besides the Valgrind warning, we observe failures for rpl_trigger.test and query_cache_28249.test (if you follow the links, you'll have to grep for the test name; Buildbot has some room for improvement). I get these failures in the maria-5.1-table-elimination tree. The problem is that when a failure is random (the query cache one is), or when I get it after merging from main, I cannot easily tell whether

  • the problem is in my new code
  • the problem is in MariaDB
  • the problem is in the original MySQL
    • and they are not aware, or
    • they are aware and there is no fix yet
    • they are aware and have the fix in some team tree (merges from team trees to main can happen as rarely as once a month)

I think we'll have to take the main branch and have our Buildbot run tests for it, too. We'd like to add all publicly available trees (when analyzing random test failures, the more runs the better), but our small population of build slaves (volunteers welcome!) will not manage to do that many test runs.

Changed jobs, now at Monty Program AB

This isn't news anymore (it has been over a month), but it would be odd not to mention it at all, so here it goes: at the start of May I left Sun Microsystems and joined Monty's company.

The setup at Monty Program AB is quite similar: we have an IRC channel (#maria on FreeNode), a mailing list, bazaar trees on Launchpad, Worklog, and a Buildbot installation. It's actually more open than at Sun/MySQL. At Sun, everyone is on an internal IRC, the outside public can only see a subset of Worklog (the biggest problem with it being that it's not possible to subscribe to changes), and their Buildbot-like system (it's called PushBuild and looks like this) is not visible to the outside public.

There are actually very good reasons why an external person might want to look at pushbuild. Everyone doing development (if you count Summer of Code students and engine developers, that's a lot of people) or just trying to use the newest features will at some point want to get the latest source from the bazaar repository. The problem is that the trees get broken every once in a while, and when you are pulling the sources from Launchpad it is nice to know what you're pulling. You can run the tests yourself, sure, but that takes time. And even if you do take the time to run the tests and see a failure, you won't know whether Sun/MySQL is aware of the problem, or whether it is repeatable on any computer or you need to report your OS/compiler/configure flags/test run parameters, and so on and so forth.

Getting back to things at Monty Program AB: my first tasks here are Table Elimination and index_merge improvements. I intend to cover them in more detail in a separate post.

Notes from Feature Request Bonanza session at Percona Performance conference

I was taking notes during the "Open Q&A: Feature Request Bonanza" session at the Percona Performance conference. The session started at 9 pm on the last day of the conference, so the room wasn't as full as it was for other sessions, but there was still an interesting discussion. I missed several requests, but more than 90% of the material is below.

DISCLAIMER: People are mentioned when I could both identify them (I was in the first row, which rules out those in the back) and had time to note it down, so the names below are contact points, not an indication of who was[n't] there. I was also somewhat tired, so please re-check the statements with their authors if you're going to draw any far-reaching conclusions from the notes below:

  • The first request was for partial index support: create and use indexes that only have records that match a certain condition.
  • Pre-allocate space after table creation. Monty: the CREATE TABLE statement already has a MIN_ROWS parameter, and it's honored by MyISAM (see the sketch after this list).
  • PeterZ: Besides allocation of space at table creation, it would be nice if it was possible to allocate table’s space in extents. Monty: This is possible in Maria.
  • Somebody requests the ability to specify the fill factor for InnoDB pages. Domas: Technically it's there; it can be changed in gdb.
  • X: I want to specify the parameter for each table and index. Monty: MariaDB has support for name=value table parameters; they are passed to the engine.
  • Monty: The engine should mark which options it recognized and warnings should be issued for unrecognized options.
  • Jeremy Cole requests online DDL. Monty: Maria has lazy add/remove column.
  • PeterZ asks about online ALTER TABLE. There’s a discussion about instant vs. online/background ALTER TABLE and what kind of operations can be performed instantly or online.
  • Domas would like to see online OPTIMIZE. It should be a background process which one can start/stop or set to work at some limited rate so that it runs without much impact on other activity. It should also be possible to make it behave more aggressively.
  • PeterZ asks what OPTIMIZE should do for SSD drives. Somebody answers that SSD drives have high cost per megabyte of storage, so OPTIMIZE should reclaim wasted space.
  • Antony Curtis requests that columns be able to have default values that are functions, without the use of triggers. Monty: this would require changes to the .frm file format, and they want MariaDB to remain compatible.
  • Call for new datatypes anyone would like to see added
    • Ryan: column encryption
    • Plug-in abstract datatypes
    • Microsecond timestamp (Monty wants to add this)
    • PeterZ: blob/text data compression

    then there's a discussion on where compression should be handled: inside the storage engine, at the connector level, somewhere inside the SQL runtime, etc. There are different opinions.

  • Somebody asks for more comprehensive features in general, and a comprehensive set of DTrace probes in particular. There is a counterargument that it's not possible to have static DTrace probes for every possible case and that one should use dynamic tracing, which, however, requires knowledge of the source code.
    Monty requests a list of missing probes.
  • A request from Baron Schwartz: there are cases where MySQL does ref access over the columns of a unique index but uses some non-unique index for it, because that index covers all the needed columns while the unique index doesn't. MySQL should use eq_ref access in such cases. He says he has run some benchmarks and there's a 20% speed difference between ref and eq_ref; the engine doesn't matter.
  • Susanne asks for DROP CASCADE. Domas doesn’t want anybody to run DROP CASCADE on his servers.
  • Ryan requests that the index_merge optimizer be extended to allow a sort-intersect strategy. At the moment we have just 'intersect', which can produce an intersection of rowid-ordered index scans, which means that it handles equality predicates:
    SELECT ... FROM tbl WHERE tbl.key1='foo' AND tbl.key2=123
    but not range predicates:
    SELECT ... FROM tbl WHERE tbl.key1 LIKE 'foo%' AND tbl.key2 BETWEEN 123 AND 134

  • Alexey Rybak requests Bitmap indexes.
  • He also requests a fully asynchronous client library, one that would allow a client app to run many queries on many servers concurrently. It seems Drizzle has a new client library that does that, though it doesn't support the binary protocol. Monty intends to wait until Drizzle's client library has stabilized, then add binary protocol support to it, and then see if it could be used instead of the standard library. Someone states that the new PHP connector already supports asynchronous operation.
  • Somebody asks if there is any way to scale down the RAM footprint of embedded MySQL. He says he has severe RAM (but not disk) constraints. I express doubt that a database would work well when the OS has no room for a disk cache, but Monty says MyISAM is capable of operating reasonably decently in such settings.
  • Ryan requests InnoDB to have instant (auto-maintained) table checksums.
  • Domas requests "fuzzy replication". Here I can't make sense of my notes: it's something about losing some of the latest transactions but recovering to some consistent state, though I can't remember how all that relates to replication.
  • Ryan says it’s annoying that InnoDB takes everything offline when it detects corruption. He suggests that InnoDB should take offline only the corrupted table (which is feasible when one is using innodb_file_per_table option — sergeyp).
  • Monty says that Maria will shut down only the corrupted table and will automatically attempt to repair it. The audience wants the same to happen for partitions.
  • Somebody asks for something related to INSERT DELAYED. There's a reply that it can be achieved in the application with SQL statement queuing.
  • Domas tells a story about Wikipedia having a number of tables with various counters, like the number of pages in categories. They update the counters at the end of the transaction, and in a number of cases the counter update causes a deadlock that otherwise would not occur. All changes to the counters are commutative/additive/reversible actions, so it would be nice if the engine [or an extension of it] understood that and used that knowledge to avoid deadlocks. Antony and Ryan mention that they work around this problem by storing counters in non-transactional MyISAM tables.
  • Somebody requests settable limits on how much memory a client can use. Domas mentions that the day before he had demonstrated how a half-megabyte query can consume gigabytes of RAM without using any buffers. BUG#42946 and BUG#27863 are mentioned as other examples of how one could cause excessive memory consumption with seemingly innocent statements. Besides that, queries consume engines' internal buffer/cache resources, which is very difficult to account for. Monty says that the reality is that an experienced DBA is able to bring down the server.
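
Regarding the pre-allocation item above, here is a sketch of the existing MIN_ROWS/MAX_ROWS table options mentioned in that answer; the table and the numbers are made up, and the exact effect depends on the storage engine:

create table hits (
  id  int not null primary key,
  cnt bigint not null
) engine=MyISAM min_rows=1000000 max_rows=100000000;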

That's it. It will be interesting to come back to this list in a year and see whether any of it got implemented 🙂