How to optimize writes to a MySQL database?
I have a write-intensive application running on EC2. Any thoughts on how to optimize it to handle several thousand concurrent writes to the MySQL database?
Write scaling is a hard problem. Perhaps the secret to write scaling lies in read scaling: cache reads as much as possible, so that the writes get all the throughput.
Having said that, there are a bunch of things one can do:
1) Start with the data model. Design a data model in which you never delete or update rows in a table; the only operation is an insert. Using an Effective Date, Effective Sequence, and Effective Status, you can implement insert, update, and delete semantics with nothing but the INSERT command. This concept is called the append-only model. Check out RethinkDB.
2) Set the concurrent_insert flag to 1. This lets MyISAM keep appending rows at the end of a table while reads are in progress.
3) When you have only Inserts at the tail, you may not need row-level locks. So, use MyISAM (this is not to take anything away from InnoDB, which I will come to later).
4) If all this does not do much, create a replica table using the MEMORY engine. If you have a table called MY_DATA, create a table called MY_DATA_MEM with ENGINE=MEMORY.
5) Redirect all inserts to the MEM table. Create a view that does a UNION ALL of both tables and use that view as your read source.
6) Write a daemon that periodically moves MEM contents to the Main table and deletes from the Mem table. It may be ideal to implement the MOVE operation as a Delete trigger on the Mem table (I am hoping triggers are possible on Memory Engine, not entirely sure).
7) Do not do any deletes or updates on the MEM table (they are slow). Also pay attention to the cardinality of the keys in your table when choosing the index type (HASH vs. B-Tree): hash indexes degrade when a key has many duplicate values, so low cardinality favors B-Tree, while high-cardinality equality lookups favor HASH.
8) Even if all the above does not work, ditch JDBC/ODBC. Move to InnoDB and use the HandlerSocket interface to do direct inserts (Google for Yoshinori-san's HandlerSocket posts on MySQL).
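To make step 1 concrete, here is a minimal append-only sketch. The table and column names are hypothetical, not from the original answer:

```sql
-- Hypothetical append-only table: rows are never updated or deleted in place.
CREATE TABLE MY_DATA (
    id         BIGINT       NOT NULL,
    payload    VARCHAR(255),
    eff_date   DATETIME     NOT NULL,  -- when this version became effective
    eff_seq    INT          NOT NULL,  -- orders versions within the same eff_date
    eff_status CHAR(1)      NOT NULL,  -- 'A' = active, 'D' = logically deleted
    PRIMARY KEY (id, eff_date, eff_seq)
);

-- An "update" is just a new version of the row:
INSERT INTO MY_DATA VALUES (42, 'new value', NOW(), 2, 'A');

-- A "delete" is a new version carrying a deleted status:
INSERT INTO MY_DATA VALUES (42, NULL, NOW(), 3, 'D');
```

Readers then select only the latest version per id and treat status 'D' as absent; the write path itself issues nothing but INSERTs.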
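Steps 2 and 4-6 might look like the following sketch (again, names are hypothetical, and whether the MEMORY engine fits depends on your schema -- it does not support BLOB/TEXT columns):

```sql
-- Step 2: allow MyISAM to append new rows while SELECTs are running.
SET GLOBAL concurrent_insert = 1;

-- Steps 4-5: an in-memory staging replica plus a combined read view.
CREATE TABLE MY_DATA_MEM LIKE MY_DATA;
ALTER TABLE MY_DATA_MEM ENGINE = MEMORY;

CREATE VIEW MY_DATA_ALL AS
    SELECT * FROM MY_DATA
    UNION ALL
    SELECT * FROM MY_DATA_MEM;

-- Step 6: what the flush daemon would run periodically.
-- In production you would bound both statements by a key range (or lock the
-- table) so rows inserted between the copy and the delete are not lost.
INSERT INTO MY_DATA SELECT * FROM MY_DATA_MEM;
DELETE FROM MY_DATA_MEM;
```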
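For the index advice in step 7, MEMORY tables let you pick the index type per key explicitly (hypothetical example):

```sql
-- MEMORY tables support both HASH (the default) and B-Tree indexes.
CREATE TABLE lookups_mem (
    user_id BIGINT,   -- high-cardinality, equality lookups -> HASH
    status  CHAR(1),  -- low-cardinality, many duplicates   -> B-Tree
    INDEX idx_user  USING HASH  (user_id),
    INDEX idx_state USING BTREE (status)
) ENGINE = MEMORY;
```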
I have not used HandlerSocket myself, but the benchmarks are impressive. There is even a Java HandlerSocket project on Google Code.

Hope that helps.