Speed up for a read-only MyISAM table
We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except on these occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize the performance of reads from this table? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take up a large portion of the server's memory, which I don't want.
Hope my question is clear enough. I'm a novice when it comes to DB administration, so any input or suggestions are welcome.
Instead of un- and re-compressing the history table: if you want to access the history through a single table, you can use a MERGE table to combine the compressed, read-only history tables.
Thus, assuming you have an active table and compressed history tables with the same table structure, you could use the following scheme:
The tables:
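A sketch of what the starting tables could look like (the names compressed_month_1, compressed_month_2 and active_month come from this answer; the column definitions are placeholders assumed for illustration, since the original listing is not shown):

-- History tables (identical structure; packed with myisampack once they are closed)
CREATE TABLE compressed_month_1 (
    id INT NOT NULL,
    created DATE NOT NULL,
    data VARCHAR(255)
) ENGINE=MyISAM;

CREATE TABLE compressed_month_2 LIKE compressed_month_1;

-- Current month; the only table that still receives writes
CREATE TABLE active_month LIKE compressed_month_1;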
Create a merge table:
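For example (a sketch using the assumed column definitions above; the MERGE engine requires all underlying MyISAM tables to have identical structure, and INSERT_METHOD=NO keeps the combined table read-only):

CREATE TABLE history (
    id INT NOT NULL,
    created DATE NOT NULL,
    data VARCHAR(255)
) ENGINE=MERGE UNION=(compressed_month_1, compressed_month_2) INSERT_METHOD=NO;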
After a month, compress the active_month table and rename it to compressed_month_3. Now the tables are:
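Roughly (presumably a fresh, empty active_month is created for the new month):

compressed_month_1   -- packed, read-only
compressed_month_2   -- packed, read-only
compressed_month_3   -- the former active_month, now packed
active_month         -- new, empty table for the current month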
and you can update the history table:
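For instance, by pointing the MERGE table at the new set of underlying tables (same assumed names as above):

ALTER TABLE history UNION=(compressed_month_1, compressed_month_2, compressed_month_3);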
Yes, you can compress MyISAM tables.
Here is the documentation for 5.0: http://dev.mysql.com/doc/refman/5.0/en/myisampack.html
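As a rough sketch of that workflow (the data directory path and table name below are made up; make sure the server is not writing to the table while it is being packed):

# Pack the table (myisampack operates on the table's .MYI index file)
myisampack /var/lib/mysql/archive/old_data.MYI
# Rebuild the indexes of the packed table
myisamchk -rq /var/lib/mysql/archive/old_data.MYI
# Let the running server re-open the changed table files
mysql -e "FLUSH TABLES;"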
You could use myisampack to generate fast, compressed, read-only tables. (Not really sure whether that hurts performance if you have to return most of the rows; testing is advisable; there may be a trade-off between compression and disk reads.)
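One way to sanity-check the result (the table name is just an example): compare SHOW TABLE STATUS before and after packing, and time your typical queries against both versions.

SHOW TABLE STATUS LIKE 'old_data';
-- After myisampack, Row_format should read 'Compressed'
-- and Data_length should be noticeably smaller.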
I'd say: certainly also apply the usual optimizations.