Discuz! X: what can be done when the post table grows so large that queries become slow?
pre_forum_post table: 1.5 GB
pre_forum_thread table: 455 MB
Below is the output of show full processlist.
mysql> show full processlist;
+------+------+--------------------+----------+---------+------+------------+--------------------------------------------------------------+
| Id   | User | Host               | db       | Command | Time | State      | Info                                                         |
+------+------+--------------------+----------+---------+------+------------+--------------------------------------------------------------+
|  580 | root | 10.132.xx.xx:42314 | lcbdata2 | Query   |    0 | statistics | SELECT * FROM pre_forum_post_3 WHERE tid=189595 AND first=1  |
| 1606 | root | localhost          | NULL     | Query   |    0 | NULL       | show full processlist                                        |
+------+------+--------------------+----------+---------+------+------------+--------------------------------------------------------------+
2 rows in set (0.01 sec)
mysql> show full processlist;
+------+------+---------------------+----------+---------+------+----------------+----------------------------------------------------------------------------------------------+
| Id   | User | Host                | db       | Command | Time | State          | Info                                                                                           |
+------+------+---------------------+----------+---------+------+----------------+----------------------------------------------------------------------------------------------+
|  580 | root | 10.132.xx.xx:42314  | lcbdata2 | Query   |   31 | Sending data   | SELECT * FROM pre_forum_thread WHERE fid=13 AND displayorder>=0 ORDER BY `tid` DESC LIMIT 20   |
| 1606 | root | localhost           | NULL     | Query   |    0 | NULL           | show full processlist                                                                          |
| 1818 | root | 10.241.xx.xx:51084  | lcbdata2 | Query   |   26 | Locked         | UPDATE LOW_PRIORITY pre_forum_thread SET `views`=`views`+'1' WHERE tid IN ('203395')           |
| 1820 | root | 10.241.xx.xx:51085  | lcbdata2 | Query   |   25 | Locked         | UPDATE LOW_PRIORITY pre_forum_thread SET `views`=`views`+'1' WHERE tid IN ('384661')           |
| 1830 | root | 10.241.xx.xx:51092  | lcbdata2 | Query   |   22 | Locked         | UPDATE LOW_PRIORITY pre_forum_thread SET `views`=`views`+'1' WHERE tid IN ('91359')            |
| 1831 | root | 10.241.xx.xx:51093  | lcbdata2 | Query   |   22 | Locked         | UPDATE LOW_PRIORITY pre_forum_thread SET `views`=`views`+'1' WHERE tid IN ('9125')             |
| 1832 | root | 10.132.38.129:42654 | lcbdata2 | Query   |   21 | Locked         | UPDATE LOW_PRIORITY pre_forum_thread SET `views`=`views`+'1' WHERE tid IN ('101747')           |
| 1847 | root | 10.241.xx.xx:51101  | lcbdata2 | Query   |   14 | Locked         | UPDATE LOW_PRIORITY pre_forum_thread SET `views`=`views`+'1' WHERE tid IN ('98942')            |
| 1848 | root | 10.241.xx.xx:51102  | lcbdata2 | Query   |   13 | Locked         | UPDATE LOW_PRIORITY pre_forum_thread SET `views`=`views`+'1' WHERE tid IN ('376834')           |
| 1849 | root | 10.132.38.129:42671 | lcbdata2 | Query   |   12 | Locked         | UPDATE LOW_PRIORITY pre_forum_thread SET `views`=`views`+'1' WHERE tid IN ('29271')            |
| 1850 | root | 10.241.xx.xx:51103  | lcbdata2 | Query   |   12 | Locked         | UPDATE LOW_PRIORITY pre_forum_thread SET `views`=`views`+'1' WHERE tid IN ('841670')           |
| 1857 | root | 10.132.38.129:42677 | lcbdata2 | Query   |   11 | Locked         | UPDATE LOW_PRIORITY pre_forum_thread SET `views`=`views`+'1' WHERE tid IN ('141532')           |
| 1868 | root | 10.241.xx.xx:51115  | lcbdata2 | Query   |    7 | Sorting result | SELECT * FROM pre_forum_thread WHERE `fid` IN('223','192','40','14','4') AND `displayorder` IN('0','1') ORDER BY displayorder DESC, lastpost DESC LIMIT 50 |
| 1870 | root | 10.132.38.129:42692 | lcbdata2 | Query   |    5 | Sorting result | SELECT * FROM pre_forum_thread WHERE `fid` IN('223','192','40','14','4') AND `displayorder` IN('0','1') ORDER BY displayorder DESC, lastpost DESC LIMIT 50 |
| 1873 | root | 10.241.xx.xx:51118  | lcbdata2 | Query   |    5 | Sending data   | SELECT COUNT(*) FROM pre_forum_thread WHERE fid=624 AND displayorder=-2 AND authorid=1554427   |
| 1877 | root | 10.241.xx.xx:51120  | lcbdata2 | Query   |    2 | Sorting result | SELECT * FROM pre_forum_thread WHERE `fid` IN('457','378','377','433','370','363','372','371','361','369','362','364','365','366','367','368','373','376','375','374','8') AND `displayorder` IN('0','1') ORDER BY displayorder DESC, lastpost DESC LIMIT 50 |
+------+------+---------------------+----------+---------+------+----------------+----------------------------------------------------------------------------------------------+
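Reading the processlist, the short UPDATE ... SET `views`=`views`+1 statements are all stuck in Locked behind the long SELECTs on pre_forum_thread, so the immediate pain is the forum-listing SELECT that filters on fid/displayorder and sorts on lastpost. Before restructuring anything, it is worth checking whether that query has a suitable composite index; the sketch below is only an illustration (the index name is made up, and Discuz may already ship an equivalent index, so inspect SHOW INDEX FROM pre_forum_thread first).

-- See how the slow forum-listing query is executed today
EXPLAIN SELECT * FROM pre_forum_thread
WHERE `fid` IN('223','192','40','14','4') AND `displayorder` IN('0','1')
ORDER BY displayorder DESC, lastpost DESC LIMIT 50;

-- A composite index over the filter and sort columns narrows the rows that
-- have to be read and sorted (a filesort can remain because of the IN lists).
-- The index name is illustrative only.
ALTER TABLE pre_forum_thread
  ADD INDEX idx_fid_order_lastpost (fid, displayorder, lastpost);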
And this is the my.cnf configuration file:
[client]
port = 3306
socket = /tmp/mysql.sock
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-external-locking
key_buffer_size = 384M
max_allowed_packet = 3M
table_open_cache = 512
sort_buffer_size = 1M
net_buffer_length = 16K
read_buffer_size = 1M
read_rnd_buffer_size = 2M
myisam_sort_buffer_size = 16M
thread_cache_size = 8
datadir = /data/mysql
server-id = 1
innodb_data_home_dir = /usr/local/mysql/var
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/var
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
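Two things stand out when this my.cnf is read against the processlist: the Locked states on the `views` updates are what MyISAM table-level locking looks like, and the InnoDB settings (innodb_buffer_pool_size = 16M, 5M log files) are stock defaults that would be far too small for roughly 2 GB of hot data if the tables are, or are converted to, InnoDB. A minimal check-and-convert sketch follows; the schema name comes from the processlist, the rest is a suggestion to try on a copy first, and the buffer pool would need to be raised to a sensible share of available RAM before any conversion.

-- Confirm which storage engine the hot tables use and how big they are
SELECT TABLE_NAME, ENGINE,
       ROUND(DATA_LENGTH/1024/1024)  AS data_mb,
       ROUND(INDEX_LENGTH/1024/1024) AS index_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'lcbdata2'
  AND TABLE_NAME IN ('pre_forum_thread', 'pre_forum_post');

-- If they are MyISAM, every UPDATE of `views` queues behind a table lock.
-- InnoDB uses row-level locks instead (this rebuilds the table, so take a
-- backup and run it in a maintenance window).
ALTER TABLE pre_forum_thread ENGINE=InnoDB;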
Comments (21)
It's 5.1.x.
Is 1.5 GB really that big for one table? How much does it grow per day? Blindly splitting or partitioning it can actually hurt performance.
The daily growth isn't large, but the query volume is heavy.
Relatively speaking it isn't that big, but it depends on the server hardware.
@mark35 Replacing Discuz's MySQL with PostgreSQL reportedly works quite well:
http://www.discuz.net/thread-2990167-1-1.html
@mark35
That means paying someone to solve the problem.
Haha, that approach doesn't solve the problem for good; if I went that route, I might as well resign.
If your MySQL version supports table partitioning, partitioning is the least effort.
As for the feature Discuz ships with, don't bother; it has a pile of problems... looks good but isn't really usable.
Discuz! X does support splitting the post table, but its implementation has plenty of bugs; google how other people have done it.
Logical table splitting?
Logical table splitting.
Just split the table.
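If the post table is the real problem, native partitioning (available since MySQL 5.1, which matches the "5.1.x" answer above) keeps the application code unchanged. The sketch below is illustrative only: MySQL requires every unique key, including the primary key, to contain the partitioning column, so check SHOW CREATE TABLE pre_forum_post first, and the tid boundaries here are invented.

-- Range-partition the post table by thread id (boundaries are examples only)
ALTER TABLE pre_forum_post
PARTITION BY RANGE (tid) (
  PARTITION p0 VALUES LESS THAN (200000),
  PARTITION p1 VALUES LESS THAN (400000),
  PARTITION p2 VALUES LESS THAN (600000),
  PARTITION p3 VALUES LESS THAN MAXVALUE
);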
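For what it's worth, the processlist above already shows a split post table (pre_forum_post_3), which is how Discuz! X's own split-table scheme works: each thread records which pre_forum_post_N table its posts live in, and queries are routed by that id. The sketch below is the manual version of that idea; the posttableid column and the routing are assumptions about the stock schema, so verify against SHOW CREATE TABLE pre_forum_thread before relying on it.

-- Create a new post shard with the same structure as the base table
CREATE TABLE pre_forum_post_4 LIKE pre_forum_post;

-- Move a cold range of threads' posts into the new shard
-- (the tid boundary is invented for illustration)
INSERT INTO pre_forum_post_4 SELECT * FROM pre_forum_post WHERE tid < 200000;
DELETE FROM pre_forum_post WHERE tid < 200000;

-- Point the moved threads at the new shard, assuming a posttableid column
-- as in the stock Discuz! X split-table scheme
UPDATE pre_forum_thread SET posttableid = 4 WHERE tid < 200000;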
Er, it's already on Linux.
Of course, provided the OS is Linux.
Hmm, that's a decent approach, but not the optimal one.
Add RAM; the effect is immediate.
OK, thanks. Not a bad approach.