Why do insert queries occasionally take a very long time to complete?
This is a pretty simple problem. Inserting data into the table normally works fine, except for a few times where the insert query takes a few seconds. (I am not trying to bulk insert data.) So I set up a simulation for the insert process to find out why the insert query occasionally takes more than 2 seconds to run. Joshua suggested that the index file may be being adjusted; I removed the id (primary key field), but the delay still happens.
I have a MyISAM table: daniel_test_insert
(this table starts completely empty):
create table if not exists daniel_test_insert (
id int unsigned auto_increment not null,
value_str varchar(255) not null default '',
value_int int unsigned default 0 not null,
primary key (id)
)
I insert data into it, and sometimes an insert query takes > 2 seconds to run. There are no reads on this table; only writes, in serial, by a single-threaded program.
I ran the exact same query 100,000 times to find out why the query occasionally takes a long time. So far, it appears to be a random occurrence.
This query for example took 4.194 seconds (a very long time for an insert):
Query: INSERT INTO daniel_test_insert SET value_int=12345, value_str='afjdaldjsf aljsdfl ajsdfljadfjalsdj fajd as f' - ran for 4.194 seconds
status | duration | cpu_user | cpu_system | context_voluntary | context_involuntary | page_faults_minor
starting | 0.000042 | 0.000000 | 0.000000 | 0 | 0 | 0
checking permissions | 0.000024 | 0.000000 | 0.000000 | 0 | 0 | 0
Opening tables | 0.000024 | 0.001000 | 0.000000 | 0 | 0 | 0
System lock | 0.000022 | 0.000000 | 0.000000 | 0 | 0 | 0
Table lock | 0.000020 | 0.000000 | 0.000000 | 0 | 0 | 0
init | 0.000029 | 0.000000 | 0.000000 | 1 | 0 | 0
update | 4.067331 | 12.151152 | 5.298194 | 204894 | 18806 | 477995
end | 0.000094 | 0.000000 | 0.000000 | 8 | 0 | 0
query end | 0.000033 | 0.000000 | 0.000000 | 1 | 0 | 0
freeing items | 0.000030 | 0.000000 | 0.000000 | 1 | 0 | 0
closing tables | 0.125736 | 0.278958 | 0.072989 | 4294 | 604 | 2301
logging slow query | 0.000099 | 0.000000 | 0.000000 | 1 | 0 | 0
logging slow query | 0.000102 | 0.000000 | 0.000000 | 7 | 0 | 0
cleaning up | 0.000035 | 0.000000 | 0.000000 | 7 | 0 | 0
(This is an abbreviated version of the SHOW PROFILE output; I threw out the columns that were all zero.)
Now the update stage has an incredible number of context switches and minor page faults. Opened_Tables increases by about 1 per 10 seconds on this database (we are not running out of table_cache space).
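For reference, a minimal sketch of how a per-stage profile like the one above can be captured (assuming the profiling feature is available in your build; it exists from MySQL 5.0.37 on):

-- Enable per-query profiling for this session.
SET profiling = 1;

-- Run the statement under investigation.
INSERT INTO daniel_test_insert SET value_int=12345, value_str='afjdaldjsf aljsdfl ajsdfljadfjalsdj fajd as f';

-- List recent statements with their durations, then break one down by stage.
SHOW PROFILES;
SHOW PROFILE CPU, CONTEXT SWITCHES, PAGE FAULTS FOR QUERY 1;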
Stats:
MySQL 5.0.89
Hardware: 32 Gigs of ram / 8 cores @ 2.66GHz; raid 10 SCSI harddisks (SCSI II???)
I have had the hard drives and raid controller queried: No errors are being reported.
CPUs are about 50% idle.
iostat -x 5 reports less than 10% utilization for the hard disks.
top reports a 1-minute load average of about 10 (normal for our db machine).
Swap space has 156k used (32 gigs of RAM).
I'm at a loss to find out what is causing this performance lag. This does NOT happen on our low-load slaves, only on our high load master. This also happens with memory and innodb tables. Does anyone have any suggestions? (This is a production system, so nothing exotic!)
Comments (11)
I have noticed the same phenomenon on my systems. Queries which normally take a millisecond will suddenly take 1-2 seconds. All of my cases are simple, single table INSERT/UPDATE/REPLACE statements --- not on any SELECTs. No load, locking, or thread build up is evident.
I had suspected that it's due to clearing out dirty pages, flushing changes to disk, or some hidden mutex, but I have yet to narrow it down.
Also Ruled Out
The only other observation I have at this point is derived from the fact I'm running the same db on multiple machines. I have a heavy read application so I'm using an environment with replication -- most of the load is on the slaves. I've noticed that even though there is minimal load on the master, the phenomenon occurs more there. Even though I see no locking issues, maybe it's Innodb/Mysql having trouble with (thread) concurrency? Recall that the updates on the slave will be single threaded.
MySQL Version 5.1.48
Update
I think I have a lead on the problem in my case. I noticed this phenomenon on some of my servers more than on others. Looking at what was different between the servers, and tweaking things around, I was led to the MySQL InnoDB system variable innodb_flush_log_at_trx_commit. I found the doc a bit awkward to read, but innodb_flush_log_at_trx_commit can take the values 1, 2, or 0:
1: the log buffer is written to the log file for every commit, and the log file is flushed to disk for every commit.
2: the log buffer is written to the log file for every commit, and the log file is flushed to disk approximately every 1-2 seconds.
0: the log buffer is written to the log file every second, and the log file is flushed to disk every second.
Effectively, in the order (1, 2, 0), as reported and documented, you're supposed to get increasing performance in trade for increased risk.
Having said that, I found that the servers with innodb_flush_log_at_trx_commit=0 were performing worse (i.e. having 10-100 times more "long updates") than the servers with innodb_flush_log_at_trx_commit=2. Moreover, things immediately improved on the bad instances when I switched it to 2 (note that you can change it on the fly). So, my question is, what is yours set to? Note that I'm not blaming this parameter, but rather highlighting that its context is related to this issue.
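In case it helps to compare, a quick sketch of how to inspect and change this setting on a running server (as noted above, it can be changed on the fly):

-- Check the current value.
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- Switch to 2: the log is still written on every commit, but only flushed
-- to disk roughly once a second, trading some durability for fewer stalls.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;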
I had this problem using INNODB tables (and INNODB indexes are even slower to rewrite than MYISAM's).
I suppose you are doing multiple other queries on some other tables, so the problem would be that MySQL has to handle disk writes in files that get larger and needs to allocate additional space to those files.
If you use MYISAM tables I strongly suggest using the LOAD DATA INFILE command; MYISAM is sensationally fast with this (even with primary keys), the file can be formatted as CSV, and you can specify the column names (or you can put NULL as the value for the autoincrement field).
View MYSQL doc here.
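A sketch of what that could look like for the test table (the file path and CSV layout here are only an illustration, not from the original post):

-- \N (or NULL) in the CSV for the id column lets AUTO_INCREMENT assign the value.
LOAD DATA INFILE '/tmp/daniel_test_insert.csv'
INTO TABLE daniel_test_insert
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(id, value_str, value_int);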
The first tip I would give you is to disable the autocommit functionality and then commit manually.
This benefits performance because the index buffer is flushed to disk only once, after all INSERT statements have completed. Normally, there would be as many index buffer flushes as there are INSERT statements.
But probably the best you can do, if that is possible in your application, is a bulk insert with one single statement.
This is done via vector binding, and it's the fastest way you can go.
But consider this option only if parameter vector binding is possible with the MySQL driver you're using.
Otherwise I would tend to the first possibility and LOCK the table for every 1000 inserts. Don't lock it for 100k inserts, because you'll get a buffer overflow.
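A sketch of both suggestions against the test table (the batch size of 1000 follows the answer; for MyISAM it is the LOCK TABLES variant that defers the index buffer flush):

-- Variant 1: commit manually once per batch instead of once per statement.
SET autocommit = 0;
-- ... run the batch of INSERT statements ...
COMMIT;
SET autocommit = 1;

-- Variant 2: hold a write lock around each batch of ~1000 inserts so the
-- index buffer is flushed once per batch rather than once per INSERT.
LOCK TABLES daniel_test_insert WRITE;
-- ... run ~1000 INSERT statements ...
UNLOCK TABLES;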
Can you create one more table with 400 (not null) columns and run your test again? If the number of slow inserts becomes higher, this could indicate that MySQL is wasting time writing your records. (I don't know how it works internally, but it may be allocating more blocks, or moving something to avoid fragmentation... I really don't know.)
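A sketch of how such a wider test table might look (only the pattern is shown; the actual test would repeat the value columns out to roughly 400, and the names here are made up):

create table if not exists daniel_test_insert_wide (
id int unsigned auto_increment not null,
col001 int unsigned not null default 0,
col002 int unsigned not null default 0,
col003 int unsigned not null default 0,
-- ... repeat the pattern out to col400 ...
primary key (id)
)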
We upgraded to MySQL 5.1 and during this event the Query cache became an issue with a lot of "Freeing items?" thread states. We then removed the query cache.
Either the upgrade to MySQL 5.1 or removing the query cache resolved this issue.
FYI, to future readers.
-daniel
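For future readers, a sketch of what removing the query cache amounts to at runtime (the same settings can also be made permanent in my.cnf):

-- Disable the query cache and release its memory.
SET GLOBAL query_cache_type = 0;
SET GLOBAL query_cache_size = 0;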
We hit exactly the same issue and reported here:
http://bugs.mysql.com/bug.php?id=62381
We are using 5.1.52 and don't have a solution yet. We may need to turn the query cache off to avoid this performance hit.
If you are doing multiple insertions in one go using a for loop, then take a break after every iteration using PHP's sleep("time in seconds") function.
Read this on MyISAM performance:
http://adminlinux.blogspot.com/2010/05/mysql-allocating-memory-for-caches.html
Search for:
'The MyISAM key block size The key block size is important' (minus the single quotes); this could be what's going on. I think they fixed some of these types of issues in 5.1.
Can you check the stats on the disk subsystem? Is the I/O saturated? This sounds like internal DB work flushing stuff to disk/the log.
To check if your disk is behaving badly, and if you're on Windows, you can create a batch cmd file that creates 10,000 files:
Save it in a temp dir, e.g. test.cmd.
Enable command extensions by running CMD with the /E:ON parameter.
Then run your batch and see whether the time between the first and the last output file differs by seconds or by minutes.
On Unix/Linux you can write a similar shell script.
By any chance is there an SSD drive in the server? Some SSD drives suffer from 'stutter', which could cause your symptom.
In any case, I would try to find out if the delay is occurring in MySQL or in the disk subsystem.
What OS is your server, and what file system is the MySQL data on?