MySQL MyISAM disk-bound scaling problem / drive cache

I have the following lookup table:

CREATE TABLE `widgetuser` (
 `widgetuserid` char(40) NOT NULL,
 `userid` int(10) unsigned NOT NULL,
 `created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
 PRIMARY KEY (`widgetuserid`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 DELAY_KEY_WRITE=1;

I have a widgetuser_tmp table with the same structure but no keys, and I fill the widgetuser table from it (4 million rows):

mysql> insert into widgetuser select * from widgetuser_tmp limit 0,4000000;flush tables;
Query OK, 4000000 rows affected (33.14 sec)
Records: 4000000  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (0.91 sec)

While it is writing, the data goes straight to the RAID-1 at 15 MB/s, disk utilization stays below 50%, and we see no reads, since the source table is already in the disk cache:

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00  3839.20    0.00   52.40     0.00    15.20   594.20    12.46  237.75   5.57  29.20
sdb               0.00  3839.00    0.00   52.60     0.00    15.20   591.94    14.50  275.59   7.19  37.80

I insert the next 1 million rows; everything is fine and the wMB/s drops back to 0 right after the flush:

mysql> insert into widgetuser select * from widgetuser_tmp limit 4000000,1000000;flush tables;
Query OK, 1000000 rows affected (10.18 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (0.87 sec)

mysql> insert into widgetuser select * from widgetuser_tmp limit 5000000,1000000;flush tables;
Query OK, 1000000 rows affected (10.21 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (1.02 sec)

mysql> insert into widgetuser select * from widgetuser_tmp limit 6000000,1000000;flush tables;
Query OK, 1000000 rows affected (10.67 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (1.17 sec)

But when I insert the 7-million batch, the query result still looks the same, yet in iostat -mdx sda sdb 5 we suddenly see 100% utilization for 30 seconds:

mysql> insert into widgetuser select * from widgetuser_tmp limit 7000000,1000000;flush tables;
Query OK, 1000000 rows affected (10.73 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (1.21 sec)

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    88.60    0.00  295.60     0.00     1.52    10.53   130.60  435.93   3.38 100.00
sdb               0.00    89.20    0.00  300.80     0.00     1.57    10.68   143.99  483.97   3.32 100.00
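
Putting the two iostat samples side by side (my own arithmetic, assuming the usual 512-byte sectors):

  during the insert:  15.20 MB/s / 52.4 w/s  ≈ 297 KB per write  (avgrq-sz 594 x 512 B)  -> large sequential writes
  during the burst:    1.52 MB/s / 295.6 w/s ≈ 5.3 KB per write  (avgrq-sz 10.5 x 512 B) -> small random writes, which looks like index blocks being flushed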

The data files are not touched after the flush:

-rw-rw---- 1 mysql mysql 1032000000 2009-10-30 12:10 widgetuser.MYD
-rw-rw---- 1 mysql mysql  522777600 2009-10-30 12:11 widgetuser.MYI  

And the table status also seems normal:

+----------------+--------+---------+------------+----------+----------------+-------------+-------------------+--------------+-----------+----------------+---------------------+---------------------+------------+-----------------+----------+-------------------+---------+
| Name           | Engine | Version | Row_format | Rows     | Avg_row_length | Data_length | Max_data_length   | Index_length | Data_free | Auto_increment | Create_time         | Update_time         | Check_time | Collation       | Checksum | Create_options    | Comment |
+----------------+--------+---------+------------+----------+----------------+-------------+-------------------+--------------+-----------+----------------+---------------------+---------------------+------------+-----------------+----------+-------------------+---------+
| widgetuser     | MyISAM |      10 | Fixed      |  8000000 |            129 |  1032000000 | 36310271995674623 |    522777600 |         0 |           NULL | 2009-10-30 11:59:41 | 2009-10-30 12:10:59 | NULL       | utf8_general_ci |     NULL | delay_key_write=1 |         |
+----------------+--------+---------+------------+----------+----------------+-------------+-------------------+--------------+-----------+----------------+---------------------+---------------------+------------+-----------------+----------+-------------------+---------+

And when I continue (while we still have 100% drive utilization), it gets worse very fast:

mysql> insert into widgetuser select * from widgetuser_tmp limit 9000000,1000000;flush tables;
Query OK, 1000000 rows affected (31.93 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (2.34 sec)

mysql> insert into widgetuser select * from widgetuser_tmp limit 10000000,1000000;flush tables;
Query OK, 1000000 rows affected (2 min 39.72 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
Query OK, 0 rows affected (7.82 sec)

The primary key is checked to see whether each new entry is unique. As soon as the index no longer fits into memory (key_buffer_size = 512 MB ≈ 8 million entries), the missing key blocks have to be fetched from the drive (or its cache) for that check. So we should see more reads and slower insert times; we don't see the extra reads because the index is still in the OS disk cache. But my question is: what is suddenly writing so much, where, why, and how can I fix it? Any ideas are appreciated!
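
A rough sanity check of this theory: SHOW TABLE STATUS above reports Index_length = 522,777,600 bytes ≈ 499 MB at 8 million rows, which is already close to the 512 MB (536,870,912 bytes) key buffer, so trouble starting around the 7-8 million mark fits the picture of the index outgrowing the key cache. A minimal sketch for watching this with the standard key cache counters (the comments describe what I would expect to see, not measured values):

mysql> SHOW VARIABLES LIKE 'key_buffer_size';        -- 536870912 = 512 MB
mysql> SHOW VARIABLES LIKE 'key_cache_block_size';   -- 1024 bytes per key cache block by default
mysql> SHOW GLOBAL STATUS LIKE 'Key_blocks_unused';  -- once this reaches ~0, the whole index no longer fits
mysql> SHOW GLOBAL STATUS LIKE 'Key_blocks_used';    -- high-water mark of key cache blocks in use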

Further ideas and insights:

  • since the 1 MB/s of random writes happens after the statement has finished, the uniqueness check has already been passed (a counter-based check is sketched after this list)
  • it is a software RAID-1 with AHCI on; the disks are 93% free and capable of roughly 80 MB/s of writes
  • the machine has 8 GB RAM: 5 GB in the page cache, 600 MB taken by MySQL, 1.7 GB free
  • MySQL 5.1.31-1ubuntu2-log
  • delay_key_write does not change this behavior
  • myisam_sort_buffer_size = 2 GB (not used here, though?)
  • key_buffer_size = 512 MB
  • bin_log is off
  • Linux 2.6.28-15-server #52-Ubuntu SMP Wed Sep 9 11:34:09 UTC 2009 x86_64 GNU/Linux
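
To narrow down what is suddenly writing after the statement has finished, a sketch of a counter-based check (the expected behavior in the comments is my assumption, not a measurement):

mysql> SHOW GLOBAL STATUS LIKE 'Key_writes';              -- physical index block writes to the .MYI so far
mysql> -- run the next 1-million-row insert batch here
mysql> SHOW GLOBAL STATUS LIKE 'Key_blocks_not_flushed';  -- dirty index blocks the key cache is still holding back
mysql> flush tables;                                      -- forces those dirty blocks out to the .MYI
mysql> SHOW GLOBAL STATUS LIKE 'Key_writes';              -- a large jump here would match the small random writes in iostat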

Comments (2)

赠佳期 2024-08-16 15:34:34

It's not entirely clear from your question what behaviour you're expecting or getting. Here are some things you might not know:

  • FLUSH TABLES blows away the MyISAM key cache - it doesn't just write out the dirty blocks, it also discards the clean ones, so every index block must be fetched again before it can be modified (see the preloading sketch after this list)
  • MyISAM uses a block size of 1 KB by default, which is probably smaller than your filesystem's block size; this can create performance problems
  • If you intend any kind of durability (which you presumably don't, since you're using MyISAM), then you should use hardware RAID with a battery-backed cache in the controller.
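
If the clean index blocks that FLUSH TABLES discards are part of the problem, a possible workaround to sketch (untested here) is to preload the index back into the key cache before the next batch; LOAD INDEX INTO CACHE works on MyISAM tables:

mysql> LOAD INDEX INTO CACHE widgetuser;                -- sequential read of the whole .MYI into the key cache
mysql> LOAD INDEX INTO CACHE widgetuser IGNORE LEAVES;  -- alternative: preload only the non-leaf index blocks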

My guess is that either the index no longer fits in the key buffer, or that it has to do many more writes, which trigger reads because they are unbuffered writes that fall off the block-size boundaries (a partial-block write forces a read-modify-write).

Try changing myisam_block_size to 4k or higher and rebuild the table (this is a my.cnf-only option which only takes effect on new tables after a restart).

You can examine the block size of a table with myisamchk -dv.
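
A minimal sketch of that change (the my.cnf location and the data-directory path below are placeholders, not taken from the question):

# in my.cnf -- read only at server startup; applies to indexes built after the restart
[mysqld]
myisam_block_size = 4096

# after the restart, rebuild the index file and verify the per-key Blocksize;
# run myisamchk while the server is stopped or the table is flushed and not in use
mysql> ALTER TABLE widgetuser ENGINE=MyISAM;
$ myisamchk -dv /path/to/datadir/yourdb/widgetuser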

若无相欠,怎会相见 2024-08-16 15:34:34

I'm using MariaDB 5.5.28. If the usage of key_buffer_size is above 90%, it seems that delay_key_write doesn't work,
so enlarge key_buffer_size to 2 GB.
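
For reference, a sketch of that change (2 GB is just the value suggested above):

mysql> SET GLOBAL key_buffer_size = 2147483648;   -- 2 GB; key_buffer_size can be resized at runtime

# and to keep it across restarts, in my.cnf:
[mysqld]
key_buffer_size = 2G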
