Multiple tables or partitioning?
I've seen this question almost answered on a number of threads, but not considering the implications for this specific domain:
I am looking to store time series data in MySQL for a large number of gauges (500 and growing), each of which provides a single float value at 5 minute intervals. At its simplest, the structure would be:
- gauge_id
- timestamp
- value
(where gauge_id and timestamp combine as primary key)
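For concreteness, a minimal sketch of that schema (table and column names are placeholders, not final):

CREATE TABLE gauge_reading (
  gauge_id INT UNSIGNED NOT NULL,
  ts DATETIME NOT NULL,
  value FLOAT NOT NULL,
  PRIMARY KEY (gauge_id, ts)
);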
This means roughly 105,120 rows per gauge per year - all of which needs to be stored for 10 or 20 years. For 1000 gauges we'd be looking at over 100 million records per year.
Data is written in batches: typically the values for each channel are aggregated into an XML file from a remote source and read into the database either hourly or daily. So at most, there are as many inserts per hour as we have gauges.
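Since each batch arrives as a file, a load would plausibly be one multi-row insert per file, something like (values invented):

INSERT INTO gauge_reading (gauge_id, ts, value) VALUES
  (1, '2012-06-01 00:00:00', 3.21),
  (1, '2012-06-01 00:05:00', 3.25),
  (2, '2012-06-01 00:00:00', 9.80);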
Read operations on the data would be per gauge (so no join operations of data between gauges) based on time range. So e.g. to get all values for gauge X between two dates.
Usually, this will also include some form of aggregation/interpolation function - so a user may want to see daily averages, or weekly max, etc for arbitrary ranges.
Again, a relatively low number of reads, but these need a response from MySQL in under 1 second.
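Against the sketch schema above, the two core read patterns would look roughly like this (gauge id and dates invented):

-- Raw values for one gauge between two dates:
SELECT ts, value
FROM gauge_reading
WHERE gauge_id = 42
  AND ts >= '2012-01-01' AND ts < '2012-02-01'
ORDER BY ts;

-- Daily averages over the same range:
SELECT DATE(ts) AS day, AVG(value) AS daily_avg
FROM gauge_reading
WHERE gauge_id = 42
  AND ts >= '2012-01-01' AND ts < '2012-02-01'
GROUP BY DATE(ts);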
At this stage I am leaning toward 1 table per gauge, rather than partitioning one huge table in MySQL on gauge_id.
The logic is this will make sharding easier down the line, simplify backup, and essentially make gauges easier to remove/rebuild if there are data errors at any stage.
The cost is that both write and read operations are a little more complex.
Any thoughts on this?
-------- UPDATE --------
I ran a few tests on my MacBook (2.4 GHz Core 2 Duo, 4 GB of RAM).
Set up the following table:
CREATE TABLE `test` (
`channel_id` int(10) NOT NULL,
`time` datetime NOT NULL,
`value` int(10) NOT NULL,
KEY `channel_id` (`channel_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Populated with a stored procedure:
DELIMITER $$
CREATE PROCEDURE `addTestData`(IN ID INT, IN RECORDS INT)
BEGIN
DECLARE i INT DEFAULT 1;
DECLARE j DATETIME DEFAULT '1970-01-01 00:00:00';
WHILE (i<=RECORDS) DO
INSERT INTO test VALUES(ID,j,999);
SET i=i+1;
SET j= j + INTERVAL 15 MINUTE;
END WHILE;
END $$
DELIMITER ;
which I then called to create the first 1 million records:
call addTestData(1,1000000);
The insert executed in 47 seconds.
SELECT * FROM `test` WHERE channel_id = 1 and YEAR(time) = '1970';
executed in 0.0006 secs
SELECT AVG(value) as value, DATE(time) as date FROM `test`
WHERE channel_id = 1 and YEAR(time) = '1970' group by date;
executed in 4.6 seconds (MAX and SUM functions executed in the same time).
After adding 4 more gauges:
call addTestData(2,1000000);
call addTestData(3,1000000);
call addTestData(4,1000000);
call addTestData(5,1000000);
Each insert executed in 47 seconds, and the table now used 78 megabytes.
I ran the same two queries - and got exactly the same execution time as with 1 million records in the table (4.6 secs for the bigger query).
So, barring the potential use for sharding, backup, and future hardware-driven changes to any individual gauge's table (i.e. multiple readings, change of data interval), there seemed to be no need to split into multiple tables for the foreseeable future. I did not even try running the query with partitions; there did not seem to be any reason to.
--------HOWEVER-------------
Since 4.6 seconds for a query is not ideal, we obviously need to do some optimising.
As a first step I restructured the query:
SELECT
AVG(value) as value,
DATE(time) as date
FROM
(SELECT * FROM test
WHERE channel_id = 1 and YEAR(time) = '1970')
as temp
group by date;
Run on a table with 5 million records (across 5 channel_ids), the query takes 4.3 seconds.
If I run it on a table with 1 channel and 1 million records, it runs in 0.36 seconds!
Scratching my head a little over this...
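One suspect is the YEAR(time) = '1970' predicate: wrapping the column in a function prevents MySQL from using an index range scan on time, so every row for the channel has to be examined. An untested alternative is a composite index plus an explicit date range:

ALTER TABLE test ADD INDEX channel_time (channel_id, time);

SELECT AVG(value) AS value, DATE(time) AS date
FROM test
WHERE channel_id = 1
  AND time >= '1970-01-01' AND time < '1971-01-01'
GROUP BY date;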
Partitioning the table of 5 million records:
ALTER TABLE test PARTITION BY HASH(channel_id) PARTITIONS 5;
The compound query above subsequently completes in 0.35 seconds as well - the same performance gain.
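To confirm the optimizer is actually pruning to a single partition, EXPLAIN PARTITIONS (available since MySQL 5.1) should show only one of the five hash partitions in its partitions column:

EXPLAIN PARTITIONS
SELECT AVG(value) AS value, DATE(time) AS date
FROM test
WHERE channel_id = 1 AND YEAR(time) = '1970'
GROUP BY date;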
-------- ANSWER --------
For me there is nothing in your scenario that justifies partitioning by gauge. If you have an index on gauge_id, performance will not be an issue: MySQL will find the rows related to a certain gauge immediately by using the index, and after that the other operations will be like dealing with a dedicated table for each gauge.
The only situation in which partitioning might be justifiable is if you access very recent gauge data (say the newest 10%) much more often than the old data (the remaining 90%). If that's the case, partitioning into two "recent" and "archive" tables might give you a lot of performance advantage.
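A rough sketch of that split, using the question's schema (the names and cutoff date are assumed; partitioning on the timestamp is allowed here because the primary key includes it, which MySQL requires):

ALTER TABLE gauge_reading
PARTITION BY RANGE (TO_DAYS(ts)) (
  PARTITION p_archive VALUES LESS THAN (TO_DAYS('2012-01-01')),
  PARTITION p_recent VALUES LESS THAN MAXVALUE
);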
If your operations on the individual tables don't involve an index, then the same operations shouldn't take much longer on the merged table, because MySQL first narrows the results down to the rows of the relevant gauge using the index on gauge_id. If an operation does involve an index, you should make that index a multi-column index on the merged table, starting with gauge_id, e.g.
INDEX( timestamp )
on individual tables should become
INDEX( gauge_id, timestamp )
then in most cases the operation will take the same time as on the individual tables. Also, don't be put off by numbers like '500 million rows'; databases are designed to work with that amount of data. My remarks are mostly based on experience: almost every time I was in your situation and decided to go with individual tables, for one reason or another I ended up merging the tables back into one, and since most of the time that happens when the project has matured, it is a painful process. I have really experienced "relational databases are not designed to be used like that".
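To make the index advice above concrete, on a merged table it would amount to something like this (table, column, and index names are assumed):

ALTER TABLE gauge_reading
  DROP INDEX idx_ts,
  ADD INDEX idx_gauge_ts (gauge_id, ts);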
I would really like to hear others' input on this. By the way, do a lot of testing before going either way; MySQL has a lot of unexpected behaviors.