What are some ways to optimize MySQL for a table with ~388 million records?

Published on 2024-07-12 00:12:33 · 254 characters · 6 views · 0 comments


I am looking at storing some JMX data from JVMs on many servers for about 90 days. This data would be statistics like heap size and thread count. This will mean that one of the tables will have around 388 million records.

From this data I am building some graphs so you can compare the stats retrieved from the Mbeans. This means I will be grabbing some data at an interval using timestamps.

So the real question is: is there any way to optimize the table or query so you can perform these queries in a reasonable amount of time?

Thanks,

Josh


Comments (6)

轮廓§ 2024-07-19 00:12:33


There are several things you can do:

  1. Build your indexes to match the queries you are running. Run EXPLAIN to see the types of queries that are run and make sure that they all use an index where possible.

  2. Partition your table. Partitioning is a technique for splitting a large table into several smaller ones by a specific (aggregate) key. MySQL supports this internally from version 5.1.

  3. If necessary, build summary tables that cache the costlier parts of your queries. Then run your queries against the summary tables. Similarly, temporary in-memory tables can be used to store a simplified view of your table as a pre-processing stage.
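Points 1 and 3 above might look like the following sketch. The table and column names (jmx_stats, server_id, ts, heap_used, thread_count) are hypothetical, not from the question:

```sql
-- Hypothetical raw-stats table with a composite index matching the
-- typical query shape (one server, a time range):
CREATE TABLE jmx_stats (
    id           BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    server_id    INT UNSIGNED    NOT NULL,
    ts           DATETIME        NOT NULL,
    heap_used    BIGINT UNSIGNED NOT NULL,
    thread_count INT UNSIGNED    NOT NULL,
    PRIMARY KEY (id),
    KEY idx_server_ts (server_id, ts)
);

-- Check that the index is actually used: look for "ref" or "range"
-- access and idx_server_ts in the key column of the output.
EXPLAIN SELECT ts, heap_used
FROM jmx_stats
WHERE server_id = 42
  AND ts BETWEEN '2009-06-01 00:00:00' AND '2009-06-02 00:00:00';
```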

放血 2024-07-19 00:12:33


3 suggestions:

  1. index
  2. index
  3. index

p.s. for timestamps you may run into performance issues -- depending on how MySQL handles DATETIME and TIMESTAMP internally, it may be better to store timestamps as integers. (# secs since 1970 or whatever)
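If you do go the integer-timestamp route, a sketch (column names are made up) might be:

```sql
-- Seconds since the epoch in an INT UNSIGNED instead of DATETIME/TIMESTAMP:
CREATE TABLE jmx_stats (
    server_id INT UNSIGNED    NOT NULL,
    ts_epoch  INT UNSIGNED    NOT NULL,  -- e.g. UNIX_TIMESTAMP() at insert time
    heap_used BIGINT UNSIGNED NOT NULL,
    KEY idx_server_ts (server_id, ts_epoch)
);

-- Range scans become plain integer comparisons:
SELECT ts_epoch, heap_used
FROM jmx_stats
WHERE server_id = 42
  AND ts_epoch BETWEEN UNIX_TIMESTAMP('2009-06-01 00:00:00')
                   AND UNIX_TIMESTAMP('2009-06-02 00:00:00');
```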

美男兮 2024-07-19 00:12:33


Well, for a start, I would suggest you use "offline" processing to produce 'graph ready' data (for most of the common cases) rather than trying to query the raw data on demand.

驱逐舰岛风号 2024-07-19 00:12:33


If you are using MySQL 5.1 you can use the new features, but be warned that they contain a lot of bugs.

First you should use indexes.
If this is not enough, you can try splitting the table using partitioning.

If that also doesn't work, you can also try load balancing.
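The partitioning suggestion could be sketched like this (MySQL 5.1+ syntax; table and column names are illustrative). Note that in 5.1 every unique key, including the primary key, must include the partitioning column:

```sql
CREATE TABLE jmx_stats (
    id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    server_id INT UNSIGNED    NOT NULL,
    ts        DATETIME        NOT NULL,
    heap_used BIGINT UNSIGNED NOT NULL,
    PRIMARY KEY (id, ts)              -- must include ts, the partitioning column
)
PARTITION BY RANGE (TO_DAYS(ts)) (
    PARTITION p20090601 VALUES LESS THAN (TO_DAYS('2009-06-02')),
    PARTITION p20090602 VALUES LESS THAN (TO_DAYS('2009-06-03')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
);
```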

莫言歌 2024-07-19 00:12:33


A few suggestions.

You're probably going to run aggregate queries on this stuff, so after (or while) you load the data into your tables, you should pre-aggregate it: for instance, pre-compute totals by hour, by user, or by week (whatever fits; you get the idea) and store them in cache tables that you use for your reporting graphs. If you can shrink your dataset by an order of magnitude, good for you!
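One way to sketch that pre-aggregation (table and column names are hypothetical, not from the question):

```sql
-- One row per server per hour instead of one per sample:
CREATE TABLE jmx_stats_hourly (
    server_id   INT UNSIGNED NOT NULL,
    hour_start  DATETIME     NOT NULL,
    avg_heap    DOUBLE       NOT NULL,
    max_threads INT UNSIGNED NOT NULL,
    PRIMARY KEY (server_id, hour_start)
);

-- Run periodically (e.g. from cron) for the hour that just closed:
INSERT INTO jmx_stats_hourly (server_id, hour_start, avg_heap, max_threads)
SELECT server_id,
       DATE_FORMAT(ts, '%Y-%m-%d %H:00:00'),
       AVG(heap_used),
       MAX(thread_count)
FROM jmx_stats
WHERE ts >= '2009-06-01 13:00:00' AND ts < '2009-06-01 14:00:00'
GROUP BY server_id, DATE_FORMAT(ts, '%Y-%m-%d %H:00:00');
```

The reporting graphs then query jmx_stats_hourly, which is orders of magnitude smaller than the raw table.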

This means I will be grabbing some data at an interval using timestamps.

So this means you only use data from the last X days ?

Deleting old data from tables can be horribly slow if you have a few tens of millions of rows to delete; partitioning is great for that (just drop the old partition). It also groups all records from the same time period close together on disk, so it's a lot more cache-efficient.
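If the table is RANGE-partitioned by day (the partition name below is hypothetical), purging expired data becomes a metadata operation rather than a row-by-row delete:

```sql
-- Instead of: DELETE FROM jmx_stats WHERE ts < '2009-04-02';  -- millions of rows, very slow
ALTER TABLE jmx_stats DROP PARTITION p20090401;  -- near-instant
```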

Now if you use MySQL, I strongly suggest using MyISAM tables. You don't get crash safety or transactions, and locking is dumb, but the tables are much smaller than InnoDB's, which means they can fit in RAM, which means much quicker access.

Since big aggregates can involve lots of rather sequential disk IO, a fast IO system like RAID10 (or SSD) is a plus.

Is there any way to optimize the table or query so you can perform these queries
in a reasonable amount of time?

That depends on the table and the queries; I can't give any advice without knowing more.

If you need complicated reporting queries with big aggregates and joins, remember that MySQL does not support any fancy JOINs, or hash aggregates, or anything else useful really. Basically the only thing it can do is a nested-loop index scan, which is good on a cached table and absolutely atrocious in other cases if some random access is involved.

I suggest you test with Postgres. For big aggregates the smarter optimizer does work well.

Example:

CREATE TABLE t (id INTEGER PRIMARY KEY AUTO_INCREMENT, category INT NOT NULL, counter INT NOT NULL) ENGINE=MyISAM;
INSERT INTO t (category, counter) SELECT n%10, n&255 FROM serie;

(serie contains 16M lines with n = 1 .. 16000000)

MySQL    Postgres     
58 s     100s       INSERT
75s      51s        CREATE INDEX on (category,id) (useless)
9.3s     5s         SELECT category, sum(counter) FROM t GROUP BY category;
1.7s     0.5s       SELECT category, sum(counter) FROM t WHERE id>15000000 GROUP BY category;

On a simple query like this pg is about 2-3x faster (the difference would be much larger if complex joins were involved).

子栖 2024-07-19 00:12:33

  1. EXPLAIN Your SELECT Queries
  2. LIMIT 1 When Getting a Unique Row
    SELECT * FROM user WHERE state = 'Alabama' // wrong
    SELECT 1 FROM user WHERE state = 'Alabama' LIMIT 1

  3. Index the Search Fields
    Indexes are not just for the primary keys or the unique keys. If there are any columns in your table that you will search by, you should almost always index them.

  4. Index and Use Same Column Types for Joins
    If your application contains many JOIN queries, you need to make sure that the columns you join by are indexed on both tables. This affects how MySQL internally optimizes the join operation.

  5. Do Not ORDER BY RAND()
    If you really need random rows out of your results, there are much better ways of doing it. Granted it takes additional code, but you will prevent a bottleneck that gets exponentially worse as your data grows. The problem is, MySQL will have to perform RAND() operation (which takes processing power) for every single row in the table before sorting it and giving you just 1 row.

  6. Use ENUM over VARCHAR
    ENUM type columns are very fast and compact. Internally they are stored like TINYINT, yet they can contain and display string values.

  7. Use NOT NULL If You Can
    Unless you have a very specific reason to use a NULL value, you should always set your columns as NOT NULL.

    "NULL columns require additional space in the row to record whether their values are NULL. For MyISAM tables, each NULL column takes one bit extra, rounded up to the nearest byte."

  8. Store IP Addresses as UNSIGNED INT
    In your queries you can use INET_ATON() to convert an IP to an integer, and INET_NTOA() for the reverse. There are also similar functions in PHP called ip2long() and long2ip().
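Tips 6 and 8 above could be sketched like this (the table and its columns are made up for illustration):

```sql
-- Tip 6: ENUM instead of VARCHAR for a small, fixed set of values.
-- Tip 8: IPs stored as UNSIGNED INT via INET_ATON()/INET_NTOA().
CREATE TABLE access_log (
    ip     INT UNSIGNED NOT NULL,
    status ENUM('ok', 'error', 'timeout') NOT NULL
);

INSERT INTO access_log (ip, status)
VALUES (INET_ATON('192.168.0.1'), 'ok');

SELECT INET_NTOA(ip), status FROM access_log;  -- '192.168.0.1', 'ok'
```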
