Identifying problematic queries in PostgreSQL

Posted 2024-10-22 05:07:52


Sometimes a PostgreSQL database I support becomes overloaded. I need to pinpoint the exact queries that are causing the overload. I have generated pgFouine output for the relevant time frame:
http://bit.ly/f2aikx

The report contains this line:
Query peak: 4 queries/s at 2011-03-15 10:41:06, 2011-03-15 10:44:18, 2011-03-15 10:48:25

It is not clear whether the mentioned timestamps are for the start or end of the peak.
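It is also possible to cross-check the per-second counts against the raw PostgreSQL log directly. A minimal sketch, assuming log lines begin with a `%t`-style timestamp (`YYYY-MM-DD HH:MM:SS`); the sample log lines below are hypothetical, not taken from the report:

```shell
# Hypothetical excerpt of a PostgreSQL log; a real log would be read from
# the server's log directory instead.
cat > /tmp/pg_sample.log <<'EOF'
2011-03-15 10:41:06 UTC LOG:  statement: SELECT 1
2011-03-15 10:41:06 UTC LOG:  statement: SELECT 2
2011-03-15 10:41:07 UTC LOG:  statement: SELECT 3
EOF

# Count logged statements per second; the busiest seconds are listed first.
grep 'statement:' /tmp/pg_sample.log \
  | cut -c1-19 \
  | sort | uniq -c | sort -rn | head -3
```

Each output line is a count followed by the second it belongs to, so the timestamps in the "Query peak" row can be read as the individual seconds in which the peak rate occurred.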


Comments (1)

两仪 2024-10-29 05:07:52


I'm not sure what the question is, but I'll take a stab at a few things:

The "Query peak" metric is referring to three separate seconds where you saw a peak throughput of 4 queries per second.

Here's how I would approach pinpointing your problematic queries:

  1. Define "overloaded" for this instance. That will help you determine what is actually causing the problem. Let's assume that overloaded is defined as "slow queries"
  2. Examine slow queries in the pgFouine output. It helpfully groups them in the "Queries that took the most time (N)" section. Looking in there you can also click on "Show Examples" to see a few queries that are giving you grief.
  3. Take a sample of a few of those queries and run an EXPLAIN ANALYZE on them to get actual execution plans.
  4. Look at the other plans running at the same time. These may be causing I/O contention.
  5. Analyze the plans yourself or else use http://explain.depesz.com/ to get analysis of your execution plans. Watch for things like table spools.
  6. Tune queries or adjust PostgreSQL settings accordingly.
  7. Rinse and repeat
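For step 3, a psql session might look like the following sketch; the table, column, and predicate are hypothetical examples, not taken from the report:

```sql
-- Wrap the statement in a transaction so that EXPLAIN ANALYZE on a
-- data-modifying query does not persist its changes.
BEGIN;

EXPLAIN ANALYZE
SELECT o.customer_id, count(*)
FROM orders AS o                                  -- hypothetical table
WHERE o.created_at >= timestamp '2011-03-15 10:41:00'
GROUP BY o.customer_id;

ROLLBACK;
```

The resulting plan shows actual row counts and per-node timings, which is what you would paste into explain.depesz.com in step 5.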

In the long run, I would change PostgreSQL's logging configuration so that only queries running longer than 100ms are logged (pgFouine itself only parses the log). You can do that with the log_min_duration_statement setting in the postgresql.conf file.
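As a sketch, the relevant postgresql.conf fragment might look like this; the 100ms threshold is the value suggested above and should be tuned to your workload:

```ini
# postgresql.conf -- log only statements slower than 100 ms
log_min_duration_statement = 100   # milliseconds; -1 disables, 0 logs everything
```

After changing this, reload the server configuration (for example with pg_ctl reload) for the new threshold to take effect.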
