SQL Server Service Broker: How to determine overhead or maximum performance?

Posted on 2024-10-11 02:24:53


We are trying to determine whether we are using Service Broker appropriately and getting the max performance out of it. We have been tweaking our SB conversations and handling, and have gone from 3000/minute to 8000/minute, but CPU has stayed constant at 100%. Additionally, on some days the SB queue stays empty, but on similar-traffic days the queue can back up by 500k.

The machine is a quad-socket, quad-core box (16 cores), no HT, with 32 GB RAM, 26 GB of it assigned to SQL Server, and AWE enabled.

SQL Server 2008 SP1 (no CUs), Enterprise Edition.
Microsoft SQL Server 2008 (SP1) - 10.0.2531.0 (X64) Mar 29 2009 10:11:52 Copyright (c) 1988-2008 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.1 (Build 7600: )

The messages are inserted into a Service Broker queue; we pull groups of messages off the queue and run them through a CLR procedure, which parses the XML (not a simple parse, alas) and inserts the results into a table. The CLR is considerably faster than our T-SQL code.
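
To make the shape of the processing concrete, here is a minimal sketch of the batched RECEIVE we are describing; dbo.TargetQueue and the batch size are placeholders, not our real objects:

    DECLARE @batch TABLE (
        conversation_handle UNIQUEIDENTIFIER,
        message_type_name   NVARCHAR(256),
        message_body        VARBINARY(MAX)
    );

    -- Block up to 5 seconds waiting for the next available conversation group,
    -- then pull up to 100 of its messages in one shot.
    WAITFOR (
        RECEIVE TOP (100)
            conversation_handle, message_type_name, message_body
        FROM dbo.TargetQueue
        INTO @batch
    ), TIMEOUT 5000;

    -- Each message_body is then handed to the CLR procedure for the XML
    -- parsing and the final insert.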

We have an average of 35 runnable tasks per scheduler.
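
That number comes from a query along these lines (sys.dm_os_schedulers is a standard DMV; the judgment about what counts as high is ours):

    -- Sustained double-digit runnable_tasks_count per scheduler
    -- is a classic sign of CPU pressure.
    SELECT scheduler_id,
           current_tasks_count,
           runnable_tasks_count
    FROM sys.dm_os_schedulers
    WHERE status = 'VISIBLE ONLINE';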

We run nightly stats/index maintenance.

We have set server-wide MAXDOP = 1 to try to help performance.
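
For completeness, that was done the usual way via sp_configure:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;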

We've upped our number of tempdb files to 64 to avoid SGAM contention, which, combined with TF1118, seems to have stopped tempdb contention.
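
For reference, TF1118 can be enabled at startup (-T1118) or at runtime, and the tempdb data-file count can be sanity-checked like this:

    -- Force uniform extent allocations globally (normally -T1118 at startup).
    DBCC TRACEON (1118, -1);

    -- Confirm how many tempdb data files are actually present.
    SELECT COUNT(*) AS data_files
    FROM tempdb.sys.database_files
    WHERE type_desc = 'ROWS';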

Looking at sys.dm_os_waiting_tasks, we typically have ~60 tasks waiting on THREADPOOL, with only a handful on other types.
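
The THREADPOOL figure comes from a grouped snapshot like this one:

    -- Current waiting tasks grouped by wait type; THREADPOOL dominates for us.
    SELECT wait_type, COUNT(*) AS waiting_tasks
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type IS NOT NULL
    GROUP BY wait_type
    ORDER BY waiting_tasks DESC;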

Our signal waits are 70% (resource waits = 30%).
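
Those percentages were computed from sys.dm_os_wait_stats, roughly like so:

    -- Signal waits = time spent runnable, waiting for CPU; a high signal %
    -- relative to resource waits points at CPU pressure.
    SELECT
        CAST(100.0 * SUM(signal_wait_time_ms)
             / SUM(wait_time_ms) AS DECIMAL(5,2)) AS signal_wait_pct,
        CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms)
             / SUM(wait_time_ms) AS DECIMAL(5,2)) AS resource_wait_pct
    FROM sys.dm_os_wait_stats;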

We've verified that the TokenAndUserPermCache stays under 20 MB.

Looking at sys.dm_os_latch_stats, we see 40-200k BUFFER latch waits in a 1-minute window, mostly on sysdesend and a user table we use to manage dialogs.
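
Since sys.dm_os_latch_stats is cumulative since startup, the per-minute figure comes from a crude delta along these lines:

    -- Snapshot BUFFER latch waits, wait a minute, then diff.
    SELECT latch_class, waiting_requests_count
    INTO #before
    FROM sys.dm_os_latch_stats
    WHERE latch_class = 'BUFFER';

    WAITFOR DELAY '00:01:00';

    SELECT a.latch_class,
           a.waiting_requests_count - b.waiting_requests_count AS waits_per_minute
    FROM sys.dm_os_latch_stats AS a
    JOIN #before AS b ON b.latch_class = a.latch_class
    WHERE a.latch_class = 'BUFFER';

    DROP TABLE #before;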

We also see high SOS_SCHEDULER_YIELD waits, which also indicates CPU pressure. But is that because the CLR is freakishly busy, or because of Service Broker overhead? I'll happily provide code - let me know what I need to post here.

Thanks in advance.


Comments (1)

浪漫人生路 2024-10-18 02:24:53

  1. Do you use SSB only as a local queueing/processing mechanism, or is there any remote message delivery (x-machine transmission) involved?
  2. How many queues?
  3. Is activation involved? I assume yes - how many max_queue_readers? (See the sketch after this list for a quick way to check.)
  4. Anything you can associate with the 500k spikes? How long does it take for them to drain?
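
To answer 3 precisely, the activation settings per queue can be read from the standard catalog view (dbo.TargetQueue below is a placeholder):

    SELECT name,
           is_activation_enabled,
           activation_procedure,
           max_readers
    FROM sys.service_queues;

    -- Hypothetically, capping concurrent activated readers on one queue:
    ALTER QUEUE dbo.TargetQueue
    WITH ACTIVATION (MAX_QUEUE_READERS = 8);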

Some shots in the dark:

~60 tasks waiting for workers on a 16-CPU machine is something I would normally consider OK, but it is a bit weird for a machine dedicated to SSB processing, since such machines tend to have a few long-running tasks (the activated jobs) rather than many short-running ones, and so don't tend to show THREADPOOL waits.
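
If you want to rule worker starvation in or out, comparing the configured worker ceiling against what is actually in use is a cheap data point:

    -- Configured maximum worker threads for this instance.
    SELECT max_workers_count
    FROM sys.dm_os_sys_info;

    -- Workers currently instantiated.
    SELECT COUNT(*) AS current_workers
    FROM sys.dm_os_workers;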
