Can a large transaction log cause high CPU?

Posted 2024-10-08 22:09:20 · 11 views · 0 comments


I have a client with a very large database on Sql Server 2005. The total space allocated to the db is 15 GB, with roughly 5 GB for the data and 10 GB for the transaction log. Just recently, a web application that connects to that db has started timing out.

I have traced the actions on the web page and examined the queries that execute while these web operations are performed. There is nothing untoward in the execution plans.

The query itself uses multiple joins but completes very quickly. However, the db server's CPU spikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say multiple... read about 5). Under this load, the timeouts start to occur.

I suppose my question is, can a large transaction log cause issues with CPU performance? There is about 12Gb of free space on the disk currently. The configuration is a little out of my hands but the db and log are both on the same physical disk.

I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads-up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing, and this app has been responsive for a few years (i.e. it's a recent manifestation).

Many Thanks,
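Before reading the answers, it may help to gather two data points that connect log activity to the CPU spikes. This is a diagnostic sketch, not a prescribed procedure; both commands are standard on SQL Server 2005.

```sql
-- How full is each database's transaction log? A log that is nearly
-- full right before a spike suggests checkpoint/growth activity.
DBCC SQLPERF(LOGSPACE);

-- Top waits accumulated since the last restart. WRITELOG or
-- PAGEIOLATCH_* near the top would point at I/O contention on the
-- shared data/log disk rather than genuine CPU work.
SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```

Running the wait-stats query just after a spike, and comparing it with a baseline taken earlier, helps distinguish an I/O bottleneck from a CPU one.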


Comments (4)

紫罗兰の梦幻 2024-10-15 22:09:21


I wouldn't be surprised if a log that size were causing a problem, but there are other things it could be as well. Have the statistics been updated lately? Are the spikes happening when some automated job is running? Is there a clear time pattern to when you have the spikes - then look at what else is running. Did you load a new version of anything on the server around the time the spikes started happening?

In any event, the transaction log needs to be fixed. The reason it is so large is that it is not being backed up (or not backed up frequently enough). It is not enough to back up the database, you must also back up the log. We back ours up every 15 minutes but ours is a highly transactional system and we cannot afford to lose data.
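To illustrate the point about log backups: in the FULL recovery model, the inactive portion of the log is only freed for reuse after a log backup, so backing up just the database lets the log grow without bound. A minimal sketch, where the database name and backup paths are placeholders:

```sql
-- Full backup of the data files...
BACKUP DATABASE [MyClientDb]
TO DISK = N'D:\Backups\MyClientDb.bak';

-- ...and, separately and far more frequently, the log. Each log
-- backup lets the inactive portion of the log be reused, which stops
-- the log file from growing indefinitely.
BACKUP LOG [MyClientDb]
TO DISK = N'D:\Backups\MyClientDb.trn';

-- While investigating, also refresh statistics if they haven't been
-- updated lately (one of the other suspects mentioned above).
EXEC sp_updatestats;
```

Scheduling the log backup (e.g. every 15 minutes, as above) is done via a SQL Agent job; the interval is a trade-off between acceptable data loss and backup overhead.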

ペ泪落弦音 2024-10-15 22:09:20


It's hard to say exactly given the lack of data, but such spikes are commonly observed at transaction log checkpoints.

A checkpoint is the process of applying the data that has been sequentially appended to the transaction log to the actual data files.

This involves a lot of I/O, along with CPU work, and may be a reason for the CPU activity spikes.

Normally, a checkpoint occurs when a transaction log is 70% full or when SQL Server decides that a recovery procedure (reapplying the log) would take longer than 1 minute.
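One way to test this theory is to force a checkpoint manually and watch whether the CPU/disk spike reproduces at that moment. The automatic-checkpoint frequency is governed by the server-wide 'recovery interval' option. A sketch (the values shown are the defaults, not a recommendation):

```sql
-- Force a checkpoint in the current database; if the spike appears
-- now, checkpoints are a likely culprit.
CHECKPOINT;

-- 'recovery interval' is an advanced option: the target number of
-- minutes a crash recovery should take, which indirectly controls
-- how often automatic checkpoints fire. 0 is the self-tuning default.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval', 0;
RECONFIGURE;
```

Lowering the interval makes checkpoints more frequent but smaller, which can smooth out I/O bursts at the cost of more total overhead.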

巴黎盛开的樱花 2024-10-15 22:09:20


Your first priority should be to address the transaction log size. Is the DB being backed up correctly, and how frequently? Address these issues and then see if the CPU spikes go away.
CHECKPOINT is the process of reading your transaction log and applying the changes to the DB files; if the transaction log is huge, it stands to reason that it could have an impact.

捎一片雪花 2024-10-15 22:09:20


You could try adjusting the autogrowth setting: Kimberly Tripp suggests an autogrowth increment of around 500 MB for transaction logs measured in GBs:

http://www.sqlskills.com/blogs/kimberly/post/8-Steps-to-better-Transaction-Log-throughput.aspx

(see point 7)
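The fixed-increment autogrowth from the linked article can be applied roughly like this. The database and logical file names are placeholders; check `sys.database_files` for the real log file name first:

```sql
-- Find the logical name of the log file for the current database.
SELECT name, size, growth
FROM sys.database_files
WHERE type_desc = 'LOG';

-- Grow in fixed 500 MB increments rather than a percentage, so each
-- growth event on a multi-GB log stays a predictable, bounded pause.
ALTER DATABASE [MyClientDb]
MODIFY FILE (NAME = N'MyClientDb_log', FILEGROWTH = 500MB);
```

Percentage-based growth on a 10 GB log means each growth event allocates gigabytes at once, which is exactly the kind of stall that can coincide with the spikes described in the question.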
