SQL Server filegroup is full during a large INSERT INTO statement
Consider a SQL script designed to copy rows from one table to another in a SQL Server 2000 database. The transfer involves 750,000 rows in a single, simple statement:
INSERT INTO TableB([ColA],[ColB]....[ColG])
SELECT [ColA],[ColB]....[ColG]
FROM TableA
This is a long-running query, perhaps in part because ColB is of type ntext. There are also a handful of CONVERT() operations in the SELECT statement.
The difficulty is that after ~15 mins of operation, this exception is raised by SQL Server.
Could not allocate space for object '[TABLE]'.'[PRIMARY_KEY]' in database '[DB]' because the 'PRIMARY' filegroup is full.
Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
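The remedies the error message suggests can all be expressed in T-SQL. A sketch, assuming the database is named `DB` and its primary data file's logical name is `DB_Data` (both hypothetical placeholders -- substitute your own):

```sql
-- Hypothetical names: [DB] (database), DB_Data (logical file name).
-- Option 1: set autogrowth on an existing file in the PRIMARY filegroup:
ALTER DATABASE [DB]
    MODIFY FILE (NAME = N'DB_Data', FILEGROWTH = 256MB);

-- Option 2: add another file to the PRIMARY filegroup:
ALTER DATABASE [DB]
    ADD FILE (
        NAME = N'DB_Data2',
        FILENAME = N'C:\Data\DB_Data2.ndf',
        SIZE = 1GB,
        FILEGROWTH = 256MB
    ) TO FILEGROUP [PRIMARY];
```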
More Info
- Autogrowth is already on.
- There is more than enough free space on disk (~20 GB).
- The single .mdf is ~6 GB.
- There are no triggers on the source or target tables.
Question
What options need to be set, either via Management Studio, or via T-SQL to allow the database to grow as required? What other remedies would you suggest?
Resolution
The database could not grow as needed because I was hosting it on an instance of SQL Server 2008 Express. Upgrading to a non-neutered edition of SQL Server will solve this problem.
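One way to confirm this kind of edition limit from T-SQL (the cap varies by version -- SQL Server 2008 Express limits each database's data files to 4 GB, raised to 10 GB in 2008 R2 Express):

```sql
-- Which edition is this instance running?
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;

-- Current data-file sizes for this database (size is in 8 KB pages):
SELECT name, size * 8 / 1024 AS SizeMB, max_size
FROM sys.database_files
WHERE type_desc = 'ROWS';
```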
Comments (3)
Best advice: Pre-size your database larger, instead of forcing it to grow on-demand (which can be a slow operation).
One reason this error could occur is if your autogrow interval is set too large. Besides the obvious (trying to grow by 25GB with only 20GB on disk), a large growth interval can take a very long time to allocate, which can cause your query to time out.
EDIT: Based on your new screenshot, it doesn't look like the interval is the problem. But my original advice still stands. Try to manually grow the database yourself, and see if it lets you:
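A manual grow can be done with ALTER DATABASE. A sketch, again assuming a hypothetical database `DB` whose primary data file has the logical name `DB_Data`:

```sql
-- Hypothetical names: substitute your database and logical file name.
-- Grow the primary data file to 8 GB explicitly; if this fails, the
-- error message should say why (disk, max size, or edition limit).
ALTER DATABASE [DB]
    MODIFY FILE (NAME = N'DB_Data', SIZE = 8GB);
```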
If you can share a screenshot/information on your PRIMARY filegroup makeup and autogrow settings (i.e. all files included in PRIMARY and the autogrow settings for each), that would be helpful as well. My first thought, without seeing anything additional, would be that you potentially have a maxFileSize specified for one or more of the files that make up your PRIMARY group, but that's just a hunch without actually seeing the information.
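The per-file limits can also be checked without a screenshot, via the catalog views (run in the database in question):

```sql
-- max_size is in 8 KB pages; -1 means "grow until the disk is full",
-- 0 means growth is not allowed for that file.
SELECT f.name,
       f.max_size,
       f.growth,
       f.is_percent_growth
FROM sys.database_files AS f
JOIN sys.filegroups     AS fg ON f.data_space_id = fg.data_space_id
WHERE fg.name = 'PRIMARY';
```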
Are there any triggers on the table in question? I've seen similar results before when there were triggers: it was actually the expansion of the log file (.ldf) that hit the limit, just logging all the queries being run by the triggers, not the .mdf itself. If there are any triggers, I'd consider disabling them while you do this update and seeing if that helps. (I assume this is a one-off data migration rather than a recurring event?)
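If triggers do turn out to be involved, they can be switched off for the duration of the copy. A sketch, assuming the target table is TableB as above (this ALTER TABLE form works on SQL Server 2000 as well as later versions):

```sql
-- Disable all triggers on the target table for the one-off copy:
ALTER TABLE TableB DISABLE TRIGGER ALL;

-- ... run the INSERT INTO TableB ... SELECT ... FROM TableA here ...

-- Re-enable them afterwards:
ALTER TABLE TableB ENABLE TRIGGER ALL;
```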