How to avoid writing to the UNDO/REDO logs in Oracle
I have an Oracle PL/SQL script. It processes about 51 million records and writes the results to 5 different tables.
The problem is that I left the process running last night, and apparently the UNDO logs overflowed.
In particular, we are not interested in rolling this script back; if it fails, we can simply run it again.
Is there any way to optimise the usage of the undo/redo logs? Can we avoid those writes, or at least minimise them?
As far as I understand, setting the NOLOGGING attribute on the output tables would help, in addition to using APPEND inserts (as said here).
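For reference, the combination mentioned in the question looks roughly like this. This is a sketch only; `results_table` and `source_table` are placeholder names, and NOLOGGING only reduces redo for direct-path operations:

```sql
-- Placeholder names; assumes the target table can safely be set to NOLOGGING.
ALTER TABLE results_table NOLOGGING;

-- Direct-path insert: the APPEND hint writes blocks above the high-water mark,
-- generating minimal table UNDO and, combined with NOLOGGING, minimal table REDO.
INSERT /*+ APPEND */ INTO results_table
SELECT * FROM source_table;

-- A direct-path insert must be committed before the session
-- can query or modify the table again.
COMMIT;
```

Note that index maintenance still generates undo and redo even when the table itself is loaded with a direct-path insert.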
You should not process the 51 million records in a single batch. Try splitting the work into smaller chunks of a few thousand rows each, for example. If you perform a COMMIT after each smaller batch (which you would do anyway, since you say you are not going to roll back), undo/redo space is only needed for the uncommitted portion, and you will avoid the overflow.
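The chunked approach above can be sketched in PL/SQL with BULK COLLECT and FORALL. This is a minimal, hypothetical skeleton — the cursor, table names, and chunk size are placeholders, and the real script writes to 5 tables rather than one:

```sql
-- Sketch: process the source in chunks, committing after each one
-- so undo is released as the job progresses. Names are placeholders.
DECLARE
  CURSOR c_src IS SELECT * FROM source_table;
  TYPE t_rows IS TABLE OF source_table%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 10000;  -- chunk size
    EXIT WHEN l_rows.COUNT = 0;

    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO results_table VALUES l_rows(i);

    COMMIT;  -- releases the undo generated by this chunk
  END LOOP;
  CLOSE c_src;
END;
/
```

Keep in mind that committing inside the loop makes the job non-restartable as a single transaction, so the script needs its own logic to skip or tolerate already-processed rows on a re-run.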
It is really a matter of reducing the amount of work that you are doing.
Table UNDO on a direct-path insert is always small, because the system only has to record that certain ranges of blocks should be removed from the segment. Indexes will still require substantial undo, though. NOLOGGING on a direct-path insert minimises table REDO.
In addition, disabling constraints and indexes may also speed up the inserts. You can rebuild the indexes afterwards with NOLOGGING.
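A sketch of that disable-load-rebuild pattern, with placeholder constraint and index names:

```sql
-- Before the load: disable a constraint and mark an index unusable
-- so the bulk insert does not maintain them row by row.
ALTER TABLE results_table DISABLE CONSTRAINT results_fk;
ALTER INDEX results_idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... perform the bulk insert here ...

-- After the load: rebuild the index with minimal redo, re-enable the constraint.
ALTER INDEX results_idx REBUILD NOLOGGING;
ALTER TABLE results_table ENABLE CONSTRAINT results_fk;
```

Re-enabling the constraint will validate the loaded data unless you enable it with NOVALIDATE, so budget time for that step too.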
You can also use the hidden parameter _disable_logging = true to reduce redo, but beware that the resulting import will be unrecoverable.
"Particularly, we are not interested in Rollbacking this script, if it fails, we can run it again."
One question is: would you need to, and are you prepared to, go back to a previous backup in order to run the script again?
That is, if the process failed after 10 million rows, would simply re-running the script result in 61 million rows being inserted, would the first 10 million be skipped/ignored, or would you end up with 10 million rows updated and 41 million inserted?
Also, if you do a NOLOGGING insert, it will probably require a fresh backup immediately after the job. You would have problems doing a point-in-time recovery to a time during the script run, so you also need to consider what other activity is happening on the database while the script is running.
Depending on how you've written the PL/SQL script, you may find that using large SQL statements, rather than row-by-row processing, can reduce undo (e.g. by minimising revisits to processed blocks).
Unless you really understand the impact of reducing undo or committing to allow reuse of the undo, my first recommendation would be simply to increase the size of the undo tablespace. Of course you do have the drawback that, if you have generated bucket loads of undo, then a failure will take a LONG time to rollback.
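If you go the route of simply enlarging the undo tablespace, the commands look roughly like this. The datafile path, tablespace name, and sizes are placeholders for your environment:

```sql
-- Check how much undo space is currently in use, per tablespace.
SELECT tablespace_name, SUM(bytes)/1024/1024 AS used_mb
FROM   dba_undo_extents
GROUP  BY tablespace_name;

-- Grow the undo datafile to a fixed size...
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf'
  RESIZE 8G;

-- ...or let it autoextend up to a cap instead.
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf'
  AUTOEXTEND ON NEXT 1G MAXSIZE 32G;
```

Also check the UNDO_RETENTION parameter: a long retention period keeps committed undo around longer and increases the space the job needs.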