Out-of-memory error from HTTPWatch while automating Internet Explorer with Ruby
I am using the HTTPWatch Ruby script to automate Internet Explorer and crawl a website looking for broken links. See here for information on the Ruby site spider script. After a while, the HTTPWatch plug-in fails with the following error:
Get Cache Object failed # 1. len = 2048 url = http://domainname/dckh1h0mntja0m8xa1qugzm3n_59c9/dbs.gif?&dcsdat=1284571577008&dcssip=domainname&dcsuri=/closet_detail.aspx&dcsqry=%3Fid=34200&WT.co_f=10.10.30.9-90436560.30102765&WT.vt_sid=10.10.30.9-90436560.30102765.1284565529237&WT.tz=-4&WT.bh=13&WT.ul=en-us&WT.cd=16&WT.sr=1680x1050&WT.jo=Yes&WT.ti=Generics%2520%2526%2520Super%2520Man%2520Center%25E2%2580%2594Testing...&WT.vt_f_tlh=1284571573
Error = 8 : Not enough storage is available to process this command.
Line 858 source.cpp
hr = 0x80070008
(A MiniDump has already been written by this process to )
SafeTerminate
Version: 7.0.26
When I look in Task Manager, IExplorer.exe is using about 1.5 GB of memory. I'm wondering: is this a problem of the cache filling up, or is the URL too long? Does anyone have any suggestions?
Answer (1):
OK, it looks like I was able to answer my own question. Since HTTPWatch is an IE plug-in, it only *looked* like Internet Explorer was running out of memory; in fact it is the in-memory HTTPWatch log that grows so large. The work-around is to dump the HttpWatch log at an interval using Save() and then Clear().
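The work-around above can be sketched in Ruby as a small helper that flushes the log to disk every N pages crawled. This assumes the HttpWatch automation plug-in object exposes `Log.Save(path)` and `Clear`, per the HttpWatch automation API; the interval, directory, and file-name scheme here are arbitrary choices for illustration:

```ruby
# Sketch of the work-around: save and clear the HttpWatch log every
# `interval` pages so the in-memory log (and hence the iexplore.exe
# process hosting the plug-in) stays small during a long crawl.
class LogRotator
  def initialize(plugin, interval: 50, dir: "hwl_logs")
    @plugin   = plugin     # assumed to respond to Log.Save(path) and Clear
    @interval = interval   # pages between flushes (arbitrary default)
    @dir      = dir
    @pages    = 0
    @part     = 0
  end

  # Call once after each page is crawled. When the interval is reached,
  # the current log is written to a numbered .hwl part file and cleared.
  def page_done
    @pages += 1
    return if @pages % @interval != 0
    @part += 1
    @plugin.Log.Save(File.join(@dir, "crawl_part#{@part}.hwl"))
    @plugin.Clear
  end
end
```

In the spider loop you would obtain the plug-in through win32ole (something like `controller = WIN32OLE.new('HttpWatch.Controller'); plugin = controller.IE.New`) and call `rotator.page_done` after each page; the saved .hwl parts can then be opened or merged in HttpWatch afterwards.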