Suds is not reusing cached WSDLs and XSDs, although I wish it would
I'm pretty sure suds is not caching my WSDLs and XSDs like I expect it to. Here's how I know that cached objects are not being used:
- It takes about 30 seconds to create a client:
client = Client(url)
- The logger entries show the XSD and WSDL files being digested consistently during the entire 30 seconds (the logging setup I use to see this is sketched right after this list)
- Wireshark is showing consistent TCP traffic to the server storing the XSD and WSDL files during the entire 30 seconds
- I see the files in the cache being updated each time I run my program
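(For reference, the logging setup is roughly the following; suds logs through the standard logging module, and the logger names below are the module paths I believe it uses, so treat them as an approximation.)

    import logging

    # Show suds's own log output so the WSDL/XSD digestion messages are
    # visible while the Client is being constructed.
    logging.basicConfig(level=logging.INFO)
    for name in ('suds.client', 'suds.wsdl', 'suds.xsd.schema', 'suds.transport'):
        logging.getLogger(name).setLevel(logging.DEBUG)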
I have a small program that creates a suds client, sends a single request, gets the response, then ends. My expectation is that each time I run the program, it should fetch the WSDL and XSD files from the file cache, not from the URLs. Here's why I think that:
- client.options.cache.duration is set to ('days', 1)
- client.options.cache.location is set to c:\docume~1\mlin\locals~1\temp\suds
- I see the cache files being generated and re-generated each time I run the program
For a moment I thought that maybe the cache is not reused between runs of a program, but I don't think a file cache would be used if that were the case, because an in-memory cache would do just fine.
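Roughly, the whole program is just this (the URL is a placeholder for my real WSDL; the explicit ObjectCache is only there to make the cache settings visible and, as far as I can tell, mirrors what suds sets up by default):

    import time
    from suds.client import Client
    from suds.cache import ObjectCache

    url = 'http://example.com/service?wsdl'   # placeholder WSDL URL

    # Same settings I see on client.options.cache: one-day duration, file
    # cache under the system temp directory.
    cache = ObjectCache(days=1)

    start = time.time()
    client = Client(url, cache=cache)
    print('client created in %.1f seconds' % (time.time() - start))
    print('cache duration: %s' % (client.options.cache.duration,))
    print('cache location: %s' % client.options.cache.location)

    # ...send the single request and read the response here...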
Am I misunderstanding how suds caching is supposed to work?
The problem is in the suds library itself. In cache.py, although ObjectCache.get() always gets a valid file pointer, it hits an exception (EOFError) doing pickle.load(fp). When that happens, the file is just downloaded again. Here's the sequence of events in DocumentReader.open():
- ObjectCache.get() returns a valid file pointer for the cached document
- pickle.load(fp) raises EOFError, so the lookup is treated as a cache miss
- the document is downloaded from its URL again and a new cache file is saved
So it doesn't really matter that the new cache file was saved, because the same thing happens the next time I run. This happens for ALL of the WSDL and XSD files.
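As I understand it, the EOFError comes from text-mode file handling on Windows: the pickle is a binary stream, and when it is read back through a text-mode file object, a Ctrl-Z byte (0x1a) in it is treated as end-of-file, so pickle.load() sees a truncated stream. A standalone sketch of that effect (nothing suds-specific; written for Python 2, which is what suds runs on; on Linux/Mac text and binary mode behave the same, so both round trips succeed there):

    import pickle

    # An object whose pickle contains Ctrl-Z bytes, which Windows text-mode
    # reads interpret as end-of-file.
    obj = {'payload': '\x1a\n' * 10}

    def round_trip(write_mode, read_mode):
        try:
            with open('cache-demo.px', write_mode) as f:
                pickle.dump(obj, f, 2)             # protocol 2 is a binary format
            with open('cache-demo.px', read_mode) as f:
                return pickle.load(f) == obj
        except Exception as e:                     # EOFError is what suds hits in get()
            return 'failed: %r' % e

    print('text mode  : %s' % round_trip('w', 'r'))    # fails on Windows
    print('binary mode: %s' % round_trip('wb', 'rb'))  # works everywhere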
I fixed that problem by opening the cache file in binary mode when reading and writing. Specifically, the changes I made were in cache.py:
1) In FileCache.put(), change the line that opens the cache file so that it writes in binary mode ('wb').
2) In FileCache.getf(), change the line that opens the cache file so that it reads in binary mode ('rb').
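To show the shape of the change without quoting suds's code, here is the same idea on a stripped-down stand-in class (SimpleObjectCache and its method names are illustrative, not suds's API); the whole fix is the 'b' in the open modes:

    import os
    import pickle
    import tempfile

    class SimpleObjectCache(object):
        """Minimal stand-in for a pickle-to-file object cache.
        The point of the fix is that both put() and get() open the cache
        file in binary mode ('wb'/'rb'), so Windows text-mode newline and
        Ctrl-Z handling cannot corrupt or truncate the pickle stream."""

        def __init__(self, location=None):
            self.location = location or os.path.join(tempfile.gettempdir(), 'suds-demo')
            if not os.path.isdir(self.location):
                os.makedirs(self.location)

        def __fn(self, id):
            return os.path.join(self.location, 'suds-%s.px' % id)

        def put(self, id, obj):
            with open(self.__fn(id), 'wb') as f:      # binary write, not text mode
                pickle.dump(obj, f, 2)                # protocol 2 is a binary pickle

        def get(self, id):
            try:
                with open(self.__fn(id), 'rb') as f:  # binary read, not text mode
                    return pickle.load(f)
            except (IOError, OSError, EOFError):
                return None                           # treat any problem as a cache miss

With both calls in binary mode, the second run of a program finds a loadable pickle in the cache instead of an apparently truncated one.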
I don't know the codebase well enough to know if these changes are safe, but it is pulling the objects from the file cache, the service is still running successfully, and loading the client went from 16 seconds down to 2.5 seconds. Much better if you ask me.
Hopefully this fix, or something similar, can be introduced back into the suds main line. I've already sent this to the suds mailing list (fedora-suds-list at redhat dot com).