How do I prevent suds from fetching xml.xsd over the network?
I'm using Python's suds library which tries to fetch xml.xsd over the network. Unfortunately, the w3c server is hammered due to other programs like mine and cannot usually serve the document.
How do I intercept suds' URL fetching to always grab a local copy of this file, even without having to download it into a long-lived cache successfully the first time?
The problem with fetching xml.xsd has to do with the "http://www.w3.org/XML/1998/namespace" namespace, which is required for most WSDLs. This namespace is mapped by default to http://www.w3.org/2001/xml.xsd.
You may override the location binding for this namespace to point to a local file:
The suds library has a class, suds.store.DocumentStore, that holds bundled XML in a uri -> text dictionary, and it can be patched to hold extra documents. Unfortunately, that alone doesn't work, because DocumentStore only honors requests for the suds:// protocol. One monkey patch later and you're in business.

It would also be possible to override the Cache() instance passed to your suds Client(), but the cache deals in numeric ids based on Python's hash() and does not get the URLs of its contents.