Exception when sending a big SOAP request
There is a web service deployed on Tomcat 6 and exposed via Apache CXF 2.3.3. Client stubs were generated with wsdl2java so the service can be called.
Things seemed fine until I sent a big request (~1 MB). That request wasn't processed and failed with this exception:
Interceptor for {http://localhost/}ResourceAllocationServiceSoapService has thrown
exception, unwinding now org.apache.cxf.binding.soap.SoapFault:
Error reading XMLStreamReader.
...
com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog
at [row,col {unknown-source}]: [1,0]
Is there some kind of maximum request length here? I'm totally stuck on this.
3 Answers
Vladimir's suggestion worked. The code below should help others see where to put the 1000000.
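The snippet this answer refers to was not preserved here, so what follows is only a minimal sketch, assuming a CXF RECEIVE-phase interceptor that buffers the incoming request with CachedOutputStream; the interceptor name is made up for illustration. The 1000000 goes into the CachedOutputStream constructor as the threshold (in bytes) below which data is kept in memory instead of being spooled to a temporary file.

import java.io.IOException;
import java.io.InputStream;

import org.apache.cxf.helpers.IOUtils;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.io.CachedOutputStream;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

// Hypothetical interceptor, shown only to illustrate where the threshold goes.
public class BufferingInInterceptor extends AbstractPhaseInterceptor<Message> {

    public BufferingInInterceptor() {
        super(Phase.RECEIVE);
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        InputStream is = message.getContent(InputStream.class);
        if (is == null) {
            return;
        }
        // Threshold of 1000000 bytes: requests up to ~1 MB stay in memory,
        // so the close() below does not invalidate the buffered data.
        CachedOutputStream cache = new CachedOutputStream(1000000);
        try {
            IOUtils.copy(is, cache);
            is.close();
            // Hand the buffered copy back to CXF for normal processing.
            message.setContent(InputStream.class, cache.getInputStream());
            cache.close();
        } catch (IOException e) {
            throw new Fault(e);
        }
    }
}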
I figured out what was wrong. It was actually a bug inside the interceptor's code. Once I replaced the offending stream copy, things started working fine. So the request was simply being truncated while the streams were being copied.
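The original before/after snippets are missing from this copy, so the sketch below is only an illustration of the kind of bug being described, not the poster's actual code: a copy bounded by a fixed byte limit truncates anything larger than that limit, while IOUtils.copy(in, out) drains the whole stream.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.cxf.helpers.IOUtils;

// Illustration only -- not the original interceptor code.
final class StreamCopyExamples {

    // Buggy pattern: a bounded copy silently drops everything past 'limit',
    // so a ~1 MB request arrives truncated and CXF later fails with
    // "Unexpected EOF in prolog" while parsing the incomplete XML.
    static void truncatingCopy(InputStream in, OutputStream out, int limit) throws IOException {
        byte[] buffer = new byte[4096];
        int copied = 0;
        int n;
        while (copied < limit && (n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            copied += n;
        }
    }

    // Fixed pattern: drain the entire stream regardless of its size.
    static void fullCopy(InputStream in, OutputStream out) throws IOException {
        IOUtils.copy(in, out);
    }
}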
I ran into the same issue of getting "com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog" when using the CachedOutputStream class.
Looking at the sources of the CachedOutputStream class, the threshold is used to switch between keeping the stream's data in memory and spilling it to a file.
When the stream handles data that exceeds the threshold, the data is stored in a file, and then the following code is going to break:
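The snippet itself did not survive in this copy; a minimal reconstruction of the failing pattern, reusing the variable names from this answer, might look like this (an assumption, not the verbatim original):

import java.io.IOException;
import java.io.InputStream;

import org.apache.cxf.helpers.IOUtils;
import org.apache.cxf.io.CachedOutputStream;

final class BrokenCaching {
    // Cache the request so it can be re-read, then hand back an InputStream.
    static InputStream cacheAndReread(InputStream inputStream) throws IOException {
        CachedOutputStream cachedInputStream = new CachedOutputStream();
        IOUtils.copy(inputStream, cachedInputStream);
        inputStream.close();

        // If the payload exceeded the threshold, it was spooled to a temp file,
        // and closing the cache here tears that file down before it is read...
        cachedInputStream.close();

        // ...so the stream returned below has nothing left to read, and the
        // SOAP parser later fails with "Unexpected EOF in prolog".
        InputStream tmpInputStream = cachedInputStream.getInputStream();
        return tmpInputStream;
    }
}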
Increasing the threshold does help, because all of the stream data is then kept in memory; in that scenario calling cachedInputStream.close() does not really close the underlying stream implementation, so one can still read from it later on.
Here is a 'fixed' version of the above code (at least it ran without exceptions for me):
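Again, the original snippet is missing, so this is only a sketch of the 'fixed' shape described here: take the InputStream from the cache before tearing anything down, and close tmpInputStream only once the data has actually been consumed.

import java.io.IOException;
import java.io.InputStream;

import org.apache.cxf.helpers.IOUtils;
import org.apache.cxf.io.CachedOutputStream;

final class FixedCaching {
    static InputStream cacheForReread(InputStream inputStream) throws IOException {
        CachedOutputStream cachedInputStream = new CachedOutputStream();
        IOUtils.copy(inputStream, cachedInputStream);
        inputStream.close();

        // Obtain the readable view first; do not close the cache prematurely.
        InputStream tmpInputStream = cachedInputStream.getInputStream();

        // The caller should close tmpInputStream when done -- per the note
        // below, that is when any temp file gets deleted, see
        // CachedOutputStream.maybeDeleteTempFile().
        return tmpInputStream;
    }
}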
The temporary file gets deleted when close() is called on tmpInputStream and there are no other references to it; see the source code of CachedOutputStream.maybeDeleteTempFile().