Reading a single XML document at a time from a stream with dom4j
I'm trying to read a single XML document from a stream at a time using dom4j, process it, then proceed to the next document on the stream. Unfortunately, dom4j's SAXReader (using JAXP under the covers) keeps reading and chokes on the following document element.
Is there a way to get the SAXReader to stop reading the stream once it finds the end of the document element? Is there a better way to accomplish this?
6 Answers
I was able to get this to work with some gymnastics using some internal JAXP classes. This isn't the cleanest solution, as it involves subclassing internal JAXP classes, but it does work.
Most likely, you don't want to have more than one document in the same stream at the same time. I don't think that the SAXReader is smart enough to stop when it gets to the end of the first document. Why is it necessary to have multiple documents in the same stream like this?
I think you'd have to add an adapter, something to wrap the stream and have that thing return end-of-file when it sees the beginning of the next document. As far as I know, the parsers, as written, will go until the end of the file or an error... and seeing another
<?xml version="1.0"?>
would certainly be an error.
Assuming you are responsible for placing the documents into the stream in the first place, it should be easy to delimit the documents in some fashion. For example:
Then, when reading from the stream, read into an array until DOC_TERMINATOR is encountered.
Since 4 is an invalid character value, you won't encounter it except where you explicitly add it, which allows you to split the documents. Now just wrap the resulting char array for input into SAX and you're good to go.
Note that the loop terminates when it gets a doc of length 0. This means that you should either add a second DOC_TERMINATOR after the last document, or you need to add something to detect the end of the stream in getNextDocument().
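The code for this answer was not preserved; a minimal sketch of the delimiting scheme might look like the following. DOC_TERMINATOR and getNextDocument() are named in the answer; the writeDocument() helper is an illustrative name, not from the original.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.Writer;

class DelimitedDocs {
    // Character value 4 (EOT) is not a legal XML character, so it can
    // only appear where we deliberately insert it as a separator.
    static final char DOC_TERMINATOR = 4;

    // Writer side: append each document followed by the terminator.
    static void writeDocument(Writer out, String xml) throws IOException {
        out.write(xml);
        out.write(DOC_TERMINATOR);
    }

    // Reader side: collect characters until the terminator (or EOF).
    // Returns null for a zero-length document, i.e. end of stream.
    static String getNextDocument(Reader in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) != -1 && c != DOC_TERMINATOR) {
            sb.append((char) c);
        }
        if (sb.length() == 0) {
            return null;
        }
        return sb.toString();
    }
}
```

Each string returned by getNextDocument() can then be wrapped in a StringReader (or CharArrayReader) and handed to dom4j's SAXReader as a fresh, self-contained input source.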
I have done this before by wrapping the base reader with another reader of my own creation that had very simple parsing capability. Assuming you know the closing tag for the document, the wrapper simply parses for a match, e.g. for "</MyDocument>". When it detects the tag, it returns EOF. The wrapper can be made adaptive by parsing out the first opening tag and returning EOF on the matching closing tag. I found it was not necessary to actually track the nesting level of the closing tag, since none of my documents used the document tag within itself, so the first occurrence of the closing tag was guaranteed to end the document.
As I recall, one of the tricks was to have the wrapper block close(), since the DOM reader closes the input source.
So, given Reader input, your code might look like:
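The original code block was not preserved here; a minimal reconstruction of such a wrapper follows. The class name DocumentReader is an assumption; the eof() and next() methods and the swallowed close() are taken from the answer's description.

```java
import java.io.IOException;
import java.io.Reader;

// Wraps a Reader and reports EOF once the given closing tag has been
// passed through, so a SAX parser stops at the end of one document.
class DocumentReader extends Reader {
    private final Reader in;
    private final String closeTag;
    private int matched = 0;       // chars of closeTag matched so far
    private boolean atEof = false;

    DocumentReader(Reader in, String closeTag) {
        this.in = in;
        this.closeTag = closeTag;
    }

    // True once the closing tag (or real end of stream) was seen.
    boolean eof() { return atEof; }

    // Re-arm the wrapper so the next document can be read.
    void next() { atEof = false; matched = 0; }

    @Override
    public int read(char[] buf, int off, int len) throws IOException {
        int n = 0;
        while (n < len && !atEof) {
            int c = in.read();
            if (c == -1) { atEof = true; break; }
            buf[off + n++] = (char) c;
            // Simple prefix match against the closing tag.
            if (c == closeTag.charAt(matched)) {
                matched++;
            } else {
                matched = (c == closeTag.charAt(0)) ? 1 : 0;
            }
            if (matched == closeTag.length()) atEof = true;
        }
        return (n == 0 && atEof) ? -1 : n;
    }

    // Swallow close(): the DOM reader closes its input source, but the
    // underlying stream must stay open for the next document.
    @Override
    public void close() { /* intentionally left open */ }
}
```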
The eof() method returns true if EOF is encountered. The next() method flags the reader to stop returning -1 for read().
Hopefully this points you in a useful direction.
--
Kiwi.
I would read the input stream into an internal buffer. Depending on the expected total stream size, I would either read the entire stream and then parse it, or detect the boundary between one xml and the next (look for the next <?xml declaration) and split there.
The only real difference then between handling a stream with one xml and a stream with multiple xmls is the buffer and split logic.
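A sketch of the buffer-and-split approach; the class and method names are illustrative, and splitting on the next <?xml declaration is the assumption made above.

```java
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

class XmlSplitter {
    // Read the whole stream into a buffer, then split it wherever a
    // new XML declaration starts. Each returned string is one document.
    static List<String> splitDocuments(Reader in) throws IOException {
        StringBuilder all = new StringBuilder();
        char[] buf = new char[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            all.append(buf, 0, n);
        }

        List<String> docs = new ArrayList<>();
        String text = all.toString();
        int start = 0;
        int next;
        // Search from start + 1 so the declaration that opens the
        // current document is not treated as a boundary.
        while ((next = text.indexOf("<?xml", start + 1)) != -1) {
            docs.add(text.substring(start, next));
            start = next;
        }
        if (start < text.length()) {
            docs.add(text.substring(start));
        }
        return docs;
    }
}
```

Each entry in the returned list can then be parsed independently with SAXReader, which is the only place the one-document and many-document cases differ.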