Error while parsing binary files (mostly PDF)

Published 2024-12-05 17:12:35

I am trying to parse PDF files with Apache Tika, feeding it a ByteArrayInputStream built from the downloaded binary data. Some PDF files now fail with an error while others parse fine. Earlier I was able to parse the same PDF files with Tika, but since I switched to ByteArrayInputStream I started getting errors, so I suspect a problem with the byte array. This is the error I am getting:

org.apache.tika.exception.TikaException: Unexpected RuntimeException from org.apache.tika.parser.pdf.PDFParser@652489c0

And this is my code...

if (page.isBinary()) {
   handleBinary(page, curURL);
}

public int handleBinary(Page page, WebURL curURL) {
    try {
          binaryParser.parse(page.getBinaryData());
          page.setText(binaryParser.getText());
          handleMetaData(page, binaryParser.getMetaData());


          //System.out.println(" pdf url " +page.getWebURL().getURL());
          //System.out.println("Text" +page.getText());
    } catch (Exception e) {
        // At minimum, log the failure instead of swallowing it silently.
        e.printStackTrace();
    }
    return PROCESS_OK;
}

public class BinaryParser {

    private String text;
    private Map<String, String> metaData;

    private Tika tika;

    public BinaryParser() {
        tika = new Tika();
    }

    public void parse(byte[] data) {
        InputStream is = null;
        try {
            is = new ByteArrayInputStream(data);
            text = null;
            Metadata md = new Metadata();
            metaData = new HashMap<String, String>();
            text = tika.parseToString(is, md).trim();
            processMetaData(md);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            IOUtils.closeQuietly(is);
        }
    }

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }

    private void processMetaData(Metadata md) {
        if ((getMetaData() == null) || (!getMetaData().isEmpty())) {
            setMetaData(new HashMap<String, String>());
        }
        for (String name : md.names()) {
            getMetaData().put(name.toLowerCase(), md.get(name));
        }
    }

    public Map<String, String> getMetaData() {
        return metaData;
    }

    public void setMetaData(Map<String, String> metaData) {
        this.metaData = metaData;
    }

}

public class Page {

    private WebURL url;

    private String html;

    // Data for textual content
    private String text;

    private String title;

    private String keywords;
    private String authors;
    private String description;
    private String contentType;
    private String contentEncoding;

    private byte[] binaryData;

    private List<WebURL> urls;

    private ByteBuffer bBuf;

    private final static String defaultEncoding = Configurations
            .getStringProperty("crawler.default_encoding", "UTF-8");

    public boolean load(final InputStream in, final int totalsize,
            final boolean isBinary) {
        if (totalsize > 0) {
            this.bBuf = ByteBuffer.allocate(totalsize + 1024);
        } else {
            this.bBuf = ByteBuffer.allocate(PageFetcher.MAX_DOWNLOAD_SIZE);
        }
        final byte[] b = new byte[1024];
        int len;
        int finished = 0;
        try {
            while ((len = in.read(b)) != -1) {
                // NOTE: this silently truncates the download when the next
                // chunk might not fit; a truncated PDF is a likely cause of
                // "expected='endstream'" and missing-XRef errors downstream.
                if (finished + b.length > this.bBuf.capacity()) {
                    break;
                }
                this.bBuf.put(b, 0, len);
                finished += len;
            }
        } catch (final BufferOverflowException boe) {
            System.out.println("Page size exceeds maximum allowed.");
            return false;
        } catch (final Exception e) {
            System.err.println(e.getMessage());
            return false;
        }

        this.bBuf.flip();
        if (isBinary) {
            binaryData = new byte[bBuf.limit()];
            bBuf.get(binaryData);
        } else {
            this.html = "";
            this.html += Charset.forName(defaultEncoding).decode(this.bBuf);
            this.bBuf.clear();
            if (this.html.length() == 0) {
                return false;
            }
        }
        return true;
    }

    public boolean isBinary() {
        return binaryData != null;
    }

    public byte[] getBinaryData() {
        return binaryData;
    }
}
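Separately from any parser bug, the `load()` loop above can silently truncate a download when the buffer fills up, and a truncated PDF is exactly the kind of input that makes PDF parsers fail partway through. Below is a minimal, standard-library-only sketch of a bounded read that fails loudly instead; `BoundedLoader` and `readBounded` are illustrative names, not part of the original code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BoundedLoader {

    // Reads the stream fully, but throws instead of silently truncating
    // when the download would exceed maxBytes.
    public static byte[] readBounded(InputStream in, int maxBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int len;
        while ((len = in.read(buf)) != -1) {
            if (out.size() + len > maxBytes) {
                throw new IOException("Download exceeds " + maxBytes + " bytes");
            }
            out.write(buf, 0, len);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[3000];

        // Fits under the cap: the full payload comes back.
        byte[] copy = readBounded(new ByteArrayInputStream(data), 4096);
        System.out.println(copy.length); // 3000

        // Over the cap: fails loudly instead of handing a partial
        // byte[] to the parser.
        try {
            readBounded(new ByteArrayInputStream(data), 2048);
        } catch (IOException e) {
            System.out.println("truncation detected");
        }
    }
}
```

With this shape, an oversized page is skipped explicitly rather than handed to Tika as a half-downloaded file.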

Any suggestions on what I am doing wrong?

UPDATE:
After upgrading to PDFBox 1.6.0, I started getting this error for some PDFs...

Parsing Error, Skipping Object
java.io.IOException: expected='endstream' actual='' org.apache.pdfbox.io.PushBackInputStream@70dbdc4b
    at org.apache.pdfbox.pdfparser.BaseParser.parseCOSStream(BaseParser.java:439)
    at org.apache.pdfbox.pdfparser.PDFParser.parseObject(PDFParser.java:552)
    at org.apache.pdfbox.pdfparser.PDFParser.parse(PDFParser.java:184)
    at org.apache.pdfbox.pdmodel.PDDocument.load(PDDocument.java:1088)
    at org.apache.pdfbox.pdmodel.PDDocument.load(PDDocument.java:1053)

And for some PDFs, this error...

 Did not found XRef object at specified startxref position 0
Invalid dictionary, found: '' but expected: '/'
 WARN [Crawler 2] Did not found XRef object at specified startxref position 0
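Both failure modes are consistent with a truncated download. As a quick triage step, a well-formed PDF ends with a `%%EOF` marker (possibly followed by whitespace), so the tail of the byte array can be checked before parsing. A sketch, with `PdfTailCheck` and `looksComplete` as hypothetical helper names:

```java
import java.nio.charset.StandardCharsets;

public class PdfTailCheck {

    // Quick sanity check: a well-formed PDF ends with an "%%EOF" marker.
    // A missing marker usually means the download was cut short, which
    // matches "expected='endstream'" and missing-XRef parse errors.
    public static boolean looksComplete(byte[] pdf) {
        int tail = Math.min(pdf.length, 32);
        String end = new String(pdf, pdf.length - tail, tail, StandardCharsets.ISO_8859_1);
        return end.contains("%%EOF");
    }

    public static void main(String[] args) {
        byte[] ok = "%PDF-1.4\ncontent\n%%EOF\n".getBytes(StandardCharsets.ISO_8859_1);
        byte[] cut = "%PDF-1.4\nconten".getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(looksComplete(ok));  // true
        System.out.println(looksComplete(cut)); // false
    }
}
```

This does not prove the file is valid, but it cheaply separates "crawler delivered a partial file" from "parser bug" before blaming Tika or PDFBox.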

Comments (1)

独闯女儿国 2024-12-12 17:12:35

This is a known bug in PDFBox version 1.4.0. Just update to PDFBox 1.5.0+.

See the release notes:

[PDFBOX-578] NullPointerException in PDPageNode.getCount

And this JIRA ticket.
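For reference, a build sketch of the suggested upgrade. The Maven coordinates are real, but the exact version should match whatever your Tika release expects, since mixing a newer PDFBox with an older tika-parsers can itself cause parse failures (as the asker's update to 1.6.0 suggests):

```xml
<!-- Sketch: pin PDFBox 1.5.0+ so the PDFBOX-578 fix is picked up.
     Keep this in sync with the PDFBox version your Tika release bundles. -->
<dependency>
    <groupId>org.apache.pdfbox</groupId>
    <artifactId>pdfbox</artifactId>
    <version>1.5.0</version>
</dependency>
```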
