Optimizing PDF word search
I have an application that iterates over a directory of PDF files and searches for a string. I am using PDFBox to extract the text from the PDFs, and the code is pretty straightforward. At first it was taking a minute and a half to search through 13 files and load the results, but I noticed that PDFBox was writing a lot of output to the log file. I changed the logging level and that helped a lot, but it is still taking over 30 seconds to load a page. Does anybody have any suggestions on how I can optimize the code, or another way to determine how many hits are in a document? I played around with Lucene, but it seems to only give you the number of hits in a directory, not the number of hits in a particular file.
Here is my code to get the text out of a PDF.
public static String parsePDF(String filename) throws IOException
{
    FileInputStream fi = new FileInputStream(new File(filename));
    try {
        PDFParser parser = new PDFParser(fi);
        parser.parse();
        COSDocument cd = parser.getDocument();
        PDFTextStripper stripper = new PDFTextStripper();
        String pdfText = stripper.getText(new PDDocument(cd));
        cd.close();
        return pdfText;
    } finally {
        fi.close(); // close the stream even if parsing fails
    }
}
Lucene would allow you to index each document separately.
Instead of using PDFBox directly, you can use Apache Tika to extract the text and feed it to Lucene. Tika uses PDFBox internally, but it provides an easy-to-use API as well as the ability to extract content from any type of document seamlessly.
Once you have a Lucene document for each file in your directory, you can perform a search against the complete index.
Lucene matches the search term and returns the number of results (files) whose content matches the query.
It is also possible to get the number of hits within each Lucene document/file using the Lucene API.
This is called the term frequency, and it can be calculated for the document and field being searched.
Example from: In a Lucene / Lucene.net search, how do I count the number of hits per document?
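If pulling in Tika and Lucene is more than you need, a plain-Java occurrence count over the text you already get back from parsePDF answers the per-file hit question directly. This is a minimal sketch, not Lucene's term frequency; the class and the countHits helper are hypothetical names, and it counts non-overlapping, case-insensitive matches:

```java
public class HitCounter {
    // Count non-overlapping occurrences of `term` in `text`, ignoring case.
    // Returns 0 for a null/empty term or null text.
    public static int countHits(String text, String term) {
        if (text == null || term == null || term.isEmpty()) {
            return 0;
        }
        String haystack = text.toLowerCase();
        String needle = term.toLowerCase();
        int hits = 0;
        int from = 0;
        while ((from = haystack.indexOf(needle, from)) != -1) {
            hits++;
            from += needle.length(); // skip past this match
        }
        return hits;
    }

    public static void main(String[] args) {
        // In the real app, `text` would come from parsePDF(filename).
        String text = "PDF search: search the PDF, then search again.";
        System.out.println(countHits(text, "search")); // 3
    }
}
```

Calling this once per file while iterating the directory gives a per-file hit count without adding any dependencies; the extraction cost stays the same, so it helps with the counting question rather than the 30-second load time.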