Too many open files error in Lucene
The project I'm working on indexes a certain amount of data (long texts) and compares it against a list of words at each interval (roughly every 15 to 30 minutes).
After some time, say the 35th round, this error occurred while starting to index the new set of data for the 36th round:
[ERROR] (2011-06-01 10:08:59,169) org.demo.service.LuceneService.countDocsInIndex(?:?) : Exception on countDocsInIndex:
java.io.FileNotFoundException: /usr/share/demo/index/tag/data/_z.tvd (Too many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:69)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:90)
at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:91)
at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:78)
at org.apache.lucene.index.TermVectorsReader.<init>(TermVectorsReader.java:81)
at org.apache.lucene.index.SegmentReader$CoreReaders.openDocStores(SegmentReader.java:299)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:580)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:556)
at org.apache.lucene.index.DirectoryReader.<init>(DirectoryReader.java:113)
at org.apache.lucene.index.ReadOnlyDirectoryReader.<init>(ReadOnlyDirectoryReader.java:29)
at org.apache.lucene.index.DirectoryReader$1.doBody(DirectoryReader.java:81)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:736)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:75)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:428)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:274)
at org.demo.service.LuceneService.countDocsInIndex(Unknown Source)
at org.demo.processing.worker.DataFilterWorker.indexTweets(Unknown Source)
at org.demo.processing.worker.DataFilterWorker.processTweets(Unknown Source)
at org.demo.processing.worker.DataFilterWorker.run(Unknown Source)
at java.lang.Thread.run(Thread.java:636)
I've already tried setting the maximum number of open files with:
ulimit -n <number>
But after some time, when an interval contains about 1,050 rows of long text, the same error occurs again, although that has only happened once so far.
Should I follow the advice from "(Too many open files) - SOLR" to modify the Lucene IndexWriter's mergeFactor, or is this an issue with the amount of data being indexed?
I've also read that it's a choice between batch indexing and interactive indexing.
How would one determine whether indexing is interactive? Is it just a matter of how frequently updates happen?
Should I categorize this project under interactive indexing, then?
UPDATE: I'm adding a snippet of my IndexWriter setup:
writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_30), IndexWriter.MaxFieldLength.UNLIMITED);
It seems like maxMerge (or is it the field length?) is already set to unlimited.
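For reference, MaxFieldLength.UNLIMITED only lifts the per-field token limit; the merge factor from the linked advice is a separate setting. Below is a minimal sketch of setting it explicitly, assuming the Lucene 3.0.x API where setMergeFactor is still exposed directly on IndexWriter (the class name and value here are just illustrative):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class WriterConfigSketch {
    public static IndexWriter openWriter(File indexDir) throws Exception {
        Directory dir = FSDirectory.open(indexDir);
        // MaxFieldLength.UNLIMITED only controls how many tokens per field
        // get indexed; it has nothing to do with merging or open files.
        IndexWriter writer = new IndexWriter(dir,
                new StandardAnalyzer(Version.LUCENE_30),
                IndexWriter.MaxFieldLength.UNLIMITED);
        // Lower mergeFactor so fewer segment files exist at once
        // (10 is the default; smaller means more merging, fewer files).
        writer.setMergeFactor(5);
        return writer;
    }
}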
3 Comments
I had already used ulimit, but the error still showed up.
Then I inspected the customized core adapters for the Lucene functions.
It turns out too many directories opened via IndexWriter.open were being LEFT OPEN.
Worth noting: after processing, always close the directory that was opened.
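To illustrate that point, here is a minimal sketch of the close-in-finally pattern that fixes this kind of leak (the class and method names are made up, not the actual adapter code):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class IndexingRoundSketch {
    public void indexRound(File indexDir) throws Exception {
        Directory dir = FSDirectory.open(indexDir);
        IndexWriter writer = new IndexWriter(dir,
                new StandardAnalyzer(Version.LUCENE_30),
                IndexWriter.MaxFieldLength.UNLIMITED);
        try {
            // ... add this round's documents here ...
        } finally {
            // Release the file handles even if indexing throws;
            // leaking them every round is what exhausts the ulimit.
            writer.close();
            dir.close();
        }
    }
}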
You need to double check whether the ulimit value has actually been persisted and set to a proper value (whatever the maximum is). It is very likely that your app is not closing index readers/writers properly. I've seen many stories like this on the Lucene mailing list, and it was almost always the user's app that was to blame, not Lucene itself.
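Since countDocsInIndex appears in the stack trace but its source isn't shown, the following is only a guess at its shape; a sketch of opening the reader and always closing it again:

import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class DocCountSketch {
    public static int countDocsInIndex(File indexDir) throws Exception {
        Directory dir = FSDirectory.open(indexDir);
        IndexReader reader = IndexReader.open(dir, true); // read-only reader
        try {
            return reader.numDocs();
        } finally {
            // Without these closes, every call leaks file descriptors.
            reader.close();
            dir.close();
        }
    }
}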
Use the compound file format to reduce the file count. When this flag is set, Lucene writes each segment as a single .cfs file instead of multiple files, which reduces the number of open files significantly.
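A minimal sketch of turning that on, assuming the Lucene 3.0.x API where setUseCompoundFile is still exposed on IndexWriter (the class name is illustrative):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class CompoundFileSketch {
    public static IndexWriter openCompoundWriter(File indexDir) throws Exception {
        Directory dir = FSDirectory.open(indexDir);
        IndexWriter writer = new IndexWriter(dir,
                new StandardAnalyzer(Version.LUCENE_30),
                IndexWriter.MaxFieldLength.UNLIMITED);
        // Write each new segment as a single .cfs file instead of many
        // per-segment files (.tvd, .frq, .tis, ...), reducing open handles.
        writer.setUseCompoundFile(true);
        return writer;
    }
}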