Lucene deletable files are locked
I use the MassIndexer to reindex. I got this example code from some site (can't remember where).
massIndexer.purgeAllOnStart(true) // true by default, highly recommended
.optimizeAfterPurge(true) // true is default, saves some disk space
.optimizeOnFinish(true) // true by default
.batchSizeToLoadObjects(100)
.threadsForSubsequentFetching(15)
.threadsToLoadObjects(10)
.limitIndexedObjectsTo(1000)
.cacheMode(CacheMode.IGNORE) // defaults to CacheMode.IGNORE
.startAndWait();
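For context, the massIndexer object above comes from a FullTextSession. A minimal sketch of the surrounding setup (Hibernate Search API; the variable names and the sessionFactory are mine, not from the original snippet):

import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.MassIndexer;
import org.hibernate.search.Search;

Session session = sessionFactory.openSession();            // plain Hibernate session, assumed available
FullTextSession fullTextSession = Search.getFullTextSession(session);
MassIndexer massIndexer = fullTextSession.createIndexer(); // indexes all indexed entity types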
But after several reindex runs the index is really, really huge. Any suggestions for how I can resolve this?
Lucene says:
This is normal behavior on Windows whenever you also have readers (IndexReaders or IndexSearchers) open against the index you are optimizing. Lucene tries to remove old segment files once they have been merged (optimized). However, because Windows does not allow removing files that are open for reading, Lucene catches an IOException deleting these files and then records these pending deletable files into the "deletable" file. On the next segments merge, which happens with explicit optimize() or close() calls and also whenever the IndexWriter flushes its internal RAMDirectory to disk (every IndexWriter.DEFAULT_MAX_BUFFERED_DOCS (default 10) addDocuments), Lucene will try again to delete these files (and additional ones) and any that still fail will be rewritten to the deletable file.
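Based on that explanation, the pending deletes should finally succeed once no reader holds the old segment files open. A rough sketch of forcing the retry (Lucene 3.x style APIs assumed; the index path is made up):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

// 1. Close every IndexReader/IndexSearcher open on this index first,
//    otherwise Windows keeps the old segment files locked.
Directory dir = FSDirectory.open(new File("C:/indexes/example")); // hypothetical path
IndexWriter writer = new IndexWriter(dir,
        new IndexWriterConfig(Version.LUCENE_31, new StandardAnalyzer(Version.LUCENE_31)));
writer.optimize(); // segment merge: Lucene retries the pending deletes here
writer.close();    // close() retries them again and releases the write lock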
But I believe there has to be some way to resolve this. As it stands, the index will take all free space, because at any given time it is in use by someone.
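The only workaround I can see for an always-in-use index is to swap readers periodically: reopen, then close the stale reader so Windows releases its file handles and the deferred deletes can go through. A minimal sketch (reader.reopen() is the Lucene 2.9/3.x API; the helper name is mine):

import java.io.IOException;
import org.apache.lucene.index.IndexReader;

// Call after each reindex: returns a fresh reader and closes the stale one
// so its old segment files become deletable.
static IndexReader refreshReader(IndexReader reader) throws IOException {
    IndexReader newReader = reader.reopen(); // returns the same instance if nothing changed
    if (newReader != reader) {
        reader.close(); // releases the Windows file locks on the old segments
        return newReader;
    }
    return reader;
}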