Handling Solr read/write timeout exceptions
I am new to Solr.
I have developed a website that uses Solr for indexing.
I want to handle the timeouts that can occur while Solr reads and writes the index. Please guide me on how I can handle these exceptions.
I am using SolrJ as the Solr client, and both my website and the Solr server are running on Tomcat.
Thank you!
Commit and Optimize are operations to make updates available to searchers. They are intended to be run after updates, not before queries.
Furthermore, they are expensive operations, which is why you're getting sporadic timeouts. Unless you have some special requirements, I recommend setting the <autoCommit/> option in your solrconfig.xml. As the name says, it will automatically issue a commit based on configurable criteria, such as the maximum number of uncommitted documents or the maximum time after adding documents. Optimize is even more expensive than Commit; it basically rewrites the index. How often to Optimize depends on how often you Commit changes and how many changes there are per commit.
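As a sketch, the relevant section of solrconfig.xml might look like the following; the thresholds are illustrative example values, not recommendations:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Automatically commit after 10,000 uncommitted documents
       or 60 seconds, whichever comes first (example values). -->
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>
```

Tune `maxDocs` and `maxTime` to your update rate: commit too often and you pay the commit cost repeatedly; commit too rarely and updates take longer to become searchable.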
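On the client side, SolrJ clients let you configure connection and socket (read) timeouts, and a timed-out request typically surfaces as an exception whose cause is a `SocketTimeoutException`. A common way to handle this is to retry the call a few times with a back-off. The sketch below shows the retry pattern with a plain `Callable` so it stays self-contained; `withRetry` and the retry counts are hypothetical names, and in real code you would wrap your SolrJ query or update call and also catch `SolrServerException`:

```java
import java.net.SocketTimeoutException;
import java.util.concurrent.Callable;

public class RetryExample {

    // Retry a call up to maxAttempts times when it fails with a timeout.
    // With SolrJ you would typically catch SolrServerException as well and
    // inspect its cause for SocketTimeoutException.
    static <T> T withRetry(Callable<T> call, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (SocketTimeoutException e) {
                last = e;                      // remember the timeout
                Thread.sleep(50L * attempt);   // simple linear back-off
            }
        }
        throw last; // all attempts timed out
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulated Solr request: times out twice, then succeeds.
        String result = withRetry(() -> {
            if (++calls[0] < 3) {
                throw new SocketTimeoutException("read timed out");
            }
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints "ok after 3 attempts"
    }
}
```

Keep the attempt count small: if timeouts are caused by an expensive commit or optimize running on the server, hammering it with retries only makes things worse.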