Elasticsearch output plugin in Logstash does not insert a document after a prior delete


My app communicates with Logstash, sending it objects. One of the options is upserting a document, and the other is deleting a document.
When doing one thing at a time, it works perfectly.
However, when I delete a document and then a couple of ms later I upsert the same document, the document is just deleted, as opposed to what should happen: the document should be deleted and then inserted back.

Notice that if I wait about 1 sec or so between the delete and upsert, it works fine.
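For context, here is a minimal sketch of how such an output is often wired up. The [es_action] and [doc_id] fields, the index name, and the host are hypothetical placeholders; the poster's actual configuration is not shown in the question.

output {
  elasticsearch {
    hosts         => ["http://localhost:9200"]  # assumed local cluster
    index         => "my-index"                 # hypothetical index name
    document_id   => "%{[doc_id]}"              # hypothetical field carrying the document ID
    action        => "%{[es_action]}"           # "update" for upserts, "delete" for deletes
    doc_as_upsert => true                       # only takes effect when the action is "update"
  }
}

With a setup like this, the delete and the upsert are sent as separate bulk actions for the same document ID, so their relative order on the Elasticsearch side determines the final state.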

These are the logs from Logstash (the elasticsearch output is set to debug):

[2022-06-09T18:25:04,535][DEBUG][logstash.outputs.elasticsearch][main][a9f569aea4eb379a8e7975c049f3a3af91b5aa5f0a331341c59ef8732f0f881e] Sending final bulk request for batch. {:action_count=>1, :payload_size=>534, :content_length=>534, :batch_offset=>0}
[2022-06-09T18:25:04,581][DEBUG][logstash.outputs.elasticsearch][main][d79b88d9d994ca71ad54b53446220613444ec138dc5edde62e6eaab5691bb002] Sending final bulk request for batch. {:action_count=>1, :payload_size=>119, :content_length=>119, :batch_offset=>0}

Any idea how to solve this?

Thanks,
Liran


Comments (1)

江城子 2025-02-13 00:10:53


By default, Logstash does not preserve the order of events. The setting pipeline.ordered can be used to control this. If there are multiple worker threads then this can also lead to events being re-ordered, so you will need pipeline.workers set to 1. Note that you have two events, a 534-byte event followed by a 119-byte event. It seems likely to me that, since both will contain the _id, the first one is the upsert of the document and contains the body of the document, and the second one is the delete, which just has the _id. That suggests that your events were sent to Elasticsearch in a different order than you expect.
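As a concrete sketch of the suggested fix (assuming a recent Logstash version, where both settings exist under these names), the ordering guarantees are controlled in logstash.yml, or per pipeline in pipelines.yml:

# logstash.yml -- the settings referred to in the answer above
pipeline.workers: 1      # a single worker thread, so batches are processed in arrival order
pipeline.ordered: true   # enforce ordering; Logstash refuses to start if workers > 1

With pipeline.ordered left at its default of auto, ordering is only preserved when pipeline.workers is 1, so reducing the worker count to 1 is the key change.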
