Unique filter in the Datadog event monitor
I am looking for a unique filter in the Datadog event monitor, something like the below.

Scenario:

I have multiple microservices running in Kubernetes. Each service produces log messages in the format foreground-process thread-<ID> is in waiting for state, and multiple log messages may be produced for each thread-<ID>. I am using the pipeline feature with a Grok parser to extract threadType (i.e. foreground-process) and thread (i.e. thread-11). I need to create a monitor that alerts when more than 5 unique threads are blocked per service. I can achieve this by creating a separate monitor for each service, but then I would need around 120 monitors. So I am looking to see whether there is a unique filter in Datadog, or any other mechanism, to achieve this.
Sample logs:
foreground-process thread-2 is in waiting for state
foreground-process thread-11 is in waiting for state
foreground-process thread-2 is in waiting for state
foreground-process thread-9 is in waiting for state
foreground-process thread-2 is in waiting for state
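
For reference, a Grok parsing rule that extracts the two attributes mentioned above might look like the following sketch (the attribute names threadType and thread follow the question; the rule name is arbitrary):

waiting_thread_rule %{notSpace:threadType} %{notSpace:thread} is in waiting for state

Applied to the first sample line, this would set threadType to foreground-process and thread to thread-2.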
1 Answer
To count a unique number of things, change the * in the count box of the monitor's query definition to the facet you want to count unique values for.
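
As a sketch of the query this produces, assuming the Grok parser stores the thread ID in a @thread facet and each microservice is tagged with service (both names are assumptions based on the question), the resulting multi-alert log monitor query would be equivalent to:

logs("\"is in waiting for state\"").index("*").rollup("cardinality", "@thread").by("service").last("5m") > 5

Because the query groups by service, this is a single multi-alert monitor that triggers separately for each of the ~120 services whenever more than 5 unique thread values appear in the evaluation window, so no per-service monitors are needed.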