libpcap setfilter() and packet loss

This is my first question here @stackoverflow.

I'm writing a monitoring tool for some VoIP production servers, in particular a sniffing tool that captures all traffic (VoIP calls) matching a given pattern, using the pcap library from Perl.

I cannot use poorly selective filters such as "udp" and then do all the filtering in my application code, because that would let through too much traffic and the kernel wouldn't keep up, reporting packet loss.

What I do instead is iteratively build the most selective filter possible during the capture. At the beginning I capture only (all) SIP signalling traffic and IP fragments (the pattern match has to be done at application level in any case); then, when I find information about RTP inside the SIP packets, I add 'or' clauses with the specific IP and PORT to the current filter string and re-apply it with setfilter(). A sketch follows the list below.

So basically something like this:

  1. Initial filter: "(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0)" -> captures all SIP traffic and IP fragments

  2. Updated filter: "(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0) or (host IP and port PORT)" -> also captures the RTP on a specific IP and PORT

  3. Updated filter: "(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0) or (host IP and port PORT) or (host IP2 and port PORT2)" -> also captures a second RTP stream

And so on.
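For concreteness, here is a minimal sketch of that update loop using Net::Pcap from Perl. The interface name, the snaplen/timeout values and the add_rtp_clause() helper are illustrative assumptions, not the actual tool; the relevant part is that every widening of the filter string goes through compile() plus setfilter() on the live handle.

    #!/usr/bin/perl
    # Sketch only: iteratively widen the capture filter on a live handle.
    use strict;
    use warnings;
    use Net::Pcap;

    my $dev = 'eth0';                 # assumed capture interface
    my $err = '';
    my ($net, $mask);
    Net::Pcap::lookupnet($dev, \$net, \$mask, \$err) == 0
        or die "lookupnet: $err";

    my $pcap = Net::Pcap::open_live($dev, 1600, 1, 100, \$err)
        or die "open_live: $err";

    # Step 1: SIP signalling plus IP fragments only.
    my $filter_str = '(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0)';
    apply_filter($pcap, $filter_str, $mask);

    # Called whenever an SDP body reveals a new RTP endpoint (steps 2, 3, ...).
    sub add_rtp_clause {
        my ($ip, $port) = @_;
        $filter_str .= " or (host $ip and port $port)";
        apply_filter($pcap, $filter_str, $mask);   # the setfilter() call that can lose fragments
    }

    sub apply_filter {
        my ($handle, $str, $netmask) = @_;
        my $compiled;
        Net::Pcap::compile($handle, \$compiled, $str, 1, $netmask) == 0
            or die 'compile: ' . Net::Pcap::geterr($handle);
        Net::Pcap::setfilter($handle, $compiled) == 0
            or die 'setfilter: ' . Net::Pcap::geterr($handle);
    }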

This works quite well, as I'm able to get the 'real' packet loss of RTP streams for monitoring purposes, whereas with the poorly selective filter version of my tool the RTP packet-loss percentage wasn't reliable, because some packets were missing due to drops by the kernel.

But let's get to the drawback of this approach.

Calling setfilter() while capturing means that libpcap drops packets received "while changing the filter", as stated in the code comments of the function set_kernel_filter() in pcap-linux.c (checked in libpcap versions 0.9 and 1.1).

So what happens is that when I call setfilter() while some packets arrive IP-fragmented, I do lose some fragments, and this is not reported by the libpcap statistics at the end: I only spotted it by digging into the traces.
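For reference, the "libpcap statistics" mentioned here are the counters returned by pcap_stats(). A minimal sketch of reading them through Net::Pcap, assuming an already-open handle; the fragments discarded inside setfilter()'s swap window never show up in ps_drop, which is why these numbers alone look clean:

    # Sketch: read libpcap's own counters for an already-open handle.
    use strict;
    use warnings;
    use Net::Pcap;

    sub report_stats {
        my ($pcap, $label) = @_;
        my %stats;
        Net::Pcap::stats($pcap, \%stats) == 0
            or die 'stats: ' . Net::Pcap::geterr($pcap);
        printf "%s: recv=%d kernel-drop=%d iface-drop=%d\n",
            $label, $stats{ps_recv}, $stats{ps_drop}, $stats{ps_ifdrop};
    }

    # Typical use around a filter change:
    #   report_stats($pcap, 'before setfilter');
    #   Net::Pcap::setfilter($pcap, $compiled);
    #   report_stats($pcap, 'after setfilter');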

Now, I understand why libpcap does this, but in my case I definitely cannot afford any packet drop (I don't mind capturing some unrelated traffic).

Do you have any idea how to solve this problem without modifying libpcap's code?

左岸枫 2024-10-12 19:58:18

What about starting up a new process with the more specific filter? You could have two parallel pcap captures going at once. After some time (or after checking that both received the same packets) you could stop the original.
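A rough sketch of that approach, under the assumption that each capture process simply dumps to its own pcap file; the interface name, filter strings, file names and the two-second overlap window are placeholders:

    # Sketch: run the broad and the narrowed capture side by side, then
    # retire the broad one once the new process is up.
    use strict;
    use warnings;
    use Net::Pcap;

    sub spawn_capture {
        my ($dev, $filter_str, $dumpfile) = @_;
        my $pid = fork();
        die "fork: $!" unless defined $pid;
        return $pid if $pid;          # parent: remember the child's pid

        my $err = '';
        my ($net, $mask);
        Net::Pcap::lookupnet($dev, \$net, \$mask, \$err);
        my $pcap = Net::Pcap::open_live($dev, 1600, 1, 100, \$err)
            or die "open_live: $err";
        my $compiled;
        Net::Pcap::compile($pcap, \$compiled, $filter_str, 1, $mask) == 0
            or die 'compile: ' . Net::Pcap::geterr($pcap);
        Net::Pcap::setfilter($pcap, $compiled) == 0
            or die 'setfilter: ' . Net::Pcap::geterr($pcap);

        my $dumper = Net::Pcap::dump_open($pcap, $dumpfile)
            or die 'dump_open: ' . Net::Pcap::geterr($pcap);
        Net::Pcap::loop($pcap, -1, sub {
            my ($d, $hdr, $pkt) = @_;
            Net::Pcap::dump($d, $hdr, $pkt);
        }, $dumper);
        exit 0;
    }

    my $broad  = spawn_capture('eth0',
        '(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0)', 'broad.pcap');
    my $narrow = spawn_capture('eth0',
        '(udp and port 5060) or (udp and ip[6:2] & 0x1fff != 0)'
        . ' or (host 10.0.0.1 and port 20000)', 'narrow.pcap');

    sleep 2;                          # overlap window; tune it, or compare the dumps instead
    kill 'TERM', $broad;              # stop the original capture
    waitpid($broad, 0);

The cost is duplicated packets during the overlap, which is exactly the merging problem the last answer mentions.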

你是暖光i 2024-10-12 19:58:18

Can you just capture all RTP traffic?

From the capture filters page, the suggested filter for RTP traffic is:

udp[1] & 1 != 1 && udp[3] & 1 != 1 && udp[8] & 0x80 == 0x80 && length < 250

As the link points out, you will get a few false positives where DNS and possibly other UDP packets occasionally contain the header byte 0x80 used by RTP packets; however, the number should be negligible and not enough to cause kernel drops.
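If that trade-off is acceptable, the whole tool could run with one static filter set before the capture starts, so setfilter() is never called mid-capture. A sketch, with the heuristic above rewritten in pcap's 'and'/'len' spelling; the interface name is a placeholder:

    # Sketch: a single static filter covering SIP, IP fragments and
    # heuristically matched RTP, applied once at startup.
    use strict;
    use warnings;
    use Net::Pcap;

    my $dev = 'eth0';                 # placeholder interface
    my $err = '';
    my ($net, $mask);
    Net::Pcap::lookupnet($dev, \$net, \$mask, \$err);

    my $pcap = Net::Pcap::open_live($dev, 1600, 1, 100, \$err)
        or die "open_live: $err";

    my $filter_str =
          '(udp and port 5060)'
        . ' or (udp and ip[6:2] & 0x1fff != 0)'
        . ' or (udp[1] & 1 != 1 and udp[3] & 1 != 1'
        . '     and udp[8] & 0x80 == 0x80 and len < 250)';

    my $compiled;
    Net::Pcap::compile($pcap, \$compiled, $filter_str, 1, $mask) == 0
        or die 'compile: ' . Net::Pcap::geterr($pcap);
    Net::Pcap::setfilter($pcap, $compiled) == 0
        or die 'setfilter: ' . Net::Pcap::geterr($pcap);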

北笙凉宸 2024-10-12 19:58:18

Round hole, square peg.

You have a tool that doesn't quite fit your need.

Another option is to do a first-level filter (as above, one that captures a lot more than wanted) and pipe it into another tool that implements the finer filter you want (down to the per-call case). If that first-level filter is too much for the kernel due to heavy RTP traffic, then you may need to do something else, like keeping a stable of processes to capture individual calls (so you're not changing the filter on the "main" process; it simply instructs the others how to set their filters), as sketched below.

Yes, this may mean merging captures, either on the fly (pass them all to a "save the capture" process) or after the fact.
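A rough sketch of the "stable of processes" idea: the main process only parses SIP and never touches its own filter, and each call gets a dedicated child with a fixed per-call filter. The start_call_capture()/stop_call_capture() names, the call-id bookkeeping and the dump-to-file behaviour are assumptions for illustration, not part of any existing tool.

    # Sketch: one fixed-filter capture process per call, driven by the
    # SIP-parsing main process.
    use strict;
    use warnings;
    use Net::Pcap;

    my %capture_pid;                  # call-id => pid of its capture process

    sub start_call_capture {
        my ($call_id, $ip, $port) = @_;
        my $pid = fork();
        die "fork: $!" unless defined $pid;
        if ($pid) { $capture_pid{$call_id} = $pid; return; }

        # Child: its filter is set once and never changed afterwards.
        my $err = '';
        my ($net, $mask);
        Net::Pcap::lookupnet('eth0', \$net, \$mask, \$err);
        my $pcap = Net::Pcap::open_live('eth0', 1600, 1, 100, \$err)
            or die "open_live: $err";
        my $compiled;
        Net::Pcap::compile($pcap, \$compiled, "host $ip and port $port", 1, $mask) == 0
            or die 'compile: ' . Net::Pcap::geterr($pcap);
        Net::Pcap::setfilter($pcap, $compiled) == 0
            or die 'setfilter: ' . Net::Pcap::geterr($pcap);

        my $dumper = Net::Pcap::dump_open($pcap, "$call_id.pcap")
            or die 'dump_open: ' . Net::Pcap::geterr($pcap);
        Net::Pcap::loop($pcap, -1, sub {
            my ($d, $hdr, $pkt) = @_;
            Net::Pcap::dump($d, $hdr, $pkt);
        }, $dumper);
        exit 0;
    }

    sub stop_call_capture {
        my ($call_id) = @_;
        my $pid = delete $capture_pid{$call_id} or return;
        kill 'TERM', $pid;            # e.g. when the BYE is seen
        waitpid($pid, 0);
    }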

You do realize that you may well miss RTP packets anyway if you don't install your filters fast enough. Don't forget that RTP packets can come in for the originator before the 200 OK arrives (or right together with it), and they may flow back to the answerer before the ACK (or on top of it). Also don't forget INVITE without SDP (offer in the 200 OK, answer in the ACK). Etc, etc. :-)
