Does recv() remove packets from pcap's buffer?
Say there are two programs running on a computer (for the sake of simplification, the only user programs running on linux) one of which calls recv(), and one of which is using pcap to detect incoming packets. A packet arrives, and it is detected by both the program using pcap, and by the program using recv. But, is there any case (for instance recv() returning between calls to pcap_next()) in which one of these two will not get the packet?
I really don't understand how the buffering system works here, so the more detailed explanation the better - is there any conceivable case in which one of these programs would see a packet that the other does not? And if so, what is it and how can I prevent it?
Comments (1)
AFAIK, there do exist cases where one would receive the data and the other wouldn't (in either direction). It's possible that I've gotten some of the details wrong here, but I'm sure someone will correct me.
Pcap uses different mechanisms to sniff on interfaces, but here's how the general case works: the kernel hands each incoming packet to every interested consumer independently. On Linux, libpcap opens a packet socket (PF_PACKET), and the kernel gives that socket its own copy of each frame; meanwhile the normal protocol stack processes the same packet and queues its payload on the receive buffer of whatever TCP/UDP socket it belongs to, where recv() picks it up. Because these are separate, bounded buffers, each one can overflow and drop a packet without affecting the other. So recv() never removes anything from pcap's buffer; the two programs simply consume independent copies, and each copy can be lost independently (pcap even reports its own drop count via pcap_stats()).
I would guess that there is no hard way to guarantee that both programs receive every packet. That would require blocking on a buffer when it's full (and that could lead to starvation, deadlock, all kinds of problems). It may be possible with interconnects other than Ethernet, but the general philosophy there is best-effort: when a consumer's buffer is full, the kernel silently drops the packet for that consumer only.
Unless the system is under heavy load, however, I would say that the loss rates would be quite low and that most packets would be received by all. You can decrease the risk of loss by increasing the buffer sizes. A quick Google search will turn up guides for tuning this, but I'm sure there are a million more ways to do it.
If you need hard guarantees, I think a more powerful model of the network is needed. I've heard great things about Netgraph for these kinds of tasks. You could also just install a physical box that inspects packets (the hardest guarantee you can get).