Boost ASIO buffering not working

Posted on 2024-10-03 00:34:15

I'm writing a networking application that uses ASIO/UDP to send and receive between a single remote/local endpoint pair. I had been using udp::socket::receive to receive data, and everything in my code worked logically, but I was losing an enormous number of packets. What I discovered was that any packet arriving while I was not blocked in the receive call was lost; nothing was being buffered. This was particularly odd because I had set the receive buffer to 2 MB with the following calls:

sock_udp.connect( remote_endpoint );
sock_udp.set_option( boost::asio::socket_base::receive_buffer_size(2*1024*1024) );

Adding to the confusion, even when I sent only two packets of about 100 bytes each, I would still lose the second one if I spent any time processing the first.

I figured that this was perhaps a flaw in udp::socket::receive, so I rewrote my networking code to use udp::socket::async_receive, but I still have the same problem: once my handler is called, I drop any packets that arrive until I call async_receive again.

Am I fundamentally misunderstanding something? Is there a different approach I should be using to get Boost to buffer incoming packets?

If it helps, I've verified that this happens both on OS X, built in Xcode with Apple's custom gcc 4.2, and on Ubuntu 10.10 with gcc 4.5. I have not yet been able to try it on Windows.
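A side note, not part of the original question: one quick sanity check is to read the option back after setting it, since the OS may silently clamp the requested size; on Linux, for example, SO_RCVBUF is limited by net.core.rmem_max. A minimal sketch, reusing the sock_udp socket from the snippet above and assuming <iostream> is available:

// Sketch only: ask the OS what receive-buffer size it actually granted.
boost::asio::socket_base::receive_buffer_size applied;
sock_udp.get_option( applied );
std::cout << "effective receive buffer: " << applied.value() << " bytes" << std::endl;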

Comments (1)

冷清清 2024-10-10 00:34:15

The general idea here is that your program should spend the vast majority of its time waiting on the socket to deliver something, either blocked in the UDP receive or waiting in the io_service for notification that the socket has asynchronously received something. The socket implicitly has a small buffer in the OS for receiving packets; there is no way to avoid that. So the problem is more likely in how your program is behaving.

  • Is your thread anywhere but within the ASIO io_service? If so you can easily overflow any underlying socket buffer.
  • Can you prove that, on average, the time spent between blocking calls is less than the time between packets being sent?
  • You do have to call async_receive again after you receive data from the socket. For example, you can issue another async_receive from within your receive handler, as in the sketch below.
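To make that last point concrete, here is a minimal, self-contained sketch of such a receive loop. It is only an illustration: the class, handler, and buffer names (receiver, handle_receive, recv_buffer_, process_packet) are hypothetical and not taken from the original post, and it uses the io_service-era Boost.Asio API the question is written against.

// Minimal sketch (hypothetical names throughout): keep one async receive
// outstanding at all times and re-arm it from inside the completion handler.
#include <boost/asio.hpp>
#include <boost/array.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <vector>

using boost::asio::ip::udp;

class receiver
{
public:
    receiver( boost::asio::io_service& io, unsigned short port )
        : sock_udp_( io, udp::endpoint( udp::v4(), port ) )
    {
        sock_udp_.set_option( boost::asio::socket_base::receive_buffer_size( 2 * 1024 * 1024 ) );
        start_receive();
    }

private:
    void start_receive()
    {
        // Queue the next asynchronous receive.
        sock_udp_.async_receive_from(
            boost::asio::buffer( recv_buffer_ ), sender_,
            boost::bind( &receiver::handle_receive, this,
                         boost::asio::placeholders::error,
                         boost::asio::placeholders::bytes_transferred ) );
    }

    void handle_receive( const boost::system::error_code& ec, std::size_t bytes )
    {
        if ( ec )
            return;  // e.g. socket closed; stop the loop

        // Copy the datagram out of the receive buffer, re-arm the socket,
        // and only then do any slow application-level work.
        std::vector<char> packet( recv_buffer_.data(), recv_buffer_.data() + bytes );
        start_receive();
        process_packet( packet );   // hypothetical application handler
    }

    void process_packet( const std::vector<char>& packet )
    {
        std::cout << "got " << packet.size() << " bytes" << std::endl;
    }

    udp::socket               sock_udp_;
    udp::endpoint             sender_;
    boost::array<char, 2048>  recv_buffer_;
};

int main()
{
    boost::asio::io_service io;
    receiver r( io, 12345 );   // hypothetical local port
    io.run();                  // the program spends its time here, waiting on the socket
}

The handler copies the datagram out, re-issues start_receive() before doing any slow work, so another receive is queued as soon as control returns to io_service.run(); anything arriving in the meantime sits in the OS socket buffer mentioned above.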