UDP - losing data during microbursts

The code below runs fine (i.e. doesn't drop messages) 99.9% of the time. But when a microburst of datagrams arrives with only 2-3 microseconds between datagrams, I experience data loss. The boost notify_one() member call takes 5 to 10 microseconds to complete, so by itself it is the key bottleneck under these conditions. Any suggestions on how to improve performance?

Receiver/"producer" code thread:

if (bytes_recvd > 0) {
    InQ.mut.lock();
    std::string t;
    t.append(data_, bytes_recvd);
    InQ.msg_queue.push(t);    // < 1 microsecs
    InQ.mut.unlock();
    InQ.cond.notify_one();    // 5 - 10 microsecs
}

Consumer code thread:

//snip......
std::string s;
while (1) {
    InQ.mut.lock();
    if (!InQ.msg_queue.empty()) {
        s.clear();
        s = InQ.msg_queue.front();
        InQ.msg_queue.pop();
    }
    InQ.mut.unlock();
    if (s.length()) {
        processDatagram((char *)s.c_str(), s.length());
        s.clear();
    }
    boost::mutex::scoped_lock lock(InQ.mut);
    InQ.cond.wait(lock);
}

云胡 2024-11-26 20:27:55

Just change

if (!InQ.msg_queue.empty()) {

to

while (!InQ.msg_queue.empty()) {

That way packets don't have to wake the thread to get processed; if the thread is already awake and busy, it will see the new packet before sleeping.

Ok, it's not quite that simple, because you need to release the lock between packets, but the idea will work -- before sleeping, check whether the queue is empty.
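
A minimal sketch of the reworked consumer loop, assuming the same InQ members as in the question (msg_queue, mut, cond) and that boost::mutex::scoped_lock is boost::unique_lock as in Boost.Thread:

std::string s;
while (1) {
    boost::mutex::scoped_lock lock(InQ.mut);
    // Wait on a predicate: re-checking emptiness under the lock means a
    // notify that fires while we are busy processing can never be lost.
    while (InQ.msg_queue.empty())
        InQ.cond.wait(lock);
    // Drain the whole backlog, dropping the lock per datagram so the
    // receiver thread can keep pushing while we process.
    while (!InQ.msg_queue.empty()) {
        s.swap(InQ.msg_queue.front());
        InQ.msg_queue.pop();
        lock.unlock();
        processDatagram((char *)s.c_str(), s.length());
        s.clear();
        lock.lock();
    }
}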

不气馁 2024-11-26 20:27:55

If you're losing data, try increasing your socket receive buffer size. If you're using boost::asio, look into this option: boost::asio::socket_base::receive_buffer_size. Generally for our high-throughput UDP applications we set the socket buffer size to 1MB (more in some cases).

Also, make sure that the buffers you're using in your receive calls are not too large, they should only be large enough to handle your maximum expected datagram size (which is obviously implementation dependent).
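
A minimal sketch of setting that option; the bound port (9000) is an arbitrary placeholder:

#include <boost/asio.hpp>
#include <iostream>

int main() {
    boost::asio::io_service io;  // io_context in recent Boost versions
    boost::asio::ip::udp::socket sock(
        io, boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), 9000));

    // Request a 1 MB kernel receive buffer so bursts queue in the socket
    // instead of being dropped before the application can read them.
    sock.set_option(boost::asio::socket_base::receive_buffer_size(1024 * 1024));

    // Read back what was actually granted; the OS may clamp the value
    // (e.g. to net.core.rmem_max on Linux unless raised by the admin).
    boost::asio::socket_base::receive_buffer_size actual;
    sock.get_option(actual);
    std::cout << "receive buffer: " << actual.value() << " bytes\n";
    return 0;
}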

演多会厌 2024-11-26 20:27:55

Your obvious clog is in the condition-variable signaling.
Your main hope would be in using a lockless queue implementation. This is probably an obvious statement to you.
The only way to really get the lock-free queue to work for you, of course, is if you have multiple cores and don't mind dedicating one to the consuming task.
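
A sketch of that approach using boost::lockfree::spsc_queue (a single-producer/single-consumer ring buffer, in Boost since 1.53); the capacity, the overflow handling, and the reuse of the question's processDatagram are all assumptions:

#include <boost/lockfree/spsc_queue.hpp>
#include <cstddef>
#include <string>

void processDatagram(char *data, std::size_t len);  // as in the question

// Wait-free ring buffer shared by exactly one producer and one consumer.
// The capacity is a guess; size it for the worst burst you expect.
boost::lockfree::spsc_queue<std::string,
                            boost::lockfree::capacity<8192>> q;

// Receive thread: push never blocks and never takes a lock; it fails
// only when the ring is full, giving you an explicit overflow point.
void onReceive(const char *data_, std::size_t bytes_recvd) {
    if (!q.push(std::string(data_, bytes_recvd))) {
        // ring full: count/log the drop here
    }
}

// Consumer thread: spin-polling burns one core, which is the trade-off
// this answer refers to.
void consume() {
    std::string s;
    for (;;) {
        while (q.pop(s))
            processDatagram((char *)s.c_str(), s.length());
        // optionally back off briefly here to ease CPU pressure
    }
}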

意犹 2024-11-26 20:27:55

Some general suggestions:

  • Increase socket receive buffer size.
  • Read all available datagrams, then pass them all on for processing.
  • Avoid data copying, pass pointers around.
  • Reduce lock scope to the absolute minimum, say, only push/pop a pointer onto/off the queue under that mutex (sketched below).
  • If all of the above fails, look into lock-free data structures to pass data around.

Hope this helps.
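
A minimal sketch combining the first four points, assuming C++14 and the question's processDatagram; the pointer queue and the batch swap are illustrative choices, not the only way to do it:

#include <boost/thread.hpp>
#include <memory>
#include <queue>
#include <string>

void processDatagram(char *data, std::size_t len);  // as in the question

// The queue holds pointers, so the critical section only moves a pointer;
// the payload itself is never copied under the lock.
std::queue<std::unique_ptr<std::string>> msg_queue;
boost::mutex mut;
boost::condition_variable cond;

// Producer: build the string outside the lock, push a pointer inside it.
void produce(const char *data_, std::size_t bytes_recvd) {
    auto t = std::make_unique<std::string>(data_, bytes_recvd);
    {
        boost::mutex::scoped_lock lock(mut);
        msg_queue.push(std::move(t));
    }
    cond.notify_one();
}

// Consumer: take the entire backlog with a single swap, then process
// every datagram with no lock held at all.
void consume() {
    for (;;) {
        std::queue<std::unique_ptr<std::string>> batch;
        {
            boost::mutex::scoped_lock lock(mut);
            while (msg_queue.empty())
                cond.wait(lock);
            batch.swap(msg_queue);
        }
        while (!batch.empty()) {
            std::string &s = *batch.front();
            processDatagram((char *)s.c_str(), s.length());
            batch.pop();
        }
    }
}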
