Does it make sense to have several UDP datagram sockets on standby? Is reception "simultaneous"? Are packets dropped or queued by the kernel?

Posted 2024-08-27 19:31:14


I'm coding a networking application on Android.

I'm thinking of having a single UDP port and DatagramSocket that receives all the datagrams sent to it, and then having different processing queues for these messages.

I'm wondering whether I should have a second or third UDP socket on standby. Some messages will be very short (100 bytes or so), but others will have to transfer files.

My concern is, will the Android kernel drop the small messages if it's too busy handling the bigger ones?

Update
"The latter function calls sock_queue_rcv_skb() (in sock.h), which queues the UDP packet on the socket's receive buffer. If no more space is left on the buffer, the packet is discarded. Filtering also is performed by this function, which calls sk_filter() just like TCP did. Finally, data_ready() is called, and UDP packet reception is completed."
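The drop-on-full-buffer behavior described in that quote can be observed from plain Java, with no root access. Below is a minimal sketch using the same `java.net` API Android exposes: it requests a deliberately tiny receive buffer, sends itself a burst of datagrams without reading any of them, then drains the buffer and counts what survived. The buffer size, payload size, and count are arbitrary illustration values, and the exact number received depends on how the kernel rounds the requested buffer.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public class DropDemo {
    public static void main(String[] args) throws Exception {
        InetAddress lo = InetAddress.getLoopbackAddress();
        try (DatagramSocket receiver = new DatagramSocket(0, lo);
             DatagramSocket sender = new DatagramSocket()) {
            receiver.setReceiveBufferSize(4096);  // ask for a tiny buffer; the kernel may round it
            int sent = 1000;
            byte[] payload = new byte[1200];
            for (int i = 0; i < sent; i++) {
                sender.send(new DatagramPacket(payload, payload.length,
                                               lo, receiver.getLocalPort()));
            }
            // Drain the receive buffer. Everything that did not fit while we
            // were "busy" sending has already been silently discarded.
            receiver.setSoTimeout(200);
            int received = 0;
            byte[] buf = new byte[1500];
            try {
                while (true) {
                    receiver.receive(new DatagramPacket(buf, buf.length));
                    received++;
                }
            } catch (SocketTimeoutException drained) { }
            System.out.println("sent " + sent + ", received " + received);
        }
    }
}
```

On a typical Linux kernel this prints a received count far below 1000: the drops happen in the kernel, before the application ever sees the data.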

3 Comments

一袭白衣梦中忆 2024-09-03 19:31:14


Let's get some basics down first:

Every socket has a receive and a send buffer. When network hardware signals the arrival of a new packet and the receive buffer is full, the packet is dropped. The buffer sizes are controlled via SO_RCVBUF and SO_SNDBUF socket options, see setsockopt(3). The OS sets some defaults (and there's the /etc/sysctl.conf file). This is on a BSD system:

~$ sysctl -a|grep space
net.inet.tcp.recvspace=16384
net.inet.tcp.sendspace=16384
net.inet.udp.recvspace=41600
net.inet.udp.sendspace=9216
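Those system-wide defaults can be inspected and overridden per socket from Java as well. A small sketch (the 1 MiB request is just an example; the kernel may round the value or clamp it to a system-wide maximum, which is why the code reads the value back):

```java
import java.net.DatagramSocket;

public class BufProbe {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            // Default taken from the OS (cf. the sysctl values above).
            System.out.println("default SO_RCVBUF: " + sock.getReceiveBufferSize());

            // Request a larger buffer to survive bursts; the kernel may
            // grant less (or, on Linux, double the request for bookkeeping).
            sock.setReceiveBufferSize(1 << 20);
            System.out.println("granted SO_RCVBUF: " + sock.getReceiveBufferSize());
        }
    }
}
```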

The difference between TCP and UDP is that the former takes care of sequencing of data and retransmission of dropped packets, plus flow control (a slow reader slows down a fast writer), while the latter doesn't.

So yes, using UDP to transfer files is not the best, but a workable, option. One just has to reinvent a part of TCP and weigh that re-invention's overhead against TCP's. Then again, the general wisdom is that UDP is best suited for applications that can tolerate some packet reordering/loss (e.g. audio/video streams).

Then there's the misguided notion that every socket needs a separate thread for sending/receiving data, which is far from the truth. Many excellent high-performance network applications have been written without threads, using non-blocking sockets and a polling mechanism (see select(2), poll(2), epoll(7)).
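In Java the polling approach maps onto `java.nio`: one `Selector` can service any number of non-blocking `DatagramChannel`s from a single thread. A self-contained sketch (the two loopback sockets and the `"to-a"`/`"to-b"` payloads are invented for the demo; it sends itself one datagram per socket so that `select()` has something to report, then exits):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class SelectorDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        // One thread, two non-blocking UDP sockets: no thread per socket.
        DatagramChannel a = DatagramChannel.open().bind(new InetSocketAddress("127.0.0.1", 0));
        DatagramChannel b = DatagramChannel.open().bind(new InetSocketAddress("127.0.0.1", 0));
        a.configureBlocking(false);
        b.configureBlocking(false);
        a.register(selector, SelectionKey.OP_READ);
        b.register(selector, SelectionKey.OP_READ);

        // Send one datagram to each socket so select() has work to report.
        try (DatagramChannel sender = DatagramChannel.open()) {
            sender.send(ByteBuffer.wrap("to-a".getBytes(StandardCharsets.UTF_8)), a.getLocalAddress());
            sender.send(ByteBuffer.wrap("to-b".getBytes(StandardCharsets.UTF_8)), b.getLocalAddress());
        }

        ByteBuffer buf = ByteBuffer.allocate(1500);
        int received = 0;
        while (received < 2) {
            selector.select();  // blocks until at least one socket is readable
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                DatagramChannel ch = (DatagramChannel) key.channel();
                buf.clear();
                if (ch.receive(buf) != null) {  // non-blocking: data is already queued
                    buf.flip();
                    System.out.println(new String(buf.array(), 0, buf.limit(),
                                                  StandardCharsets.UTF_8));
                    received++;
                }
            }
        }
        selector.close();
        a.close();
        b.close();
    }
}
```

In a real application the loop would run forever and dispatch each datagram to the appropriate processing queue instead of printing it.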

To the question itself:

Yes, the kernel might drop packets if the application is too busy to keep enough space available in its sockets' receive buffers. But since each socket has its own buffer, separating the control and data streams would help. Personally, though, I would go for a simple TCP server setup: listen on a port, accept a connection per client, and implement a meaningful protocol on top of the TCP stream. I agree that playing with UDP and low-level protocol state machines is a lot of fun, but it has been done already, and decades of research have gone into tuning TCP performance. What matters at the end of the day is the reliability (first) and performance (second) of your application.
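The suggested TCP setup, listen on a port, accept a connection per client, speak a line-based protocol over the stream, can be sketched in a few lines. This is only an illustration: the `"hello"`/`"ACK"` exchange is an invented placeholder protocol, and the demo client thread exists just to make the sketch run standalone on an ephemeral port.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpLineServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {  // 0 = ephemeral port
            int port = server.getLocalPort();

            // Demo client, so the sketch is self-contained.
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()))) {
                    out.println("hello");
                    System.out.println("client got: " + in.readLine());
                } catch (Exception e) { e.printStackTrace(); }
            });
            client.start();

            // One accepted connection per client; TCP handles ordering,
            // retransmission, and flow control underneath the stream.
            try (Socket conn = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                String line = in.readLine();
                out.println("ACK " + line);  // placeholder protocol
            }
            client.join();
        }
    }
}
```

A real server would accept in a loop and hand each connection to a worker, but the reliability properties come for free either way.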

Hope this helps.

羁〃客ぐ 2024-09-03 19:31:14


UDP is a bad idea for transferring files, as you cannot guarantee the order in which packets will be received, or whether they will be received at all. If you are thinking of building a fault-tolerant transport layer on top of it, you should just use TCP/IP instead, as that is exactly what it does.

UDP does not buffer or queue received packets. If a packet is received and you are waiting for data, you will receive it. If a packet is received while your program is busy doing some other processing, you won't get the packet at all. So if you receive two "simultaneous packets" (well, two very close together) there is a good chance you might miss one of them if you are doing any significant processing of each packet.

I don't see how having extra ports open will help you much. If you're busy processing a packet from port 1, then you'll miss any packets coming in on any other ports you're watching, unless each is running on a dedicated thread. You would be much better off copying the packet quickly into your own buffer and passing it to another thread for processing, so your listener thread can get back to listening as soon as possible.
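The receive-and-hand-off pattern described above might look like the following sketch: the listener copies each datagram out of the socket and onto a `BlockingQueue`, and a worker thread does the slow processing. The `"ping"` payload and single-message flow are invented so the example runs standalone; a real listener would loop on `receive()`.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ListenerHandoff {
    public static void main(String[] args) throws Exception {
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(1024);
        InetAddress lo = InetAddress.getLoopbackAddress();
        DatagramSocket sock = new DatagramSocket(0, lo);

        // Worker thread: does the (potentially slow) processing.
        Thread worker = new Thread(() -> {
            try {
                byte[] msg = queue.take();
                System.out.println("processed: " + new String(msg));
            } catch (InterruptedException ignored) { }
        });
        worker.start();

        // Send ourselves one datagram so the sketch runs standalone.
        byte[] payload = "ping".getBytes();
        try (DatagramSocket sender = new DatagramSocket()) {
            sender.send(new DatagramPacket(payload, payload.length,
                                           lo, sock.getLocalPort()));
        }

        // Listener: copy out of the packet buffer fast and hand off, so the
        // socket's kernel receive buffer drains as quickly as possible.
        byte[] buf = new byte[1500];
        DatagramPacket pkt = new DatagramPacket(buf, buf.length);
        sock.receive(pkt);
        queue.put(Arrays.copyOf(pkt.getData(), pkt.getLength()));

        worker.join();
        sock.close();
    }
}
```

The copy (`Arrays.copyOf`) matters: the listener reuses its receive buffer, so handing the raw buffer to the worker would race with the next `receive()`.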

醉殇 2024-09-03 19:31:14


TCP's flow control will help you reduce dropped packets. It's fault-tolerant and makes sure that packets arrive in sequence.
