boost::asio sends data faster than it can be received over TCP, or: how to disable buffering
I have created a client/server program: the client starts
an instance of a Writer class and the server starts an instance of
a Reader class. The Writer then writes DATA_SIZE bytes of data
asynchronously to the Reader every USLEEP milliseconds.
Every successive async_write request by the Writer is issued
only after the "on write" handler from the previous request has
been called.
The problem is, if the Writer (client) is writing more data into the
socket than the Reader (server) is capable of receiving, this seems
to be the behaviour:
- The Writer starts writing into (I think) a system buffer, and even
  though the data has not yet been received by the Reader, it keeps
  calling the "on write" handler without an error.
- When the buffer is full, boost::asio stops firing the "on write"
  handler until the buffer drains.
- In the meantime, the Reader is still receiving small chunks of data.
- The fact that the Reader keeps receiving bytes after I close
  the Writer program seems to confirm this theory.
What I need to achieve is to prevent this buffering, because the
data needs to be "real time" (as much as possible).
I'm guessing I need to use some combination of the socket options that
asio offers, like no_delay or send_buffer_size, but I'm just guessing
here as I haven't had success experimenting with these.
I suspect the first solution that comes to mind is to use
UDP instead of TCP. That will indeed happen, as I'll need to switch
to UDP for other reasons as well in the near future, but I would
first like to find out how to do it with TCP, just for the sake
of having it straight in my head in case a similar problem
comes up some other day in the future.
NOTE1: Before I started experimenting with asynchronous operations in the asio library, I had implemented this same scenario using threads, locks and asio::sockets, and did not experience such buffering at that time. I had to switch to the asynchronous API because asio does not seem to allow timed interruptions of synchronous calls.
NOTE2: Here is a working example that demonstrates the problem: http://pastie.org/3122025
EDIT: I've done one more test. In my NOTE1 I mentioned that when I was using asio::iosockets I did not experience this buffering, so I wanted to be sure and created this test: http://pastie.org/3125452 It turns out that the buffering is there even with asio::iosockets, so something else must have caused it to go smoothly back then, possibly a lower FPS.
TCP/IP is definitely geared toward maximizing throughput, as the intention of most network applications is to transfer data between hosts. In such scenarios it is expected that a transfer of N bytes will take T seconds, and clearly it doesn't matter if the receiver is a little slow to process the data. In fact, as you noticed, the TCP/IP protocol implements a sliding window, which allows the sender to buffer some data so that it is always ready to be sent, but leaves the ultimate throttling control up to the receiver. The receiver can go full speed, pace itself or even pause the transmission.
If you don't need throughput and instead want to guarantee that the data your sender is transmitting is as close to real time as possible, then you need to make sure the sender doesn't write the next packet until it receives an acknowledgement from the receiver that the previous data packet has been processed. So instead of blindly sending packet after packet until you are blocked, define a message structure for control messages to be sent from the receiver back to the sender.
Obviously with this approach your trade-off is that each sent packet is closer to the sender's real time, but you are limiting how much data you can transfer, while slightly increasing the total bandwidth used by your protocol (i.e. the additional control messages). Also keep in mind that "close to real time" is relative, because you will still face delays in the network as well as the receiver's ability to process data. So you might also take a look at the design constraints of your specific application to determine how "close" you really need to be.
If you need to be very close, but at the same time you don't care if packets are lost because old packet data is superseded by new data, then UDP/IP might be a better alternative. However, a) if you have reliable delivery requirements, you might end up reinventing a portion of TCP/IP's wheel, b) keep in mind that certain networks (corporate firewalls) tend to block UDP/IP while allowing TCP/IP traffic, and c) even UDP/IP won't be exactly real time.