Netty Adaptive UDP Multicast Support
Newbies having trouble processing a UDP video stream using Netty 3.2.4. On different machines we see dropped bytes etc. using Netty. We have a little counter after Netty gets the bytes in, to see how many bytes are received. The variance is more than what UDP unreliability alone would account for. In our case, we also save the bytes to a file to play back the video. Playing the video in VLC really illustrates the dropped bytes. (Packet sizes being sent were around 1000 bytes.)
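For context, the byte counter is just a handler early in the pipeline. A minimal sketch of what we mean (the class and field names here are illustrative, not our actual project code) would be something like:

    import java.util.concurrent.atomic.AtomicLong;

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.MessageEvent;
    import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

    // Counts the bytes of every datagram that reaches the pipeline so the total
    // can be compared against what the sender reports.
    public class ByteCounterHandler extends SimpleChannelUpstreamHandler {
        private final AtomicLong totalBytes = new AtomicLong();

        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
            ChannelBuffer buf = (ChannelBuffer) e.getMessage();
            totalBytes.addAndGet(buf.readableBytes());
            super.messageReceived(ctx, e); // pass the datagram along unchanged
        }

        public long getTotalBytes() {
            return totalBytes.get();
        }
    }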
Questions
- Are our assumptions about the Netty API correct, i.e. that the AdaptiveReceiveBufferSizePredictor cannot be used for a UDP stream listener?
- Is there a better explanation of the behavior we're seeing?
- Is there a better solution? Is there a way to use an adaptive predictor with UDP?
What We've Tried
...
DatagramChannelFactory datagramChannelFactory =
        new OioDatagramChannelFactory(Executors.newCachedThreadPool());
connectionlessBootstrap = new ConnectionlessBootstrap(datagramChannelFactory);
...
datagramChannel = (DatagramChannel) connectionlessBootstrap.bind(
        new InetSocketAddress(multicastPort));
datagramChannel.getConfig().setReceiveBufferSizePredictor(
        new FixedReceiveBufferSizePredictor(2 * 1024 * 1024));
...
From documentation and Google searches, I think the correct way to do this is to use an OioDatagramChannelFactory instead of a NioDatagramChannelFactory.
Additionally, while I couldn't find it explicitly stated, you can only use a FixedReceiveBufferSizePredictor with the OioDatagramChannelFactory (as opposed to the AdaptiveReceiveBufferSizePredictor). We found this out by looking at the source code and realizing that the AdaptiveReceiveBufferSizePredictor's previousReceiveBufferSize() method was not being called from the OioDatagramWorker class (whereas it was called from NioDatagramWorker).
So, we originally set the FixedReceiveBufferSizePredictor to 2*1024*1024 bytes.
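For comparison, the NIO transport does consult the adaptive predictor, so a setup like the sketch below should adapt the read-buffer size automatically. The catch, as far as we can tell, is that NioDatagramChannel in Netty 3.x does not support joining multicast groups (which is why OIO seems to be recommended for this use case), so this only helps if the stream can be received on a plain unicast socket. The predictor bounds below are illustrative, and multicastPort is the same field used in the excerpt above.

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;

    import org.jboss.netty.bootstrap.ConnectionlessBootstrap;
    import org.jboss.netty.channel.AdaptiveReceiveBufferSizePredictor;
    import org.jboss.netty.channel.socket.DatagramChannel;
    import org.jboss.netty.channel.socket.nio.NioDatagramChannelFactory;

    // Sketch: NIO datagram channel with an adaptive read-buffer predictor.
    // NioDatagramWorker feeds previousReceiveBufferSize(), so the predictor can
    // grow/shrink the per-read buffer toward the actual datagram size.
    ConnectionlessBootstrap nioBootstrap = new ConnectionlessBootstrap(
            new NioDatagramChannelFactory(Executors.newCachedThreadPool()));
    DatagramChannel nioChannel =
            (DatagramChannel) nioBootstrap.bind(new InetSocketAddress(multicastPort));
    nioChannel.getConfig().setReceiveBufferSizePredictor(
            new AdaptiveReceiveBufferSizePredictor(64, 1500, 65536)); // min, initial, max (illustrative)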
Observed Behavior
Running on different machines (different processing power) we're seeing a different number of bytes being taken in by Netty. In our case, we are streaming video via UDP and we are able to use the playback of the streamed bytes to diagnose the quality of the bytes read in (packet sizes being sent were around 1000 bytes).
We then experimented with different buffer sizes and found that 1024*1024 seemed to make things work better... but we really have no clue why.
In looking at how FixedReceiveBufferSizePredictor works, we realized that it simply creates a new buffer each time a packet comes in. In our case it would create a new buffer of 2*1024*1024 bytes whether the packet was 1000 bytes or 3 MB. Our packets were only 1000 bytes, so we didn't think that was our problem. Could any of the logic in here be causing a performance problem? For example, the creation of the buffer each time a packet comes in?
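To illustrate the point, the fixed predictor is essentially the following (a paraphrased sketch of the 3.2 behavior, not a verbatim copy of the source): it hands back the same size on every read and ignores feedback about how big the last datagram actually was, and the worker then allocates a fresh buffer of that size for each incoming packet.

    import org.jboss.netty.channel.ReceiveBufferSizePredictor;

    // Roughly what FixedReceiveBufferSizePredictor does (paraphrased sketch):
    // the same size is returned for every read; the actual datagram size is ignored.
    public class FixedPredictorSketch implements ReceiveBufferSizePredictor {
        private final int bufferSize;

        public FixedPredictorSketch(int bufferSize) {
            if (bufferSize <= 0) {
                throw new IllegalArgumentException("bufferSize must be positive: " + bufferSize);
            }
            this.bufferSize = bufferSize;
        }

        public int nextReceiveBufferSize() {
            return bufferSize; // e.g. always 2*1024*1024, even for a 1000-byte packet
        }

        public void previousReceiveBufferSize(int previousReceiveBufferSize) {
            // ignored - the prediction never adapts
        }
    }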
Our Workaround
We then thought about ways to make the buffer size dynamic but realized we couldn't use the AdaptiveReceiveBufferSizePredictor as noted above. We experimented and created our own MyAdaptiveReceiveBufferSizePredictor along with the accompanying MyOioDatagramChannelFactory, *Channel, *ChannelFactory, *PipelineSink, *Worker classes (that eventually call the MyAdaptiveReceiveBufferSizePredictor). The predictor simply doubles or halves the buffer size based on the size of the last packet. This seemed to improve things.
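Our actual predictor isn't shown here, but the idea is roughly the sketch below (the field names and the exact doubling/halving thresholds are illustrative, not our production code): it implements Netty's ReceiveBufferSizePredictor and resizes based on the size reported for the previous datagram.

    import org.jboss.netty.channel.ReceiveBufferSizePredictor;

    // Illustrative sketch of a "MyAdaptiveReceiveBufferSizePredictor": double the
    // next buffer when the last datagram filled the current one, halve it when the
    // last datagram used less than half, and clamp to [minimum, maximum].
    public class MyAdaptiveReceiveBufferSizePredictor implements ReceiveBufferSizePredictor {
        private final int minimum;
        private final int maximum;
        private int nextSize;

        public MyAdaptiveReceiveBufferSizePredictor(int minimum, int initial, int maximum) {
            this.minimum = minimum;
            this.maximum = maximum;
            this.nextSize = initial;
        }

        public int nextReceiveBufferSize() {
            return nextSize;
        }

        public void previousReceiveBufferSize(int previousReceiveBufferSize) {
            if (previousReceiveBufferSize >= nextSize) {
                // buffer was (nearly) full: grow for the next read
                nextSize = Math.min(maximum, nextSize * 2);
            } else if (previousReceiveBufferSize < nextSize / 2) {
                // buffer was mostly empty: shrink for the next read
                nextSize = Math.max(minimum, nextSize / 2);
            }
        }
    }

For this to do anything on the OIO transport, the custom *Worker has to report each datagram's size via previousReceiveBufferSize(), which is exactly the plumbing the My* classes add.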
1 Answer
Not quite sure what causes your performance issues, but I found this thread.
It might be caused by the creation of ChannelBuffers for each incoming packet, in which case you'll have to wait for Milestone 4.0.0.