How to write a high-performance Netty client

Published 2024-12-20 04:14:00

I want an extremely efficient TCP client to send Google protocol buffer messages. I have been using the Netty library to develop a server/client.

In tests the server seems to be able to handle up to 500k transactions per second without too many problems, but the client tends to peak around 180k transactions per second.

I have based my client on the examples provided in the Netty documentation, but the difference is that I just want to send the message and forget; I don't need a response (which most of the examples get). Is there any way to optimize my client so that I can achieve a higher TPS?

Should my client maintain multiple channels, or should I be able to achieve a higher throughput than this with a single channel?
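
For reference, here is a minimal sketch of the kind of fire-and-forget client described above, assuming Netty 3.x (the API used in the answers below); MyMessage, the host/port and the message count are placeholders rather than the original code:

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;

    import org.jboss.netty.bootstrap.ClientBootstrap;
    import org.jboss.netty.channel.Channel;
    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.channel.ChannelPipelineFactory;
    import org.jboss.netty.channel.Channels;
    import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
    import org.jboss.netty.handler.codec.protobuf.ProtobufEncoder;
    import org.jboss.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

    public class FireAndForgetClient {
        public static void main(String[] args) {
            ClientBootstrap bootstrap = new ClientBootstrap(
                    new NioClientSocketChannelFactory(
                            Executors.newCachedThreadPool(),
                            Executors.newCachedThreadPool()));

            // Outbound-only pipeline: varint length prefix + protobuf encoding.
            // No decoder is added because the client never reads a response.
            bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
                public ChannelPipeline getPipeline() {
                    return Channels.pipeline(
                            new ProtobufVarint32LengthFieldPrepender(),
                            new ProtobufEncoder());
                }
            });

            Channel channel = bootstrap.connect(new InetSocketAddress("localhost", 8080))
                    .awaitUninterruptibly()
                    .getChannel();
            channel.setReadable(false); // fire-and-forget: never read from this channel

            for (int i = 0; i < 1000000; i++) {
                // MyMessage is a placeholder for a generated protobuf message type.
                // A real client should also check channel.isWritable() to avoid
                // queuing messages faster than the socket can drain them.
                channel.write(MyMessage.newBuilder().setId(i).build());
            }

            channel.close().awaitUninterruptibly();
            bootstrap.releaseExternalResources();
        }
    }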

Comments (2)

拔了角的鹿 2024-12-27 04:14:00

1) If the client is only interested in sending, not in receiving, you can disable reading from the channel, as shown below:

    channel.setReadable(false);

2) You can increase throughput very easily by having multiple client channels per client, and it also scales well; a minimal pool sketch follows the tweaks below.

3) And you can apply the following tweaks to improve performance in general (for reads/writes):

  • It's better to have a SEDA-like pipeline by adding an ExecutionHandler backed by an OrderedMemoryAwareThreadPoolExecutor (with the per-channel and total memory limits tuned to optimal values); a construction sketch for these handlers follows this list.

    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline() throws Exception {
            return Channels.pipeline(
                    executionHandler1,//sharable
                    new MessageDecoderHandler(),
                    new MessageEncoderHandler(),
                    executionHandler2,//sharable
                    new BusinessLogicHandler1(),
                    new BusinessLogicHandler2());
        }
    });
    
  • Set the channel's writeBufferHighWaterMark to an optimal value (make sure that a large value does not cause congestion)

    bootstrap.setOption("writeBufferHighWaterMark", 10 * 64 * 1024);

  • Set the socket send/receive buffer sizes (SO_SNDBUF / SO_RCVBUF)

    bootstrap.setOption("sendBufferSize", 1048576);
    bootstrap.setOption("receiveBufferSize", 1048576);

  • Enable TCP_NODELAY (which disables Nagle's algorithm)

    bootstrap.setOption("tcpNoDelay", true);

阳光下的泡沫是彩色的 2024-12-27 04:14:00

I am not sure that "tcpNoDelay" helps to improve throughput. The delay (Nagle's algorithm) is there to improve performance by coalescing small writes into fewer packets. Nonetheless, I tried it and saw that throughput actually fell by more than 90%.
