How to create multiple TCP connections in gRPC
Context: I have an application which retrieves data from a service. iperf shows that the download throughput for one single TCP connection is only 20 MBps, but the requirement is 100 MBps.
I tried iperf with 10 TCP connections (iperf -c <IP port> -P 10), and it shows the bandwidth could be 100 MBps. So I want to use multiple TCP connections.
We use grpc to transfer data. From some googling and Stack Overflow posts (like https://github.com/grpc/grpc/issues/21332), I know:
- grpc leverages HTTP/2, which natively supports IO multiplexing, but the limitation for my application is TCP bandwidth, so I have to use multiple TCP connections
- a channel is the abstraction which represents a TCP connection; a stub is for client use, so it should be possible to have multiple stubs sharing one single channel
My way of doing it is:

grpc::ChannelArguments args;
args.SetInt(GRPC_ARG_USE_LOCAL_SUBCHANNEL_POOL, 1);
std::shared_ptr<grpc::Channel> channel(grpc::CreateCustomChannel(
    ip_port, grpc::InsecureChannelCredentials(), args));
stub_ = NewStub(channel);
But from ss, I only see a few TCP connections created:

> ss -antp | grep 443 | wc -l
3

And not all of them are from my program.
My question is: how do I create multiple TCP connections with grpc in C++?
This goes more to the desire behind the question, which is achieving 100 MB/s of TCP throughput. One of the many limits to the performance of a TCP connection is:
Throughput <= WindowSize / RoundTripTime
If you see increased performance in iperf with parallel connections, it is a strong indication that your single-stream performance is "window limited." If you control or influence both ends of the connection (being able to run iperf suggests that is the case), you can raise the limits on the TCP window size and perhaps get your single-stream throughput to where you want it.
The most likely sysctls you would want to tune would be:
net.ipv4.tcp_wmem
net.ipv4.tcp_rmem
You need to tune on both sides.
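A sketch of that tuning; the 16 MB maximum here is an illustrative value, not a recommendation for your link, and the commands must be run on both client and server:

```shell
# Current values: each sysctl holds "min default max" in bytes.
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# Raise the maximum (third) value; the min/default values shown
# here are the common Linux defaults, left unchanged.
sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 16777216"
sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
```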
The "best" thing to do is to take the RoundTripTime (say measured via ping) and multiply that (in units of seconds) by the desired bandwidth. That will be how much TCP window you need. To get that with Linux, make certain the third value of each of those two sysctls is at least twice that value.
Or you can just wing it and make those third values ten times what they are now. Why ten? Because you said you got the desired throughput with ten parallel streams, which suggests ten times the aggregate TCP Window got you what you wanted.