Optimizing Linux sockets

Posted 2024-12-12 04:01:15


I'd like to ask some questions about optimizing Linux sockets.
I am trying to build a multithreaded load balancer using Boost and plain Linux sockets.
The load balancer works in these simple steps:

  1. A request comes in, the TCP listener accepts a socket (call it clientSocket) and creates a new thread
  2. When the thread starts, it creates a back-end socket (call it serverSocket) to the back-end server (service)
  3. Once serverSocket is established, I spawn a new thread that reads from serverSocket and sends the data/response to clientSocket
  4. In the main thread, I call a function that reads from clientSocket and sends to serverSocket
  5. When either of these two sockets becomes invalid, the worker closes both sockets and dies

I also use the Waitset from the ting library, which is built on epoll, to keep recv in blocking mode, so that the call waits until an event occurs and then reads the data from the socket.
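For reference, here is a minimal sketch (not the actual code from the post) of the per-connection forwarding loop described in the steps above, using plain blocking recv()/send() rather than the ting Waitset; the names clientSocket and serverSocket follow the post, everything else is illustrative.

```cpp
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

// Copy bytes from one socket to the other until either end closes or errors.
// (A real proxy would also handle partial send() results.)
static void pump(int from, int to) {
    char buf[4096];
    for (;;) {
        ssize_t n = recv(from, buf, sizeof(buf), 0);   // blocks until data or EOF
        if (n <= 0) break;                             // peer closed or error
        if (send(to, buf, static_cast<size_t>(n), 0) != n) break;
    }
    // Shutting down both ends makes the sibling thread's recv() return, so it exits too.
    shutdown(from, SHUT_RDWR);
    shutdown(to, SHUT_RDWR);
}

// Per-connection worker: one extra thread for the server->client direction,
// while the current thread handles client->server (steps 3 and 4 above).
static void handleConnection(int clientSocket, int serverSocket) {
    std::thread backward(pump, serverSocket, clientSocket);
    pump(clientSocket, serverSocket);
    backward.join();
    close(clientSocket);
    close(serverSocket);
}
```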

The problem is that when I test the load balancer with ab (-n 10000 -c 100 -k), the result is very disappointing: I only get ~1600 tps. I tried logging the time taken for each request, and that looked fine; each round trip took < 1000 microseconds (1 millisecond).

But when I log the intervals between incoming requests, the next request is processed about > 5000 microseconds (5 milliseconds) after the current one is received. Can anyone suggest a better way to optimize the socket handling here? Thank you.


Answers (2)

冰之心 2024-12-19 04:01:15


You are making this overly complex. A thread per connection does not scale beyond trivial examples; read about the C10K problem for more details.

I suggest reading about the Boost.Asio library for your load balancer. It uses epoll(4) on Linux systems for asynchronous event demultiplexing and will scale much better than a thread per connection.
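For illustration, here is a minimal sketch of what the accept path might look like with Boost.Asio's asynchronous API (io_context requires Boost 1.66 or newer; older releases call it io_service). The session/forwarding logic is omitted; the point is that a single reactor thread replaces the per-connection accept thread.

```cpp
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

void do_accept(tcp::acceptor& acceptor) {
    acceptor.async_accept(
        [&acceptor](const boost::system::error_code& ec, tcp::socket client) {
            if (!ec) {
                // Hand `client` to a session/forwarding object here; all further
                // I/O on it would use async_read_some/async_write, not blocking calls.
            }
            do_accept(acceptor);   // keep accepting; no thread is spawned per connection
        });
}

int main() {
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 8080));
    do_accept(acceptor);
    io.run();   // one thread drives all connections, epoll(4) underneath on Linux
}
```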

梨涡 2024-12-19 04:01:15


Well, the problem lies in the fact that you are creating one thread per connection. That won't scale well. So why not create a single thread that just monitors incoming connection requests and the in/out/hup events with epoll? That thread does nothing else, which keeps it simple and efficient. When data is available, it passes the work to worker threads, which do the actual processing. You can connect the event thread and the worker threads (a thread pool created at initialization) with in/out queues.
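A rough sketch of that event thread might look like the following; listenFd and workQueue are hypothetical names, and the worker pool that drains the queue is not shown.

```cpp
#include <sys/epoll.h>
#include <sys/socket.h>
#include <queue>
#include <mutex>

std::queue<int> workQueue;   // fds with data ready, drained by worker threads
std::mutex workMutex;

void eventLoop(int listenFd) {
    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = listenFd;
    epoll_ctl(ep, EPOLL_CTL_ADD, listenFd, &ev);

    epoll_event events[64];
    for (;;) {
        int n = epoll_wait(ep, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == listenFd) {
                // New connection: register it for in/hup events.
                int client = accept(listenFd, nullptr, nullptr);
                epoll_event cev{};
                cev.events = EPOLLIN | EPOLLRDHUP;
                cev.data.fd = client;
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
            } else {
                // Data (or hang-up) on an existing connection: hand the fd
                // to the worker pool instead of reading it here.
                std::lock_guard<std::mutex> lock(workMutex);
                workQueue.push(fd);
            }
        }
    }
}
```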

If this is still not efficient enough when you have a lot of connections, you can balance the connections across multiple processes. The model then becomes: fork several child processes during initialization and pass the listening socket to each of them. When a connection request comes in, each child process has the chance to accept it. That is real load balancing across processes.
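A sketch of that multi-process variant, assuming the children simply inherit the listening fd across fork(); error handling and the per-connection work are omitted.

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main() {
    int listenFd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);
    bind(listenFd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listenFd, 128);

    for (int i = 0; i < 4; ++i) {           // e.g. one child per core
        if (fork() == 0) {                   // child inherits listenFd
            for (;;) {
                int client = accept(listenFd, nullptr, nullptr);
                if (client < 0) continue;
                // ... run this child's epoll/worker loop on `client` here ...
                close(client);
            }
        }
    }
    pause();                                 // parent just waits
}
```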

With the model above, 20,000+ connections on one server are not a problem. Hope that helps :)
