Ruby concurrency: non-blocking I/O vs. threads

I am playing around with concurrency in Ruby (1.9.3-p0), and have created a very simple, I/O-heavy proxy task. First, I tried the non-blocking approach:

require 'rack'
require 'rack/fiber_pool'
require 'em-http'
require 'em-synchrony'
require 'em-synchrony/em-http'

proxy = lambda {|*|
  result = EM::Synchrony.sync EventMachine::HttpRequest.new('http://google.com').get
  [200, {}, [result.response]]
}

use Rack::FiberPool, :size => 1000
run proxy

=begin
$ thin -p 3000 -e production -R rack-synchrony.ru start
>> Thin web server (v1.3.1 codename Triple Espresso)

$ ab -c100 -n100 http://localhost:3000/
Concurrency Level:      100
Time taken for tests:   5.602 seconds
HTML transferred:       21900 bytes
Requests per second:    17.85 [#/sec] (mean)
Time per request:       5602.174 [ms] (mean)
=end

Hmm, I thought I must be doing something wrong. An average request time of 5.6s for a task where we are mostly waiting for I/O? I tried another one:

require 'sinatra'
require 'sinatra/synchrony'
require 'em-synchrony/em-http'

get '/' do
  EM::HttpRequest.new("http://google.com").get.response
end

=begin
$ ruby sinatra-synchrony.rb -p 3000 -e production
== Sinatra/1.3.1 has taken the stage on 3000 for production with backup from Thin
>> Thin web server (v1.3.1 codename Triple Espresso)

$ ab -c100 -n100 http://localhost:3000/
Concurrency Level:      100
Time taken for tests:   5.476 seconds
HTML transferred:       21900 bytes
Requests per second:    18.26 [#/sec] (mean)
Time per request:       5475.756 [ms] (mean)
=end

Hmm, a little better, but not what I would call a success. Finally, I tried a threaded implementation:

require 'rack'
require 'excon'

proxy = lambda {|*|
  result = Excon.get('http://google.com')
  [200, {}, [result.body]]
}
run proxy

=begin
$ thin -p 3000 -e production -R rack-threaded.ru --threaded --no-epoll start
>> Thin web server (v1.3.1 codename Triple Espresso)

$ ab -c100 -n100 http://localhost:3000/
Concurrency Level:      100
Time taken for tests:   2.014 seconds
HTML transferred:       21900 bytes
Requests per second:    49.65 [#/sec] (mean)
Time per request:       2014.005 [ms] (mean)
=end

That was really, really surprising. Am I missing something here? Why is EM performing so badly here? Is there some tuning I need to do? I tried various combinations (Unicorn, several Rainbows configurations, etc), but none of them came even close to the simple, old I/O-blocking threading.

Ideas, comments and - obviously - suggestions for better implementations are very welcome.

Comments (1)

别靠近我心:

See how your "Time per request" exactly equals total "Time taken for tests"? This is a reporting arithmetic artifact due to your request count (-n) being equal to your concurrency level (-c). The mean-time is the total-time*concurrency/num-requests. So the reported mean when -n == -c will be the time of the longest request. You should conduct your ab runs with -n > -c by several factors to get reasonable measures.
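To make the arithmetic concrete, here is a minimal Ruby sketch (not part of the original answer) that plugs the figures from the first benchmark above into that formula, and also derives the "across all concurrent requests" figure, which, judging from the ab outputs below, is simply the total time divided by the number of requests:

# Numbers taken from the first benchmark above:
# Time taken for tests = 5.602 s, -c 100, -n 100
total_time_s = 5.602
concurrency  = 100
requests     = 100

mean_ms   = total_time_s * 1000 * concurrency / requests  # total * concurrency / requests
across_ms = total_time_s * 1000 / requests                # total / requests

puts "Time per request (mean):                  #{mean_ms} ms"   # ~5602 ms, as reported
puts "Time per request (across all concurrent): #{across_ms} ms" # ~56 ms

# With -n == -c there is only one concurrent batch, so the "mean" collapses to
# the wall-clock time of that batch, i.e. roughly the slowest request. Running
# with, say, -n 1000 -c 100 spreads the total over many batches and yields a
# meaningful per-request figure.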

You seem to be using an old version of ab as a relatively current one reports far more detailed results by default. Running directly against google I show similar total-time == mean time when -n == -c, and get more reasonable numbers when -n > -c. You really want to look at the req/sec, mean across all concurrent requests, and the final service level breakdown to get a better understanding.

$ ab -c50 -n50 http://google.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking google.com (be patient).....done


Server Software:        gws
Server Hostname:        google.com
Server Port:            80

Document Path:          /
Document Length:        219 bytes

Concurrency Level:      50
Time taken for tests:   0.023 seconds           <<== note same as below
Complete requests:      50
Failed requests:        0
Write errors:           0
Non-2xx responses:      50
Total transferred:      27000 bytes
HTML transferred:       10950 bytes
Requests per second:    2220.05 [#/sec] (mean)
Time per request:       22.522 [ms] (mean)      <<== note same as above
Time per request:       0.450 [ms] (mean, across all concurrent requests)
Transfer rate:          1170.73 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   0.6      3       3
Processing:     8    9   2.1      9      19
Waiting:        8    9   2.1      9      19
Total:         11   12   2.1     11      22
WARNING: The median and mean for the initial connection time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%     11
  66%     12
  75%     12
  80%     12
  90%     12
  95%     12
  98%     22
  99%     22
 100%     22 (longest request)        <<== note same as total and mean above


$ ab -c50 -n500 http://google.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking google.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests


Server Software:        gws
Server Hostname:        google.com
Server Port:            80

Document Path:          /
Document Length:        219 bytes

Concurrency Level:      50
Time taken for tests:   0.110 seconds
Complete requests:      500
Failed requests:        0
Write errors:           0
Non-2xx responses:      500
Total transferred:      270000 bytes
HTML transferred:       109500 bytes
Requests per second:    4554.31 [#/sec] (mean)
Time per request:       10.979 [ms] (mean)
Time per request:       0.220 [ms] (mean, across all concurrent requests)
Transfer rate:          2401.69 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    1   0.7      1       3
Processing:     8    9   0.7      9      13
Waiting:        8    9   0.7      9      13
Total:          9   10   1.3     10      16

Percentage of the requests served within a certain time (ms)
  50%     10
  66%     11
  75%     11
  80%     12
  90%     12
  95%     13
  98%     14
  99%     15
 100%     16 (longest request)