Iterating asynchronously over a request's response with Thin and Sinatra
If your response in Sinatra returns an 'eachable' object, Sinatra's event loop will 'each' your result and yield the results in a streaming fashion as the HTTP response. However, if there are concurrent requests to Sinatra, it will iterate through all the elements of one response before handling another request. If we have a cursor to the results of some DB query, that means we have to wait for all the data to be available before handling a concurrent query.
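For context, a Rack response body only needs to respond to `each`; the server calls `each` and writes every yielded string to the socket as it is produced. A minimal standalone sketch of such a streaming body (the class name and chunk contents are illustrative, not from the original post):

```ruby
# A Rack-compatible streaming body: any object that responds to #each.
# Thin/Sinatra call #each and send each yielded chunk to the client as
# it is produced. Class name and data are illustrative only.
class SlowBody
  def initialize(chunks)
    @chunks = chunks # e.g. rows coming off a DB cursor
  end

  # Rack calls this; each yielded string becomes part of the response.
  def each
    @chunks.each do |chunk|
      yield chunk
    end
  end
end

body = SlowBody.new(["row1\n", "row2\n", "row3\n"])
received = []
body.each { |c| received << c } # simulate the server draining the body
```

The problem described above is that Thin drains one such body to completion before touching the next request.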
I've looked at the async-sinatra gem and http://macournoyer.com/blog/2009/06/04/pusher-and-async-with-thin/, thinking these would solve my problem, but I've tried out this example:
require 'sinatra/async'

class AsyncTest < Sinatra::Base
  register Sinatra::Async

  aget '/' do
    body "hello async"
  end

  aget '/delay/:n' do |n|
    EM.add_timer(n.to_i) { body { "delayed for #{n} seconds" } }
  end
end
and the /delay/5 request doesn't work concurrently as I expect it to, i.e. I make 3 requests concurrently and Chrome's debugger notes the response times as roughly 5, 10, and 15 seconds.
Am I missing some setup or is there another way to tell Sinatra/Thin to handle requests in a concurrent manner?
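(For reference, async-sinatra needs an EventMachine-based server such as Thin. One way to boot the class above is a rackup file; the filename async_test.rb is an assumption, not from the original post:)

```ruby
# config.ru -- assumes the example class is saved as async_test.rb
require './async_test'
run AsyncTest
```

Then start it with Thin, e.g. `thin start -R config.ru -p 3000`.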
Update: Here's another wrench in this (or possibly it clears things up): running curl -i http://localhost:3000/delay/5 concurrently has the correct behavior (2 requests each come back in ~5 seconds). Running ab -c 10 -n 50 http://localhost:3000/delay/5 (the Apache benchmark utility) also returns something reasonable for the total time (~25 seconds). Firefox exhibits the same behavior as Chrome. What are the browsers doing differently from the command-line utilities?
2 Answers
So in the end, I found out that the example did indeed work and I could eventually get Sinatra to stream each-able results concurrently, primarily using the EM.defer idea from the Pusher and Async page. curl and Apache benchmarking confirmed that this was working. The reason it didn't work in the browser is that browsers limit the number of connections to the same URL. I was aware of there being a limit on concurrent connections to a single domain (also a low number), but not that (seemingly) all connections to a single URI are serialized:
http://maillist.caucho.com/pipermail/resin-interest/2009-August/003998.html
I don't know if this is configurable, I only see domain-wide configuration in Firefox, but that was the issue.
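EM.defer runs a blocking operation on EventMachine's internal thread pool and hands the result back to the reactor thread via a callback, so a slow DB cursor no longer stalls other requests. Its effect can be shown without EventMachine at all; this standalone plain-Thread sketch (all names illustrative) demonstrates why offloaded blocking work overlaps instead of serializing:

```ruby
# EM.defer's core idea: push blocking work onto worker threads so the
# reactor (here, the main thread) stays free. Two "slow queries" then
# overlap instead of running back to back. Names are illustrative.

def slow_query(seconds)
  sleep(seconds) # stands in for a blocking DB cursor
  "delayed for #{seconds} seconds"
end

start   = Time.now
results = []
mutex   = Mutex.new

# Offload each blocking call, as EM.defer would:
workers = 2.times.map do
  Thread.new do
    r = slow_query(0.2)
    mutex.synchronize { results << r }
  end
end
workers.each(&:join)

elapsed = Time.now - start
# Both 0.2 s "queries" overlapped, so total wall time is ~0.2 s, not 0.4 s.
```

In the async-sinatra route itself, the equivalent is EM.defer(operation, callback), where the callback runs back on the reactor thread and calls body with the result.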
When you are about to handle the response of the object, do this:

and if you don't need to, do not wait for this child process to end; use:

It's a simple way to handle multiple requests. However, I am not sure what ORM you may be using for those DB queries, but you could run into table/row-level locking problems with multiple processes trying to hit the db, if that is what you mean when you say handling requests...
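The code snippets in the answer above did not survive the copy. Judging only from the surrounding text ("child process", "do not wait for this child process to end"), it was likely suggesting forking a worker per request and detaching it; the following is a hedged sketch of that general pattern, not the answer's original code:

```ruby
# Illustrative reconstruction of the fork-per-request idea (POSIX only;
# fork is unavailable on Windows). The sleep stands in for handling the
# response; none of this is the answer's original code.

start = Time.now

pid = fork do
  sleep(0.2) # handle the response in the child
end

# "do not wait for this child process to end": detach spawns a reaper
# thread so the parent neither blocks nor leaves a zombie behind.
waiter = Process.detach(pid)

elapsed = Time.now - start # parent returns immediately
```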