How does an HTTP client associate an HTTP response with its request (with Netty), or in general?
Is an HTTP endpoint supposed to respond to requests from a particular client in the order they are received?
What if that doesn't make sense, as in the case of requests handled by a cluster behind a proxy, or requests handled with NIO where one request finishes faster than another?
Is there a standard way of associating a unique ID with each HTTP request so that it can be matched to its response? How is this handled in clients like Apache HttpComponents HttpClient or curl?
The question comes down to the following case:
Suppose I am downloading a file from a server and the request has not yet finished. Is the client capable of completing other requests on the same keep-alive connection?
2 Answers
Whenever a TCP connection is opened, the connection is recognized by the source and destination ports and IP addresses. So if I connect to www.google.com on destination port 80 (default for HTTP), I need a free source port which the OS will generate.
The reply of the web server is then sent to the source port (and IP). This is also how NAT works, remembering which source port belongs to which internal IP address (and vice versa for incoming connections).
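The 4-tuple described above is easy to observe from code. Here is a minimal sketch using Python's standard `socket` module, with a local listening socket standing in for the web server so the example is self-contained:

```python
import socket

# A local listener stands in for the web server; port 0 asks the OS
# to pick any free port for it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
dst_ip, dst_port = server.getsockname()

# Connecting makes the OS allocate a free ephemeral *source* port
# on the client side.
client = socket.create_connection((dst_ip, dst_port))
src_ip, src_port = client.getsockname()

# The TCP connection is identified by this 4-tuple; the server's
# replies are routed back to (src_ip, src_port).
print((src_ip, src_port, dst_ip, dst_port))

client.close()
server.close()
```

The OS-chosen ephemeral source port is what lets replies (and NAT) find their way back to the right client socket.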
As for your edit: no, a single HTTP connection can only execute one command (GET/POST/etc.) at a time. If you send another command while you are still retrieving data from a previously issued command, the results may vary per client and server implementation. I would guess that Apache, for example, transmits the result of the second request after the data of the first request has been sent.
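This one-command-at-a-time behaviour can be seen with Python's standard `http.client`, which reuses a keep-alive connection only once the previous response has been fully read. A sketch against a throwaway local server (the handler and payloads are invented for the example):

```python
import http.client
import http.server
import threading

# Minimal local server so the example is self-contained.
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # enables keep-alive
    def do_GET(self):
        body = b"hello from " + self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Two sequential requests on ONE keep-alive connection: the client must
# fully read the first response before it may send the second request.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/first")
first = conn.getresponse().read()    # drain response 1 before request 2
conn.request("GET", "/second")       # reuses the same TCP connection
second = conn.getresponse().read()
print(first, second)

conn.close()
server.shutdown()
```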
I won't re-write CodeCaster's answer because it is very well worded.
In response to your edit - no, it is not. A single persistent HTTP connection can only be used for one request at a time, or things would get very confusing. Because HTTP does not define any form of request/response tracking mechanism, it is simply not possible.
It should be noted that there are other protocols which use a similar message format (conforming to RFC822) that do allow for this (using mechanisms such as SIP's CSeq header), and it would be possible to implement this in a custom HTTP app. However, HTTP does not define any standard mechanism for doing this, and therefore nothing can be done that could be assumed to work everywhere. It would also present a problem with the response for the second message: do you wait for the first response to finish before sending the second, or try to pause the first response while you send the second? How would you communicate this in a way that guarantees messages won't become corrupted?
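For illustration, a CSeq-style correlation scheme in such a custom app might look like the sketch below. The `X-Request-Id` header name is invented here; both client and server would have to agree on it out of band, since HTTP itself attaches no meaning to it:

```python
import uuid

# Hypothetical correlation scheme: the client tags each request with a
# unique ID and the server echoes it back, similar in spirit to SIP's
# CSeq header. HTTP defines no such mechanism, so this only works when
# both ends implement it.
def make_request_headers():
    request_id = str(uuid.uuid4())
    headers = {"X-Request-Id": request_id}   # custom, non-standard header
    return request_id, headers

def make_response_headers(request_headers):
    # Server side: copy the ID back so the client can match the
    # response to the request that caused it.
    return {"X-Request-Id": request_headers["X-Request-Id"]}

rid, req_headers = make_request_headers()
resp_headers = make_response_headers(req_headers)
print(resp_headers["X-Request-Id"] == rid)   # response matched to request
```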
Note also that SIP (usually) operates over UDP, which does not guarantee packet ordering, making the cSeq system more of a necessity.
If you want to send a request to a server while another transaction is still in progress, you will need to create a new connection to the server, and hence a new TCP stream.
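In other words, concurrency comes from opening more connections, not from multiplexing one. A sketch using Python's `http.client` with one thread per connection; the local `ThreadingHTTPServer`, paths, and delay are stand-ins for a real slow server:

```python
import http.client
import http.server
import threading
import time

# Throwaway local server; the 0.2 s delay simulates a transfer in progress.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.2)
        body = b"done " + self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

results = {}

def fetch(path):
    # Each worker opens its OWN connection, i.e. its own TCP stream,
    # so the two requests overlap instead of queuing on one socket.
    conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
    conn.request("GET", path)
    results[path] = conn.getresponse().read()
    conn.close()

workers = [threading.Thread(target=fetch, args=(p,)) for p in ("/a", "/b")]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(results)

server.shutdown()
```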
Facebook did some research into this while they were building their CDN, and they concluded that you can efficiently have 2 or 3 open HTTP streams at any one time, but any more than that increases overall transfer time because of the extra packet overhead cost. I would link to the blog entry if I could find the link...