StreamReader ReadToEnd() after HttpWebRequest EndGetResponse() - most scalable?
I am calling a RESTful web service in the back-end of some ASP.NET pages.
I am using ASP.NET asynchronous pages, so under the hood I am using the methods:
HttpWebRequest BeginGetResponse()
and
HttpWebRequest EndGetResponse()
The response string in my case is always a JSON string. I use the following code to read the entire string:
using (StreamReader sr = new StreamReader(myHttpWebResponse.GetResponseStream()))
{
    myObject.JSONData = sr.ReadToEnd();
}
Is this method OK in terms of scalability? I have seen other code samples that instead retrieve the response data in blocks using Read(). My primary goal is scalability, so this back-end call can be made across many concurrent page hits.
Thanks,
Frank
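For context, a minimal sketch of the Begin/EndGetResponse pattern described above, assuming a hypothetical helper class and placeholder URL (neither is from the original post):

```csharp
using System;
using System.IO;
using System.Net;

// Sketch only: wraps the asynchronous request pattern the question describes.
class JsonFetcher
{
    // Kick off the request; the callback fires when the response headers arrive.
    public IAsyncResult BeginFetch(string url, AsyncCallback callback)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        return request.BeginGetResponse(callback, request);
    }

    // Complete the request and read the whole JSON body, as in the question.
    public string EndFetch(IAsyncResult ar)
    {
        var request = (HttpWebRequest)ar.AsyncState;
        using (var response = (HttpWebResponse)request.EndGetResponse(ar))
        using (var sr = new StreamReader(response.GetResponseStream()))
        {
            return sr.ReadToEnd(); // entire response body is buffered in memory here
        }
    }
}
```

Note the `using` block around the response itself as well as the reader, so the underlying connection is released promptly under concurrent load.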
It depends on what you mean by "scalable". If you're talking about being able to handle bigger and bigger files, I'd say it's not terribly scalable. Since you're using a single ReadToEnd, a huge stream would require the entire stream be read into memory and then acted upon. As the application streams grow in number, complexity and size you're going to find that this will begin to hamper the server's performance to handle requests. You may also find that your application pool will begin to recycle itself DURING your request (if you end up taking that much virtual memory).
If the stream is always going to be smallish and you're only concerned with the number of streams created, I don't see why this wouldn't scale, as long as your streams aren't holding onto open files, database connections, etc.
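The chunked alternative the question mentions can be sketched as follows. This is only an illustration, not the answerer's code; the buffer size is an arbitrary assumption. Note that if you still accumulate every chunk into one string, peak memory is similar to ReadToEnd() — the win comes when each block can be handed to a streaming consumer and discarded:

```csharp
using System.IO;
using System.Text;

static class ChunkedReader
{
    // Read the response stream in fixed-size blocks with Read(),
    // so only one buffer's worth needs to be resident at a time.
    public static string ReadInBlocks(Stream responseStream)
    {
        var sb = new StringBuilder();
        var buffer = new char[8192]; // assumed block size
        using (var reader = new StreamReader(responseStream))
        {
            int read;
            while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Or pass (buffer, read) to a streaming JSON parser
                // instead of appending, to keep memory bounded.
                sb.Append(buffer, 0, read);
            }
        }
        return sb.ToString();
    }
}
```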