Asynchronous web requests in Java?

Published 2024-10-06 09:56:28

I am writing a simple web crawler in Java. I want it to be able to download as many pages per second as possible. Is there a package out there that makes doing asynchronous HTTP web requests easy in Java? I have used HttpURLConnection, but that is blocking. I also know there is something in Apache's HTTPCore NIO, but I am looking for something more lightweight. I tried using that package, and I was getting better throughput using HttpURLConnection on multiple threads.
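For reference, the multi-threaded HttpURLConnection approach described above can be sketched roughly as follows. Class and method names are my own, and the pool size is an illustrative choice; a bounded pool keeps the thread count moderate:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: blocking HttpURLConnection fetches fanned out over a fixed thread pool.
public class BlockingCrawler {

    // Download one page; blocks the calling thread until the body is read.
    static String fetch(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        try (InputStream in = conn.getInputStream();
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            in.transferTo(out);
            return out.toString(StandardCharsets.UTF_8);
        } finally {
            conn.disconnect();
        }
    }

    // Fetch many URLs concurrently on a bounded pool; each task blocks its own thread.
    static List<String> fetchAll(List<String> urls, int threads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String url : urls) {
                futures.add(pool.submit(() -> fetch(url)));
            }
            List<String> bodies = new ArrayList<>();
            for (Future<String> f : futures) {
                bodies.add(f.get());
            }
            return bodies;
        } finally {
            pool.shutdown();
        }
    }
}
```

Throughput then scales with the pool size until threads start contending; tune `threads` against the target servers' latency.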


Comments (2)

风柔一江水 2024-10-13 09:56:28


Generally, data-intensive protocols tend to perform better in terms of raw throughput with classic blocking I/O than with NIO, as long as the number of threads is below 1000. At least that is certainly the case for client-side HTTP, based on the (likely imperfect and possibly biased) HTTP benchmark used by Apache HttpClient [1].

One may be much better off using a blocking HTTP client with threads, as long as the number of threads is moderate (<250).

If you are absolutely sure you want an NIO-based HTTP client, I can recommend the Jetty HTTP client, which I personally consider the best asynchronous HTTP client at the moment.

[1] http://wiki.apache.org/HttpComponents/HttpClient3vsHttpClient4vsHttpCore
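As a side note to this answer: since Java 11 the JDK itself ships an asynchronous HTTP client in `java.net.http`, which may suffice before reaching for a third-party NIO client such as Jetty's. A minimal sketch (class and method names are illustrative):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// Sketch: non-blocking requests via the JDK's built-in HttpClient (Java 11+).
public class AsyncFetcher {

    static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Start a request without blocking; the future completes with the body.
    static CompletableFuture<String> fetchAsync(String url) {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return CLIENT.sendAsync(req, HttpResponse.BodyHandlers.ofString())
                     .thenApply(HttpResponse::body);
    }

    // Launch all requests up front, then wait for the results.
    static List<String> fetchAll(List<String> urls) {
        List<CompletableFuture<String>> futures = urls.stream()
                .map(AsyncFetcher::fetchAsync)
                .collect(Collectors.toList());
        return futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
    }
}
```

All requests are in flight concurrently without one thread per connection; `join` only blocks at the point where the results are actually needed.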

虚拟世界 2024-10-13 09:56:28


While this user wasn't asking the same question, you may find answers to his question useful: Asynchronous HTTP Client for Java

As a side-note, if you're going to download "as many pages per second as possible", you should bear in mind that crawlers can inadvertently grind a weak server to a halt. You should probably read up on "robots.txt" and the appropriate way of interpreting this file before you unleash your creation on anything outside of your own personal test setup.
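To illustrate the kind of robots.txt handling meant here, a minimal, deliberately incomplete prefix check (my own sketch) might look like the following; a real parser also handles Allow rules, wildcards, and per-agent groups:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: collect Disallow rules from the "User-agent: *" group of a
// robots.txt file and test a URL path against them by prefix match.
public class RobotsTxt {

    // Extract the Disallow prefixes that apply to all user agents.
    static List<String> disallowedPrefixes(String robotsTxt) {
        List<String> rules = new ArrayList<>();
        boolean inStarGroup = false;
        for (String line : robotsTxt.split("\\R")) {
            String trimmed = line.trim();
            if (trimmed.toLowerCase().startsWith("user-agent:")) {
                inStarGroup = trimmed.substring("user-agent:".length()).trim().equals("*");
            } else if (inStarGroup && trimmed.toLowerCase().startsWith("disallow:")) {
                String path = trimmed.substring("disallow:".length()).trim();
                if (!path.isEmpty()) {
                    rules.add(path);
                }
            }
        }
        return rules;
    }

    // A path is allowed unless it starts with some disallowed prefix.
    static boolean isAllowed(List<String> disallowed, String path) {
        for (String prefix : disallowed) {
            if (path.startsWith(prefix)) {
                return false;
            }
        }
        return true;
    }
}
```

A crawler would fetch `/robots.txt` from each host once, cache the parsed rules, and consult `isAllowed` before every download.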
