Java I/O versus Java New I/O (NIO) with Linux NPTL

Published 2024-09-29 11:20:52


My web servers use the usual Java I/O with a thread-per-connection mechanism. Nowadays they are being brought to their knees by increased users (long-polling connections), even though the connections are mostly idle. While this could be solved by adding more web servers, I have been doing some research on an NIO implementation instead.

I got a mixed impression about it. I have read about benchmarks where regular I/O with the new NPTL library in Linux outperforms NIO.

What is the real life experience of configuring and using the latest NPTL for Linux with Java I/O? Is there any increased performance?

And on a larger scope question:

What is the maximum number of I/O and blocking threads (configured in the Tomcat thread pool) that we can expect to perform normally on a standard server-class machine (a Dell with a quad-core processor, with the Linux NPTL library)? What is the impact if the thread pool gets really big, say more than 1000 threads?

Any references and pointers will be very much appreciated.
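For reference, the thread-per-connection model the question describes can be sketched as a minimal echo server: each accepted socket gets its own thread that performs blocking reads and writes. This is an illustrative sketch, not the actual Tomcat connector; the class and demo client are hypothetical.

```java
import java.io.*;
import java.net.*;

// Minimal thread-per-connection echo server: one blocking handler thread per socket.
public class ThreadPerConnectionServer {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // ephemeral port for the demo
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    Socket client = server.accept();
                    // Each connection gets its own blocking handler thread.
                    new Thread(() -> handle(client)).start();
                }
            } catch (IOException e) { /* server closed */ }
        });
        acceptor.setDaemon(true);
        acceptor.start();

        // Demo client: send one line, read the echo back.
        try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println("ping");
            System.out.println(in.readLine()); // prints "ping"
        }
        server.close();
    }

    static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) out.println(line); // blocking read/write
        } catch (IOException ignored) {}
    }
}
```

The memory cost of this model is dominated by one thread stack per idle connection, which is exactly what NPTL made cheap.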


Comments (3)

情释 2024-10-06 11:20:52


Provocative blog posting, "Avoid NIO, get better throughput": Paul Tyma's (2008) blog post claims ~5000 threads without any trouble, and I've heard folks claim more:

  1. With NPTL on, Sun and Blackwidow JVM 1.4.2 scaled easily to 5000+
    threads. Blocking model was
    consistently 25-35% faster than using
    NIO selectors. Lot of techniques
    suggested by EmberIO folks were
    employed - using multiple selectors,
    doing multiple (2) reads if the first
    read returned EAGAIN equivalent in
    Java. Yet we couldn't beat the plain
    thread per connection model with Linux
    NPTL.
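The selector-based alternative the quote benchmarks against looks roughly like the sketch below: a single thread multiplexes all connections, and a non-blocking read that returns 0 bytes is Java's equivalent of `EAGAIN`. This is an illustrative sketch under assumed names, not EmberIO's actual code (which additionally used multiple selectors and retried reads).

```java
import java.io.*;
import java.net.*;
import java.nio.*;
import java.nio.channels.*;
import java.util.Iterator;

// Sketch of an NIO selector loop: one thread services every connection.
public class SelectorEchoDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = server.socket().getLocalPort();

        Thread loop = new Thread(() -> eventLoop(selector));
        loop.setDaemon(true);
        loop.start();

        // Demo client (plain blocking socket): send a line, read the echo.
        try (Socket s = new Socket("127.0.0.1", port)) {
            s.getOutputStream().write("ping\n".getBytes());
            BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
            System.out.println(in.readLine()); // echoed back by the selector thread
        }
    }

    static void eventLoop(Selector selector) {
        ByteBuffer buf = ByteBuffer.allocate(4096);
        try {
            while (true) {
                selector.select();
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel ch = ((ServerSocketChannel) key.channel()).accept();
                        ch.configureBlocking(false);
                        ch.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel ch = (SocketChannel) key.channel();
                        buf.clear();
                        int n = ch.read(buf); // 0 == "EAGAIN equivalent": no data yet
                        if (n > 0) { buf.flip(); ch.write(buf); } // echo (may be partial in general)
                        else if (n < 0) { key.cancel(); ch.close(); }
                    }
                }
            }
        } catch (IOException e) { /* selector closed */ }
    }
}
```

Comparing the two sketches makes the quote's point concrete: the selector loop adds state machinery and readiness bookkeeping that the thread-per-connection version simply delegates to the kernel scheduler.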

I think the key here is to measure the overhead and performance, and make the move to non-blocking I/O only when you know you need to and can demonstrate an improvement. The additional effort to write and maintain non-blocking code should be factored into your decision. My take is: if your application can be cleanly expressed using synchronous/blocking I/O, DO THAT. If your application is amenable to non-blocking I/O and you won't just be re-inventing blocking I/O badly in application space, CONSIDER moving to NIO based on measured performance needs. I'm amazed, when I poke around the Google results for this, how few of the resources actually cite any (recent) numbers!

Also, see Paul Tyma's presentation slides: "The old way is new again." Based on his work at Google, concrete numbers suggest that synchronous threaded I/O is quite scalable on Linux, and he considers "NIO is faster" a myth that was true for a while, but no longer. There is some good additional commentary on Comet Daily. He cites the following (anecdotal, still no solid link to benchmarks, etc.) result on NPTL:

In tests, NPTL succeeded in starting
100,000 threads on a IA-32 in two
seconds. In comparison, this test
under a kernel without NPTL would have
taken around 15 minutes

If you really are running into scalability problems, you may want to tune the thread stack size using -XX:ThreadStackSize. Since you mention Tomcat, see here.
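Per-thread stack size can also be set programmatically: the four-argument `Thread` constructor takes a suggested stack size in bytes (the JVM is free to round or ignore it; `-Xss`/`-XX:ThreadStackSize` set the default for all threads). A hypothetical sketch that starts a thousand small-stack threads to make the cost visible:

```java
import java.util.concurrent.CountDownLatch;

// Starts many threads with an explicitly reduced stack size to show the
// per-thread memory lever the answer mentions. Numbers here are illustrative.
public class StackSizeDemo {
    public static void main(String[] args) throws Exception {
        int n = 1000;
        CountDownLatch latch = new CountDownLatch(n);
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            // 256 KiB suggested stack instead of the typical 512 KiB-1 MiB default;
            // the JVM may treat this value as a hint only.
            Thread t = new Thread(null, latch::countDown, "worker-" + i, 256 * 1024);
            t.start();
        }
        latch.await();
        System.out.printf("started %d threads in %d ms%n", n,
                (System.nanoTime() - start) / 1_000_000);
    }
}
```

Shrinking the stack trades headroom for deep call chains against the number of idle connections a given heap-external memory budget can hold.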

Finally, if you're bound and determined to use non-blocking I/O, make every effort to build on an existing framework by people who know what they're doing. I've wasted far too much of my own time trying to get an intricate non-blocking I/O solution right (for the wrong reasons).

See also related on SO.

夏の忆 2024-10-06 11:20:52


The links you may find useful:

You may also have a look at http://nodejs.org/, which is not a JVM technology but handles thousands of connections perfectly well (and, if I'm not mistaken, uses NPTL behind the scenes).

Some good, proven NIO web frameworks on the JVM:

靑春怀旧 2024-10-06 11:20:52


Sajid, I see that you are doing Comet (long polling).

Almost nobody talks about the problem of executing user code for Comet events in NIO. The NIO thread dispatching Comet events calls your code; if your code is not fast enough, you block this critical thread, and other Comet connections MUST WAIT, because the NIO thread is doing work similar to the OS's thread scheduler. This is not a problem in Comet with blocking I/O, because each thread serves only your Comet event/task, and the scheduler can preempt your thread whenever it wants (not so easy with an NIO approach).

The only problem I see with "synchronous Comet" (IO based) is memory consumption of thread stacks.
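The standard mitigation for the dispatch-thread problem described above is to keep the selector thread doing nothing but handoff: slow user code runs on a separate worker pool. A hypothetical sketch (class and method names are assumptions, not any framework's API):

```java
import java.util.concurrent.*;

// The selector/dispatch thread only hands events off; user code runs on a
// worker pool so other Comet connections are not stalled behind a slow handler.
public class DispatchOffloadDemo {
    private static final ExecutorService workers = Executors.newFixedThreadPool(8);

    // Imagined to be called from the single NIO dispatch thread per Comet event.
    static Future<?> onCometEvent(Runnable userCode) {
        return workers.submit(userCode); // returns immediately; dispatch thread stays free
    }

    public static void main(String[] args) throws Exception {
        Future<?> f = onCometEvent(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            System.out.println("slow handler done off the dispatch thread");
        });
        f.get(); // demo only: wait so the message prints before the JVM exits
        workers.shutdown();
    }
}
```

Note that this reintroduces a thread pool alongside the selector, which is part of why the answers above argue the pure thread-per-connection model is often simpler for the same result.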
