Does FileOutputStream.write(byte[]) always block?
I wonder whether FileOutputStream.write(byte[]) always blocks the current thread, leading to a thread context switch, or whether this operation can avoid blocking when the OS buffers are large enough to handle the bytes.
The reason for these thoughts is that I wonder whether the logging I do with log4j in my application is a real performance hit, and whether it would be faster to use a queue of logging messages that is read by a separate thread and written to the log files (I am aware of the disadvantage of swallowed logging statements if the app quits while statements in the queue have not yet been flushed to disk).
No, I haven't profiled it yet; these are rather conceptual thoughts.
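For illustration, the queue-based idea described above might look roughly like the sketch below. The class name, file name, and the choice of an unbounded LinkedBlockingQueue are assumptions made for the example; shutdown and flush handling are deliberately omitted, which is exactly the "swallowed statements" risk mentioned above.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: the application thread only enqueues messages,
// and a dedicated writer thread performs the (potentially blocking)
// FileOutputStream.write() calls.
public class QueueLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public QueueLogger(String fileName) {
        Thread writer = new Thread(() -> {
            try (FileOutputStream out = new FileOutputStream(fileName, true)) {
                while (true) {
                    String line = queue.take();                         // wait for the next message
                    out.write(line.getBytes(StandardCharsets.UTF_8));   // blocking write happens here
                }
            } catch (IOException | InterruptedException e) {
                // messages still in the queue are lost, as noted in the question
                Thread.currentThread().interrupt();
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    public void log(String message) {
        queue.offer(message + System.lineSeparator()); // unbounded queue: never blocks the caller
    }
}
```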
Need not be.
FileOutputStream.write(byte[]) is a native method. Common sense would suggest that write() may just write to the internal buffers, and a later call to flush() would actually commit it.
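To make the buffering layers explicit, here is a minimal sketch (the file name and contents are arbitrary): FileOutputStream has no Java-side buffer of its own, so buffering is usually added with a BufferedOutputStream, whose flush() hands the bytes to the OS; forcing them all the way onto the device needs an extra sync() on the file descriptor.

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class FlushDemo {
    public static void main(String[] args) throws IOException {
        try (FileOutputStream fos = new FileOutputStream("demo.log");
             BufferedOutputStream out = new BufferedOutputStream(fos)) {
            out.write("hello".getBytes(StandardCharsets.UTF_8)); // may only land in the Java-side buffer
            out.flush();        // hands the buffered bytes to the OS (the actual write() call)
            fos.getFD().sync(); // additionally asks the OS to push its own caches to the device
        }
    }
}
```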
You can use the log4j org.apache.log4j.AsyncAppender, and logging calls will not block: the actual logging is done in another thread, so you won't need to worry about calls to log4j not returning in a timely manner.
By default immediateFlush is enabled, which means that logging is slower but ensures that each append request is actually written out. You can set it to false if you don't care whether the last lines are written out when your application crashes.
Also, take a look at this post on Log4j: Performance Tips, in which the author has some test stats on using immediateFlush, bufferedIO and asyncAppender. He concludes that for local logging you should "set immediateFlush=false, and leave bufferedIO at the default of don't buffer", and that "asyncAppender actually takes longer than normal non-async".
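As an illustration only (the answer itself gives no code), a programmatic log4j 1.x setup along these lines might look like the sketch below; the file name, layout pattern, and class name are invented for the example.

```java
import java.io.IOException;
import org.apache.log4j.AsyncAppender;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

// Sketch (log4j 1.x): a FileAppender with immediateFlush disabled, wrapped in
// an AsyncAppender so logging calls return without waiting for the disk write.
public class Log4jSetup {
    public static void main(String[] args) throws IOException {
        FileAppender file = new FileAppender(
                new PatternLayout("%d %-5p %c - %m%n"), "app.log");
        file.setImmediateFlush(false); // faster, but the last lines may be lost on a crash
        file.setBufferedIO(false);     // the linked post suggests leaving buffered IO off

        AsyncAppender async = new AsyncAppender();
        async.addAppender(file);       // actual writing happens on the AsyncAppender's thread

        Logger.getRootLogger().addAppender(async);
        Logger.getRootLogger().info("hello from the async appender");
    }
}
```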
It's likely going to depend on the OS, drivers and underlying file system. If write caching is enabled, for example, it will probably return right away. I've seen gigabytes per day of logs written synchronously without affecting performance too much, as long as IO isn't bottlenecked. It's still probably worth writing them asynchronously if you're concerned about response times. It also eliminates potential future issues, e.g. if you change to writing to a network drive and the network has problems.