How do I convert and write large images without causing an OOM error?
I have images stored in a database as ImageIcons that I would like to serve on our web page; however, for large images I am getting out-of-memory exceptions.
Here is how I currently do it.
[Edit] I extended my ImageUtilities to provide a non-transparent BufferedImage, which simplifies the code:
BufferedImage rgbbi = ImageUtilities.toBufferedImage(icon.getImage());
ServletOutputStream out = null;
try {
    // Get the servlet's output stream.
    out = responseSupplier.get().getOutputStream();
    // Write the image to the response stream as JPEG.
    ImageIO.write(rgbbi, "jpg", out);
} catch (IOException e1) {
    logger.severe("Exception writing image: " + e1.getMessage());
} finally {
    if (out != null) { // guard against getOutputStream() having failed
        try {
            out.close();
        } catch (IOException e) {
            logger.info("Error closing output stream, " + e.getMessage());
        }
    }
}
The exceptions being thrown are the following:
Exception in thread "Image Fetcher 0" java.lang.OutOfMemoryError: Java heap space
at java.awt.image.DataBufferInt.<init>(DataBufferInt.java:41)
at java.awt.image.Raster.createPackedRaster(Raster.java:458)
at java.awt.image.DirectColorModel.createCompatibleWritableRaster(DirectColorModel.java:1015)
at sun.awt.image.ImageRepresentation.createBufferedImage(ImageRepresentation.java:230)
at sun.awt.image.ImageRepresentation.setPixels(ImageRepresentation.java:484)
at sun.awt.image.ImageDecoder.setPixels(ImageDecoder.java:120)
at sun.awt.image.JPEGImageDecoder.sendPixels(JPEGImageDecoder.java:97)
at sun.awt.image.JPEGImageDecoder.readImage(Native Method)
at sun.awt.image.JPEGImageDecoder.produceImage(JPEGImageDecoder.java:119)
at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246)
at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172)
at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136)
Exception in thread "Image Fetcher 0" java.lang.OutOfMemoryError: Java heap space
Exception in thread "Image Fetcher 0" java.lang.OutOfMemoryError: Java heap space
Exception in thread "Image Fetcher 0" java.lang.OutOfMemoryError: Java heap space
...
Is there a way I can rewrite this to stream the output of ImageIO.write and somehow limit its buffer size?
[Edit]
I can't just increase the heap size either: the images I need to serve are in the range of 10000x7000 pixels. Held as an int-backed raster (the DataBufferInt in the stack trace stores 4 bytes per pixel), that works out to 10000 px x 7000 px x 4 bytes, about 280 MB. I think that is an unreasonable heap size to allocate for image conversion in a servlet.
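The back-of-the-envelope heap figure can be checked directly. This assumes a TYPE_INT_RGB image backed by a DataBufferInt (one 32-bit int per pixel), which is what the stack trace suggests:

```java
public class HeapEstimate {
    public static void main(String[] args) {
        long width = 10_000;
        long height = 7_000;
        long bytesPerPixel = 4; // DataBufferInt stores one 32-bit int per pixel

        long rasterBytes = width * height * bytesPerPixel;
        System.out.println(rasterBytes);             // 280000000
        System.out.println(rasterBytes / 1_000_000); // 280 (MB)
    }
}
```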
An example image (large).
As pointed out in the comments, storing 10000x7000 images in a database as ImageIcons and serving them through a servlet smells like bad design.
Nevertheless, I'll point out the PNGJ library (disclaimer: I coded it), which lets you read and write PNG images sequentially, line by line. Of course, this is only useful if you store your big images in that format.
I am assuming you do not have enough pixels on your screen to display a complete image. Since you seem to need an uncompressed version of it in RAM for display, you will need exactly as much heap as the image size implies. Having said that, there are many better ways.
I wrote my bachelor thesis on efficiently displaying multiple large images of up to 40000x40000 px simultaneously. We ended up implementing an LOD scheme with a multilevel cache: the image was resized, and each size was chopped into square chunks, resulting in an image pyramid. We had to experiment a bit to find an optimal chunk size; it varies from system to system but can safely be assumed to lie somewhere between 64x64 and 256x256 px.
The next step was a scheduling algorithm for uploading the right chunks in order to keep a 1:1 texel-to-pixel ratio. For better quality, we used trilinear interpolation between the slices of the pyramid.
"Multilevel" means that image chunks were uploaded to the VRAM of the graphics card, with RAM as the L1 cache and the HD as the L2 cache (provided the image lives on the network), but this optimisation might be excessive in your case.
All in all, there is a lot to consider here, while you were just asking for memory control. If this is a major project, though, implementing an LOD scheme is the right tool for the job.
More memory seems to be the only answer for conversion, short of writing my own.
My solution was to not convert the images at all, and to use the method described in this answer to retrieve the image MIME type so I can set the header.
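A minimal sketch of that approach (names are illustrative; `dbStream` stands in for the blob's InputStream): copy the stored bytes straight through a small fixed-size buffer instead of decoding the image, and sniff the Content-Type from the magic bytes, for example with java.net.URLConnection.guessContentTypeFromStream.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URLConnection;

public class RawImageRelay {

    /** Copy dbStream to out through a fixed 8 KiB buffer; heap use stays O(1). */
    static long copy(InputStream dbStream, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = dbStream.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the image blob from the database: a PNG signature + payload.
        byte[] blob = {(byte) 0x89, 'P', 'N', 'G', '\r', '\n', 0x1a, '\n', 0, 1, 2, 3};
        InputStream dbStream = new ByteArrayInputStream(blob);

        // Sniff the Content-Type without decoding (needs a mark-supporting stream;
        // the method resets the stream position afterwards).
        String mime = URLConnection.guessContentTypeFromStream(dbStream);
        System.out.println(mime);

        ByteArrayOutputStream out = new ByteArrayOutputStream(); // response stream stand-in
        long n = copy(dbStream, out);
        System.out.println(n); // 12
    }
}
```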
You're not going to be able to do this using the built-in classes you're using, since they're designed to work on bitmaps wholesale. You might be better off shelling out from Java to something like ImageMagick (or whatever the equivalent is these days).
Do you just need to do this once?
You might be stuck writing all of this yourself: loading the file, processing the "pixels", and writing them out. That would be the BEST way to do it, rather than loading the entire thing, converting (i.e. copying) it, and writing it out. I don't know whether tools like ImageMagick work on streams or in-memory images.
Addendum for AlexR:
To do this PROPERLY, he needs to decode the file into some streamable format. For example, JPEG divides images into 8x8 blocks, compresses them individually, then streams those blocks out. While it streams the blocks out, the blocks themselves are compressed (so if you had 10 black blocks, you get something like 1 black block with a count of 10).
A raw bitmap is little more than blocks of bytes; for high color spaces with alpha, it's 4 bytes per pixel (one each for red, green, blue, and alpha). Most colorspace conversions happen at the pixel level. Other, more sophisticated filters work on a pixel and its surrounding pixels (Gaussian blur is a simple example).
For simplicity, especially across many different formats, it's easier to load the whole image into memory, work on its raw bitmap, copy that bitmap while converting it, and then write the raw image back out in whatever format (say, converting a color JPEG to a grayscale PNG).
For large images, like the ones this person is dealing with, that approach happens to be VERY expensive in memory.
So, OPTIMALLY, he'd write specific code to read the file in portions: stream it in, convert each little bit, and stream it back out again. This would take very little memory, but he'd likely have to do most of the work himself.
So, yes, he can "just read the image byte by byte", but the processing and algorithms will likely be rather involved.
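One way to approximate the "read the file in portions" idea with nothing but the JDK is ImageReadParam.setSourceRegion, which asks the reader to decode only a horizontal strip at a time. This is a sketch, and it is only as effective as the format's ImageReader allows; some decoders may still buffer more than the requested band internally:

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class StripReader {

    /** Decode only rows [y, y + h) of the first image in the stream. */
    static BufferedImage readStrip(ImageInputStream in, int y, int h) throws IOException {
        ImageReader reader = ImageIO.getImageReaders(in).next();
        try {
            reader.setInput(in);
            ImageReadParam param = reader.getDefaultReadParam();
            param.setSourceRegion(new Rectangle(0, y, reader.getWidth(0), h));
            return reader.read(0, param); // only the strip is materialised
        } finally {
            reader.dispose();
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a small in-memory PNG to stand in for the big image.
        BufferedImage src = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        ByteArrayOutputStream png = new ByteArrayOutputStream();
        ImageIO.write(src, "png", png);

        ImageInputStream iis =
            ImageIO.createImageInputStream(new ByteArrayInputStream(png.toByteArray()));
        BufferedImage strip = readStrip(iis, 16, 8); // rows 16..23 only
        System.out.println(strip.getWidth() + "x" + strip.getHeight()); // 64x8
    }
}
```

Looping such strips through an ImageWriter would give the stream-in, convert, stream-out pipeline described above without ever holding the full raster.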