JPEG compression ratio
Is there a table that gives the compression ratio of a jpeg image at a given quality?
Something like the table given on the wiki page, except for more values.
A formula could also do the trick.
Bonus: Are the [compression ratio] values on the wiki page roughly true for all images? Does the ratio depend on what the image is and the size of the image?
Purpose of these questions: I am trying to determine the upper bound of the size of a compressed image for a given quality.
Note: I am not looking to make a table myself (I already have one). I am looking for other data to check against my own.
I had exactly the same question and I was disappointed that no one had created such a table (studies based on a single classic Lena image or the JPEG tombstone look ridiculous). That's why I made my own study. I cannot say that it is perfect, but it is definitely better than the others.
I took 60 real-life photos from different devices with different dimensions. I created a script which compresses them with different JPEG quality values (it uses our company's imaging library, but it is based on libjpeg, so it should be fine for other software as well) and saves the results to a CSV file. After some Excel magic, I came to the following values (note, I did not calculate anything for JPEG quality lower than 55, as such values seem useless to me):
To tell the truth, the dispersion of the values is significant (e.g. for Q=55 the minimum compression ratio is 22.91 while the maximum is 116.55) and the distribution is not normal. So it is not easy to say what value should be taken as typical for a specific JPEG quality. But I think these values are fine as a rough estimate.
I wrote a blog post which explains how I received these numbers.
http://www.graphicsmill.com/blog/2014/11/06/Compression-ratio-for-different-JPEG-quality-values
Hopefully someone will find it useful.
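The script itself is not shown in the answer; a rough reconstruction of the described procedure is sketched below, with Pillow standing in for the company library mentioned above (the folder path, quality steps, and output file name are illustrative assumptions, not values from the original study):

```python
# Hedged sketch of the procedure described above, with Pillow standing in for
# the libjpeg-based library from the answer. Paths and quality steps are
# illustrative assumptions.
import csv
import glob
from io import BytesIO
from PIL import Image

QUALITIES = range(55, 101, 5)

with open("ratios.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["file"] + [f"Q{q}" for q in QUALITIES])
    for path in glob.glob("photos/*.jpg"):
        img = Image.open(path).convert("RGB")
        raw = img.width * img.height * 3            # uncompressed 24-bit RGB baseline
        row = [path]
        for q in QUALITIES:
            buf = BytesIO()
            img.save(buf, format="JPEG", quality=q)
            row.append(round(raw / buf.tell(), 2))  # compression ratio at quality q
        writer.writerow(row)
```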
Browsing Wikipedia a little more led to http://en.wikipedia.org/wiki/Standard_test_image and Kodak's test suite. Although they're a little outdated and small, you could make your own table.
Alternately, pictures of stars and galaxies from NASA.gov should stress the compressor well, being large, almost exclusively composed of tiny speckled detail, and distributed in uncompressed format. In other words, HUBBLE GOTCHOO!
The compression you get will depend on what the image is of as well as the size. Obviously a larger image will produce a larger file even if it's of the same scene.
As an example, a random set of photos from my digital camera (a Canon EOS 450) ranges from 1.8 MB to 3.6 MB. Another set has even more variation: 1.5 MB to 4.6 MB.
If I understand correctly, one of the key mechanisms for attaining compression in JPEG is using frequency analysis on every 8x8 pixel block of the image and scaling the resulting amplitudes with a "quantization matrix" that varies with the specified compression quality.
The scaling of the high-frequency components often results in the block containing many zeros, which can be encoded at negligible cost.
From this we can deduce that in principle there is no relation between the quality and the final compression ratio that will be independent of the image. The number of frequency components that can be dropped from a block without perceptually altering its content significantly will necessarily depend on the intensity of those components, i.e. whether the block contains a sharp edge, highly variable content, noise, etc.
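To make that concrete, here is a small numerical sketch (assuming the standard JPEG luminance quantization table and libjpeg's quality-to-scale rule; the two 8x8 sample blocks are synthetic) showing how many coefficients quantize to zero for a smooth block versus a noisy one:

```python
# Sketch of JPEG quantization, assuming the standard luminance table and a
# libjpeg-style quality scaling rule; the two 8x8 sample blocks are synthetic.
import numpy as np
from scipy.fft import dctn

BASE = np.array([  # standard JPEG luminance quantization table
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def quant_table(quality):
    # libjpeg-style scaling: lower quality -> larger steps -> more zero coefficients
    scale = 5000 / quality if quality < 50 else 200 - 2 * quality
    return np.clip((BASE * scale + 50) // 100, 1, 255)

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 32, 8), (8, 1))       # gentle gradient block
noisy = rng.integers(0, 256, (8, 8)).astype(float)    # speckled, noise-like block

for name, block in [("smooth", smooth), ("noisy", noisy)]:
    coeffs = dctn(block - 128, norm="ortho")          # 2-D DCT of the level-shifted block
    for q in (50, 75, 95):
        zeros = int(np.sum(np.round(coeffs / quant_table(q)) == 0))
        print(f"{name} block, Q={q}: {zeros}/64 coefficients quantize to zero")
```

The smooth block loses almost all of its AC coefficients even at high quality, while the noisy block keeps many of them, which is exactly why the quality-to-ratio relationship cannot be image-independent.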