What is the best image downscaling algorithm (quality-wise)?
I want to find out which algorithm is best for downsizing a raster picture. By "best" I mean the one that gives the nicest-looking results. I know of bicubic, but is there something better yet? For example, I've heard from some people that Adobe Lightroom has some kind of proprietary algorithm which produces better results than the standard bicubic I was using. Unfortunately, I want to use this algorithm in my own software, so Adobe's carefully guarded trade secrets won't do.
Added:
I checked out Paint.NET and to my surprise it seems that Super Sampling is better than bicubic when downsizing a picture. That makes me wonder if interpolation algorithms are the way to go at all.
It also reminded me of an algorithm I had "invented" myself but never implemented. I suppose it also has a name (something this trivial can't have been my idea alone), but I couldn't find it among the popular ones. Super Sampling was the closest one.
The idea is this: for every pixel in the target picture, calculate where it would be in the source picture. It would probably overlap one or more source pixels. It would then be possible to calculate the areas and colors of those pixels. To get the color of the target pixel, one would simply average those colors, using their areas as "weights". So, if a target pixel covered 1/3 of a yellow source pixel and 1/4 of a green source pixel, I'd get (1/3*yellow + 1/4*green)/(1/3+1/4).
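In other words, the target pixel's color would be the area-weighted average of the source pixels it covers, in the same style as the example above:

color_target = (w_1*c_1 + w_2*c_2 + ... + w_n*c_n) / (w_1 + w_2 + ... + w_n)

where w_i is the area of source pixel i covered by the target pixel and c_i is that pixel's color.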
This would naturally be computationally intensive, but it should be as close to the ideal as possible, no?
Is there a name for this algorithm?
9 Answers
Unfortunately, I cannot find a link to the original survey, but as Hollywood cinematographers moved from film to digital images, this question came up a lot, so someone (maybe SMPTE, maybe the ASC) gathered a bunch of professional cinematographers and showed them footage that had been rescaled using a bunch of different algorithms. The results were that for these pros looking at huge motion pictures, the consensus was that Mitchell (also known as a high-quality Catmull-Rom) is the best for scaling up and sinc is the best for scaling down. But sinc is a theoretical filter that goes off to infinity and thus cannot be completely implemented, so I don't know what they actually meant by 'sinc'. It probably refers to a truncated version of sinc. Lanczos is one of several practical variants of sinc that tries to improve on just truncating it and is probably the best default choice for scaling down still images. But as usual, it depends on the image and what you want: shrinking a line drawing to preserve lines is, for example, a case where you might prefer an emphasis on preserving edges that would be unwelcome when shrinking a photo of flowers.
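Since Lanczos comes up as the default suggestion, here is the kernel itself for reference, as a minimal C++ sketch of the standard definition (a sinc windowed by a wider sinc, truncated at |x| = a):

```cpp
#include <cmath>

// Lanczos kernel with support a (a = 2 or 3 are the common choices,
// giving Lanczos2 and Lanczos3 respectively).
double lanczos(double x, int a) {
    static const double PI = 3.14159265358979323846;
    if (x == 0.0) return 1.0;
    if (std::fabs(x) >= a) return 0.0;   // truncate outside the support
    double px = PI * x;
    // sinc(x) * sinc(x/a) = a * sin(pi x) * sin(pi x / a) / (pi x)^2
    return a * std::sin(px) * std::sin(px / a) / (px * px);
}
```

Resampling then means centering this kernel at each output sample's source position and taking the normalized weighted sum of nearby source pixels.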
There is a good example of the results of various algorithms at Cambridge in Color.
The folks at fxguide put together a lot of information on scaling algorithms (along with a lot of other stuff about compositing and other image processing) which is worth taking a look at. They also include test images that may be useful in doing your own tests.
Now ImageMagick has an extensive guide on resampling filters if you really want to get into it.
It is kind of ironic that there is more controversy about scaling down an image, which is theoretically something that can be done perfectly since you are only throwing away information, than there is about scaling up, where you are trying to add information that doesn't exist. But start with Lanczos.
There is Lanczos sampling, which is slower than bicubic but produces higher-quality images.
(Bi-)linear and (bi-)cubic resampling are not just ugly but horribly incorrect when downscaling by a factor smaller than 1/2. They will result in very bad aliasing, akin to what you'd get if you downsampled by a factor of 1/2 and then used nearest-neighbor downsampling.
Personally I would recommend (area-)averaging samples for most downsampling tasks. It's very simple and fast and near-optimal. Gaussian resampling (with radius chosen proportional to the reciprocal of the factor, e.g. radius 5 for downsampling by 1/5) may give better results with a bit more computational overhead, and it's more mathematically sound.
One possible reason to use Gaussian resampling is that, unlike most other algorithms, it works correctly (does not introduce artifacts/aliasing) for both upsampling and downsampling, as long as you choose a radius appropriate to the resampling factor. Otherwise, to support both directions you need two separate algorithms: area averaging for downsampling (which would degrade to nearest-neighbor for upsampling), and something like (bi-)cubic for upsampling (which would degrade to nearest-neighbor for downsampling). One way of seeing this nice property of Gaussian resampling mathematically is that a Gaussian with a very large radius approximates area averaging, and a Gaussian with a very small radius approximates (bi-)linear interpolation.
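A sketch of the 1-D weight computation under that radius rule (gaussianWeights is a hypothetical helper; the sigma = radius/2 mapping is my own choice, not from the answer):

```cpp
#include <cmath>
#include <vector>

// 1-D Gaussian filter weights for downscaling by `factor` (< 1),
// following the rule of thumb radius ~ 1/factor (e.g. radius 5 for 1/5).
std::vector<double> gaussianWeights(double factor) {
    int radius = static_cast<int>(std::lround(1.0 / factor)); // e.g. 5 for factor 1/5
    double sigma = radius / 2.0;                              // assumed mapping
    std::vector<double> w(2 * radius + 1);
    double sum = 0.0;
    for (int i = -radius; i <= radius; ++i) {
        w[i + radius] = std::exp(-(i * i) / (2.0 * sigma * sigma));
        sum += w[i + radius];
    }
    for (double& v : w) v /= sum; // normalize so the weights sum to 1
    return w;
}
```

For a separable 2-D resample, you would apply these weights along rows and then columns, centered on each destination sample's position in the source.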
I saw an article on Slashdot about Seam Carving a while ago; it might be worth looking into.
The algorithm you describe is called linear interpolation. It is one of the fastest algorithms, but it isn't the best for images.
It might be referred to as "box" or "window" resampling in the literature.
It is actually less computationally expensive than you might think.
It can also be used to create an intermediate bitmap that is subsequently fed to bicubic interpolation, to avoid aliasing when downscaling by more than 1/2. See the sketch below.
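A hedged sketch of that two-pass idea (Bitmap, boxResize, and bicubicResize are hypothetical helpers, not a real API):

```cpp
// Two-pass downscale: box-average down to within 2x of the target size
// first (removing the aliasing), then let bicubic do the final step.
Bitmap downscale(const Bitmap& src, int dstW, int dstH) {
    if (src.width > 2 * dstW || src.height > 2 * dstH) {
        Bitmap mid = boxResize(src, 2 * dstW, 2 * dstH);
        return bicubicResize(mid, dstW, dstH);
    }
    return bicubicResize(src, dstW, dstH); // mild shrink: bicubic alone is fine
}
```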
"The magic kernel" is likely the best image resizing algorithm, with superior results and performance when compared to Lanczos. It is used by both Facebook and Instagram.
More information is available at https://johncostella.com/magic/
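As I read that write-up, the basic kernel is a simple piecewise quadratic (equivalently, the quadratic B-spline); the full "Magic Kernel Sharp" pairs it with a separate sharpening pass, not shown here:

```cpp
#include <cmath>

// The basic "magic kernel" as described at johncostella.com/magic,
// with support [-1.5, 1.5]. Shown for reference; Magic Kernel Sharp
// adds an extra sharpening step on top of this.
double magicKernel(double x) {
    double ax = std::fabs(x);
    if (ax <= 0.5) return 0.75 - ax * ax;
    if (ax <= 1.5) return 0.5 * (ax - 1.5) * (ax - 1.5);
    return 0.0;
}
```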
If anyone's interested, here is my C++ implementation of area averaging scaling algorithm:
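A minimal single-channel version of the idea, for illustration (8-bit grayscale, row-major buffers; not necessarily the answerer's exact code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Area-averaging downscale: each destination pixel averages the source
// pixels it covers, weighting partially covered pixels by the overlap area.
std::vector<uint8_t> areaAverage(const std::vector<uint8_t>& src,
                                 int srcW, int srcH, int dstW, int dstH) {
    std::vector<uint8_t> dst(static_cast<size_t>(dstW) * dstH);
    const double sx = static_cast<double>(srcW) / dstW; // source cols per dest col
    const double sy = static_cast<double>(srcH) / dstH; // source rows per dest row
    for (int dy = 0; dy < dstH; ++dy) {
        const double y0 = dy * sy, y1 = y0 + sy;        // footprint in source rows
        for (int dx = 0; dx < dstW; ++dx) {
            const double x0 = dx * sx, x1 = x0 + sx;    // footprint in source cols
            double sum = 0.0, area = 0.0;
            for (int yi = static_cast<int>(y0); yi < static_cast<int>(std::ceil(y1)); ++yi) {
                const double wy = std::min(y1, yi + 1.0) - std::max(y0, double(yi));
                for (int xi = static_cast<int>(x0); xi < static_cast<int>(std::ceil(x1)); ++xi) {
                    const double wx = std::min(x1, xi + 1.0) - std::max(x0, double(xi));
                    const double w = wx * wy;           // overlap area = weight
                    sum += w * src[static_cast<size_t>(yi) * srcW + xi];
                    area += w;
                }
            }
            dst[static_cast<size_t>(dy) * dstW + dx] =
                static_cast<uint8_t>(sum / area + 0.5); // weighted mean, rounded
        }
    }
    return dst;
}
```

Extending it to RGB(A) just means accumulating one sum per channel.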
There isn't any one best algorithm for downscaling. It depends a lot on the image content and even what you're doing with the image. For example, if you're doing image processing involving gradients, it often works best to fit it to a differentiable spline (e.g. B-splines) and take the derivatives of those. If the spatial frequency of the image is relatively low, almost anything will work reasonably well (the area-proportional approach you're describing is popular; it's called INTER_AREA in OpenCV though it's really more of an antialiaser than an interpolator), but it gets complicated with high frequency content (sharp edges, high contrast). In those cases, you usually have to do some kind of antialiasing, either built into the resampler or as a separate step.
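For what it's worth, that area-proportional method is available off the shelf; a minimal C++ example using OpenCV's cv::resize:

```cpp
#include <opencv2/imgproc.hpp>

// Shrink with INTER_AREA, the area-proportional method mentioned above.
cv::Mat resizeArea(const cv::Mat& src, int w, int h) {
    cv::Mat dst;
    cv::resize(src, dst, cv::Size(w, h), 0, 0, cv::INTER_AREA);
    return dst;
}
```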
The one rule that really holds in almost every case is that nearest neighbor has the worst quality, followed by bilinear. If you can possibly afford the processing time to do something better than bilinear, don't use bilinear. The only merits of bilinear are that it's really fast, easy to code, and often supported in GPU hardware.
There are a multitude of higher-order resampling schemes. I've seen dozens in the literature and I'd say about 10 of them are worth looking at depending on what you're doing. IMO, the best approach is to take a set of typical images for what you're doing, run them through a list of the usual suspects (Keys convolutional bicubic, Catmull-Rom, Lanczos2/4, Lanczos3/6, O-MOMS, B-spline...) and see what usually works best for your application. Chances are, once you go up to a 4x4 resampler, there won't be a single really consistent winner unless your images are all very similar. Sometimes you'll see some consistent improvement with a 6x6 like Lanczos3, but most of the time, the step up from the 2x2 bilinear to any 4x4 is the big win. This is, of course, why most image processing software supports different choices. If one thing worked best all the time, everybody'd be using it.