Interpolation algorithms when downscaling

Posted 2024-07-20 01:09:34


I'm trying to understand downscaling. I can see how interpolation algorithms such as bicubic and nearest neighbour can be used when upscaling, to "fill in the blanks" between the old, known points (pixels, in the case of images).

But downscaling? I can't see how any interpolation technique can be used there. There are no blanks to fill!

I've been stuck on this for far too long; give me a nudge in the right direction. How do you interpolate when you are, in fact, removing known data?

Edit: Let's assume we have a one-dimensional image, with one colour channel per point. A downscaling algorithm that scales 6 points to 3 by averaging pixel values looks like this:
1,2,3,4,5,6 = (1+2)/2,(3+4)/2,(5+6)/2
Am I on the right track here? Is this interpolation in downscaling rather than just discarding data?
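For what it's worth, the scheme in the edit is easy to make concrete. Below is a minimal Python sketch of exactly that block-averaging downscale; the function name is just for illustration, and it assumes the input length is a multiple of the factor:

```python
def downscale_by_averaging(pixels, factor):
    """Downscale a 1-D image by averaging non-overlapping blocks of
    `factor` samples. Assumes len(pixels) is a multiple of `factor`."""
    # Every input sample contributes to exactly one output sample,
    # so the data is combined rather than simply discarded.
    return [sum(pixels[i:i + factor]) / factor
            for i in range(0, len(pixels), factor)]

print(downscale_by_averaging([1, 2, 3, 4, 5, 6], 2))  # [1.5, 3.5, 5.5]
```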


6 Answers

天气好吗我好吗 2024-07-27 01:09:34


If one conceptualizes an original pixel as having a width n, then the center of the pixel is n/2 from either edge.

One may assume that this point, in the center of the pixel, defines the color.

If you are downsampling, you can think about it this way conceptually: even though you are reducing the physical size, instead think that you are maintaining the same dimensions, but reducing the number of pixels (which are increasing in size - conceptually). Then one can do the math...

Example: say your image is 1 pixel high and 3 pixels wide, and you are only going to downscale horizontally. Let's say you are going to change this to 2 pixels wide. Now the original image is 3n, and you are turning it into 2 pixels, so each new pixel will take up (3/2) of an original image pixel.

Now think about the centers again... the new pixels' centers are at (3/4)n and at (9/4)n [which is (3/4) + (3/2)]. The original pixels' centers were at (1/2)n, (3/2)n, and (5/2)n. Thus each new center lies somewhere between the original pixels' centers - none matches up with an original pixel's center. Let's look at the first new pixel at (3/4)n - it is (1/4)n away from the original first pixel, and (3/4)n away from the original second pixel.

If we want to maintain a smooth image, use the inverse relationship: take (3/4) of the color values of the first pixel + (1/4) of the color values of the second, since the new pixel center, conceptually, will be closer to the first original pixel center (n/4 away) than it will be to the second (3n/4 away).

Thus one does not have to truly discard data - one just calculates the appropriate ratios from its neighbors (in a conceptual space where physical size of the total image is not changing). It is an averaging rather than a strict skipping/discarding.

In a 2d image the ratios are more complicated to calculate, but the gist is the same. Interpolate, and pull more of the value from the closest original "neighbors". The resultant image should look quite similar to the original provided the downsample is not terribly severe.
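A minimal Python sketch of this center-based weighting, assuming a 1-D, single-channel image (the function name is hypothetical). It reproduces the 3-to-2 example above: the first new pixel takes 3/4 of the first original pixel's value and 1/4 of the second's.

```python
def downscale_linear(pixels, new_size):
    """Downscale a 1-D image by interpolating at the new pixel centers."""
    old_size = len(pixels)
    scale = old_size / new_size          # width of a new pixel in old-pixel units
    out = []
    for j in range(new_size):
        center = (j + 0.5) * scale       # new center in old-pixel coordinates
        x = center - 0.5                 # original centers sit at i + 0.5
        i = min(max(int(x), 0), old_size - 2)
        t = min(max(x - i, 0.0), 1.0)    # fractional distance toward pixel i+1
        # inverse-distance (linear) blend of the two neighbouring centers
        out.append((1 - t) * pixels[i] + t * pixels[i + 1])
    return out

print(downscale_linear([10, 20, 30], 2))  # [12.5, 27.5]
```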

残龙傲雪 2024-07-27 01:09:34


Whether upscaling or downscaling, the "interpolation" going on is in fact re-sampling.

If the number of samples in the scaled-down version does not evenly divide the full number of samples (pixels, etc.), simply discarding data will produce sampling errors that appear in an image as "jaggies". If, instead, you interpolate where the new samples would lie in the space between the existing samples, using one of the algorithms you mention, the results are much smoother.

You can conceptualize this as first scaling up to the least common multiple of the old and new size, then scaling back down by discarding samples, only without actually generating the intermediate result.
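Here is a rough Python sketch of that conceptualization, actually materialising the intermediate result for clarity (a real implementation would skip it), and assuming linear interpolation for the upscaling step:

```python
from math import lcm  # Python 3.9+

def resample_via_lcm(pixels, new_size):
    """Resample a 1-D signal by interpolating up to lcm(old, new) samples,
    then scaling back down by discarding samples."""
    old_size = len(pixels)
    common = lcm(old_size, new_size)
    # Upscale: linear interpolation onto `common` evenly spaced positions.
    upscaled = []
    for k in range(common):
        x = k * (old_size - 1) / (common - 1)   # position in original coordinates
        i = min(int(x), old_size - 2)
        t = x - i
        upscaled.append((1 - t) * pixels[i] + t * pixels[i + 1])
    # Downscale: keep every (common // new_size)-th intermediate sample.
    step = common // new_size
    return upscaled[::step][:new_size]

print(resample_via_lcm([1, 2, 3, 4, 5, 6], 4))
```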

半山落雨半山空 2024-07-27 01:09:34


This sketch shows a section through a few pixels that start off as three pixels (black curve) and are downsampled to two pixels (red curve) using the interpolation (blue curve). The interpolation is determined from the original three pixels, and the two final pixels are set to the value of the interpolation at the center of each final pixel. (In case it's unclear here, the vertical axis shows the intensity of each pixel for a single colour channel.)

(Sketch: http://img391.imageshack.us/img391/3310/downsampling.png)
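Since the image link may no longer resolve, here is a small numpy sketch of the same idea; the quadratic fit is a stand-in assumption for whatever smooth interpolant the blue curve represents, and the sample values are made up:

```python
import numpy as np

# Three original pixel intensities (one colour channel), centers at 0.5, 1.5, 2.5.
orig = np.array([80.0, 200.0, 120.0])
orig_centers = np.arange(len(orig)) + 0.5

# The "blue curve": a smooth interpolant through the original samples
# (a quadratic through three points fits them exactly).
coeffs = np.polyfit(orig_centers, orig, deg=2)

# The "red curve": evaluate the interpolant at the two new pixel centers.
scale = len(orig) / 2                       # 3 old pixels -> 2 new pixels
new_centers = (np.arange(2) + 0.5) * scale  # 0.75 and 2.25 in old coordinates
new_pixels = np.polyval(coeffs, new_centers)
print(new_pixels)
```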

世界等同你 2024-07-27 01:09:34


Here you have the original image on top, then a naive removal algorithm in the middle, and an interpolating one at the bottom.

Consider a big spotlight. The light at the center is the brightest, and the light at the edges becomes darker. When you shine it farther away, would you expect the beam to suddenly lose the darkness near the edges and become a solid outline of light?

No, and the same thing is happening here to the stackoverflow logo. As you can see in the first downscaling, the picture has lost the softness in its edges and looks horrible. The second downscaling has kept the smoothness at the edges by averaging each pixel with its surroundings.

A simple convolution filter for you to try: sum the RGB values of a pixel and all the pixels surrounding it, and take the average. Then replace the pixel with that value. You can then discard the adjacent pixels, since you've already included that information in the central pixel.

(Image: the stackoverflow logo at the original size, after naive removal, and after interpolated downscaling.)
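Here is a minimal numpy sketch of that box-filter-then-discard idea for a 2x reduction of a greyscale image; the function name and the edge-replication padding are illustrative choices, not part of the answer:

```python
import numpy as np

def box_downscale_2x(img):
    """Halve a 2-D greyscale image: blur each pixel with a 3x3 box
    average, then keep every second pixel in each direction."""
    img = img.astype(float)
    # Pad the border by edge replication so the 3x3 window always fits.
    padded = np.pad(img, 1, mode="edge")
    blurred = np.zeros_like(img)
    # 3x3 box filter: average the pixel with its eight neighbours.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blurred += padded[1 + dy : 1 + dy + img.shape[0],
                              1 + dx : 1 + dx + img.shape[1]]
    blurred /= 9.0
    # Discard the in-between pixels: their information is already folded in.
    return blurred[::2, ::2]

print(box_downscale_2x(np.arange(16).reshape(4, 4)))
```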

绅刃 2024-07-27 01:09:34


Whether we're upscaling or downscaling, we need to determine (to some degree of accuracy) what the colour value at a point between two pixels will be.

Let's take a single row of pixels:

P     P     P     P     P     P     P     P     P

When we upsample, we want to know the pixel values to use at the in-between points:

P   P   P   P   P   P   P   P   P   P   P   P   P

And when we downsample, we also want to know the pixel values to use at the in-between points:

P       P       P       P       P       P       P

(Of course, we want to do this in two dimensions rather than one, but it's the same principle.)

So regardless, we need to interpolate to determine the right sample value. Depending on how accurate we want the results to be, there are different interpolation techniques. Ideally, we'd be properly resampling with all the maths involved... but even that is just interpolation done rigorously!
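A minimal numpy sketch of sampling at those in-between points, using plain linear interpolation as a stand-in technique; the same function handles both diagrams above (9 samples up to 13, or down to 7):

```python
import numpy as np

def resample_row(row, new_size):
    """Resample a row of pixels to `new_size` samples by linearly
    interpolating at the in-between positions."""
    row = np.asarray(row, dtype=float)
    old_centers = np.arange(len(row)) + 0.5             # original sample positions
    new_centers = (np.arange(new_size) + 0.5) * len(row) / new_size
    # np.interp evaluates the piecewise-linear interpolant at each new
    # center; the same call works for upsampling and downsampling alike.
    return np.interp(new_centers, old_centers, row)

row = [10, 20, 30, 40, 50, 60, 70, 80, 90]
print(resample_row(row, 13))  # upsample: values at the in-between points
print(resample_row(row, 7))   # downsample: also values at in-between points
```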

醉梦枕江山 2024-07-27 01:09:34


If you use a windowed sinc filter, such as Lanczos, it actually filters out high-frequency details that cannot be represented at the lower resolution. An averaging filter doesn't do this, causing artifacts. A sinc filter also produces a sharper image, and works for both upscaling and downscaling.

If you were to upscale an image with sinc, then downscale it back to the original size, you would get almost exactly the same image back, whereas if you just averaged the pixels when downsizing, you would end up with something slightly blurrier than the original. If you used a Fourier transform to resize, which the windowed sinc tries to approximate, you would get the exact original image back, apart from rounding errors.

Some people don't like the slight ringing around sharp edges that comes from using a sinc filter, though. I'd suggest averaging for downscaling vector graphics, and sinc for downscaling photos.
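For reference, a sketch of a 1-D Lanczos (windowed-sinc) resampler; the choice of a = 3 lobes and the per-output weight normalisation are common conventions, not something this answer specifies:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos-windowed sinc: sinc(x) * sinc(x / a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)  # np.sinc is the normalised sin(pi x)/(pi x)
    return np.where(np.abs(x) < a, out, 0.0)

def lanczos_resample(row, new_size, a=3):
    """Resample a 1-D signal with a Lanczos filter. When downscaling, the
    kernel is stretched by the scale factor so it also removes the high
    frequencies that the lower resolution cannot represent."""
    row = np.asarray(row, dtype=float)
    old_size = len(row)
    scale = old_size / new_size
    stretch = max(scale, 1.0)            # widen the kernel only when downscaling
    out = np.empty(new_size)
    for j in range(new_size):
        center = (j + 0.5) * scale - 0.5           # in original sample coordinates
        i = np.arange(old_size)
        w = lanczos_kernel((i - center) / stretch, a)
        out[j] = np.dot(w, row) / w.sum()          # normalise weights to sum to 1
    return out

# A sharp edge: the output shows the slight ringing mentioned above.
row = np.array([0, 0, 0, 255, 255, 255, 0, 0, 0], dtype=float)
print(lanczos_resample(row, 5))
```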
