How can I use gradients to distribute the color intensity of two images?
I am working on an automatic image stitching algorithm using MATLAB. So far, I have downloaded source code much like what I had in mind, so I'm currently studying how the code works.
The problem is, when stitching two or more images together, their color intensities will most probably differ from each other, so the stitched seams will be visible to the eye... So, right now, I'm trying to find out how to redistribute their color intensity using the image gradients so that the whole stitched image has a uniform color intensity.
I hope someone can help me out there and if so, thank you very much...
Comments (1)
If the images overlap by a significant amount, and the stitching algorithm does a very good job of registering the overlap region, a very simple solution would be to blend the pixel values from the two images together in the overlap region, using a weighted average with weights that run from 0 to 1 depending on the distance from the edge of the overlap region:
    blendedPixel = weightA * pixelA + weightB * pixelB

where weightA approaches 1 as we get closer to the imageA side of the overlap region, weightB approaches 1 as we get closer to the imageB side of the overlap region, and the sum of weightA and weightB is always 1.
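For what it's worth, here is a minimal MATLAB sketch of that feathering scheme. It assumes the two images are already registered, have the same height, and overlap in a known band of w columns; the file names and the overlap width are placeholders you would substitute with your own.

    % Linear feathering across a known horizontal overlap of w columns.
    imageA = im2double(imread('left.jpg'));    % placeholder file names
    imageB = im2double(imread('right.jpg'));
    w = 100;                                   % assumed overlap width in pixels

    % weightA falls from 1 to 0 across the overlap; weightB is its complement.
    h = size(imageA, 1);
    c = size(imageA, 3);
    weightA = repmat(linspace(1, 0, w), [h, 1, c]);
    weightB = 1 - weightA;

    blended = weightA .* imageA(:, end-w+1:end, :) ...
            + weightB .* imageB(:, 1:w, :);

    stitched = [imageA(:, 1:end-w, :), blended, imageB(:, w+1:end, :)];
    imshow(stitched);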
The above solution is not particularly principled, and does depend on the stitching algorithm doing a very good job of image registration in the overlap region.
Another, more principled solution to the problem would be to remove the source of the intensity difference, attempting to homogenize the response of the pixels across the image plane.
The form of this solution will depend on the source of the intensity difference, which will depend on the optics and the scene lighting conditions.
For example, when dealing with photographs of outdoor scenes taken at the same time from the same location, the dominant effect will likely be "vignetting", which can be due to a variety of causes, including differences between the various paths the light takes through the camera optics.
As another example, when dealing with photographs taken through a microscope of a sample illuminated at an oblique angle, the dominant effect will likely be due to the difference in illumination between those parts of the image closest to the light and those far away.
Vignetting generally manifests itself as a radially symmetric function centred around the projection of the optical axis of the lens onto the image plane. To correct for vignetting, you should try to fit a suitable radially symmetric function.
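As a rough illustration, the following MATLAB sketch fits an even polynomial in the normalised squared radius by least squares and divides it out. It assumes the optical axis projects to the image centre, and it works best on a shot of a uniformly lit target; on an ordinary scene it only captures the average radial falloff. The file name is a placeholder.

    % Fit a radially symmetric gain g(r) = c0 + c1*r^2 + c2*r^4 and remove it.
    I = im2double(rgb2gray(imread('calib.jpg')));  % placeholder calibration shot
    [h, wd] = size(I);
    [X, Y] = meshgrid(1:wd, 1:h);
    r2 = (X - wd/2).^2 + (Y - h/2).^2;             % squared radius from centre
    r2 = r2 / max(r2(:));                          % normalise for conditioning

    A = [ones(numel(I), 1), r2(:), r2(:).^2];      % design matrix in r^2, r^4
    coeffs = A \ I(:);                             % least-squares fit
    gain = reshape(A * coeffs, h, wd);

    corrected = I ./ gain;                         % divide out the vignette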
Lighting changes can take different functional forms, but fitting a straightforward linear approximation is sufficient in many cases.
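In the simplest case that means fitting a plane I ≈ a*x + b*y + c to the intensities and removing it, along these lines (again only a sketch, with a placeholder file name):

    % Fit a linear illumination ramp by least squares and remove it.
    I = im2double(rgb2gray(imread('sample.jpg'))); % placeholder input
    [h, wd] = size(I);
    [X, Y] = meshgrid(1:wd, 1:h);

    A = [X(:), Y(:), ones(numel(I), 1)];
    p = A \ I(:);                         % plane coefficients [a; b; c]
    ramp = reshape(A * p, h, wd);

    flattened = I - ramp + mean(ramp(:)); % remove the tilt, keep the mean level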
Depending upon the scene, and the number and variability of the images that you have available, you may need to take calibration images to fit these functions properly.
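For instance, with a microscope you might photograph an empty, uniformly lit field and use it as a flat-field reference, roughly as follows (grayscale assumed; the file names are placeholders):

    % Flat-field correction from a calibration shot of a blank target.
    raw  = im2double(imread('tile.png'));          % placeholder file names
    flat = im2double(imread('blank_field.png'));   % same optics and lighting
    flat = imfilter(flat, fspecial('gaussian', 25, 5), 'replicate'); % denoise
    corrected = raw ./ flat * mean(flat(:));       % normalise pixel response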
The above approaches make assumptions about the functional forms of the sources of the intensity differences, but not about the scene or its statistics.
Yet another approach might be to make some assumptions about the scene, for example, that all significant information is represented by spatial frequencies above some threshold. You can then remove all low-spatial-frequency components of the image intensity. This will "flatten" the image, removing much of the low-frequency vignetting and lighting issues.
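One cheap way to do this is to subtract a heavily blurred copy of the image (its low-frequency content) and add the global mean back, for example (the cutoff scale sigma is an assumption you would tune):

    % "Flatten" the image by removing low spatial frequencies.
    I = im2double(rgb2gray(imread('scene.jpg')));  % placeholder input
    sigma = 50;                                    % assumed cutoff scale, in pixels
    g = fspecial('gaussian', 6 * sigma + 1, sigma);
    low = imfilter(I, g, 'replicate');             % low-frequency component
    flattened = I - low + mean(I(:));              % keep the detail plus mean level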
This approach might be applicable to microscopy images, satellite images, or images of other scenes where most of the interest lies in the detail rather than in the drama of the composition.
There are a number of papers that tackle this problem, many at a level of technical sophistication rather beyond the above discussion. For example, see D. Goldman, "Vignette and Exposure Calibration and Compensation", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2276-2288, 2010.