Image processing: exposure-fused image is washed out
I am trying to replicate the paper by T. Mertens et al. [1], in which the authors present a method to fuse multiple pictures captured with different camera exposures into a single "better"-exposed picture. Matlab demo code for the paper is also available [2].
The method is fairly simple: you compute a per-pixel weight map for each input image, then combine the images using the weight maps and a Laplacian/Gaussian pyramid blending approach to prevent blending artifacts.
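One common source of washed-out fusion results is weight maps that do not sum to 1 at every pixel before blending. A minimal sketch of that normalization step (plain C++, single-channel images stored as flat float vectors; the helper name and data layout are my own, not taken from the paper's code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Normalize N weight maps so that, at every pixel, the weights sum to 1.
// 'weights' holds one flat map per input exposure; all maps share 'size'.
void normalizeWeights(std::vector<std::vector<float>>& weights, std::size_t size) {
    const float eps = 1e-12f;  // guard against pixels where all weights are zero
    for (std::size_t p = 0; p < size; ++p) {
        float sum = eps;
        for (const auto& w : weights) sum += w[p];
        for (auto& w : weights) w[p] /= sum;
    }
}
```

If this step is skipped or applied after the pyramid decomposition instead of before it, the blended pixel values can exceed the valid range and the result looks brighter overall.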
I have basically ported the Matlab code to C++ but the resulting images look washed out compared to the Matlab implementation (images: http://imageshack.us/photo/my-images/204/exposuresample.jpg/).
I have already compared different steps in the processing workflow of my C++ port, and these seem to be okay. There seems to be something wrong with my pyramid processing.
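For reference, the build/collapse structure to check against looks roughly like the following. This is only a 1-D sketch in plain C++, not the actual port: it uses pair-averaging and sample-duplication as stand-ins for the separable 5-tap kernel, but the invariant is the same — collapsing the pyramid must reproduce the input exactly, and any residual error shows up as a brightness or contrast shift in the fused image:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Signal = std::vector<float>;

// Halve resolution by averaging adjacent pairs (length assumed even).
Signal down(const Signal& s) {
    Signal d(s.size() / 2);
    for (std::size_t i = 0; i < d.size(); ++i)
        d[i] = 0.5f * (s[2 * i] + s[2 * i + 1]);
    return d;
}

// Double resolution by duplicating samples.
Signal up(const Signal& s) {
    Signal u(s.size() * 2);
    for (std::size_t i = 0; i < s.size(); ++i)
        u[2 * i] = u[2 * i + 1] = s[i];
    return u;
}

// A Laplacian pyramid: band-pass levels plus the final low-pass residual.
struct Pyramid {
    std::vector<Signal> laplacian;
    Signal residual;
};

Pyramid build(Signal g, int levels) {
    Pyramid p;
    for (int l = 0; l < levels; ++l) {
        Signal low = down(g), back = up(low);
        Signal lap(g.size());
        for (std::size_t i = 0; i < g.size(); ++i) lap[i] = g[i] - back[i];
        p.laplacian.push_back(lap);
        g = low;
    }
    p.residual = g;
    return p;
}

// Collapse: upsample the residual and add back each Laplacian level,
// finest level last.
Signal collapse(const Pyramid& p) {
    Signal g = p.residual;
    for (auto it = p.laplacian.rbegin(); it != p.laplacian.rend(); ++it) {
        g = up(g);
        for (std::size_t i = 0; i < g.size(); ++i) g[i] += (*it)[i];
    }
    return g;
}
```

A worthwhile unit test on the real port is exactly this round trip: build a pyramid from a single image with all weights equal to 1 and collapse it; if the output differs from the input, the bug is in the pyramid code rather than the weighting.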
Does anyone with an image-processing background have a suggestion or an idea of what could cause the washed-out result?
Regards,
[1] http://research.edm.uhasselt.be/%7Etmertens/exposure_fusion/
[2] http://research.edm.uhasselt.be/%7Etmertens/exposure_fusion/exposure_fusion.zip
1 Answer
It appears as though the second image is either offset by some constant, effectively causing it to appear 'brighter' and to saturate in very bright areas, or multiplied by a constant, causing it to saturate in some areas. You can test this by checking the values of a few pixels you expect to be black: if the expected black is indeed black, the error is multiplicative; if the blacks are lifted as well, it is an offset. I cannot make it out from the image you attached.
My bet would be on the first case, though.
To debug this, I would check throughout the algorithm whether any pixel operation produces values above 255 (or above 1, depending on whether you work with integers or doubles) and work from there. Or, for a quick-and-dirty check, see whether you can correct the final image by subtracting a constant or dividing by a small factor (1.3 or so).
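That range check is easy to automate. A small sketch (plain C++, float pixels assumed to lie in [0, 1]; the helper name is hypothetical) that can be called after each stage of the pipeline to find where saturation is first introduced:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Count pixels outside the expected [0, 1] range. Call this after each
// stage (weight computation, normalization, each pyramid level, collapse)
// to locate the first stage that pushes values out of range.
std::size_t countOutOfRange(const std::vector<float>& img) {
    std::size_t n = 0;
    for (float v : img)
        if (v < 0.0f || v > 1.0f) ++n;
    return n;
}
```

The first stage where the count jumps from zero is where the offset or scaling error is introduced.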