I thought this was going to be easier but after a while I'm finally giving up on this, at least for a couple of hours...
I wanted to reproduce a trailing-stars image from a time-lapse set of pictures. Inspired by this:
The original author used low-resolution video frames taken with VirtualDub and combined them with ImageJ. I imagined I could easily reproduce this process, but with a more memory-conscious approach in Python, so I could use the original high-resolution images for a better output.
The idea of my algorithm is simple: merge two images at a time, then iterate by merging the resulting image with the next one. This is done some hundreds of times, with proper weighting so that every image contributes equally to the final result.
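In other words, blending with alpha = 1/(i+1) keeps the running result equal to the plain average of everything seen so far:

result = (1 - 1/(i+1)) * avg_of_first_i + (1/(i+1)) * image_(i+1)
       = (i * avg_of_first_i + image_(i+1)) / (i + 1)

which is exactly the average of the first i+1 images.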
I'm fairly new to Python (and I'm no professional programmer, as will be evident), but looking around, it appears to me that the Python Imaging Library is pretty standard, so I decided to use it (correct me if you think something else would be better).
Here's what I have so far:
# program to blend many images into one
import os, Image

files = os.listdir("./")
finalimage = Image.open("./" + files[0])  # start with the first image
for i in range(1, len(files)):  # skips files[0] but goes all the way to the last file
    currentimage = Image.open("./" + files[i])
    # alpha is 1/(i+1), so when finalimage is already a combination of i images,
    # each new addition contributes only 1/(i+1) of the result
    finalimage = Image.blend(finalimage, currentimage, 1 / float(i + 1))
    print "\r" + str(i + 1) + "/" + str(len(files))  # lousy progress indicator
finalimage.save("allblended.jpg", "JPEG")
This does what it's supposed to, but the resulting image is dark, and if I simply try to enhance it, it's evident that information was lost due to the lack of depth in the pixel values. (I'm not sure what the proper term is here: color depth, color precision, pixel size?)
Here's the final result using low-resolution images:
or one I was trying with the full 4k-by-2k resolution (from another set of photos):
So, I tried to fix it by setting the image mode:
firstimage = Image.open("./" + files[0])
size = firstimage.size
finalimage = Image.new("I", size)
but apparently Image.blend does not accept that image mode.
ValueError: image has wrong mode
Any ideas?
(I also tried making the images "less dark" by multiplying them with im.point(lambda i: i * 2) before combining them, but the results were just as bad.)
The problem here is that you are averaging the brightness at each pixel. This may seem sensible, but it is actually not what you want at all -- the bright stars will get "averaged away" because they move across the image. Take the following four frames:
If you average those, you will get:
When what you actually want is:
Instead of blending the images, you can try taking the maximum value seen at each pixel across all the images. If you have PIL, you can use the lighter function from ImageChops.
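Something along these lines should do it (a quick sketch in the same old-PIL style as your script; the directory listing and output filename are just placeholders):

# keep, for every pixel, the brightest value seen in any frame
import os, Image, ImageChops

files = sorted(os.listdir("./"))  # placeholder: assumes the folder holds only the frames
finalimage = Image.open("./" + files[0])
for name in files[1:]:
    currentimage = Image.open("./" + name)
    # lighter() takes the pixel-wise maximum of the two images
    finalimage = ImageChops.lighter(finalimage, currentimage)
finalimage.save("alllighter.jpg", "JPEG")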
Here is what I got:
EDIT: I read the Reddit post and see that he actually combined two approaches -- one for the star trails and a different one for the Earth. Here is a better implementation of the averaging you tried, with proper weighting. I used a numpy array for the intermediate storage instead of the uint8 Image array.
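It boils down to something like this (a sketch rather than my exact script; the filenames and output path are placeholders):

# average all frames at full precision, converting back to 8 bits only once at the end
import os
import numpy
import Image

files = sorted(os.listdir("./"))  # placeholder: assumes the folder holds only the frames
total = numpy.zeros(numpy.asarray(Image.open("./" + files[0])).shape, dtype=numpy.float64)
for name in files:
    # accumulate in float64 so nothing is rounded away step by step
    total += numpy.asarray(Image.open("./" + name), dtype=numpy.float64)
average = total / len(files)
Image.fromarray(numpy.uint8(numpy.round(average))).save("average.jpg", "JPEG")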
Here is the image, which you could then combine with the star trails from the previous one.