How to merge multiple TIF files with rioxarray in Python without killing the kernel
I am working with high-resolution NOAA RGB aerial imagery from Hurricane Ida. The data is available here under Download - any of the date/times will download a large number of TIFs: https://storms.ngs.noaa.gov/storms/ida/index.html#18/29.46140/-90.30946
I want to merge multiple TIFs along (x, y, bands) to create a complete 'mosaic', identify any missing patches of imagery, and then create a building-outline mask.
I have a function that can merge 20 TIFs, but above that it kills the kernel. Is there a better way to merge more? Ideally I would want 40 in one image.
import rioxarray
from rioxarray import merge
from rasterio.plot import show
import matplotlib.pyplot as plt
from datetime import datetime


def combine_tif_large(file_list, title=''):
    """Plot the combined image of a set of tif files - chunks the long file list
    into groups of 50 (n) to prevent datasets closing.

    file_list = list of file paths for files to be merged
    title = title for plot
    """
    n = 50
    fig, ax = plt.subplots(figsize=(20, 20))
    chunked_files = [file_list[i:i + n] for i in range(0, len(file_list), n)]
    final = []
    for chunk in chunked_files:
        now = datetime.now()
        current_time = now.strftime("%H:%M:%S")
        print("Current Time =", current_time)
        print('starting chunk..')
        elements = []
        for val in chunk:
            elements.append(rioxarray.open_rasterio(val))
        print('finish chunk')
        print('starting merge')
        merged = merge.merge_arrays(elements, nodata=0.0)
        final.append(merged)
    print('starting final merge')
    merge_final = merge.merge_arrays(final, nodata=0.0)
    image = merge_final.values
    show(image, ax=ax, title=title)
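One lower-memory direction, offered only as a hedged sketch rather than a drop-in fix, is to avoid materialising every tile as an in-memory array before merging: build a GDAL virtual mosaic (VRT) that merely references the TIFs on disk, then open it with rioxarray using dask chunks so pixels are read lazily. The helper name build_virtual_mosaic, the output name mosaic.vrt, and the chunk sizes below are illustrative assumptions, not part of the original post.

# A minimal sketch, assuming the GDAL Python bindings and dask are installed.
# gdal.BuildVRT writes a small XML file that references the source TIFs instead
# of copying their pixels, so memory use stays flat regardless of tile count.
from osgeo import gdal
import rioxarray

def build_virtual_mosaic(file_list, vrt_path="mosaic.vrt"):
    """Create a virtual mosaic of file_list and open it lazily with rioxarray."""
    vrt = gdal.BuildVRT(vrt_path, file_list, srcNodata=0.0, VRTNodata=0.0)
    vrt = None  # dereference to flush the VRT to disk
    # chunks=... returns a dask-backed DataArray; nothing is read until you
    # call .compute(), access .values, or plot a subset.
    return rioxarray.open_rasterio(vrt_path, chunks={"x": 4096, "y": 4096})

mosaic = build_virtual_mosaic(file_list)
# Coarsen before plotting so only a downsampled preview is pulled into memory;
# a single-band preview is enough to spot missing imagery patches.
preview = mosaic.coarsen(x=10, y=10, boundary="trim").mean()
preview.isel(band=0).plot.imshow(figsize=(20, 20))

Because the VRT is just an index of the source files, the full mosaic is never held in RAM; reads only happen for whatever window or downsampled preview is actually requested, which should make it easier to check coverage before committing to a full-resolution merge or a building-outline mask.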