dask: reading from HDF5 and writing to another HDF5 file
I am working with an HDF5 file that is larger than memory. Therefore, I'm trying to use dask to modify it. My goal is to load the file, do some modifications (not necessarily preserving shape), and save the result to some other file. I create my file with:
import h5py as h5
import numpy as np

source_file = "source.hdf5"
x = np.zeros((3, 3))  # In practice, x will be larger than memory

with h5.File(source_file, "w") as f:
    f.create_dataset("/x", data=x, compression="gzip")
Then, I use the following code to load, modify and save it.
from dask import array as da
import h5py as h5
from dask.distributed import Client

if __name__ == "__main__":
    dask_client = Client(n_workers=1)  # No need to parallelize, just interested in dask for memory purposes

    source_file = "source.hdf5"
    temp_filename = "target.hdf5"

    # Load the dataset as a dask array
    f = h5.File(source_file, "r")
    x_da = da.from_array(f["/x"])

    # Do some modifications
    x_da = x_da * 2

    # Save to target
    x_da.to_hdf5(temp_filename, "/x", compression="gzip")

    # Close original file
    f.close()
However, this gives the following error:
TypeError: ('Could not serialize object of type Dataset.', '<HDF5 dataset "x": shape (3, 3), type "<f8">')
distributed.comm.utils - ERROR - ('Could not serialize object of type Dataset.', '<HDF5 dataset "x": shape (3, 3), type "<f8">')
Am I doing something wrong, or is this simply not possible? And if so, is there some workaround?
Thanks in advance!
Comments (1)
For anyone interested, I created a workaround that simply calls compute() on each block. I'm just sharing it, although I'm still interested in a better solution.
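Below is a minimal sketch of what such a block-wise workaround can look like (my reconstruction under stated assumptions, not the original poster's exact code). It assumes a shape-preserving modification and the default threaded scheduler, i.e. no distributed Client, so the open h5py Dataset never needs to be pickled. The target dataset is pre-created, and each chunk is computed and written one at a time, so peak memory stays at roughly one chunk.

import h5py as h5
import numpy as np
from dask import array as da

source_file = "source.hdf5"
temp_filename = "target.hdf5"

with h5.File(source_file, "r") as source, h5.File(temp_filename, "w") as target:
    # Chunk size is illustrative; in practice, pick chunks that fit in memory.
    x_da = da.from_array(source["/x"], chunks=(1, 3))

    # Lazy modification; nothing is computed yet.
    x_da = x_da * 2

    out = target.create_dataset(
        "/x", shape=x_da.shape, dtype=x_da.dtype, compression="gzip"
    )

    # Materialize one block at a time and write it into the matching
    # slice of the output dataset.
    for block_id in np.ndindex(*x_da.numblocks):
        block = x_da.blocks[block_id].compute()
        slices = tuple(
            slice(sum(sizes[:i]), sum(sizes[:i]) + sizes[i])
            for sizes, i in zip(x_da.chunks, block_id)
        )
        out[slices] = block

Note that without a Client in the picture, x_da.to_hdf5(temp_filename, "/x", compression="gzip") on the default threaded scheduler should also avoid the serialization error, since all worker threads live in the process that holds the file handle; the explicit loop above just makes the per-block compute() visible.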