dask: read an HDF5 file and write to another HDF5 file

Posted on 2025-02-13 16:02:05


I am working with an HDF5 file that is larger than memory. Therefore, I'm trying to use dask to modify it. My goal is to load the file, do some modifications (not necessarily preserving shape), and save it to some other file. I create my file with:

import h5py as h5
import numpy as np

source_file = "source.hdf5"
x = np.zeros((3, 3))  # In practice, x will be larger than memory
with h5.File(source_file, "w") as f:
    f.create_dataset("/x", data=x, compression="gzip")

Then, I use the following code to load, modify and save it.

from dask import array as da
import h5py as h5
from dask.distributed import Client


if __name__ == "__main__":
    dask_client = Client(n_workers=1)  # No need to parallelize, just interested in dask for memory purposes

    source_file = "source.hdf5"
    temp_filename = "target.hdf5"

    # Load source array
    f = h5.File(source_file, "r")
    x_da = da.from_array(f["/x"])

    # Do some modifications
    x_da = x_da * 2

    # Save to target
    x_da.to_hdf5(temp_filename, "/x", compression="gzip")

    # Close original file
    f.close()

However, this gives the following error:

TypeError: ('Could not serialize object of type Dataset.', '<HDF5 dataset "x": shape (3, 3), type "<f8">')
distributed.comm.utils - ERROR - ('Could not serialize object of type Dataset.', '<HDF5 dataset "x": shape (3, 3), type "<f8">')
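Presumably this is the distributed scheduler failing to pickle the task graph, which still references the open h5py Dataset wrapped by da.from_array; as far as I know, open h5py objects cannot be pickled at all. A quick standalone check of that assumption, outside dask (the filename here is just a throwaway):

import pickle

import h5py as h5
import numpy as np

# Hypothetical standalone check: an open h5py Dataset cannot be pickled,
# which is essentially what distributed's comm layer reports above.
with h5.File("pickle_check.hdf5", "w") as f:
    dset = f.create_dataset("/x", data=np.zeros((3, 3)))
    try:
        pickle.dumps(dset)
    except TypeError as err:
        print(err)  # e.g. "h5py objects cannot be pickled"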

Am I doing something wrong, or is this simply not possible? And if so, is there some workaround?

Thanks in advance!


Comments (1)

过气美图社 2025-02-20 16:02:06


For anyone interested, I created a workaround which simply calls compute() on each block. Just sharing it, although I'm still interested in a better solution.

from itertools import product

import h5py as h5


def to_hdf5(x, filename, datapath):
    """
    Appends dask array to hdf5 file
    """
    with h5.File(filename, "a") as f:
        dset = f.require_dataset(datapath, shape=x.shape, dtype=x.dtype)

        # Iterate over every chunk of the dask array by its block index.
        for block_ids in product(*[range(num) for num in x.numblocks]):
            # Offset of this block along each dimension.
            pos = [sum(x.chunks[dim][0 : block_ids[dim]]) for dim in range(len(block_ids))]
            block = x.blocks[block_ids]
            # Target slice in the on-disk dataset, then compute and write just this block.
            slices = tuple(slice(pos[i], pos[i] + block.shape[i]) for i in range(len(block_ids)))
            dset[slices] = block.compute()
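
A minimal sketch of how the helper might be used on the example from the question (the chunk size and filenames are just illustrative placeholders, and it assumes the default threaded scheduler rather than a distributed Client, since the Dataset in the graph still cannot be pickled; the source file has to stay open until every block has been computed):

import dask.array as da
import h5py as h5

# Assumed usage, reusing the files from the question; chunks=(1, 3) is arbitrary.
with h5.File("source.hdf5", "r") as f:
    x_da = da.from_array(f["/x"], chunks=(1, 3)) * 2
    to_hdf5(x_da, "target.hdf5", "/x")  # one compute() and one write per block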