Loading a large number of images into memory and saving them to a pickle


I have a problem...

I have a dataset with 1200 cases, 30 classes per case, and 160 images per class. The images are grayscale ndarrays with float64 dtype.

I would like to slice each case, keep only 30 images from each class, and put them in a dictionary where the first key is the case name and the second key is the class name. After all of this I would like to save the whole dictionary to a pickle.
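
Roughly, the structure I have in mind looks like this (a minimal sketch; the case name, class names, and 240x240 slice size are just illustrative):

import pickle
import numpy as np

# brain_data[case_name][class_name] -> stack of 30 slices
brain_data = {
    'case_001': {
        'flair': np.zeros((240, 240, 30)),  # assumed slice size
        't1ce':  np.zeros((240, 240, 30)),
    },
}

with open('brain_data.pkl', 'wb') as f:
    pickle.dump(brain_data, f)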

But I keep running out of memory.
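
A back-of-envelope estimate suggests why (assuming 240x240 slices, as in typical BraTS-style volumes, and keeping only the two modalities the code below selects):

n_cases = 1200
n_modalities = 2                # 'flair' and 't1ce'
n_slices = 30                   # the [:, :, 75:105] slab
slice_bytes = 240 * 240 * 8     # float64
total = n_cases * n_modalities * n_slices * slice_bytes
print(total / 1e9)              # ~33 GB -- far more than typical RAM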

import nibabel as nib
from pathlib import Path

path = Path('/data/brains')        # root with one subfolder per case (assumed)
path_save = Path('/data/output')   # where per-case output folders go (assumed)

brain_all = []

for case_dir in path.iterdir():
    brain_sample = {}
    path_dir = path_save / case_dir.name
    try:
        path_dir.mkdir(parents=True, exist_ok=False)
    except FileExistsError:  # was misspelled 'FileExistsErorr'
        print('Folder is already there')
    for file in case_dir.iterdir():
        # keep a 30-slice axial slab from each volume
        sample = nib.load(file).get_fdata()[:, :, 75:105]
        if 'flair' in file.name:
            brain_sample['flair'] = sample
        elif 't1ce' in file.name:
            brain_sample['t1ce'] = sample
    # use the case name as the key, not the last file of the inner loop
    brain_all.append([case_dir.name, brain_sample])
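
One way to keep memory bounded (a minimal sketch, untested, reusing the assumed path and path_save roots from above): write one pickle per case instead of accumulating everything in brain_all, and downcast to float32 to halve the footprint.

import pickle
import nibabel as nib

for case_dir in path.iterdir():
    out_dir = path_save / case_dir.name
    out_dir.mkdir(parents=True, exist_ok=True)
    brain_sample = {}
    for file in case_dir.iterdir():
        sample = nib.load(file).get_fdata()[:, :, 75:105]
        if 'flair' in file.name:
            brain_sample['flair'] = sample.astype('float32')
        elif 't1ce' in file.name:
            brain_sample['t1ce'] = sample.astype('float32')
    with open(out_dir / 'sample.pkl', 'wb') as f:  # hypothetical filename
        pickle.dump(brain_sample, f)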
