Loading a large set of images into memory and saving them to a pickle
I have a problem...
I have a dataset with 1200 cases, 30 classes per case, and 160 images per class. The images are grayscale ndarrays with float64 dtype.
I would like to slice each case so that I keep only 30 images from each class, and put them in a nested dictionary where the first key is the case name and the second key is the class name. After all of this I would like to save the whole dictionary to a pickle.
But I run out of memory every time.
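A rough back-of-envelope estimate shows the scale of the problem (the 240×240 slice size below is an illustrative assumption; the actual image dimensions are not stated): keeping 30 slices from each of the 30 classes for all 1200 cases means 1200 × 30 × 30 × 240 × 240 pixels × 8 bytes (float64) ≈ 498 GB. Even the two classes the code below actually keeps ('flair' and 't1ce') come to 1200 × 2 × 30 × 240 × 240 × 8 ≈ 33 GB, so holding everything in one in-memory structure cannot fit on an ordinary machine.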
from pathlib import Path

import nibabel as nib

path = Path('data')          # placeholder: root folder with one sub-directory per case
path_save = Path('output')   # placeholder: destination folder for per-case output

brain_all = []
for case_dir in path.iterdir():
    brain_sample = {}
    path_dir = path_save / case_dir.name
    try:
        path_dir.mkdir(parents=True, exist_ok=False)
    except FileExistsError:
        print('Folder is already there')
    for file in case_dir.iterdir():
        # keep 30 axial slices from each volume
        sample = nib.load(file).get_fdata()[:, :, 75:105]
        if 'flair' in file.name:
            brain_sample['flair'] = sample
        elif 't1ce' in file.name:
            brain_sample['t1ce'] = sample
    # one entry per case
    brain_all.append([case_dir, brain_sample])
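One way to keep memory bounded is to write each case out as soon as it has been read, instead of accumulating everything in brain_all. The sketch below is one possible variant, not the original code: the path and path_save values and the one-pickle-per-case file naming are assumptions, and casting to float32 (half the size of float64) is an optional extra saving.

import pickle
from pathlib import Path

import nibabel as nib
import numpy as np

path = Path('data')          # placeholder: root folder with one sub-directory per case
path_save = Path('output')   # placeholder: destination folder
path_save.mkdir(parents=True, exist_ok=True)

for case_dir in path.iterdir():
    brain_sample = {}
    for file in case_dir.iterdir():
        # ask nibabel for float32 directly: half the memory of float64
        sample = nib.load(file).get_fdata(dtype=np.float32)[:, :, 75:105]
        if 'flair' in file.name:
            brain_sample['flair'] = sample
        elif 't1ce' in file.name:
            brain_sample['t1ce'] = sample
    # write each case to its own pickle, so at most one case is in memory
    with open(path_save / f'{case_dir.name}.pkl', 'wb') as f:
        pickle.dump(brain_sample, f, protocol=pickle.HIGHEST_PROTOCOL)

If even one full volume at a time is too much, nibabel's array proxy can be sliced before loading, e.g. np.asarray(nib.load(file).dataobj[:, :, 75:105], dtype=np.float32), which reads only the requested slices from disk instead of materialising the whole float64 volume first.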