Loading multiple datasets from multiple directories with flow_from_directory

Published 2025-02-11 12:48:28 · 1,394 characters · 4 views · 0 comments

I want to load multiple datasets from different directories to train a deep learning model for a semantic segmentation task. For example, I have images and masks from one dataset and different images and masks from another dataset, with the same file structure in the dataset1 and dataset2 folders:

train_images/
    train/
        img1, img2, img3 ..
train_masks/
    train/
        msk1, msk2, msk3 ..
val_images/
    val/
        img1, img2, img3 ..
val_masks/
    val/
        msk1, msk2, msk3 ..

I can build an image generator that combines the images and masks of one dataset with the code below. I would like to know how to build a generator that uses both dataset1 and dataset2.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

def trainGenerator(train_img_path, train_mask_path, num_class=1):
    img_data_gen_args = dict(horizontal_flip=True,
                             vertical_flip=True,
                             fill_mode='reflect')

    image_datagen = ImageDataGenerator(**img_data_gen_args)
    mask_datagen = ImageDataGenerator(**img_data_gen_args)

    # The same seed keeps image and mask batches in sync.
    image_generator = image_datagen.flow_from_directory(
        train_img_path,
        class_mode=None,
        batch_size=16,
        seed=123)

    mask_generator = mask_datagen.flow_from_directory(
        train_mask_path,
        class_mode=None,
        batch_size=16,
        seed=123)

    return zip(image_generator, mask_generator)

train_img_path = "dataset1/train_images/"
train_mask_path = "dataset1/train_masks/"

train_img_gen = trainGenerator(train_img_path, train_mask_path, num_class=1)
# get one batch of images and masks
x, y = next(train_img_gen)
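One way to use both dataset1 and dataset2 with this setup is to build one such paired generator per dataset and alternate batches between them. A minimal sketch of the round-robin idea, using plain Python iterators as stand-ins for the (infinite) Keras generators; `interleave` is a hypothetical helper, not a Keras API:

```python
from itertools import cycle

def interleave(*gens):
    # Round-robin: yield one batch from each generator in turn.
    # Also works with infinite generators such as those returned
    # by flow_from_directory, since cycle() never exhausts the tuple.
    for g in cycle(gens):
        yield next(g)

# Stand-ins for two (image, mask) batch generators
gen1 = iter([("img_a", "msk_a"), ("img_b", "msk_b")])
gen2 = iter([("img_c", "msk_c"), ("img_d", "msk_d")])

combined = interleave(gen1, gen2)
print(next(combined))  # ('img_a', 'msk_a')
print(next(combined))  # ('img_c', 'msk_c')
```

Note that with two sources of different sizes, round-robin over infinite Keras generators effectively oversamples the smaller dataset; whether that is desirable depends on the task.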


Comments (1)

迷荒 2025-02-18 12:48:28

Here is a way to do it using flow_from_dataframe. I created two train directories and two mask directories, each with 2 classes and 5 images per class. The code is below.

import os
import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def get_df(directory1, directory2):
    # Build a filepaths/labels dataframe for each directory, then stack them.
    frames = []
    for d in [directory1, directory2]:
        filepaths = []
        labels = []
        classlist = os.listdir(d)
        for klass in classlist:
            classpath = os.path.join(d, klass)
            flist = os.listdir(classpath)
            for f in flist:
                fpath = os.path.join(classpath, f)
                filepaths.append(fpath)
                labels.append(klass)
        Fseries = pd.Series(filepaths, name='filepaths')
        Lseries = pd.Series(labels, name='labels')
        frames.append(pd.concat([Fseries, Lseries], axis=1))
    df = pd.concat(frames, axis=0).reset_index(drop=True)
    return df

# combine the training directories
directory1 = r'C:\Temp\demo\train1'
directory2 = r'C:\Temp\demo\train2'
train_df = get_df(directory1, directory2)
print(len(train_df))

# combine the mask directories
directory1 = r'C:\Temp\demo\mask1'
directory2 = r'C:\Temp\demo\mask2'
mask_df = get_df(directory1, directory2)
print(len(mask_df))

img_size = (256, 256)
img_data_gen_args = dict(horizontal_flip=True,
                         vertical_flip=True,
                         fill_mode='reflect')
datagen = ImageDataGenerator(**img_data_gen_args)

image_generator = datagen.flow_from_dataframe(
    train_df, x_col='filepaths', y_col=None, target_size=img_size,
    class_mode=None, batch_size=16, shuffle=True, seed=123)
mask_generator = datagen.flow_from_dataframe(
    mask_df, x_col='filepaths', y_col=None, target_size=img_size,
    class_mode=None, batch_size=16, shuffle=True, seed=123)

gen = zip(image_generator, mask_generator)
image, mask = next(gen)
print(image.shape, mask.shape)
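One caveat with this approach: os.listdir returns entries in arbitrary order, and the image and mask dataframes only pair up row-for-row if both are built in the same order. A sketch of a sorted variant (`get_df_sorted` is a hypothetical rename, not part of the answer above) that makes the pairing deterministic:

```python
import os
import pandas as pd

def get_df_sorted(directory1, directory2):
    # Same logic as get_df, but with sorted() on both listdir calls
    # so the image and mask dataframes line up row-for-row.
    frames = []
    for d in (directory1, directory2):
        filepaths, labels = [], []
        for klass in sorted(os.listdir(d)):
            classpath = os.path.join(d, klass)
            for f in sorted(os.listdir(classpath)):
                filepaths.append(os.path.join(classpath, f))
                labels.append(klass)
        frames.append(pd.DataFrame({'filepaths': filepaths, 'labels': labels}))
    return pd.concat(frames, axis=0).reset_index(drop=True)
```

As long as image and mask files share the same class and file names across their directory trees, the two dataframes built this way stay aligned even before shuffling (which the matching seed then keeps in sync).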
