Batch size cannot be greater than 1 after using a custom loss function

Published 2025-01-13 08:36:27


I have a custom loss function for an LSTM. The model runs well when the batch size is 1, but when I increase the batch size it fails with the reshape error shown below (`Input to reshape is a tensor with 2 values, but the requested shape has 1`). What can be done to use a larger batch size? I appreciate your help.
from tensorflow.keras import backend as K

def custom_loss(sumP, sumE):

    def loss(y_true, y_pred):
        penalty = 0.69

        # Undo the standardisation, then convert units
        y_pred = (y_pred * data_train_std[-1]) + data_train_mean[-1]
        y_pred = y_pred * 3.54e-05
        y_pred1 = K.sum(y_pred, axis=-1)

        y_true = (y_true * data_train_std[-1]) + data_train_mean[-1]
        y_true = y_true * 3.54e-05
        y_true1 = K.sum(y_true, axis=-1)

        # Add a penalty term when the predicted balance deviates from
        # the true balance by more than 1%
        if abs((sumP - sumE - y_pred1) - (sumP - sumE - y_true1)) <= abs(sumP - sumE - y_true1) * 0.01:
            return K.mean(K.abs(y_pred - y_true), axis=-1)
        else:
            return K.mean(K.abs(y_pred - y_true), axis=-1) + penalty * abs((sumP - sumE - y_pred1) - (sumP - sumE - y_true1))

    return loss





# CONFIGURE LSTM --------------------------------------------------------------

model = Sequential()
model.add(LSTM(80, activation='relu', input_shape=(tlag, nvar), return_sequences=True))
model.add(LSTM(60, activation='relu', return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(20, activation='relu', return_sequences=False))
model.add(Dropout(0.3))
model.add(Dense(1, activation='linear'))
# opt=optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0, amsgrad=False)
model.compile(optimizer='adam', loss=custom_loss(sump1, sume1), metrics=['mse', 'mae'])
model.summary()


# TRAIN LSTM ------------------------------------------------------------------

result = model.fit(Xtrain, Ytrain, epochs=50, batch_size=2, validation_split=0.3, shuffle=False, verbose=True)

Error:
Input to reshape is a tensor with 2 values, but the requested shape has 1
[[{{node loss/Reshape}}]] [Op:__inference_train_function_3093102]
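A likely culprit is the Python `if` inside the loss: it evaluates a symbolic tensor, which only works by accident when the summed quantities collapse to a single value at batch size 1. One possible repair, sketched below, keeps the branch inside the TensorFlow graph with `K.switch` and reduces over the whole batch so the comparison is a scalar for any batch size. This is a sketch of one reading of the intent, not the asker's confirmed fix; `sumP`, `sumE`, the scaling constants, and the keyword defaults standing in for `data_train_std[-1]` / `data_train_mean[-1]` are placeholders mirroring the question.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def custom_loss(sumP, sumE, std=1.0, mean=0.0, scale=3.54e-05, penalty=0.69):
    # std/mean/scale stand in for data_train_std[-1], data_train_mean[-1]
    # and the unit-conversion factor from the question.
    def loss(y_true, y_pred):
        # Undo standardisation and convert units, as in the original.
        y_pred = (y_pred * std + mean) * scale
        y_true = (y_true * std + mean) * scale

        # Reduce over the WHOLE batch so the comparison below is a
        # scalar regardless of batch size (the original's per-sample
        # K.sum(..., axis=-1) yields a vector once batch_size > 1).
        p_sum = K.sum(y_pred)
        t_sum = K.sum(y_true)

        base = K.mean(K.abs(y_pred - y_true), axis=-1)
        gap = K.abs((sumP - sumE - p_sum) - (sumP - sumE - t_sum))
        tol = K.abs(sumP - sumE - t_sum) * 0.01

        # K.switch replaces the Python `if`; the penalty is added only
        # when the gap exceeds 1% of the target balance.
        return K.switch(gap <= tol, base, base + penalty * gap)

    return loss
```

Since the condition is now a scalar, `K.switch` lowers to a graph-level conditional and the loss keeps its per-sample shape `(batch,)`, which is what Keras expects to reshape internally.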
