Why doesn't this simple Keras model converge?

Posted 2025-02-06 15:44:22

The loss function doesn't approach 0. The model doesn't seem to converge, and it consistently fails to predict y. I've tried playing with the initializer, the activation, and the layer sizes. Any insight here would be appreciated.

import tensorflow as tf
import keras

activation = 'relu'
initializer = 'he_uniform'
input_layer = tf.keras.layers.Input(shape=(1,),batch_size=1)
dense_layer = keras.layers.Dense(
    32,
    activation=activation,
    kernel_initializer=initializer
)(input_layer)
dense_layer = keras.layers.Dense(
    32,
    activation=activation,
    kernel_initializer=initializer
)(dense_layer)
predictions = keras.layers.Dense(1)(
    dense_layer
)

model = keras.models.Model(inputs=input_layer, outputs=[predictions])
model.summary()

optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)

x = tf.constant([[727.], [1424.], [379], [1777], [51.]])
y = tf.constant([[1.], [1.], [0.], [1.], [0.]])
for item in tf.data.Dataset.from_tensor_slices((x,y)).shuffle(5).repeat():

    with tf.GradientTape() as tape:
        x = item[0]
        output = model(x)
        loss = keras.losses.BinaryCrossentropy(
            from_logits=True
        )(item[1], output)
        # loss = item[1] - output[0]
        dy_dx = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(dy_dx, model.trainable_weights))
        print("batch", item[0], "x",  "output", output, "expected", item[1], "gradient", dy_dx[-1])

        print("loss", loss)

Comments (2)

樱娆 2025-02-13 15:44:23

Your input numbers are huge, which leads to numerical issues, and you are not batching your inputs, so each step produces very large gradients (again, due to the large input numbers) in possibly different directions. It works fine when I

  • add .batch(5) to the dataset definition (in fact, this simply replaces shuffle, since every batch then contains the full dataset) to improve the gradient estimates,
  • divide the inputs by 1000 to get them into a more reasonable range,
  • and then raise the learning rate (something as high as 0.1 works fine) to speed up training significantly.

This should converge very quickly.

import tensorflow as tf
import keras

activation = 'relu'
initializer = 'he_uniform'

# Same two-hidden-layer architecture as the question; the output head
# is wired to the hidden stack and emits raw logits (no activation).
input_layer = tf.keras.layers.Input(shape=(1,))
dense_layer = keras.layers.Dense(
    32,
    activation=activation,
    kernel_initializer=initializer
)(input_layer)
dense_layer = keras.layers.Dense(
    32,
    activation=activation,
    kernel_initializer=initializer
)(dense_layer)
predictions = keras.layers.Dense(1)(dense_layer)

model = keras.models.Model(inputs=input_layer, outputs=[predictions])
model.summary()

# With well-scaled inputs, a much higher learning rate is stable.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

# Divide by 1000 so the inputs are in a reasonable range.
x = tf.constant([[727.], [1424.], [379.], [1777.], [51.]]) / 1000.
y = tf.constant([[1.], [1.], [0.], [1.], [0.]])

loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)

# batch(5) puts the whole dataset into every batch, so shuffle() has
# nothing left to do. The loop runs until interrupted.
for step, (inputs, targets) in enumerate(
        tf.data.Dataset.from_tensor_slices((x, y)).batch(5).repeat()):
    with tf.GradientTape() as tape:
        output = model(inputs)
        loss = loss_fn(targets, output)
    # Compute and apply gradients outside the tape context.
    dy_dx = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(dy_dx, model.trainable_variables))
    if not step % 100:
        print("batch", inputs, "output", tf.nn.sigmoid(output),
              "expected", targets, "gradient", dy_dx[-1])
        print("loss", loss)

And note: using no activation function together with binary cross-entropy "from logits" is correct, so ignore people telling you otherwise.
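A quick standalone check of that point (made-up numbers, not from the question): BinaryCrossentropy(from_logits=True) on raw outputs gives the same loss as plain BinaryCrossentropy on sigmoid-activated outputs, just computed in a more numerically stable way.

import tensorflow as tf
import keras

logits = tf.constant([[2.0], [-1.5]])   # raw, unactivated model outputs
y_true = tf.constant([[1.0], [0.0]])

# Loss computed directly from logits...
bce_logits = keras.losses.BinaryCrossentropy(from_logits=True)(y_true, logits)
# ...matches loss computed from explicit sigmoid probabilities.
bce_probs = keras.losses.BinaryCrossentropy()(y_true, tf.sigmoid(logits))
print(float(bce_logits), float(bce_probs))  # two (nearly) identical values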

月光色 2025-02-13 15:44:23

Your output layer - predictions - is missing an activation. keras.layers.Dense has a default value of None for the activation parameter. From your code it looks like you are doing binary classification, so your output layer should have a 'sigmoid' activation.

At inference time, be sure to round the model's output to 0 or 1 to get the predictions.
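A minimal sketch of that inference step, assuming the trained model from the question (it outputs a raw logit because training used from_logits=True, so a sigmoid goes in front of the rounding; the sample value is made up):

import tensorflow as tf

sample = tf.constant([[0.9]])              # hypothetical, already-scaled input
prob = tf.nn.sigmoid(model(sample))        # probability of class 1
label = tf.cast(tf.round(prob), tf.int32)  # hard 0/1 prediction
print(float(prob), int(label))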
