Error unstacking a tensor in TensorFlow/Python when creating a custom loss function
I'm creating a weighted Gaussian loss function for use in a deep learning model, and I get the following error message when running model.fit on the training data:
ValueError: Dimension must be 2 but is 1 for '{{node gaussian_loss/unstack_1}} = Unpack[T=DT_FLOAT, axis=-1, num=2]' with input shapes: [?,1].
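For context, tf.unstack with num=2 along the last axis requires that axis to have size exactly 2, so a [?, 1] tensor can't supply it. A minimal reproduction (shapes are illustrative):

```python
import tensorflow as tf

# A [batch, 2] tensor unstacks cleanly into two [batch] tensors,
# which is why unstacking y_pred (two output units) works.
mu, sigma = tf.unstack(tf.zeros([4, 2]), num=2, axis=-1)
print(mu.shape)  # (4,)

# A [batch, 1] tensor -- the shape of y_true here -- cannot yield num=2 pieces.
try:
    tf.unstack(tf.zeros([4, 1]), num=2, axis=-1)
except Exception as err:  # ValueError in graph mode, InvalidArgumentError eagerly
    print("unstack failed:", type(err).__name__)
```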
Attached below is the code for the function:

```python
def w_g_l(weight_train, weight_test, train_num):
    def gaussian_loss(y_true, y_pred):
        if y_true.get_shape()[0] == train_num:
            weight = weight_train
        else:
            weight = weight_test
        mu, sigma = tf.unstack(y_pred, num=2, axis=-1)
        truevals, dummy = tf.unstack(y_true, num=2, axis=-1)
        mu = tf.expand_dims(mu, -1)
        sigma = tf.expand_dims(sigma, -1)
        truevals = tf.expand_dims(truevals, -1)
        nll = (
            tf.math.square(truevals - mu) / (2.0 * tf.math.square(sigma))
            + tf.math.log(sigma) + tf.math.log(weight)
        )
        return tf.math.reduce_mean(nll) - tf.math.reduce_mean(tf.math.log(weight))
    return gaussian_loss
```
Any clues? It seems to be an issue with the truevals, dummy = tf.unstack(y_true, num=2, axis=-1) line, but I'm unsure what specifically would fix it.
The model is below:

```python
def build_model():
    model = keras.Sequential([
        layers.Dense(units=2, input_dim=2, activation='relu'),
        layers.Dense(units=12, activation='relu'),
        layers.Dense(units=2, activation='softplus')
    ])
    my_loss = w_g_l(weight_train, weight_test, 1148)
    model.compile(loss=my_loss, optimizer=keras.optimizers.Adam(0.01),
                  metrics=['mse', my_loss])
    return model
```
Comments (1)
Your loss function expects y_true (the data coming from the y_train you passed to fit) to have two elements in its last dimension, so that it can be unstacked into truevals and dummy. One solution is to set truevals = y_true directly, since your data doesn't seem to contain dummy values. Another solution is to add a dummy column to your data, so that y_true actually has two elements to unstack.