Classification problem with a Bayesian network (TensorFlow Probability)
I've been having some trouble with TensorFlow Probability over the last few days.
I trained the frequentist version of this network and reached per-label accuracies above 0.99. However, with the Bayesian version, the accuracies are no better than a dummy model's. This is odd, since I expected the results not to differ much.
As I'm new to Bayesian approaches, I'd like to know if I'm missing something here... I haven't found much information or many examples that suit my case.
In this model I'm predicting the presence (1) or absence (0) of 3 properties (Y), which may or may not occur simultaneously.
I would really appreciate some insights.
Thank you all in advance.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
from main import get_data
#%% Configuration
config = {"ID_prefix" : "Bayesian_CNN_Flipout",
          "mode" : "classification",
          "optimizer" : "Adam",
          "loss" : "binary_crossentropy",
          "monitor" : "val_loss",
          "patience" : 10,
          "lr" : 0.001,
          "repetitions" : 3,
          "X_reshape" : True}
#%% Get data
data = get_data("dataset.csv", config)
My data has the following dimensions:
data["X_train"].shape
Out[8]: (39375, 1024, 1)
data["Y_train"].shape
Out[9]: (39375, 3)
data["X_val"].shape
Out[10]: (13125, 1024, 1)
data["Y_val"].shape
Out[11]: (13125, 3)
data["X_test"].shape
Out[13]: (17500, 1024, 1)
data["Y_test"].shape
Out[14]: (17500, 3)
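`get_data` comes from a local `main` module that isn't shown here; for a self-contained run, a stand-in that produces random arrays with the same shapes as the splits above (the function name and random contents are assumptions, not the real data) could look like:

```python
import numpy as np

# Stand-in for get_data (not shown in the post): random arrays with the
# same shapes as the reported splits, so the rest of the code can run.
rng = np.random.default_rng(0)

def make_split(n, length=1024, n_labels=3):
    X = rng.standard_normal((n, length, 1)).astype("float32")
    Y = rng.integers(0, 2, size=(n, n_labels)).astype("float32")
    return X, Y

data = {}
data["X_train"], data["Y_train"] = make_split(39375)
data["X_val"], data["Y_val"] = make_split(13125)
data["X_test"], data["Y_test"] = make_split(17500)
```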
The structure of the network is:
#%% Model structure
config["inputs"] = tf.keras.Input(shape=(data["X_train"].shape[1], data["X_train"].shape[2]))
layer = config["inputs"]
layer = tfp.layers.Convolution1DFlipout(filters=10, kernel_size=5, strides=1, activation="relu")(layer)
layer = tf.keras.layers.MaxPooling1D(pool_size=2)(layer)
layer = tfp.layers.Convolution1DFlipout(filters=10, kernel_size=5, strides=1, activation="relu")(layer)
layer = tf.keras.layers.MaxPooling1D(pool_size=2)(layer)
layer = tf.keras.layers.Flatten()(layer)
config["outputs"] = tfp.layers.DenseFlipout(units=3, activation="sigmoid")(layer)
model = tf.keras.Model(inputs=config["inputs"], outputs=config["outputs"])
model.compile(optimizer=config["optimizer"], loss=config["loss"])
tf.keras.backend.set_value(model.optimizer.learning_rate, config["lr"])
earlystopping = tf.keras.callbacks.EarlyStopping(monitor=config["monitor"],
                                                 patience=config["patience"],
                                                 restore_best_weights=True)
#%% Fit model
history = model.fit(data["X_train"], data["Y_train"],
                    validation_data=(data["X_val"], data["Y_val"]),
                    epochs=999999,
                    callbacks=[earlystopping])
#%% Classification metrics
pred_train = np.zeros([config["repetitions"], data["Y_train"].shape[0], data["Y_train"].shape[1]])
pred_val = np.zeros([config["repetitions"], data["Y_val"].shape[0], data["Y_val"].shape[1]])
pred_test = np.zeros([config["repetitions"], data["Y_test"].shape[0], data["Y_test"].shape[1]])
accuracy_train = np.zeros([config["repetitions"], 1, data["Y_train"].shape[1]])
accuracy_val = np.zeros([config["repetitions"], 1, data["Y_val"].shape[1]])
accuracy_test = np.zeros([config["repetitions"], 1, data["Y_test"].shape[1]])
for i in range(config["repetitions"]):
    pred_train[i] = model.predict(data["X_train"]).round()
    pred_val[i] = model.predict(data["X_val"]).round()
    pred_test[i] = model.predict(data["X_test"]).round()
    accuracy_train[i] = (data["Y_train"] == pred_train[i]).mean(0)
    accuracy_val[i] = (data["Y_val"] == pred_val[i]).mean(0)
    accuracy_test[i] = (data["Y_test"] == pred_test[i]).mean(0)
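Since every forward pass through a Flipout model samples new weights, rounding each pass independently (as in the loop above) keeps all of the sampling noise in the predictions. A common alternative, sketched here under the same `model`/`data` names, is to average the predicted probabilities over many stochastic passes before thresholding:

```python
import numpy as np

def mc_predict(model, X, n_samples=30):
    """Average predicted probabilities over n_samples stochastic
    forward passes, then threshold the mean at 0.5."""
    probs = np.stack([model.predict(X, verbose=0) for _ in range(n_samples)])
    mean_probs = probs.mean(axis=0)  # Monte Carlo estimate of p(y | x)
    return (mean_probs >= 0.5).astype(int), mean_probs

# Per-label accuracy from the averaged predictions, e.g.:
# pred, _ = mc_predict(model, data["X_test"])
# accuracy = (data["Y_test"] == pred).mean(axis=0)
```

The spread of `probs` across samples also gives a per-example uncertainty estimate, which is the main payoff of the Bayesian version.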
The question was answered, and the code below worked properly:
My data has the following dimensions:
The structure of the network is: