Accuracy doesn't change and loss barely changes

Posted 2022-09-11 20:34:12

I've recently been using a CNN for a classification problem on 1D data, with two convolutional layers. The accuracy only shifts a little at the start and then stays exactly the same, and the loss barely moves either. It looks like vanishing gradients, but I only have two conv layers, so how could the gradients vanish? I've also normalized the dataset, but that didn't help. I don't know whether something is wrong in the code; I've checked it over and over without finding anything. Also, since the dataset is fairly small, I didn't train in mini-batches and instead fed the entire training set at once. Could anyone take a look and help?
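(The preprocessing code isn't shown below; for completeness, here is a minimal sketch of what the normalization step might look like with sklearn. This assumes a StandardScaler fit on the training split only; the question only says the data was normalized, so the exact method is an assumption.)

from sklearn import preprocessing

# Hypothetical normalization step (not shown in the question):
# fit the scaler on the training split only, then apply it to both
# splits so no test-set statistics leak into training.
scaler = preprocessing.StandardScaler().fit(train_data)
train_data = scaler.transform(train_data)
test_data = scaler.transform(test_data)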

The code is as follows:

import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn import svm
from sklearn import preprocessing
import math
import time

# Define the input placeholders
x = tf.placeholder(tf.float32, [None, 3128])
y = tf.placeholder(tf.float32, [None, 5])

x_data = tf.reshape(x, [-1, 3128, 1])  # reshape to [batch, length, channels] for conv1d

# First conv layer: kernel width 5, 1 input channel, 16 filters
W_conv1 = tf.Variable(tf.truncated_normal(shape=[5, 1, 16], stddev=0.1))
b_conv1 = tf.Variable(tf.constant(0.1, shape=[16]))

h_conv1 = tf.nn.relu(tf.nn.conv1d(x_data, W_conv1, stride=1, padding="SAME") + b_conv1)  # output shape [-1, 3128, 16]
h_pool1 = tf.layers.max_pooling1d(h_conv1, pool_size=2, strides=2, padding="SAME")  # output shape [-1, 1564, 16]

# Second conv layer: kernel width 5, 16 input channels, 32 filters
W_conv2 = tf.Variable(tf.truncated_normal(shape=[5, 16, 32], stddev=0.1))
b_conv2 = tf.Variable(tf.constant(0.1, shape=[32]))

h_conv2 = tf.nn.relu(tf.nn.conv1d(h_pool1, W_conv2, stride=1, padding="SAME") + b_conv2)  # output shape [-1, 1564, 32]
h_pool2 = tf.layers.max_pooling1d(h_conv2, pool_size=2, strides=2, padding="SAME")  # output shape [-1, 782, 32]

# Flatten and feed into a 128-unit fully connected layer
h_pool_flat = tf.reshape(h_pool2, shape=[-1, 782 * 32])
W_fc1 = tf.Variable(tf.truncated_normal(shape=[782 * 32, 128], stddev=0.1))
b_fc1 = tf.Variable(tf.constant(0.1, shape=[128]))

h_fc1 = tf.nn.relu(tf.matmul(h_pool_flat, W_fc1) + b_fc1)

keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Output layer: 5 classes
W_fc2 = tf.Variable(tf.truncated_normal(shape=[128, 5], stddev=0.1))
b_fc2 = tf.Variable(tf.constant(0.1, shape=[5]))

prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=prediction))

train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

start = time.time()
# The dataset is fairly small, so there is no mini-batching; the whole training set is fed at once
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(100):
        sess.run(train_step, feed_dict={x:train_data, y:train_labels, keep_prob:1.0})
        train_acc, train_predict, train_loss = sess.run([accuracy, prediction, cross_entropy], feed_dict={x:train_data, y:train_labels, keep_prob:1.0})
        test_acc, test_loss = sess.run([accuracy, cross_entropy], feed_dict={x:test_data, y:test_labels, keep_prob:1.0})
        print("Iter " + str(epoch) + " Training Acc=" + str(train_acc) + " Training Loss=" + str(train_loss) + " Testing Acc=" + str(test_acc) + " Testing Loss=" + str(test_loss))
        
end = time.time()
print("running time: " + str(end - start))
# Output:
Iter 0 Training Acc=0.1740196 Training Loss=1.7306318 Testing Acc=0.13725491 Testing Loss=1.7674819
Iter 1 Training Acc=0.15931372 Training Loss=1.7448654 Testing Acc=0.12745099 Testing Loss=1.7773167
Iter 2 Training Acc=0.1740196 Training Loss=1.7302881 Testing Acc=0.13725491 Testing Loss=1.7674912
Iter 3 Training Acc=0.1740196 Training Loss=1.7307093 Testing Acc=0.13725491 Testing Loss=1.7672241
Iter 4 Training Acc=0.17892157 Training Loss=1.7259791 Testing Acc=0.13725491 Testing Loss=1.7689348
Iter 5 Training Acc=0.17892157 Training Loss=1.7258989 Testing Acc=0.13725491 Testing Loss=1.7667129
Iter 6 Training Acc=0.18137255 Training Loss=1.7234895 Testing Acc=0.12745099 Testing Loss=1.7715402
Iter 7 Training Acc=0.18137255 Training Loss=1.7233703 Testing Acc=0.12745099 Testing Loss=1.777352
Iter 8 Training Acc=0.18382353 Training Loss=1.7210002 Testing Acc=0.12745099 Testing Loss=1.7768195
Iter 9 Training Acc=0.18382353 Training Loss=1.7210213 Testing Acc=0.11764706 Testing Loss=1.7871732
Iter 10 Training Acc=0.17892157 Training Loss=1.724439 Testing Acc=0.11764706 Testing Loss=1.7871851
Iter 11 Training Acc=0.18872549 Training Loss=1.7161047 Testing Acc=0.12745099 Testing Loss=1.7811257
Iter 12 Training Acc=0.18627451 Training Loss=1.7183675 Testing Acc=0.13725491 Testing Loss=1.7675552
Iter 13 Training Acc=0.19117647 Training Loss=1.7138144 Testing Acc=0.13725491 Testing Loss=1.7665836
Iter 14 Training Acc=0.19607843 Training Loss=1.7087549 Testing Acc=0.13725491 Testing Loss=1.7651485
Iter 15 Training Acc=0.19607843 Training Loss=1.7087322 Testing Acc=0.16666667 Testing Loss=1.7375987
Iter 16 Training Acc=0.19607843 Training Loss=1.7084687 Testing Acc=0.1764706 Testing Loss=1.7271962
Iter 17 Training Acc=0.19607843 Training Loss=1.708712 Testing Acc=0.1764706 Testing Loss=1.7280302
Iter 18 Training Acc=0.1985294 Training Loss=1.7067794 Testing Acc=0.1764706 Testing Loss=1.7283618
Iter 19 Training Acc=0.19607843 Training Loss=1.7087467 Testing Acc=0.1764706 Testing Loss=1.7277778
Iter 20 Training Acc=0.19607843 Training Loss=1.7081589 Testing Acc=0.18627451 Testing Loss=1.7199571
Iter 21 Training Acc=0.1985294 Training Loss=1.7070143 Testing Acc=0.19607843 Testing Loss=1.7087542
Iter 22 Training Acc=0.2009804 Training Loss=1.7036499 Testing Acc=0.18627451 Testing Loss=1.7184309
Iter 23 Training Acc=0.20588236 Training Loss=1.6987079 Testing Acc=0.18627451 Testing Loss=1.7225729
Iter 24 Training Acc=0.20833333 Training Loss=1.6964629 Testing Acc=0.1764706 Testing Loss=1.7283614
Iter 25 Training Acc=0.20833333 Training Loss=1.6963863 Testing Acc=0.16666667 Testing Loss=1.7381552
Iter 26 Training Acc=0.20833333 Training Loss=1.6963255 Testing Acc=0.15686275 Testing Loss=1.7479001
Iter 27 Training Acc=0.20833333 Training Loss=1.6963843 Testing Acc=0.14705883 Testing Loss=1.7586052
Iter 28 Training Acc=0.20588236 Training Loss=1.6986847 Testing Acc=0.13725491 Testing Loss=1.767122
Iter 29 Training Acc=0.2009804 Training Loss=1.7034937 Testing Acc=0.13725491 Testing Loss=1.7675763
Iter 30 Training Acc=0.2009804 Training Loss=1.7038218 Testing Acc=0.13725491 Testing Loss=1.7669554
Iter 31 Training Acc=0.2009804 Training Loss=1.703686 Testing Acc=0.13725491 Testing Loss=1.7675772
Iter 32 Training Acc=0.2009804 Training Loss=1.7032952 Testing Acc=0.13725491 Testing Loss=1.7675775
Iter 33 Training Acc=0.19607843 Training Loss=1.7082251 Testing Acc=0.13725491 Testing Loss=1.7675774
Iter 34 Training Acc=0.1985294 Training Loss=1.7062403 Testing Acc=0.13725491 Testing Loss=1.7675751
Iter 35 Training Acc=0.1985294 Training Loss=1.7062525 Testing Acc=0.13725491 Testing Loss=1.7675741
Iter 36 Training Acc=0.2009804 Training Loss=1.7041326 Testing Acc=0.13725491 Testing Loss=1.7675763
Iter 37 Training Acc=0.2009804 Training Loss=1.7037088 Testing Acc=0.13725491 Testing Loss=1.7672596
Iter 38 Training Acc=0.20343137 Training Loss=1.6999347 Testing Acc=0.13725491 Testing Loss=1.7672695
Iter 39 Training Acc=0.20588236 Training Loss=1.6975116 Testing Acc=0.13725491 Testing Loss=1.7675742
Iter 40 Training Acc=0.21078432 Training Loss=1.6931298 Testing Acc=0.13725491 Testing Loss=1.7674901
Iter 41 Training Acc=0.21323529 Training Loss=1.691509 Testing Acc=0.15686275 Testing Loss=1.747936
Iter 42 Training Acc=0.21078432 Training Loss=1.6928881 Testing Acc=0.1764706 Testing Loss=1.7283671
Iter 43 Training Acc=0.21323529 Training Loss=1.6916806 Testing Acc=0.18627451 Testing Loss=1.7185841
Iter 44 Training Acc=0.21323529 Training Loss=1.6907054 Testing Acc=0.19607843 Testing Loss=1.7069345
Iter 45 Training Acc=0.21568628 Training Loss=1.688941 Testing Acc=0.20588236 Testing Loss=1.6989132
Iter 46 Training Acc=0.21568628 Training Loss=1.6891797 Testing Acc=0.20588236 Testing Loss=1.6983442
Iter 47 Training Acc=0.21568628 Training Loss=1.6891633 Testing Acc=0.20588236 Testing Loss=1.6989129
Iter 48 Training Acc=0.21568628 Training Loss=1.6890675 Testing Acc=0.20588236 Testing Loss=1.6986765
Iter 49 Training Acc=0.21568628 Training Loss=1.6890374 Testing Acc=0.21568628 Testing Loss=1.6893693
Iter 50 Training Acc=0.21568628 Training Loss=1.6884812 Testing Acc=0.21568628 Testing Loss=1.6891462
Iter 51 Training Acc=0.21813725 Training Loss=1.6869223 Testing Acc=0.21568628 Testing Loss=1.6890742
Iter 52 Training Acc=0.21568628 Training Loss=1.6891427 Testing Acc=0.21568628 Testing Loss=1.6891451
Iter 53 Training Acc=0.21323529 Training Loss=1.69149 Testing Acc=0.21568628 Testing Loss=1.6891462
Iter 54 Training Acc=0.21323529 Training Loss=1.6915635 Testing Acc=0.21568628 Testing Loss=1.6891462
Iter 55 Training Acc=0.21323529 Training Loss=1.6919171 Testing Acc=0.21568628 Testing Loss=1.6891463
Iter 56 Training Acc=0.21568628 Training Loss=1.6893735 Testing Acc=0.21568628 Testing Loss=1.6891462
Iter 57 Training Acc=0.21813725 Training Loss=1.6866883 Testing Acc=0.21568628 Testing Loss=1.6891462
Iter 58 Training Acc=0.21813725 Training Loss=1.6865827 Testing Acc=0.21568628 Testing Loss=1.6891315
Iter 59 Training Acc=0.21813725 Training Loss=1.6866089 Testing Acc=0.20588236 Testing Loss=1.6947027
Iter 60 Training Acc=0.22058824 Training Loss=1.684343 Testing Acc=0.20588236 Testing Loss=1.698547
Iter 61 Training Acc=0.22058824 Training Loss=1.6842 Testing Acc=0.20588236 Testing Loss=1.6988896
Iter 62 Training Acc=0.22058824 Training Loss=1.6841788 Testing Acc=0.20588236 Testing Loss=1.6985292
Iter 63 Training Acc=0.22058824 Training Loss=1.6841968 Testing Acc=0.20588236 Testing Loss=1.698898
Iter 64 Training Acc=0.22058824 Training Loss=1.6839882 Testing Acc=0.20588236 Testing Loss=1.698895
Iter 65 Training Acc=0.22058824 Training Loss=1.6842259 Testing Acc=0.20588236 Testing Loss=1.6982908
Iter 66 Training Acc=0.22058824 Training Loss=1.6841806 Testing Acc=0.19607843 Testing Loss=1.706621
Iter 67 Training Acc=0.22058824 Training Loss=1.6840943 Testing Acc=0.19607843 Testing Loss=1.7085911
Iter 68 Training Acc=0.22058824 Training Loss=1.6841185 Testing Acc=0.19607843 Testing Loss=1.7085911
Iter 69 Training Acc=0.22058824 Training Loss=1.6840878 Testing Acc=0.19607843 Testing Loss=1.708624
Iter 70 Training Acc=0.22058824 Training Loss=1.6842223 Testing Acc=0.19607843 Testing Loss=1.7084818
Iter 71 Training Acc=0.22058824 Training Loss=1.684025 Testing Acc=0.19607843 Testing Loss=1.7079687
Iter 72 Training Acc=0.22058824 Training Loss=1.6841912 Testing Acc=0.19607843 Testing Loss=1.7093388
Iter 73 Training Acc=0.22058824 Training Loss=1.6842438 Testing Acc=0.18627451 Testing Loss=1.7145581
Iter 74 Training Acc=0.22058824 Training Loss=1.6842505 Testing Acc=0.18627451 Testing Loss=1.7186123
Iter 75 Training Acc=0.22058824 Training Loss=1.6842644 Testing Acc=0.1764706 Testing Loss=1.7275735
Iter 76 Training Acc=0.22058824 Training Loss=1.6842865 Testing Acc=0.1764706 Testing Loss=1.7281681
Iter 77 Training Acc=0.22058824 Training Loss=1.6842856 Testing Acc=0.1764706 Testing Loss=1.7281952
Iter 78 Training Acc=0.22058824 Training Loss=1.6842237 Testing Acc=0.1764706 Testing Loss=1.7282218
Iter 79 Training Acc=0.22058824 Training Loss=1.684189 Testing Acc=0.1764706 Testing Loss=1.7282323
Iter 80 Training Acc=0.22058824 Training Loss=1.6842088 Testing Acc=0.1764706 Testing Loss=1.7282703
Iter 81 Training Acc=0.22058824 Training Loss=1.6841407 Testing Acc=0.1764706 Testing Loss=1.7282972
Iter 82 Training Acc=0.22058824 Training Loss=1.6842216 Testing Acc=0.1764706 Testing Loss=1.7282907
Iter 83 Training Acc=0.22058824 Training Loss=1.6842427 Testing Acc=0.1764706 Testing Loss=1.7282029
Iter 84 Training Acc=0.22058824 Training Loss=1.6842439 Testing Acc=0.1764706 Testing Loss=1.7279367
Iter 85 Training Acc=0.22058824 Training Loss=1.684244 Testing Acc=0.1764706 Testing Loss=1.7273111
Iter 86 Training Acc=0.22058824 Training Loss=1.684244 Testing Acc=0.1764706 Testing Loss=1.7261287
Iter 87 Training Acc=0.22058824 Training Loss=1.684244 Testing Acc=0.1764706 Testing Loss=1.7244204
Iter 88 Training Acc=0.22058824 Training Loss=1.6842442 Testing Acc=0.18627451 Testing Loss=1.7225835
Iter 89 Training Acc=0.22058824 Training Loss=1.6842442 Testing Acc=0.18627451 Testing Loss=1.7210171
Iter 90 Training Acc=0.22058824 Training Loss=1.6842442 Testing Acc=0.18627451 Testing Loss=1.7197614
Iter 91 Training Acc=0.22058824 Training Loss=1.6842442 Testing Acc=0.18627451 Testing Loss=1.7185715
Iter 92 Training Acc=0.22058824 Training Loss=1.6842442 Testing Acc=0.18627451 Testing Loss=1.7171648
Iter 93 Training Acc=0.22058824 Training Loss=1.684244 Testing Acc=0.18627451 Testing Loss=1.7154564
Iter 94 Training Acc=0.22058824 Training Loss=1.684244 Testing Acc=0.18627451 Testing Loss=1.7136579
Iter 95 Training Acc=0.22058824 Training Loss=1.684244 Testing Acc=0.19607843 Testing Loss=1.7121166
Iter 96 Training Acc=0.22058824 Training Loss=1.6842439 Testing Acc=0.19607843 Testing Loss=1.7109753
Iter 97 Training Acc=0.22058824 Training Loss=1.6842438 Testing Acc=0.19607843 Testing Loss=1.7102116
Iter 98 Training Acc=0.22058824 Training Loss=1.6842434 Testing Acc=0.19607843 Testing Loss=1.7097214
Iter 99 Training Acc=0.22058824 Training Loss=1.6842432 Testing Acc=0.19607843 Testing Loss=1.7094033
running time: 12.46248173713684


Comments (1)

早茶月光 2022-09-18 20:34:12

A learning rate of 0.0001 seems a bit small? Try turning it up, and try an exponentially decaying learning rate.
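For reference, a minimal sketch of that suggestion in TF1.x using tf.train.exponential_decay, meant to replace the AdamOptimizer line in the code above. The starting rate of 1e-3 and the decay schedule here are illustrative assumptions, not values from the thread:

import tensorflow as tf

# Step counter; minimize() increments it on every update
global_step = tf.Variable(0, trainable=False)

# Start from a larger rate (assumed 1e-3) and multiply it by 0.9
# every 20 full-batch updates (schedule values are illustrative)
learning_rate = tf.train.exponential_decay(
    learning_rate=1e-3,
    global_step=global_step,
    decay_steps=20,
    decay_rate=0.9,
    staircase=True)

# cross_entropy is the loss already defined in the question's code
train_step = tf.train.AdamOptimizer(learning_rate).minimize(
    cross_entropy, global_step=global_step)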
