I am building a convolutional neural network in TensorFlow for character recognition. The model has 2 convolutional layers followed by 2 fully connected layers. I have about 78,000 images for training and 13,000 for testing. When I run the model I get roughly 92.xx% accuracy on the test set. However, when I visualize the accuracy and loss curves in TensorBoard, I get a vertical line and I don't understand why. The curves I get are shown below (accuracy and cross-entropy curves as viewed in TensorBoard).
The distribution plots of the weights and biases also show a vertical line (left: test parameters, i.e. weights and biases; right: training parameters of the first convolutional layer).
Any help with this would be greatly appreciated!
def conv_layer(input, size_in, size_out, name="conv"):with tf.name_scope(name):w = tf.Variable(tf.random_normal([5, 5, size_in, size_out], stddev=0.1), name="W")b = tf.Variable(tf.constant(0.1, shape=[size_out]), name="B")conv = tf.nn.conv2d(input, w, strides=[1, 1, 1, 1],padding="VALID")act = tf.nn.relu(conv + b)tf.summary.histogram("weights", w)tf.summary.histogram("biases", b)tf.summary.histogram("activations", act)return tf.nn.max_pool(act, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")`def fc_layer(input, size_in, size_out, name="fc"):with tf.name_scope(name):w = tf.Variable(tf.random_normal([size_in, size_out], stddev=0.1), name="W") # Truncated_normalb = tf.Variable(tf.constant(0.1, shape=[size_out]), name="B")act = tf.matmul(input, w) + btf.summary.histogram("weights", w)tf.summary.histogram("biases", b)tf.summary.histogram("activations", act)return actdef model(use_two_conv, use_two_fc):sess = tf.Session()x = tf.placeholder(tf.float32, shape=[None, 1024], name="x")x_image = tf.reshape(x, [-1, 32, 32, 1])tf.summary.image('input', x_image, 3)y = tf.placeholder(tf.float32, shape=[None,46], name="labels")if use_two_conv: conv1 = conv_layer(x_image, 1, 4, "conv1") conv_out = conv_layer(conv1,4,16,"conv2") else: conv1 = conv_layer(x_image, 1, 16, "conv1") conv_out = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")flattened = tf.reshape(conv_out, [-1, 5 * 5 * 16])if use_two_fc: fc1 = fc_layer(flattened, 5 * 5 * 16, 200, "fc1") relu = tf.nn.relu(fc1) tf.summary.histogram("fc1/relu", relu) logits = fc_layer(fc1, 200, 46, "fc2") else: logits = fc_layer(flattened, 5*5*16, 46, "fc")
Answer:
When I have run into this problem in the past, it was because I was calling
writer.add_summary(current_summary)
instead of
writer.add_summary(current_summary, epoch)
(using generic variable names, since the asker did not post the relevant part of the code). For example,
summary_op = tf.summary.merge_all()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("/Whatever/Path", sess.graph)
    for iteration in range(1001):
        if iteration % 100 == 0:
            _, current_summary = sess.run([training_op, summary_op])
            writer.add_summary(current_summary, iteration)
        else:
            _ = sess.run(training_op)
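For what it's worth, here is a minimal sketch of how that pattern might look adapted to the model in the question. It assumes a feed_dict-based training loop using the x and labels placeholders from the posted code; the train_step op, the log directory, and the next_train_batch helper are assumptions of mine, since the asker did not post that part. The essential detail is the second argument to add_summary, which gives each point its x-axis position in TensorBoard:

summ = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("/tmp/char_cnn/train", sess.graph)  # assumed log dir

    for step in range(2001):
        batch_xs, batch_ys = next_train_batch()  # hypothetical batch helper, not from the question
        if step % 100 == 0:
            # Passing `step` here is what spreads the summary points along the x-axis;
            # without it every event is written with the default step, so all points
            # stack at a single x value and render as a vertical line in TensorBoard.
            _, s = sess.run([train_step, summ], feed_dict={x: batch_xs, y: batch_ys})
            writer.add_summary(s, step)
        else:
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})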