I'm using Python 2.7 and I really can't figure out why this is happening. My guess is that Python 2.7 is causing some kind of floating-point issue.
('Epoch', 1, 'completed out of', 10, 'loss:', 49576.683227539062)
('Epoch', 2, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 3, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 4, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 5, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 6, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 7, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 8, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 9, 'completed out of', 10, 'loss:', 0.0)
('Epoch', 10, 'completed out of', 10, 'loss:', 0.0)
('Accuracy:', 1.0)
My code is shown below:
def train_neural_network(x):
    prediction = neural_network_model(x)
    # Name the arguments explicitly: passing them positionally risks
    # swapping logits and labels between TensorFlow versions.
    cost = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i + batch_size
                batch_x = np.array(train_x[start:end])
                batch_y = np.array(train_y[start:end])
                _, c = sess.run([optimizer, cost],
                                feed_dict={x: batch_x,  # feed batch_x in
                                           y: batch_y})
                epoch_loss += c
                i += batch_size
            print('Epoch', epoch + 1, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: test_x, y: test_y}))
Answer:
Perhaps your data fits almost perfectly, so the loss rounds down to 0.0. Try running your loss function on the test set to see what it outputs, or print the per-batch loss inside the training loop to check that it is actually decreasing as expected.
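To see how a near-perfect fit can produce an epoch loss of exactly 0.0, here is a toy illustration (not your model; the logits and labels are made up) of softmax cross-entropy in float32, the dtype TensorFlow uses by default:

```python
import numpy as np

# Two examples where the logit for the correct class is overwhelmingly
# larger than the others, i.e. the model is essentially certain.
logits = np.array([[50.0, 0.0], [0.0, 50.0]], dtype=np.float32)
labels = np.array([[1.0, 0.0], [0.0, 1.0]], dtype=np.float32)

# Numerically stable log-softmax, mirroring what
# tf.nn.softmax_cross_entropy_with_logits computes internally.
shifted = logits - logits.max(axis=1, keepdims=True)
log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
loss = -(labels * log_softmax).sum(axis=1)

# The true loss per example is ~2e-22, but in float32 the intermediate
# sum 1.0 + exp(-50) rounds to 1.0, so the computed loss is exactly 0.
print(loss.sum())
```

So a summed epoch loss of 0.0 doesn't necessarily mean anything is broken; it can simply mean the network separates the training data so cleanly that the per-batch losses underflow.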
That said, I wouldn't worry too much about it, since your test accuracy is already 100%, which is the best possible outcome. What more could you ask for?
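One side note on the odd tuple-style output like `('Epoch', 1, 'completed out of', 10, 'loss:', 0.0)`: it isn't a floating-point problem at all. In Python 2, `print` is a statement, so `print('a', b)` prints the tuple `('a', b)`. A minimal sketch of the fix, using only the standard library:

```python
# Must be the first statement in the module: makes print a function,
# as in Python 3, so the arguments are joined with spaces instead of
# being displayed as a tuple under Python 2.
from __future__ import print_function

print('Epoch', 1, 'completed out of', 10, 'loss:', 0.0)
# Prints: Epoch 1 completed out of 10 loss: 0.0
```

With that import at the top of your script, the training log will read `Epoch 1 completed out of 10 loss: ...` as you probably intended.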