I started learning TensorFlow after using Keras for a while, and I am trying to build a convolutional neural network for CIFAR-10 classification. However, I think I must be misunderstanding something about the TensorFlow API, because even in a single-layer model the weights are not updating.
Here is the model code:
num_epochs = 10
batch_size = 64

# The shapes of mu and sigma are correct: (1, 32, 32, 3)
mu = np.mean(X_train, axis=0, keepdims=True)
sigma = np.std(X_train, axis=0, keepdims=True)

# Placeholders for the data and normalization
# (normalization did not help)
data = tf.placeholder(np.float32, shape=(None, 32, 32, 3), name='data')
labels = tf.placeholder(np.int32, shape=(None,), name='labels')
data = (data - mu) / sigma

# Flatten
flat = tf.reshape(data, shape=(-1, 32 * 32 * 3))
dense1 = tf.layers.dense(inputs=flat, units=10)
predictions = tf.nn.softmax(dense1)

onehot_labels = tf.one_hot(indices=labels, depth=10)
# Also tried sparse_softmax_cross_entropy_with_logits
loss = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels, logits=predictions)
loss = tf.reduce_mean(loss)

# The learning rate does not matter, since the weights are not updating!
optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)

loss_history = []
with tf.Session() as session:
    tf.global_variables_initializer().run()
    tf.local_variables_initializer().run()
    for epochs in range(10):
        print("Epoch:", epochs)
        # Load the minibatches
        for batch in iterate_minibatches(X_train.astype(np.float32)[:10], y_train[:10], 5):
            inputs, target = batch
            feed_dict = {data: inputs, labels: target}
            loss_val, _ = session.run([loss, optimizer], feed_dict=feed_dict)
            grads = tf.reduce_sum(tf.gradients(loss, dense1)[0])
            grads = session.run(grads, {data: inputs, labels: target})
            print("Loss:", loss_val, "Grads:", grads)
The code produces the following output:
Epoch: 0
Loss: 2.46115 Grads: -1.02031e-17
Loss: 2.46041 Grads: 0.0
Epoch: 1
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 2
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 3
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 4
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 5
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 6
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 7
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 8
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 9
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
It looks as if the model is resetting its weights, or has stopped learning entirely. I also tried the sparse softmax cross-entropy loss, but that did not help either.
Answer:
You are applying softmax to the output twice: once explicitly via tf.nn.softmax, and a second time inside softmax_cross_entropy, which expects raw (unscaled) logits rather than probabilities. Feeding already-softmaxed values into the loss flattens the gradients and can destroy any ability of the network to learn.
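A minimal sketch of the corrected part of the graph, assuming the rest of the code from the question stays unchanged: pass the raw dense1 logits to the loss, and keep tf.nn.softmax only for producing predictions at inference time.

# dense1 produces raw logits; do NOT softmax them before the loss
dense1 = tf.layers.dense(inputs=flat, units=10)

onehot_labels = tf.one_hot(indices=labels, depth=10)
# tf.losses.softmax_cross_entropy applies softmax internally,
# so it must receive the unscaled logits
loss = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels,
                                       logits=dense1)

# Equivalent with integer labels, no one-hot encoding needed:
# loss = tf.losses.sparse_softmax_cross_entropy(labels=labels,
#                                               logits=dense1)

# Softmax is only needed when you want class probabilities
predictions = tf.nn.softmax(dense1)

optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)

The same rule explains why sparse_softmax_cross_entropy_with_logits did not help in the original code: the "_with_logits" family also applies softmax internally, so it suffers the same double-softmax problem when given probabilities instead of logits.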