TensorFlow LeNet model on MNIST

I can't find my mistake in the following TensorFlow LeNet model. I get this error: ValueError: Tried to convert 'input' to a tensor and failed. Error: Shapes must be equal rank, but are 2 and 1 From merging shape 22 with other shapes. for 'Print_4/packed' (op: 'Pack') with input shapes: [5,5,1,20], [20], [5,5,20,50], [50], [2450,200], [200], [200,10], [10], [5,5,1,20], [20], [5,5,20,50], [50], [2450,200], [200], [200,10], [10], [5,5,1,20], [20], [5,5,20,50], [50], [2450,200], [200], [200,10], [10]. It seems my architecture has a problem with its dimensions, but I can't figure out where. Here is my code:

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='VALID')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')

# Input layer
x  = tf.placeholder(tf.float32, [None, 784], name='x')
y_ = tf.placeholder(tf.float32, [None, 10],  name='y_')
x_image = tf.reshape(x, [-1, 28, 28, 1])

# Convolutional layer 1
W_conv1 = weight_variable([5, 5, 1, 20])
b_conv1 = bias_variable([20])
h_conv1 = conv2d(x_image, W_conv1) + b_conv1
h_pool1 = max_pool_2x2(h_conv1)

# Convolutional layer 2
W_conv2 = weight_variable([5, 5, 20, 50])
b_conv2 = bias_variable([50])
h_conv2 = conv2d(h_pool1, W_conv2) + b_conv2
h_pool2 = max_pool_2x2(h_conv2)

# Fully connected layers
h_pool2_flat = tf.reshape(h_pool2, [-1, 8*8*50])
W_fc1 = weight_variable([8 * 8 * 50, 500])
b_fc1 = bias_variable([500])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

W_fc2 = weight_variable([500, 10])
b_fc2 = bias_variable([10])
y = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2, name='y')

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy')

# Training algorithm
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    max_steps = 20000
    for step in range(max_steps):
        a = tf.Print(v, [v], message="This is a: ")
        # print(a.eval())
        batch_xs, batch_ys = mnist.train.next_batch(50)
        sess.run([train_step], feed_dict={x: batch_xs, y_: batch_ys,
                                          keep_prob: 0.5})
    print(max_steps, sess.run(accuracy, feed_dict={x: mnist.test.images,
                                                   y_: mnist.test.labels, keep_prob: 1.0}))

Answer:

The shape of h_pool2 is (?, 4, 4, 50), so these two lines of your code are wrong:

h_pool2_flat = tf.reshape(h_pool2, [-1, 8*8*50])
W_fc1 = weight_variable([8 * 8 * 50, 500])

Changing it to 4*4*50 should work.
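To see where the 4 x 4 comes from, you can trace the spatial size layer by layer, or simply print the static shapes while building the graph. Below is a minimal sketch, assuming TensorFlow 1.x and the same VALID convolutions and 2x2 SAME pooling as in the question; the zero-valued filters are dummies used only for the shape check.

import tensorflow as tf

# Shape trace for this architecture:
# input            28 x 28 x 1
# conv 5x5 VALID   24 x 24 x 20   (28 - 5 + 1 = 24)
# max pool 2x2     12 x 12 x 20
# conv 5x5 VALID    8 x  8 x 50   (12 - 5 + 1 = 8)
# max pool 2x2      4 x  4 x 50   -> flatten to 4*4*50 = 800

x = tf.placeholder(tf.float32, [None, 784])
x_image = tf.reshape(x, [-1, 28, 28, 1])

h_conv1 = tf.nn.conv2d(x_image, tf.zeros([5, 5, 1, 20]),
                       strides=[1, 1, 1, 1], padding='VALID')
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1],
                         strides=[1, 2, 2, 1], padding='SAME')
h_conv2 = tf.nn.conv2d(h_pool1, tf.zeros([5, 5, 20, 50]),
                       strides=[1, 1, 1, 1], padding='VALID')
h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1],
                         strides=[1, 2, 2, 1], padding='SAME')

print(h_conv1.get_shape())   # (?, 24, 24, 20)
print(h_pool1.get_shape())   # (?, 12, 12, 20)
print(h_conv2.get_shape())   # (?, 8, 8, 50)
print(h_pool2.get_shape())   # (?, 4, 4, 50)

# The flatten must therefore use 4*4*50 = 800 features, not 8*8*50.
h_pool2_flat = tf.reshape(h_pool2, [-1, 4 * 4 * 50])

The same check can be done directly in your original code by calling print(h_pool2.get_shape()) before building the fully connected layers.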
