I keep running into errors about data shapes when trying to use TensorFlow. My code follows this YouTube tutorial: https://www.youtube.com/watch?v=PwAGxqrXSCs&list=PLQVvvaa0QuDfKTOs3Keq_kaG2P55YRn5v&index=47
My training data looks like this:
enc0 = np.array([[[1,2,3,4],[0,1,0,1],[-33,0,0,0],[1,1,1,1]],
                 [[2,3,3,2],[0,0,0,0],[9,0,0,0],[0,0,0,1]]])  # shape (2, 4, 4)
ms0 = np.array([[1,6],[2,7]])                                 # shape (2, 2)
This is the error message I get:
ValueError: Dimension size must be evenly divisible by 10 but is 4 for 'gradients/Reshape_grad/Reshape' (op: 'Reshape') with input shapes: [1,4], [2].
I believe the error is caused by these lines:
x = tf.placeholder('float', [None, 16])
y = tf.placeholder('float', [4])
enc = enc0.reshape([-1, 16])
Here is my full code:
import numpy as np
import tensorflow as tf

enc0 = np.array([[[1,2,3,4],[0,1,0,1],[-33,0,0,0],[1,1,1,1]],
                 [[2,3,3,2],[0,0,0,0],[9,0,0,0],[0,0,0,1]]])
ms0 = np.array([[1,6],[2,7]])

n_nodes_hl1 = 500  # hidden layer 1
n_nodes_hl2 = 500
n_nodes_hl3 = 500

n_classes = 10
batch_size = 100  # load 100 features at a time

x = tf.placeholder('float', [None, 16])
y = tf.placeholder('float', [4])

enc = enc0.reshape([-1, 16])
ms = ms0

def neuralNet(data):
    hl_1 = {'weights': tf.Variable(tf.random_normal([16, n_nodes_hl1])),
            'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    hl_2 = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
            'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    hl_3 = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
            'biases': tf.Variable(tf.random_normal([n_nodes_hl3]))}
    output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                    'biases': tf.Variable(tf.random_normal([n_classes]))}

    l1 = tf.add(tf.matmul(data, hl_1['weights']), hl_1['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hl_2['weights']), hl_2['biases'])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2, hl_3['weights']), hl_3['biases'])
    l3 = tf.nn.relu(l3)

    ol = tf.matmul(l3, output_layer['weights']) + output_layer['biases']

    return ol

def train(x):
    prediction = neuralNet(x)
    print prediction
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)  # learning rate = 0.001

    # cycles of feed forward and backprop
    num_epochs = 15

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(num_epochs):
            epoch_loss = 0
            for _ in range(int(enc.shape[0])):
                epoch_x, epoch_y = enc, ms
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c
            print 'Epoch', epoch + 1, 'completed out of', num_epochs, '\nLoss:', epoch_loss, '\n'

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print 'Accuracy', accuracy.eval({x: enc, y: ms})

train(x)
Any help with this error would be greatly appreciated.
Answer:
The reason is that your network produces n_classes predictions (and n_classes is 10), while your y placeholder only holds 4 values to compare against. Simply change it to

y = tf.placeholder('float', [10])

and then actually feed 10 values into that placeholder.
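For illustration only, here is a minimal sketch of one way to get 10 values per label. The [None, n_classes] batch dimension and the np.eye one-hot encoding are my assumptions, not part of the answer above, so adapt the encoding of ms0 to whatever your labels actually represent:

import numpy as np
import tensorflow as tf

n_classes = 10

# The placeholder now expects 10 values per label, matching the n_classes
# outputs produced by neuralNet. Using [None, n_classes] (an assumption)
# lets a whole batch of labels be fed at once.
y = tf.placeholder('float', [None, n_classes])

# Hypothetical example: one-hot encode two class indices (here 1 and 6)
# into 10 classes, giving a (2, 10) array that lines up with the (2, 10)
# logits the network produces for the two samples in enc.
labels = np.eye(n_classes)[[1, 6]]

# feed_dict={x: enc, y: labels} would then match the placeholder shapes.

With the labels shaped this way, tf.nn.softmax_cross_entropy_with_logits receives logits and labels of the same shape, which is what removes the reshape error in the gradient.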