Reshaping a tensor in TensorFlow

I am trying to use tf.nn.sparse_softmax_cross_entropy_with_logits and have followed the answer by user Olivier Moindrot [here][1], but I am getting a dimension error.

I am building a segmentation network: the input images are 200x200, and the output images are 200x200 as well. The classification is binary, foreground versus background.

After building the CNN with

pred = conv_net(x, weights, biases, keep_prob)

the shape of pred is

<tf.Tensor 'Add_1:0' shape=(?, 40000) dtype=float32>

The CNN consists of a few convolutional layers followed by a fully connected layer. The fully connected layer has size 40000, since it is a 200x200 image flattened.
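As a sanity check on those sizes (my own arithmetic, with the batch size of 10 taken from my batches below), a quick NumPy sketch:

import numpy as np

# Mirrors the (?, 40000) pred tensor for an assumed batch of 10.
pred = np.zeros((10, 40000))
print(pred.size)      # 400000 values in the whole batch
# A two-class pixel-wise softmax would need two logits per pixel:
print(200 * 200 * 2)  # 80000 values per image, not 40000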

Following the linked answer, I reshape pred like so

(Side note: I also tried packing two copies of pred together with tf.pack(), but I figured that was probably wrong.)

pred = tf.reshape(pred, [-1, 200, 200, 2])

…so that there are two classifications. Continuing with the linked answer…

temp_pred = tf.reshape(pred, [-1, 2])
temp_y = tf.reshape(y, [-1])
cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(temp_pred, temp_y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
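To see what those reshapes do to the row counts, here is the same sequence traced in NumPy (shapes only, assuming pred really is (10, 40000)):

import numpy as np

pred = np.zeros((10, 40000))          # one value per pixel, batch of 10
pred = pred.reshape(-1, 200, 200, 2)  # the -1 resolves to 5, not 10
print(pred.shape)                     # (5, 200, 200, 2)
temp_pred = pred.reshape(-1, 2)
print(temp_pred.shape)                # (200000, 2)
y = np.zeros((10, 200, 200), dtype=np.int64)
print(y.reshape(-1).shape)            # (400000,) labels -- twice as many rows as logits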

I have the following placeholders and batch data:

x = tf.placeholder(tf.float32, [None, 200, 200])
y = tf.placeholder(tf.int64, [None, 200, 200])

(Pdb) batch_x.shape
(10, 200, 200)
(Pdb) batch_y.shape
(10, 200, 200)

When I run the training session, I get the following dimension error:

tensorflow.python.framework.errors.InvalidArgumentError: logits first dimension must match labels size. logits shape=[3200000,2] labels shape=[400000]
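Those numbers do line up with my layer sizes when I trace them by hand (my own arithmetic, assuming the commented-out pooling means conv2 keeps the full 200x200 feature map):

# conv2 comes out as [10, 200, 200, 64] because no pooling is applied
conv2_elems = 10 * 200 * 200 * 64         # 25,600,000 elements
# fc1 = tf.reshape(conv2, [-1, 50*50*64]) redistributes them:
fc1_rows = conv2_elems // (50 * 50 * 64)  # 160 rows instead of 10
logit_rows = fc1_rows * 40000 // 2        # pred is [160, 40000] -> reshape(-1, 2)
print(logit_rows)                         # 3200000, matching the error
print(10 * 200 * 200)                     # 400000 labels, as reported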

My full code is below:

import tensorflow as tf
import pdb
import numpy as np

# Import MNIST data
# from tensorflow.examples.tutorials.mnist import input_data
# mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.001
training_iters = 200000
batch_size = 10
display_step = 1

# Network parameters
n_input = 200   # input image side length (images are 200x200)
n_classes = 2   # two classes: foreground and background
n_output = 40000
#n_input = 200
dropout = 0.75  # dropout, probability to keep units

# tf graph input
x = tf.placeholder(tf.float32, [None, n_input, n_input])
y = tf.placeholder(tf.int64, [None, n_input, n_input])
keep_prob = tf.placeholder(tf.float32)  # dropout (keep probability)

# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME')

# Create model
def conv_net(x, weights, biases, dropout):
    # Reshape input picture
    x = tf.reshape(x, shape=[-1, 200, 200, 1])

    # Convolution layer
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    # Max pooling (down-sampling)
    # conv1 = tf.nn.local_response_normalization(conv1)
    # conv1 = maxpool2d(conv1, k=2)

    # Convolution layer
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    # Max pooling (down-sampling)
    # conv2 = tf.nn.local_response_normalization(conv2)
    # conv2 = maxpool2d(conv2, k=2)

    # Convolution layer
    conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
    # # Max pooling (down-sampling)
    # conv3 = tf.nn.local_response_normalization(conv3)
    # conv3 = maxpool2d(conv3, k=2)
    # return conv3

    # Fully connected layer
    # Reshape conv2 output to fit the fully connected layer input
    fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    # Apply dropout
    fc1 = tf.nn.dropout(fc1, dropout)

    return tf.add(tf.matmul(fc1, weights['out']), biases['out'])

    # Output, class prediction
    # output = []
    # for i in xrange(2):
    #     # output.append(tf.nn.softmax(tf.add(tf.matmul(fc1, weights['out']), biases['out'])))
    #     output.append((tf.add(tf.matmul(fc1, weights['out']), biases['out'])))
    #
    # return output

# Store layer weights and biases
weights = {
    # 5x5 conv, 1 input, 32 outputs
    'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
    # 5x5 conv, 32 inputs, 64 outputs
    'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
    # 5x5 conv, 64 inputs, 128 outputs
    'wc3': tf.Variable(tf.random_normal([5, 5, 64, 128])),
    # fully connected, 50*50*64 inputs, 1024 outputs
    'wd1': tf.Variable(tf.random_normal([50*50*64, 1024])),
    # 1024 inputs, 40000 outputs (one value per pixel)
    'out': tf.Variable(tf.random_normal([1024, n_output]))
}

biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bc3': tf.Variable(tf.random_normal([128])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([n_output]))
}

# Construct model
pred = conv_net(x, weights, biases, keep_prob)
pdb.set_trace()
# pred = tf.pack(tf.transpose(pred,[1,2,0]))
pred = tf.reshape(pred, [-1, n_input, n_input, 2])
temp_pred = tf.reshape(pred, [-1, 2])
temp_y = tf.reshape(y, [-1])

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(temp_pred, temp_y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
# correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
temp_pred2 = tf.reshape(pred, [-1, n_input, n_input])
correct_pred = tf.equal(tf.cast(y, tf.float32), tf.sub(temp_pred2, tf.cast(y, tf.float32)))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initialize the variables
init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    summ = tf.train.SummaryWriter('/tmp/logdir/', sess.graph_def)
    step = 1
    from tensorflow.contrib.learn.python.learn.datasets.scroll import scroll_data
    data = scroll_data.read_data('/home/kendall/Desktop/')
    # Keep training until max iterations are reached
    while step * batch_size < training_iters:
        batch_x, batch_y = data.train.next_batch(batch_size)
        # Run optimization op (backprop)
        batch_x = batch_x.reshape((batch_size, n_input, n_input))
        batch_y = batch_y.reshape((batch_size, n_input, n_input))
        batch_y = np.int64(batch_y)
        # y = tf.reshape(y, [-1,n_input,n_input])
        pdb.set_trace()
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y, keep_prob: dropout})
        if step % display_step == 0:
            # Calculate batch loss and accuracy
            pdb.set_trace()
            loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x, y: batch_y, keep_prob: 1.})
            print "Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
                  "{:.6f}".format(loss) + ", Training Accuracy= " + \
                  "{:.5f}".format(acc)
        step += 1
    print "Optimization Finished!"

    # Calculate accuracy for 256 test images
    print "Testing Accuracy:", \
        sess.run(accuracy, feed_dict={x: data.test.images[:256],
                                      y: data.test.labels[:256],
                                      keep_prob: 1.})

  [1]: http://stackoverflow.com/questions/35317029/how-to-implement-pixel-wise-classification-for-scene-labeling-in-tensorflow/37294185?noredirect=1#comment63253577_37294185
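For comparison, here is my understanding of the shape-consistent setup the linked answer describes, sketched in NumPy under the assumption that the last layer would have to emit two logits per pixel (n_output = 80000 rather than 40000):

import numpy as np

batch, h, w, n_classes = 10, 200, 200, 2
pred = np.zeros((batch, h * w * n_classes))   # (10, 80000)
temp_pred = pred.reshape(-1, n_classes)       # (400000, 2)
y = np.zeros((batch, h, w), dtype=np.int64)
temp_y = y.reshape(-1)                        # (400000,)
assert temp_pred.shape[0] == temp_y.shape[0]  # one logit row per label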
