TensorFlow – Batch Normalization Layers

I am trying to build a neural network and want to apply batch normalization before the activation function, but I have run into some problems. I am not sure whether I am using these layers correctly.

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=(batch_size, image_width, image_height, image_depth), name='x')
    y = tf.placeholder(tf.float32, shape=(batch_size, num_categories), name='y')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    phase = tf.placeholder(tf.bool, name='phase')

    layer1_weights = tf.Variable(tf.truncated_normal(shape=(filter_size, filter_size, image_depth, num_filters), stddev=0.01))
    layer1_biases = tf.Variable(tf.ones(shape=(num_filters)))
    layer2_weights = tf.Variable(tf.truncated_normal(shape=(filter_size, filter_size, num_filters, num_filters), stddev=0.01))
    layer2_biases = tf.Variable(tf.ones(shape=(num_filters)))
    layer3_weights = tf.Variable(tf.truncated_normal(shape=(filter_size, filter_size, num_filters, num_filters*2), stddev=0.01))
    layer3_biases = tf.Variable(tf.ones(shape=(num_filters*2)))
    layer4_weights = tf.Variable(tf.truncated_normal(shape=(filter_size, filter_size, num_filters*2, num_categories), stddev=0.01))
    layer4_biases = tf.Variable(tf.ones(shape=(num_categories)))

    x = batch_normalization(x, training=phase)
    conv = tf.nn.conv2d(x, layer1_weights, [1, 1, 1, 1], padding='SAME') + layer1_biases
    conv = batch_normalization(conv, training=phase)
    conv = tf.nn.elu(conv)
    conv = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    conv = tf.nn.conv2d(conv, layer2_weights, [1, 1, 1, 1], padding='SAME') + layer2_biases
    conv = batch_normalization(conv, training=phase)
    conv = tf.nn.elu(conv)
    conv = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    conv = tf.nn.conv2d(conv, layer3_weights, [1, 1, 1, 1], padding='SAME') + layer3_biases
    conv = batch_normalization(conv, training=phase)
    conv = tf.nn.elu(conv)
    conv = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    conv = tf.nn.conv2d(conv, layer4_weights, [1, 1, 1, 1], padding='SAME') + layer4_biases
    conv = batch_normalization(conv, training=phase)
    conv = tf.nn.elu(conv)
    conv = tf.layers.average_pooling2d(conv, [4, 4], [4, 4])

    shape = conv.get_shape().as_list()
    size = shape[1] * shape[2] * shape[3]
    conv = tf.reshape(conv, shape=[-1, size])
    y_ = tf.nn.softmax(conv)

    # Loss function
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=conv, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.0001)
    extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(extra_update_ops):
        train_step = optimizer.minimize(loss)

    # Accuracy
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(y_, axis=1),
                                               tf.argmax(y, axis=1)),
                                      tf.float32))

epochs = 1
dropout = 0.5

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    losses = []
    acc = []
    for e in range(epochs):
        print('\nEpoch {}'.format(e+1))
        for b in range(0, len(X_train), batch_size):
            be = min(len(X_train), b + batch_size)
            x_batch = X_train[b: be]
            y_batch = y_train[b: be]
            extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
            l, a, _ = sess.run([loss, accuracy, train_step, extra_update_ops],
                               feed_dict={x: x_batch, y: y_batch, keep_prob: dropout, phase: True})
            losses += [l]
            acc += [a]
            print('\r[{:5d}/{:5d}] loss = {}'.format(be, len(X_train), l), end='')

    validation_accuracy = 0
    for b in range(0, len(y_test), batch_size):
        be = min(len(y_test), b + batch_size)
        a = sess.run(accuracy, feed_dict={x: X_test[b: be], y: y_test[b: be], keep_prob: 1, phase: False})
        validation_accuracy += a * (be - b)
    validation_accuracy /= len(y_test)

    training_accuracy = 0
    for b in range(0, len(y_train), batch_size):
        be = min(len(y_train), b + batch_size)
        a = sess.run(accuracy, feed_dict={x: X_train[b: be], y: y_train[b: be], keep_prob: 1, phase: False})
        training_accuracy += a * (be - b)
    training_accuracy /= len(y_train)

plt.plot(losses)
plt.plot(acc)
plt.show()

print('Validation accuracy: {}'.format(validation_accuracy))
print()
print('Training accuracy: {}'.format(training_accuracy))

Error: I don't understand why it says I am not feeding a value for tensor x:

InvalidArgumentError: You must feed a value for placeholder tensor 'x' with dtype float and shape [16,32,32,3]     [[Node: x = Placeholder[dtype=DT_FLOAT, shape=[16,32,32,3], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

Answer:

On one line you define x as a placeholder:

x = tf.placeholder(tf.float32, shape=(batch_size, image_width, image_height, image_depth), name='x')

A few lines later, you overwrite the variable x with the result of a call to batch_normalization:

x = batch_normalization(x, training=phase)

At this point x no longer refers to the tf.placeholder but to the tf.Tensor produced by the batch_normalization op. So when you use x as a key in feed_dict, you are not supplying a value for the placeholder: the original placeholder named 'x' still exists in the graph and still needs a value, but nothing in feed_dict refers to it any more, which is exactly what the InvalidArgumentError is telling you.
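You can observe the rebinding directly (a minimal sketch, assuming TF 1.x and tf.layers.batch_normalization, since your snippet does not show where batch_normalization is imported from):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 4), name='x')
print(x.op.type)  # 'Placeholder'

x = tf.layers.batch_normalization(x, training=True)
print(x.op.type)  # a different op type: x no longer points at the placeholder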

To fix this, change the line

x = batch_normalization(x, training=phase)

to

x_bn = batch_normalization(x, training=phase)

and use x_bn instead of x in the lines that follow.

This way the placeholder variable x is never overwritten, and your code should run.
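For illustration, the start of the graph after the rename would look roughly like this (a minimal sketch; everything except the x_bn name is taken from your original code):

x = tf.placeholder(tf.float32, shape=(batch_size, image_width, image_height, image_depth), name='x')
# ... weight and bias variables as before ...
# Normalize the input without rebinding the name of the placeholder
x_bn = batch_normalization(x, training=phase)
conv = tf.nn.conv2d(x_bn, layer1_weights, [1, 1, 1, 1], padding='SAME') + layer1_biases

Now feed_dict={x: x_batch, ...} still refers to the placeholder, while the network consumes the normalized tensor x_bn.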
