Background: I want to train a CNN in a regression setting to predict a value. I also want to evaluate and compute the loss after every epoch, so I need to switch between datasets at runtime.
Input: [num_examples, height, width, channels] -> [num_examples, y]
I want to use the new Dataset API because I want to avoid feeding batches myself during training.
I don't want to store the dataset in the computation graph, since it is larger than 2 GB, but it is small enough to fit in memory.
Here is my current setup:
def initialize_datasets(x, y, ...):
    dataset_train = tf.data.Dataset.from_tensor_slices((x, y))
    dataset_train = dataset_train.apply(tf.contrib.data.shuffle_and_repeat(buffer_size=examples_train, count=epochs))
    dataset_train = dataset_train.batch(batch_size)

    dataset_test = tf.data.Dataset.from_tensor_slices((x, y))
    dataset_test = dataset_test.apply(tf.contrib.data.shuffle_and_repeat(buffer_size=examples_test, count=-1))
    dataset_test = dataset_test.batch(batch_size)

    # Iterators
    iterator_train = dataset_train.make_initializable_iterator()
    iterator_test = dataset_test.make_initializable_iterator()
    return iterator_train, iterator_test

def get_input_batch_data(testing, iterator_train, iterator_test):
    features, labels = tf.cond(testing,
                               lambda: iterator_test.get_next(),
                               lambda: iterator_train.get_next())
    return features, labels
Then, in my model() function:
# 1
iterator_train, iterator_test = initialize_datasets(x, y, ...)
# 2
features, labels = get_input_batch_data(testing, iterator_train, iterator_test)

# Forward pass, loss, etc. ...

with tf.Session() as sess:
    # Initialize with the training data, trainX[num_examples, height, width, channels]
    sess.run(iterator_train.initializer, feed_dict={x: trainX, y: trainY, batch_size: batchsize})
    # Initialize with the test data
    sess.run(iterator_test.initializer, feed_dict={x: testX, y: testY, batch_size: NUM_EXAMPLES_TEST})

    for i in range(EPOCHS):
        for j in range(NUM_BATCHES):
            _, batch_loss = sess.run([train_step, loss], feed_dict={testing: False, i: iters_total, pkeep: p_keep})
        # After one epoch, compute the loss over the whole test dataset
        epoch_test_loss = sess.run(loss, feed_dict={testing: True, i: iters_total, pkeep: 1})
Here is the output:
Iter: 44, Epoch: 0 (8.46s), Train-Loss: 103011.18, Test-Loss: 100162.34
Iter: 89, Epoch: 1 (4.17s), Train-Loss: 93699.51, Test-Loss: 92130.21
Iter: 134, Epoch: 2 (4.13s), Train-Loss: 90217.82, Test-Loss: 88978.74
Iter: 179, Epoch: 3 (4.14s), Train-Loss: 88503.13, Test-Loss: 87515.81
Iter: 224, Epoch: 4 (4.18s), Train-Loss: 87336.62, Test-Loss: 86486.40
Iter: 269, Epoch: 5 (4.10s), Train-Loss: 86388.38, Test-Loss: 85637.64
Iter: 314, Epoch: 6 (4.14s), Train-Loss: 85534.52, Test-Loss: 84858.43
Iter: 359, Epoch: 7 (4.29s), Train-Loss: 84693.19, Test-Loss: 84074.78
Iter: 404, Epoch: 8 (4.20s), Train-Loss: 83973.64, Test-Loss: 83314.47
Iter: 449, Epoch: 9 (4.40s), Train-Loss: 83149.73, Test-Loss: 82541.73
Questions:
- This output suggests that my dataset pipeline is not working correctly: the test loss and the train loss are too close to each other, which could mean the test loss is being computed on the training data (or vice versa).
- Which kind of iterator and dataset should I use for this task?
I have also uploaded the whole model here: https://github.com/toemm/TF-CNN-regression/blob/master/BA-CNN_so.ipynb
Answer:
The obvious answer is: you don't want to do this within the same graph, because the evaluation graph differs from the training graph:
- Dropout applies a fixed multiplier (no sampling)
- BatchNorm uses the accumulated statistics and does not update the EMA
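To make the first point concrete, here is a minimal NumPy sketch (my own illustration, not part of the original answer) of inverted dropout: at training time units are randomly zeroed and the survivors rescaled, while at evaluation time the layer is a fixed, deterministic transform:

```python
import numpy as np

def dropout(x, p_keep, is_train, rng=np.random.default_rng(0)):
    """Inverted dropout: sample a random mask at train time, identity at eval time."""
    if not is_train:
        # Evaluation: no sampling, just a fixed multiplier (here 1.0 for inverted dropout)
        return x
    mask = rng.random(x.shape) < p_keep
    # Scale survivors by 1/p_keep so the expected activation is unchanged
    return x * mask / p_keep

x = np.ones(10000)
train_out = dropout(x, p_keep=0.8, is_train=True)   # random mask, mean ~= 1.0
eval_out = dropout(x, p_keep=0.8, is_train=False)   # deterministic, exactly x
```

Because the two modes compute genuinely different functions, sharing one graph node for both train and eval forces you to thread flags like pkeep through every run, which is exactly what makes the single-graph setup fragile.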
So the solution is really to build two separate things, like this:
import numpy as np
import tensorflow as tf

X_train = tf.constant(np.ones((100, 2)), 'float32')
X_val = tf.constant(np.zeros((10, 2)), 'float32')

iter_train = tf.data.Dataset.from_tensor_slices(X_train).make_initializable_iterator()
iter_val = tf.data.Dataset.from_tensor_slices(X_val).make_initializable_iterator()

def graph(x, is_train=True):
    # Stand-in for the actual model; build the forward pass here,
    # using is_train to configure Dropout/BatchNorm
    return x

output_train = graph(iter_train.get_next(), is_train=True)
output_val = graph(iter_val.get_next(), is_train=False)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(iter_train.initializer)
    sess.run(iter_val.initializer)

    for train_iter in range(100):
        print(sess.run(output_train))

    for train_iter in range(10):
        print(sess.run(output_val))