This is a simple piece of TensorFlow code that creates two models (placeholders) sharing parameters but taking different inputs.
import tensorflow as tf
import numpy as np


class Test:
    def __init__(self):
        self.x = tf.placeholder(tf.float32, [None] + [64], name='states')
        self.y = tf.placeholder(tf.float32, [None] + [64], name='y')
        self.x_test = tf.placeholder(tf.float32, [None] + [64], name='states_test')
        self.is_training = tf.placeholder(tf.bool, name='is_training')
        self.model()

    def network(self, x, reuse):
        with tf.variable_scope('test_network', reuse=reuse):
            h1 = tf.layers.dense(x, 64)
            bn1 = tf.layers.batch_normalization(h1, training=self.is_training)
            drp1 = tf.layers.dropout(tf.nn.relu(bn1), rate=.9, training=self.is_training, name='dropout')
            h2 = tf.layers.dense(drp1, 64)
            bn2 = tf.layers.batch_normalization(h2, training=self.is_training)
            out = tf.layers.dropout(tf.nn.relu(bn2), rate=.9, training=self.is_training, name='dropout')
            return out

    def model(self):
        self.out = self.network(self.x, False)
        self.out_test = self.network(self.x_test, True)
        self.loss = tf.losses.mean_squared_error(self.out, self.y)
        extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        with tf.control_dependencies(extra_update_ops):
            self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)


def main(_):
    my_test = Test()
    sess = tf.Session()
    init = tf.global_variables_initializer()
    sess.run(init)
    batch_x = np.zeros((4, 64))
    batch_y = np.zeros((4, 64))
    for i in range(10):
        feed_dict = {my_test.x: batch_x, my_test.y: batch_y, my_test.is_training: True}
        _, loss = sess.run([my_test.train_step, my_test.loss], feed_dict)


if __name__ == '__main__':
    tf.app.run()
When I run the train_step node, I get this error:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'states_test' with dtype float and shape [?,64]
  [[Node: states_test = Placeholder[dtype=DT_FLOAT, shape=[?,64], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
  [[Node: mean_squared_error/value/_77 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_2678_mean_squared_error/value", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Why do I have to feed a value for 'states_test', even though the train_step node is not connected to that placeholder and should not need it at run time?
However, if I change the model function so that the second network is created after the optimizer, the code runs without errors (as shown below):
def model(self):
    self.out = self.network(self.x, False)
    self.loss = tf.losses.mean_squared_error(self.out, self.y)
    extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(extra_update_ops):
        self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)
    self.out_test = self.network(self.x_test, True)
Why does this happen, even though both versions should produce the same TensorFlow graph? Can anyone explain this behavior?
Answer:
The problem comes from the use of batch normalization, specifically these lines:
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)
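Before explaining why, it helps to look at what this collection actually contains. The snippet below is a small diagnostic sketch (not part of the original code); if you run it after both networks have been built, you should see moving-mean/variance update ops coming from two copies of the network:

# Inspect the UPDATE_OPS collection after building both graphs.
# Each tf.layers.batch_normalization call registers ops here that update
# its moving mean/variance, so both the training copy and the test copy
# contribute entries.
for op in tf.get_collection(tf.GraphKeys.UPDATE_OPS):
    print(op.name)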
Note that you have two graphs that share variables: your training graph and your test graph, and you create both of them before creating the optimizer. However, you put a control dependency on extra_update_ops, which is the collection of all update ops. The problem is that every batch normalization layer creates update ops (to track its moving mean/variance), and you have one set of them in the training graph and another in the test graph. By requesting the control dependency, you are telling TF that your train op may execute only after the batch-normalization statistics of both the training and the test graph have been updated, and updating the test-graph statistics requires feeding test samples. So what should you do? Either change extra_update_ops so it contains only the training graph's updates (by name scope, manual filtering, or any other method; see the sketch after the next code block), or call tf.get_collection before constructing the test graph, like this:
def model(self):
    self.out = self.network(self.x, False)
    # Note that at this point we only collect the update ops of the training batch norms
    extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    self.out_test = self.network(self.x_test, True)
    self.loss = tf.losses.mean_squared_error(self.out, self.y)
    with tf.control_dependencies(extra_update_ops):
        self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)
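If you would rather keep the original construction order, the name-scope filtering mentioned above could look like the sketch below. It assumes the second network() call places its ops under a uniquified name scope such as 'test_network_1' (which is what tf.variable_scope typically does when it reopens a scope by name), so filtering on the 'test_network/' prefix keeps only the training copy's updates:

# Hedged sketch: keep only update ops from the first (training) copy.
# The scope argument of tf.get_collection filters items by a regex match
# on their names; 'test_network/' does not match 'test_network_1/...'.
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope='test_network/')
with tf.control_dependencies(extra_update_ops):
    self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)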
You might also want to pass reuse=True to your batch normalization layers.
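For completeness, with the fixed model() above in place, evaluating the test graph is a hypothetical call like the one below: only x_test and is_training need to be fed, and the train op no longer depends on the test placeholders:

# Evaluate the shared-weight test network; batch norm and dropout run in
# inference mode because is_training is False.
test_out = sess.run(my_test.out_test,
                    {my_test.x_test: batch_x, my_test.is_training: False})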