I'm using a tf.layers.batch_normalization layer in my network.
As you know, batch normalization uses trainable parameters gamma and beta for each unit u_i in the layer to choose its standard deviation and mean across different inputs x. Typically gamma is initialized to 1 and beta to 0.
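To make the role of gamma and beta concrete, here is a minimal NumPy sketch of the per-unit transform the question describes: y = gamma * (x - mean) / sqrt(var + eps) + beta. The batch values and eps are made up for illustration.

```python
import numpy as np

# Batch of activations for a single unit (illustrative values)
x = np.array([1.0, 2.0, 3.0, 4.0])
eps = 1e-3

# The usual initial values: gamma = 1, beta = 0
gamma, beta = 1.0, 0.0

# Batch normalization: standardize, then scale by gamma and shift by beta
y = gamma * (x - x.mean()) / np.sqrt(x.var() + eps) + beta

print(y.mean())  # ~0: with gamma=1, beta=0 the output is standardized
print(y.std())   # ~1
```

With gamma = 1 and beta = 0 the output is just the standardized batch; during training the network adjusts gamma and beta to pick a different scale and shift per unit.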
I'm interested in looking at the gamma and beta values learned in the different units, so I can gather statistics on where they typically end up once the network is trained. How can I see their current values at each training step?
Answer:
You can get all the variables in the batch normalization layer's scope and print them. For example:
import tensorflow as tf

tf.reset_default_graph()
x = tf.constant(3.0, shape=(3,))
x = tf.layers.batch_normalization(x)
print(x.name)  # batch_normalization/batchnorm/add_1:0

variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='batch_normalization')
print(variables)
# [<tf.Variable 'batch_normalization/gamma:0' shape=(3,) dtype=float32_ref>,
#  <tf.Variable 'batch_normalization/beta:0' shape=(3,) dtype=float32_ref>,
#  <tf.Variable 'batch_normalization/moving_mean:0' shape=(3,) dtype=float32_ref>,
#  <tf.Variable 'batch_normalization/moving_variance:0' shape=(3,) dtype=float32_ref>]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    gamma = sess.run(variables[0])
    print(gamma)  # [1. 1. 1.]
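Once fetched with sess.run, gamma and beta are plain NumPy arrays, so gathering the statistics the question asks about is straightforward. A minimal sketch (the values below are made up for illustration; in practice they would come from sess.run on the variables above, once per training step or at the end of training):

```python
import numpy as np

# Stand-ins for values fetched via sess.run(variables[0]) / sess.run(variables[1])
# after training; these numbers are invented for illustration.
gamma = np.array([0.9, 1.1, 1.05, 0.95])
beta = np.array([0.02, -0.1, 0.05, 0.0])

# Summary statistics across the units of the layer
print('gamma mean/std:', gamma.mean(), gamma.std())
print('beta  mean/std:', beta.mean(), beta.std())
```

To watch the values evolve during training, you can simply add the gamma/beta variables to the list of tensors passed to sess.run alongside your training op, and collect the returned arrays each step.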