The following code I wrote fails at the call to self.optimizer.compute_gradients(self.output, all_variables):
import tensorflow as tf
import tensorlayer as tl
from tensorflow.python.framework import ops
import numpy as np

class Network1():
    def __init__(self):
        ops.reset_default_graph()
        tl.layers.clear_layers_name()
        self.sess = tf.Session()
        self.optimizer = tf.train.AdamOptimizer(learning_rate=0.1)
        self.input_x = tf.placeholder(tf.float32, shape=[None, 784], name="input")
        input_layer = tl.layers.InputLayer(self.input_x)
        relu1 = tl.layers.DenseLayer(input_layer, n_units=800, act=tf.nn.relu, name="relu1")
        relu2 = tl.layers.DenseLayer(relu1, n_units=500, act=tf.nn.relu, name="relu2")
        self.output = relu2.all_layers[-1]
        all_variables = relu2.all_layers
        self.gradient = self.optimizer.compute_gradients(self.output, all_variables)
        init_op = tf.initialize_all_variables()
        self.sess.run(init_op)
It fails with this error:

TypeError: Argument is not a tf.Variable: Tensor("relu1/Relu:0", shape=(?, 800), dtype=float32)
However, when I change that line to tf.gradients(self.output, all_variables), the code runs fine, or at least no error is reported. What am I missing? I thought the two methods did essentially the same thing, namely return a list of (gradient, variable) pairs.
Answer:
optimizer.compute_gradients wraps tf.gradients(), as shown here. It also performs additional assertions (which explains your error): every entry in its var_list must be a tf.Variable, but relu2.all_layers is a list of layer output Tensors (such as the "relu1/Relu:0" tensor in your error message), not Variables.
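A minimal sketch of the distinction, using plain graph-mode TensorFlow without TensorLayer (written against tf.compat.v1 so it also runs under TF 2.x; the placeholder, variable, and loss here are illustrative, not from your network). compute_gradients wants tf.Variable objects in var_list, while tf.gradients will happily differentiate with respect to any Tensor, including a placeholder:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
w = tf.Variable(tf.ones([3, 1]), name="w")   # a trainable tf.Variable
loss = tf.reduce_sum(tf.matmul(x, w))        # a Tensor built from both

opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)

# compute_gradients: var_list is meant to hold tf.Variable objects,
# which is exactly the assertion your code tripped over.
grads_and_vars = opt.compute_gradients(loss, var_list=tf.trainable_variables())

# tf.gradients: differentiates with respect to arbitrary Tensors, so even
# the placeholder x (which is not a Variable) is accepted here.
dloss_dx = tf.gradients(loss, [x])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    gv, gx = sess.run([grads_and_vars[0][0], dloss_dx[0]],
                      feed_dict={x: np.ones((2, 3), np.float32)})
    # gv is dloss/dw (shape (3, 1)), gx is dloss/dx (shape (2, 3))
```

In TensorLayer terms, the layer's trainable tf.Variable objects live in relu2.all_params (assuming the standard TensorLayer attribute), while relu2.all_layers holds the output Tensors, so passing relu2.all_params, or simply omitting var_list to default to the trainable variables, should avoid the assertion.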