Tensorflow: TypeError: Fetch argument None has invalid type

I am trying to run this simple program to compute gradients, but I am getting a None error:

import tensorflow as tf
import numpy as np

batch_size = 5
dim = 3
hidden_units = 8

sess = tf.Session()
with sess.as_default():
    x = tf.placeholder(dtype=tf.float32, shape=[None, dim], name="x")
    y = tf.placeholder(dtype=tf.int32, shape=[None], name="y")
    w = tf.Variable(initial_value=tf.random_normal(shape=[dim, hidden_units]), name="w")
    b = tf.Variable(initial_value=tf.zeros(shape=[hidden_units]), name="b")
    logits = tf.nn.tanh(tf.matmul(x, w) + b)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, y, name="xentropy")
    # End of model definition

    # Start training
    optimizer = tf.train.GradientDescentOptimizer(1e-5)
    grads_and_vars = optimizer.compute_gradients(cross_entropy, tf.trainable_variables())

    # Generate data
    data = np.random.randn(batch_size, dim)
    labels = np.random.randint(0, 10, size=batch_size)

    sess.run(tf.initialize_all_variables())
    gradients_and_vars = sess.run(grads_and_vars, feed_dict={x: data, y: labels})
    for g, v in gradients_and_vars:
        if g is not None:
            print "****************this is the variable*************"
            print "shape of the variable:", v.shape
            print v
            print "****************this is the gradient*************"
            print "shape of the gradient:", g.shape
            print g

sess.close()

错误:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-14-8096b2e21e06> in <module>()
     29
     30     sess.run(tf.initialize_all_variables())
---> 31     outnet = sess.run(grads_and_vars, feed_dict={x:data, y:labels})
     32 #     print(gradients_and_vars)
     33 #         if g is not None:

//anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    893     try:
    894       result = self._run(None, fetches, feed_dict, options_ptr,
--> 895                          run_metadata_ptr)
    896       if run_metadata:
    897         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

//anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1107     # Create a fetch handler to take care of the structure of fetches.
   1108     fetch_handler = _FetchHandler(
-> 1109         self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
   1110
   1111     # Run request and get response.

//anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py in __init__(self, graph, fetches, feeds, feed_handles)
    411     """
    412     with graph.as_default():
--> 413       self._fetch_mapper = _FetchMapper.for_fetch(fetches)
    414     self._fetches = []
    415     self._targets = []

//anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py in for_fetch(fetch)
    231     elif isinstance(fetch, (list, tuple)):
    232       # NOTE(touts): This is also the code path for namedtuples.
--> 233       return _ListFetchMapper(fetch)
    234     elif isinstance(fetch, dict):
    235       return _DictFetchMapper(fetch)

//anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py in __init__(self, fetches)
    338     """
    339     self._fetch_type = type(fetches)
--> 340     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
    341     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
    342

//anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py in <listcomp>(.0)
    338     """
    339     self._fetch_type = type(fetches)
--> 340     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
    341     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
    342

//anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py in for_fetch(fetch)
    231     elif isinstance(fetch, (list, tuple)):
    232       # NOTE(touts): This is also the code path for namedtuples.
--> 233       return _ListFetchMapper(fetch)
    234     elif isinstance(fetch, dict):
    235       return _DictFetchMapper(fetch)

//anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py in __init__(self, fetches)
    338     """
    339     self._fetch_type = type(fetches)
--> 340     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
    341     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
    342

//anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py in <listcomp>(.0)
    338     """
    339     self._fetch_type = type(fetches)
--> 340     self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
    341     self._unique_fetches, self._value_indices = _uniquify_fetches(self._mappers)
    342

//anaconda/lib/python3.5/site-packages/tensorflow/python/client/session.py in for_fetch(fetch)
    228     if fetch is None:
    229       raise TypeError('Fetch argument %r has invalid type %r' %
--> 230                       (fetch, type(fetch)))
    231     elif isinstance(fetch, (list, tuple)):
    232       # NOTE(touts): This is also the code path for namedtuples.

TypeError: Fetch argument None has invalid type <class 'NoneType'>

Why does this error occur? Is it a version problem?


Answer:

compute_gradients returns None for a variable when there is no explicit path in the graph connecting it to the loss. In your code all of the declared variables appear to be connected, so the None entries may come from variables loaded from another graph (for example, left over from earlier runs in the notebook). You can use:

 print([v.name for v in tf.all_variables()])

and check that only the expected variables are part of this graph.
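As a quick illustration of the first point, here is a minimal, self-contained sketch (hypothetical variable names, TensorFlow 1.x graph mode assumed, not taken from the question) showing that compute_gradients yields None for a variable that has no path to the loss, which is exactly what makes the later sess.run() fetch fail:

import tensorflow as tf

tf.reset_default_graph()
a = tf.Variable(1.0, name="used")      # participates in the loss
b = tf.Variable(2.0, name="unused")    # has no path to the loss
loss = 3.0 * a

optimizer = tf.train.GradientDescentOptimizer(0.1)
grads_and_vars = optimizer.compute_gradients(loss, [a, b])

# The pair for `b` is (None, b); fetching it with sess.run() would raise
# "Fetch argument None has invalid type".
print([(grad, var.name) for grad, var in grads_and_vars])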

Try something like this:

sess.run(tf.initialize_all_variables())
gradients_and_vars = sess.run([variable for grad, variable in grads_and_vars], feed_dict={x: data, y: labels})
print(gradients_and_vars)
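If the goal is to look at the gradients themselves rather than only the variables, another option (a sketch, not part of the original answer) is to drop the None entries before fetching, which mirrors the `if g is not None` check in the question but applies it before sess.run() instead of after:

# Keep only the pairs whose gradient is actually defined in the graph.
fetchable = [(g, v) for g, v in grads_and_vars if g is not None]
gradients_and_vars = sess.run(fetchable, feed_dict={x: data, y: labels})

for g, v in gradients_and_vars:
    print("shape of the variable:", v.shape)
    print("shape of the gradient:", g.shape)

And if tf.all_variables() does show stale variables from earlier notebook cells, calling tf.reset_default_graph() before rebuilding the model and the session clears them out.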


Tensorflow TypeError: Fetch argument None has invalid type?

I am building an RNN model based on the TensorFlow tutorial.

The relevant part of my model looks like this:

input_sequence = tf.placeholder(tf.float32, [BATCH_SIZE, TIME_STEPS, PIXEL_COUNT + AUX_INPUTS])
output_actual = tf.placeholder(tf.float32, [BATCH_SIZE, OUTPUT_SIZE])

lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(CELL_SIZE, state_is_tuple=False)
stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * CELL_LAYERS, state_is_tuple=False)

initial_state = state = stacked_lstm.zero_state(BATCH_SIZE, tf.float32)
outputs = []

with tf.variable_scope("LSTM"):
    for step in xrange(TIME_STEPS):
        if step > 0:
            tf.get_variable_scope().reuse_variables()
        cell_output, state = stacked_lstm(input_sequence[:, step, :], state)
        outputs.append(cell_output)

final_state = state

And the data is fed in like this:

cross_entropy = tf.reduce_mean(-tf.reduce_sum(output_actual * tf.log(prediction), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(output_actual, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    numpy_state = initial_state.eval()

    for i in xrange(1, ITERATIONS):
        batch = DI.next_batch()
        print i, type(batch[0]), np.array(batch[1]).shape, numpy_state.shape

        if i % LOG_STEP == 0:
            train_accuracy = accuracy.eval(feed_dict={
                initial_state: numpy_state,
                input_sequence: batch[0],
                output_actual: batch[1]
            })
            print "Iteration " + str(i) + " Training Accuracy " + str(train_accuracy)

        numpy_state, train_step = sess.run([final_state, train_step], feed_dict={
            initial_state: numpy_state,
            input_sequence: batch[0],
            output_actual: batch[1]
        })

When I run this code, I get the following error:

Traceback (most recent call last):
  File "/home/agupta/Documents/Projects/Image-Recognition-with-LSTM/RNN/feature_tracking/model.py", line 109, in <module>
    output_actual: batch[1]
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 698, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 838, in _run
    fetch_handler = _FetchHandler(self._graph, fetches)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 355, in __init__
    self._fetch_mapper = _FetchMapper.for_fetch(fetches)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 181, in for_fetch
    return _ListFetchMapper(fetch)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 288, in __init__
    self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 178, in for_fetch
    (fetch, type(fetch)))
TypeError: Fetch argument None has invalid type <type 'NoneType'>

The strangest part is that the error is thrown on the second iteration, while the first iteration runs perfectly fine. I'm tearing my hair out over this; any help would be greatly appreciated.


Answer:

You are reassigning the name train_step to the second element of the result of sess.run(), which happens to be None because fetching an operation produces no value. On the second iteration train_step is therefore None, which causes the error.
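To make the failure mode concrete, here is a minimal sketch (hypothetical names, not taken from the question) showing that fetching a training Operation in sess.run() always yields None, which is why reusing the name train_step for that result breaks the next iteration:

import tensorflow as tf

w = tf.Variable(1.0)
loss = tf.square(w - 3.0)
train_step = tf.train.AdamOptimizer(0.01).minimize(loss)   # an Operation, not a Tensor

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    loss_value, step_result = sess.run([loss, train_step])
    print(step_result)   # None -- so `a, b = sess.run([x, train_step])` leaves b as None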

Fortunately, the fix is simple:

for i in xrange(1, ITERATIONS):
    # ...
    # Discard the second element of the result.
    numpy_state, _ = sess.run([final_state, train_step], feed_dict={
        initial_state: numpy_state,
        input_sequence: batch[0],
        output_actual: batch[1]
    })
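If you also want to track the loss as you train, one possible variant (a sketch that reuses the placeholders from the question; it is not part of the original answer) is to fetch cross_entropy alongside final_state while still discarding the training op's slot:

numpy_state, loss_value, _ = sess.run(
    [final_state, cross_entropy, train_step],
    feed_dict={
        initial_state: numpy_state,
        input_sequence: batch[0],
        output_actual: batch[1]
    })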
