Running the session fails because of a tensor type and shape mismatch, TensorFlow

I tried to load the model and the graph as follows:

saver = tf.train.import_meta_graph(tf.train.latest_checkpoint(model_path) + ".meta")
graph = tf.get_default_graph()
outputs = graph.get_tensor_by_name('output:0')
outputs = tf.cast(outputs, dtype=tf.float32)
X = graph.get_tensor_by_name('input:0')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
if tf.train.checkpoint_exists(tf.train.latest_checkpoint(model_path)):
    saver.restore(sess, tf.train.latest_checkpoint(model_path))
    print(tf.train.latest_checkpoint(model_path) + "Session Loaded for Testing")

It worked! …
But when I try to run the session, I get the following error:

y_test_output= sess.run(outputs, feed_dict={X: x_test})

The error is:

Caused by op 'output', defined at:
  File "testing_reality.py", line 21, in <module>
    saver = tf.train.import_meta_graph(tf.train.latest_checkpoint(model_path)+".meta")
  File "C:\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 1674, in import_meta_graph
    meta_graph_or_file, clear_devices, import_scope, **kwargs)[0]
  File "C:\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 1696, in _import_meta_graph_with_return_elements
    **kwargs))
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
    return_elements=return_elements)
  File "C:\Python35\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\importer.py", line 442, in import_graph_def
    _ProcessNewOps(graph)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\importer.py", line 234, in _ProcessNewOps
    for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in _add_new_tf_operations
    for c_op in c_api_util.new_tf_operations(self)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in <listcomp>
    for c_op in c_api_util.new_tf_operations(self)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3299, in _create_op_from_tf_operation
    ret = Operation(c_op, self)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'output' with dtype float and shape [?,1]
         [[node output (defined at testing_reality.py:21)  = Placeholder[dtype=DT_FLOAT, shape=[?,1], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

I don't know what is causing this error.
Please help me find the missing link.

I have already checked:

>>> outputs
<tf.Tensor 'output:0' shape=(?, 1) dtype=float32>

I still cannot figure out what is causing the error.

I am using the latest TensorFlow, '1.12.0', on Windows 10.

This is how I create the graph:

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs], name="input")
y = tf.placeholder(tf.float32, [None, n_outputs], name="output")
layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons, activation=tf.nn.relu6,
                                  use_peepholes=True, name="layer" + str(layer))
          for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
outputs = outputs[:, n_steps-1, :]  # keep only the last output of the sequence
loss = tf.reduce_mean(tf.square(outputs - y))  # loss function = mean squared error
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

Answer:

This happens when you try to evaluate a node in the graph that depends on the value of a placeholder. That is why you get an error saying you must feed a value for the placeholder. Take a look at the following example:

tf.reset_default_graph()
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = a + b
d = a

with tf.Session() as sess:
    print(c.eval(feed_dict={a: 1.0}))  # error: to evaluate c we must also feed a value for b

with tf.Session() as sess:
    print(d.eval(feed_dict={a: 1.0}))  # works, because d does not depend on b

Now, in your case, you should not run the outputs placeholder. What you should run is the operation that produces the model's prediction, while feeding values into the X placeholder (assuming that is the placeholder you use to feed data into the model). The output placeholder, on the other hand, is presumably the one you use to feed in the labels during training, so there is no need to feed any data into it here. A short sketch of the distinction follows.
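A minimal sketch using the names from your graph-building code above (where outputs is the model's output tensor, not the placeholder), assuming the variables are already initialized or trained: the 'output' placeholder (y) only needs a value when the node you fetch actually depends on it, e.g. the loss.

y_pred = sess.run(outputs, feed_dict={X: x_test})          # prediction only, no labels needed
mse    = sess.run(loss, feed_dict={X: x_test, y: y_test})  # loss depends on y, so labels must be fed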

Regarding your latest update:

By running outputs = graph.get_tensor_by_name('output:0') you are loading the placeholder named output. That is not what you need; what you need is the operation that slices the outputs. In the part of the code where you create the graph, run:

outputs = tf.identity(outputs[:,n_steps-1,:], name="prediction")
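For context, a minimal sketch of how the tail of your graph-building code would look with this change (same variable names as in your question):

outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
# name the sliced last-step output so it can be fetched by name after restoring the graph
outputs = tf.identity(outputs[:, n_steps-1, :], name="prediction")
loss = tf.reduce_mean(tf.square(outputs - y))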

Then, when loading the model, load these two tensors:

X = graph.get_tensor_by_name('input:0')
prediction = graph.get_tensor_by_name('prediction:0')

Finally, to get the prediction for the input you want:

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(prediction, feed_dict={X: x_test})
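If you are loading a trained checkpoint (as in the loading code from your question, after re-saving the graph with the prediction op added), you would restore the weights rather than just initialize them. A rough sketch pieced together from your own loading code:

saver = tf.train.import_meta_graph(tf.train.latest_checkpoint(model_path) + ".meta")
graph = tf.get_default_graph()
X = graph.get_tensor_by_name('input:0')
prediction = graph.get_tensor_by_name('prediction:0')

sess = tf.Session()
saver.restore(sess, tf.train.latest_checkpoint(model_path))  # restores the trained weights
y_test_output = sess.run(prediction, feed_dict={X: x_test})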
