IndexError: list index out of range when saving a model in TensorFlow

Can anyone help me? I am training an LSTM network with TensorFlow. Training runs fine, but when I try to save the model I get the error below.

Step 1, Minibatch Loss= 0.0146, Training Accuracy= 1.000
Step 1, Minibatch Loss= 0.0129, Training Accuracy= 1.000
Optimization Finished!
Traceback (most recent call last):
  File ".\lstm.py", line 169, in <module>
    save_path = saver.save(sess, "modelslstm/" + str(time.strftime("%d-%m-%Y-%H-%M-%S")) + ".ckpt")
  File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1314, in __exit__
    self._default_graph_context_manager.__exit__(exec_type, exec_value, exec_tb)
  File "C:\Python35\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3815, in get_controller
    if self.stack[-1] is not default:
IndexError: list index out of range

My code is as follows:

with tf.Session() as sess:
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    # from tensorflow.examples.tutorials.mnist import input_data
    # mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
    # a,b = mnist.train.next_batch(5)
    # print(b)
    # Run the initializer
    sess.run(init)
    saver = tf.train.Saver()
    merged_summary_op = tf.summary.merge_all()
    writer = tf.summary.FileWriter("trainlstm", sess.graph)
    #print(str(data.train.num_examples))
    for step in range(1, training_steps+1):
        for batch_i in range(data.train.num_examples // batch_size):
            batch_x, batch_y, name = data.train.next_batch(batch_size)
            #hasil,cost = encode(batch_x[0][0],"models/25-09-2017-15-25-54.ckpt")
            temp = []
            for batchi in range(batch_size):
                temp2 = []
                for ti in range(timesteps):
                    hasil, cost = encode(batch_x[batchi][ti], "models/25-09-2017-15-25-54.ckpt")
                    hasil = np.reshape(hasil, [num_input])
                    temp2.append(hasil.copy())
                temp.append(temp2.copy())
            batch_x = temp
            # Reshape data to get 28 seq of 28 elements
            #batch_x = batch_x.reshape((batch_size, timesteps, num_input))
            #dlib.hit_enter_to_continue()
            # Run optimization op (backprop)
            sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc))
            f.write("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc) + "\n")
    print("Optimization Finished!")

save_path = saver.save(sess, "modelslstm/" + str(time.strftime("%d-%m-%Y-%H-%M-%S")) + ".ckpt")
f.close()

I tried adding tf.reset_default_graph(), but it did not work. Please help me fix this. Thanks!


Answer:

Do you have to use the context manager (the with statement on the first line)? It looks like the context manager is having trouble tearing down your object when the block exits, which points to a problem in the __exit__ method. I would suggest filing a bug report with the developers.
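The underlying discipline can be illustrated without TensorFlow at all: an object owned by a with statement is torn down in __exit__ as soon as the block ends, so any call that depends on it (like saver.save depending on sess) has to happen before the dedent. A minimal sketch using a plain file handle as the stand-in resource:

```python
# Minimal non-TensorFlow sketch: a "with"-managed resource is torn down
# by __exit__ at the end of the block, so dependent calls must run inside it.
with open("scratch.txt", "w") as fh:
    fh.write("saved inside the block")  # OK: the resource is still alive

# After __exit__ the handle is closed; a late write raises ValueError.
try:
    fh.write("saved after the block")
except ValueError:
    print("write after __exit__ failed")  # prints: write after __exit__ failed
```

The same reasoning applies here: either keep the saver.save call indented inside the with tf.Session() block, or drop the context manager and manage the session lifetime explicitly (sess = tf.Session() ... sess.close()) so you control exactly when teardown happens.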

