After saving and training a TensorFlow graph, I restore it as follows so I can retrain it with a different loss function:
    import tensorflow as tf
    import numpy as np
    import pyximport
    pyximport.install()
    import math
    import tensorflow.contrib.slim as slim

    raw_data_train = np.loadtxt('all_data/train_all_raw.csv', skiprows=1, delimiter=',')
    users = np.unique(raw_data_train[:, 0])
    items = np.unique(raw_data_train[:, 1])

    saver = tf.train.import_meta_graph('all_data/my_test_model.meta')

    with tf.Session() as sess:
        tf.global_variables_initializer().run(session=sess)
        saver.restore(sess, tf.train.latest_checkpoint('all_data/'))

        # Placeholders
        user_ids = sess.graph.get_tensor_by_name('user_ids:0')
        left_ids = sess.graph.get_tensor_by_name('left_ids:0')

        # Variables
        user_latents = sess.graph.get_tensor_by_name('user_latents:0')
        item_latents = sess.graph.get_tensor_by_name('item_latents:0')

        # The network was originally defined under the variable scope 'nn',
        # which is why I retrieve its tensors as 'nn/*' in the lines below
        weights_0 = sess.graph.get_tensor_by_name('nn/fully_connected/weights:0')
        biases_0 = sess.graph.get_tensor_by_name('nn/fully_connected/biases:0')
        weights_1 = sess.graph.get_tensor_by_name('nn/fully_connected_1/weights:0')
        biases_1 = sess.graph.get_tensor_by_name('nn/fully_connected_1/biases:0')

        # Lookups
        user_embeddings = sess.graph.get_tensor_by_name('embedding_user:0')
        item_left_embeddings = sess.graph.get_tensor_by_name('embedding_left:0')

        # Feed dictionary
        fd = {
            user_ids: users,
            left_ids: items,
        }

        left_emb_val, weights_0_val, biases_0_val, weights_1_val, biases_1_val = sess.run(
            [item_left_embeddings, weights_0, biases_0, weights_1, biases_1], feed_dict=fd)

        joined_input = tf.concat([user_embeddings, item_left_embeddings], 1)
        net = slim.fully_connected(inputs=joined_input, num_outputs=64,
                                   weights_initializer=tf.constant_initializer(weights_0_val),
                                   biases_initializer=tf.constant_initializer(biases_0_val),
                                   activation_fn=tf.nn.relu)
        left_output = slim.fully_connected(inputs=net, num_outputs=1,
                                           weights_initializer=tf.constant_initializer(weights_1_val),
                                           biases_initializer=tf.constant_initializer(biases_1_val),
                                           activation_fn=None)

        # ********* the line below causes the error *************
        left_output_val = sess.run([left_output], feed_dict=fd)
        print(left_output_val)
When I try to compute the value of left_output_val by calling sess.run, the code above produces the following error:
    FailedPreconditionError (see above for traceback): Attempting to use uninitialized value fully_connected_1/biases
    [[Node: fully_connected_1/biases/read = Identity[T=DT_FLOAT, _class=["loc:@fully_connected_1/biases"], _device="/job:localhost/replica:0/task:0/cpu:0"](fully_connected_1/biases)]]
This surprised me a little, because:
- I initialized all the variables with the line:

      tf.global_variables_initializer().run(session=sess)

  This may be because the weights and biases are not initialized by this line, as suggested here: Uninitialized value error while using Adadelta optimizer in Tensorflow
- I initialized the weights and biases in these lines:

      net = slim.fully_connected(inputs=joined_input, num_outputs=64,
                                 weights_initializer=tf.constant_initializer(weights_0_val),
                                 biases_initializer=tf.constant_initializer(biases_0_val),
                                 activation_fn=tf.nn.relu)
      left_output = slim.fully_connected(inputs=net, num_outputs=1,
                                         weights_initializer=tf.constant_initializer(weights_1_val),
                                         biases_initializer=tf.constant_initializer(biases_1_val),
                                         activation_fn=None)

  Yet the uninitialized weights-and-biases error still occurs when running the session to compute the value of left_output_val (see the sketch after this list).
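For context, here is a minimal sketch (with hypothetical variable names v1 and v2, not from the code above) of why the order of operations matters: tf.global_variables_initializer() only covers the variables that exist in the graph when the init op is created, so variables created afterwards, like the ones the two slim.fully_connected calls build, remain uninitialized:

    import tensorflow as tf

    sess = tf.Session()
    v1 = tf.Variable(0.0, name='v1')
    sess.run(tf.global_variables_initializer())  # initializes v1 only
    v2 = tf.Variable(0.0, name='v2')             # created after the init op ran
    # Reading v2 now would raise FailedPreconditionError, just like above:
    print(sess.run(tf.report_uninitialized_variables()))  # -> [b'v2']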
I would appreciate any suggestions for resolving my problem.
Answer:
The two slim.fully_connected calls create brand-new variables under the default scope (fully_connected/* and fully_connected_1/*, separate from the restored nn/* variables), and they are created after tf.global_variables_initializer() has already run. You can fetch the variables of such a dense layer and initialize them manually:
    with tf.variable_scope('fully_connected_1', reuse=True):
        weights = tf.get_variable('weights')
        biases = tf.get_variable('biases')
        sess.run([weights.initializer, biases.initializer])
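Alternatively, a generic sketch (my own variant, not part of the answer above) that finds and initializes every still-uninitialized variable, so you do not have to name each layer scope by hand; the variables restored by the saver are already initialized and are left untouched:

    # Names of variables that have not been initialized yet (bytes in Python 3)
    uninit_names = set(sess.run(tf.report_uninitialized_variables()))
    # Match the names back to the corresponding Variable objects
    uninit_vars = [v for v in tf.global_variables()
                   if v.name.split(':')[0].encode() in uninit_names]
    # Run their initializers in one go
    sess.run(tf.variables_initializer(uninit_vars))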