While building a model in Keras, I used some TensorFlow functions (reduce_sum and l2_normalize) in the last layer and ran into this problem. I have searched for a solution, but everything I found relates to "Keras tensors".
Here is my code:
import tensorflow as tf
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.applications.vgg16 import VGG16
from tensorflow.python.keras.layers import MaxPooling2D, Conv2D, Dropout, Activation
from tensorflow.python.keras.models import Model

vgg16_model = VGG16(weights='imagenet', include_top=False, input_shape=input_shape)
# extract_layer_from_model is my own helper that returns the 'block4_pool' layer
fire8 = extract_layer_from_model(vgg16_model, layer_name='block4_pool')

pool8 = MaxPooling2D((3, 3), strides=(2, 2), name='pool8')(fire8.output)

fc1 = Conv2D(64, (6, 6), strides=(1, 1), padding='same', name='fc1')(pool8)
fc1 = Dropout(rate=0.5)(fc1)

fc2 = Conv2D(3, (1, 1), strides=(1, 1), padding='same', name='fc2')(fc1)
fc2 = Activation('relu')(fc2)
fc2 = Conv2D(3, (15, 15), padding='valid', name='fc_pooling')(fc2)

# raw TensorFlow / backend ops applied to the Keras tensor
fc2_norm = K.l2_normalize(fc2, axis=3)
est = tf.reduce_sum(fc2_norm, axis=(1, 2))
est = K.l2_normalize(est)

FC_model = Model(inputs=vgg16_model.input, outputs=est)
Then I got this error:
ValueError: Output tensors to a Model must be the output of a TensorFlow `Layer` (thus holding past layer metadata). Found: Tensor("l2_normalize_3:0", shape=(?, 3), dtype=float32)
I noticed that the model works fine if I don't pass the fc2 layer through these functions:
FC_model = Model(inputs=vgg16_model.input, outputs=fc2)
Could someone explain this problem and suggest a way to fix it?
Answer:
I found a way around the problem. Keras requires every model output to come from a Keras Layer, so that it carries layer metadata, while raw TensorFlow ops like tf.reduce_sum return plain tensors without it. For anyone hitting the same issue: you can wrap your TensorFlow operations in a Lambda layer, which is what I did:
from tensorflow.python.keras.layers import Lambda

def norm(fc2):
    fc2_norm = K.l2_normalize(fc2, axis=3)
    illum_est = tf.reduce_sum(fc2_norm, axis=(1, 2))
    illum_est = K.l2_normalize(illum_est)
    return illum_est

# wrapping the ops in a Lambda layer makes them a proper Keras Layer
illum_est = Lambda(norm)(fc2)
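With the TensorFlow operations wrapped this way, the Lambda output carries the layer metadata Keras expects and can be used directly as the model output. A minimal sketch of the final step, assuming the same vgg16_model and fc2 as in the question:

FC_model = Model(inputs=vgg16_model.input, outputs=illum_est)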