I have fine-tuned a ResNet50 model, and its architecture is as follows:
model = models.Sequential()
model.add(resnet)
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(layers.Dense(2048, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(736, activation='softmax'))  # output layer
Now I have the saved model (an .h5 file) and I want to use it as the input to another model, but I do not need its last layer. With the base ResNet50 model I would normally do it like this:
def base_model():
    resnet = resnet50.ResNet50(weights="imagenet", include_top=False)
    x = resnet.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Lambda(lambda x_: K.l2_normalize(x,axis=1))(x)
    return Model(inputs=resnet.input, outputs=x)
But this approach does not work for my model and raises an error. The code I am currently trying is below, and it still does not work:
def base_model():
    resnet = load_model("../Models/fine_tuned_model/fine_tuned_resnet50.h5")
    x = resnet.layers.pop()
    #resnet = resnet50.ResNet50(weights="imagenet", include_top=False)
    #x = resnet.output
    #x = GlobalAveragePooling2D()(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Dense(4096, activation='relu')(x)
    x = Dropout(0.6)(x)
    x = Lambda(lambda x_: K.l2_normalize(x,axis=1))(x)
    return Model(inputs=resnet.input, outputs=x)

enhanced_resent = base_model()
This is the error I get:
Layer dense_3 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.core.Dense'>. Full input: [<keras.layers.core.Dense object at 0x000001C61E68E2E8>]. All inputs to the layer should be tensors.
I do not know whether this is even possible.
Answer:
After an hour of nearly giving up, I finally figured it out. This is how you should do it:
from keras.models import load_model, Model
from keras.layers import Dense, Dropout, Lambda
from keras import backend as K

def base_model():
    resnet = load_model("../Models/fine_tuned_model/42-0.85.h5")
    # Take the output tensor of the second-to-last layer instead of popping the last layer
    x = resnet.layers[-2].output
    x = Dense(4096, activation='relu', name="FC1")(x)
    x = Dropout(0.6, name="FCDrop1")(x)
    x = Dense(4096, activation='relu', name="FC2")(x)
    x = Dropout(0.6, name="FCDrop2")(x)
    # L2-normalize the feature vector (normalize the Lambda's own argument x_)
    x = Lambda(lambda x_: K.l2_normalize(x_, axis=1))(x)
    return Model(inputs=resnet.input, outputs=x)

enhanced_resent = base_model()
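As a quick sanity check (the batch size and the 224x224 input resolution below are assumptions for illustration, not something stated in the original post), the rebuilt model can be called directly as a feature extractor:

import numpy as np

enhanced_resent.summary()  # the 736-way softmax head should no longer be listed

# Dummy batch of two images; use whatever resolution the fine-tuned model expects
dummy_images = np.random.rand(2, 224, 224, 3).astype("float32")
features = enhanced_resent.predict(dummy_images)
print(features.shape)  # (2, 4096): L2-normalized feature vectors from FC2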
Building the model this way works perfectly. I hope it helps someone else, because I have never seen this done in any tutorial. The key line is:
x = resnet.layers[-2].output
This grabs the layer you want, but you need to know the index of that layer. -2 is the second-to-last layer, the fully connected one, which is what I want, since I want the features rather than the final classification. Note that you take that layer's .output, which is a symbolic tensor; calling new layers on a layer object (which is what layers.pop() returns) is exactly what triggered the "isn't a symbolic tensor" error above. You can find the index you need by running
model.summary()
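model.summary() prints the layers in order, but it does not show numeric indices. If that helps, here is a minimal sketch (reusing the same load_model call as above) that prints both the positive and the negative index of every layer so the -2 can be read off directly:

from keras.models import load_model

resnet = load_model("../Models/fine_tuned_model/42-0.85.h5")
for idx, layer in enumerate(resnet.layers):
    # positive index, negative index, and layer name
    print(idx, idx - len(resnet.layers), layer.name)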