I'm trying to extract features from a pretrained model and use them in my own model. I can instantiate the Inception V3 model successfully and save its output to use as the input to my model, but I get an error when I try to use it. I tried removing the Flatten layer, but the problem doesn't seem to be there. I think the problem is with last_output, but I don't know how to fix it. The code is as follows:
#%% Imports
import tensorflow as tf
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras import layers, Model
from tensorflow.keras.applications.inception_v3 import InceptionV3
import os, signal
import numpy as np

#%% Instantiate an Inception V3 model
# Get the weights from the pretrained model
url = "https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5"
local_weights_file = tf.keras.utils.get_file("inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5",
                                             origin=url, extract=True)

# With include_top=False, we load a network that doesn't include the
# classification layers at the top, which is ideal for feature extraction.
pre_trained_model = InceptionV3(input_shape=(150, 150, 3), include_top=False, weights=None)
pre_trained_model.load_weights(local_weights_file)

# Make the model non-trainable, since we will only use it for feature extraction;
# we won't update the weights of the pretrained model during training.
for layers in pre_trained_model.layers:
    layers.trainable = False

# The layer we will use for feature extraction in Inception v3 is called mixed7.
# It is not the bottleneck of the network, but we are using it to keep a
# sufficiently large feature map (7x7 in this case). (Using the bottleneck layer
# would have resulted in a 3x3 feature map, which is a bit small.)
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape:', last_layer.output_shape)
last_output = last_layer.output
print(last_output)

# %% Stick a fully connected classifier on top of last_output
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)

# Configure and compile the model
model = Model(pre_trained_model.input, x)
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(lr=0.0001),
              metrics=['acc'])
The error is as follows:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
c:\Users\jpaul\Code\Google_ML_Crash_Course\02_Practica\02_Image_Classification\image_classification_part3.py in
     39 # Flatten the output layer to 1 dimension
----> 40 x = layers.Flatten()(last_output)
     41
     42 # Add a fully connected layer with 1,024 hidden units and ReLU activation
     43 x = layers.Dense(1024, activation='relu')(x)

AttributeError: 'Concatenate' object has no attribute 'Flatten'
Answer:
In your for loop you overwrite the layers identifier that was brought in by the import statement:

from tensorflow.keras import layers

So when you try to create a new Flatten() layer, the layers identifier holds a Concatenate object (the last layer the loop iterated over) instead of the Keras layers module you expect.

Rename the variable in the for loop and you should be able to resolve this issue.
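For example, a minimal sketch of the renamed loop, assuming the rest of your script stays unchanged:

# Use a loop variable that does not shadow the imported layers module
for layer in pre_trained_model.layers:   # 'layer', not 'layers'
    layer.trainable = False

# layers still refers to tensorflow.keras.layers, so this now works
x = layers.Flatten()(last_output)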