I am trying to use the VGG16 model in Keras to train an image detection model.
Following these articles (https://www.pyimagesearch.com/2019/06/03/fine-tuning-with-keras-and-deep-learning/ and https://learnopencv.com/keras-tutorial-fine-tuning-using-pre-trained-models/), I added some extra Dense layers on top of the VGG16 model. However, after 20 epochs the training accuracy is only around 35% to 41%, which does not match the results in those articles (over 90%).
So I would like to know whether there is a problem with my code below.
Basic setup
url = '/content/drive/My Drive/fer2013.csv'
batch_size = 64
img_width, img_height = 48, 48
# 0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral
num_classes = 7
model_path = '/content/drive/My Drive/Af/cnn.h5'
df = pd.read_csv(url)

def _load_fer():
    # Load the training and evaluation data
    df = pd.read_csv(url, sep=',')
    train_df = df[df['Usage'] == 'Training']
    eval_df = df[df['Usage'] == 'PublicTest']
    return train_df, eval_df

def _preprocess_fer(df, label_col='emotion', feature_col='pixels'):
    labels, features = df.loc[:, label_col].values.astype(np.int32), [
        np.fromstring(image, np.float32, sep=' ')
        for image in df.loc[:, feature_col].values]
    labels = [to_categorical(l, num_classes=num_classes) for l in labels]
    features = np.stack((features,) * 3, axis=-1)
    features /= 255
    features = features.reshape(features.shape[0], img_width, img_height, 3)
    return features, labels

# Load the FER data
train_df, eval_df = _load_fer()
# Preprocess the FER data
x_train, y_train = _preprocess_fer(train_df)
x_valid, y_valid = _preprocess_fer(eval_df)

gen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
train_generator = gen.flow(x_train, y_train, batch_size=batch_size)
predict_size_train = int(np.math.ceil(len(x_train) / batch_size))

input_tensor = Input(shape=(img_width, img_height, 3))
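Note that the training call further down references `valid_generator` and `predict_size_valid`, which are never defined in the snippet above. A minimal sketch of the missing definitions, assuming the validation data should not be augmented (the random arrays below are stand-ins for `x_valid`/`y_valid` so the sketch runs on its own; in the real script they come from `_preprocess_fer(eval_df)`):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

batch_size = 64

# Dummy stand-ins for x_valid / y_valid so the sketch is self-contained;
# in the real script these come from _preprocess_fer(eval_df).
x_valid = np.random.rand(100, 48, 48, 3).astype(np.float32)
y_valid = np.eye(7)[np.random.randint(0, 7, 100)].astype(np.float32)

# Validation data is typically not augmented: use a plain generator
valid_gen = ImageDataGenerator()
valid_generator = valid_gen.flow(x_valid, y_valid, batch_size=batch_size)
predict_size_valid = int(np.ceil(len(x_valid) / batch_size))
```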
Now the model-training part
baseModel = VGG16(
    include_top=False,
    weights='imagenet',
    input_tensor=input_tensor)

# Construct the head of the model that will be placed on top of the base model (fine-tuning)
headModel = baseModel.output
headModel = Flatten()(headModel)
headModel = Dense(1024, activation="relu")(headModel)
#headModel = Dropout(0.5)(headModel)
headModel = BatchNormalization()(headModel)
headModel = Dense(num_classes, activation="softmax")(headModel)

model = Model(inputs=baseModel.input, outputs=headModel)

for layer in baseModel.layers:
    layer.trainable = False
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(lr=0.001),
              metrics=['accuracy'])

history = model.fit(train_generator,
                    steps_per_epoch=predict_size_train * 1,
                    epochs=20,
                    validation_data=valid_generator,
                    validation_steps=predict_size_valid)
Result: (screenshot of the training output not reproduced here)

Thank you very much in advance for any suggestions. Best regards.
Answer:
Since you froze all the layers, a single Dense head may not reach the accuracy you want. Also, if you are not in a hurry, you can simply leave out the validation_steps and steps_per_epoch parameters. In this setup the model's metrics fluctuate, which is not what you want.
My suggestions:
for layer in baseModel.layers:
    layer.trainable = False

# The layer name may differ; check it with baseModel.summary()
base_out = baseModel.get_layer('block3_pool').output
This way you can take the output of a specific intermediate layer. Once you have that output, you can add some convolutional layers on top of it. After the convolutional layers, try stacking a few more Dense layers, like this:
# here x is the output of the convolutional layers you added on top of base_out
x = tf.keras.layers.Flatten()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.3)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.2)(x)
output_model = Dense(num_classes, activation='softmax')(x)
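Putting the pieces together, a minimal end-to-end sketch of this suggestion. The extra Conv2D/MaxPooling2D layers and their sizes are my own illustrative choices, and `weights=None` is used here only so the sketch runs without downloading the ImageNet weights; use `weights='imagenet'` in practice:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                                     Dense, Dropout)
from tensorflow.keras.models import Model

num_classes = 7
input_tensor = Input(shape=(48, 48, 3))

# weights=None only so this sketch runs offline; use 'imagenet' in practice
baseModel = VGG16(include_top=False, weights=None, input_tensor=input_tensor)
for layer in baseModel.layers:
    layer.trainable = False

# Truncate the base at an intermediate layer (6x6x256 for a 48x48 input)
base_out = baseModel.get_layer('block3_pool').output

# A trainable conv block on top of the truncated base (illustrative sizes)
x = Conv2D(256, (3, 3), padding='same', activation='relu')(base_out)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.3)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.2)(x)
output_model = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=baseModel.input, outputs=output_model)
```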
If you don't want to add convolutional layers and would rather use the baseModel in full, that's fine too, but then you can do this:
# 12 is arbitrary; try different values. This time not all layers are frozen.
for layer in baseModel.layers[:12]:
    layer.trainable = False

# Check which layers are frozen
for i, layer in enumerate(baseModel.layers):
    print(i, layer.name, layer.trainable)
After that, you can try:
headModel = baseModel.output
headModel = Flatten()(headModel)
headModel = Dense(1024, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(512, activation="relu")(headModel)
headModel = Dense(num_classes, activation="softmax")(headModel)
If you see that the model is learning but the loss fluctuates, you can lower the learning rate. Alternatively, you can use the ReduceLROnPlateau callback:
rd_lr = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1),
                          patience=4, verbose=1, min_lr=5e-8)
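For completeness, a sketch of wiring the callback into training. The tiny stand-in model and random data below are placeholders so the sketch runs on its own; in practice, pass your fine-tuned model and the real training/validation generators:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import ReduceLROnPlateau

rd_lr = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1),
                          patience=4, verbose=1, min_lr=5e-8)

# Tiny stand-in model and random data so the sketch is self-contained;
# replace with your fine-tuned VGG16 model and real generators.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(7, activation='softmax'),
])
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              metrics=['accuracy'])

x = np.random.rand(32, 48, 48, 3).astype('float32')
y = tf.keras.utils.to_categorical(np.random.randint(0, 7, 32), 7)

# The callback watches val_loss and shrinks the LR when it plateaus
history = model.fit(x, y, validation_data=(x, y), epochs=2,
                    callbacks=[rd_lr], verbose=0)
```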
The parameters depend entirely on your model. For more details, you can check the documentation.