Validation accuracy stagnates when training a VGG network on 10000 images

I have 10000 images: 5000 medical images showing lesions and 5000 healthy images. I used VGG16 and modified the last few layers as follows:

Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
flatten (Flatten)            (None, 25088)             0
fc1 (Dense)                  (None, 256)               6422784
fc2 (Dense)                  (None, 128)               32896
output (Dense)               (None, 2)                 258
=================================================================
Total params: 21,170,626
Trainable params: 6,455,938
Non-trainable params: 14,714,688

My code is as follows:

import numpy as np
import os
import time
from vgg16 import VGG16
from keras.preprocessing import image
from imagenet_utils import preprocess_input, decode_predictions
from keras.layers import Dense, Activation, Flatten
from keras.layers import merge, Input
from keras.models import Model
from keras.utils import np_utils
from keras.optimizers import SGD
from sklearn.utils import shuffle
from sklearn.cross_validation import train_test_split

# Load the training data
PATH = '/mount'
# Define the data path
data_path = PATH
data_dir_list = os.listdir(data_path)

img_data_list = []
y = 0
for dataset in data_dir_list:
    img_list = os.listdir(data_path + '/' + dataset)
    print('Loaded the images of dataset - {}\n'.format(dataset))
    for img in img_list:
        img_path = data_path + '/' + dataset + '/' + img
        img = image.load_img(img_path, target_size=(224, 224))
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        x = x / 255
        y = y + 1
        print('Input image shape:', x.shape)
        print(y)
        img_data_list.append(x)

sgd = SGD(lr=1e-3, decay=1e-6, momentum=0.9, nesterov=True)

img_data = np.array(img_data_list)
# img_data = img_data.astype('float32')
print(img_data.shape)
img_data = np.rollaxis(img_data, 1, 0)
print(img_data.shape)
img_data = img_data[0]
print(img_data.shape)

# Define the number of classes
num_classes = 2
num_of_samples = img_data.shape[0]
labels = np.ones((num_of_samples,), dtype='int64')
labels[0:5001] = 0
labels[5001:] = 1
names = ['YES', 'NO']

# Convert class labels to one-hot encoding
Y = np_utils.to_categorical(labels, num_classes)

# Shuffle the dataset
x, y = shuffle(img_data, Y, random_state=2)
# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=2)

image_input = Input(shape=(224, 224, 3))
model = VGG16(input_tensor=image_input, include_top=True, weights='imagenet')
model.summary()

last_layer = model.get_layer('block5_pool').output
x = Flatten(name='flatten')(last_layer)
x = Dense(256, activation='relu', name='fc1')(x)
x = Dense(128, activation='relu', name='fc2')(x)
out = Dense(num_classes, activation='softmax', name='output')(x)
custom_vgg_model2 = Model(image_input, out)
custom_vgg_model2.summary()

# Freeze all layers except the fully connected ones
for layer in custom_vgg_model2.layers[:-3]:
    layer.trainable = False
custom_vgg_model2.summary()

custom_vgg_model2.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

t = time.time()
hist = custom_vgg_model2.fit(X_train, y_train, batch_size=128, epochs=50, verbose=1,
                             validation_data=(X_test, y_test))
print('Training time: %s' % (t - time.time()))

(loss, accuracy) = custom_vgg_model2.evaluate(X_test, y_test, batch_size=10, verbose=1)
print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss, accuracy * 100))
model.save("vgg_10000.h5")

I am posting the results for the first 5 and the last few epochs:

Epoch 1/50
8000/8000 [==============================] - 154s - loss: 0.6960 - acc: 0.5354 - val_loss: 0.6777 - val_acc: 0.5745
Epoch 2/50
8000/8000 [==============================] - 134s - loss: 0.6684 - acc: 0.5899 - val_loss: 0.6866 - val_acc: 0.5490
Epoch 3/50
8000/8000 [==============================] - 134s - loss: 0.6608 - acc: 0.6040 - val_loss: 0.6625 - val_acc: 0.5925
Epoch 4/50
8000/8000 [==============================] - 134s - loss: 0.6518 - acc: 0.6115 - val_loss: 0.6668 - val_acc: 0.5810
Epoch 5/50
8000/8000 [==============================] - 134s - loss: 0.6440 - acc: 0.6280 - val_loss: 0.6990 - val_acc: 0.5580

And the last ones:

Epoch 25/50
8000/8000 [==============================] - 134s - loss: 0.5944 - acc: 0.6720 - val_loss: 0.6271 - val_acc: 0.6485
Epoch 26/50
8000/8000 [==============================] - 134s - loss: 0.5989 - acc: 0.6699 - val_loss: 0.6483 - val_acc: 0.6135
Epoch 27/50
8000/8000 [==============================] - 134s - loss: 0.5950 - acc: 0.6789 - val_loss: 0.7130 - val_acc: 0.5785
Epoch 28/50
8000/8000 [==============================] - 134s - loss: 0.5853 - acc: 0.6838 - val_loss: 0.6263 - val_acc: 0.6395

The results are not very good. I have tried changing the number of nodes in the last two layers to 128 and 128, using the adam optimizer, etc., but the results are still unsatisfactory. Any help would be greatly appreciated.


Answer:

You could try the following:

  • Do a stratified train_test_split:

    train_test_split(x, y, stratify=y, test_size=0.2, random_state=2)
  • Check your data to see whether there are any outliers among the images.

  • Use the adam optimizer: from keras.optimizers import Adam instead of SGD.
  • Try different seeds where applicable, i.e. instead of random_state=2, use other values:

    X_train, X_test, y_train, y_test = train_test_split(
        x, y, test_size=0.2, random_state=382938)
  • Try include_top=False:

    model = VGG16(input_tensor=image_input, include_top=False, weights='imagenet')
  • Use (train, validation, test) sets, or (cross-validation, hold-out) sets, to get more reliable performance metrics.
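A minimal sketch of the first suggestion, the stratified split. The arrays `x` and `y` here are synthetic placeholders standing in for the real image data; with a 50/50 class balance and test_size=0.2, stratify=y keeps exactly 1000 samples of each class in the test fold, so a lopsided validation set can be ruled out as a cause of noisy val_acc:

```python
import numpy as np
# cross_validation was removed in modern scikit-learn; use model_selection
from sklearn.model_selection import train_test_split

# Placeholder data: 10000 "samples" with 5000 of each class,
# mirroring the 5000 diseased / 5000 healthy images in the question.
x = np.zeros((10000, 1))
y = np.array([0] * 5000 + [1] * 5000)

# stratify=y preserves the per-class ratio in both folds.
X_train, X_test, y_train, y_test = train_test_split(
    x, y, stratify=y, test_size=0.2, random_state=2)

print(np.bincount(y_test))   # -> [1000 1000]
print(np.bincount(y_train))  # -> [4000 4000]
```

Without stratify, a plain random split can leave the folds slightly imbalanced, which adds noise to the validation accuracy on top of the model's real behavior.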

