A problem with the metrics?

I'm trying to do a simple classification of road images (one-way / two-way) with a convolutional neural network (CNN). My dataset consists of 4000 images of the first class and roughly 4000 of the second, so the classes are essentially balanced, and each class is stored in a different folder.

However, the metrics show a kind of "jumping" behaviour. I've tried different input sizes, different optimizers ('adam', 'rmsprop') and different batch sizes (10, 16, 20), but I always get the same result... Does anyone know what could be causing this behaviour?

Here is the code:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense, Dropout
from keras.preprocessing.image import ImageDataGenerator

# Model: three conv/pool blocks followed by a small fully connected head.
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(300, 300, 3)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

batch_size = 10

# Training data: rescaling plus random augmentation.
# Note: featurewise_std_normalization normally requires calling
# datagen.fit() on a sample of the training data first.
train_datagen = ImageDataGenerator(
    # rescale=1./255,
    # shear_range=0.2,
    # zoom_range=0.2,
    # horizontal_flip=True
    featurewise_std_normalization=True,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

# Validation data: rescaling only, no augmentation.
test_datagen = ImageDataGenerator(featurewise_std_normalization=True,
                                  rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'data/train',
    target_size=(300, 300),
    batch_size=batch_size,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    'data/validation',
    target_size=(300, 300),
    batch_size=batch_size,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=2000 // batch_size,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=800 // batch_size)

After running this code I got the following output:

Epoch 1/50 - 253s - loss: 0.8142 - acc: 0.5450 - val_loss: 0.4937 - val_acc: 0.8662
Epoch 2/50 - 254s - loss: 0.6748 - acc: 0.5980 - val_loss: 0.5782 - val_acc: 0.7859
Epoch 3/50 - 255s - loss: 0.6679 - acc: 0.6580 - val_loss: 0.5068 - val_acc: 0.8562
Epoch 4/50 - 255s - loss: 0.6438 - acc: 0.6780 - val_loss: 0.5018 - val_acc: 0.8766
Epoch 5/50 - 257s - loss: 0.6427 - acc: 0.7245 - val_loss: 0.3760 - val_acc: 0.9213
Epoch 6/50 - 256s - loss: 0.5635 - acc: 0.7435 - val_loss: 0.6140 - val_acc: 0.6398
Epoch 7/50 - 254s - loss: 0.6226 - acc: 0.7320 - val_loss: 0.1852 - val_acc: 0.9433
Epoch 8/50 - 252s - loss: 0.4858 - acc: 0.7765 - val_loss: 0.1617 - val_acc: 0.9437
Epoch 9/50 - 253s - loss: 0.4433 - acc: 0.8120 - val_loss: 0.5577 - val_acc: 0.6788
Epoch 10/50 - 252s - loss: 0.4621 - acc: 0.7935 - val_loss: 0.1000 - val_acc: 0.9762
Epoch 11/50 - 254s - loss: 0.4572 - acc: 0.8035 - val_loss: 0.3797 - val_acc: 0.8161
Epoch 12/50 - 257s - loss: 0.4707 - acc: 0.8105 - val_loss: 0.0903 - val_acc: 0.9761
Epoch 13/50 - 254s - loss: 0.4134 - acc: 0.8390 - val_loss: 0.1587 - val_acc: 0.9437
Epoch 14/50 - 252s - loss: 0.4023 - acc: 0.8355 - val_loss: 0.1149 - val_acc: 0.9584
Epoch 15/50 - 253s - loss: 0.4286 - acc: 0.8255 - val_loss: 0.0897 - val_acc: 0.9700
Epoch 16/50 - 253s - loss: 0.4665 - acc: 0.8140 - val_loss: 0.6411 - val_acc: 0.8136
Epoch 17/50 - 252s - loss: 0.4010 - acc: 0.8315 - val_loss: 0.1205 - val_acc: 0.9736
Epoch 18/50 - 253s - loss: 0.3790 - acc: 0.8550 - val_loss: 0.0993 - val_acc: 0.9613
Epoch 19/50 - 251s - loss: 0.3717 - acc: 0.8620 - val_loss: 0.1154 - val_acc: 0.9748
Epoch 20/50 - 250s - loss: 0.4434 - acc: 0.8405 - val_loss: 0.1251 - val_acc: 0.9537
Epoch 21/50 - 253s - loss: 0.4535 - acc: 0.7545 - val_loss: 0.6766 - val_acc: 0.3640
Epoch 22/50 - 252s - loss: 0.7482 - acc: 0.7140 - val_loss: 0.4803 - val_acc: 0.7950
Epoch 23/50 - 252s - loss: 0.3712 - acc: 0.8585 - val_loss: 0.1056 - val_acc: 0.9685
Epoch 24/50 - 251s - loss: 0.3836 - acc: 0.8545 - val_loss: 0.1267 - val_acc: 0.9673
Epoch 25/50 - 250s - loss: 0.3879 - acc: 0.8805 - val_loss: 0.8669 - val_acc: 0.8100
Epoch 26/50 - 250s - loss: 0.3735 - acc: 0.8825 - val_loss: 0.1472 - val_acc: 0.9685
Epoch 27/50 - 250s - loss: 0.4577 - acc: 0.8620 - val_loss: 0.3285 - val_acc: 0.8925
Epoch 28/50 - 252s - loss: 0.3805 - acc: 0.8875 - val_loss: 0.3930 - val_acc: 0.7821
Epoch 29/50 - 250s - loss: 0.3565 - acc: 0.8930 - val_loss: 0.1087 - val_acc: 0.9647
Epoch 30/50 - 250s - loss: 0.4680 - acc: 0.8845 - val_loss: 0.1012 - val_acc: 0.9688
Epoch 31/50 - 250s - loss: 0.3293 - acc: 0.9080 - val_loss: 0.0700 - val_acc: 0.9811
Epoch 32/50 - 250s - loss: 0.4197 - acc: 0.9060 - val_loss: 0.1464 - val_acc: 0.9700
Epoch 33/50 - 251s - loss: 0.3656 - acc: 0.9005 - val_loss: 8.8236 - val_acc: 0.4307
Epoch 34/50 - 249s - loss: 0.4593 - acc: 0.9015 - val_loss: 4.3916 - val_acc: 0.6826
Epoch 35/50 - 250s - loss: 0.4824 - acc: 0.8605 - val_loss: 0.0748 - val_acc: 0.9850
Epoch 36/50 - 250s - loss: 0.4629 - acc: 0.8875 - val_loss: 0.2257 - val_acc: 0.8728
Epoch 37/50 - 250s - loss: 0.3708 - acc: 0.9075 - val_loss: 0.1196 - val_acc: 0.9537
Epoch 38/50 - 250s - loss: 0.9151 - acc: 0.8605 - val_loss: 0.1266 - val_acc: 0.9559
Epoch 39/50 - 250s - loss: 0.3700 - acc: 0.9035 - val_loss: 0.1038 - val_acc: 0.9812
Epoch 40/50 - 249s - loss: 0.5900 - acc: 0.8625 - val_loss: 0.0838 - val_acc: 0.9887
Epoch 41/50 - 250s - loss: 0.4409 - acc: 0.9065 - val_loss: 0.0828 - val_acc: 0.9773
Epoch 42/50 - 250s - loss: 0.3415 - acc: 0.9115 - val_loss: 0.8084 - val_acc: 0.8788
Epoch 43/50 - 250s - loss: 0.5181 - acc: 0.8440 - val_loss: 0.0998 - val_acc: 0.9786
Epoch 44/50 - 249s - loss: 0.3270 - acc: 0.8970 - val_loss: 0.1155 - val_acc: 0.9625
Epoch 45/50 - 250s - loss: 0.3810 - acc: 0.9125 - val_loss: 0.2881 - val_acc: 0.9484
Epoch 46/50 - 249s - loss: 0.3499 - acc: 0.9220 - val_loss: 0.3109 - val_acc: 0.8564
Epoch 47/50 - 250s - loss: 0.3505 - acc: 0.9160 - val_loss: 0.0861 - val_acc: 0.9788
Epoch 48/50 - 250s - loss: 0.3073 - acc: 0.9225 - val_loss: 0.0999 - val_acc: 0.9874
Epoch 49/50 - 250s - loss: 0.4418 - acc: 0.9000 - val_loss: 0.0301 - val_acc: 0.9925
Epoch 50/50 - 250s - loss: 0.3501 - acc: 0.9190 - val_loss: 0.0351 - val_acc: 0.9861

Is this overfitting, or just a random position of the parameters on my loss surface? I'm going to try to find other images to build a new validation dataset...


Answer:

"each class is stored in a different folder"

So do you mean that the images of the first class are in the 'train' folder
and the images of the other class are in the 'validate' folder?

Try setting the batch size to 32,
and split the training and validation sets 0.8 to 0.2.
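
For what it's worth, here is a minimal sketch of that suggestion, assuming a reasonably recent Keras (ImageDataGenerator supporting validation_split) and a single hypothetical folder 'data/all' with one sub-folder per class; neither the path nor the split code comes from the original post:

from keras.preprocessing.image import ImageDataGenerator

batch_size = 32  # larger batches than 10 smooth out the per-step gradient noise

# One generator that reserves 20% of the images as a validation subset.
# 'data/all' is a placeholder path, not from the original post.
datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)

train_generator = datagen.flow_from_directory(
    'data/all',
    target_size=(300, 300),
    batch_size=batch_size,
    class_mode='binary',
    subset='training')       # 80% of the images

validation_generator = datagen.flow_from_directory(
    'data/all',
    target_size=(300, 300),
    batch_size=batch_size,
    class_mode='binary',
    subset='validation')     # the remaining 20%

Only rescaling is configured here on purpose: any random augmentation set on this shared generator would also be applied to the validation subset.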

Edit

Here is a link you may find useful:
https://stats.stackexchange.com/questions/187335/validation-error-less-than-training-error

Edit

Try to get more samples.
If getting more samples is difficult,
try to create/modify samples from the ones you already have.
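
As a rough illustration of "creating samples from the existing ones", the same kind of augmentation already used for training can be made to write its transformed copies to disk. 'data/train' follows the original post, while 'data/augmented', the prefix and the loop count are hypothetical:

import os
from keras.preprocessing.image import ImageDataGenerator

os.makedirs('data/augmented', exist_ok=True)  # hypothetical output folder

# Random transforms similar to the training generator above.
augmenter = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

# Stream the existing images and write the transformed copies to disk.
gen = augmenter.flow_from_directory(
    'data/train',                 # existing folder-per-class layout
    target_size=(300, 300),
    batch_size=32,
    class_mode='binary',
    save_to_dir='data/augmented',
    save_prefix='aug',
    save_format='jpeg')

# Every batch drawn from the generator also saves its augmented images.
# Note: save_to_dir writes all classes into one flat folder; to keep a
# folder-per-class layout, run this once per class (e.g. via classes=[...]).
for _ in range(50):
    next(gen)

In practice the on-the-fly augmentation in train_datagen already feeds the model such variants every epoch, so saving them is mainly useful for inspecting what the transforms produce or for permanently enlarging the dataset.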
