I am doing multi-class classification with three class labels in Keras. During training, both the training and validation losses decrease and the accuracy improves. After training, I evaluated the model on the training set as a sanity check and found a huge discrepancy between model.evaluate and model.predict. I did find some answers suggesting this is an issue with BatchNorm and Dropout layers, but that should not cause such a large gap. The relevant code is shown below.
model = Sequential()
model.add(Conv2D(32, (3, 3), padding="same", input_shape=input_shape))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
...
model.add(Dense(n_classes))
model.add(Activation("softmax"))

optimizer = Adam()
model.compile(loss='categorical_crossentropy', optimizer=optimizer,
              metrics=['categorical_accuracy'])

datagen = ImageDataGenerator(horizontal_flip=True, fill_mode='nearest')
train_datagen = datagen.flow(X_train, y_train, batch_size=batch_size)
val_datagen = ImageDataGenerator().flow(X_val, y_val, batch_size=batch_size)

history = model.fit(train_datagen,
                    steps_per_epoch=math.ceil(nb_train_samples / batch_size),
                    verbose=2, epochs=50,
                    validation_data=val_datagen,
                    validation_steps=math.ceil(nb_validation_samples / batch_size),
                    class_weight=d_class_weights)

print('model.evaluate accuracy: ',
      model.evaluate(X_train, y_train, batch_size=batch_size)[1])

test_pred = model.predict(ImageDataGenerator().flow(X_train, y=None, batch_size=batch_size),
                          steps=math.ceil(nb_train_samples / batch_size))

# Convert the predicted probabilities to a one-hot matrix.
test_result = np.zeros(np.array(test_pred).shape)
test_result[np.arange(len(test_pred)), test_pred.argmax(1)] = 1

total = 0
count = 0
for i in range(test_result.shape[0]):
    total += 1
    count += (test_result[i] == y_train[i]).all()
print('model.predict accuracy: ', count / total)
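As an aside, the one-hot round trip and the Python loop at the end can be collapsed into a single vectorized argmax comparison; a minimal numpy sketch, using small stand-in arrays in place of the real test_pred and y_train:

```python
import numpy as np

# Stand-ins for model.predict output and one-hot y_train.
test_pred = np.array([[0.8, 0.1, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.3, 0.3, 0.4]])
y_train = np.eye(3)[[0, 1, 2]]

# Compare predicted class indices to true class indices directly,
# instead of building a one-hot matrix and looping row by row.
accuracy = (test_pred.argmax(axis=1) == y_train.argmax(axis=1)).mean()
print('model.predict accuracy: ', accuracy)  # 1.0 for these stand-in arrays
```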
The output I get is:
66/66 [==============================] - 12s 177ms/step - loss: 0.0010 - categorical_accuracy: 1.0000
model.evaluate accuracy:  1.0
model.predict accuracy:  0.42138063279002874
I have been trying to solve this for a while without finding any fix. I am already using categorical_crossentropy, categorical_accuracy, and a softmax activation in the final layer, so I don't know what is going wrong. Any help would be greatly appreciated!
Answer:
I eventually found the solution: I was passing only X_train to the predict generator, and its shuffle parameter defaults to True, so the predictions no longer corresponded to the ground-truth labels. Setting shuffle=False fixed the problem.
test_pred = model.predict(
    ImageDataGenerator().flow(X_train, y=None, batch_size=batch_size, shuffle=False),
    steps=math.ceil(nb_train_samples / batch_size))
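To see why shuffling breaks the comparison, here is a minimal numpy sketch; the toy one-hot array stands in for y_train, and a hypothetical perfect model's predictions are compared against it first in the original order and then in a shuffled order:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: 100 samples, 3 classes, one-hot encoded.
y_true = np.eye(3)[rng.integers(0, 3, size=100)]

# A perfect model's predictions in the ORIGINAL sample order.
preds_ordered = y_true.copy()

# What a shuffling generator effectively does: same predictions,
# but returned in a permuted order.
perm = rng.permutation(len(y_true))
preds_shuffled = preds_ordered[perm]

acc_ordered = (preds_ordered.argmax(1) == y_true.argmax(1)).mean()
acc_shuffled = (preds_shuffled.argmax(1) == y_true.argmax(1)).mean()

print(acc_ordered)   # 1.0 -- rows line up with y_true
print(acc_shuffled)  # roughly 1/3 -- rows no longer correspond
```

This matches the symptom above: evaluate reports near-perfect accuracy, while the row-by-row comparison against shuffled predictions hovers around chance level for three classes.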