I built an NN model with Keras for a binary classification problem. Here is the code:
from keras import models, layers

# Create a new model
nn_model = models.Sequential()
# Add the input and fully connected layers
nn_model.add(layers.Dense(128, activation='relu', input_shape=(22,)))  # 128 is the number of hidden units, 22 is the number of features
nn_model.add(layers.Dense(16, activation='relu'))
nn_model.add(layers.Dense(16, activation='relu'))
# Add the final layer
nn_model.add(layers.Dense(1, activation='sigmoid'))

# I split off 3000 rows from the training set to monitor accuracy and loss
# Compile and train the model
nn_model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = nn_model.fit(partial_x_train,
                       partial_y_train,
                       epochs=20,
                       batch_size=512,  # the batch size defines the number of samples propagated through the network
                       validation_data=(x_val, y_val))
Here is the training log:
Train on 42663 samples, validate on 3000 samples
Epoch 1/20
42663/42663 [==============================] - 0s 9us/step - loss: 0.2626 - acc: 0.8960 - val_loss: 0.2913 - val_acc: 0.8767
Epoch 2/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2569 - acc: 0.8976 - val_loss: 0.2625 - val_acc: 0.9007
Epoch 3/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2560 - acc: 0.8958 - val_loss: 0.2546 - val_acc: 0.8900
Epoch 4/20
42663/42663 [==============================] - 0s 4us/step - loss: 0.2538 - acc: 0.8970 - val_loss: 0.2451 - val_acc: 0.9043
Epoch 5/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2526 - acc: 0.8987 - val_loss: 0.2441 - val_acc: 0.9023
Epoch 6/20
42663/42663 [==============================] - 0s 4us/step - loss: 0.2507 - acc: 0.8997 - val_loss: 0.2825 - val_acc: 0.8820
Epoch 7/20
42663/42663 [==============================] - 0s 4us/step - loss: 0.2504 - acc: 0.8993 - val_loss: 0.2837 - val_acc: 0.8847
Epoch 8/20
42663/42663 [==============================] - 0s 4us/step - loss: 0.2507 - acc: 0.8988 - val_loss: 0.2631 - val_acc: 0.8873
Epoch 9/20
42663/42663 [==============================] - 0s 4us/step - loss: 0.2471 - acc: 0.9012 - val_loss: 0.2788 - val_acc: 0.8823
Epoch 10/20
42663/42663 [==============================] - 0s 4us/step - loss: 0.2489 - acc: 0.8997 - val_loss: 0.2414 - val_acc: 0.9010
Epoch 11/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2471 - acc: 0.9017 - val_loss: 0.2741 - val_acc: 0.8880
Epoch 12/20
42663/42663 [==============================] - 0s 4us/step - loss: 0.2458 - acc: 0.9016 - val_loss: 0.2523 - val_acc: 0.8973
Epoch 13/20
42663/42663 [==============================] - 0s 4us/step - loss: 0.2433 - acc: 0.9022 - val_loss: 0.2571 - val_acc: 0.8940
Epoch 14/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2457 - acc: 0.9012 - val_loss: 0.2567 - val_acc: 0.8973
Epoch 15/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2421 - acc: 0.9020 - val_loss: 0.2411 - val_acc: 0.8957
Epoch 16/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2434 - acc: 0.9007 - val_loss: 0.2431 - val_acc: 0.9000
Epoch 17/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2431 - acc: 0.9021 - val_loss: 0.2398 - val_acc: 0.9000
Epoch 18/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2435 - acc: 0.9018 - val_loss: 0.2919 - val_acc: 0.8787
Epoch 19/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2409 - acc: 0.9029 - val_loss: 0.2478 - val_acc: 0.8943
Epoch 20/20
42663/42663 [==============================] - 0s 5us/step - loss: 0.2426 - acc: 0.9020 - val_loss: 0.2380 - val_acc: 0.9007
I plotted the accuracy and loss of the training and validation sets:
As you can see, the results are not very stable, so I picked two epochs and retrained the model on the entire training set. Here is the new log:
Epoch 1/2
45663/45663 [==============================] - 0s 7us/step - loss: 0.5759 - accuracy: 0.7004
Epoch 2/2
45663/45663 [==============================] - 0s 5us/step - loss: 0.5155 - accuracy: 0.7341
My question is: why is the accuracy so unstable, and why does the retrained model only reach 73% accuracy? How can I improve the model? Thanks.
Answer:
Your validation set has 3000 samples and your training set has 42663, which means your validation set is only about 7% of the data. Your validation accuracy jumps between 0.88 and 0.90, i.e. roughly plus or minus 2%. 7% is too little validation data to get good statistics, and given only 7% of the data, a swing of plus or minus 2% is actually not bad. Typically the validation data should be 20% to 25% of the total data, i.e. a 75-25 train-validation split.
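As a rough back-of-the-envelope check (my own illustration, assuming the validation samples behave like i.i.d. draws; this calculation is not part of the original answer): the standard error of an accuracy estimate p measured on n samples is sqrt(p(1-p)/n), which for p ≈ 0.9 and n = 3000 is about 0.55%, so epoch-to-epoch swings on the order of one or two percentage points are consistent with a validation set this small:

import math

# Illustrative values taken from the logs above (assumptions, not measurements)
p = 0.90   # observed validation accuracy, roughly
n = 3000   # validation set size
se = math.sqrt(p * (1 - p) / n)               # standard error of the accuracy estimate
print(f"standard error: {se:.4f}")            # ~0.0055, i.e. about 0.55%
print(f"95% interval: +/-{1.96 * se:.4f}")    # ~0.011, i.e. about +/-1.1 percentage points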
Also, make sure you shuffle the data before making the train-validation split.
If X and y are your full dataset, then you can use:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
This will shuffle the data and give you the 75-25 split.
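To connect this back to the code in the question, a minimal sketch of how the new split plugs into the existing training call (variable names taken from the question and from the split above; the hyperparameters are just the original ones, not a recommendation):

history = nn_model.fit(X_train, y_train,
                       epochs=20,
                       batch_size=512,
                       validation_data=(X_test, y_test))

With roughly 11,000 validation samples instead of 3000, the epoch-to-epoch jumps in val_acc should shrink noticeably.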