Validation accuracy not improving

No matter how many epochs I use or how I change the learning rate, my validation accuracy stays around 50%. I am currently using one dropout layer; with two dropout layers my best training accuracy was 40% and validation accuracy 59%. With a single dropout layer, these are my results:

2527/2527 [==============================] - 26s 10ms/step - loss: 1.2076 - accuracy: 0.7944 - val_loss: 3.0905 - val_accuracy: 0.5822
Epoch 10/20
2527/2527 [==============================] - 26s 10ms/step - loss: 1.1592 - accuracy: 0.7991 - val_loss: 3.0318 - val_accuracy: 0.5864
Epoch 11/20
2527/2527 [==============================] - 26s 10ms/step - loss: 1.1143 - accuracy: 0.8034 - val_loss: 3.0511 - val_accuracy: 0.5866
Epoch 12/20
2527/2527 [==============================] - 26s 10ms/step - loss: 1.0686 - accuracy: 0.8079 - val_loss: 3.0169 - val_accuracy: 0.5872
Epoch 13/20
2527/2527 [==============================] - 31s 12ms/step - loss: 1.0251 - accuracy: 0.8126 - val_loss: 3.0173 - val_accuracy: 0.5895
Epoch 14/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.9824 - accuracy: 0.8165 - val_loss: 3.0013 - val_accuracy: 0.5917
Epoch 15/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.9417 - accuracy: 0.8216 - val_loss: 2.9909 - val_accuracy: 0.5938
Epoch 16/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.9000 - accuracy: 0.8264 - val_loss: 3.0269 - val_accuracy: 0.5943
Epoch 17/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.8584 - accuracy: 0.8332 - val_loss: 3.0011 - val_accuracy: 0.5934
Epoch 18/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.8172 - accuracy: 0.8378 - val_loss: 2.9918 - val_accuracy: 0.5949
Epoch 19/20
2527/2527 [==============================] - 26s 10ms/step - loss: 0.7796 - accuracy: 0.8445 - val_loss: 2.9974 - val_accuracy: 0.5929
Epoch 20/20
2527/2527 [==============================] - 25s 10ms/step - loss: 0.7407 - accuracy: 0.8502 - val_loss: 3.0005 - val_accuracy: 0.5907

It tops out at 59%. This is the graph I got:

[image: plot of training vs. validation accuracy over the epochs]

No matter what changes I make, the validation accuracy never goes above 59%. Here is my code:

import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, LSTM, Dropout, Dense
from tensorflow.keras.models import Model

BATCH_SIZE = 64
EPOCHS = 20
LSTM_NODES = 256
NUM_SENTENCES = 3000
MAX_SENTENCE_LENGTH = 50
MAX_NUM_WORDS = 5000
EMBEDDING_SIZE = 100

# Encoder: embed the input sequence and keep only the final LSTM states
encoder_inputs_placeholder = Input(shape=(max_input_len,))
x = embedding_layer(encoder_inputs_placeholder)
encoder = LSTM(LSTM_NODES, return_state=True)
encoder_outputs, h, c = encoder(x)
encoder_states = [h, c]

# Decoder: initialized with the encoder states, returns the full sequence
decoder_inputs_placeholder = Input(shape=(max_out_len,))
decoder_embedding = Embedding(num_words_output, LSTM_NODES)
decoder_inputs_x = decoder_embedding(decoder_inputs_placeholder)
decoder_lstm = LSTM(LSTM_NODES, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs_x, initial_state=encoder_states)

decoder_dropout1 = Dropout(0.2)
decoder_outputs = decoder_dropout1(decoder_outputs)
decoder_dense1 = Dense(num_words_output, activation='softmax')
decoder_outputs = decoder_dense1(decoder_outputs)

opt = tf.keras.optimizers.RMSprop()
model = Model([encoder_inputs_placeholder, decoder_inputs_placeholder], decoder_outputs)
model.compile(
    optimizer=opt,
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)

history = model.fit(
    [encoder_input_sequences, decoder_input_sequences],
    decoder_targets_one_hot,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    validation_split=0.1,
)

I'm really confused as to why only my training accuracy keeps improving while the validation accuracy barely moves.

Here is the model summary:

Model: "model_1"__________________________________________________________________________________________________Layer (type)                    Output Shape         Param #     Connected to                     ==================================================================================================input_1 (InputLayer)            (None, 25)           0                                            __________________________________________________________________________________________________input_2 (InputLayer)            (None, 23)           0                                            __________________________________________________________________________________________________embedding_1 (Embedding)         (None, 25, 100)      299100      input_1[0][0]                    __________________________________________________________________________________________________embedding_2 (Embedding)         (None, 23, 256)      838144      input_2[0][0]                    __________________________________________________________________________________________________lstm_1 (LSTM)                   [(None, 256), (None, 365568      embedding_1[0][0]                __________________________________________________________________________________________________lstm_2 (LSTM)                   [(None, 23, 256), (N 525312      embedding_2[0][0]                                                                                 lstm_1[0][1]                                                                                      lstm_1[0][2]                     __________________________________________________________________________________________________dropout_1 (Dropout)             (None, 23, 256)      0           lstm_2[0][0]                     __________________________________________________________________________________________________dense_1 (Dense)                 (None, 23, 3274)     841418      dropout_1[0][0]                  ==================================================================================================Total params: 2,869,542Trainable params: 2,869,542Non-trainable params: 0__________________________________________________________________________________________________None

Answer:

The training dataset has fewer than 3K examples, while the number of trainable parameters is roughly 3 million. The answer to your question is classic overfitting: the model is far too large for the data and simply memorizes the training subset instead of generalizing.

How to improve the current situation:

  • Try to generate or find more data;
  • Reduce the complexity of the model:
    • use pretrained embeddings (e.g. GloVe, fastText), as in the sketch below;
    • reduce the number of LSTM nodes;
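
To make those two sub-points concrete, here is a minimal sketch of how pretrained GloVe vectors could be plugged into the encoder as a frozen embedding layer, together with a smaller LSTM. The glove.6B.100d.txt path, the toy word_index, and the reduced LSTM_NODES value are illustrative assumptions, not part of the original code; in practice word_index would come from the tokenizer that produced encoder_input_sequences.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, LSTM

MAX_NUM_WORDS = 5000
EMBEDDING_SIZE = 100          # must match the dimensionality of the GloVe file
LSTM_NODES = 128              # reduced from 256 to shrink the model

# word_index is assumed to be the tokenizer's word index used to build
# encoder_input_sequences; a tiny placeholder is used here for illustration.
word_index = {'hello': 1, 'world': 2}

# Load pretrained GloVe vectors (the file path is an assumption).
embeddings_index = {}
with open('glove.6B.100d.txt', encoding='utf-8') as f:
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')

# Copy the pretrained vectors into an embedding matrix for this vocabulary.
num_words = min(MAX_NUM_WORDS, len(word_index) + 1)
embedding_matrix = np.zeros((num_words, EMBEDDING_SIZE))
for word, i in word_index.items():
    if i < num_words and word in embeddings_index:
        embedding_matrix[i] = embeddings_index[word]

# Frozen embedding layer: these weights are excluded from training,
# which removes a large share of the trainable parameters.
embedding_layer = Embedding(
    num_words,
    EMBEDDING_SIZE,
    embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
    trainable=False,
)

# Smaller encoder built on the frozen embeddings
# (max_input_len = 25 is taken from the model summary above).
max_input_len = 25
encoder_inputs = Input(shape=(max_input_len,))
x = embedding_layer(encoder_inputs)
encoder_outputs, h, c = LSTM(LSTM_NODES, return_state=True)(x)
encoder_states = [h, c]

Freezing the embedding takes its weights out of the trainable-parameter count, and halving LSTM_NODES shrinks both the LSTM layers and the final Dense layer, so the gap between the amount of data and the number of trainable parameters narrows considerably.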
