I've looked at similar questions, but I still don't understand why I get this result. Is it normal for a model to reach 99% accuracy during training, yet only 81% accuracy when predicting on the exact same data? Shouldn't it return 99%?
Furthermore, when I feed it new, unseen data, prediction accuracy drops to a miserable 17%. That can't be right. I understand that accuracy on new data should be lower than training accuracy, but surely not as low as 17%.
For context, here is the code. I've added comments for readability:
# Step 1) Split Data into Training and Prediction Sets
num_split_df_at = int(0.75 * len(df))
np_train_data = df.iloc[0:num_split_df_at, columns_index_list].to_numpy()
np_train_target = list(df.iloc[0:num_split_df_at, 4])
np_predict_data = df.iloc[num_split_df_at:len(df), columns_index_list].to_numpy()
np_predict_target = list(df.iloc[num_split_df_at:len(df), 4])

# Step 2) Split Training Data into Training and Validation Sets
x_train, x_test, y_train, y_test = train_test_split(np_train_data, np_train_target, random_state=0)

# Step 3) Reshape Training and Validation Sets to (49, 5)
# prints: "(3809, 245)"
print(x_train.shape)
# prints: "(1270, 245)"
print(x_test.shape)
x_train = x_train.reshape(x_train.shape[0], round(x_train.shape[1] / 5), 5)
x_test = x_test.reshape(x_test.shape[0], round(x_test.shape[1] / 5), 5)
y_train = np.array(y_train) - 1
y_test = np.array(y_test) - 1
# prints: "(3809, 49, 5)"
print(x_train.shape)
# prints: "[0 1 2 3 4 5 6 7 8 9]"
print(np.unique(y_train))
# prints: "10"
print(len(np.unique(y_train)))
input_shape = (x_train.shape[1], 5)

# Step 4) Run Model
adam = keras.optimizers.Adam(learning_rate=0.0001)
model = Sequential()
model.add(Conv1D(512, 5, activation='relu', input_shape=input_shape))
model.add(Conv1D(512, 5, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(512, 5, activation='relu'))
model.add(Conv1D(512, 5, activation='relu'))
model.add(GlobalAveragePooling1D())
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer=adam, metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=128, epochs=150, validation_data=(x_test, y_test))
print(model.summary())
model.save('model_1')

# Step 5) Predict on Exact Same Trained Data - Should Return High Accuracy
np_train_data = np_train_data.reshape(np_train_data.shape[0], round(np_train_data.shape[1] / 5), 5)
np_train_target = np.array(np_train_target) - 1
predict_results = model.predict_classes(np_train_data)
print(accuracy_score(predict_results, np_train_target))

# Step 6) Predict on Validation Set
np_predict_data = np_predict_data.reshape(np_predict_data.shape[0], round(np_predict_data.shape[1] / 5), 5)
np_predict_target = np.array(np_predict_target) - 1
predict_results = model.predict_classes(np_predict_data)
print(accuracy_score(predict_results, np_predict_target))
Here are the prediction results:
The possible classification outputs are:
[1 2 3 4 5 6 7 8 9 10], converted to [0 1 2 3 4 5 6 7 8 9] for "sparse_categorical_crossentropy".
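As a side note, that label shift is just a subtraction by one, which mirrors the `np.array(y_train) - 1` lines in the code above. A minimal sketch with made-up labels:

```python
import numpy as np

# Hypothetical labels in the original 1..10 range
labels = np.array([1, 2, 5, 10])

# sparse_categorical_crossentropy expects integer classes 0..n_class-1,
# so the labels are shifted down by one before training
shifted = labels - 1
print(shifted)  # [0 1 4 9]
```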
Answer:
This is because the training accuracy/loss reported by Keras is computed batch by batch and then averaged over the epoch (see here). The validation metrics, in contrast, are computed on all of the passed data at once.
This can be verified with the following simple example. We train a neural network and pass the same training data as the validation data. This lets us compare (a) the training accuracy, (b) the validation accuracy, and (c) the accuracy score computed at the end of training. As we can see, (b) equals (c), but (a) differs from both (b) and (c), for the reason given above.
timestamp, features, n_sample = 45, 2, 1000
n_class = 10

X = np.random.uniform(0, 1, (n_sample, timestamp, features))
y = np.random.randint(0, n_class, n_sample)

model = Sequential()
model.add(Conv1D(8, 3, activation='relu', input_shape=(timestamp, features)))
model.add(MaxPooling1D(3))
model.add(Conv1D(8, 3, activation='relu'))
model.add(GlobalAveragePooling1D())
model.add(Dropout(0.5))
model.add(Dense(n_class, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

history = model.fit(X, y, batch_size=128, epochs=5, validation_data=(X, y))

history.history['accuracy'][-1]  # (a)
history.history['val_accuracy'][-1]  # (b)
accuracy_score(y, np.argmax(model.predict(X), axis=1))  # (c)
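The batch-averaging effect itself can also be sketched with plain NumPy, without training anything. The per-batch accuracies below are hypothetical numbers, chosen only to show that the running mean Keras logs during an epoch differs from the accuracy of the final weights:

```python
import numpy as np

# Hypothetical per-batch accuracies for one epoch: the model is still
# improving while it trains, so early batches score lower than late ones.
# (These numbers are made up for illustration, not taken from the model above.)
per_batch_accuracy = np.linspace(0.50, 0.99, 30)

# What Keras logs as the epoch's training 'accuracy': the mean over batches.
reported_train_acc = per_batch_accuracy.mean()  # ~0.745

# What evaluate()/predict() sees: the accuracy of the final weights only.
end_of_epoch_acc = per_batch_accuracy[-1]  # 0.99

print(reported_train_acc, end_of_epoch_acc)
```

Because the weights change between batches, the logged training metric mixes earlier versions of the model, while validation and `predict` use only the final one.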