I have a script that is supposed to train several different models with cross-validation and then compute their mean accuracy, so that I can pick the best model for a classification task. However, I get exactly the same results for every classifier.
The output looks like this:
---Filename in processed................ corpusAmazon_train
etiquette : [0 1]
Embeddings bert model used.................... : sm

Model name: Model_LSVC_ovr
------------cross val predict used----------------
accuracy with cross_val_predict : 0.6582974014576258
corpusAmazon_train file terminated---
---------------cross val score used -----------------------
[0.66348722 0.66234262 0.63334605 0.66959176 0.66081648 0.6463182
 0.66730256 0.65572519 0.65648855 0.66755725]
0.66 accuracy with a standard deviation of 0.01

Model name: Model_G_NB
------------cross val predict used----------------
accuracy with cross_val_predict : 0.6582974014576258
corpusAmazon_train file terminated---
---------------cross val score used -----------------------
[0.66348722 0.66234262 0.63334605 0.66959176 0.66081648 0.6463182
 0.66730256 0.65572519 0.65648855 0.66755725]
0.66 accuracy with a standard deviation of 0.01

Model name: Model_LR
------------cross val predict used----------------
accuracy with cross_val_predict : 0.6582974014576258
corpusAmazon_train file terminated---
---------------cross val score used -----------------------
[0.66348722 0.66234262 0.63334605 0.66959176 0.66081648 0.6463182
 0.66730256 0.65572519 0.65648855 0.66755725]
0.66 accuracy with a standard deviation of 0.01
The cross-validation code:
models_list = {'Model_LSVC_ovr': model1, 'Model_G_NB': model2, 'Model_LR': model3,
               'Model_RF': model4, 'Model_KN': model5, 'Model_MLP': model6,
               'Model_LDA': model7, 'Model_XGB': model8}

# cross_validation
def cross_validation(features, ylabels, models_list, n, lge_model):
    cv_splitter = KFold(n_splits=10, shuffle=True, random_state=42)
    features, s = get_flaubert_layer(features, lge_model)
    for model_name, model in models_list.items():
        print("Model name: {}".format(model_name))
        print("------------cross val predict used----------------", "\n")
        y_pred = cross_val_predict(model, features, ylabels, cv=cv_splitter, verbose=1)
        accuracy_score_predict = accuracy_score(ylabels, y_pred)
        print("accuracy with cross_val_predict :", accuracy_score_predict)
        print("---------------cross val score used -----------------------", "\n")
        scores = cross_val_score(model, features, ylabels, scoring='accuracy', cv=cv_splitter)
        print("%0.2f accuracy with a standard deviation of %0.2f" % (scores.mean(), scores.std()), "\n")
Even with cross_val_score, the accuracy is identical for every model. Do you have any idea why? Could it be because of the random_state I use in the cross-validation function?
The code defining the models:
def classifiers_b():
    model1 = LinearSVC()
    model2 = GaussianNB()  # MultinomialNB() X cannot be a non-negative
    model3 = LogisticRegression()
    model4 = RandomForestClassifier()
    model5 = KNeighborsClassifier()
    model6 = MLPClassifier(hidden_layer_sizes=(50, 100, 50), max_iter=500,
                           activation='relu', solver='adam', random_state=1)
    model8 = XGBClassifier(eval_metric="logloss")
    model7 = LinearDiscriminantAnalysis()
    #models_list = {'Model_LSVC_ovr': model1, 'Model_G_NB': model2, 'Model_LR': model3, 'Model_RF': model4, 'Model_KN': model5, 'Model_MLP': model6, 'Model_LDA': model7, 'Model_XGB': model8}
Answer:
I would suggest wrapping each model in its own pipeline: it looks like you end up running CV on the same model in every iteration. You can check the documentation here on how pipelines work, and then run CV on each model's pipeline.
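A minimal sketch of that idea, using a toy dataset from make_classification in place of the question's FlauBERT embeddings, and a StandardScaler step as a placeholder preprocessing stage (these substitutions are assumptions for illustration; a subset of three of the question's models is shown):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Toy data standing in for the embedded corpus (assumption, not the real features)
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

models_list = {
    'Model_LSVC_ovr': LinearSVC(),
    'Model_G_NB': GaussianNB(),
    'Model_LR': LogisticRegression(max_iter=1000),
}

# Same splitter for all models, so scores are comparable across estimators
cv_splitter = KFold(n_splits=10, shuffle=True, random_state=42)

for model_name, model in models_list.items():
    # A fresh pipeline per model: preprocessing + the estimator itself
    pipe = Pipeline([('scaler', StandardScaler()), ('clf', model)])
    scores = cross_val_score(pipe, X, y, scoring='accuracy', cv=cv_splitter)
    print("%s: %0.2f accuracy with a standard deviation of %0.2f"
          % (model_name, scores.mean(), scores.std()))
```

Keeping the splitter fixed while the pipeline changes per iteration is the point: identical folds are fine (and desirable for comparison), but each iteration must cross-validate a genuinely different estimator.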