Here is a reproducible example:
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

# define dataset
X, y = make_classification(n_samples=1000, weights=[0.3, 0.7], n_features=100, n_informative=75, random_state=0)
# define model
model = RandomForestClassifier(n_estimators=10, random_state=0)
# evaluate model
n_splits = 10
cv = StratifiedShuffleSplit(n_splits, random_state=0)
n_scores = cross_validate(model, X, y, scoring='balanced_accuracy', cv=cv, n_jobs=-1, error_score='raise')
# report performance
print('Accuracy: %0.4f' % (mean(n_scores['test_score'])))

bal_acc_sum = []
for train_index, test_index in cv.split(X, y):
    model.fit(X[train_index], y[train_index])
    bal_acc_sum.append(balanced_accuracy_score(model.predict(X[test_index]), y[test_index]))
print("Accuracy: %0.4f" % (mean(bal_acc_sum)))
Result:
Accuracy: 0.6737
Accuracy: 0.7113
My own accuracy calculation is consistently higher than the value reported by cross-validation. Shouldn't they be identical, or am I missing something? Same metric, same splits (KFold gives the same outcome), same fixed model (other models behave the same way), same random state, yet the results differ?
Answer:
This happens because, in your manual calculation, you reversed the order of the arguments to balanced_accuracy_score, and that matters: the expected order is (y_true, y_pred) (see the docs).
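The order matters because balanced accuracy is the macro-average of per-class recall, which is not symmetric in its arguments. A minimal sketch with made-up toy labels (not taken from your dataset) illustrates the difference:

from sklearn.metrics import balanced_accuracy_score

# toy labels, chosen only for illustration
y_true = [0, 0, 1, 1, 1, 1]
y_pred = [0, 1, 1, 1, 1, 1]

# correct order: average recall over the true classes -> (1/2 + 4/4) / 2 = 0.75
print(balanced_accuracy_score(y_true, y_pred))

# swapped order: the predictions are treated as ground truth -> (1/1 + 4/5) / 2 = 0.9
print(balanced_accuracy_score(y_pred, y_true))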
After changing this, your manual calculation becomes:
bal_acc_sum = []
for train_index, test_index in cv.split(X, y):
    model.fit(X[train_index], y[train_index])
    # argument order changed here: (y_true, y_pred)
    bal_acc_sum.append(balanced_accuracy_score(y[test_index], model.predict(X[test_index])))
print("Accuracy: %0.4f" % (mean(bal_acc_sum)))
Result:
Accuracy: 0.6737
and
import numpy as np
np.all(bal_acc_sum == n_scores['test_score'])
# True
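The element-wise comparison works here because a StratifiedShuffleSplit constructed with a fixed random_state produces the same splits every time it is iterated, so the manual loop scores exactly the folds that cross_validate used. A quick sanity check, as a sketch reusing the cv and data from above:

# with random_state=0, repeated calls to split() reproduce identical folds
test_folds_1 = [test for _, test in cv.split(X, y)]
test_folds_2 = [test for _, test in cv.split(X, y)]
print(all(np.array_equal(a, b) for a, b in zip(test_folds_1, test_folds_2)))
# True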