I want to plot learning curves for a trained SVM classifier, using different scoring metrics and Leave-One-Group-Out cross-validation. I thought I had it figured out, but two different scorers – 'f1_micro' and 'accuracy' – produce identical values. I'm confused; is this expected?
Here is my code (unfortunately I cannot share the data, as it is not public):
import numpy as np
import pandas as pd
from sklearn import preprocessing, svm
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import LeaveOneGroupOut, validation_curve

SVC_classifier_LOWO_VC0 = svm.SVC(cache_size=800, class_weight=None, coef0=0.0,
                                  decision_function_shape=None, degree=3,
                                  gamma=0.01, kernel='rbf', max_iter=-1,
                                  probability=False, random_state=1,
                                  shrinking=True, tol=0.001, verbose=False)

training_data = pd.read_csv('training_data.csv')
X = training_data.drop(['Groups', 'Targets'], axis=1).values
scaler = preprocessing.StandardScaler().fit(X)
X = scaler.transform(X)
y = training_data['Targets'].values
groups = training_data["Groups"].values

Fscorer = make_scorer(f1_score, average='micro')
logo = LeaveOneGroupOut()

parm_range0 = np.logspace(-2, 6, 9)
train_scores0, test_scores0 = validation_curve(
    SVC_classifier_LOWO_VC0, X, y, "C", parm_range0,
    cv=logo.split(X, y, groups=groups), scoring=Fscorer)
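(Since the real CSV cannot be shared, here is a self-contained sketch of the same pipeline on synthetic data; the sample count, feature count, and group layout are invented purely for illustration:)

```python
import numpy as np
from sklearn import preprocessing, svm
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import LeaveOneGroupOut, validation_curve

rng = np.random.RandomState(1)
X = rng.randn(120, 5)                  # 120 samples, 5 features (made up)
y = rng.randint(0, 3, 120)             # 3 classes
groups = np.repeat(np.arange(6), 20)   # 6 groups of 20 samples each

X = preprocessing.StandardScaler().fit_transform(X)
clf = svm.SVC(kernel='rbf', gamma=0.01, random_state=1)
fscorer = make_scorer(f1_score, average='micro')

param_range = np.logspace(-2, 6, 9)
# One row per C value, one column per left-out group
train_scores, test_scores = validation_curve(
    clf, X, y, param_name="C", param_range=param_range,
    cv=LeaveOneGroupOut().split(X, y, groups=groups),
    scoring=fscorer)
print(test_scores.mean(axis=1))
```

With 6 groups and 9 values of C, `train_scores` and `test_scores` each come back as a 9×6 array, which is then averaged across the group folds.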
Now, from the following code:
train_scores_mean0 = np.mean(train_scores0, axis=1)
train_scores_std0 = np.std(train_scores0, axis=1)
test_scores_mean0 = np.mean(test_scores0, axis=1)
test_scores_std0 = np.std(test_scores0, axis=1)
print(test_scores_mean0)
print(np.amax(test_scores_mean0))
print(np.logspace(-2, 6, 9)[test_scores_mean0.argmax(axis=0)])
I get:
[ 0.20257407  0.35551122  0.40791047  0.49887676  0.5021742
  0.50030438  0.49426622  0.48066419  0.4868987 ]
0.502174200206
100.0
If I create a new classifier with the same parameters and run everything exactly as before, changing only the scoring metric, e.g.:
parm_range1 = np.logspace(-2, 6, 9)
train_scores1, test_scores1 = validation_curve(
    SVC_classifier_LOWO_VC1, X, y, "C", parm_range1,
    cv=logo.split(X, y, groups=groups), scoring='accuracy')

train_scores_mean1 = np.mean(train_scores1, axis=1)
train_scores_std1 = np.std(train_scores1, axis=1)
test_scores_mean1 = np.mean(test_scores1, axis=1)
test_scores_std1 = np.std(test_scores1, axis=1)
print(test_scores_mean1)
print(np.amax(test_scores_mean1))
print(np.logspace(-2, 6, 9)[test_scores_mean1.argmax(axis=0)])
I get exactly the same results:
[ 0.20257407  0.35551122  0.40791047  0.49887676  0.5021742
  0.50030438  0.49426622  0.48066419  0.4868987 ]
0.502174200206
100.0
How is this possible? Am I doing something wrong, or am I missing something?
Thanks
Answer:
F1 = accuracy if and only if TP = TN, i.e. the number of true positives equals the number of true negatives, which can happen when your classes are perfectly balanced. So either that is the case, or there is a bug in your code. For example, where did you initialize your scorer – like this: scorer = make_scorer(accuracy_score, average='micro')?
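One more thing worth checking if the target has more than two classes: for single-label multiclass predictions, every misclassified sample counts once as a false positive (for the predicted class) and once as a false negative (for the true class), so micro-averaged precision, recall, and F1 all reduce to plain accuracy, regardless of class balance. A quick sketch with synthetic labels:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.RandomState(0)
y_true = rng.randint(0, 3, 50)   # 3-class, single-label targets
y_pred = rng.randint(0, 3, 50)   # arbitrary predictions

acc = accuracy_score(y_true, y_pred)
f1m = f1_score(y_true, y_pred, average='micro')
print(acc, f1m)   # identical values
```

So if your problem is multiclass, identical 'f1_micro' and 'accuracy' curves are expected behavior, not a bug.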