I am trying to use scikit-learn's logistic regression to predict a set of labels. My data is highly imbalanced (there are far more '0' labels than '1' labels), so during the cross-validation step I use the F1 score as the metric to "balance" the result.
[Input]
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score

X_training, y_training, X_test, y_test = generate_datasets(df_X, df_y, 0.6)
logistic = LogisticRegressionCV(Cs=50, cv=4, penalty='l2', fit_intercept=True, scoring='f1')
logistic.fit(X_training, y_training)
print('Predicted: %s' % str(logistic.predict(X_test)))
print('F1-score: %f' % f1_score(y_test, logistic.predict(X_test)))
print('Accuracy score: %f' % logistic.score(X_test, y_test))
[Output]
>> Predicted: [0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0]
>> Actual: [0 0 0 1 0 0 0 0 0 1 1 0 0 1 0 0 0 0 0 0 0 1 1]
>> F1-score: 0.285714
>> Accuracy score: 0.782609
>> C:\Anaconda3\lib\site-packages\sklearn\metrics\classification.py:958: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples.
I know, of course, that part of the problem lies with my dataset: it is too small (it is only a sample of the real one). Still, can someone explain what the "UndefinedMetricWarning" I am seeing actually means? What is happening behind the scenes?
Answer:
This appears to be a known bug that has since been fixed here; I would suggest updating sklearn.