I am working with an imbalanced dataset created with the following code:
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, weights=[0.99], flip_y=0,
                           random_state=1)
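For reference, a quick sanity check (not part of the original snippet) confirms the roughly 99:1 class ratio implied by weights=[0.99]:

from collections import Counter

# With weights=[0.99] and flip_y=0, almost all samples belong to class 0
print(Counter(y))  # e.g. Counter({0: 9900, 1: 100})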
I try to remove the imbalance with SMOTE oversampling and then fit a machine learning model. I did this both the plain way and by building a pipeline.
Plain approach
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

# oversample the whole dataset first, then cross-validate on the resampled data
oversampled_data = SMOTE(sampling_strategy=0.5)
X_over, y_over = oversampled_data.fit_resample(X, y)
logistic = LogisticRegression(solver='liblinear')
scoring = ['accuracy', 'precision', 'recall', 'f1']
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluating the model
scores = cross_validate(logistic, X_over, y_over, scoring=scoring, cv=cv,
                        n_jobs=-1, return_train_score=True)
print('Accuracy: {:.2f}, Precision: {:.2f}, Recall: {:.2f}, F1: {:.2f}'.format(
    np.mean(scores['test_accuracy']), np.mean(scores['test_precision']),
    np.mean(scores['test_recall']), np.mean(scores['test_f1'])))
Output – Accuracy: 0.93, Precision: 0.92, Recall: 0.86, F1: 0.89
Pipeline
from imblearn.pipeline import make_pipeline, Pipeline

# wrap SMOTE and the model in a single pipeline
oversampled_data = SMOTE(sampling_strategy=0.5)
pipeline = Pipeline([('smote', oversampled_data), ('model', LogisticRegression())])
# pipeline = make_pipeline(oversampled_data, logistic)
scoring = ['accuracy', 'precision', 'recall', 'f1']
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluating the model
scores = cross_validate(pipeline, X, y, scoring=scoring, cv=cv,
                        n_jobs=-1, return_train_score=True)
print('Accuracy: {:.2f}, Precision: {:.2f}, Recall: {:.2f}, F1: {:.2f}'.format(
    np.mean(scores['test_accuracy']), np.mean(scores['test_precision']),
    np.mean(scores['test_recall']), np.mean(scores['test_f1'])))
Output – Accuracy: 0.96, Precision: 0.19, Recall: 0.84, F1: 0.31
What am I doing wrong when using the pipeline, and why are the precision and F1 score so much lower with it?
Answer:
In the first approach you create the synthetic samples before splitting into training and test sets; in the second approach you do so only after the split.
The former adds synthetic data points to the test set, while the latter does not. Moreover, the former inflates the scores through data leakage: the synthetic test samples are generated (in part) from data points that end up in the training set. See, for example,
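To make the difference concrete, here is a minimal sketch (not from the original post) of the leakage-free workflow that the imblearn pipeline reproduces inside every CV fold: split first, resample only the training portion, and score on untouched real data. A single train_test_split stands in for the repeated stratified CV used above; variable names follow the question.

from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# split first, so the test set contains only real samples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

# resample only the training portion; SMOTE never sees the test set
X_train_over, y_train_over = SMOTE(sampling_strategy=0.5).fit_resample(
    X_train, y_train)

model = LogisticRegression(solver='liblinear')
model.fit(X_train_over, y_train_over)

# evaluating on real, un-resampled data yields the lower (honest) scores
print(classification_report(y_test, model.predict(X_test)))

Because the test set here is still ~99% majority class, even a few false positives drag precision down sharply, which is why the pipeline's precision and F1 look so much worse than the inflated plain-approach numbers.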