from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_recall_fscore_support as score
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier
import time

params = {
    'min_child_weight': [1, 5, 10],
    'gamma': [0.5, 1, 1.5, 2, 5],
    'subsample': [0.6, 0.8, 1.0],
    'colsample_bytree': [0.6, 0.8, 1.0],
    'max_depth': [3, 4, 5]
}

xgb = XGBClassifier(learning_rate=0.02, n_estimators=600, silent=True, nthread=1)

folds = 5
param_comb = 5

skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=1001)

random_search = RandomizedSearchCV(xgb, param_distributions=params, n_iter=param_comb,
                                   scoring=['f1_macro', 'precision_macro'], n_jobs=4,
                                   cv=skf.split(X_train, y_train), verbose=3, random_state=1001)

start_time = time.clock()  # timing starts from this point for "start_time" variable
random_search.fit(X_train, y_train)
elapsed = (time.clock() - start_time)  # timing ends here for "start_time" variable
My code is shown above. My y_train is a multiclass pandas Series containing integers from 0 to 9.
y_train.head()
1041    8
1177    7
2966    0
1690    2
2115    1
Name: Industry, dtype: object
When I run the setup code above, I get the following error:
ValueError: Supported target types are: ('binary', 'multiclass'). Got 'unknown' instead.
I searched for similar questions, tried cross_validate from sklearn.model_selection, and also tried other multiclass-compatible metrics, but I still get the same error message.
Is there a way to search over the parameters with stratified cross-validation based on a performance metric?
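(For reference, the 'unknown' target type here comes from the object dtype of y_train shown above. A minimal sketch of the kind of cast that resolves it, assuming the labels really are clean integers:)

# the labels are stored with dtype object, so sklearn reports the target type
# as 'unknown'; casting to a numeric dtype makes it a proper multiclass target
y_train = y_train.astype(int)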
Update: after fixing the dtype issue, I want to pass multiple metrics into scoring=. I tried it this way after reading the documentation (http://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter):
random_search = RandomizedSearchCV(xgb, param_distributions=params, n_iter=param_comb,
                                   scoring=['f1_macro', 'precision_macro'], n_jobs=4,
                                   cv=skf.split(X_train, y_train), verbose=3, random_state=1001)
This failed with the following error:
ValueError                                Traceback (most recent call last)
<ipython-input-67-dd57cd97c89c> in <module>()
     36 # Here we go
     37 start_time = time.clock() # timing starts from this point for "start_time" variable
---> 38 random_search.fit(X_train, y_train)
     39 elapsed = (time.clock() - start) # timing ends here for "start_time" variable

/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_search.py in fit(self, X, y, groups, **fit_params)
    609                         "available for that metric. If this is not "
    610                         "needed, refit should be set to False "
--> 611                         "explicitly. %r was passed." % self.refit)
    612                 else:
    613                     refit_metric = self.refit

ValueError: For multi-metric scoring, the parameter refit must be set to a scorer key to refit an estimator with the best parameter setting on the whole data and make the best_* attributes available for that metric. If this is not needed, refit should be set to False explicitly. True was passed.
How can I solve this?
Answer:
As stated in the user guide:
When specifying multiple metrics, the refit parameter must be set to the metric (string) for which the best_params_ will be found and used to build the best_estimator_ on the whole dataset. If the search should not be refit, set refit=False. Leaving refit to the default value None will result in an error when using multiple metrics.
Since you are using multiple metrics here:
random_search = RandomizedSearchCV(xgb, param_distributions=params, n_iter=param_comb,
                                   scoring=['f1_macro', 'precision_macro'], n_jobs=4,
                                   cv=skf.split(X_train, y_train), verbose=3, random_state=1001)
RandomizedSearchCV does not know how to pick the best parameters: it cannot choose a single best score out of two different scoring strategies. So you need to tell it which of the scorers it should use to select the best parameter setting.
To do that, set the refit parameter to one of the options you used in scoring, like this:
random_search = RandomizedSearchCV(xgb, param_distributions=params,
                                   ...
                                   scoring=['f1_macro', 'precision_macro'],
                                   ...
                                   refit='f1_macro')
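After fitting, best_params_, best_score_ and best_estimator_ refer to the refit metric ('f1_macro' here), while cv_results_ still holds the cross-validated scores for every scorer in the list under keys such as mean_test_f1_macro and mean_test_precision_macro. A minimal sketch of inspecting both metrics, reusing the variable names from the question:

random_search.fit(X_train, y_train)

# best_* attributes are computed with respect to the refit scorer ('f1_macro')
print(random_search.best_params_)
print(random_search.best_score_)

# cv_results_ contains the mean test scores for every requested scorer
print(random_search.cv_results_['mean_test_f1_macro'])
print(random_search.cv_results_['mean_test_precision_macro'])

Alternatively, scoring can be given as a dict mapping your own names to scorers, in which case refit must be set to one of those dict keys.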