When I run LGBM with early stopping, it reports the score at the best iteration.
When I try to reproduce that score myself, I get a different number.
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold

data = load_breast_cancer()
X = pd.DataFrame(data.data)
y = pd.Series(data.target)

lgb_params = {'boosting_type': 'dart', 'random_state': 42}

folds = KFold(5)
for train_idx, val_idx in folds.split(X):
    X_train, X_valid = X.iloc[train_idx], X.iloc[val_idx]
    y_train, y_valid = y.iloc[train_idx], y.iloc[val_idx]
    model = lgb.LGBMRegressor(**lgb_params, n_estimators=10000, n_jobs=-1)
    model.fit(X_train, y_train,
              eval_set=[(X_valid, y_valid)],
              eval_metric='mae',
              verbose=-1,
              early_stopping_rounds=200)
    y_pred_valid = model.predict(X_valid)
    print(mean_absolute_error(y_valid, y_pred_valid))
I expected the reported

valid_0's l1: 0.123608

to match what I compute with mean_absolute_error, but it does not. Here is the first part of my output:
Training until validation scores don't improve for 200 rounds.
Early stopping, best iteration is:
[631]	valid_0's l2: 0.0515033	valid_0's l1: 0.123608
0.16287265537021847
I am using lightgbm version '2.2.1'.
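For reference, the natural reproduction attempt is to predict with the best iteration explicitly. A minimal sketch, reusing the fold variables from the loop above (num_iteration and best_iteration_ are both part of lightgbm's sklearn API); with gbdt boosting this would normally recover the logged l1, but with dart it is not expected to, since later dart iterations rescale the weights of earlier trees:

# Sketch of the obvious reproduction attempt, run inside the CV loop above.
# With boosting_type='gbdt' this would normally match the logged valid_0's l1;
# with 'dart' it generally does not, because dart rescales earlier trees on
# later iterations, so truncating to best_iteration_ is not meaningful.
print(model.best_iteration_)  # 631 in the run shown above
y_pred_best = model.predict(X_valid, num_iteration=model.best_iteration_)
print(mean_absolute_error(y_valid, y_pred_best))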
Answer:
If you update your LGBM version, you will get

UserWarning: Early stopping is not available in dart mode

See this issue for more details. What you can do instead is retrain the model with the best number of boosting rounds:
# Locate the best boosting round from the eval results recorded during
# the long run, then retrain from scratch with exactly that many estimators.
results = model.evals_result_['valid_0']['l1']
best_perf = min(results)
num_boost = results.index(best_perf)
print('with boost', num_boost, 'perf', best_perf)

model = lgb.LGBMRegressor(**lgb_params, n_estimators=num_boost + 1, n_jobs=-1)
model.fit(X_train, y_train, verbose=-1)
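As a quick sanity check (a sketch, reusing X_valid and y_valid from the last fold of the question's loop), you can score the retrained model on the held-out fold; under the premise of this retraining approach, the printed MAE should now agree with best_perf:

# Score the retrained model on the same held-out fold; this should now
# line up with the best_perf value recovered from evals_result_ above.
y_pred_valid = model.predict(X_valid)
print(mean_absolute_error(y_valid, y_pred_valid))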