I am trying to fit a CatBoostRegressor. When I run K-fold cross-validation for the baseline model, everything works fine. But when I do hyperparameter tuning with Optuna, something very strange happens: it runs the first trial and then throws the following error:
```
[I 2021-08-26 08:00:56,865] Trial 0 finished with value: 0.7219653113910736 and parameters: {'model__depth': 2, 'model__iterations': 1715, 'model__subsample': 0.5627211605250965, 'model__learning_rate': 0.15601805222619286}. Best is trial 0 with value: 0.7219653113910736.
[W 2021-08-26 08:00:56,869] Trial 1 failed because of the following error: CatBoostError("You can't change params of fitted model.")
Traceback (most recent call last):
```
I used the same approach with XGBRegressor and LGBM and both worked fine, so why do I get this error with CatBoost?
Here is my code:
```python
import numpy as np
import optuna
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.metrics import mean_squared_error
from catboost import CatBoostRegressor

# Split columns by dtype
cat_cols = [cname for cname in train_data1.columns if train_data1[cname].dtype == 'object']
num_cols = [cname for cname in train_data1.columns if train_data1[cname].dtype in ['int64', 'float64']]

# Preprocessing for numeric and categorical features
num_trans = Pipeline(steps=[('impute', SimpleImputer(strategy='mean')),
                            ('scale', StandardScaler())])
cat_trans = Pipeline(steps=[('impute', SimpleImputer(strategy='most_frequent')),
                            ('encode', OneHotEncoder(handle_unknown='ignore'))])
preproc = ColumnTransformer(transformers=[('cat', cat_trans, cat_cols),
                                          ('num', num_trans, num_cols)])

# One CatBoostRegressor instance, wrapped in the pipeline
cbr_model = CatBoostRegressor(random_state=69, loss_function='RMSE', eval_metric='RMSE',
                              leaf_estimation_method='Newton', bootstrap_type='Bernoulli',
                              task_type='GPU')
pipe = Pipeline(steps=[('preproc', preproc), ('model', cbr_model)])

def objective(trial):
    model__depth = trial.suggest_int('model__depth', 2, 10)
    model__iterations = trial.suggest_int('model__iterations', 100, 2000)
    model__subsample = trial.suggest_float('model__subsample', 0.0, 1.0)
    model__learning_rate = trial.suggest_float('model__learning_rate', 0.001, 0.3, log=True)
    params = {'model__depth': model__depth,
              'model__iterations': model__iterations,
              'model__subsample': model__subsample,
              'model__learning_rate': model__learning_rate}
    pipe.set_params(**params)
    pipe.fit(train_x, train_y)
    pred = pipe.predict(test_x)
    return np.sqrt(mean_squared_error(test_y, pred))

cbr_study = optuna.create_study(direction='minimize')
cbr_study.optimize(objective, n_trials=10)
```
Answer:
It turns out CatBoost is designed so that you have to create a new CatBoost model object for every trial. I opened an issue about this on GitHub, and they said it is meant to protect the results of long training runs, which makes no sense to me!
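For reference, the behavior is easy to reproduce outside a pipeline. This is only a sketch on random dummy data; depending on the CatBoost version, the error may be raised by `set_params` itself or by the `fit` call that follows it, but either way you hit the same `CatBoostError` shown in the log above:

```python
import numpy as np
from catboost import CatBoostRegressor, CatBoostError

X = np.random.rand(100, 5)
y = np.random.rand(100)

model = CatBoostRegressor(iterations=10, verbose=0)
model.fit(X, y)                  # the first fit works fine

try:
    model.set_params(depth=4)    # changing params of an already-fitted model...
    model.fit(X, y)
except CatBoostError as err:
    print(err)                   # ...triggers "You can't change params of fitted model."
```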
For now, the only way around this is to create a new CatBoost model for every trial.
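As a minimal sketch of what that looks like without a Pipeline (assuming `train_x`, `train_y`, `test_x`, `test_y` from the question are already numeric, and using illustrative parameter ranges), the model is constructed from scratch on every call to the objective:

```python
import numpy as np
from catboost import CatBoostRegressor
from sklearn.metrics import mean_squared_error

def objective(trial):
    # A brand-new CatBoostRegressor per trial, so no trial ever touches a fitted model
    model = CatBoostRegressor(
        random_state=69,
        loss_function='RMSE',
        bootstrap_type='Bernoulli',
        depth=trial.suggest_int('depth', 2, 10),
        iterations=trial.suggest_int('iterations', 100, 2000),
        subsample=trial.suggest_float('subsample', 0.1, 1.0),
        learning_rate=trial.suggest_float('learning_rate', 0.001, 0.3, log=True),
        verbose=0,
    )
    model.fit(train_x, train_y)
    return np.sqrt(mean_squared_error(test_y, model.predict(test_x)))
```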
A cleaner way to do that, if you are using a Pipeline together with Optuna, is to define the pipeline instance and the model instance inside the Optuna objective function, and then define the final pipeline instance once more outside the function after the study finishes. That way you don't have to write out 50 instances by hand when you run 50 trials! A rough sketch of this pattern is below.
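This sketch reuses `preproc` and the train/test split from the question; the `make_pipe` helper is just something I made up to avoid repeating the constructor. The point is that both the pipeline and the CatBoostRegressor inside it are brand-new objects on every trial, and the final pipeline is built once more outside the objective from `study.best_params`:

```python
import numpy as np
import optuna
from catboost import CatBoostRegressor
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_squared_error

def make_pipe(depth, iterations, subsample, learning_rate):
    # A brand-new, unfitted CatBoostRegressor every time this is called
    model = CatBoostRegressor(
        random_state=69,
        loss_function='RMSE',
        eval_metric='RMSE',
        leaf_estimation_method='Newton',
        bootstrap_type='Bernoulli',
        task_type='GPU',
        depth=depth,
        iterations=iterations,
        subsample=subsample,
        learning_rate=learning_rate,
        verbose=0,
    )
    return Pipeline(steps=[('preproc', preproc), ('model', model)])

def objective(trial):
    pipe = make_pipe(
        depth=trial.suggest_int('depth', 2, 10),
        iterations=trial.suggest_int('iterations', 100, 2000),
        subsample=trial.suggest_float('subsample', 0.1, 1.0),
        learning_rate=trial.suggest_float('learning_rate', 0.001, 0.3, log=True),
    )
    pipe.fit(train_x, train_y)
    pred = pipe.predict(test_x)
    return np.sqrt(mean_squared_error(test_y, pred))

cbr_study = optuna.create_study(direction='minimize')
cbr_study.optimize(objective, n_trials=10)

# Rebuild the pipeline one last time outside the objective, with the best parameters
final_pipe = make_pipe(**cbr_study.best_params)
final_pipe.fit(train_x, train_y)
```

Because Optuna stores each suggested value under the name passed to `suggest_*`, `cbr_study.best_params` can be unpacked straight back into the same helper for the final fit.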