I have four categorical features and a fifth numerical feature (Var5). When I try the following code:
cat_attribs = ['var1','var2','var3','var4']
full_pipeline = ColumnTransformer([('cat', OneHotEncoder(handle_unknown = 'ignore'), cat_attribs)], remainder = 'passthrough')
X_train = full_pipeline.fit_transform(X_train)

model = XGBRegressor(n_estimators=10, max_depth=20, verbosity=2)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
When the model tries to make predictions, I get the following error:
ValueError: DataFrame.dtypes for data must be int, float, bool or categorical. When categorical type is supplied, DMatrix parameter `enable_categorical` must be set to `True`. Var1, Var2, Var3, Var4
Does anyone know what is going wrong here?
In case it helps, here is a small sample of the X_train data and the y_train data:
         Var1  Var2  Var3 Var4        Var5
1507856    JP  2009  6581  OME  325.787218
839624     FR  2018  5783  I_S   11.956326
1395729    BE  2015  6719  OME   42.888565
1971169    DK  2011  3506  RPP   70.094146
1140120    AT  2019  5474  NMM  270.082738
and:
           Ind_Var
1507856   8.013558
839624    4.105559
1395729   7.830077
1971169  83.000000
1140120  51.710526
Answer:
The problem with your code is that you one-hot encode the categorical features in X_train but never encode X_test, so model.predict(X_test) fails because XGBoost receives the raw string columns. To fix this, fit the encoder on X_train and then use that fitted encoder to transform both X_train and X_test. See the code example below.
import pandas as pd
from xgboost import XGBRegressor
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

# define the input data
df = pd.DataFrame([
    {'Var1': 'JP', 'Var2': 2009, 'Var3': 6581, 'Var4': 'OME', 'Var5': 325.787218, 'Ind_Var': 8.013558},
    {'Var1': 'FR', 'Var2': 2018, 'Var3': 5783, 'Var4': 'I_S', 'Var5': 11.956326, 'Ind_Var': 4.105559},
    {'Var1': 'BE', 'Var2': 2015, 'Var3': 6719, 'Var4': 'OME', 'Var5': 42.888565, 'Ind_Var': 7.830077},
    {'Var1': 'DK', 'Var2': 2011, 'Var3': 3506, 'Var4': 'RPP', 'Var5': 70.094146, 'Ind_Var': 83.000000},
    {'Var1': 'AT', 'Var2': 2019, 'Var3': 5474, 'Var4': 'NMM', 'Var5': 270.082738, 'Ind_Var': 51.710526}
])

# extract the features and target
X_train, y_train = df.iloc[:3, :-1], df.iloc[:3, -1]
X_test, y_test = df.iloc[3:, :-1], df.iloc[3:, -1]

# one-hot encode the categorical features
cat_attribs = ['Var1', 'Var2', 'Var3', 'Var4']
full_pipeline = ColumnTransformer([('cat', OneHotEncoder(handle_unknown='ignore'), cat_attribs)], remainder='passthrough')
encoder = full_pipeline.fit(X_train)
X_train = encoder.transform(X_train)
X_test = encoder.transform(X_test)

# train the model
model = XGBRegressor(n_estimators=10, max_depth=20, verbosity=2)
model.fit(X_train, y_train)

# extract the training set predictions
model.predict(X_train)
# array([7.0887003, 3.7923286, 7.0887003], dtype=float32)

# extract the test set predictions
model.predict(X_test)
# array([7.0887003, 7.0887003], dtype=float32)
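As a side note, one way to make this kind of mistake harder in the future is to chain the ColumnTransformer and the XGBRegressor in a single sklearn Pipeline: a single fit call then learns the encoding and the model from the training data, and predict applies the same fitted encoder to the test data automatically. Below is a minimal sketch, assuming the raw (untransformed) X_train and X_test DataFrames from above; the step names 'prep' and 'model' are arbitrary labels chosen here.

from sklearn.pipeline import Pipeline

pipe = Pipeline([
    # one-hot encode the four categorical columns, pass Var5 through unchanged
    ('prep', ColumnTransformer(
        [('cat', OneHotEncoder(handle_unknown='ignore'), ['Var1', 'Var2', 'Var3', 'Var4'])],
        remainder='passthrough')),
    # same regressor settings as in the answer above
    ('model', XGBRegressor(n_estimators=10, max_depth=20, verbosity=2)),
])

pipe.fit(X_train, y_train)     # fits the encoder and the model on the training data only
y_pred = pipe.predict(X_test)  # encodes X_test with the fitted encoder, then predicts

Note that handle_unknown='ignore' matters either way: any category value in X_test that was not seen when the encoder was fitted is encoded as all zeros instead of raising an error.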