I need your help!
When I try to fit my Pipeline, I get the ValueError below.
ValueError: blocks[0,:] has incompatible row dimensions. Got blocks[0,2].shape[0] == 1, expected 13892.
My task is to build a model that combines a nursing home's business characteristics with its cycle 1 survey results and the time between the cycle 1 and cycle 2 surveys, in order to predict the cycle 2 total score.
Here is the code I am using for that task.
# A custom transformer to compute the time difference between survey 1 and survey 2
class TimedeltaTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, t1_col, t2_col):
        self.t1_col = t1_col
        self.t2_col = t2_col

    def fit(self, X, y=None):
        self.col_1 = X[self.t1_col].apply(pd.to_datetime)
        self.col_2 = X[self.t2_col].apply(pd.to_datetime)
        return self

    def transform(self, X):
        difference = self.col_1 - self.col_2
        return difference.values

# Create the TimedeltaTransformer object
cycle_1_date = 'CYCLE_1_SURVEY_DATE'
cycle_2_date = 'CYCLE_2_SURVEY_DATE'
time_feature = TimedeltaTransformer(cycle_1_date, cycle_2_date)

# Use the custom column-selection transformer to extract the cycle 1 survey features
cycle_1_cols = ['CYCLE_1_DEFS', 'CYCLE_1_NFROMDEFS', 'CYCLE_1_NFROMCOMP',
                'CYCLE_1_DEFS_SCORE', 'CYCLE_1_NUMREVIS',
                'CYCLE_1_REVISIT_SCORE', 'CYCLE_1_TOTAL_SCORE']
cycle_1_features = Pipeline([
    ('cst2', ColumnSelectTransformer(cycle_1_cols)),
])

# Create my survey_model Pipeline object.
# The Pipeline is a two-step process: first a FeatureUnion transforms and combines
# the business features, the cycle 1 survey features and the time feature;
# then the transformed features are fitted to a RandomForestRegressor.
survey_model = Pipeline([
    ('features', FeatureUnion([
        ('business', business_features),
        ('survey', cycle_1_features),
        ('time', time_feature),
    ])),
    ('forest', RandomForestRegressor()),
])

# Trying to fit my Pipeline raises the ValueError described above
survey_model.fit(data, cycle_2_score.astype(int))
Some additional background: I am building this model so that its predict_proba method can be passed to a custom scorer in the project. The scorer passes a list of dictionaries, not a DataFrame, to my estimator's predict or predict_proba method, so the model must be able to handle both data types. That is why I need a custom ColumnSelectTransformer in place of scikit-learn's own ColumnTransformer.
Below is the additional code for the business features and the ColumnSelectTransformer; a small check of the dual-input behaviour follows the code.
# Custom transformer that selects columns from a DataFrame and returns an array
class ColumnSelectTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, columns):
        self.columns = columns

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        if not isinstance(X, pd.DataFrame):
            X = pd.DataFrame(X)
        return X[self.columns].values

simple_features = Pipeline([
    ('cst', ColumnSelectTransformer(simple_cols)),
    ('imputer', SimpleImputer(strategy='mean')),
])

owner_onehot = Pipeline([
    ('cst', ColumnSelectTransformer(['OWNERSHIP'])),
    ('imputer', SimpleImputer(strategy='most_frequent')),
    ('encoder', OneHotEncoder()),
])

cert_onehot = Pipeline([
    ('cst', ColumnSelectTransformer(['CERTIFICATION'])),
    ('imputer', SimpleImputer(strategy='most_frequent')),
    ('encoder', OneHotEncoder()),
])

categorical_features = FeatureUnion([
    ('owner_onehot', owner_onehot),
    ('cert_onehot', cert_onehot),
])

business_features = FeatureUnion([
    ('simple', simple_features),
    ('categorical', categorical_features),
])
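To illustrate the dual-input requirement, here is a quick check of the behaviour I am relying on (the column names and values are made up, not from the real dataset):

import pandas as pd

# Made-up toy data, only to show the two input types the scorer can pass
df_input = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6]})
dict_input = [{'A': 1, 'B': 3, 'C': 5}, {'A': 2, 'B': 4, 'C': 6}]

cst = ColumnSelectTransformer(['A', 'B'])
print(cst.fit_transform(df_input))    # a (2, 2) array taken from the DataFrame
print(cst.fit_transform(dict_input))  # the same (2, 2) array, built from the list of dicts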
Finally, here is the full error message.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-218-046724d81b69> in <module>()
----> 1 survey_model.fit(data, cycle_2_score.astype(int))

/opt/conda/lib/python3.7/site-packages/sklearn/pipeline.py in fit(self, X, y, **fit_params)
    350             This estimator
    351         """
--> 352         Xt, fit_params = self._fit(X, y, **fit_params)
    353         with _print_elapsed_time('Pipeline',
    354                                  self._log_message(len(self.steps) - 1)):

/opt/conda/lib/python3.7/site-packages/sklearn/pipeline.py in _fit(self, X, y, **fit_params)
    315                 message_clsname='Pipeline',
    316                 message=self._log_message(step_idx),
--> 317                 **fit_params_steps[name])
    318             # Replace the transformer of the step with the fitted
    319             # transformer. This is necessary when loading the transformer

/opt/conda/lib/python3.7/site-packages/joblib/memory.py in __call__(self, *args, **kwargs)
    353
    354     def __call__(self, *args, **kwargs):
--> 355         return self.func(*args, **kwargs)
    356
    357     def call_and_shelve(self, *args, **kwargs):

/opt/conda/lib/python3.7/site-packages/sklearn/pipeline.py in _fit_transform_one(transformer, X, y, weight, message_clsname, message, **fit_params)
    714     with _print_elapsed_time(message_clsname, message):
    715         if hasattr(transformer, 'fit_transform'):
--> 716             res = transformer.fit_transform(X, y, **fit_params)
    717         else:
    718             res = transformer.fit(X, y, **fit_params).transform(X)

/opt/conda/lib/python3.7/site-packages/sklearn/pipeline.py in fit_transform(self, X, y, **fit_params)
    919
    920         if any(sparse.issparse(f) for f in Xs):
--> 921             Xs = sparse.hstack(Xs).tocsr()
    922         else:
    923             Xs = np.hstack(Xs)

/opt/conda/lib/python3.7/site-packages/scipy/sparse/construct.py in hstack(blocks, format, dtype)
    463
    464     """
--> 465     return bmat([blocks], format=format, dtype=dtype)
    466
    467

/opt/conda/lib/python3.7/site-packages/scipy/sparse/construct.py in bmat(blocks, format, dtype)
    584                                                     exp=brow_lengths[i],
    585                                                     got=A.shape[0]))
--> 586                     raise ValueError(msg)
    587
    588     if bcol_lengths[j] == 0:

ValueError: blocks[0,:] has incompatible row dimensions. Got blocks[0,2].shape[0] == 1, expected 13892.
In addition, the data and metadata can be obtained from the following location:
%%bash
mkdir data
wget http://dataincubator-wqu.s3.amazonaws.com/mldata/providers-train.csv -nc -P ./ml-data
wget http://dataincubator-wqu.s3.amazonaws.com/mldata/providers-metadata.csv -nc -P ./ml-data
Answer:
Modifying my TimedeltaTransformer seems to have helped. I changed it to return the time differences as a series of numbers (total seconds) and then reshape them with reshape(-1, 1), so the time feature stacks as a single (n_samples, 1) numeric column alongside the other feature blocks.
class TimedeltaTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, t1_col, t2_col):
        self.t1_col = t1_col
        self.t2_col = t2_col

    def fit(self, X, y=None):
        # Accept either a DataFrame or a list of dictionaries
        if not isinstance(X, pd.DataFrame):
            X = pd.DataFrame(X)
        self.col_1 = X[self.t1_col].apply(pd.to_datetime)
        self.col_2 = X[self.t2_col].apply(pd.to_datetime)
        return self

    def transform(self, X):
        # Convert each timedelta to a plain number of seconds and
        # return a single (n_samples, 1) column
        difference_list = []
        difference = self.col_1 - self.col_2
        for obj in difference:
            difference_list.append(obj.total_seconds())
        return np.array(difference_list).reshape(-1, 1)
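For anyone curious why the original version failed, here is a minimal sketch (toy arrays, not the project data) of what I believe was happening: because the one-hot encoders produce sparse matrices, FeatureUnion stacks the blocks with scipy's sparse.hstack, and a dense 1-D array like difference.values gets coerced into a single-row sparse matrix, so its row count no longer matches the other blocks.

import numpy as np
from scipy import sparse

n_samples = 5
dense_block = np.ones((n_samples, 3))                      # stands in for the numeric business features
sparse_block = sparse.csr_matrix(np.ones((n_samples, 2)))  # stands in for the one-hot encoded columns
time_1d = np.ones(n_samples)                               # a 1-D array, like difference.values

# Raises a ValueError similar to the one above:
# blocks[0,:] has incompatible row dimensions. Got blocks[0,2].shape[0] == 1, expected 5.
try:
    sparse.hstack([dense_block, sparse_block, time_1d])
except ValueError as e:
    print(e)

# Reshaping the 1-D array into a column makes the row counts line up
stacked = sparse.hstack([dense_block, sparse_block, time_1d.reshape(-1, 1)])
print(stacked.shape)  # (5, 6)

Converting the timedeltas with total_seconds() additionally keeps the time feature as plain numbers, which is what the RandomForestRegressor expects.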