I want to use a VotingClassifier inside an sklearn Pipeline, where I have already defined a set of classifiers.
I took some inspiration from this question: Using VotingClassifier in Sklearn Pipeline, to build the code below. However, in that question each classifier is defined inside its own Pipeline. I don't want to do it that way, because I have a set of features that is prepared beforehand, and regenerating those features in several Pipelines is not a good idea (it is a time-consuming process)!
How can I achieve this?
model = Pipeline([
    ('feat', FeatureUnion([
        ('tfidf', TfidfVectorizer(analyzer='char', ngram_range=(3, 5),
                                  min_df=0.01, lowercase=True,
                                  tokenizer=tokenizeTfidf)),
    ])),
    ('pip1', Pipeline([('clf1', GradientBoostingClassifier(n_estimators=1000, random_state=7))])),
    ('pip2', Pipeline([('clf2', SVC())])),
    ('pip3', Pipeline([('clf3', RandomForestClassifier())])),
    ('clf', VotingClassifier(estimators=["pip1", "pip2", "pip3"]))
])
clf = model.fit(X_train, y_train)
But I got this error:
('clf', VotingClassifier(estimators=["pip1", "pip2", "pip3"])),
  File "C:\Python35\lib\site-packages\imblearn\pipeline.py", line 115, in __init__
    self._validate_steps()
  File "C:\Python35\lib\site-packages\imblearn\pipeline.py", line 139, in _validate_steps
    "(but not both) '%s' (type %s) doesn't)" % (t, type(t)))
TypeError: All intermediate steps of the chain should be estimators that implement fit and transform or sample (but not both) 'Pipeline(memory=None,
     steps=[('clf1', GradientBoostingClassifier(criterion='friedman_mse', init=None,
              learning_rate=0.1, loss='deviance', max_depth=3,
              max_features=None, max_leaf_nodes=None,
              min_impurity_decrease=0.0, min_impurity_split=None,
              min_samples_leaf=1, min_samples_split=2,
              min_weight_fraction_leaf=0.0, n_estimators=1000,
              presort='auto', random_state=7, subsample=1.0, verbose=0,
              warm_start=False))])' (type <class 'imblearn.pipeline.Pipeline'>) doesn't)
Answer:
I assume you want to do something like this:
1) Use TfidfVectorizer to convert the text data into tf-idf features.
2) Send the transformed data to three estimators (GradientBoostingClassifier, SVC, RandomForestClassifier) and use voting to obtain the predictions.
Your error comes from the fact that every intermediate step of a Pipeline must be a transformer; a sub-Pipeline that contains only a classifier does not implement transform, so it cannot sit in the middle of the outer Pipeline. If the above is what you want, this is what you need:
model = Pipeline([
    ('feat', FeatureUnion([
        ('tfidf', TfidfVectorizer(analyzer='char', ngram_range=(3, 5),
                                  min_df=0.01, lowercase=True,
                                  tokenizer=tokenizeTfidf)),
    ])),
    ('clf', VotingClassifier(estimators=[
        ("pip1", GradientBoostingClassifier(n_estimators=1000, random_state=7)),
        ("pip2", SVC()),
        ("pip3", RandomForestClassifier())
    ]))
])
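With this layout the tf-idf features are computed once by the 'feat' step of the outer Pipeline, and the same transformed matrix is passed to all three estimators inside the VotingClassifier, so the expensive feature generation is not repeated.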
Also, if you are only using a single TfidfVectorizer and not combining it with other features, you don't even need the FeatureUnion:
model = Pipeline([
    ('tfidf', TfidfVectorizer(analyzer='char', ngram_range=(3, 5),
                              min_df=0.01, lowercase=True,
                              tokenizer=tokenizeTfidf)),
    ('clf', VotingClassifier(estimators=[
        ("pip1", GradientBoostingClassifier(n_estimators=1000, random_state=7)),
        ("pip2", SVC()),
        ("pip3", RandomForestClassifier())
    ]))
])
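As a minimal usage sketch, assuming the X_train, y_train and tokenizeTfidf from your question plus a hypothetical held-out X_test, you fit and predict with this model like with any other Pipeline:

# Fit the whole Pipeline on the raw training texts; the tf-idf step is
# fitted once and its transformed output is shared by all three estimators.
clf = model.fit(X_train, y_train)

# The default voting='hard' takes a majority vote of the predicted labels,
# so SVC() works as-is; soft voting would require SVC(probability=True).
predictions = clf.predict(X_test)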