I was using `TPOTClassifier()` and got the following as the optimal pipeline. I've attached the exported pipeline code. Can someone explain the steps of this pipeline and the order in which they run?
```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFwe, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline, make_union
from tpot.builtins import StackingEstimator
from sklearn.preprocessing import FunctionTransformer
from copy import copy

tpot_data = pd.read_csv('PATH/TO/DATA/FILE', sep='COLUMN_SEPARATOR', dtype=np.float64)
features = tpot_data.drop('target', axis=1)
training_features, testing_features, training_target, testing_target = \
    train_test_split(features, tpot_data['target'], random_state=None)

exported_pipeline = make_pipeline(
    make_union(
        FunctionTransformer(copy),
        make_union(
            FunctionTransformer(copy),
            make_union(
                FunctionTransformer(copy),
                make_union(
                    FunctionTransformer(copy),
                    FunctionTransformer(copy)
                )
            )
        )
    ),
    SelectFwe(score_func=f_classif, alpha=0.049),
    ExtraTreesClassifier(bootstrap=False, criterion="entropy", max_features=1.0,
                         min_samples_leaf=2, min_samples_split=5,
                         n_estimators=100)
)

exported_pipeline.fit(training_features, training_target)
results = exported_pipeline.predict(testing_features)
```
Answer:

`make_union` simply concatenates the outputs of several transformers, and `FunctionTransformer(copy)` copies all columns. So the nested `make_union` and `FunctionTransformer(copy)` calls end up duplicating every feature several times. This looks strange, but for the `ExtraTreesClassifier` it has a "bootstrapping"-like effect on feature selection. See issue 581 for an explanation of why these get generated in the first place; essentially, adding copies is useful in stacking ensembles, and TPOT's genetic algorithm means it needs to generate those copies before it can explore such an ensemble. Running more generations of the genetic algorithm would likely clean up this artifact.
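A minimal sketch of the duplication effect, using a tiny made-up array rather than the question's data. A union of two `FunctionTransformer(copy)` branches just stacks two identical copies of the input side by side; the pipeline's nested unions do the same with five branches, so every feature appears five times:

```python
import numpy as np
from copy import copy
from sklearn.pipeline import make_union
from sklearn.preprocessing import FunctionTransformer

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Each branch passes an identical copy of X through; FeatureUnion
# then concatenates the branch outputs column-wise.
union = make_union(FunctionTransformer(copy), FunctionTransformer(copy))
out = union.fit_transform(X)

print(out.shape)  # (2, 4): every original column now appears twice
```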
After that, things should be straightforward: you perform univariate feature selection (`SelectFwe`), then fit an extra-trees classifier.
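Those last two steps can be sketched in isolation on a synthetic dataset (the `make_classification` parameters here are arbitrary, chosen only for illustration): `SelectFwe` keeps the columns whose family-wise-error-corrected `f_classif` p-value is below `alpha`, and the classifier is then trained only on the surviving columns:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFwe, f_classif

# Synthetic data: 10 features, only 3 of which are informative.
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# Univariate selection: drop features failing the FWE-corrected F-test.
selector = SelectFwe(score_func=f_classif, alpha=0.049)
X_sel = selector.fit_transform(X, y)
print(X_sel.shape[1], "features kept out of", X.shape[1])

# Fit the final classifier on the selected features only.
clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
clf.fit(X_sel, y)
```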