VectorIndexer's maxCategories not working as expected with RandomForestClassifier in pyspark.ml

Background: I am doing simple binary classification with a RandomForestClassifier from pyspark.ml. Before feeding the data into training, I use VectorIndexer to decide whether the features are numerical or categorical by supplying the maxCategories argument.
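For reference, this is the behavior I expect from maxCategories, shown as a minimal sketch on made-up toy data (it assumes an active spark session; the toy column names are mine):

from pyspark.ml.feature import VectorAssembler, VectorIndexer

# toy frame: 'binary' has 2 distinct values, 'cont' has 4
toy = spark.createDataFrame(
    [(0.0, 1.5), (1.0, 2.5), (0.0, 3.5), (1.0, 4.5)],
    ["binary", "cont"])
assembled = VectorAssembler(inputCols=["binary", "cont"],
                            outputCol="features").transform(toy)
model = VectorIndexer(inputCol="features", outputCol="indexed",
                      maxCategories=3).fit(assembled)
model.categoryMaps
# {0: {0.0: 0, 1.0: 1}}  -- only feature 0 (2 distinct values <= 3) is categorical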

Problem: even though I have set maxCategories on the VectorIndexer to 30, I still get an error during training:

An error occurred while calling o15371.fit.: java.lang.IllegalArgumentException: requirement failed: DecisionTree requires maxBins (= 32) to be at least as large as the number of values in each categorical feature, but categorical feature 0 has 10765 values. Considering remove this and other categorical features with a large number of values, or add more training examples.

My code is simple: col_idx is a list of column-name strings I generate that will be passed to StringIndexer, col_all is a list of column-name strings I generate that will be passed to both StringIndexer and OneHotEncoder, and col_num holds the numerical column names.

from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler, IndexToString, VectorIndexer
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier

my_data.cache()

# string indexers and encoders
stIndexers = [StringIndexer(inputCol = Col, outputCol = Col + 'Index').setHandleInvalid('keep') for Col in col_idx]
encoder = OneHotEncoderEstimator(inputCols = [Col + 'Index' for Col in col_all], outputCols = [Col + 'ClassVec' for Col in col_all]).setHandleInvalid('keep')

# vector assembler
col_into_assembler = [cols + 'Index' for cols in col_idx] + [cols + 'ClassVec' for cols in col_all] + col_num
assembler = VectorAssembler(inputCols = col_into_assembler, outputCol = "features")

# featureIndexer, labelIndexer, rf classifier and labelConverter
# columns with fewer distinct values than maxCategories => categorical features,
# columns with more => numerical / continuous features; a smaller value means
# fewer categorical features, a larger value means more categorical features.
featureIndexer = VectorIndexer(inputCol = "features", outputCol = "indexedFeatures", maxCategories = 30)
labelIndexer = StringIndexer(inputCol = "label", outputCol = "indexedLabel").fit(my_data)
rf = RandomForestClassifier(featuresCol = "indexedFeatures", labelCol = "indexedLabel")
labelConverter = IndexToString(inputCol = "prediction", outputCol = "predictedLabel", labels = labelIndexer.labels)

# chain all the estimator and transformer stages into a Pipeline estimator
rfPipeline = Pipeline(stages = stIndexers + [encoder, assembler, featureIndexer, labelIndexer, rf, labelConverter])

# split the data and cache it
training, test = my_data.randomSplit([0.7, 0.3], seed = 100)
training.cache()
test.cache()

# fit the estimator on the training dataset to get a pipeline of transformers and fitted models
ModelRF = rfPipeline.fit(training)

# make predictions
predictions = ModelRF.transform(test)
predictions.printSchema()
predictions.show(5)

So my question is: why are there still high-cardinality categorical features in my data even though I set maxCategories to 30 in the VectorIndexer? I can set maxBins on the rf classifier to a higher value, but I am just curious: why is VectorIndexer not working as expected (well, as I expected), i.e. turning features with fewer than maxCategories distinct values into categorical features and features with more into numerical ones?
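(For completeness, the maxBins workaround I mention would look roughly like this sketch; 10765 is simply the cardinality reported in the error above:)

# sketch of the workaround: raise maxBins to at least the largest
# categorical cardinality (10765 comes from the error message above)
rf = RandomForestClassifier(featuresCol = "indexedFeatures",
                            labelCol = "indexedLabel",
                            maxBins = 10765)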


Answer:

It looks like, contrary to the documentation, which lists:

Preserve metadata in transform; if a feature’s metadata is already present, do not recompute.

among its TODO items, the metadata is in fact already preserved:

from pyspark.sql.functions import col
from pyspark.ml import Pipeline
from pyspark.ml.feature import *

df = spark.range(10)

stages = [
    StringIndexer(inputCol="id", outputCol="idx"),
    VectorAssembler(inputCols=["idx"], outputCol="features"),
    VectorIndexer(inputCol="features", outputCol="features_indexed", maxCategories=5)
]

Pipeline(stages=stages).fit(df).transform(df).schema["features"].metadata
# {'ml_attr': {'attrs': {'nominal': [{'vals': ['8',
#       '4',
#       '9',
#       '5',
#       '6',
#       '1',
#       '0',
#       '2',
#       '7',
#       '3'],
#      'idx': 0,
#      'name': 'idx'}]},
#   'num_attrs': 1}}

Pipeline(stages=stages).fit(df).transform(df).schema["features_indexed"].metadata
# {'ml_attr': {'attrs': {'nominal': [{'ord': False,
#      'vals': ['0.0',
#       '1.0',
#       '2.0',
#       '3.0',
#       '4.0',
#       '5.0',
#       '6.0',
#       '7.0',
#       '8.0',
#       '9.0'],
#      'idx': 0,
#      'name': 'idx'}]},
#   'num_attrs': 1}}

Under normal circumstances this is the desired behavior. You should not use indexed categorical features as continuous variables.

But if you still want to circumvent this behavior, you will have to reset the metadata, for example:

pipeline1 = Pipeline(stages=stages[:1])
pipeline2 = Pipeline(stages=stages[1:])

dft1 = pipeline1.fit(df).transform(df).withColumn(
    "idx", col("idx").alias("idx", metadata={}))
dft2 = pipeline2.fit(dft1).transform(dft1)

dft2.schema["features_indexed"].metadata
# {'ml_attr': {'attrs': {'numeric': [{'idx': 0, 'name': 'idx'}]},
#   'num_attrs': 1}}
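With the metadata reset, VectorIndexer recomputes the categories itself: in the snippet above, idx has 10 distinct values, which is more than maxCategories=5, so the feature is reported as numeric. Applied to your case, a column with 10765 distinct values would likewise exceed maxCategories=30 and be left as a continuous feature, so DecisionTree would no longer require maxBins to be at least 10765.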
