I am getting predictions with spark.ml.classification.LogisticRegressionModel.predict. Many rows show 1.0 in the prediction column but 0.04 in the probability column. model.getThreshold is 0.5, so I assumed the model classifies everything whose probability exceeds 0.5 as 1.0.
How should I interpret a row whose prediction is 1.0 but whose probability is 0.04?
Answer:
After running LogisticRegression, the probability column holds a vector whose length equals the number of classes; the value at each index is the probability of the corresponding class. I put together a small two-class example to illustrate:
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import spark.implicits._ // assumes a SparkSession named spark is in scope, as in spark-shell

case class Person(label: Double, age: Double, height: Double, weight: Double)

val df = List(
  Person(0.0, 15, 175, 67),
  Person(0.0, 30, 190, 100),
  Person(1.0, 40, 155, 57),
  Person(1.0, 50, 160, 56),
  Person(0.0, 15, 170, 56),
  Person(1.0, 80, 180, 88)
).toDF()

// Assemble the three numeric columns into a single feature vector.
val assembler = new VectorAssembler()
  .setInputCols(Array("age", "height", "weight"))
  .setOutputCol("features")

val df2 = assembler.transform(df).select("label", "features")
df2.show

+-----+------------------+
|label|          features|
+-----+------------------+
|  0.0| [15.0,175.0,67.0]|
|  0.0|[30.0,190.0,100.0]|
|  1.0| [40.0,155.0,57.0]|
|  1.0| [50.0,160.0,56.0]|
|  0.0| [15.0,170.0,56.0]|
|  1.0| [80.0,180.0,88.0]|
+-----+------------------+

val lr = new LogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.3)
  .setElasticNetParam(0.8)

// 70% of the rows go to training, 30% to testing.
val Array(training, testing) = df2.randomSplit(Array(0.7, 0.3))

val model = lr.fit(training)
val predictions = model.transform(testing)
predictions.select("probability", "prediction").show(false)

+----------------------------------------+----------+
|probability                             |prediction|
+----------------------------------------+----------+
|[0.7487950501224138,0.2512049498775863] |0.0       |
|[0.6458452667523259,0.35415473324767416]|0.0       |
|[0.3888393314864866,0.6111606685135134] |1.0       |
+----------------------------------------+----------+
These are the probabilities the algorithm computed for each row together with the final prediction: the class with the highest probability is the one predicted.
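To make the threshold comparison explicit, you can unpack the probability vector yourself. Here is a minimal sketch against the predictions DataFrame from the example above; the helper probOfOne and the column name p1 are my own choices, not part of the Spark API:

import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.udf

// P(class 1) is element 1 of the probability vector (hypothetical helper).
val probOfOne = udf((v: Vector) => v(1))

predictions
  .withColumn("p1", probOfOne($"probability"))
  .select("probability", "p1", "prediction")
  .show(false)

// A row gets prediction 1.0 exactly when p1 exceeds the model's threshold
// (0.5 by default). So "prediction = 1.0 with probability 0.04" means you were
// reading P(class 0) = 0.04 at index 0, while P(class 1) at index 1 is 0.96.

If you want a different cutoff for the prediction column, model.setThreshold(...) moves the decision boundary without retraining.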