I am trying to implement a simple decision tree classifier demo with Java and Apache Spark 1.0.0, following http://spark.apache.org/docs/1.0.0/mllib-decision-tree.html. So far I have written the code below.
With this code I get an error on the line:
org.apache.spark.mllib.tree.impurity.Impurity impurity = new org.apache.spark.mllib.tree.impurity.Entropy();
Type mismatch: cannot convert from Entropy to Impurity. This is strange to me, because the Entropy class implements the Impurity interface:
https://spark.apache.org/docs/1.0.0/api/java/org/apache/spark/mllib/tree/impurity/Entropy.html
Why can't I make this assignment?
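A likely explanation (an assumption about how scalac compiles Spark's `object Entropy`, not something stated in the Spark docs): the `Entropy` class visible from Java is a static forwarder, while the actual singleton that implements `Impurity` is a separate `Entropy$` class with a `MODULE$` field. The following is a plain-Java sketch of that compilation scheme, using stand-in types (the names `Impurity`, `Entropy`, and the entropy formula here are illustrative, not the real Spark classes):

```java
// Stand-in for the MLlib Impurity trait.
interface Impurity {
    double calculate(double c0, double c1);
}

// Roughly what scalac emits for `object Entropy extends Impurity`:
// a module class whose single instance lives in MODULE$.
final class Entropy$ implements Impurity {
    public static final Entropy$ MODULE$ = new Entropy$();
    private Entropy$() {}

    @Override
    public double calculate(double c0, double c1) {
        double total = c0 + c1;
        if (total == 0) return 0.0;
        double e = 0.0;
        for (double c : new double[]{c0, c1}) {
            double f = c / total;
            if (f > 0) e -= f * Math.log(f); // natural-log entropy
        }
        return e;
    }
}

// The static forwarder that Javadoc shows as "class Entropy": it only
// delegates to the module and does NOT itself implement Impurity,
// which explains both compile errors in the question.
final class Entropy {
    private Entropy() {} // -> "The constructor Entropy() is undefined"
    public static double calculate(double c0, double c1) {
        return Entropy$.MODULE$.calculate(c0, c1);
    }
}

public class ScalaObjectFromJava {
    public static void main(String[] args) {
        // Impurity i = new Entropy();  // would not compile: wrong type, no constructor
        Impurity impurity = Entropy$.MODULE$; // the singleton IS the Impurity
        System.out.println(impurity.calculate(5, 5)); // 50/50 split: ln 2 ≈ 0.6931
    }
}
```

Under this reading, the Java-side workaround in Spark 1.0.x is to reference the module instance (`Entropy$.MODULE$`) rather than trying to construct the forwarder class.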
package decisionTree;

import java.util.regex.Pattern;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.tree.DecisionTree;
import org.apache.spark.mllib.tree.configuration.Algo;
import org.apache.spark.mllib.tree.configuration.Strategy;
import org.apache.spark.mllib.tree.impurity.Gini;
import org.apache.spark.mllib.tree.impurity.Impurity;

import scala.Enumeration.Value;

public final class DecisionTreeDemo {

    static class ParsePoint implements Function<String, LabeledPoint> {
        private static final Pattern COMMA = Pattern.compile(",");
        private static final Pattern SPACE = Pattern.compile(" ");

        @Override
        public LabeledPoint call(String line) {
            String[] parts = COMMA.split(line);
            double y = Double.parseDouble(parts[0]);
            String[] tok = SPACE.split(parts[1]);
            double[] x = new double[tok.length];
            for (int i = 0; i < tok.length; ++i) {
                x[i] = Double.parseDouble(tok[i]);
            }
            return new LabeledPoint(y, Vectors.dense(x));
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.err.println("Usage: DecisionTreeDemo <file>");
            System.exit(1);
        }

        JavaSparkContext ctx = new JavaSparkContext("local[4]", "Log Analyzer",
                System.getenv("SPARK_HOME"),
                JavaSparkContext.jarOfClass(DecisionTreeDemo.class));

        JavaRDD<String> lines = ctx.textFile(args[0]);
        JavaRDD<LabeledPoint> points = lines.map(new ParsePoint()).cache();

        int iterations = 100;
        int maxBins = 2;
        int maxMemory = 512;
        int maxDepth = 1;

        org.apache.spark.mllib.tree.impurity.Impurity impurity =
                new org.apache.spark.mllib.tree.impurity.Entropy();

        Strategy strategy = new Strategy(Algo.Classification(), impurity, maxDepth,
                maxBins, null, null, maxMemory);

        ctx.stop();
    }
}
@[name redacted] If I remove the impurity variable and change the call to the following:
Strategy strategy = new Strategy(Algo.Classification(),
        new org.apache.spark.mllib.tree.impurity.Entropy(), maxDepth,
        maxBins, null, null, maxMemory);
the error becomes: The constructor Entropy() is undefined.
[Edited] I found what I believe is the correct method call (https://issues.apache.org/jira/browse/SPARK-2197):
Strategy strategy = new Strategy(Algo.Classification(), new Impurity() {
    @Override
    public double calculate(double arg0, double arg1, double arg2) {
        return Gini.calculate(arg0, arg1, arg2);
    }

    @Override
    public double calculate(double arg0, double arg1) {
        return Gini.calculate(arg0, arg1);
    }
}, 5, 100, QuantileStrategy.Sort(), null, 256);
Unfortunately, I ran into an error 🙁
Answer:
The Java solution for Bug 2197 is now available through this pull request:
Added other improvements to Decision Trees for ease of use with Java:
* Impurity classes: Added instance() methods to help with the Java interface.
* Strategy: Added a Java-friendly constructor
--> Note: I removed quantileCalculationStrategy from the Java-friendly constructor since (a) it is a special class and (b) there is currently only one option. I suspect we will redo the API before the other options are included.
You can see a complete example that solves your problem using the instance() method of the Gini impurity here:
Strategy strategy = new Strategy(Algo.Classification(), Gini.instance(), maxDepth,
        numClasses, maxBins, categoricalFeaturesInfo);
DecisionTreeModel model = DecisionTree$.MODULE$.train(rdd.rdd(), strategy);
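The `DecisionTree$.MODULE$.train(rdd.rdd(), …)` shape in that call reflects two interop details: the Scala `object DecisionTree` is reached through its module class, and `JavaRDD` exposes the underlying Scala RDD via `rdd()`. Here is a plain-Java sketch of that pattern with stand-in types (`RDD`, `JavaRDD`, `DecisionTree$`, and the trivial `train` body are all illustrative, not the real Spark API):

```java
import java.util.List;

// Stand-in for the Scala RDD that MLlib's train(...) expects.
class RDD<T> {
    final List<T> data;
    RDD(List<T> data) { this.data = data; }
}

// Stand-in for JavaRDD: a Java-friendly wrapper that exposes the underlying
// Scala RDD through rdd(), which is why the answer passes rdd.rdd().
class JavaRDD<T> {
    private final RDD<T> delegate;
    JavaRDD(RDD<T> delegate) { this.delegate = delegate; }
    RDD<T> rdd() { return delegate; }
}

// Stand-in for the compiled Scala `object DecisionTree`: the singleton lives
// in DecisionTree$.MODULE$, and train(...) is an instance method on it.
final class DecisionTree$ {
    public static final DecisionTree$ MODULE$ = new DecisionTree$();
    private DecisionTree$() {}

    public <T> int train(RDD<T> input) {
        return input.data.size(); // placeholder "model": just counts records
    }
}

public class ModuleCallDemo {
    public static void main(String[] args) {
        JavaRDD<String> rdd = new JavaRDD<>(new RDD<>(List.of("a", "b", "c")));
        // Mirrors DecisionTree$.MODULE$.train(rdd.rdd(), strategy):
        int model = DecisionTree$.MODULE$.train(rdd.rdd());
        System.out.println(model); // prints 3
    }
}
```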