Computing per-column mean and standard deviation in Hadoop

I want to compute the mean and standard deviation of each column in Hadoop.

I implemented it in MapReduce using the naïve single-pass algorithm. I tested it on multivariate datasets of 455,000×90 and 650,000×120 and found the speedup to be lower than the number of processors. Comparing standalone mode against pseudo-distributed mode with 2 active cores, I got a speedup of 0.4 on the 455,000×90 dataset, i.e. 20 s / 53 s.
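For reference, these are the single-pass identities the job implements: every row contributes x/n and x²/n for each column, and the reducer recovers the standard deviation from the accumulated sums (population form; simple, though it can lose precision when the variance is small relative to the mean):

\bar{x} \;=\; \sum_{i=1}^{n} \frac{x_i}{n}, \qquad
\sigma \;=\; \sqrt{\,\sum_{i=1}^{n} \frac{x_i^2}{n} \;-\; \bar{x}^2\,}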

Why is my program inefficient? Can it be improved?

Mapper:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Mapper;

public class CalculateMeanAndSTDEVMapper extends
        Mapper<LongWritable, DoubleArrayWritable, IntWritable, DoubleArrayWritable> {

    private int dataDimFrom;
    private int dataDimTo;
    private long samplesCount;
    private int universeSize;

    @Override
    protected void setup(Context context) throws IOException {
        Configuration conf = context.getConfiguration();
        dataDimFrom = conf.getInt("dataDimFrom", 0);
        dataDimTo = conf.getInt("dataDimTo", 0);
        samplesCount = conf.getLong("samplesCount", 0);
        universeSize = dataDimTo - dataDimFrom + 1;
    }

    @Override
    public void map(LongWritable key, DoubleArrayWritable array, Context context)
            throws IOException, InterruptedException {
        // First half holds x/n terms (for the mean), second half holds
        // x^2/n terms (for the variance).
        DoubleWritable[] outArray = new DoubleWritable[universeSize * 2];
        for (int c = 0; c < universeSize; c++) {
            outArray[c] = new DoubleWritable(
                    array.get(c + dataDimFrom).get() / samplesCount);
        }
        for (int c = universeSize; c < universeSize * 2; c++) {
            double val = array.get(c - universeSize + dataDimFrom).get();
            outArray[c] = new DoubleWritable((val * val) / samplesCount);
        }
        // Everything goes to a single key so one reducer sees all partial sums.
        context.write(new IntWritable(1), new DoubleArrayWritable(outArray));
    }
}

Combiner:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class CalculateMeanAndSTDEVCombiner extends
        Reducer<IntWritable, DoubleArrayWritable, IntWritable, DoubleArrayWritable> {

    private int dataDimFrom;
    private int dataDimTo;
    private int universeSize;

    @Override
    protected void setup(Context context) throws IOException {
        Configuration conf = context.getConfiguration();
        dataDimFrom = conf.getInt("dataDimFrom", 0);
        dataDimTo = conf.getInt("dataDimTo", 0);
        universeSize = dataDimTo - dataDimFrom + 1;
    }

    @Override
    public void reduce(IntWritable column, Iterable<DoubleArrayWritable> partialSums,
            Context context) throws IOException, InterruptedException {
        // Element-wise sum of all partial-sum arrays for this key.
        DoubleWritable[] outArray = new DoubleWritable[universeSize * 2];
        boolean isFirst = true;
        for (DoubleArrayWritable partialSum : partialSums) {
            for (int i = 0; i < universeSize * 2; i++) {
                if (!isFirst) {
                    outArray[i].set(outArray[i].get() + partialSum.get(i).get());
                } else {
                    outArray[i] = new DoubleWritable(partialSum.get(i).get());
                }
            }
            isFirst = false;
        }
        context.write(column, new DoubleArrayWritable(outArray));
    }
}

Reducer:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class CalculateMeanAndSTDEVReducer extends
        Reducer<IntWritable, DoubleArrayWritable, IntWritable, DoubleArrayWritable> {

    private int dataDimFrom;
    private int dataDimTo;
    private int universeSize;

    @Override
    protected void setup(Context context) throws IOException {
        Configuration conf = context.getConfiguration();
        dataDimFrom = conf.getInt("dataDimFrom", 0);
        dataDimTo = conf.getInt("dataDimTo", 0);
        universeSize = dataDimTo - dataDimFrom + 1;
    }

    @Override
    public void reduce(IntWritable column, Iterable<DoubleArrayWritable> partialSums,
            Context context) throws IOException, InterruptedException {
        DoubleWritable[] outArray = new DoubleWritable[universeSize * 2];
        boolean isFirst = true;
        for (DoubleArrayWritable partialSum : partialSums) {
            // Accumulate both halves (means and mean squares). Looping only to
            // universeSize here would leave the second half null and throw a
            // NullPointerException below.
            for (int i = 0; i < universeSize * 2; i++) {
                if (!isFirst) {
                    outArray[i].set(outArray[i].get() + partialSum.get(i).get());
                } else {
                    outArray[i] = new DoubleWritable(partialSum.get(i).get());
                }
            }
            isFirst = false;
        }
        // Turn E[x^2] into the population standard deviation: sqrt(E[x^2] - mean^2).
        for (int i = universeSize; i < universeSize * 2; i++) {
            double mean = outArray[i - universeSize].get();
            outArray[i].set(Math.sqrt(outArray[i].get() - mean * mean));
        }
        context.write(column, new DoubleArrayWritable(outArray));
    }
}

where DoubleArrayWritable is a simple class that extends ArrayWritable:

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.DoubleWritable;

public class DoubleArrayWritable extends ArrayWritable {

    public DoubleArrayWritable() {
        super(DoubleWritable.class);
    }

    public DoubleArrayWritable(DoubleWritable[] values) {
        super(DoubleWritable.class, values);
    }

    // Convenience accessor with a cast back to DoubleWritable.
    public DoubleWritable get(int idx) {
        return (DoubleWritable) get()[idx];
    }
}
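For completeness, here is a minimal driver sketch showing how these classes might be wired together. The original post does not include a driver, so the class name, the hard-coded configuration values, and the use of SequenceFile input/output are assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

// Hypothetical driver; not part of the original post.
public class CalculateMeanAndSTDEVJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The mapper, combiner, and reducer read these three values in setup().
        conf.setInt("dataDimFrom", 0);
        conf.setInt("dataDimTo", 89);          // e.g. 90 columns
        conf.setLong("samplesCount", 455000L); // row count, known in advance

        Job job = Job.getInstance(conf, "mean-and-stdev");
        job.setJarByClass(CalculateMeanAndSTDEVJob.class);
        job.setMapperClass(CalculateMeanAndSTDEVMapper.class);
        job.setCombinerClass(CalculateMeanAndSTDEVCombiner.class);
        job.setReducerClass(CalculateMeanAndSTDEVReducer.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(DoubleArrayWritable.class);
        // Assumes the input is a SequenceFile of <LongWritable, DoubleArrayWritable>,
        // matching the mapper's input signature.
        job.setInputFormatClass(SequenceFileInputFormat.class);
        SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
        SequenceFileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}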

Answer:

I asked the same question about a different job in the same environment. David Gruzman guessed that the problem was the difference in job start-up time (local vs. cluster). He suggested that the minimum data size at which good speedup appears in this environment is about 5 GB. I tried it, and it turned out to be true.
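One way to make that concrete, as an illustrative model rather than anything from the original answer: write each mode's runtime as a fixed start-up cost plus parallelizable work w spread over p cores, so that

S(p) \;=\; \frac{T_{\text{standalone}}}{T_{\text{pseudo-distributed}}(p)} \;=\; \frac{s_0 + w}{s_1 + w/p}

where s_0 is the (small) in-process start-up cost of standalone mode and s_1 the (much larger) cost of launching daemons and task JVMs. Purely hypothetical values s_0 ≈ 2 s, s_1 ≈ 44 s, w ≈ 18 s, p = 2 reproduce the observed 20 s / 53 s ≈ 0.4; only once w reaches minutes of work (roughly the 5 GB scale suggested above) does the w/p term outweigh the start-up cost.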

See also: Why is a job with only mappers so slow on a real cluster?
