Converting a Lucene index to Mahout vectors

I have a Spring web application. It maps the model Education into a Lucene index via Hibernate Search:

@Entity
@Table(name="educations")
@Indexed
public class Education {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Field(termVector = TermVector.WITH_POSITION_OFFSETS)
    private long id;

    @Column(name = "name")
    @Field(termVector = TermVector.WITH_POSITION_OFFSETS)
    @Boost(value = 1.5f)
    private String name;

    @Column(name = "local_name")
    private String localName;

    @Column(name = "description", columnDefinition="TEXT")
    @Field(termVector = TermVector.WITH_POSITION_OFFSETS)
    private String description;

This works great!

Now I'm trying to cluster my Lucene index with Mahout 0.9. I've implemented a basic K-means clustering, but I don't know how to convert my Lucene index into Mahout vectors.

Here is my working basic K-means clustering class, using some test data points:

package com.courseportal.project.utils.lsh.util;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.mahout.clustering.Cluster;
import org.apache.mahout.clustering.classify.WeightedPropertyVectorWritable;
import org.apache.mahout.clustering.kmeans.KMeansDriver;
import org.apache.mahout.clustering.kmeans.Kluster;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class SimpleKMeansClustering {

    public static final double[][] points = {
            {1, 1}, {2, 1}, {1, 2},
            {2, 2}, {3, 3}, {8, 8},
            {9, 8}, {8, 9}, {9, 9}};

    // Writes the vectors to a SequenceFile that KMeansDriver can consume.
    public static void writePointsToFile(List<Vector> points,
                                         String fileName,
                                         FileSystem fs,
                                         Configuration conf) throws IOException {
        Path path = new Path(fileName);
        SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf,
                path, LongWritable.class, VectorWritable.class);
        long recNum = 0;
        VectorWritable vec = new VectorWritable();
        for (Vector point : points) {
            vec.set(point);
            writer.append(new LongWritable(recNum++), vec);
        }
        writer.close();
    }

    // Wraps the raw double arrays in Mahout vectors.
    public static List<Vector> getPoints(double[][] raw) {
        List<Vector> points = new ArrayList<Vector>();
        for (int i = 0; i < raw.length; i++) {
            double[] fr = raw[i];
            Vector vec = new RandomAccessSparseVector(fr.length);
            vec.assign(fr);
            points.add(vec);
        }
        return points;
    }

    public static void main(String args[]) throws Exception {
        int k = 2;
        List<Vector> vectors = getPoints(points);

        File testData = new File("clustering/testdata");
        if (!testData.exists()) {
            testData.mkdir();
        }
        testData = new File("clustering/testdata/points");
        if (!testData.exists()) {
            testData.mkdir();
        }

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        writePointsToFile(vectors, "clustering/testdata/points/file1", fs, conf);

        // Seed the run with the first k points as initial cluster centers.
        Path path = new Path("clustering/testdata/clusters/part-00000");
        SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf, path, Text.class, Kluster.class);
        for (int i = 0; i < k; i++) {
            Vector vec = vectors.get(i);
            Kluster cluster = new Kluster(vec, i, new EuclideanDistanceMeasure());
            writer.append(new Text(cluster.getIdentifier()), cluster);
        }
        writer.close();

        KMeansDriver.run(conf,
                new Path("clustering/testdata/points"),
                new Path("clustering/testdata/clusters"),
                new Path("clustering/output"),
                0.001,   // convergence delta
                10,      // max iterations
                true,    // run clustering after the centers converge
                0,       // cluster classification threshold
                true);   // run sequentially (no MapReduce)

        // Print which cluster each point was assigned to.
        SequenceFile.Reader reader = new SequenceFile.Reader(fs,
                new Path("clustering/output/" + Cluster.CLUSTERED_POINTS_DIR + "/part-m-0"), conf);
        IntWritable key = new IntWritable();
        WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable();
        while (reader.next(key, value)) {
            System.out.println(value.toString() + " belongs to cluster " + key.toString());
        }
        reader.close();
    }
}

I read (here) that I should use LuceneIndexToSequenceFiles for this, but I can't find that class in Mahout 0.9. Is it something I need to pull in manually?

How do I convert my index so it can be used with my K-means clustering class?


Answer:

You can use the package org.apache.mahout.text with the class SequenceFilesFromLuceneStorageMRJob (for a distributed conversion) or SequenceFilesFromLuceneStorageDriver.

You can find more about how to use them in the mahout-0.9 tests, for example here:

mahout-0.9/integration/src/test/java/org/apache/mahout/text/SequenceFilesFromLuceneStorageDriverTest.java
mahout-0.9/integration/src/test/java/org/apache/mahout/text/SequenceFilesFromLuceneStorageMRJob.java

and here: https://mahout.apache.org/users/basics/creating-vectors-from-text.html

Important: your Lucene index must be created with the same Lucene version as the one used by Mahout.
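As a rough sketch of how the pieces could be chained (not a tested recipe — the index path, output paths, and field names below are assumptions based on the Education entity above): both the Lucene-to-SequenceFile driver and Mahout's seq2sparse vectorizer are Hadoop Tools, so they can be run programmatically before pointing KMeansDriver at the resulting TF-IDF vectors.

```java
import org.apache.hadoop.util.ToolRunner;
import org.apache.mahout.text.SequenceFilesFromLuceneStorageDriver;
import org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles;

public class LuceneIndexToVectors {
    public static void main(String[] args) throws Exception {
        // Step 1: dump stored Lucene fields into a SequenceFile of <id, text> pairs.
        ToolRunner.run(new SequenceFilesFromLuceneStorageDriver(), new String[] {
                "--dir", "/path/to/hibernate-search/index",  // assumed index location
                "--output", "clustering/education-seq",
                "--idField", "id",
                "--fields", "name,description"               // stored fields to extract
        });

        // Step 2: vectorize the text into TF-IDF vectors that KMeansDriver accepts.
        ToolRunner.run(new SparseVectorsFromSequenceFiles(), new String[] {
                "--input", "clustering/education-seq",
                "--output", "clustering/education-vectors",
                "--weight", "tfidf",
                "--namedVector"
        });

        // The vectors land under clustering/education-vectors/tfidf-vectors;
        // point the KMeansDriver input path in SimpleKMeansClustering there.
    }
}
```

One caveat: the driver reads stored field values from the index, and Hibernate Search does not store fields by default, so the mapped fields may need something like @Field(store = Store.YES) for the extraction to produce any text.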
