I want to convert a List into a Vector in pySpark and then use that column to train a machine learning model. But my Spark version is 1.6.0, which does not have VectorUDT(). So what type should my udf function return?
    from pyspark.sql import SQLContext
    from pyspark import SparkContext, SparkConf
    from pyspark.sql.functions import *
    from pyspark.mllib.linalg import DenseVector
    from pyspark.mllib.linalg import Vectors
    from pyspark.sql.types import *

    conf = SparkConf().setAppName('rank_test')
    sc = SparkContext(conf=conf)
    spark = SQLContext(sc)

    df = spark.createDataFrame([[[0.1,0.2,0.3,0.4,0.5]]], ['a'])
    print '???'
    df.show()

    def list2vec(column):
        print '?????', column
        return Vectors.dense(column)

    getVector = udf(lambda y: list2vec(y), DenseVector())
    df.withColumn('b', getVector(col('a'))).show()
I have tried many types; this DenseVector() gives me the error:
    Traceback (most recent call last):
      File "t.py", line 21, in <module>
        getVector = udf(lambda y: list2vec(y), DenseVector())
    TypeError: __init__() takes exactly 2 arguments (1 given)
Please help me.
Answer:
You can use Vectors and VectorUDT with a UDF:
    from pyspark.ml.linalg import Vectors, VectorUDT
    from pyspark.sql import functions as F

    ud_f = F.udf(lambda r: Vectors.dense(r), VectorUDT())
    df = df.withColumn('b', ud_f('a'))
    df.show()

    +-------------------------+---------------------+
    |a                        |b                    |
    +-------------------------+---------------------+
    |[0.1, 0.2, 0.3, 0.4, 0.5]|[0.1,0.2,0.3,0.4,0.5]|
    +-------------------------+---------------------+

    df.printSchema()

    root
     |-- a: array (nullable = true)
     |    |-- element: double (containsNull = true)
     |-- b: vector (nullable = true)
For VectorUDT, see http://spark.apache.org/docs/2.2.0/api/python/_modules/pyspark/ml/linalg.html