I tried to run the very simple Word2Vec example from the documentation, linked here:
https://spark.apache.org/docs/1.4.1/api/python/_modules/pyspark/ml/feature.html#Word2Vec
from pyspark import SparkContext, SQLContext
from pyspark.mllib.feature import Word2Vec

sqlContext = SQLContext(sc)
sent = ("a b " * 100 + "a c " * 10).split(" ")
doc = sqlContext.createDataFrame([(sent,), (sent,)], ["sentence"])
model = Word2Vec(vectorSize=5, seed=42, inputCol="sentence", outputCol="model").fit(doc)
model.getVectors().show()
model.findSynonyms("a", 2).show()
TypeError                                 Traceback (most recent call last)
<ipython-input-4-e57e9f694961> in <module>()
      5 sent = ("a b " * 100 + "a c " * 10).split(" ")
      6 doc = sqlContext.createDataFrame([(sent,), (sent,)], ["sentence"])
----> 7 model = Word2Vec(vectorSize=5, seed=42, inputCol="sentence", outputCol="model").fit(doc)
      8 model.getVectors().show()
      9 model.findSynonyms("a", 2).show()

TypeError: __init__() got an unexpected keyword argument 'vectorSize'
Any ideas why this fails?
Answer:
You are reading the documentation for the ml package but importing from the mllib package. In mllib, Word2Vec's __init__ method takes no arguments, which is why passing vectorSize raises a TypeError.
Did you mean to use this import instead:
from pyspark.ml.feature import Word2Vec
With that import, the output is:
+----+--------------------+
|word|              vector|
+----+--------------------+
|   a|[-0.3511952459812...|
|   b|[0.29077222943305...|
|   c|[0.02315592765808...|
+----+--------------------+

+----+-------------------+
|word|         similarity|
+----+-------------------+
|   b|0.29255685145799626|
|   c|-0.5414068302988307|
+----+-------------------+