Can someone explain why CountVectorizer raises this error when I try to fit_transform any short word? Even when I set stop_words=None, I still get the same error. Here is the code:
from sklearn.feature_extraction.text import CountVectorizer

text = ['don\'t know when I shall return to the continuation of my scientific work. At the moment I can do absolutely nothing with it, and limit myself to the most necessary duty of my lectures; how much happier I would be to be scientifically active, if only I had the necessary mental freshness.']

cv = CountVectorizer(stop_words=None).fit(text)
This code runs perfectly fine. But when I then try to fit_transform on another piece of text:
cv.fit_transform(['q'])
the following error is raised:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-3-acbd560df1a2> in <module>()
----> 1 cv.fit_transform(['q'])

~/.local/lib/python3.6/site-packages/sklearn/feature_extraction/text.py in fit_transform(self, raw_documents, y)
    867
    868         vocabulary, X = self._count_vocab(raw_documents,
--> 869                                           self.fixed_vocabulary_)
    870
    871         if self.binary:

~/.local/lib/python3.6/site-packages/sklearn/feature_extraction/text.py in _count_vocab(self, raw_documents, fixed_vocab)
    809             vocabulary = dict(vocabulary)
    810         if not vocabulary:
--> 811             raise ValueError("empty vocabulary; perhaps the documents only"
    812                              " contain stop words")
    813

ValueError: empty vocabulary; perhaps the documents only contain stop words
I've read several threads about this error, since CountVectorizer apparently raises it quite often, but every discussion I found only covered cases where the text really does consist of nothing but stop words. I can't figure out what the problem is in my case, so any help would be greatly appreciated!
Answer:
CountVectorizer(token_pattern='(?u)\\b\\w\\w+\\b')
By default, the token pattern shown above only matches words (tokens) of two or more word characters, so a single-character document like 'q' yields no tokens at all and the vocabulary ends up empty.
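You can check the effect of this pattern directly with Python's re module; this is just an illustrative sketch of the regex behavior, not scikit-learn's own tokenization code:

import re

# Default CountVectorizer pattern: word boundaries around
# two-or-more word characters, so one-letter tokens are dropped.
default_pattern = r'(?u)\b\w\w+\b'
relaxed_pattern = r'(?u)\b\w+\b'

print(re.findall(default_pattern, 'q'))        # [] -> empty vocabulary
print(re.findall(default_pattern, 'a cat q'))  # ['cat']
print(re.findall(relaxed_pattern, 'a cat q'))  # ['a', 'cat', 'q']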
You can change this default behavior:
vect = CountVectorizer(token_pattern='(?u)\\b\\w+\\b')
Test:
In [29]: vect.fit_transform(['q'])
Out[29]:
<1x1 sparse matrix of type '<class 'numpy.int64'>'
    with 1 stored elements in Compressed Sparse Row format>

In [30]: vect.get_feature_names()
Out[30]: ['q']
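One more detail worth noting: fit_transform re-learns the vocabulary from the documents you pass it, so the earlier fit(text) on the long passage is discarded when you call cv.fit_transform(['q']); with the default pattern, ['q'] alone therefore produces an empty vocabulary. Below is a minimal end-to-end sketch of the fix, assuming a recent scikit-learn (>= 1.0), where get_feature_names() has been replaced by get_feature_names_out():

from sklearn.feature_extraction.text import CountVectorizer

# Relax the default token pattern so single-character tokens are kept.
vect = CountVectorizer(token_pattern=r'(?u)\b\w+\b')

# fit_transform learns the vocabulary from exactly these documents.
X = vect.fit_transform(['q'])
print(X.shape)                       # (1, 1)
print(vect.get_feature_names_out())  # ['q']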