Suppose I have a dataframe containing different lines of text, and I want to cluster those lines to discover the latent topics in the data:
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "id_num": np.random.randint(low=0, high=50, size=10),
    "text": ["hello these are words i would like to cluster",
             "hello i would like to go home",
             "home i would like to go please thank you",
             "thank you please apple banana",
             "orange banana apple fruit corn",
             "orange orange orange banana banana banana banana",
             "can you take me home i have had enough of this place",
             "i am bored can we go home",
             "i would like to leave now to go home",
             "apple apple banana"]})
I would first split this dataframe into train and test sets:
>>> from sklearn.model_selection import train_test_split
>>> train, test = train_test_split(df, test_size=0.40)
>>> train, test = train["text"], test["text"]
Then I start the clustering process:
>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from sklearn.cluster import KMeans
>>> vectorizer = TfidfVectorizer()
>>> train_X = vectorizer.fit_transform(train)
>>> test_X = vectorizer.fit_transform(test)
>>> model = KMeans(n_clusters=2)
>>> model.fit(train_X)
>>> model.predict(test_X)
ValueError: Incorrect number of features. Got 22 features, expected 18.
Of course, if you run this code on your own machine you may get different results, and the feature counts might even happen to line up. In most cases, though, the dimensions of train_X and test_X will not match.
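The mismatch is easy to see by comparing the shapes of the two matrices (the numbers below are just an example; the random split will give you different ones):

# each fit_transform call learns its own vocabulary, so the column counts differ
print(train_X.shape)   # e.g. (6, 18) -- 18 distinct words in the train split
print(test_X.shape)    # e.g. (4, 22) -- 22 distinct words in the test split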
Has anyone else run into this problem? One way I can think of to make the dimensions equal is a kind of dimensionality reduction: keep only the features (i.e. words) that occur in both train and test. Another solution, which would produce larger matrices, is to pad with zeros wherever a document lacks the words that only appear in the other corpus.
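For illustration, a rough sketch of the shared-vocabulary idea (hypothetical: the name shared_vocab and the extra pass over the data are my own, and this is not the fix I ended up using):

from sklearn.feature_extraction.text import TfidfVectorizer

# learn each split's vocabulary separately, then keep only the words they share
train_vocab = set(TfidfVectorizer().fit(train).vocabulary_)
test_vocab = set(TfidfVectorizer().fit(test).vocabulary_)
shared_vocab = sorted(train_vocab & test_vocab)

# restrict a new vectorizer to that shared vocabulary so both matrices
# end up with the same columns
vectorizer = TfidfVectorizer(vocabulary=shared_vocab)
train_X = vectorizer.fit_transform(train)
test_X = vectorizer.transform(test)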
Are there any other approaches I should try?
Answer:
After some research, I found a couple of StackOverflow answers to the same question: Python vectorization for classification and Scikit learn – fit_transform on the test set.
In short, I need to change
train_X = vectorizer.fit_transform(train)
test_X = vectorizer.fit_transform(test)
to
train_X = vectorizer.fit_transform(train)
test_X = vectorizer.transform(test)
Using transform instead of fit_transform keeps the vocabulary created by fit_transform on the previous line, and ensures that the two matrices have the same columns.
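Putting it together, a minimal sketch of the corrected pipeline (assuming train and test are the text series created above, and keeping n_clusters=2 from the question):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

vectorizer = TfidfVectorizer()
train_X = vectorizer.fit_transform(train)   # learns the vocabulary from the train split
test_X = vectorizer.transform(test)         # reuses that vocabulary for the test split

assert train_X.shape[1] == test_X.shape[1]  # the columns now line up

model = KMeans(n_clusters=2)
model.fit(train_X)
print(model.predict(test_X))                # no ValueError this time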