I am using LDA on a small collection of documents. My goal is to extract topics from them and then use those extracted topics as features to evaluate my model.
I decided to use a multinomial SVM as the evaluator. I'm not sure whether that is appropriate?
```python
import itertools

from gensim import corpora, models
from gensim.models import ldamodel
from nltk.tokenize import RegexpTokenizer
from nltk.stem.porter import PorterStemmer
from sklearn.naive_bayes import MultinomialNB

tokenizer = RegexpTokenizer(r'\w+')

# create an English stop word list
en_stop = {'a'}

# create an instance of the PorterStemmer class
p_stemmer = PorterStemmer()

# create sample documents
doc_a = "Brocolli is good to eat. My brother likes to eat good brocolli, but not my mother."
doc_b = "My mother spends a lot of time driving my brother around to baseball practice."
doc_c = "Some health experts suggest that driving may cause increased tension and blood pressure."
doc_d = "I often feel pressure to perform well at school, but my mother never seems to drive my brother to do better."
doc_e = "Health professionals say that brocolli is good for your health."

# compile the sample documents into a list
doc_set = [doc_a, doc_b, doc_c, doc_d, doc_e]

# list for the tokenized documents built in the loop
texts = []

# loop through the document list
for i in doc_set:
    # clean and tokenize the document string
    raw = i.lower()
    tokens = tokenizer.tokenize(raw)

    # remove stop words from the tokens
    stopped_tokens = [i for i in tokens if i not in en_stop]

    # stem the tokens
    stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]

    # add the tokens to the list
    texts.append(stemmed_tokens)

# turn the tokenized documents into an id <-> term dictionary
dictionary = corpora.Dictionary(texts)

# convert the tokenized documents into a document-term matrix
corpus = [dictionary.doc2bow(text) for text in texts]

# generate the LDA model
#ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=20)

id2word = corpora.Dictionary(texts)

# create the bag-of-words corpus
mm = [id2word.doc2bow(text) for text in texts]

# train the LDA model
lda = ldamodel.LdaModel(corpus=mm, id2word=id2word, num_topics=4,
                        update_every=1, chunksize=10000, passes=1)

# assign topics to the documents in the corpus
a = []
lda_corpus = lda[mm]
for i in range(len(doc_set)):
    a.append(lda_corpus[i])
    print(lda_corpus[i])

merged_list = list(itertools.chain(*lda_corpus))
print(a)
#my_list.append(my_list[i])

sv = MultinomialNB()
yvalues = [0, 1, 2, 3]
sv.fit(a, yvalues)
predictclass = sv.predict(a)

testLables = [0, 1, 2, 3]
from sklearn import metrics, tree
#yacc=metrics.accuracy_score(testLables,predictclass)
#print (yacc)
```
When I run this code, it throws the error mentioned in the title.
Also, this is the output of the LDA model (the topic-document distribution) that I feed into the SVM:
```
[[(0, 0.95533888404477663), (1, 0.014775921798986477), (2, 0.015161897773308793), (3, 0.014723296382928375)],
 [(0, 0.019079556242721694), (1, 0.017932434792585779), (2, 0.94498655991579728), (3, 0.018001449048895311)],
 [(0, 0.017957955483631164), (1, 0.017900184473362918), (2, 0.018133572636989413), (3, 0.9460082874060165)],
 [(0, 0.96554611572184923), (1, 0.011407838337200715), (2, 0.011537900721487016), (3, 0.011508145219463113)],
 [(0, 0.023306931039431281), (1, 0.022823706054846005), (2, 0.93072240824085961), (3, 0.023146954664863096)]]
```
My labels are 0, 1, 2, 3.
I found an answer here, but when I write:
```python
nsamples, nx, ny = a.shape
d2_train_dataset = a.reshape((nsamples, nx*ny))
```
it does not work in my case. In fact, a has no shape attribute.
Full error traceback:
```
Traceback (most recent call last):
  File "/home/saria/PycharmProjects/TfidfLDA/test3.py", line 87, in <module>
    sv.fit(a,yvalues)
  File "/home/saria/tfwithpython3.6/lib/python3.5/site-packages/sklearn/naive_bayes.py", line 562, in fit
    X, y = check_X_y(X, y, 'csr')
  File "/home/saria/tfwithpython3.6/lib/python3.5/site-packages/sklearn/utils/validation.py", line 521, in check_X_y
    ensure_min_features, warn_on_dtype, estimator)
  File "/home/saria/tfwithpython3.6/lib/python3.5/site-packages/sklearn/utils/validation.py", line 405, in check_array
    % (array.ndim, estimator_name))
ValueError: Found array with dim 3. Estimator expected <= 2.
```
Answer:
The error when calling fit on MultinomialNB is raised because the data contained in a has more than two dimensions. As currently constructed, a provides a list of (topic, score) tuples for each document, which the model does not accept.
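To see this concretely, here is a minimal sketch (assuming, as in the output you posted, that every document gets a score for all four topics, so the nested lists are uniform):

```python
import numpy as np

# a is a list of per-document lists of (topic_id, probability) tuples,
# so stacking it into an array yields three dimensions
arr = np.array(a)
print(arr.shape)  # (5, 4, 2): 5 documents, 4 topics, 2 tuple fields
print(arr.ndim)   # 3, but sklearn estimators require ndim <= 2
```

This is also why the reshape recipe you found did not work as written: a is a plain Python list, so it has no shape attribute until it is converted to a NumPy array.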
Since the first member of each tuple is just the topic label, you can drop that value from the tuple and rebuild the data as a two-dimensional matrix. The code below does exactly that:
```python
new_a = []
new_y = []
for x in a:
    temp_a = []
    # sort the (topic, score) tuples so the most probable topic comes first
    sorted_labels = sorted(x, key=lambda t: t[1], reverse=True)
    # record the most probable topic as the document's label
    new_y.append(sorted_labels[0][0])
    # keep only the scores, dropping the topic ids
    for z in x:
        temp_a.append(z[1])
    new_a.append(temp_a)
```
new_a will be a list of documents, where each document contains the scores for topics 0, 1, 2, and 3. You can then call sv.fit(new_a, yvalues) to fit your model.
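As an aside, gensim can omit topics whose probability falls below a threshold, in which case the per-document lists would have unequal lengths. A more defensive way to build the feature matrix is to request every topic explicitly with get_document_topics. This is a sketch under the assumption that the lda and mm objects from the question are in scope:

```python
import numpy as np

n_topics = 4
features = np.zeros((len(mm), n_topics))

# minimum_probability=0 asks gensim to report effectively every topic,
# so each row receives a score for all four topics
for doc_idx, bow in enumerate(mm):
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0):
        features[doc_idx, topic_id] = prob

# fit requires one label per document; with five sample documents the
# label list needs five entries (the question's yvalues has only four)
yvalues = [0, 1, 2, 3, 2]  # hypothetical labels, one per document
sv.fit(features, yvalues)
```

Note that the question's yvalues = [0, 1, 2, 3] would raise a separate inconsistent-length error once the dimensionality problem is fixed, since there are five documents.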