When I run tf-idf over a set of documents, it returns a tf-idf matrix that looks like this:
  (1, 12)    0.656240233446
  (1, 11)    0.754552023393
  (2, 6)     1.0
  (3, 13)    1.0
  (4, 2)     1.0
  (7, 9)     1.0
  (9, 4)     0.742540927053
  (9, 5)     0.66980069547
  (11, 19)   0.735138466738
  (11, 7)    0.677916982176
  (12, 18)   1.0
  (13, 14)   0.697455191865
  (13, 11)   0.716628394177
  (14, 5)    1.0
  (15, 8)    1.0
  (16, 17)   1.0
  (18, 1)    1.0
  (19, 17)   1.0
  (22, 13)   1.0
  (23, 3)    1.0
  (25, 6)    1.0
  (26, 19)   0.476648253537
  (26, 7)    0.879094103268
  (28, 10)   0.532672175403
  (28, 7)    0.523456282204
I would like to know what this is; I cannot work out how it is being presented. While stepping through in debug mode I came across indices, indptr and data, which seem to be related somehow to the numbers shown above. What are they? The numbers confuse me: if I assume the first element in the parentheses is the document, then I don't see document 0, 5 or 6. Please help me figure out how this works. I do understand from Wikipedia how tf-idf works in general, including taking the logarithm for the inverse document frequency; I just want to know what the three different kinds of numbers here refer to.
The source code is:
# Imports used by the code below
import re
import nltk
import joblib
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# This contains the list of file names
_filenames = []
# This contains the list of contents/text of the files
_contents = []
# This is a dict of filename:content
_file_contents = {}


class KmeansClustering():
    def kmeansClusters(self):
        global _report
        self.num_clusters = 5
        km = KMeans(n_clusters=self.num_clusters)
        vocab_frame = TokenizingAndPanda().createPandaVocabFrame()
        self.tfidf_matrix, self.terms, self.dist = TfidfProcessing().getTfidFPropertyData()
        km.fit(self.tfidf_matrix)
        self.clusters = km.labels_.tolist()
        joblib.dump(km, 'doc_cluster2.pkl')
        km = joblib.load('doc_cluster2.pkl')


class TokenizingAndPanda():
    def tokenize_only(self, text):
        '''
        This function tokenizes the text
        :param text: the text that you want to tokenize
        :return: the filtered tokens
        '''
        # first tokenize by sentence, then by word, so that punctuation is caught as its own token
        tokens = [word.lower() for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
        filtered_tokens = []
        # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation)
        for token in tokens:
            if re.search('[a-zA-Z]', token):
                filtered_tokens.append(token)
        return filtered_tokens

    def tokenize_and_stem(self, text):
        # first tokenize by sentence, then by word, so that punctuation is caught as its own token
        tokens = [word.lower() for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
        filtered_tokens = []
        # filter out any tokens not containing letters (e.g., numeric tokens, raw punctuation)
        for token in tokens:
            if re.search('[a-zA-Z]', token):
                filtered_tokens.append(token)
        stems = [_stemmer.stem(t) for t in filtered_tokens]
        return stems

    def getFilnames(self):
        '''
        :return:
        '''
        global _path
        global _filenames
        path = _path
        _filenames = FileAccess().read_all_file_names(path)

    def getContentsForFilenames(self):
        global _contents
        global _file_contents
        for filename in _filenames:
            content = FileAccess().read_the_contents_from_files(_path, filename)
            _contents.append(content)
            _file_contents[filename] = content

    def createPandaVocabFrame(self):
        global _totalvocab_stemmed
        global _totalvocab_tokenized
        # Enable this if you want to load the filenames and contents from a file structure.
        # self.getFilnames()
        # self.getContentsForFilenames()
        # for name, i in _file_contents.items():
        #     print(name)
        #     print(i)
        for i in _contents:
            allwords_stemmed = self.tokenize_and_stem(i)
            _totalvocab_stemmed.extend(allwords_stemmed)
            allwords_tokenized = self.tokenize_only(i)
            _totalvocab_tokenized.extend(allwords_tokenized)
        vocab_frame = pd.DataFrame({'words': _totalvocab_tokenized}, index=_totalvocab_stemmed)
        print(vocab_frame)
        return vocab_frame


class TfidfProcessing():
    def getTfidFPropertyData(self):
        tfidf_vectorizer = TfidfVectorizer(max_df=0.4, max_features=200000,
                                           min_df=0.02, stop_words='english',
                                           use_idf=True,
                                           tokenizer=TokenizingAndPanda().tokenize_and_stem,
                                           ngram_range=(1, 1))
        # print(_contents)
        tfidf_matrix = tfidf_vectorizer.fit_transform(_contents)
        terms = tfidf_vectorizer.get_feature_names()
        dist = 1 - cosine_similarity(tfidf_matrix)
        return tfidf_matrix, terms, dist
Answer:
The result of applying tf-idf to your data is, conceptually, a 2D matrix A, where A_ij is the normalized frequency of the j-th term (word) in the i-th document. What you see in the output is a sparse representation of that matrix; in other words, only the non-zero elements are printed out, so:
(1, 12) 0.656240233446
means that the 12th word (according to the vocabulary that sklearn built) has a normalized frequency of 0.656240233446 in the first document. The entries that are missing are zero, which means, for example, that the 3rd word cannot be found in the 1st document (because there is no (1, 3) entry), and so on.
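As for the indices, indptr and data you ran into in the debugger: fit_transform returns a scipy.sparse CSR matrix, and those three arrays are simply its internal storage. data holds the non-zero values, indices holds the column (term) index of each value, and indptr holds the offsets at which each row (document) starts. Here is a minimal sketch of how to look at all of this, using a tiny made-up corpus rather than your files:

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat", "the dog barked", "cats chase dogs"]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)   # a scipy.sparse CSR matrix

print(tfidf)            # the "(row, col)  value" listing you are seeing
print(tfidf.data)       # the non-zero tf-idf values
print(tfidf.indices)    # the column (term) index of each value
print(tfidf.indptr)     # data[indptr[i]:indptr[i+1]] are the values of row (document) i
print(tfidf.toarray())  # dense view: includes the zeros the sparse printout omits

print(vectorizer.get_feature_names_out())  # column index -> term (get_feature_names() on older scikit-learn)

Row i of the dense array is the tf-idf vector of document i, and its columns line up with the vocabulary returned by the vectorizer.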
The fact that some documents seem to be missing is a consequence of your particular code/data. Perhaps you set the vocabulary by hand, or capped the number of features considered? There are many parameters of TfidfVectorizer that can cause this, and without the actual data it is hard to say more. For example, setting min_df can cause it (because it drops very rare words), and so can max_features (which has the same effect).
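To illustrate that last point: in the code you posted the vectorizer is created with min_df=0.02 and max_df=0.4, which is exactly this situation. A document whose terms are all filtered out of the vocabulary becomes an all-zero row, and all-zero rows never show up in the sparse printout. A rough sketch of the effect, with a made-up corpus and an exaggerated min_df (the documents here are purely illustrative):

from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "apple banana",   # doc 0
    "banana cherry",  # doc 1
    "cherry apple",   # doc 2
    "qqq zzz",        # doc 3: words that occur in no other document
]

# With an integer min_df, a term must appear in at least that many documents
# to stay in the vocabulary, so every word of doc 3 gets dropped here.
vectorizer = TfidfVectorizer(min_df=2)
tfidf = vectorizer.fit_transform(docs)

print(vectorizer.vocabulary_)  # only apple, banana and cherry survive
print(tfidf)                   # no "(3, ...)" lines: doc 3 is an all-zero row
print(tfidf.toarray())         # the dense view shows that zero row explicitly

max_df works the other way around (dropping terms that occur in too large a fraction of the documents), and max_features keeps only the most frequent terms; either one can empty out a document in the same way.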