How to get the same results as sklearn's TfidfVectorizer?

I am trying to build a TfidfVectorizer from scratch. I have built a vectorizer that is very similar to sklearn's, but I cannot get the same tf-idf scores as TfidfVectorizer.

Here is my code:

import math
from collections import Counter

def vocab(corpus):
    entire_corpus = ' '.join([i for i in corpus]).split()
    values = Counter(entire_corpus)
    return dict(values)

def tfidf(corpus, vocab):
    row = 0
    vocabs = vocab(corpus)
    for sentence in corpus:
        col = 0
        word_freq = Counter(sentence.split())
        for word, freq in word_freq.items():
            tf = freq/len(sentence)
            n = vocabs.get(word, -1)
            if n != -1:
                idf = 1.0 + math.log((len(corpus)+1)/(n+1))
            print((row, col), tf*idf)
            col = col+1
        row = row + 1

vocabs = vocab(corpus)
tfidf(corpus, vocabs)

The output for the first row is

(0, 0) 0.038461538461538464

(0, 1) 0.038461538461538464

(0, 2) 0.038461538461538464

(0, 3) 0.05810867783715349

(0, 4) 0.038461538461538464

while the output of sklearn's TfidfVectorizer is

(0, 8) 0.38408524091481483

(0, 6) 0.38408524091481483

(0, 3) 0.38408524091481483

(0, 2) 0.5802858236844359

(0, 1) 0.46979138557992045

Can you tell me where I went wrong? Thank you.


Answer:

There are a few differences between your implementation and TfidfVectorizer. The `n` in your idf is the total number of occurrences of the word in the whole corpus (your `vocab` Counter), whereas idf should use the document frequency, i.e. the number of documents that contain the word. TfidfVectorizer also uses the raw count of the word in a document as tf; it does not divide by the document length (and `len(sentence)` in your code is the number of characters, not words). Finally, with the default `smooth_idf=True` the idf is `1 + ln((1 + N) / (1 + df))`, where `N` is the number of documents, and each output row is L2-normalized by default (`norm='l2'`), which is where scores like 0.384 and 0.580 come from. The code below reproduces `TfidfVectorizer(norm=None)`:

from sklearn.feature_extraction.text import TfidfVectorizer
from collections import Counter
import numpy as np
import pandas as pd

def tfidf_vectorizer(corpus):
    terms = list(set(' '.join([i for i in corpus]).split()))
    terms.sort()
    mat = np.zeros((len(corpus), len(terms)))
    for i in range(len(corpus)):
        tf = Counter(corpus[i].split())
        for j in range(len(terms)):
            # document frequency: how many documents contain the term
            df = len([document for document in corpus if terms[j] in document])
            idf = 1.0 + np.log((len(corpus) + 1) / (df + 1))
            mat[i, j] = tf[terms[j]] * idf
    return (terms, mat)

corpus = ['this is the first document',
          'this document is the second document',
          'this one is the third']

# manual calculation
vectorizer_1 = tfidf_vectorizer(corpus)
terms_1 = vectorizer_1[0]
matrix_1 = vectorizer_1[1]

# scikit-learn calculation
vectorizer_2 = TfidfVectorizer(norm=None).fit(corpus)
terms_2 = vectorizer_2.get_feature_names()  # get_feature_names_out() in scikit-learn >= 1.0
matrix_2 = vectorizer_2.transform(corpus).toarray()
print(pd.DataFrame(data=matrix_1, columns=terms_1))

   document     first   is       one    second  the     third  this
0  1.287682  1.693147  1.0  0.000000  0.000000  1.0  0.000000   1.0
1  2.575364  0.000000  1.0  0.000000  1.693147  1.0  0.000000   1.0
2  0.000000  0.000000  1.0  1.693147  0.000000  1.0  1.693147   1.0

print(pd.DataFrame(data=matrix_2, columns=terms_2))

   document     first   is       one    second  the     third  this
0  1.287682  1.693147  1.0  0.000000  0.000000  1.0  0.000000   1.0
1  2.575364  0.000000  1.0  0.000000  1.693147  1.0  0.000000   1.0
2  0.000000  0.000000  1.0  1.693147  0.000000  1.0  1.693147   1.0
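
To also match the default TfidfVectorizer output (the scores such as 0.384 and 0.580 in the question), the only remaining step is to L2-normalize each row of the unnormalized matrix. A minimal sketch, assuming the `tfidf_vectorizer` function and `corpus` defined above are in scope:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Unnormalized tf-idf matrix from the manual implementation above.
terms, mat = tfidf_vectorizer(corpus)

# Divide every document (row) vector by its Euclidean norm, which is what norm='l2' does.
mat_l2 = mat / np.linalg.norm(mat, axis=1, keepdims=True)

# With the defaults (norm='l2', smooth_idf=True, sublinear_tf=False) the two matrices agree.
sk_mat = TfidfVectorizer().fit_transform(corpus).toarray()
print(np.allclose(mat_l2, sk_mat))  # expected: True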

