I want to keep those 2-3 word phrases (i.e., features) in the dfm whose PMI value is greater than 3 times the number of words in the phrase.*
Here PMI is defined as: pmi(phrase) = log( p(phrase) / Product(p(word)) )
where p(phrase) is the probability of the phrase based on its relative frequency, and Product(p(word)) is the product of the probabilities of each word in the phrase.
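As a toy illustration of this definition (the counts below are hypothetical, chosen only to make the arithmetic concrete):

## toy illustration with hypothetical counts
p_phrase <- 3 / 40               # "i love" occurs 3 times among 40 extracted phrases
p_words  <- (3 / 19) * (3 / 19)  # p("i") * p("love"), with 19 word tokens in total
log(p_phrase / p_words)          # pmi("i love") is roughly 1.1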
So far I have used the code below, but the PMI values seem to be wrong and I cannot figure out where the problem is:
library(quanteda)

# creating dummy data
id <- c(1:5)
text <- c("positiveemoticon my name is positiveemoticon positiveemoticon i love you",
          "hello dont",
          "i love you",
          "i love you",
          "happy birthday")
ids_text_clean_test <- data.frame(id, text)
ids_text_clean_test$id <- as.character(ids_text_clean_test$id)
ids_text_clean_test$text <- as.character(ids_text_clean_test$text)

test_corpus <- corpus(ids_text_clean_test[["text"]], docnames = ids_text_clean_test[["id"]])
tokens_all_test <- tokens(test_corpus, remove_punct = TRUE)

## Create a document-feature matrix (dfm)
doc_phrases_matrix_test <- dfm(tokens_all_test, ngrams = 2:3) # extracting two- and three-word phrases
doc_phrases_matrix_test

# calculating the pointwise mutual information for each phrase to identify
# phrases that occur at rates much higher than chance
tcmrs <- Matrix::rowSums(doc_phrases_matrix_test) # number of words per user
tcmcs <- Matrix::colSums(doc_phrases_matrix_test) # counts of each phrase
N <- sum(tcmrs)    # total number of words used
colp <- tcmcs / N  # proportion of each phrase among all phrases
rowp <- tcmrs / N  # proportion of each user's words among all words used

pp <- doc_phrases_matrix_test@p + 1
ip <- doc_phrases_matrix_test@i + 1
tmpx <- rep(0, length(doc_phrases_matrix_test@x)) # new values go here, just a numeric vector

# iterate through the sparse matrix:
for (i in 1:(length(doc_phrases_matrix_test@p) - 1)) {
  ind <- pp[i]:(pp[i + 1] - 1)
  not0 <- ip[ind]
  icol <- doc_phrases_matrix_test@x[ind]
  tmp <- log((icol / N) / (rowp[not0] * colp[i])) # PMI
  tmpx[ind] <- tmp
}
doc_phrases_matrix_test@x <- tmpx
doc_phrases_matrix_test
I don't think the PMI should differ across users for the same phrase, but I thought applying the PMI directly to the dfm would be easier, since subsetting the features by their PMI is then more straightforward.
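For comparison, a corpus-level computation could look roughly like the following sketch. It rebuilds the count dfm (the loop above overwrote the counts with PMI values) and assumes quanteda's default "_" concatenator in the ngram feature names:

library(quanteda)
## sketch: one PMI value per phrase, from corpus-level counts
phrase_dfm <- dfm(tokens_all_test, ngrams = 2:3)
uni_dfm    <- dfm(tokens_all_test)
word_p   <- colSums(uni_dfm) / sum(uni_dfm)                  # p(word)
phrase_p <- colSums(phrase_dfm) / sum(colSums(phrase_dfm))   # p(phrase)

## product of the word probabilities in each phrase,
## splitting feature names on the default "_" concatenator
parts  <- strsplit(colnames(phrase_dfm), "_", fixed = TRUE)
prod_p <- sapply(parts, function(w) prod(word_p[w]))
pmi <- log(phrase_p / prod_p)

## keep phrases whose PMI exceeds 3 * number of words in the phrase
## (on this tiny dummy corpus the result may have no features)
phrase_dfm[, pmi > 3 * lengths(parts)]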
Another approach I tried was to apply the PMI directly to the features:
test_pmi <- textstat_keyness(doc_phrases_matrix_test, measure = "pmi", sort = TRUE)
test_pmi
However, first I get a warning that NaNs were produced, and second I don't understand the resulting PMI values (why, for example, are some of them negative)?
Does anyone have a better idea of how to extract features based on the PMI as defined above?
Any hints are greatly appreciated 🙂
* Following Park et al. (2015)
Answer:
You can get what you are looking for with the R code below, which uses the udpipe R package. The example is based on a tokenised data.frame that ships with the udpipe package.
library(udpipe)
data(brussels_reviews_anno, package = "udpipe")
x <- subset(brussels_reviews_anno, language %in% "fr")

## find keywords with PMI > 3
keyw <- keywords_collocation(x, term = "lemma",
                             group = c("doc_id", "sentence_id"),
                             ngram_max = 3, n_min = 10)
keyw <- subset(keyw, pmi > 3)

## recode terms to keywords
x$term <- txt_recode_ngram(x$lemma, compound = keyw$keyword, ngram = keyw$ngram)

## create DTM
dtm <- document_term_frequencies(x = x$term, document = x$doc_id)
dtm <- document_term_matrix(dtm)
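If you want the exact threshold from the question (PMI greater than 3 times the number of words in the phrase), the ngram column that keywords_collocation returns should allow something like this sketch:

## sketch: apply the question's threshold instead of a flat PMI > 3
keyw <- subset(keyw, pmi > 3 * ngram)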
If you want a dataset with a structure similar to x, just use udpipe(text, "english") or whichever language you choose. If you prefer to do the tokenisation with quanteda, you can still convert the result to this richer data frame; examples of this are given here and here. Have a look at the help of the udpipe R package, which contains a number of case studies (?udpipe).
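For instance, annotating the dummy data from the question could look like the sketch below (the English model is downloaded on the first call):

library(udpipe)
## annotate the question's dummy texts; the result has the same structure
## as brussels_reviews_anno (doc_id, sentence_id, token, lemma, dep_rel, ...)
x <- udpipe(data.frame(doc_id = ids_text_clean_test$id,
                       text   = ids_text_clean_test$text),
            "english")
head(x[, c("doc_id", "sentence_id", "token", "lemma", "upos", "dep_rel")])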
Note that PMI is useful, but the output of the dependency parser in the udpipe R package will be even more useful. If you look at the dep_rel field, you will see categories there that identify multi-word expressions (e.g., the dep_rel values fixed/flat/compound are multi-word expressions as defined at http://universaldependencies.org/u/dep/index.html); you can use these as well to put into your document-term matrix.
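As a quick sketch of that idea (the example sentence and column selection are illustrative):

library(udpipe)
## tokens attached via fixed/flat/compound are parts of multi-word expressions
x <- udpipe("Barack Obama visited New York City.", "english")
subset(x, dep_rel %in% c("fixed", "flat", "compound"),
       select = c("token", "token_id", "head_token_id", "dep_rel"))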