How do I get the most informative features per class from a scikit-learn classifier?

NLTK provides a method, show_most_informative_features(), that reports the features most useful for separating the two classes, with output such as:

   contains(outstanding) = True              pos : neg    =     11.1 : 1.0
        contains(seagal) = True              neg : pos    =      7.7 : 1.0
   contains(wonderfully) = True              pos : neg    =      6.8 : 1.0
         contains(damon) = True              pos : neg    =      5.9 : 1.0
        contains(wasted) = True              neg : pos    =      5.8 : 1.0
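For context, output like this typically comes from an NLTK NaiveBayesClassifier; here is a minimal sketch with made-up toy data (the feature dictionaries and labels below are purely illustrative):

import nltk

# Hypothetical toy training data: (feature_dict, label) pairs.
train_feats = [
    ({'contains(outstanding)': True}, 'pos'),
    ({'contains(outstanding)': True}, 'pos'),
    ({'contains(seagal)': True}, 'neg'),
    ({'contains(wasted)': True}, 'neg'),
]

classifier = nltk.NaiveBayesClassifier.train(train_feats)
# Prints lines like "contains(outstanding) = True    pos : neg = ..."
classifier.show_most_informative_features(5)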

As answered in How to get most informative features for scikit-learn classifiers?, this can also be done in scikit-learn. However, for a binary classifier, the answer to that question only outputs the best features themselves.

So my question is: how do I identify which class each feature is associated with, as in the example above (outstanding is most informative for the positive class, while seagal is most informative for the negative class)?

EDIT: What I actually want is a list of the most informative words for each class. How can I do that? Thanks!


Answer:

In the binary-classification case, the coefficient array appears to have been flattened.

Let's try relabeling our data with just two labels:

import codecs, re, time
from itertools import chain
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

trainfile = 'train.txt'

# Vectorizing data.
train = []
word_vectorizer = CountVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile,'r','utf8'))
tags = ['bs','pt','bs','pt']

# Training NB
mnb = MultinomialNB()
mnb.fit(trainset, tags)

print mnb.classes_
print mnb.coef_[0]
print mnb.coef_[1]

[out]:

['bs' 'pt']
[-5.55682806 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.1705337  -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.45821577 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -4.45821577 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.45821577 -4.86368088 -4.86368088 -4.45821577 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.1705337  -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.45821577 -4.86368088 -4.86368088]
Traceback (most recent call last):
  File "test.py", line 24, in <module>
    print mnb.coef_[1]
IndexError: index 1 is out of bounds for axis 0 with size 1
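The IndexError is consistent with coef_ holding only a single row here; a quick sanity check (a sketch, reusing the mnb fitted above):

# For a two-class MultinomialNB, coef_ exposes a single row of per-feature
# log-probabilities, so indexing coef_[1] falls out of bounds.
print mnb.classes_.shape   # (2,)
print mnb.coef_.shape      # (1, number_of_features)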

So let's run some diagnostics:

print mnb.feature_count_
print mnb.coef_[0]

[out]:

[[ 1.  0.  0.  1.  1.  1.  0.  0.  1.  1.  0.  0.  0.  1.  0.  1.  0.  1.   1.  1.  2.  2.  0.  0.  0.  1.  1.  0.  1.  0.  0.  0.  0.  0.  2.  1.   1.  1.  1.  0.  0.  0.  0.  0.  0.  1.  1.  0.  0.  0.  0.  1.  0.  0.   0.  1.  1.  1.  1.  1.  1.  1.  1.  0.  0.  0.  0.  1.  1.  0.  1.  0.   1.  2.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.  1.  1.  0.  1.  1.   0.  1.  0.  0.  0.  1.  1.  1.  0.  0.  1.  0.  1.  0.  1.  0.  1.  1.   1.  0.  0.  1.  0.  0.  0.  4.  0.  0.  1.  0.  0.  0.  0.  0.  1.  0.   0.  0.  1.  0.  0.  0.  0.  0.  0.  1.  0.  0.  1.  1.  0.  0.  0.  0.   0.  0.  1.  0.  0.  1.  0.  0.  0.  0.] [ 0.  1.  1.  0.  0.  0.  1.  1.  0.  0.  1.  1.  3.  0.  1.  0.  1.  0.   0.  0.  1.  2.  1.  1.  1.  1.  0.  1.  0.  1.  1.  1.  1.  1.  0.  0.   0.  0.  0.  2.  1.  1.  1.  1.  1.  0.  0.  1.  1.  1.  1.  0.  1.  1.   1.  0.  0.  0.  0.  0.  0.  0.  0.  1.  1.  1.  1.  0.  0.  1.  0.  1.   0.  0.  1.  1.  2.  1.  1.  2.  1.  1.  1.  0.  1.  0.  0.  1.  0.  0.   1.  0.  1.  1.  1.  0.  0.  0.  1.  1.  0.  1.  0.  1.  0.  1.  0.  0.   0.  1.  1.  0.  1.  1.  1.  3.  1.  1.  0.  1.  1.  1.  1.  1.  0.  1.   1.  1.  0.  1.  1.  1.  1.  1.  1.  0.  1.  1.  0.  0.  1.  1.  1.  1.   1.  1.  0.  1.  1.  0.  1.  2.  1.  1.]][-5.55682806 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.1705337  -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.45821577 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -4.45821577 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.45821577 -4.86368088 -4.86368088 -4.45821577 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -4.86368088 -5.55682806 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.1705337  -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -5.55682806 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.86368088 -5.55682806 -4.86368088 -4.45821577 -4.86368088 -4.86368088]

It looks like the features are counted per class and then flattened to save memory when vectorizing, so let's try:

index = 0
coef_features_c1_c2 = []

for feat, c1, c2 in zip(word_vectorizer.get_feature_names(), mnb.feature_count_[0], mnb.feature_count_[1]):
    coef_features_c1_c2.append(tuple([mnb.coef_[0][index], feat, c1, c2]))
    index += 1

for i in sorted(coef_features_c1_c2):
    print i

[out]:

(-5.5568280616995374, u'acuerdo', 1.0, 0.0)(-5.5568280616995374, u'al', 1.0, 0.0)(-5.5568280616995374, u'alex', 1.0, 0.0)(-5.5568280616995374, u'algo', 1.0, 0.0)(-5.5568280616995374, u'andaba', 1.0, 0.0)(-5.5568280616995374, u'andrea', 1.0, 0.0)(-5.5568280616995374, u'bien', 1.0, 0.0)(-5.5568280616995374, u'buscando', 1.0, 0.0)(-5.5568280616995374, u'como', 1.0, 0.0)(-5.5568280616995374, u'con', 1.0, 0.0)(-5.5568280616995374, u'conseguido', 1.0, 0.0)(-5.5568280616995374, u'distancia', 1.0, 0.0)(-5.5568280616995374, u'doprinese', 1.0, 0.0)(-5.5568280616995374, u'es', 2.0, 0.0)(-5.5568280616995374, u'est\xe1', 1.0, 0.0)(-5.5568280616995374, u'eulex', 1.0, 0.0)(-5.5568280616995374, u'excusa', 1.0, 0.0)(-5.5568280616995374, u'fama', 1.0, 0.0)(-5.5568280616995374, u'guasch', 1.0, 0.0)(-5.5568280616995374, u'ha', 1.0, 0.0)(-5.5568280616995374, u'incident', 1.0, 0.0)(-5.5568280616995374, u'ispit', 1.0, 0.0)(-5.5568280616995374, u'istragu', 1.0, 0.0)(-5.5568280616995374, u'izbijanju', 1.0, 0.0)(-5.5568280616995374, u'ja\u010danju', 1.0, 0.0)(-5.5568280616995374, u'je', 1.0, 0.0)(-5.5568280616995374, u'jedan', 1.0, 0.0)(-5.5568280616995374, u'jo\u0161', 1.0, 0.0)(-5.5568280616995374, u'kapaciteta', 1.0, 0.0)(-5.5568280616995374, u'kosova', 1.0, 0.0)(-5.5568280616995374, u'la', 1.0, 0.0)(-5.5568280616995374, u'lequio', 1.0, 0.0)(-5.5568280616995374, u'llevar', 1.0, 0.0)(-5.5568280616995374, u'lo', 2.0, 0.0)(-5.5568280616995374, u'misije', 1.0, 0.0)(-5.5568280616995374, u'muy', 1.0, 0.0)(-5.5568280616995374, u'm\xe1s', 1.0, 0.0)(-5.5568280616995374, u'na', 1.0, 0.0)(-5.5568280616995374, u'nada', 1.0, 0.0)(-5.5568280616995374, u'nasilja', 1.0, 0.0)(-5.5568280616995374, u'no', 1.0, 0.0)(-5.5568280616995374, u'obaviti', 1.0, 0.0)(-5.5568280616995374, u'obe\u0107ao', 1.0, 0.0)(-5.5568280616995374, u'parecer', 1.0, 0.0)(-5.5568280616995374, u'pone', 1.0, 0.0)(-5.5568280616995374, u'por', 1.0, 0.0)(-5.5568280616995374, u'po\u0161to', 1.0, 0.0)(-5.5568280616995374, u'prava', 1.0, 0.0)(-5.5568280616995374, u'predstavlja', 1.0, 0.0)(-5.5568280616995374, u'pro\u0161losedmi\u010dnom', 1.0, 0.0)(-5.5568280616995374, u'relaci\xf3n', 1.0, 0.0)(-5.5568280616995374, u'sjeveru', 1.0, 0.0)(-5.5568280616995374, u'taj', 1.0, 0.0)(-5.5568280616995374, u'una', 1.0, 0.0)(-5.5568280616995374, u'visto', 1.0, 0.0)(-5.5568280616995374, u'vladavine', 1.0, 0.0)(-5.5568280616995374, u'ya', 1.0, 0.0)(-5.5568280616995374, u'\u0107e', 1.0, 0.0)(-4.863680881139592, u'aj', 0.0, 1.0)(-4.863680881139592, u'ajudou', 0.0, 1.0)(-4.863680881139592, u'alpsk\xfdmi', 0.0, 1.0)(-4.863680881139592, u'alpy', 0.0, 1.0)(-4.863680881139592, u'ao', 0.0, 1.0)(-4.863680881139592, u'apresenta', 0.0, 1.0)(-4.863680881139592, u'bl\xedzko', 0.0, 1.0)(-4.863680881139592, u'come\xe7o', 0.0, 1.0)(-4.863680881139592, u'da', 2.0, 1.0)(-4.863680881139592, u'decepcionantes', 0.0, 1.0)(-4.863680881139592, u'deti', 0.0, 1.0)(-4.863680881139592, u'dificuldades', 0.0, 1.0)(-4.863680881139592, u'dif\xedcil', 1.0, 1.0)(-4.863680881139592, u'do', 0.0, 1.0)(-4.863680881139592, u'druh', 0.0, 1.0)(-4.863680881139592, u'd\xe1', 0.0, 1.0)(-4.863680881139592, u'ela', 0.0, 1.0)(-4.863680881139592, u'encontrar', 0.0, 1.0)(-4.863680881139592, u'enfrentar', 0.0, 1.0)(-4.863680881139592, u'for\xe7as', 0.0, 1.0)(-4.863680881139592, u'furiosa', 0.0, 1.0)(-4.863680881139592, u'golf', 0.0, 1.0)(-4.863680881139592, u'golfistami', 0.0, 1.0)(-4.863680881139592, u'golfov\xfdch', 0.0, 1.0)(-4.863680881139592, u'hotelmi', 0.0, 1.0)(-4.863680881139592, u'hra\u0165', 0.0, 
1.0)(-4.863680881139592, u'ide', 0.0, 1.0)(-4.863680881139592, u'ihr\xedsk', 0.0, 1.0)(-4.863680881139592, u'intranspon\xedveis', 0.0, 1.0)(-4.863680881139592, u'in\xedcio', 0.0, 1.0)(-4.863680881139592, u'in\xfd', 0.0, 1.0)(-4.863680881139592, u'kde', 0.0, 1.0)(-4.863680881139592, u'kombin\xe1cie', 0.0, 1.0)(-4.863680881139592, u'komplex', 0.0, 1.0)(-4.863680881139592, u'kon\u010diarmi', 0.0, 1.0)(-4.863680881139592, u'lado', 0.0, 1.0)(-4.863680881139592, u'lete', 0.0, 1.0)(-4.863680881139592, u'longo', 0.0, 1.0)(-4.863680881139592, u'ly\u017eova\u0165', 0.0, 1.0)(-4.863680881139592, u'man\u017eelky', 0.0, 1.0)(-4.863680881139592, u'mas', 0.0, 1.0)(-4.863680881139592, u'mesmo', 0.0, 1.0)(-4.863680881139592, u'meu', 0.0, 1.0)(-4.863680881139592, u'minha', 0.0, 1.0)(-4.863680881139592, u'mo\u017enos\u0165ami', 0.0, 1.0)(-4.863680881139592, u'm\xe3e', 0.0, 1.0)(-4.863680881139592, u'nad\u0161en\xfdmi', 0.0, 1.0)(-4.863680881139592, u'negativas', 0.0, 1.0)(-4.863680881139592, u'nie', 0.0, 1.0)(-4.863680881139592, u'nieko\u013ek\xfdch', 0.0, 1.0)(-4.863680881139592, u'para', 0.0, 1.0)(-4.863680881139592, u'parecem', 0.0, 1.0)(-4.863680881139592, u'pod', 0.0, 1.0)(-4.863680881139592, u'pon\xfakaj\xfa', 0.0, 1.0)(-4.863680881139592, u'potrebuj\xfa', 0.0, 1.0)(-4.863680881139592, u'pri', 0.0, 1.0)(-4.863680881139592, u'prova\xe7\xf5es', 0.0, 1.0)(-4.863680881139592, u'punham', 0.0, 1.0)(-4.863680881139592, u'qual', 0.0, 1.0)(-4.863680881139592, u'qualquer', 0.0, 1.0)(-4.863680881139592, u'quem', 0.0, 1.0)(-4.863680881139592, u'rak\xfaske', 0.0, 1.0)(-4.863680881139592, u'rezortov', 0.0, 1.0)(-4.863680881139592, u'sa', 0.0, 1.0)(-4.863680881139592, u'sebe', 0.0, 1.0)(-4.863680881139592, u'sempre', 0.0, 1.0)(-4.863680881139592, u'situa\xe7\xf5es', 0.0, 1.0)(-4.863680881139592, u'spojen\xfdch', 0.0, 1.0)(-4.863680881139592, u'suplantar', 0.0, 1.0)(-4.863680881139592, u's\xfa', 0.0, 1.0)(-4.863680881139592, u'tak', 0.0, 1.0)(-4.863680881139592, u'talianske', 0.0, 1.0)(-4.863680881139592, u'teve', 0.0, 1.0)(-4.863680881139592, u'tive', 0.0, 1.0)(-4.863680881139592, u'todas', 0.0, 1.0)(-4.863680881139592, u'tr\xe1venia', 0.0, 1.0)(-4.863680881139592, u've\u013ek\xfd', 0.0, 1.0)(-4.863680881139592, u'vida', 0.0, 1.0)(-4.863680881139592, u'vo', 0.0, 1.0)(-4.863680881139592, u'vo\u013en\xe9ho', 0.0, 1.0)(-4.863680881139592, u'vysok\xfdmi', 0.0, 1.0)(-4.863680881139592, u'vy\u017eitia', 0.0, 1.0)(-4.863680881139592, u'v\xe4\u010d\u0161ine', 0.0, 1.0)(-4.863680881139592, u'v\u017edy', 0.0, 1.0)(-4.863680881139592, u'zauj\xedmav\xe9', 0.0, 1.0)(-4.863680881139592, u'zime', 0.0, 1.0)(-4.863680881139592, u'\u010dasu', 0.0, 1.0)(-4.863680881139592, u'\u010fal\u0161\xedmi', 0.0, 1.0)(-4.863680881139592, u'\u0161vaj\u010diarske', 0.0, 1.0)(-4.4582157730314274, u'de', 2.0, 2.0)(-4.4582157730314274, u'foi', 0.0, 2.0)(-4.4582157730314274, u'mais', 0.0, 2.0)(-4.4582157730314274, u'me', 0.0, 2.0)(-4.4582157730314274, u'\u010di', 0.0, 2.0)(-4.1705337005796466, u'as', 0.0, 3.0)(-4.1705337005796466, u'que', 4.0, 3.0)

Now we see a pattern: the higher coefficients lean toward one class, while the other end of the range leans toward the other. So you can simply do this:

import codecs, re, time
from itertools import chain
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

trainfile = 'train.txt'

# Vectorizing data.
train = []
word_vectorizer = CountVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile,'r','utf8'))
tags = ['bs','pt','bs','pt']

# Training NB
mnb = MultinomialNB()
mnb.fit(trainset, tags)

def most_informative_feature_for_binary_classification(vectorizer, classifier, n=10):
    class_labels = classifier.classes_
    feature_names = vectorizer.get_feature_names()
    topn_class1 = sorted(zip(classifier.coef_[0], feature_names))[:n]
    topn_class2 = sorted(zip(classifier.coef_[0], feature_names))[-n:]

    for coef, feat in topn_class1:
        print class_labels[0], coef, feat

    print

    for coef, feat in reversed(topn_class2):
        print class_labels[1], coef, feat

most_informative_feature_for_binary_classification(word_vectorizer, mnb)

[out]:

bs -5.5568280617 acuerdo
bs -5.5568280617 al
bs -5.5568280617 alex
bs -5.5568280617 algo
bs -5.5568280617 andaba
bs -5.5568280617 andrea
bs -5.5568280617 bien
bs -5.5568280617 buscando
bs -5.5568280617 como
bs -5.5568280617 con

pt -4.17053370058 que
pt -4.17053370058 as
pt -4.45821577303 či
pt -4.45821577303 me
pt -4.45821577303 mais
pt -4.45821577303 foi
pt -4.45821577303 de
pt -4.86368088114 švajčiarske
pt -4.86368088114 ďalšími
pt -4.86368088114 času

Actually, if you read @larsmans' comments carefully, he already hints at the binary-class coefficients in How to get most informative features for scikit-learn classifiers?
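As a side note, the same idea can be written against feature_log_prob_, which always has one row per class (and sidesteps coef_, which newer scikit-learn releases may no longer expose on naive Bayes models). A rough sketch, assuming the word_vectorizer and mnb fitted above:

import numpy as np

def most_informative_features_per_class(vectorizer, classifier, n=10):
    # Row i of feature_log_prob_ is log P(feature | class i); the difference
    # between the two rows is a per-feature log-odds, roughly the quantity
    # behind NLTK's "pos : neg = x : 1.0" ratios.
    feature_names = np.array(vectorizer.get_feature_names())
    log_odds = classifier.feature_log_prob_[1] - classifier.feature_log_prob_[0]
    order = np.argsort(log_odds)
    print classifier.classes_[0], ', '.join(feature_names[order[:n]])
    print classifier.classes_[1], ', '.join(feature_names[order[-n:][::-1]])

most_informative_features_per_class(word_vectorizer, mnb)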
