I'm trying to do word-level classification on multi-class data with scikit-learn. I already have train and test splits. The tokens appear in runs of the same class, e.g., the first 10 tokens belong to class 0, the next 20 to class 4, and so on. The data is formatted as follows, separated by \t:
```
-----------------
token        tag
-----------------
way          6
to           6
reduce       6
the          6
amount       6
of           6
traffic      6
....
public       2
transport    5
is           5
a            5
key          5
factor       5
to           5
minimize     5
....
```
The class distribution is as follows:
```
             Training Data    Test Data
# Total:         119490          29699
# Class 0:        52631          13490
# Class 1:        35116           8625
# Class 2:        17968           4161
# Class 3:         8658           2088
# Class 4:         3002            800
# Class 5:         1201            302
# Class 6:          592            153
```
The code I have tried:
```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import KFold
from imblearn.over_sampling import SMOTE

if __name__ == '__main__':
    # reading the tab-separated token/tag files
    train_df = pd.read_csv(TRAINING_DATA_PATH, names=['token', 'tag'],
                           sep='\t').dropna().reset_index(drop=True)
    test_df = pd.read_csv(TEST_DATA_PATH, names=['token', 'tag'], sep='\t')

    # getting training and testing data
    train_X = train_df['token']
    test_X = test_df['token'].astype('U')
    train_y = train_df['tag']
    test_y = test_df['tag'].astype('U')

    # Naive-Bayes
    nb_pipeline = Pipeline([('vect', CountVectorizer()),    # counts occurrences of each word
                            ('tfidf', TfidfTransformer()),  # normalizes the counts based on document length
                            ])

    f1_list = []
    cv = KFold(n_splits=5)
    for train_index, test_index in cv.split(train_X):
        train_text = train_X[train_index]
        train_label = train_y[train_index]
        val_text = train_X[test_index]
        val_y = train_y[test_index]

        # vectorize the training fold, then oversample the minority classes
        vectorized_text = nb_pipeline.fit_transform(train_text)
        sm = SMOTE(random_state=42)
        train_text_res, train_y_res = sm.fit_resample(vectorized_text, train_label)

        print("\nTraining Data Class Distribution:")
        print(train_label.value_counts())
        print("\nRe-sampled Training Data Class Distribution:")
        print(train_y_res.value_counts())

        # clf = SVC(kernel='rbf', max_iter=1000, class_weight='balanced', verbose=1)
        clf = MultinomialNB()
        # clf = SGDClassifier(loss='log', penalty='l2', alpha=1e-3, max_iter=100,
        #                     tol=None, n_jobs=-1, verbose=1)
        clf.fit(train_text_res, train_y_res)

        predictions = clf.predict(nb_pipeline.transform(val_text))
        f1 = f1_score(val_y, predictions, average='macro')
        f1_list.append(f1)

    print(f1_list)
    pred = clf.predict(nb_pipeline.transform(test_X))
    # f1_score expects (y_true, y_pred)
    print('F1-macro: %s' % f1_score(test_y, pred, average='macro'))
```
I would like to build n-grams and feed them to the model as features so it can pick up more context, but I am not sure how to still test at the word level afterwards. How can I build n-grams, feed them to the model, and then predict on the test data at the word level again?
Answer:
Don't use:
```python
nb_pipeline = Pipeline([('vect', CountVectorizer()),
                        ('tfidf', TfidfTransformer())])
```
Instead, count and tf-idf-weight unigrams and bigrams in a single step:
```python
from sklearn.feature_extraction.text import TfidfVectorizer

nb_pipeline = Pipeline([('tfidf', TfidfVectorizer(ngram_range=(1, 2)))])
```
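TfidfVectorizer exposes the same fit_transform/transform interface as the two-step pipeline above, so it drops straight into your existing KFold loop with no other changes. A minimal sketch, with made-up tokens and tags standing in for your train_X/train_y:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Toy word-level data standing in for the question's train_X / train_y.
train_text = ["way", "to", "reduce", "public", "transport", "minimize"]
train_label = [6, 6, 6, 2, 5, 5]

nb_pipeline = Pipeline([('tfidf', TfidfVectorizer(ngram_range=(1, 2)))])

clf = MultinomialNB()
clf.fit(nb_pipeline.fit_transform(train_text), train_label)

# Prediction stays word-level: transform each test token with the
# same fitted pipeline, exactly as in the original code.
print(clf.predict(nb_pipeline.transform(["traffic", "transport"])))
```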
See the TfidfVectorizer documentation for more details.
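If you want to sanity-check which n-gram features end up in the vocabulary, you can fit a vectorizer on a couple of made-up sentences and list them (on scikit-learn versions before 1.0, use get_feature_names() instead):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

vec = TfidfVectorizer(ngram_range=(1, 2))
vec.fit(["public transport is a key factor",
         "way to reduce the amount of traffic"])

# Unigrams plus bigrams, e.g. 'amount', 'amount of', 'key factor', ...
print(vec.get_feature_names_out())
```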