I have a nearly balanced dataset with 9 unique classes, each with roughly 2,200 rows (the classes differ by at most ±100 rows). To build a model I followed the approaches described in the links below, but in every case my model's accuracy is around 58%, and precision and recall are around 54%. What am I doing wrong?
https://towardsdatascience.com/multi-class-text-classification-with-scikit-learn-12f1e60e0a9f
https://towardsdatascience.com/machine-learning-multiclass-classification-with-imbalanced-data-set-29f6a177c1a
https://medium.com/@robert.salgado/multiclass-text-classification-from-start-to-finish-f616a8642538
My dataset has only two columns: one holding the features and one holding the labels.
import re
import pandas as pd
from bs4 import BeautifulSoup
from nltk.corpus import stopwords
from sklearn.model_selection import train_test_split

df = pd.read_excel('Prediction.xlsx', sheet_name='Sheet1')
df.head()

# REPLACE_BY_SPACE_RE was referenced but not defined in my snippet;
# this is the definition used in the linked article.
REPLACE_BY_SPACE_RE = re.compile('[/(){}\[\]\|@,;]')
BAD_SYMBOLS_RE = re.compile('[^0-9a-z #+_]')
STOPWORDS = set(stopwords.words('english'))

def clean_text(text):
    """
    text: a string
    return: modified initial string
    """
    text = BeautifulSoup(text, "html.parser").text  # HTML decoding
    text = text.lower()  # lowercase text
    text = REPLACE_BY_SPACE_RE.sub(' ', text)  # replace REPLACE_BY_SPACE_RE symbols by space
    text = BAD_SYMBOLS_RE.sub('', text)  # delete symbols which are in BAD_SYMBOLS_RE
    text = ' '.join(word for word in text.split() if word not in STOPWORDS)  # delete stopwords
    return text

df['notes_issuedesc'] = df['notes_issuedesc'].apply(clean_text)
df['notes_issuedesc'].apply(lambda x: len(x.split(' '))).sum()

X = df.notes_issuedesc
y = df.final
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)

from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

nb = Pipeline([('vect', CountVectorizer()),
               ('tfidf', TfidfTransformer()),
               ('clf', MultinomialNB()),
               ])
nb.fit(X_train, y_train)

from sklearn.metrics import accuracy_score, classification_report
y_pred = nb.predict(X_test)
print('accuracy %s' % accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=nb.classes_))
Answer:
I was able to get my code working by first correcting my data.
The problem was a large amount of missing data, so I filled the missing values with the mean. I also used scatter plots to identify outliers and removed those rows as well.
After these data-wrangling operations, the resulting model's accuracy was higher.
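The imputation and outlier-removal steps described above can be sketched as follows. The frame and the column names ('score', 'final') are hypothetical stand-ins for the real Prediction.xlsx data, and a z-score cutoff replaces the manual scatter-plot inspection; the cutoff of 2 standard deviations is an illustrative choice for this tiny sample:

```python
import pandas as pd
import numpy as np

# Hypothetical frame standing in for Prediction.xlsx: 'score' is a numeric
# feature with missing values and one obvious outlier (100.0).
df = pd.DataFrame({
    'score': [1.0, 2.0, np.nan, 3.0, 100.0, 2.5, np.nan, 1.5],
    'final': list('ABABABAB'),
})

# 1. Fill missing numeric values with the column mean.
df['score'] = df['score'].fillna(df['score'].mean())

# 2. The answer spotted outliers on a scatter plot and dropped them by hand;
#    a programmatic equivalent is to drop rows more than 2 standard
#    deviations from the mean.
z = (df['score'] - df['score'].mean()) / df['score'].std()
df = df[z.abs() < 2]
```

Whether mean imputation is appropriate depends on why the values are missing; for heavily skewed columns the median is often a safer fill value.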