This is code I found and slightly modified, from here…
I used the same logic as the original author, but I still don't get good accuracy. The label ranking average precision scores are close (mine: 52.79, the example's: 48.04).
```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import label_ranking_average_precision_score
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical

cv = CountVectorizer(binary=True, max_df=0.95)
feature_set = cv.fit_transform(df["short_description"])

X_train, X_test, y_train, y_test = train_test_split(
    feature_set, df["category"].values, random_state=2000)

scikit_log_reg = LogisticRegression(
    verbose=1, solver="liblinear", random_state=0, C=5, penalty="l2", max_iter=1000)
model = scikit_log_reg.fit(X_train, y_train)

target = to_categorical(y_test)
y_pred = model.predict_proba(X_test)

label_ranking_average_precision_score(target, y_pred)
>> 0.5279108613021547

model.score(X_test, y_test)
>> 0.38620071684587814
```
But the notebook example's accuracy (59.80) does not match my code's (38.62).
Is the following function, used in the sample notebook, returning accuracy correctly?
```python
def compute_accuracy(eval_items: list):
    correct = 0
    total = 0
    for item in eval_items:
        true_pred = item[0]
        machine_pred = set(item[1])
        for cat in true_pred:
            if cat in machine_pred:
                correct += 1
                break
    accuracy = correct / float(len(eval_items))
    return accuracy
```
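To see what this helper actually measures, here is a quick sanity check on a couple of hand-made (hypothetical) items; the function is repeated so the snippet runs on its own. An item counts as correct whenever its true category appears anywhere in the predicted list:

```python
def compute_accuracy(eval_items: list):
    # An item is correct if any of its true categories appears in the predictions.
    correct = 0
    for item in eval_items:
        true_pred = item[0]
        machine_pred = set(item[1])
        for cat in true_pred:
            if cat in machine_pred:
                correct += 1
                break
    return correct / float(len(eval_items))

# Hypothetical evaluation pairs: (true categories, top-3 predictions).
eval_items = [
    (["sports"],   ["sports", "politics", "tech"]),    # hit at rank 1
    (["politics"], ["tech", "business", "politics"]),  # hit at rank 3
    (["travel"],   ["tech", "sports", "business"]),    # miss
]
print(compute_accuracy(eval_items))  # prints 0.6666666666666666
```

So a hit at rank 3 counts exactly the same as a hit at rank 1: this is top-k accuracy, not ordinary accuracy.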
Answer:
The notebook code checks whether the actual category is among the top three predictions returned by the model:
```python
def get_top_k_predictions(model, X_test, k):
    probs = model.predict_proba(X_test)
    best_n = np.argsort(probs, axis=1)[:, -k:]
    preds = [[model.classes_[predicted_cat] for predicted_cat in prediction]
             for prediction in best_n]
    preds = [item[::-1] for item in preds]
    return preds
```
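As a sanity check on the argsort trick, a minimal sketch with a made-up probability matrix (the class names and probabilities are hypothetical, not from the notebook):

```python
import numpy as np

# Hypothetical probabilities for 2 samples over 4 classes.
probs = np.array([
    [0.05, 0.5, 0.3, 0.15],
    [0.4, 0.1, 0.2, 0.3],
])
classes = ["business", "politics", "sports", "tech"]

k = 3
# argsort sorts ascending, so the last k columns are the top-k indices.
best_n = np.argsort(probs, axis=1)[:, -k:]
# Reverse each row so the most probable class comes first.
preds = [[classes[i] for i in row][::-1] for row in best_n]
print(preds)
# → [['politics', 'sports', 'tech'], ['business', 'tech', 'sports']]
```

Reversing with `[::-1]` is what puts the best guess at position 0 of each prediction list.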
If you replace the evaluation part of your code with the code below, you will see that your model also returns a top-3 accuracy of 0.5980:
```python
...
model = scikit_log_reg.fit(X_train, y_train)

top_preds = get_top_k_predictions(model, X_test, 3)
pred_pairs = list(zip([[v] for v in y_test], top_preds))
print(compute_accuracy(pred_pairs))

# A simpler, more Pythonic version of compute_accuracy:
print(np.mean([actual in pred for actual, pred in zip(y_test, top_preds)]))
```
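In other words, 38.62 and 59.80 measure different things: `model.score` is top-1 accuracy, while the notebook reports top-3 accuracy, which can never be lower. A self-contained sketch on synthetic data (via `make_classification`, with hypothetical parameters, not the notebook's dataset) makes the gap visible:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic 6-class problem standing in for the news-category data.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2000)

model = LogisticRegression(solver="liblinear", C=5, max_iter=1000).fit(X_train, y_train)

k = 3
best_n = np.argsort(model.predict_proba(X_test), axis=1)[:, -k:]
top_k = [[model.classes_[i] for i in row] for row in best_n]

top1 = model.score(X_test, y_test)                                   # plain accuracy
top3 = np.mean([actual in pred for actual, pred in zip(y_test, top_k)])
print(f"top-1: {top1:.4f}  top-3: {top3:.4f}")
```

Whatever the data, `top3 >= top1` always holds, because every top-1 hit is also contained in the top-3 list.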