I need to compute precision and recall from a CSV file that contains a multiclass classification.
More specifically, my CSV file is structured as follows:
real_class1, classified_class1
real_class2, classified_class3
real_class3, classified_class4
real_class4, classified_class2
In total there are six classes being classified.
In the binary case I understand how to compute the True Positives, False Positives, True Negatives and False Negatives, but in the multiclass case I don't know how to proceed.
Could someone show me some examples, ideally written in Python?
Answer:
As suggested in the comments, you have to build the confusion matrix and then proceed as follows
(I assume you are using Spark for better machine-learning-processing performance):
from __future__ import division

import numpy as np
from pyspark import SparkContext, SparkConf
from sklearn.metrics import confusion_matrix


def getFirstColumn(line):
    """Return the real class, stripping whitespace around the value."""
    return line.split(',')[0].strip()


def getSecondColumn(line):
    """Return the classified (predicted) class, stripping whitespace."""
    return line.split(',')[1].strip()


# Initialization
conf = SparkConf()
conf.setAppName("ConfusionMatrixPrecisionRecall")
sc = SparkContext(conf=conf)  # SparkContext

data = sc.textFile('YOUR_FILE_PATH')         # Load dataset
y_true = data.map(getFirstColumn).collect()  # Real classes
y_pred = data.map(getSecondColumn).collect() # Predicted classes

# Use a name other than `confusion_matrix` so the imported function
# is not shadowed by its own result
cm = confusion_matrix(y_true, y_pred)
print("Confusion matrix:\n%s" % cm)

# The True Positives are simply the diagonal elements
TP = np.diag(cm)
print("\nTP:\n%s" % TP)

# The False Positives are the sum of the respective column,
# minus the diagonal (i.e. TP) element
FP = np.sum(cm, axis=0) - TP
print("\nFP:\n%s" % FP)

# The False Negatives are the sum of the respective row,
# minus the diagonal (i.e. TP) element
FN = np.sum(cm, axis=1) - TP
print("\nFN:\n%s" % FN)

# The True Negatives for class i are everything that remains
# after deleting the ith row and the ith column
num_classes = cm.shape[0]  # one row/column per class, known a priori
TN = []
for i in range(num_classes):
    temp = np.delete(cm, i, 0)    # delete ith row
    temp = np.delete(temp, i, 1)  # delete ith column
    TN.append(temp.sum())
print("\nTN:\n%s" % TN)

precision = TP / (TP + FP)
recall = TP / (TP + FN)
print("\nPrecision:\n%s" % precision)
print("\nRecall:\n%s" % recall)
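As a sanity check, you can compare the manually computed counts against scikit-learn's built-in metrics: `precision_score` and `recall_score` with `average=None` return one value per class, which is exactly the per-class TP/(TP+FP) and TP/(TP+FN) above. A minimal sketch with toy labels (the labels here are illustrative, not from your data):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Toy multiclass labels, chosen only for illustration
y_true = ['a', 'a', 'b', 'b', 'c', 'c']
y_pred = ['a', 'b', 'b', 'b', 'c', 'a']

# average=None -> one score per class, ordered by `labels`
precision = precision_score(y_true, y_pred, labels=['a', 'b', 'c'], average=None)
recall = recall_score(y_true, y_pred, labels=['a', 'b', 'c'], average=None)

print(precision)  # per-class precision: TP / (TP + FP)
print(recall)     # per-class recall:    TP / (TP + FN)
```

If you want a single aggregate number instead of per-class scores, the same functions accept `average='macro'` (unweighted mean over classes) or `average='micro'` (global counts).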