I have two arrays, as follows:

```python
correct = [['*', '*'], ['*', 'PER', '*', 'GPE', 'ORG'], ['GPE', '*', '*', '*', 'ORG']]
predicted = [['PER', '*'], ['*', 'ORG', '*', 'GPE', 'ORG'], ['PER', '*', '*', '*', 'MISC']]
```
correct and predicted have the same length (over 10K), and the elements at each position have the same length in both arrays. I want to compute precision, recall and F1 score for these two arrays in Python. I have the following 6 classes: 'PER', 'ORG', 'MISC', 'LOC', '*', 'GPE'.
I want precision and recall for the 5 classes (everything except '*'), plus the F1 score. What is an efficient way to do this in Python?
Answer:
You need to flatten your lists as shown here, then use classification_report from scikit-learn. Note that target_names only renames the label rows positionally; to actually exclude '*' you must also pass the labels argument:

```python
from sklearn.metrics import classification_report

correct = [['*', '*'], ['*', 'PER', '*', 'GPE', 'ORG'], ['GPE', '*', '*', '*', 'ORG']]
predicted = [['PER', '*'], ['*', 'ORG', '*', 'GPE', 'ORG'], ['PER', '*', '*', '*', 'MISC']]

labels = ['PER', 'ORG', 'MISC', 'LOC', 'GPE']  # ignore '*'

# Flatten the nested lists into flat sequences of tags
correct_flat = [item for sublist in correct for item in sublist]
predicted_flat = [item for sublist in predicted for item in sublist]

# labels= restricts the report to these 5 classes; target_names= sets the row names
print(classification_report(correct_flat, predicted_flat,
                            labels=labels, target_names=labels))
```

Result (the exact layout varies with the scikit-learn version):

```
             precision    recall  f1-score   support

        PER       0.00      0.00      0.00         1
        ORG       0.50      0.50      0.50         2
       MISC       0.00      0.00      0.00         0
        LOC       0.00      0.00      0.00         0
        GPE       1.00      0.50      0.67         2

avg / total       0.60      0.40      0.47         5
```
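Since both lists hold well over 10K sentences, flattening with itertools.chain.from_iterable from the standard library is a slightly faster alternative to the nested comprehension (a minor optimization, not required for correctness):

```python
from itertools import chain

correct = [['*', '*'], ['*', 'PER', '*', 'GPE', 'ORG'], ['GPE', '*', '*', '*', 'ORG']]
predicted = [['PER', '*'], ['*', 'ORG', '*', 'GPE', 'ORG'], ['PER', '*', '*', '*', 'MISC']]

# chain.from_iterable walks the inner lists lazily, avoiding the
# Python-level nested-loop overhead of the list comprehension
correct_flat = list(chain.from_iterable(correct))
predicted_flat = list(chain.from_iterable(predicted))
```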
With this particular example you will also get a warning:
UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples.
This is because 'MISC' and 'LOC' never occur in the true labels (correct), but in theory this should not happen with your actual data.
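If you want to sanity-check the report, or handle the empty classes yourself rather than relying on the warning's 0.0 fallback, the per-class metrics follow directly from true-positive/false-positive/false-negative counts. A minimal sketch over the flattened lists:

```python
correct_flat = ['*', '*', '*', 'PER', '*', 'GPE', 'ORG', 'GPE', '*', '*', '*', 'ORG']
predicted_flat = ['PER', '*', '*', 'ORG', '*', 'GPE', 'ORG', 'PER', '*', '*', '*', 'MISC']

def scores(y_true, y_pred, label):
    # Count true positives, false positives and false negatives for one class
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0  # 0.0 when the class is never predicted
    recall = tp / (tp + fn) if tp + fn else 0.0     # 0.0 when the class has no true samples
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

for label in ['PER', 'ORG', 'MISC', 'LOC', 'GPE']:
    print(label, scores(correct_flat, predicted_flat, label))
```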