I am trying to apply binary log loss to a Naive Bayes machine learning model I built. I generated a set of class predictions (yNew) and a set of probabilities (probabilityYes), but I cannot successfully run them through a log loss function.
The simple sklearn.metrics function gives a single log-loss result, and I am not sure how to interpret it:
from sklearn.metrics import log_loss

ll = log_loss(yNew, probabilityYes, eps=1e-15)
print(ll)
# 0.0819...
A more involved function returns a value of 2.55 for every "NO" and 2.50 for every "YES" (there are 90 columns in total); again, I don't know how to interpret these results:
import scipy as sp  # note: older SciPy versions re-exported NumPy functions (maximum, minimum, log) at the top level

def logloss(yNew, probabilityYes):
    epsilon = 1e-15
    # clip probabilities away from 0 and 1 so the logs stay finite
    probabilityYes = sp.maximum(epsilon, probabilityYes)
    probabilityYes = sp.minimum(1 - epsilon, probabilityYes)
    # compute the log loss (vectorized)
    ll = sum(yNew * sp.log(probabilityYes) + sp.subtract(1, yNew) * sp.log(sp.subtract(1, probabilityYes)))
    ll = ll * -1.0 / len(yNew)
    return ll

print(logloss(yNew, probabilityYes))
# 2.55352047 2.55352047 2.50358354 2.55352047 2.50358354 2.55352047 ...
Answer:
Here is how you can compute the loss for each individual sample:
import numpy as np

def logloss(true_label, predicted, eps=1e-15):
    # clip the probability away from 0 and 1 so the log stays finite
    p = np.clip(predicted, eps, 1 - eps)
    if true_label == 1:
        return -np.log(p)
    else:
        return -np.log(1 - p)
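This is just the binary cross-entropy, -[y*log(p) + (1-y)*log(1-p)], evaluated one sample at a time. If you prefer to avoid the branch, a minimal vectorized sketch of the same computation could look like this (logloss_vec is my own naming, not part of the original answer):

def logloss_vec(true_labels, predicted, eps=1e-15):
    # elementwise binary cross-entropy: -[y*log(p) + (1-y)*log(1-p)]
    true_labels = np.asarray(true_labels)
    # clip probabilities away from 0 and 1 so the logs stay finite
    p = np.clip(np.asarray(predicted, dtype=float), eps, 1 - eps)
    return -(true_labels * np.log(p) + (1 - true_labels) * np.log(1 - p))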
Let's test it with some dummy data (we don't actually need a fitted model for this):
predictions = np.array([0.25, 0.65, 0.2, 0.51,
                        0.01, 0.1, 0.34, 0.97])
targets = np.array([1, 0, 0, 0,
                    0, 0, 0, 1])

ll = [logloss(x, y) for (x, y) in zip(targets, predictions)]
ll
# Result:
# [1.3862943611198906, 1.0498221244986778, 0.2231435513142097, 0.7133498878774648,
#  0.01005033585350145, 0.10536051565782628, 0.41551544396166595, 0.030459207484708574]
From the array above, you should be able to convince yourself that the further a prediction is from the corresponding true label, the greater the loss; for example, the first sample predicts only 0.25 for a true label of 1, giving -ln(0.25) ≈ 1.386, the largest loss in the list. This agrees with intuition.
Let's confirm that the computation above agrees with the total (average) loss returned by scikit-learn:
from sklearn.metrics import log_loss

ll_sk = log_loss(targets, predictions)
ll_sk
# 0.4917494284709932

np.mean(ll)
# 0.4917494284709932

np.mean(ll) == ll_sk
# True
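As a side note, if you want the summed rather than the averaged loss, log_loss also accepts a normalize flag; a quick sketch of that check:

# normalize=False returns the sum of the per-sample losses instead of their mean
ll_sum = log_loss(targets, predictions, normalize=False)
print(ll_sum)                          # ≈ 3.934 (8 × the mean above)
print(np.isclose(ll_sum, np.sum(ll)))  # True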
Code adapted from here [link now dead].