I'm just getting started with machine learning and am trying out different algorithms. At the moment I'm using logistic regression to classify a random dataset generated with sklearn. Right now this is a binary classifier, but I'd like to use the one-vs-all approach for multiclass logistic regression (to compare the two later).
Here is the binary classification code I've tried to implement:
```python
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import random
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_blobs

X, t = make_blobs(n_samples=[400, 800, 400], centers=[[0, 0], [1, 2], [2, 3]],
                  n_features=2, random_state=2019)

indices = np.arange(X.shape[0])
random.seed(2020)
random.shuffle(indices)

X_train = X[indices[:800], :]
X_val = X[indices[800:1200], :]
X_test = X[indices[1200:], :]
t_train = t[indices[:800]]
t_val = t[indices[800:1200]]
t_test = t[indices[1200:]]

# Binary targets: class 1 vs. the rest
t2_train = (t_train == 1).astype('int')
t2_val = (t_val == 1).astype('int')
t2_test = (t_test == 1).astype('int')

def add_bias(X):
    # Put bias in position 0
    sh = X.shape
    if len(sh) == 1:  # X is a vector
        return np.concatenate([np.array([1]), X])
    else:  # X is a matrix
        m = sh[0]
        bias = np.ones((m, 1))  # Makes an m x 1 matrix of 1s
        return np.concatenate([bias, X], axis=1)

class NumpyClassifier():
    # Common methods to all numpy classifiers --- if any
    def accuracy(self, X_val, t_val, **kwargs):
        pred = self.predict(X_val, **kwargs)
        if len(pred.shape) > 1:
            pred = pred[:, 0]
        return sum(pred == t_val) / len(pred)

# Code for logistic regression
def logistic(x):
    return 1 / (1 + np.exp(-x))

class NumpyLogReg(NumpyClassifier):
    def fit(self, X_train, t_train, gamma=0.1, epochs=10):
        # X_train is a k x m matrix: k data points, m features
        # t_train are the target values for the training data
        (k, m) = X_train.shape
        X_train = add_bias(X_train)
        self.theta = theta = np.zeros(m + 1)
        for e in range(epochs):
            theta -= gamma / k * X_train.T @ (self.forward(X_train) - t_train)

    def forward(self, X_val):
        return logistic(X_val @ self.theta)

    def score(self, X_val):
        z = add_bias(X_val)
        score = self.forward(z)
        return score

    def predict(self, X_val, threshold=0.5):
        z = add_bias(X_val)
        score = self.forward(z)
        return (score > threshold).astype('int')

# Train on the binary targets (t2_*), not the raw 3-class labels
lr_cl = NumpyLogReg()
lr_cl.fit(X_train, t2_train)
lr_cl.predict(X_val)
lr_cl.accuracy(X_val, t2_val)

for e in [1, 2, 5, 10, 50, 100, 1000, 10000, 100000, 1000000]:
    lr_cl = NumpyLogReg()
    lr_cl.fit(X_train, t2_train, epochs=e, gamma=0.00001)
    print("{:10} {:7.3f}".format(e, lr_cl.accuracy(X_val, t2_val)))
```
I'd like suggestions/hints on how to modify this code into multiclass "one-vs-all"/"one-vs-rest" logistic regression. I don't want to use the logistic regression algorithm imported from sklearn directly; I want to implement it from scratch like this.
Any suggestions are very welcome. Thanks in advance.
Answer:
I assume NumpyLogReg performs well on binary classification. Using that same class, you can achieve multiclass classification with the one-vs-rest (OvR) technique.
Suppose the dataset has 3 classes: A, B, C.

- Using the binary model, treat class A as the positive class and B, C as the negative class, and record the probability scores.
- Repeat the step above with B as the positive class and A, C as the negative class, and then with C as the positive class and A, B as the negative class, recording the corresponding probability scores each time.
- In general, if there are n classes, there will be n binary models, i.e., one classifier trained per class.
- By inspecting the classifier for each class (i.e., comparing the probability values), you achieve multiclass classification, and the model remains highly interpretable; see the sketch after this list.
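To make these steps concrete, here is a minimal sketch that reuses your NumpyLogReg class unchanged: it trains one binary classifier per class and predicts the class whose classifier returns the highest probability score. The class name OneVsRestLogReg and its attribute names (classes, classifiers) are my own hypothetical choices, not from any library:

```python
import numpy as np

class OneVsRestLogReg(NumpyClassifier):
    # One-vs-rest wrapper: n classes -> n binary NumpyLogReg models
    def fit(self, X_train, t_train, gamma=0.1, epochs=10):
        self.classes = np.unique(t_train)
        self.classifiers = []
        for c in self.classes:
            # Relabel: current class -> 1, every other class -> 0
            t_binary = (t_train == c).astype('int')
            cl = NumpyLogReg()
            cl.fit(X_train, t_binary, gamma=gamma, epochs=epochs)
            self.classifiers.append(cl)

    def predict(self, X_val):
        # Stack the probability scores of all binary classifiers into
        # a (n_classes, n_samples) array and pick, for each sample,
        # the class whose classifier is most confident
        scores = np.array([cl.score(X_val) for cl in self.classifiers])
        return self.classes[np.argmax(scores, axis=0)]
```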
Please refer to this guide for a more detailed explanation.
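Under the same assumptions, the sketch is used exactly like your binary classifier, only now with the original 3-class targets (the gamma/epochs values here are just illustrative):

```python
ovr_cl = OneVsRestLogReg()
ovr_cl.fit(X_train, t_train, gamma=0.01, epochs=1000)  # original 3-class labels
print("validation accuracy:", ovr_cl.accuracy(X_val, t_val))
```

Note that predict picks the argmax over the per-class scores, so no threshold is needed in the multiclass case, and the inherited accuracy method works as-is because predict returns labels on the same scale as t_val.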