Computing the confusion matrix for the training set

I am new to machine learning. I recently learned how to compute the confusion_matrix for the test set of a KNN classifier, but I don't know how to compute the confusion_matrix for the training set.

How can I compute the training-set confusion_matrix of the KNN classifier from the code below?

The following code computes the test-set confusion_matrix:

    # Split test and train data
    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.array(dataset.ix[:, 1:10])
    y = np.array(dataset['benign_malignant'])
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Define classifier
    from sklearn.neighbors import KNeighborsClassifier
    knn = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
    knn.fit(X_train, y_train)

    # Predicting the test set results
    y_pred = knn.predict(X_test)

    # Making the confusion matrix
    from sklearn.metrics import confusion_matrix
    cm = confusion_matrix(y_test, y_pred)  # Calculate confusion matrix for the test set

Regarding k-fold cross-validation:

I am also trying to use k-fold cross-validation to find the training-set confusion_matrix.

I am confused about the line knn.fit(X_train, y_train).

Do I need to change the line knn.fit(X_train, y_train)?

Where in the following code should I make changes to compute the training-set confusion_matrix?

    # Applying the k-fold method
    from sklearn.cross_validation import StratifiedKFold

    kfold = 10  # no. of folds (better to have this at the start of the code)
    skf = StratifiedKFold(y, kfold, random_state=0)
    # Stratified KFold: this first divides the data into k folds, and also makes sure that
    # the class distribution in each fold follows the original input distribution
    # Note: in future versions of scikit-learn, this module is merged into model_selection
    skfind = [None] * len(skf)  # indices
    cnt = 0
    for train_index in skf:
        skfind[cnt] = train_index
        cnt = cnt + 1
    # skfind[i][0] -> train indices, skfind[i][1] -> test indices

    # Supervised classification with k-fold cross-validation
    from sklearn.metrics import confusion_matrix
    from sklearn.neighbors import KNeighborsClassifier

    conf_mat = np.zeros((2, 2))  # initializing the confusion matrix
    n_neighbors = 1  # better to have this at the start of the code

    # 10-fold cross-validation
    for i in range(kfold):
        train_indices = skfind[i][0]
        test_indices = skfind[i][1]
        clf = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
        X_train = X[train_indices]
        y_train = y[train_indices]
        X_test = X[test_indices]
        y_test = y[test_indices]
        # Fit on the training set
        clf.fit(X_train, y_train)
        # Predict on the test data
        y_predict_test = clf.predict(X_test)  # output is labels and not indices
        # Compute the confusion matrix
        cm = confusion_matrix(y_test, y_predict_test)
        print(cm)
        # conf_mat = conf_mat + cm

Answer:

You don't need to change much:

    # Predicting the training set results
    y_train_pred = knn.predict(X_train)
    cm_train = confusion_matrix(y_train, y_train_pred)

Here we classify X_train instead of X_test, and then build the confusion matrix from the predicted and actual classes of the training set.
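As a quick sanity check (not part of the original answer, reusing the variable names from the snippets above), you can print the training and test matrices side by side; a training matrix that looks dramatically cleaner than the test one usually points to overfitting:

    # Compare the training and test confusion matrices
    # (cm_train from the line above, cm from the question's code)
    print("Training confusion matrix:\n", cm_train)
    print("Test confusion matrix:\n", cm)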

The basic idea of the confusion matrix is to count how many predictions fall into each of four categories (when y is binary):

  1. Predicted true but actually false (a false positive)
  2. Predicted true and actually true (a true positive)
  3. Predicted false but actually true (a false negative)
  4. Predicted false and actually false (a true negative)

As long as you have the two sets of labels, predicted and actual, you can create a confusion matrix. All you need to do is predict the classes and combine them with the actual classes to get the confusion matrix.
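As a minimal illustration with made-up binary labels (not the question's dataset), scikit-learn's confusion_matrix arranges rows by actual class and columns by predicted class:

    from sklearn.metrics import confusion_matrix

    # Toy labels: 0 = benign, 1 = malignant (hypothetical encoding)
    y_actual    = [0, 0, 1, 1, 1, 0]
    y_predicted = [0, 1, 1, 0, 1, 0]

    # Layout for binary labels:
    # [[TN, FP],
    #  [FN, TP]]
    print(confusion_matrix(y_actual, y_predicted))
    # [[2 1]
    #  [1 2]]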

Edit

In the cross-validation part, you can add a line y_predict_train = clf.predict(X_train) to compute a training confusion matrix in each iteration. You can do this because inside the loop you initialize clf every time, which essentially means resetting your model.

Also, in your code you compute a confusion matrix in every iteration but never store it anywhere, so at the end you are left with only the cm of the last test fold.
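For reference, here is a sketch of how the whole loop could look with per-fold training and test confusion matrices accumulated across folds. It assumes the X and y arrays from the question and uses the current sklearn.model_selection.StratifiedKFold API, since sklearn.cross_validation has been removed from recent scikit-learn releases:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import confusion_matrix

    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    conf_mat_train = np.zeros((2, 2))  # accumulated training confusion matrix
    conf_mat_test = np.zeros((2, 2))   # accumulated test confusion matrix

    for train_indices, test_indices in skf.split(X, y):
        # A fresh classifier per fold, i.e. the model is reset every iteration
        clf = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
        clf.fit(X[train_indices], y[train_indices])

        # Training-set confusion matrix for this fold
        y_predict_train = clf.predict(X[train_indices])
        conf_mat_train += confusion_matrix(y[train_indices], y_predict_train)

        # Test-set confusion matrix for this fold
        y_predict_test = clf.predict(X[test_indices])
        conf_mat_test += confusion_matrix(y[test_indices], y_predict_test)

    print("Accumulated training confusion matrix:\n", conf_mat_train)
    print("Accumulated test confusion matrix:\n", conf_mat_test)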
