I am trying to run a logistic regression on the dataset provided here, using 5-fold cross-validation.
My goal is to predict the Classification column of the dataset, whose value is either 1 (no cancer) or 2 (cancer).
Here is the full code:
library(ISLR)
library(boot)

dataCancer <- read.csv("http://archive.ics.uci.edu/ml/machine-learning-databases/00451/dataR2.csv")

# Randomly shuffle the data
dataCancer <- dataCancer[sample(nrow(dataCancer)), ]

# Create 5 equally sized folds
folds <- cut(seq(1, nrow(dataCancer)), breaks = 5, labels = FALSE)

# Perform 5-fold cross-validation
for (i in 1:5) {
  # Segment the data by fold using the which() function
  testIndexes <- which(folds == i)
  testData <- dataCancer[testIndexes, ]
  trainData <- dataCancer[-testIndexes, ]

  # Use the test and train data partitions however you like...
  classification_model = glm(as.factor(Classification) ~ ., data = trainData, family = binomial)
  summary(classification_model)

  # Use the fitted model to make predictions on the test data
  model_pred_probs = predict(classification_model, testData, type = "response")
  model_predict_classification = rep(0, length(testData))
  model_predict_classification[model_pred_probs > 0.5] = 1

  # Create the confusion matrix and compute the misclassification rate
  table(model_predict_classification, testData)
  mean(model_predict_classification != testData)
}
I would like some help with the final part:
table(model_predict_classification, testData)
mean(model_predict_classification != testData)
I get the following error:
Error in table(model_predict_classification, testData) : all arguments must have the same length
I don't really understand how to use the confusion matrix here.
I would like to end up with 5 misclassification rates. trainData and testData have already been split into 5 parts. Their dimensions should be the same as model_predict_classification.
Thanks for your help.
Answer:
Here is a solution that uses the caret package to run 5-fold cross-validation after splitting the cancer data into test and training datasets. Confusion matrices are generated for both the training and the test data.
caret::train() reports the average accuracy across the 5 held-out folds. Per-fold results can be obtained by extracting them from the fitted model object, as sketched after the output below.
library(caret)

data <- read.csv("http://archive.ics.uci.edu/ml/machine-learning-databases/00451/dataR2.csv")

# Set Classification to a factor, recoded as
# 0 = no cancer, 1 = cancer
data$Classification <- as.factor((data$Classification - 1))

# Split the data into training and test sets, stratified on the dependent variable
trainIndex <- createDataPartition(data$Classification, p = .75, list = FALSE)
training <- data[trainIndex, ]
testing <- data[-trainIndex, ]

trCntl <- trainControl(method = "CV", number = 5)
glmModel <- train(Classification ~ ., data = training, trControl = trCntl,
                  method = "glm", family = "binomial")

# Print the model info
summary(glmModel)
glmModel
confusionMatrix(glmModel)

# Generate predictions on the hold-out data
trainPredicted <- predict(glmModel, testing)

# Generate a confusion matrix for the hold-out data
confusionMatrix(trainPredicted, reference = testing$Classification)
…and the output:
> # Print the model info
> summary(glmModel)

Call:  NULL

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.1542  -0.8358   0.2605   0.8260   2.1009  

Coefficients:
              Estimate Std. Error z value Pr(>|z|)  
(Intercept) -4.4039248  3.9159157  -1.125   0.2607  
Age         -0.0190241  0.0177119  -1.074   0.2828  
BMI         -0.1257962  0.0749341  -1.679   0.0932 .
Glucose      0.0912229  0.0389587   2.342   0.0192 *
Insulin      0.0917095  0.2889870   0.317   0.7510  
HOMA        -0.1820392  1.2139114  -0.150   0.8808  
Leptin      -0.0207606  0.0195192  -1.064   0.2875  
Adiponectin -0.0158448  0.0401506  -0.395   0.6931  
Resistin     0.0419178  0.0255536   1.640   0.1009  
MCP.1        0.0004672  0.0009093   0.514   0.6074  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 119.675  on 86  degrees of freedom
Residual deviance:  89.804  on 77  degrees of freedom
AIC: 109.8

Number of Fisher Scoring iterations: 7

> glmModel
Generalized Linear Model 

87 samples
 9 predictor
 2 classes: '0', '1' 

No pre-processing
Resampling: Cross-Validated (5 fold) 
Summary of sample sizes: 70, 69, 70, 69, 70 
Resampling results:

  Accuracy   Kappa    
  0.7143791  0.4356231

> confusionMatrix(glmModel)
Cross-Validated (5 fold) Confusion Matrix 

(entries are percentual average cell counts across resamples)

          Reference
Prediction    0    1
         0 33.3 17.2
         1 11.5 37.9

 Accuracy (average) : 0.7126

> # Generate predictions on the hold-out data
> trainPredicted <- predict(glmModel, testing)
> # Generate a confusion matrix for the hold-out data
> confusionMatrix(trainPredicted, reference = testing$Classification)
Confusion Matrix and Statistics

          Reference
Prediction  0  1
         0 11  2
         1  2 14

               Accuracy : 0.8621          
                 95% CI : (0.6834, 0.9611)
    No Information Rate : 0.5517          
    P-Value [Acc > NIR] : 0.0004078       

                  Kappa : 0.7212          
 Mcnemar's Test P-Value : 1.0000000       

            Sensitivity : 0.8462          
            Specificity : 0.8750          
         Pos Pred Value : 0.8462          
         Neg Pred Value : 0.8750          
             Prevalence : 0.4483          
         Detection Rate : 0.3793          
   Detection Prevalence : 0.4483          
      Balanced Accuracy : 0.8606          

       'Positive' Class : 0               
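Since the original question asked for the five per-fold misclassification rates, here is a minimal sketch of how they could be pulled out of the fitted caret object. It assumes the glmModel object produced by the code above; a train() result keeps one row of hold-out metrics per fold in its resample element, so the per-fold error rate is simply one minus the per-fold accuracy.

# Per-fold resampling results from the caret model fitted above
# (assumes glmModel from the answer's code)
foldResults <- glmModel$resample

# One accuracy value per held-out fold
foldResults$Accuracy

# Misclassification rate for each of the 5 folds
foldError <- 1 - foldResults$Accuracy
foldError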