I am using two methods (a neural network and K-nearest neighbors) with the caret package, and I want to run a significance test afterwards. How do I perform a Wilcoxon test?
A sample of my data is provided below:
structure(list(Input = c(25, 193, 70, 40), Output = c(150, 98, 27, 60),
               Inquiry = c(75, 70, 0, 20), File = c(60, 36, 12, 12),
               FPAdj = c(1, 1, 0.8, 1.15), RawFPcounts = c(1750, 1902, 535, 660),
               AdjFP = c(1750, 1902, 428, 759), Effort = c(102.4, 105.2, 11.1, 21.1)),
          row.names = c(NA, 4L), class = "data.frame")

library(caret)
library(farff)   # provides readARFF()

d <- readARFF("albrecht.arff")

index <- createDataPartition(d$Effort, p = .70, list = FALSE)
tr <- d[index, ]
ts <- d[-index, ]

boot <- trainControl(method = "repeatedcv", number = 100)

cart1 <- train(log10(Effort) ~ ., data = tr,
               method = "knn",     # K-nearest neighbors
               metric = "MAE",
               preProc = c("center", "scale", "nzv"),
               trControl = boot)
postResample(predict(cart1, ts), log10(ts$Effort))

cart2 <- train(log10(Effort) ~ ., data = tr,
               method = "nnet",    # neural network
               metric = "MAE",
               preProc = c("center", "scale", "nzv"),
               trControl = boot)
postResample(predict(cart2, ts), log10(ts$Effort))
How do I run wilcox.test() here?
Kind regards
Answer:
One way to solve your problem is to generate multiple performance values for K-nearest neighbors and the neural network and compare them with a statistical test. This can be achieved with nested resampling.
In nested resampling, the train/test split is performed many times, and the model is evaluated on each test set.
As an example, let's use the BostonHousing data set:
library(caret)
library(mlbench)

data(BostonHousing)
To simplify the example, we will use only the numeric columns:
d <- BostonHousing[,sapply(BostonHousing, is.numeric)]
As far as I know, caret does not offer nested cross-validation out of the box, so a simple wrapper is needed.
Generate the outer folds for the nested cross-validation:
outer_folds <- createFolds(d$medv, k = 5)
Let's use bootstrap resampling as the inner resampling loop to tune the hyperparameters:
boot <- trainControl(method = "boot", number = 100)
Now loop over the outer folds, run hyperparameter optimization on each training set, and predict on the corresponding test set:
CV_knn <- lapply(outer_folds, function(index){
  tr <- d[-index, ]
  ts <- d[index, ]
  cart1 <- train(medv ~ ., data = tr,
                 method = "knn",
                 metric = "MAE",
                 preProc = c("center", "scale", "nzv"),
                 trControl = boot,
                 tuneLength = 10) # to keep it short, probe only 10 hyperparameter combinations
  postResample(predict(cart1, ts), ts$medv)
})
Extract just the MAE values from the results:
sapply(CV_knn, function(x) x[3]) -> CV_knn_MAE # postResample returns RMSE, Rsquared and MAE; MAE is the 3rd element
CV_knn_MAE
#output
Fold1.MAE Fold2.MAE Fold3.MAE Fold4.MAE Fold5.MAE 
 2.503333  2.587059  2.031200  2.475644  2.607885
Do the same for the glmnet learner:
CV_glmnet <- lapply(outer_folds, function(index){
  tr <- d[-index, ]
  ts <- d[index, ]
  cart1 <- train(medv ~ ., data = tr,
                 method = "glmnet",
                 metric = "MAE",
                 preProc = c("center", "scale", "nzv"),
                 trControl = boot,
                 tuneLength = 10)
  postResample(predict(cart1, ts), ts$medv)
})

sapply(CV_glmnet, function(x) x[3]) -> CV_glmnet_MAE
CV_glmnet_MAE
#output
Fold1.MAE Fold2.MAE Fold3.MAE Fold4.MAE Fold5.MAE 
 3.400559  3.383317  2.830140  3.605266  3.525224
Now compare the two algorithms using wilcox.test. Since the performance values for both learners were generated with the same data splits, a paired test is appropriate:
wilcox.test(CV_knn_MAE, CV_glmnet_MAE, paired = TRUE)
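If you want to use the result programmatically, the p-value can be pulled out of the returned htest object. A minimal usage sketch (the 0.05 cutoff below is just the conventional significance level, not something caret prescribes):

res <- wilcox.test(CV_knn_MAE, CV_glmnet_MAE, paired = TRUE)
res$p.value          # p-value of the paired Wilcoxon signed-rank test
res$p.value < 0.05   # TRUE if the MAE difference is significant at the 5% level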
If you want to compare more than two algorithms, you can use friedman.test, for instance as sketched below.
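A minimal sketch of how that could look, assuming a hypothetical third learner (say random forest, method = "rf") was evaluated with the same wrapper on the same outer folds and its per-fold MAE values were stored in CV_rf_MAE:

# rows are the folds (blocks), columns are the learners (groups)
perf <- cbind(knn = CV_knn_MAE, glmnet = CV_glmnet_MAE, rf = CV_rf_MAE)
friedman.test(perf)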