I have an XGBoost cross-validation model as follows:
xgboostModelCV <- xgb.cv(data = dtrain, nrounds = 20, nfold = 3, metrics = "auc",
                         verbose = TRUE, "eval_metric" = "auc",
                         "objective" = "binary:logistic", "max.depth" = 6,
                         "eta" = 0.01, "subsample" = 0.5, "colsample_bytree" = 1,
                         print_every_n = 1, "min_child_weight" = 1,
                         booster = "gbtree", early_stopping_rounds = 10,
                         watchlist = watchlist, seed = 1234)
My question is about the model output and the nfold setting. I set nfold to 3, and the evaluation log output looks like this:
   iter train_auc_mean train_auc_std test_auc_mean test_auc_std
1     1      0.8852290  0.0023585703     0.8598630  0.005515424
2     2      0.9015413  0.0018569007     0.8792137  0.003765109
3     3      0.9081027  0.0014307577     0.8859040  0.005053600
4     4      0.9108463  0.0011838160     0.8883130  0.004324113
5     5      0.9130350  0.0008863908     0.8904100  0.004173123
6     6      0.9143187  0.0009514359     0.8910723  0.004372844
7     7      0.9151723  0.0010543653     0.8917300  0.003905284
8     8      0.9162787  0.0010344935     0.8929013  0.003582747
9     9      0.9173673  0.0010539116     0.8935753  0.003431949
10   10      0.9178743  0.0011498505     0.8942567  0.002955511
11   11      0.9182133  0.0010825702     0.8944377  0.003051411
12   12      0.9185767  0.0011846632     0.8946267  0.003026969
13   13      0.9186653  0.0013352629     0.8948340  0.002526793
14   14      0.9190500  0.0012537195     0.8954053  0.002636388
15   15      0.9192453  0.0010967155     0.8954127  0.002841402
16   16      0.9194953  0.0009818501     0.8956447  0.002783787
17   17      0.9198503  0.0009541517     0.8956400  0.002590862
18   18      0.9200363  0.0009890185     0.8957223  0.002580398
19   19      0.9201687  0.0010323405     0.8958790  0.002508695
20   20      0.9204030  0.0009725742     0.8960677  0.002581329
However, since I set nrounds = 20 and the cross-validation uses nfold = 3, should I be getting 60 results instead of 20?
Or is the output above, as the column names suggest, the mean AUC score per round, so that at nround = 1 the train_auc_mean of 0.8852290 on the training set is the average over the 3 cross-validation folds?
And if I plot these AUC scores, will I be plotting the mean AUC scores of the 3-fold cross-validation? Just want to make sure everything is clear.
Answer:
You are right, the output is the mean AUC over the folds. However, if you want to extract the AUC of the individual folds at the best/last iteration, you can do it as follows.
An example using the Sonar data set from mlbench:
library(xgboost)
library(tidyverse)
library(mlbench)
library(pROC)   # provides roc() and auc(), used further below

data(Sonar)

xgb.train.data <- xgb.DMatrix(as.matrix(Sonar[, 1:60]), label = as.numeric(Sonar$Class) - 1)
param <- list(objective = "binary:logistic")
Set prediction = TRUE in xgb.cv:
model.cv <- xgb.cv(param = param, data = xgb.train.data, nrounds = 50,
                   early_stopping_rounds = 10, nfold = 3,
                   prediction = TRUE, eval_metric = "auc")
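With prediction = TRUE, the returned object holds the out-of-fold predictions in model.cv$pred and the row indices of each test fold in model.cv$folds, which is what the next step relies on. A quick check of those shapes (my own addition, not part of the original answer):

# should be one out-of-fold prediction per row of Sonar, i.e. 208
length(model.cv$pred)
# should be a list of 3 integer vectors, one per test fold
str(model.cv$folds)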
Now loop over the folds and combine the predictions with the true labels and the corresponding row indices:
z <- lapply(model.cv$folds, function(x){
  # x holds the row indices of the observations held out in this fold
  pred <- model.cv$pred[x]
  true <- (as.numeric(Sonar$Class) - 1)[x]
  index <- x
  out <- data.frame(pred, true, index)
  out
})
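As a small sanity check (again my own addition), each element of z is a data frame covering the held-out rows of one fold, and the fold sizes should add up to the full data set:

head(z[[1]])      # pred, true, index for the first test fold
sapply(z, nrow)   # fold sizes; should sum to nrow(Sonar) = 208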
Give the folds names:
names(z) <- paste("folds", 1:3, sep = "_")

z %>%
  bind_rows(.id = "id") %>%
  group_by(id) %>%
  summarise(auroc = roc(true, pred) %>% auc())
#output
# A tibble: 3 x 2
  id      auroc
  <chr>   <dbl>
1 folds_1 0.944
2 folds_2 0.900
3 folds_3 0.899
The mean of these values is the same as the mean AUC of the best iteration:
z %>%
  bind_rows(.id = "id") %>%
  group_by(id) %>%
  summarise(auroc = roc(true, pred) %>% auc()) %>%
  pull(auroc) %>%
  mean
#output
[1] 0.9143798

model.cv$evaluation_log[model.cv$best_iteration, ]
#output
   iter train_auc_mean train_auc_std test_auc_mean test_auc_std
1:   48              1             0       0.91438   0.02092817
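To come back to the plotting part of the question: plotting the evaluation log gives you exactly the fold-averaged AUC per boosting round. A minimal base-R sketch using the model.cv object above (my own illustration, not from the original answer):

eval_log <- model.cv$evaluation_log
plot(eval_log$iter, eval_log$test_auc_mean, type = "l",
     ylim = range(eval_log$train_auc_mean, eval_log$test_auc_mean),
     xlab = "boosting round", ylab = "AUC (mean over 3 folds)")
lines(eval_log$iter, eval_log$train_auc_mean, lty = 2)
legend("bottomright", legend = c("test (mean over folds)", "train (mean over folds)"),
       lty = c(1, 2))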
Of course you can do much more with this, for example plot the ROC (AUC) curve for each fold, and so on.
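A minimal sketch of that last suggestion with pROC, reusing the z list built above (my own illustration, not part of the original answer):

cols <- c("black", "red", "blue")
for (i in seq_along(z)) {
  # ROC curve of the out-of-fold predictions for fold i, overlaid on one plot
  plot(roc(z[[i]]$true, z[[i]]$pred), col = cols[i], add = (i > 1))
}
legend("bottomright", legend = names(z), col = cols, lty = 1)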