How do I get detectron2 to train with a validation dataset in a Colab notebook?

In the Google Colab notebook, detectron2 does not run evaluation during training, as shown here. It does not use the validation data when training on a custom dataset. How should I add this?

Reference GitHub repository – https://github.com/facebookresearch/detectron2


Answer:

Colab notebooks are usually slow to run and are meant to demonstrate the basic usage of a repository. Evaluation during training is probably omitted simply because they did not consider it necessary in a simple notebook. The repository contains more elaborate examples that evaluate periodically during training.

However, if you still want to evaluate inside the notebook, I see that they create the train/validation split here:

for d in ["train", "val"]:
    DatasetCatalog.register("balloon_" + d, lambda d=d: get_balloon_dicts("balloon/" + d))
    MetadataCatalog.get("balloon_" + d).set(thing_classes=["balloon"])

But no evaluation happens during training because of this line:

cfg.DATASETS.TEST = ()

Try

cfg.DATASETS.TEST = ("balloon_val",)

Then set up the trainer's hooks to suit your evaluation needs, as sketched below.
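A minimal sketch of that setup, following detectron2's standard DefaultTrainer pattern: subclass the trainer, return a COCOEvaluator for whatever dataset is listed in cfg.DATASETS.TEST, and set cfg.TEST.EVAL_PERIOD so the built-in eval hook fires every N iterations. The class name, output folder, and the 50-iteration period here are my own choices, and the exact COCOEvaluator constructor arguments vary slightly between detectron2 versions.

import os
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator

class BalloonTrainer(DefaultTrainer):
    # DefaultTrainer builds an evaluation hook from this method; it runs on each
    # dataset in cfg.DATASETS.TEST every cfg.TEST.EVAL_PERIOD iterations.
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        if output_folder is None:
            output_folder = os.path.join(cfg.OUTPUT_DIR, "eval")
        return COCOEvaluator(dataset_name, output_dir=output_folder)

cfg.DATASETS.TEST = ("balloon_val",)   # evaluate on the validation split
cfg.TEST.EVAL_PERIOD = 50              # run evaluation every 50 iterations

trainer = BalloonTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()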

With eval_period set to 50 and a custom COCOEvaluator evaluating on balloon_val, the output looks like this:

[07/14 07:23:52 d2.engine.train_loop]: Starting training from iteration 0[07/14 07:24:02 d2.utils.events]:  eta: 0:02:14  iter: 19  total_loss: 2.246  loss_cls: 0.7813  loss_box_reg: 0.6616  loss_mask: 0.683  loss_rpn_cls: 0.03956  loss_rpn_loc: 0.008304  time: 0.4848  data_time: 0.0323  lr: 1.6068e-05  max_mem: 5425M[07/14 07:24:12 d2.utils.events]:  eta: 0:02:01  iter: 39  total_loss: 1.879  loss_cls: 0.6221  loss_box_reg: 0.5713  loss_mask: 0.615  loss_rpn_cls: 0.04036  loss_rpn_loc: 0.01448  time: 0.4721  data_time: 0.0108  lr: 3.2718e-05  max_mem: 5425M[07/14 07:24:17 d2.evaluation.evaluator]: Start inference on 13 batches[07/14 07:24:26 d2.evaluation.evaluator]: Inference done 11/13. Dataloading: 0.0013 s/iter. Inference: 0.1507 s/iter. Eval: 0.1892 s/iter. Total: 0.3412 s/iter. ETA=0:00:00[07/14 07:24:27 d2.evaluation.evaluator]: Total inference time: 0:00:02.799655 (0.349957 s / iter per device, on 1 devices)[07/14 07:24:27 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:01 (0.149321 s / iter per device, on 1 devices)[07/14 07:24:27 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...[07/14 07:24:27 d2.evaluation.coco_evaluation]: Saving results to ./output/coco_instances_results.json[07/14 07:24:27 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...Loading and preparing results...DONE (t=0.00s)creating index...index created![07/14 07:24:27 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*[07/14 07:24:27 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.01 seconds.[07/14 07:24:27 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:24:27 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.00 seconds. Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.029 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.063 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.021 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.002 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.044 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.035 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.176 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.444 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.100 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.388 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.510[07/14 07:24:27 d2.evaluation.coco_evaluation]: Evaluation results for bbox: |  AP   |  AP50  |  AP75  |  APs  |  APm  |  APl  ||:-----:|:------:|:------:|:-----:|:-----:|:-----:|| 2.906 | 6.326  | 2.098  | 0.193 | 4.398 | 3.484 |Loading and preparing results...DONE (t=0.02s)creating index...index created![07/14 07:24:27 d2.evaluation.fast_eval_api]: Evaluate annotation type *segm*[07/14 07:24:27 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.02 seconds.[07/14 07:24:27 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:24:27 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.00 seconds. 
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.040 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.081 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.039 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.002 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.049 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.062 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.004 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.214 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.532 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.100 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.465 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.613[07/14 07:24:27 d2.evaluation.coco_evaluation]: Evaluation results for segm: |  AP   |  AP50  |  AP75  |  APs  |  APm  |  APl  ||:-----:|:------:|:------:|:-----:|:-----:|:-----:|| 4.027 | 8.132  | 3.905  | 0.166 | 4.904 | 6.221 |[07/14 07:24:32 d2.utils.events]:  eta: 0:01:56  iter: 59  total_loss: 1.621  loss_cls: 0.4834  loss_box_reg: 0.6684  loss_mask: 0.4703  loss_rpn_cls: 0.03119  loss_rpn_loc: 0.006103  time: 0.4799  data_time: 0.0117  lr: 4.9367e-05  max_mem: 5425M[07/14 07:24:42 d2.utils.events]:  eta: 0:01:47  iter: 79  total_loss: 1.401  loss_cls: 0.3847  loss_box_reg: 0.6159  loss_mask: 0.3641  loss_rpn_cls: 0.03303  loss_rpn_loc: 0.00822  time: 0.4797  data_time: 0.0130  lr: 6.6017e-05  max_mem: 5425M[07/14 07:24:51 d2.utils.events]:  eta: 0:01:36  iter: 99  total_loss: 1.268  loss_cls: 0.3295  loss_box_reg: 0.6366  loss_mask: 0.2884  loss_rpn_cls: 0.01753  loss_rpn_loc: 0.00765  time: 0.4775  data_time: 0.0096  lr: 8.2668e-05  max_mem: 5425M[07/14 07:24:51 d2.evaluation.evaluator]: Start inference on 13 batches[07/14 07:25:01 d2.evaluation.evaluator]: Inference done 11/13. Dataloading: 0.0014 s/iter. Inference: 0.1493 s/iter. Eval: 0.1851 s/iter. Total: 0.3358 s/iter. ETA=0:00:00[07/14 07:25:01 d2.evaluation.evaluator]: Total inference time: 0:00:02.778349 (0.347294 s / iter per device, on 1 devices)[07/14 07:25:01 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:01 (0.148906 s / iter per device, on 1 devices)[07/14 07:25:02 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...[07/14 07:25:02 d2.evaluation.coco_evaluation]: Saving results to ./output/coco_instances_results.json[07/14 07:25:02 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...Loading and preparing results...DONE (t=0.00s)creating index...index created![07/14 07:25:02 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*[07/14 07:25:02 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.01 seconds.[07/14 07:25:02 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:25:02 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.01 seconds. 
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.543 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.751 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.626 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.092 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.472 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.636 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.196 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.620 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.714 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.533 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.588 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.803[07/14 07:25:02 d2.evaluation.coco_evaluation]: Evaluation results for bbox: |   AP   |  AP50  |  AP75  |  APs  |  APm   |  APl   ||:------:|:------:|:------:|:-----:|:------:|:------:|| 54.340 | 75.066 | 62.622 | 9.181 | 47.208 | 63.594 |Loading and preparing results...DONE (t=0.02s)creating index...index created![07/14 07:25:02 d2.evaluation.fast_eval_api]: Evaluate annotation type *segm*[07/14 07:25:02 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.02 seconds.[07/14 07:25:02 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:25:02 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.01 seconds. Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.630 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.754 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.741 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.060 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.519 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.750 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.214 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.692 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.786 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.533 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.641 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.893[07/14 07:25:02 d2.evaluation.coco_evaluation]: Evaluation results for segm: |   AP   |  AP50  |  AP75  |  APs  |  APm   |  APl   ||:------:|:------:|:------:|:-----:|:------:|:------:|| 62.959 | 75.390 | 74.088 | 5.987 | 51.899 | 74.988 |[07/14 07:25:11 d2.utils.events]:  eta: 0:01:26  iter: 119  total_loss: 1.158  loss_cls: 0.2745  loss_box_reg: 0.6951  loss_mask: 0.2165  loss_rpn_cls: 0.02461  loss_rpn_loc: 0.00421  time: 0.4773  data_time: 0.0101  lr: 9.9318e-05  max_mem: 5425M[07/14 07:25:21 d2.utils.events]:  eta: 0:01:16  iter: 139  total_loss: 1.015  loss_cls: 0.1891  loss_box_reg: 0.6029  loss_mask: 0.1745  loss_rpn_cls: 0.02219  loss_rpn_loc: 0.005621  time: 0.4766  data_time: 0.0111  lr: 0.00011597  max_mem: 5425M[07/14 07:25:26 d2.evaluation.evaluator]: Start inference on 13 batches[07/14 07:25:34 d2.evaluation.evaluator]: Inference done 11/13. Dataloading: 0.0013 s/iter. Inference: 0.1459 s/iter. Eval: 0.1786 s/iter. Total: 0.3258 s/iter. 
ETA=0:00:00[07/14 07:25:35 d2.evaluation.evaluator]: Total inference time: 0:00:02.608437 (0.326055 s / iter per device, on 1 devices)[07/14 07:25:35 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:01 (0.143658 s / iter per device, on 1 devices)[07/14 07:25:35 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...[07/14 07:25:35 d2.evaluation.coco_evaluation]: Saving results to ./output/coco_instances_results.json[07/14 07:25:35 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...Loading and preparing results...DONE (t=0.00s)creating index...index created![07/14 07:25:35 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*[07/14 07:25:35 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.01 seconds.[07/14 07:25:35 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:25:35 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.00 seconds. Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.663 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.843 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.754 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.245 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.562 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.790 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.228 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.712 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.758 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.567 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.647 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.840[07/14 07:25:35 d2.evaluation.coco_evaluation]: Evaluation results for bbox: |   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   ||:------:|:------:|:------:|:------:|:------:|:------:|| 66.307 | 84.257 | 75.431 | 24.466 | 56.175 | 79.035 |Loading and preparing results...DONE (t=0.01s)creating index...index created![07/14 07:25:35 d2.evaluation.fast_eval_api]: Evaluate annotation type *segm*[07/14 07:25:35 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.02 seconds.[07/14 07:25:35 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:25:35 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.00 seconds. 
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.756 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.839 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.833 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.135 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.581 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.915 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.248 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.788 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.836 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.600 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.676 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.950[07/14 07:25:35 d2.evaluation.coco_evaluation]: Evaluation results for segm: |   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   ||:------:|:------:|:------:|:------:|:------:|:------:|| 75.579 | 83.916 | 83.342 | 13.466 | 58.113 | 91.479 |[07/14 07:25:40 d2.utils.events]:  eta: 0:01:07  iter: 159  total_loss: 0.845  loss_cls: 0.1613  loss_box_reg: 0.5442  loss_mask: 0.1211  loss_rpn_cls: 0.01358  loss_rpn_loc: 0.006381  time: 0.4768  data_time: 0.0110  lr: 0.00013262  max_mem: 5425M[07/14 07:25:49 d2.utils.events]:  eta: 0:00:58  iter: 179  total_loss: 0.7381  loss_cls: 0.1207  loss_box_reg: 0.4569  loss_mask: 0.1153  loss_rpn_cls: 0.01103  loss_rpn_loc: 0.005893  time: 0.4782  data_time: 0.0098  lr: 0.00014927  max_mem: 5425M[07/14 07:25:59 d2.utils.events]:  eta: 0:00:48  iter: 199  total_loss: 0.5811  loss_cls: 0.108  loss_box_reg: 0.3294  loss_mask: 0.09868  loss_rpn_cls: 0.01414  loss_rpn_loc: 0.008676  time: 0.4783  data_time: 0.0101  lr: 0.00016592  max_mem: 5425M[07/14 07:25:59 d2.evaluation.evaluator]: Start inference on 13 batches[07/14 07:26:05 d2.evaluation.evaluator]: Inference done 11/13. Dataloading: 0.0017 s/iter. Inference: 0.1317 s/iter. Eval: 0.0985 s/iter. Total: 0.2319 s/iter. ETA=0:00:00[07/14 07:26:05 d2.evaluation.evaluator]: Total inference time: 0:00:01.788219 (0.223527 s / iter per device, on 1 devices)[07/14 07:26:05 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:01 (0.127455 s / iter per device, on 1 devices)[07/14 07:26:05 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...[07/14 07:26:05 d2.evaluation.coco_evaluation]: Saving results to ./output/coco_instances_results.json[07/14 07:26:05 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...Loading and preparing results...DONE (t=0.00s)creating index...index created![07/14 07:26:05 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*[07/14 07:26:05 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.01 seconds.[07/14 07:26:05 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:26:05 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.01 seconds. 
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.728 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.894 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.858 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.303 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.571 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.848 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.218 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.742 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.790 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.500 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.688 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.877[07/14 07:26:05 d2.evaluation.coco_evaluation]: Evaluation results for bbox: |   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   ||:------:|:------:|:------:|:------:|:------:|:------:|| 72.797 | 89.384 | 85.752 | 30.301 | 57.057 | 84.812 |Loading and preparing results...DONE (t=0.01s)creating index...index created![07/14 07:26:05 d2.evaluation.fast_eval_api]: Evaluate annotation type *segm*[07/14 07:26:05 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.01 seconds.[07/14 07:26:05 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:26:05 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.00 seconds. Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.805 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.885 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.880 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.252 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.617 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.950 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.250 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.808 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.860 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.567 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.735 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.960[07/14 07:26:05 d2.evaluation.coco_evaluation]: Evaluation results for segm: |   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   ||:------:|:------:|:------:|:------:|:------:|:------:|| 80.490 | 88.547 | 87.953 | 25.206 | 61.723 | 94.959 |[07/14 07:26:15 d2.utils.events]:  eta: 0:00:38  iter: 219  total_loss: 0.4771  loss_cls: 0.08176  loss_box_reg: 0.2226  loss_mask: 0.09229  loss_rpn_cls: 0.01647  loss_rpn_loc: 0.009867  time: 0.4789  data_time: 0.0132  lr: 0.00018257  max_mem: 5425M[07/14 07:26:25 d2.utils.events]:  eta: 0:00:28  iter: 239  total_loss: 0.366  loss_cls: 0.07189  loss_box_reg: 0.1961  loss_mask: 0.08049  loss_rpn_cls: 0.01413  loss_rpn_loc: 0.006811  time: 0.4785  data_time: 0.0122  lr: 0.00019922  max_mem: 5425M[07/14 07:26:29 d2.evaluation.evaluator]: Start inference on 13 batches[07/14 07:26:34 d2.evaluation.evaluator]: Inference done 11/13. Dataloading: 0.0015 s/iter. Inference: 0.1195 s/iter. Eval: 0.0502 s/iter. Total: 0.1711 s/iter. 
ETA=0:00:00[07/14 07:26:34 d2.evaluation.evaluator]: Total inference time: 0:00:01.375643 (0.171955 s / iter per device, on 1 devices)[07/14 07:26:34 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:00 (0.117491 s / iter per device, on 1 devices)[07/14 07:26:34 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...[07/14 07:26:34 d2.evaluation.coco_evaluation]: Saving results to ./output/coco_instances_results.json[07/14 07:26:34 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...Loading and preparing results...DONE (t=0.00s)creating index...index created![07/14 07:26:34 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*[07/14 07:26:34 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.01 seconds.[07/14 07:26:34 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:26:34 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.00 seconds. Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.779 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.916 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.878 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.350 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.615 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.896 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.234 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.800 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.826 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.467 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.718 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.923[07/14 07:26:34 d2.evaluation.coco_evaluation]: Evaluation results for bbox: |   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   ||:------:|:------:|:------:|:------:|:------:|:------:|| 77.888 | 91.606 | 87.774 | 34.965 | 61.497 | 89.576 |Loading and preparing results...DONE (t=0.00s)creating index...index created![07/14 07:26:34 d2.evaluation.fast_eval_api]: Evaluate annotation type *segm*[07/14 07:26:34 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.01 seconds.[07/14 07:26:34 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:26:34 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.00 seconds. 
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.823 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.894 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.891 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.248 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.624 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.967 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.254 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.832 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.858 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.367 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.741 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.973[07/14 07:26:34 d2.evaluation.coco_evaluation]: Evaluation results for segm: |   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   ||:------:|:------:|:------:|:------:|:------:|:------:|| 82.323 | 89.379 | 89.068 | 24.752 | 62.427 | 96.691 |[07/14 07:26:39 d2.utils.events]:  eta: 0:00:19  iter: 259  total_loss: 0.2651  loss_cls: 0.05436  loss_box_reg: 0.1442  loss_mask: 0.06249  loss_rpn_cls: 0.005261  loss_rpn_loc: 0.00489  time: 0.4781  data_time: 0.0123  lr: 0.00021587  max_mem: 5425M[07/14 07:26:49 d2.utils.events]:  eta: 0:00:09  iter: 279  total_loss: 0.4224  loss_cls: 0.07591  loss_box_reg: 0.1941  loss_mask: 0.09489  loss_rpn_cls: 0.009817  loss_rpn_loc: 0.008633  time: 0.4777  data_time: 0.0109  lr: 0.00023252  max_mem: 5425M[07/14 07:26:59 d2.utils.events]:  eta: 0:00:00  iter: 299  total_loss: 0.3534  loss_cls: 0.07829  loss_box_reg: 0.1646  loss_mask: 0.08058  loss_rpn_cls: 0.01157  loss_rpn_loc: 0.006635  time: 0.4779  data_time: 0.0120  lr: 0.00024917  max_mem: 5425M[07/14 07:27:00 d2.engine.hooks]: Overall training speed: 298 iterations in 0:02:22 (0.4779 s / it)[07/14 07:27:00 d2.engine.hooks]: Total training time: 0:03:06 (0:00:43 on hooks)[07/14 07:27:00 d2.evaluation.evaluator]: Start inference on 13 batches[07/14 07:27:04 d2.evaluation.evaluator]: Inference done 11/13. Dataloading: 0.0015 s/iter. Inference: 0.1155 s/iter. Eval: 0.0340 s/iter. Total: 0.1510 s/iter. ETA=0:00:00[07/14 07:27:04 d2.evaluation.evaluator]: Total inference time: 0:00:01.238510 (0.154814 s / iter per device, on 1 devices)[07/14 07:27:04 d2.evaluation.evaluator]: Total inference pure compute time: 0:00:00 (0.114618 s / iter per device, on 1 devices)[07/14 07:27:04 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...[07/14 07:27:04 d2.evaluation.coco_evaluation]: Saving results to ./output/coco_instances_results.json[07/14 07:27:04 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...Loading and preparing results...DONE (t=0.00s)creating index...index created![07/14 07:27:04 d2.evaluation.fast_eval_api]: Evaluate annotation type *bbox*[07/14 07:27:04 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.01 seconds.[07/14 07:27:04 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:27:04 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.00 seconds. 
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.762 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.927 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.859 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.310 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.640 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.864 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.236 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.788 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.814 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.433 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.724 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.903[07/14 07:27:04 d2.evaluation.coco_evaluation]: Evaluation results for bbox: |   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   ||:------:|:------:|:------:|:------:|:------:|:------:|| 76.245 | 92.732 | 85.874 | 31.015 | 63.981 | 86.418 |Loading and preparing results...DONE (t=0.00s)creating index...index created![07/14 07:27:04 d2.evaluation.fast_eval_api]: Evaluate annotation type *segm*[07/14 07:27:04 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.01 seconds.[07/14 07:27:04 d2.evaluation.fast_eval_api]: Accumulating evaluation results...[07/14 07:27:04 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.00 seconds. Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.818 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.902 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.899 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.253 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.632 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.956 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.252 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.828 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.856 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.400 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.735 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.970[07/14 07:27:04 d2.evaluation.coco_evaluation]: Evaluation results for segm: |   AP   |  AP50  |  AP75  |  APs   |  APm   |  APl   ||:------:|:------:|:------:|:------:|:------:|:------:|| 81.780 | 90.213 | 89.900 | 25.284 | 63.179 | 95.585 |
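If you want to compare the validation AP across iterations after training, the metrics printed above are also recorded by the trainer's default JSON writer in metrics.json under the output directory. A minimal sketch for reading them back, assuming the default ./output directory and the standard COCOEvaluator key names such as bbox/AP:

import json

# Each line of metrics.json is one JSON record logged during training; records
# written right after an evaluation contain keys like "bbox/AP" and "segm/AP".
with open("./output/metrics.json") as f:
    records = [json.loads(line) for line in f if line.strip()]

for r in records:
    if "bbox/AP" in r:
        print(f"iter {r['iteration']:>4}: bbox AP = {r['bbox/AP']:.2f}, "
              f"segm AP = {r.get('segm/AP', float('nan')):.2f}")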
