Semantic segmentation with Detectron2
I used Detectron2 to train a custom model with Instance Segmentation, and it worked well. There are several tutorials on Google Colab that use Detectron2 for Instance Segmentation, but nothing about Semantic Segmentation. The code to train the custom Instance Segmentation model, based on this Colab notebook (https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5#scrollTo=7unkuuiqLdqd), is this:
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("balloon_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025 # pick a good LR
cfg.SOLVER.MAX_ITER = 300 # 300 iterations seems good enough for this toy dataset; you will need to train longer for a practical dataset
cfg.SOLVER.STEPS = [] # do not decay learning rate
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this toy dataset (default: 512)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only has one class (balloon). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets)
# NOTE: this config means the number of classes, but a few popular unofficial tutorials incorrectly use num_classes+1 here.
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
To run Semantic Segmentation training, I replaced "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml" with "/Misc/semantic_R_50_FPN_1x.yaml"; basically, I only changed the pre-trained model, nothing else. And I got this error:

TypeError: cross_entropy_loss(): argument 'target' (position 2) must be Tensor, not NoneType

How do I set up Semantic Segmentation on Google Colab?
1 Answer
To train for semantic segmentation you can use the same COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml model; you don't have to change that line. The training code you showed in your question is correct and can be used for semantic segmentation as well. The only thing that changes is the label files.
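For reference, Detectron2's standard dataset format marks semantic-segmentation labels with a sem_seg_file_name field on each record. A minimal sketch of what such a record might look like (the file paths and function name here are hypothetical placeholders, not from the question):

```python
# Hedged sketch: the shape of one record in Detectron2's standard dataset
# format for semantic segmentation. The paths are placeholders; the key
# point is the "sem_seg_file_name" field, which points at a grayscale
# label image whose pixel values are per-pixel class IDs.
def get_balloon_semseg_dicts():
    return [
        {
            "file_name": "images/0001.jpg",          # input image path
            "height": 480,                           # image height in pixels
            "width": 640,                            # image width in pixels
            "sem_seg_file_name": "labels/0001.png",  # per-pixel class-ID mask
        },
    ]
```

Such a function would then be registered via DatasetCatalog.register so that cfg.DATASETS.TRAIN can refer to the dataset by name.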
Once the model is trained, you can use it for inference by loading the model weights from the trained model.