PyTorch: stack expects each tensor to be equal size when training a model

Posted on 01-18 21:28


I am using the MMSegmentation library to train my model for image segmentation. During training, I create the model (a Vision Transformer), and when I try to train it, I get this error:

RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 81, in collate
    for key in batch[0]
  File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 81, in <dictcomp>
    for key in batch[0]
  File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 59, in collate
    stacked.append(default_collate(padded_samples))
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
    return torch.stack(batch, 0, out=out)

RuntimeError: stack expects each tensor to be equal size, but got [1, 256, 256, 256] at entry 0 and [1, 256, 256] at entry 3
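The failure can be reproduced in isolation: `default_collate` ultimately calls `torch.stack`, which refuses tensors whose shapes differ, exactly as in the two entries reported above (a 3-channel annotation versus a plain 2D label map):

```python
import torch

# Minimal reproduction: default_collate ultimately calls torch.stack,
# which requires every tensor in the batch to have the same shape.
a = torch.zeros(1, 256, 256, 256)  # e.g. an annotation loaded with a colour channel
b = torch.zeros(1, 256, 256)       # an annotation loaded as a plain 2D label map

try:
    torch.stack([a, b], 0)
except RuntimeError as e:
    print(e)  # stack expects each tensor to be equal size, ...
```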

I must also mention that I have tested my own dataset with other models available in their library, and all of them work properly.

This is what I tried:

model = build_segmentor(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
train_segmentor(model, datasets, cfg, distributed=False, validate=True, meta=dict())
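Before collate ever runs, each sample's annotation should already be a 2D array. A quick way to locate offending files is to scan the annotation directory on disk; the `find_bad_annotations` helper and the example path below are hypothetical, so adjust them to the `ann_dir` in your dataset config:

```python
import numpy as np
from pathlib import Path
from PIL import Image

def find_bad_annotations(ann_dir):
    """Return annotation files that are not plain 2D label maps."""
    bad = []
    for p in sorted(Path(ann_dir).glob('*.png')):
        arr = np.array(Image.open(p))
        if arr.ndim != 2:  # e.g. an RGB/RGBA mask instead of class indices
            bad.append((p.name, arr.shape))
    return bad

# Example (hypothetical path -- use the ann_dir from your dataset config):
# print(find_bad_annotations('data/my_dataset/ann_dir'))
```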


Comments (1)

鲸落 2025-01-25 21:28:48


It seems that the images in your dataset might not all have the same size, as the ViT model (https://arxiv.org/abs/2010.11929) you are using expects.

If that is not the case, it is worth checking whether your labels all have the expected dimensions.
Presumably, MMSegmentation expects the label to be just the annotation map (a 2D array).
It is recommended that you revise your dataset and prepare the annotation maps accordingly.
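Following that advice, annotations saved as colour images can be collapsed into 2D class-index maps before training. A minimal sketch, assuming a palette-style dataset; the `PALETTE` colours and class indices below are placeholders, so substitute your dataset's own mapping:

```python
import numpy as np

# Placeholder palette: RGB colour -> class index (substitute your dataset's).
PALETTE = {(0, 0, 0): 0, (255, 0, 0): 1}

def to_label_map(ann):
    """Collapse an (H, W, 3) RGB mask into an (H, W) array of class indices."""
    if ann.ndim == 2:
        return ann  # already a 2D annotation map
    label = np.zeros(ann.shape[:2], dtype=np.uint8)
    for colour, idx in PALETTE.items():
        # Mark every pixel whose RGB value matches this palette entry.
        label[(ann[..., :3] == np.array(colour)).all(axis=-1)] = idx
    return label
```

Running this over every file in the annotation directory (e.g. loading and re-saving with PIL) makes each sample collate to the same rank, which is what the `torch.stack` error above is complaining about.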
