Changing the tensor's size causes an error

Posted 2025-02-08 02:55:03

I'm trying to understand how transforms.Resize works, and I've run into a confusing point. When I run the code below:

import numpy as np
import torch
from torchvision import transforms

tim = np.array([[[1, 2, 3],
                 [1, 2, 3],
                 [1, 2, 3]],
                [[1, 2, 3],
                 [1, 2, 3],
                 [1, 2, 3]]]) # (2, 3, 3)

tim = torch.from_numpy(tim)

tf = transforms.Compose([  # Principle?
     transforms.ToPILImage(),
     transforms.Resize((6, 6)), # HW
     transforms.ToTensor()
])

mask = tf(tim)
squ = mask.squeeze()

An error occurs:

Traceback (most recent call last):
  File "C:/Users/Tim/Desktop/U-Net/test.py", line 62, in <module>
    mask = tf(tim)
  File "C:\Users\Tim\.conda\envs\Segment\lib\site-packages\torchvision\transforms\transforms.py", line 95, in __call__
    img = t(img)
  File "C:\Users\Tim\.conda\envs\Segment\lib\site-packages\torchvision\transforms\transforms.py", line 227, in __call__
    return F.to_pil_image(pic, self.mode)
  File "C:\Users\Tim\.conda\envs\Segment\lib\site-packages\torchvision\transforms\functional.py", line 315, in to_pil_image
    raise TypeError(f"Input type {npimg.dtype} is not supported")
TypeError: Input type int32 is not supported

However, when I change the shape of the tensor, the problem goes away:

tim = np.array([[[1, 2, 3],
                 [1, 2, 3],
                 [1, 2, 3]]]) # (1, 3, 3)

I'm wondering why this happens, since the error message is about the type, not the size. If anyone has any ideas about the cause, please let me know, and thanks for your time!

Comments (1)

零度℉ 2025-02-15 02:55:03

Change the data type to float:

tim = np.array([[[1, 2, 3],
                 [1, 2, 3],
                 [1, 2, 3]],
                [[1, 2, 3],
                 [1, 2, 3],
                 [1, 2, 3]]], dtype=np.float32) # (2, 3, 3)

Make sure you know what the input data is.
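
As far as I can tell from torchvision's to_pil_image, this also explains why the shape matters: int32 arrays are only accepted for single-channel images (PIL mode "I"), so the (1, 3, 3) tensor converts fine, while a 2-channel int32 array has no matching PIL mode and raises the TypeError. Float tensors are instead multiplied by 255 and cast to uint8, so a 2-channel float tensor maps to PIL's "LA" mode. A minimal sketch of the fix along those lines, assuming the values are normalized to [0, 1] as ToPILImage expects for float input:

import numpy as np
import torch
from torchvision import transforms

# Same toy data, but float32 and scaled into [0, 1]; ToPILImage
# multiplies float tensors by 255 and casts them to uint8, so values
# above 1.0 would wrap around.
tim = np.array([[[1, 2, 3],
                 [1, 2, 3],
                 [1, 2, 3]],
                [[1, 2, 3],
                 [1, 2, 3],
                 [1, 2, 3]]], dtype=np.float32) / 3.0  # (2, 3, 3)

tim = torch.from_numpy(tim)

tf = transforms.Compose([
    transforms.ToPILImage(),    # 2-channel float -> uint8 "LA" image
    transforms.Resize((6, 6)),
    transforms.ToTensor(),
])

mask = tf(tim)
print(mask.shape)  # expected: torch.Size([2, 6, 6])

Since the variable is named mask, one more caution: the default bilinear resize will blend label values at region boundaries, so nearest-neighbor interpolation is the usual choice when resizing segmentation masks.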
