Changing the tensor's size causes an error
I'm trying to understand how transforms.Resize works, and I've run into a confusing point. When I run the code below:
import numpy as np
import torch
from torchvision import transforms
tim = np.array([[[1, 2, 3],
                 [1, 2, 3],
                 [1, 2, 3]],
                [[1, 2, 3],
                 [1, 2, 3],
                 [1, 2, 3]]])  # shape (2, 3, 3)
tim = torch.from_numpy(tim)
tf = transforms.Compose([  # Principle?
    transforms.ToPILImage(),
    transforms.Resize((6, 6)),  # (H, W)
    transforms.ToTensor()
])
mask = tf(tim)
squ = mask.squeeze()
an error occurs:
Traceback (most recent call last):
File "C:/Users/Tim/Desktop/U-Net/test.py", line 62, in <module>
mask = tf(tim)
File "C:\Users\Tim\.conda\envs\Segment\lib\site-packages\torchvision\transforms\transforms.py", line 95, in __call__
img = t(img)
File "C:\Users\Tim\.conda\envs\Segment\lib\site-packages\torchvision\transforms\transforms.py", line 227, in __call__
return F.to_pil_image(pic, self.mode)
File "C:\Users\Tim\.conda\envs\Segment\lib\site-packages\torchvision\transforms\functional.py", line 315, in to_pil_image
raise TypeError(f"Input type {npimg.dtype} is not supported")
TypeError: Input type int32 is not supported
However, when I change the size of the tensor, the problem goes away:
tim = np.array([[[1, 2, 3],
                 [1, 2, 3],
                 [1, 2, 3]]])  # (1, 3, 3)
I'm wondering why this happens, since the error message says nothing about size, only about type. If anyone has any ideas about the cause, please let me know. Thanks for your time!
1 Answer
Change the datatype to float ...
Make sure you know what the input data is.