RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
When I convert my trained PyTorch model to a Core ML model, I get this error:
File "/Users/lion/Documents/MyLab/web_workspace/sky_replacement/venv/lib/python3.9/site-packages/torch/jit/_serialization.py", line 161, in load
cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
This is my code:
import torch
import coremltools as ct
from networks import *  # provides define_G

run_device = torch.device("cpu")

# Build the generator and load the trained weights from the checkpoint
net_G = define_G(input_nc=3, output_nc=1, ngf=64,
                 netG='coord_resnet50').to(run_device)
checkpoint = torch.load('./model/best_ckpt.pt', map_location=run_device)
net_G.load_state_dict(checkpoint['model_G_state_dict'])
net_G.to(run_device)
net_G.eval()

# Convert the checkpoint to Core ML
model = ct.convert('./model/best_ckpt.pt', source='pytorch', inputs=[ct.ImageType()], skip_model_load=True)
model.save("result.mlmodel")
Comments (2)
I had this issue because a failed git LFS smudge corrupted the checkpoint. Check the file size/checksum of your ckpt/pth file.
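A quick way to run that check, as a minimal sketch (only the path ./model/best_ckpt.pt is borrowed from the question's code): a checkpoint written by torch.save in the default format is a zip archive, so a file corrupted by a failed LFS smudge is typically just a tiny pointer file and fails the zip test.

import hashlib
import os
import zipfile

path = './model/best_ckpt.pt'  # checkpoint path from the question

# An un-smudged Git LFS pointer file is only ~130 bytes; a real resnet50 checkpoint is far larger.
print('size (bytes):', os.path.getsize(path))

# Checkpoints saved with the default zip-based serialization must be valid zip archives.
print('valid zip archive:', zipfile.is_zipfile(path))

# Compare this digest against the one recorded where the checkpoint was produced.
with open(path, 'rb') as f:
    print('sha256:', hashlib.sha256(f.read()).hexdigest())

If the size looks like a pointer file, re-run git lfs pull (or re-download the checkpoint) and check again.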
It could be a problem with the PyTorch version and saving mechanism. I had the same problem and solved it by passing the kwarg _use_new_zipfile_serialization=False when saving the model. More details here.
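A minimal sketch of that workaround on the saving side (net_G and the 'model_G_state_dict' key are borrowed from the question; this is not the asker's actual training script):

import torch

# Hypothetical re-save of the checkpoint in the legacy (non-zip) format,
# which sidesteps the zip central-directory check entirely.
torch.save(
    {'model_G_state_dict': net_G.state_dict()},
    './model/best_ckpt.pt',
    _use_new_zipfile_serialization=False,
)

If only the already-saved file is available, loading it with torch.load and re-saving it with this flag has the same effect.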