Fixing a CUDA "out of memory" error through code changes

Published 2025-02-05 07:29:57


I keep getting the following error while running this code on a server with GPU:

RuntimeError: CUDA out of memory. Tried to allocate 10.99 GiB (GPU 0; 10.76 GiB total capacity; 707.86 MiB already allocated; 2.61 GiB free; 726.00 MiB reserved in total by PyTorch)
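The figures quoted in that message are already revealing: the single failed allocation (10.99 GiB) is larger than the card's entire capacity (10.76 GiB), so no amount of freeing other buffers can make it fit. The sketch below just restates the numbers from the error message:

```python
# Figures quoted from the error message above (GiB unless noted).
total_capacity = 10.76              # GPU 0 total capacity
requested = 10.99                   # size of the single failed allocation
free = 2.61                         # memory reported free
already_allocated = 707.86 / 1024   # 707.86 MiB, converted to GiB
reserved = 726.00 / 1024            # 726.00 MiB reserved by PyTorch

# The requested allocation alone exceeds the whole card, so the tensor
# that triggers it must itself be made smaller (e.g. via batch size).
print(requested > total_capacity)   # True
print(requested > free)             # True
```

This is why shrinking the batch size changed the error rather than leaving it identical: the original request could never have succeeded on this GPU.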

I added a garbage collector. I tried making the batch size really small (from 10000 to 10) and now the error has changed to:

(main.py:2595652): Gdk-CRITICAL **: 11:16:04.013: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
2022-06-07 11:16:05.909522: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
  File "main.py", line 194, in <module>
    psm = psm.cuda()
  File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 637, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 530, in _apply
    module._apply(fn)
  File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 530, in _apply
    module._apply(fn)
  File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 552, in _apply
    param_applied = fn(param)
  File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 637, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Here is the relevant part of PSM. I copied it because the error points at the line psm = psm.cuda():

class PSM(nn.Module):
    def __init__(self, n_classes, k, fr, num_feat_map=64, p=0.3, shar_channels=3):
        super(PSM, self).__init__()
        self.shar_channels = shar_channels
        self.num_feat_map = num_feat_map
        self.encoder = Encoder(k, fr, num_feat_map, p, shar_channels)
        self.decoder = Decoder(n_classes, p)

    def __call__(self, x):
        return self.forward(x)

    def forward(self, x):
        encodes = []
        outputs = []
        for device in x:
            encode = self.encoder(device)
            outputs.append(self.decoder(encode.cuda()))
            encodes.append(encode)
        # Add shared channel
        shared_encode = torch.mean(torch.stack(encodes), 2).permute(1,0,2).cuda()
        outputs.append(self.decoder(shared_encode))
        return torch.mean(torch.stack(outputs), 0)
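One likely contributor is that this forward pass calls .cuda() on intermediate tensors, so data is moved to the GPU piecemeal inside the model instead of once up front. Below is a minimal device-agnostic sketch of the same structure, with placeholder Encoder/Decoder modules standing in for the question's (their definitions aren't shown) and the shared-channel arithmetic simplified since the exact tensor shapes are unknown:

```python
import torch
import torch.nn as nn

# Placeholder Encoder/Decoder so the sketch runs; the question's real
# modules take different constructor arguments.
class Encoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Linear(dim, dim)

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, dim, n_classes):
        super().__init__()
        self.net = nn.Linear(dim, n_classes)

    def forward(self, x):
        return self.net(x)

class PSM(nn.Module):
    def __init__(self, dim, n_classes):
        super().__init__()
        self.encoder = Encoder(dim)
        self.decoder = Decoder(dim, n_classes)
        # No __call__ override: nn.Module.__call__ already dispatches
        # to forward(), and overriding it skips hooks.

    def forward(self, x):
        # x: iterable of per-device tensors, already on the model's device.
        # No .cuda() calls inside forward -- the device is chosen outside.
        encodes = [self.encoder(t) for t in x]
        outputs = [self.decoder(e) for e in encodes]
        shared = torch.mean(torch.stack(encodes), 0)  # shared channel (simplified)
        outputs.append(self.decoder(shared))
        return torch.mean(torch.stack(outputs), 0)

# Choose the device once, then move model and inputs together.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
psm = PSM(dim=8, n_classes=4).to(device)
x = [torch.randn(5, 8, device=device) for _ in range(3)]
out = psm(x)
print(out.shape)  # torch.Size([5, 4])
```

Moving the model with `.to(device)` once, and creating inputs directly on that device, keeps all allocations on one GPU and makes it trivial to fall back to CPU when debugging memory issues.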


Comments (1)

并安 2025-02-12 07:30:00


This worked for me:

  1. I ran nvidia-smi in the terminal and found a GPU that was less busy.
  2. Then adding torch.cuda.set_device(1) to my code worked, since device 1 was less busy. I also reduced the batch size.
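The device-picking step can be automated by querying nvidia-smi for free memory per GPU. A small sketch, using the real `--query-gpu=index,memory.free --format=csv,noheader,nounits` query (the sample output string below is hypothetical):

```python
def least_busy_gpu(smi_query_output: str) -> int:
    """Return the GPU index with the most free memory, given the output of:
    nvidia-smi --query-gpu=index,memory.free --format=csv,noheader,nounits
    Each line looks like: "0, 2672" (index, free MiB)."""
    best_index, best_free = -1, -1
    for line in smi_query_output.strip().splitlines():
        index, free_mib = (int(field) for field in line.split(","))
        if free_mib > best_free:
            best_index, best_free = index, free_mib
    return best_index

# Hypothetical sample: GPU 0 is nearly full, GPU 1 is mostly free.
sample = "0, 2672\n1, 10874\n"
print(least_busy_gpu(sample))  # 1

# With PyTorch you would then bind to that device, e.g.:
#   torch.cuda.set_device(least_busy_gpu(...))
# or set CUDA_VISIBLE_DEVICES before launching the script.
```

Setting `CUDA_VISIBLE_DEVICES` in the environment has the advantage of hiding the busy GPU from the process entirely, so no stray `.cuda()` call can land on it.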