How do I use SciPy minimize with a GPU runtime in Google Colab?

Posted 2025-01-23 18:50:25


I had to change my runtime type to GPU in Colab because otherwise the RAM was crashing. However, when I use the GPU I get an error while executing the SciPy minimization. The error is as follows:

------Start--------
Traceback (most recent call last):
  File "<ipython-input-8-4ca37ba86fbb>", line 119, in train
    result=minimize(objective,val,constraints=cons,options={"disp":True})
  File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/_minimize.py", line 618, in minimize
    constraints, callback=callback, **options)
  File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/slsqp.py", line 315, in _minimize_slsqp
    for c in cons['ineq']]))
  File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/slsqp.py", line 315, in <listcomp>
    for c in cons['ineq']]))
  File "<ipython-input-8-4ca37ba86fbb>", line 64, in constraint
    return -(A @ v)+alpha   # scipy proves >= for constraints
  File "/usr/local/lib/python3.7/dist-packages/torch/_tensor.py", line 678, in __array__
    return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

------End--------

How do I get rid of this problem? Which tensor do I need to copy to host memory? The objective I am minimizing and the constraint are as follows:

# Declaring the minimization equation here

def objective(x):
    alpha = x[0]
    v = x[1:len(x)]
    vnorm = torch.linalg.vector_norm(v) * torch.linalg.vector_norm(v)
    return alpha + (vnorm / 2)

# Declaring the constraint here

def constraint(x):
    alpha = x[0]
    v = x[1:len(x)]
    return -(A @ v) + alpha


cons = {'type': 'ineq', 'fun': constraint}
result = minimize(objective, val, constraints=cons, options={"disp": True})


Comments (1)

海的爱人是光 2025-01-30 18:50:25

The offending tensor is either the val variable, which is a torch.Tensor, or the matrix A used in the constraint function. If val is a torch.Tensor, compute result with the following line:

result = minimize(objective, val.cpu().numpy(), constraints=cons, options={"disp" : True})

That way val is transferred to the host and converted to an ndarray, as expected by the documentation for minimize. Converting A to an ndarray (if needed) can be done the same way.
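For completeness, here is a minimal sketch of the whole call with everything moved to host memory. It reuses the objective, constraint, A and val names from the question; the example shapes, the device fallback, and the A_np helper are illustrative stand-ins of mine, and the objective is rewritten with NumPy so that no CUDA tensor ever reaches SciPy:

import torch
import numpy as np
from scipy.optimize import minimize

# Stand-in data: in the question A and val already exist as CUDA tensors.
device = "cuda" if torch.cuda.is_available() else "cpu"
A = torch.randn(5, 3, device=device)    # hypothetical shape, just for illustration
val = torch.randn(4, device=device)     # packs [alpha, v] as in the question

A_np = A.cpu().numpy()                  # host-side copy of the constraint matrix

def objective(x):
    alpha = x[0]
    v = x[1:]
    return alpha + np.dot(v, v) / 2     # alpha + ||v||^2 / 2, all in NumPy

def constraint(x):
    alpha = x[0]
    v = x[1:]
    return -(A_np @ v) + alpha          # uses the NumPy copy, so SciPy never sees a GPU tensor

cons = {'type': 'ineq', 'fun': constraint}
result = minimize(objective, val.cpu().numpy(), constraints=cons, options={"disp": True})
print(result.x)

Keep in mind that minimize itself runs on the CPU with NumPy arrays, so the GPU runtime only speeds up the rest of the notebook; anything the objective or constraint touches has to live in host memory.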
