How to use scipy minimize with a GPU in Google Colab?
I had to change my runtime type to GPU in Colab, as otherwise the RAM was crashing. However, when I use the GPU I get an error while executing the scipy minimization. The error is as follows:
------Start--------
Traceback (most recent call last):
File "<ipython-input-8-4ca37ba86fbb>", line 119, in train
result=minimize(objective,val,constraints=cons,options={"disp":True})
File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/_minimize.py", line 618, in minimize
constraints, callback=callback, **options)
File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/slsqp.py", line 315, in _minimize_slsqp
for c in cons['ineq']]))
File "/usr/local/lib/python3.7/dist-packages/scipy/optimize/slsqp.py", line 315, in <listcomp>
for c in cons['ineq']]))
File "<ipython-input-8-4ca37ba86fbb>", line 64, in constraint
return -(A @ v)+alpha # scipy proves >= for constraints
File "/usr/local/lib/python3.7/dist-packages/torch/_tensor.py", line 678, in __array__
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
------End--------
How do I get rid of this problem? Which tensor do I need to copy to host memory? The objective I want to minimize and the constraint are as follows:
import torch
from scipy.optimize import minimize

# A (the constraint matrix) and val (the initial guess) are defined earlier in the notebook.

# Declaring the minimization objective here
def objective(x):
    alpha = x[0]
    v = x[1:len(x)]
    vnorm = torch.linalg.vector_norm(v) * torch.linalg.vector_norm(v)
    return alpha + (vnorm / 2)

# Declaring the constraint here
def constraint(x):
    alpha = x[0]
    v = x[1:len(x)]
    return -(A @ v) + alpha  # scipy expects >= 0 for 'ineq' constraints

cons = {'type': 'ineq', 'fun': constraint}
result = minimize(objective, val, constraints=cons, options={"disp": True})
1 Answer
Either the val variable is a torch.Tensor, or the matrix A used in the constraint function is one. So if val is a torch.Tensor, compute result with the following line:
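A sketch of that call, assuming val is a CUDA torch.Tensor (A may be one as well):

from scipy.optimize import minimize

# Copy val from the GPU to host memory and convert it to a NumPy array,
# since scipy.optimize.minimize works with plain ndarrays, not CUDA tensors.
result = minimize(objective,
                  val.cpu().numpy(),
                  constraints=cons,
                  options={"disp": True})

# If A is also a CUDA tensor, move it to the host the same way before
# constraint() uses it, e.g. A = A.cpu().numpy()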
That way val is transferred to the host and turned into an ndarray, as expected by the documentation of minimize. Turning A into an ndarray (if needed) can be done the same way.