Some questions about GridSearch on GPU
So I was under the impression that a GridSearch over hyperparameters would be faster on a GPU.
I opened a ticket here: https://github.com/dmlc/xgboost/issues/8041
Basically, my grid search on the GPU takes 300x more time than on the CPU, which I don't understand. Isn't the point of a GPU that it has a lot of CUDA cores, so it should behave like a multi-core grid search on a CPU? And why are all my CPU cores at 100% even while the GPU is running? It reports that it's using 416 MB on each available GPU, while the CPU still sits at 100%.
I also don't quite get the answer I received about small data. Isn't it just that one combination of parameters gets processed on one CUDA core in a certain amount of time (let's say 1 minute), so 1000 combinations on 1000 CUDA cores should also take about 1 minute?
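To make concrete what I mean by "combinations": as far as I understand, `GridSearchCV` runs its fits one after another (or in `n_jobs` CPU processes), and each fit only uses the GPU internally to speed up a single tree build, not to spread combinations across CUDA cores. Here is a sketch (the grid values are made up, just for illustration) of how the number of sequential fits multiplies:

```python
from sklearn.model_selection import ParameterGrid

# Hypothetical hyperparameter grid, similar to a typical XGBoost search
grid = {
    "max_depth": [3, 5, 7, 9],
    "learning_rate": [0.01, 0.05, 0.1, 0.3],
    "n_estimators": [100, 300, 500],
    "subsample": [0.6, 0.8, 1.0],
}

n_combinations = len(ParameterGrid(grid))  # 4 * 4 * 3 * 3 = 144 combinations
cv_folds = 5
total_fits = n_combinations * cv_folds     # each combination is fit once per fold

print(n_combinations, total_fits)
```

So even a modest grid means hundreds of separate model fits, and each one is a separate GPU job with its own launch overhead, which might be where my 1-combination-per-core mental model breaks down.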
I installed the packages in an Anaconda environment: TensorFlow, cudatoolkit, and XGBoost.
For example, why is it 8 times faster here: https://www.kaggle.com/code/vinhnguyen/accelerating-hyper-parameter-searching-with-gpu
I appreciate all the pointers you guys might have :)