Searching a graph for all possible paths between two nodes on the GPU
My work makes extensive use of the algorithm by Migliore, Martorana and Sciortino for finding all possible simple paths, i.e. ones in which no node is encountered more than once, in a graph, as described in: An Algorithm to find All Paths between Two Nodes in a Graph. (Although this algorithm is essentially a depth-first search and intuitively recursive in nature, the authors also present a non-recursive, stack-based implementation.)

I'd like to know whether such an algorithm can be implemented on the GPU. At the moment I'm struggling to see any real parallelism in this problem. For example, the cost of monitoring and dispatching threads might make a cooperative graph search (by hardware threads) prohibitive. Alternatively, a divide-and-conquer strategy could work if the graph were partitioned and assigned to individual hardware threads for searching. However, one would have to figure out how to (1) partition the graph, (2) formulate the subtasks, and (3) combine the results of the searches on the partitions.
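For reference, here is a minimal iterative sketch of the kind of algorithm described above: a depth-first search with an explicit stack that enumerates all simple paths between two nodes. It is not the exact formulation from the Migliore, Martorana and Sciortino paper, just a plain C++ illustration of the sequential baseline that would have to be parallelised; the graph in main() is a made-up example.

```cpp
// Minimal sketch (not the paper's exact formulation): iterative DFS with an
// explicit stack that enumerates all simple paths between src and dst in an
// adjacency-list graph.
#include <cstddef>
#include <cstdio>
#include <vector>

using Graph = std::vector<std::vector<int>>;

void allSimplePaths(const Graph& g, int src, int dst) {
    std::vector<int> path{src};                 // current partial path
    std::vector<bool> onPath(g.size(), false);  // nodes already on the path
    onPath[src] = true;
    // One "stack frame" per path node: index of the next neighbour to try.
    std::vector<std::size_t> nextIdx{0};

    while (!path.empty()) {
        int node = path.back();
        std::size_t& i = nextIdx.back();

        if (node == dst) {
            for (int v : path) std::printf("%d ", v);  // report one path
            std::printf("\n");
        }

        if (node == dst || i == g[node].size()) {
            // Reached the target or exhausted all neighbours: backtrack.
            onPath[node] = false;
            path.pop_back();
            nextIdx.pop_back();
            continue;
        }

        int next = g[node][i++];
        if (!onPath[next]) {        // keep the path simple
            onPath[next] = true;
            path.push_back(next);
            nextIdx.push_back(0);
        }
    }
}

int main() {
    // Small example graph: edges 0-1, 0-2, 1-2, 1-3, 2-3 (undirected).
    Graph g = {{1, 2}, {0, 2, 3}, {0, 1, 3}, {1, 2}};
    allSimplePaths(g, 0, 3);
    return 0;
}
```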
2 Answers
Bit rusty on this. How about Dijkstra?
path[endNode][i] // ith path to endNode from startNode
partitioning: came from node % 2
subtasks: find place to go from node
combining: you have shared memory, right?
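One way to read that hint (my own interpretation, and a CPU-side sketch rather than actual GPU code): expand partial paths level by level, split each frontier between two workers by the parity of the node a path currently ends on (the "node % 2" partitioning), and merge completed paths into a result list held in shared memory. All names below are made up for illustration.

```cpp
// Assumed interpretation of the answer's hint: level-synchronous expansion of
// partial simple paths, with the frontier split between two workers by the
// parity of the last node, and completed src->dst paths collected in a shared,
// mutex-protected result vector.
#include <algorithm>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

using Graph = std::vector<std::vector<int>>;
using Path  = std::vector<int>;

int main() {
    Graph g = {{1, 2}, {0, 2, 3}, {0, 1, 3}, {1, 2}};
    const int src = 0, dst = 3;

    std::vector<Path> frontier{{src}};  // partial simple paths from src
    std::vector<Path> results;          // completed src->dst paths (shared)
    std::mutex resultsMutex;

    while (!frontier.empty()) {
        std::vector<Path> nextFrontier[2];  // one private bucket per worker

        auto expand = [&](int parity) {
            for (const Path& p : frontier) {
                if (p.back() % 2 != parity) continue;  // not this worker's share
                for (int nb : g[p.back()]) {
                    if (std::find(p.begin(), p.end(), nb) != p.end())
                        continue;                      // keep paths simple
                    Path q = p;
                    q.push_back(nb);
                    if (nb == dst) {
                        std::lock_guard<std::mutex> lock(resultsMutex);
                        results.push_back(q);          // "combining" step
                    } else {
                        nextFrontier[parity].push_back(q);
                    }
                }
            }
        };

        std::thread t0(expand, 0), t1(expand, 1);
        t0.join();
        t1.join();

        frontier.clear();
        for (auto& bucket : nextFrontier)
            frontier.insert(frontier.end(), bucket.begin(), bucket.end());
    }

    for (const Path& p : results) {
        for (int v : p) std::printf("%d ", v);
        std::printf("\n");
    }
    return 0;
}
```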
I don't think that your problem can be easily ported to a GPU in a way that would make it perform faster. GPU programs that utilise most of the GPU's power:

I'm not saying that there is no efficient GPU algorithm, but I believe there is no straightforward way to transform the existing algorithm into efficient code.