Parallel dynamic programming

Posted 2024-07-27 19:02:10

Are there any good papers discussing how to take a dynamic program and parallelize it?

3 Answers

嘿哥们儿 2024-08-03 19:02:10

We recently published a paper showing how to parallelize any dynamic program on a shared-memory multicore computer by means of a shared lock-free hash table:

Stivala, A., Stuckey, P. J., Garcia de la Banda, M., Hermenegildo, M., and Wirth, A. (2010). "Lock-free parallel dynamic programming." J. Parallel Distrib. Comput. 70:839–848. doi:10.1016/j.jpdc.2010.01.004

Essentially, you start multiple threads, all running the same code: each starts at the value of the dynamic program you want to compute, computes it top-down (recursively), and memoizes results in a shared lock-free hash table, but randomizes the order in which subproblems are computed so that the threads diverge in which subproblems they work on.

In terms of implementation, we just used C and pthreads on UNIX-type systems; all you need is shared memory and a CompareAndSwap (CAS) operation for lock-free synchronization between threads.
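To make this concrete, here is a minimal, self-contained C11/pthreads sketch of the scheme described above. It is not the paper's code: the dynamic program is a toy recurrence (Fibonacci), the memo table is a fixed-size direct-mapped array with no collision handling, and the names, sizes, and the tiny xorshift PRNG are all illustrative assumptions. What it does show is the essential mechanics: every thread runs the same top-down solver on the top value, publishes memoized results into the shared table with atomic compare-and-swap (no locks), and randomizes the order in which it visits subproblems so the threads tend to work on different parts of the table.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE  4096        /* illustrative; no collision handling */
#define EMPTY_KEY   UINT64_MAX  /* marks an unclaimed slot */
#define NUM_THREADS 4
#define TOP         40          /* the value of the DP we want */

typedef struct {
    _Atomic uint64_t key;       /* subproblem id, EMPTY_KEY if free */
    _Atomic uint64_t value;     /* memoized result; 0 means "not yet written" */
} slot_t;

static slot_t table[TABLE_SIZE];

/* Tiny per-thread xorshift PRNG used to randomize subproblem order. */
static unsigned next_rand(unsigned *s) {
    *s ^= *s << 13; *s ^= *s >> 17; *s ^= *s << 5;
    return *s;
}

/* Return 1 and fill *out if this subproblem is already memoized. */
static int memo_lookup(uint64_t key, uint64_t *out) {
    size_t i = key % TABLE_SIZE;
    if (atomic_load(&table[i].key) == key) {
        uint64_t v = atomic_load(&table[i].value);
        if (v != 0) { *out = v; return 1; }
    }
    return 0;
}

/* Publish a result: claim the slot's key with CAS, then write the value.
 * If another thread already claimed the slot for the same key, both write
 * the same deterministic value, so no lock is needed. */
static void memo_store(uint64_t key, uint64_t value) {
    size_t i = key % TABLE_SIZE;
    uint64_t expected = EMPTY_KEY;
    atomic_compare_exchange_strong(&table[i].key, &expected, key);
    if (atomic_load(&table[i].key) == key)
        atomic_store(&table[i].value, value);
}

/* Top-down (recursive) DP with memoization and randomized child order. */
static uint64_t solve(uint64_t n, unsigned *seed) {
    uint64_t v, a, b;
    if (n <= 2) return 1;                 /* base case of the toy recurrence */
    if (memo_lookup(n, &v)) return v;

    if (next_rand(seed) & 1) {            /* randomize which child goes first */
        a = solve(n - 1, seed); b = solve(n - 2, seed);
    } else {
        b = solve(n - 2, seed); a = solve(n - 1, seed);
    }
    v = a + b;
    memo_store(n, v);
    return v;
}

/* Every thread runs the same code, starting from the top value. */
static void *worker(void *arg) {
    unsigned seed = (unsigned)(uintptr_t)arg;   /* distinct nonzero seed */
    printf("f(%d) = %llu\n", TOP, (unsigned long long)solve(TOP, &seed));
    return NULL;
}

int main(void) {
    for (size_t i = 0; i < TABLE_SIZE; i++)
        atomic_init(&table[i].key, EMPTY_KEY);

    pthread_t t[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)(uintptr_t)(i + 1));
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

On a POSIX system this should build with something like cc -std=c11 -pthread file.c; a real implementation would of course need a proper hash function, collision handling, and a table sized to the number of subproblems.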

Because this paper was published in an Elsevier journal, you'll need to access the above through a University library or similar with a subscription to it. You might be able to get a pre-print copy via Prof. Stuckey's webpage though.

避讳 2024-08-03 19:02:10

IIRC, what you typically do with dynamic programming is to recursively divide a problem into subproblems, and assemble optimal solutions from optimal subsolutions. What makes it effective is that all optimal subsolutions are built into a cache so they need not be recomputed.

If the problem can be divided several ways, you can fork a solver for each subsolution. If each (sub)problem averages 1+epsilon possible subsolutions (with epsilon, interestingly, greater than zero), you'll get a lot of parallelism this way. You'll probably need locks on the cache entries to prevent an individual solution from being constructed more than once.

You need a language in which you can fork subtasks more cheaply than the work to solve them, and which is happy to have lots of forked tasks at once. The typical parallel offerings in most languages do this badly; you can't have lots of forked tasks in systems that use "the big stack model" (see How does a stackless language work?).

We implemented our parallel programming language, PARLANSE, to get a language that had the right properties.
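For concreteness, here is a rough C/pthreads sketch (not PARLANSE, and not this answer's actual code) of the structure being described: a cache with one lock per entry so each optimal subsolution is constructed at most once, and a solver forked for one subproblem at each division while the current thread solves the other. The toy recurrence and all names are illustrative assumptions, and the sketch also illustrates the answer's caveat: forking a full OS thread per subproblem, as done here, is exactly the kind of heavyweight parallel offering that motivates a language with cheap fine-grained forking.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_N 32   /* illustrative problem size (toy recurrence: Fibonacci) */

typedef struct {
    pthread_mutex_t lock;   /* protects construction of this cache entry */
    int             done;   /* 1 once value is valid */
    uint64_t        value;
} entry_t;

static entry_t cache[MAX_N + 1];

static uint64_t solve(int n);

/* Thread entry point: solve one subproblem, return its value via void*. */
static void *solve_task(void *arg) {
    return (void *)(uintptr_t)solve((int)(intptr_t)arg);
}

static uint64_t solve(int n) {
    entry_t *e = &cache[n];

    /* Lock the cache entry so each subsolution is constructed at most once;
     * any other thread needing it simply waits and reuses the result. */
    pthread_mutex_lock(&e->lock);
    if (!e->done) {
        if (n <= 2) {
            e->value = 1;
        } else {
            /* Fork a solver for one subproblem, solve the other ourselves,
             * then assemble the solution from the two subsolutions. */
            pthread_t child;
            pthread_create(&child, NULL, solve_task,
                           (void *)(intptr_t)(n - 1));
            uint64_t b = solve(n - 2);
            void *ret;
            pthread_join(child, &ret);
            e->value = (uint64_t)(uintptr_t)ret + b;
        }
        e->done = 1;
    }
    uint64_t v = e->value;
    pthread_mutex_unlock(&e->lock);
    return v;
}

int main(void) {
    for (int i = 0; i <= MAX_N; i++)
        pthread_mutex_init(&cache[i].lock, NULL);
    printf("f(%d) = %llu\n", MAX_N, (unsigned long long)solve(MAX_N));
    return 0;
}
```

This should build with something like cc -pthread file.c; even at this toy scale the per-fork cost dwarfs the arithmetic, which is the point about needing forks cheaper than the work they carry.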

迎风吟唱 2024-08-03 19:02:10

There has been some work on implementing dynamic programming algorithms on GPUs, for example:

Dynamic Programming with CUDA
GPU optimized dynamic programming
A GPU Implementation of Dynamic Programming for the Optimal Polygon Triangulation
