Using random numbers via the GPU

Posted on 2024-09-07 07:50:44

I'm investigating using NVIDIA GPUs for Monte-Carlo simulations. However, I would like to use the GSL random number generators and also a parallel random number generator such as SPRNG. Does anyone know if this is possible?

Update

I've played about with RNG using GPUs. At present there isn't a nice solution. The Mersenne Twister that comes with the SDK isn't really suitable for (my) Monte-Carlo simulations since it takes an incredibly long time to generate seeds.

The NAG libraries are more promising. You can generate RNs either in batches or in individual threads. However, only a few distributions are currently supported - uniform, exponential and normal.
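
For reference, the "batch" style of GPU generation looks roughly like the sketch below, written against NVIDIA's cuRAND host API rather than the NAG library (cuRAND is only an illustrative stand-in here; the buffer size and seed are arbitrary):

    // Sketch: fill a device buffer with a batch of uniform draws using cuRAND.
    // This illustrates batch generation on the GPU, not the NAG interface.
    #include <cstdio>
    #include <cuda_runtime.h>
    #include <curand.h>

    int main() {
        const size_t n = 1 << 20;                       // draws per batch (arbitrary)
        float *d_rand;
        cudaMalloc((void **)&d_rand, n * sizeof(float));

        curandGenerator_t gen;
        curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
        curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);

        // One call fills the whole device buffer with uniform (0,1] samples.
        curandGenerateUniform(gen, d_rand, n);

        // Copy a few values back just to look at them.
        float h_rand[4];
        cudaMemcpy(h_rand, d_rand, sizeof(h_rand), cudaMemcpyDeviceToHost);
        printf("%f %f %f %f\n", h_rand[0], h_rand[1], h_rand[2], h_rand[3]);

        curandDestroyGenerator(gen);
        cudaFree(d_rand);
        return 0;
    }

Normal batches work the same way through curandGenerateNormal, which mirrors the small set of distributions noted above.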

Comments (7)

烟酒忠诚 2024-09-14 07:50:45

Use the Mersenne Twister PRNG, as provided in the CUDA SDK.
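
The SDK sample itself is not reproduced here; as a rough sketch of the "one generator state per thread" style, this is what the cuRAND device API looks like with its default XORWOW state (a different generator than the SDK's Mersenne Twister sample, shown only under that assumption):

    // Sketch: per-thread draws inside a kernel via cuRAND's device API.
    // Uses the default XORWOW state, not the SDK's Mersenne Twister sample.
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    __global__ void mc_kernel(float *out, unsigned long long seed, int n) {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid >= n) return;

        // Same seed, different subsequence per thread -> one stream per thread.
        curandState state;
        curand_init(seed, tid, 0, &state);

        out[tid] = curand_uniform(&state);   // one uniform draw per thread
    }

    int main() {
        const int n = 1024;
        float *d_out;
        cudaMalloc((void **)&d_out, n * sizeof(float));
        mc_kernel<<<(n + 255) / 256, 256>>>(d_out, 1234ULL, n);
        cudaDeviceSynchronize();
        cudaFree(d_out);
        return 0;
    }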

过去的过去 2024-09-14 07:50:45

Here we use Sobol sequences on the GPUs.
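
The answer does not say which implementation was used; one way to get a Sobol sequence on the GPU is cuRAND's quasi-random host API, sketched here under that assumption:

    // Sketch: a multi-dimensional Sobol sequence on the GPU via cuRAND
    // (one possible implementation; not necessarily the one used above).
    #include <cuda_runtime.h>
    #include <curand.h>

    int main() {
        const size_t n = 1 << 16;            // points per dimension (arbitrary)
        const unsigned int dims = 3;         // e.g. a 3-dimensional integrand
        float *d_points;
        cudaMalloc((void **)&d_points, n * dims * sizeof(float));

        curandGenerator_t gen;
        curandCreateGenerator(&gen, CURAND_RNG_QUASI_SOBOL32);
        curandSetQuasiRandomGeneratorDimensions(gen, dims);

        // Total count must be a multiple of the dimension count.
        curandGenerateUniform(gen, d_points, n * dims);

        curandDestroyGenerator(gen);
        cudaFree(d_points);
        return 0;
    }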

最单纯的乌龟 2024-09-14 07:50:45

You will have to implement them yourself.

晨敛清荷 2024-09-14 07:50:44

The GSL manual recommends the Mersenne Twister.

The Mersenne Twister authors have a version for Nvidia GPUs. I looked into porting this to the R package gputools but found that I needed an excessively large number of draws (millions, I think) before the combination of "generate on GPU and make available to R" was faster than just drawing in R (using only the CPU).

It really is a computation / communication tradeoff.
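
For context, the CPU-only baseline being compared against is just the ordinary GSL Mersenne Twister; a minimal sketch (seed and draw count are arbitrary):

    /* Sketch: CPU-only draws with GSL's mt19937, the baseline referred to above. */
    #include <stdio.h>
    #include <gsl/gsl_rng.h>

    int main(void) {
        gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);   /* the MT recommended by the GSL manual */
        gsl_rng_set(r, 1234UL);                        /* seed */

        double sum = 0.0;
        for (int i = 0; i < 1000000; ++i)
            sum += gsl_rng_uniform(r);                 /* uniform in [0,1) */
        printf("mean = %f\n", sum / 1e6);

        gsl_rng_free(r);
        return 0;
    }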

芸娘子的小脾气 2024-09-14 07:50:44

My colleagues and I have a preprint, to appear at the SC11 conference, that revisits an alternative technique for generating random numbers that is well suited to GPUs. The idea is that the nth random number is:

x_n = f(n) 

In contrast to the conventional approach where

x_n = f(x_{n-1})

Source code is available, implementing several different generators that offer 2^64 or more streams, each with a period of 2^128 or more. All of them pass a wide assortment of tests (the TestU01 Crush and BigCrush suites) of both intra-stream and inter-stream statistical independence. The library also includes adapters that allow you to use our generators in a GSL framework.
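
To make the x_n = f(n) idea concrete, here is a toy sketch in which a splitmix64-style bit mixer stands in for f; it only illustrates stateless, index-addressed generation and is not one of the generators from the preprint:

    /* Toy sketch of a counter-based generator: x_n = f(n), where f is a
     * stateless bit mixer.  Because each draw depends only on its index
     * (plus a stream key), GPU threads can compute their own draws with
     * no carried state and no communication. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t mix(uint64_t x) {                  /* stand-in for f */
        x += 0x9E3779B97F4A7C15ULL;
        x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
        x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
        return x ^ (x >> 31);
    }

    int main(void) {
        uint64_t key = 42;                             /* plays the role of a stream id */
        for (uint64_t n = 0; n < 4; ++n) {
            /* take the top 53 bits and map them to a double in [0,1) */
            double u = (mix(key ^ mix(n)) >> 11) * (1.0 / 9007199254740992.0);
            printf("x_%llu = %f\n", (unsigned long long)n, u);
        }
        return 0;
    }

In a kernel, each thread would simply evaluate the mixer at its own indices, which is what makes this style attractive on GPUs.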

纵山崖 2024-09-14 07:50:44

Massively parallel random number generation, as you need it for GPUs, is a difficult problem and an active research topic. You really have to be careful not only to have a good sequential random generator (you can find those in the literature) but also to have something that guarantees the parallel streams are independent. Pairwise independence is not sufficient for a good Monte Carlo simulation. AFAIK there is no good public-domain code available.

空名 2024-09-14 07:50:44

I've just found that NAG provides some RNG routines. These libraries are free for academics.
