128-bit integers on CUDA?

Posted 2024-11-10 16:14:57

I just managed to install the CUDA SDK under Linux Ubuntu 10.04. My graphics card is an NVIDIA GeForce GT 425M, and I'd like to use it for some heavy computational problems.
What I wonder is: is there any way to use an unsigned 128-bit int variable? When using gcc to build my program for the CPU, I was using the __uint128_t type, but it doesn't seem to work with CUDA.
Is there anything I can do to have 128-bit integers on CUDA?

Comments (4)

刘备忘录 2024-11-17 16:14:57

For best performance, one would want to map the 128-bit type on top of a suitable CUDA vector type, such as uint4, and implement the functionality using PTX inline assembly. The addition would look something like this:

typedef uint4 my_uint128_t;
__device__ my_uint128_t add_uint128 (my_uint128_t addend, my_uint128_t augend)
{
    my_uint128_t res;
    asm ("add.cc.u32      %0, %4, %8;\n\t"
         "addc.cc.u32     %1, %5, %9;\n\t"
         "addc.cc.u32     %2, %6, %10;\n\t"
         "addc.u32        %3, %7, %11;\n\t"
         : "=r"(res.x), "=r"(res.y), "=r"(res.z), "=r"(res.w)
         : "r"(addend.x), "r"(addend.y), "r"(addend.z), "r"(addend.w),
           "r"(augend.x), "r"(augend.y), "r"(augend.z), "r"(augend.w));
    return res;
}

The multiplication can similarly be constructed using PTX inline assembly by breaking the 128-bit numbers into 32-bit chunks, computing the 64-bit partial products and adding them appropriately. Obviously this takes a bit of work. One might get reasonable performance at the C level by breaking the number into 64-bit chunks and using __umul64hi() in conjunction with regular 64-bit multiplication and some additions. This would result in the following:

__device__ my_uint128_t mul_uint128 (my_uint128_t multiplicand, 
                                     my_uint128_t multiplier)
{
    my_uint128_t res;
    unsigned long long ahi, alo, bhi, blo, phi, plo;
    alo = ((unsigned long long)multiplicand.y << 32) | multiplicand.x;
    ahi = ((unsigned long long)multiplicand.w << 32) | multiplicand.z;
    blo = ((unsigned long long)multiplier.y << 32) | multiplier.x;
    bhi = ((unsigned long long)multiplier.w << 32) | multiplier.z;
    plo = alo * blo;
    phi = __umul64hi (alo, blo) + alo * bhi + ahi * blo;
    res.x = (unsigned int)(plo & 0xffffffff);
    res.y = (unsigned int)(plo >> 32);
    res.z = (unsigned int)(phi & 0xffffffff);
    res.w = (unsigned int)(phi >> 32);
    return res;
}

Below is a version of the 128-bit multiplication that uses PTX inline assembly. It requires PTX 3.0, which shipped with CUDA 4.2, and the code requires a GPU with at least compute capability 2.0, i.e. a Fermi or Kepler class device. The code uses the minimal number of instructions, as sixteen 32-bit multiplies are needed to implement a 128-bit multiplication. By comparison, the variant above using CUDA intrinsics compiles to 23 instructions for an sm_20 target.

__device__ my_uint128_t mul_uint128 (my_uint128_t a, my_uint128_t b)
{
    my_uint128_t res;
    asm ("{\n\t"
         "mul.lo.u32      %0, %4, %8;    \n\t"
         "mul.hi.u32      %1, %4, %8;    \n\t"
         "mad.lo.cc.u32   %1, %4, %9, %1;\n\t"
         "madc.hi.u32     %2, %4, %9,  0;\n\t"
         "mad.lo.cc.u32   %1, %5, %8, %1;\n\t"
         "madc.hi.cc.u32  %2, %5, %8, %2;\n\t"
         "madc.hi.u32     %3, %4,%10,  0;\n\t"
         "mad.lo.cc.u32   %2, %4,%10, %2;\n\t"
         "madc.hi.u32     %3, %5, %9, %3;\n\t"
         "mad.lo.cc.u32   %2, %5, %9, %2;\n\t"
         "madc.hi.u32     %3, %6, %8, %3;\n\t"
         "mad.lo.cc.u32   %2, %6, %8, %2;\n\t"
         "madc.lo.u32     %3, %4,%11, %3;\n\t"
         "mad.lo.u32      %3, %5,%10, %3;\n\t"
         "mad.lo.u32      %3, %6, %9, %3;\n\t"
         "mad.lo.u32      %3, %7, %8, %3;\n\t"
         "}"
         : "=r"(res.x), "=r"(res.y), "=r"(res.z), "=r"(res.w)
         : "r"(a.x), "r"(a.y), "r"(a.z), "r"(a.w),
           "r"(b.x), "r"(b.y), "r"(b.z), "r"(b.w));
    return res;
}
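
As a quick sanity check, a minimal test harness along the following lines exercises both helpers. The kernel name, the sample values, and the host-side main below are my own additions, not part of the answer above, and the sketch assumes the add_uint128 and mul_uint128 definitions are in the same file (device printf needs the same compute capability 2.0+ the PTX version already requires):

#include <cstdio>

__global__ void test_uint128 (my_uint128_t a, my_uint128_t b)
{
    my_uint128_t s = add_uint128 (a, b);
    my_uint128_t p = mul_uint128 (a, b);
    /* print most significant limb first; .x holds the least significant 32 bits */
    printf ("sum     = %08x%08x%08x%08x\n", s.w, s.z, s.y, s.x);
    printf ("product = %08x%08x%08x%08x\n", p.w, p.z, p.y, p.x);
}

int main (void)
{
    my_uint128_t a = make_uint4 (0xffffffff, 0xffffffff, 0, 0); /* 2^64 - 1 */
    my_uint128_t b = make_uint4 (2, 0, 0, 0);                   /* 2 */
    test_uint128<<<1,1>>>(a, b);
    cudaDeviceSynchronize ();
    return 0;
}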
孤者何惧 2024-11-17 16:14:57

CUDA doesn't support 128-bit integers natively. You can fake the operations yourself using two 64-bit integers.

Look at this post:

typedef struct {
  unsigned long long int lo;
  unsigned long long int hi;
} my_uint128;

my_uint128 add_uint128 (my_uint128 a, my_uint128 b)
{
  my_uint128 res;
  res.lo = a.lo + b.lo;
  res.hi = a.hi + b.hi + (res.lo < a.lo);
  return res;
} 
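
Subtraction can be sketched the same way, propagating a borrow instead of a carry; the sub_uint128 name below is mine, following the naming of the snippet above:

my_uint128 sub_uint128 (my_uint128 a, my_uint128 b)
{
  my_uint128 res;
  res.lo = a.lo - b.lo;
  /* borrow from the high limb when the low limb underflows */
  res.hi = a.hi - b.hi - (a.lo < b.lo);
  return res;
}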
英雄似剑 2024-11-17 16:14:57

For posterity, note that as of CUDA 11.5, nvcc supports __int128_t in device code when the host compiler supports it (e.g., clang/gcc, but not MSVC). CUDA 11.6 added debug-tool support for __int128_t.

See the CUDA release notes for details.
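
A minimal sketch of what this looks like in practice, assuming nvcc 11.5 or newer with a gcc/clang host compiler; the kernel name and sample values are my own:

#include <cstdio>

__global__ void uint128_demo (unsigned long long lo, unsigned long long hi)
{
    /* __int128_t / __uint128_t are usable directly in device code with CUDA 11.5+ */
    __uint128_t x = ((__uint128_t)hi << 64) | lo;
    __uint128_t y = x * x;
    /* printf has no 128-bit conversion, so print the value as two 64-bit halves */
    printf ("%016llx%016llx\n",
            (unsigned long long)(y >> 64), (unsigned long long)y);
}

int main (void)
{
    uint128_demo<<<1,1>>>(0xffffffffffffffffULL, 0ULL); /* x = 2^64 - 1 */
    cudaDeviceSynchronize ();
    return 0;
}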

來不及說愛妳 2024-11-17 16:14:57

A much-belated answer, but you could consider using this library:

https://github.com/curtisseizert/CUDA-uint128

which defines a 128-bit structure, with methods and freestanding utility functions that allow it to be used like a regular integer. Mostly.
