OpenCL kernel recompilation slowing the program down, and possible memory problems as a result

Posted 2024-10-06 03:23:11

I'm fairly new to OpenCL and I'm running OS X 10.6 with an Nvidia 330 graphics card. I'm working on a cloth simulation in C++ for which I've managed to write a kernel that compiles and runs. The problem is that it runs slower than it did on the CPU without OpenCL. I believe the reason is that every time I call the update() method to do some calculations, I set up the context and device and then recompile the kernel from source.

To solve this, I tried encapsulating the various OpenCL types I need as members of the cloth simulation class so they can be stored there, and created an initCL() method to set them up. I then created a runCL() method to execute the kernel. Strangely, I only get memory problems when I separate the OpenCL work into these two methods; it works fine when initCL() and runCL() are combined into a single method, which is why I'm a little stuck.

The program compiles and runs, but I then get a SIGABRT or EXC_BAD_ACCESS at the point marked in the runCL() code. When I get a SIGABRT, the error is CL_INVALID_COMMAND_QUEUE, but I can't work out for the life of me why this only happens when I split up the two methods. Sometimes I get a SIGABRT when an assertion fails, which is to be expected, but other times I just get a bad memory access error when trying to write to a buffer.

Also, if anyone can tell me a better way (or the right way) to do this, or whether JIT recompilation isn't actually what's slowing my code down, I'd be very grateful, because I've been staring at this for far too long!

Thanks,

Jon
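
For context, the approach described above amounts to holding the OpenCL handles as members of the cloth simulation class, so that initCL() creates them once and runCL() reuses them every frame. A rough sketch of what that class layout might look like, inferred from the identifiers used in the code below rather than taken from the actual header:

Code:

class VPESimulationCloth {
    // ... particle and force arrays: nowPos, prevPos, answerPos, theForces, etc. ...
    cl_device_id     device;       // chosen once in initCL()
    cl_context       context;      // created once in initCL()
    cl_command_queue cmd_queue;    // created once in initCL()
    cl_program       program[1];   // built once in initCL()
    cl_kernel        kernel[1];    // created once in initCL()
    cl_int           err;
    size_t           returned_size;
    size_t           buffer_size;

public:
    int initCL();   // one-off setup
    int runCL();    // per-frame execution
};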

The initialisation of the OpenCL variables
Code:

int VPESimulationCloth::initCL(){
   // Find the CPU CL device, as a fallback
   err = clGetDeviceIDs(NULL, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
   assert(err == CL_SUCCESS);

   // Find the GPU CL device, this is what we really want
// If there is no GPU device is CL capable, fall back to CPU
  err = clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
if (err != CL_SUCCESS) err = clGetDeviceIDs(NULL, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
assert(device);

// Get some information about the returned device
cl_char vendor_name[1024] = {0};
cl_char device_name[1024] = {0};
err = clGetDeviceInfo(device, CL_DEVICE_VENDOR, sizeof(vendor_name), 
                vendor_name, &returned_size);
err |= clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(device_name), 
                 device_name, &returned_size);
assert(err == CL_SUCCESS);
//printf("Connecting to %s %s...\n", vendor_name, device_name);

// Now create a context to perform our calculation with the 
// specified device 
context = clCreateContext(0, 1, &device, NULL, NULL, &err);
assert(err == CL_SUCCESS);

// And also a command queue for the context
cmd_queue = clCreateCommandQueue(context, device, 0, NULL);

// Load the program source from disk
// The kernel/program should be in the resource directory
const char * filename = "clothSimKernel.cl";
char *program_source = load_program_source(filename);


program[0] = clCreateProgramWithSource(context, 1, (const char**)&program_source,
                             NULL, &err);
if (!program[0])
{
   printf("Error: Failed to create compute program!\n");
   return EXIT_FAILURE;
}
assert(err == CL_SUCCESS);

err = clBuildProgram(program[0], 0, NULL, NULL, NULL, NULL);
if (err != CL_SUCCESS)
{
   char build[2048];
   clGetProgramBuildInfo(program[0], device, CL_PROGRAM_BUILD_LOG, 2048, build, NULL);
   printf("Build Log:\n%s\n",build);
   if (err == CL_BUILD_PROGRAM_FAILURE) {
      printf("CL_BUILD_PROGRAM_FAILURE\n");
   }
}
if (err != CL_SUCCESS) {
   cout<<getErrorDesc(err)<<endl;
}
assert(err == CL_SUCCESS);
//writeBinaries();
// Now create the kernel "objects" that we want to use in the example file 
kernel[0] = clCreateKernel(program[0], "clothSimulation", &err);

}
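
One thing worth noting in initCL() above: clCreateCommandQueue is called with NULL as its error argument, so a failure there goes unnoticed until some later call reports CL_INVALID_COMMAND_QUEUE. A minimal checked variant (the same call, just passing the existing err variable) might look like this:

Code:

// And also a command queue for the context, this time checking the result
cmd_queue = clCreateCommandQueue(context, device, 0, &err);
if (err != CL_SUCCESS || !cmd_queue) {
   printf("Error: Failed to create command queue (%d)\n", err);
   return EXIT_FAILURE;
}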

The method that executes the kernel
Code:

int VPESimulationCloth::runCL(){

// Find the GPU CL device, this is what we really want
// If there is no GPU device is CL capable, fall back to CPU
err = clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
if (err != CL_SUCCESS) err = clGetDeviceIDs(NULL, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
assert(device);

// Get some information about the returned device
cl_char vendor_name[1024] = {0};
cl_char device_name[1024] = {0};
err = clGetDeviceInfo(device, CL_DEVICE_VENDOR, sizeof(vendor_name), 
                vendor_name, &returned_size);
err |= clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(device_name), 
                 device_name, &returned_size);
assert(err == CL_SUCCESS);
//printf("Connecting to %s %s...\n", vendor_name, device_name);

// Now create a context to perform our calculation with the 
// specified device 

//cmd_queue = clCreateCommandQueue(context, device, 0, NULL);
//memory allocation
cl_mem nowPos_mem, prevPos_mem, rForce_mem, mass_mem, passive_mem,    canMove_mem,numPart_mem, theForces_mem, numForces_mem, drag_mem, answerPos_mem;

// Allocate memory on the device to hold our data and store the results into
buffer_size = sizeof(float4) * numParts;

// Input arrays 
//------------------------------------
// This is where the error occurs
nowPos_mem = clCreateBuffer(context, CL_MEM_READ_ONLY, buffer_size, NULL, NULL);
err = clEnqueueWriteBuffer(cmd_queue, nowPos_mem, CL_TRUE, 0, buffer_size,
                    (void*)nowPos, 0, NULL, NULL);
if (err != CL_SUCCESS) {
  cout<<getErrorDesc(err)<<endl;
}
assert(err == CL_SUCCESS);
//------------------------------------
prevPos_mem = clCreateBuffer(context, CL_MEM_READ_ONLY, buffer_size, NULL, NULL);
err = clEnqueueWriteBuffer(cmd_queue, prevPos_mem, CL_TRUE, 0, buffer_size,
                    (void*)prevPos, 0, NULL, NULL);
assert(err == CL_SUCCESS);
rForce_mem = clCreateBuffer(context, CL_MEM_READ_ONLY, buffer_size, NULL, NULL);
err = clEnqueueWriteBuffer(cmd_queue, rForce_mem, CL_TRUE, 0, buffer_size,
                    (void*)rForce, 0, NULL, NULL);
assert(err == CL_SUCCESS);
mass_mem = clCreateBuffer(context, CL_MEM_READ_ONLY, buffer_size, NULL, NULL);
err = clEnqueueWriteBuffer(cmd_queue, mass_mem, CL_TRUE, 0, buffer_size,
                    (void*)mass, 0, NULL, NULL);
assert(err == CL_SUCCESS);
answerPos_mem = clCreateBuffer(context, CL_MEM_READ_WRITE, buffer_size, NULL, NULL);
//uint buffer
buffer_size = sizeof(uint) * numParts;
passive_mem = clCreateBuffer(context, CL_MEM_READ_ONLY, buffer_size, NULL, NULL);
err = clEnqueueWriteBuffer(cmd_queue, passive_mem, CL_TRUE, 0, buffer_size,
                    (void*)passive, 0, NULL, NULL);
assert(err == CL_SUCCESS);
canMove_mem = clCreateBuffer(context, CL_MEM_READ_ONLY, buffer_size, NULL, NULL);
err = clEnqueueWriteBuffer(cmd_queue, canMove_mem, CL_TRUE, 0, buffer_size,
                    (void*)canMove, 0, NULL, NULL);
assert(err == CL_SUCCESS);

buffer_size = sizeof(float4) * numForces;
theForces_mem = clCreateBuffer(context, CL_MEM_READ_ONLY, buffer_size, NULL, NULL);
err = clEnqueueWriteBuffer(cmd_queue, theForces_mem, CL_TRUE, 0, buffer_size,
                    (void*)theForces, 0, NULL, NULL);
assert(err == CL_SUCCESS);

//drag float
buffer_size = sizeof(float);
drag_mem = clCreateBuffer(context, CL_MEM_READ_ONLY, buffer_size, NULL, NULL);
err |= clEnqueueWriteBuffer(cmd_queue, drag_mem, CL_TRUE, 0, buffer_size,
                    (void*)drag, 0, NULL, NULL);
assert(err == CL_SUCCESS);

// Now setup the arguments to our kernel
err  = clSetKernelArg(kernel[0],  0, sizeof(cl_mem), &nowPos_mem);
err |= clSetKernelArg(kernel[0],  1, sizeof(cl_mem), &prevPos_mem);
err |= clSetKernelArg(kernel[0],  2, sizeof(cl_mem), &rForce_mem);
err |= clSetKernelArg(kernel[0],  3, sizeof(cl_mem), &mass_mem);
err |= clSetKernelArg(kernel[0],  4, sizeof(cl_mem), &passive_mem);
err |= clSetKernelArg(kernel[0],  5, sizeof(cl_mem), &canMove_mem);
err |= clSetKernelArg(kernel[0],  6, sizeof(cl_mem), &numParts);
err |= clSetKernelArg(kernel[0],  7, sizeof(cl_mem), &theForces_mem);
err |= clSetKernelArg(kernel[0],  8, sizeof(cl_mem), &numForces);
err |= clSetKernelArg(kernel[0],  9, sizeof(cl_mem), &drag_mem);
err |= clSetKernelArg(kernel[0],  10, sizeof(cl_mem), &answerPos_mem);
if (err != CL_SUCCESS) {
   cout<<getErrorDesc(err)<<endl;
}
assert(err == CL_SUCCESS);
// Run the calculation by enqueuing it and forcing the 
// command queue to complete the task
size_t global_work_size = numParts;
size_t local_work_size = global_work_size/8;
err = clEnqueueNDRangeKernel(cmd_queue, kernel[0], 1, NULL, 
                     &global_work_size, &local_work_size, 0, NULL, NULL);
if (err != CL_SUCCESS) {
   cout<<getErrorDesc(err)<<endl;
}

assert(err == CL_SUCCESS);
//clFinish(cmd_queue);

// Once finished read back the results from the answer 
// array into the results array
//reset the buffer first
buffer_size = sizeof(float4) * numParts;
err = clEnqueueReadBuffer(cmd_queue, answerPos_mem, CL_TRUE, 0, buffer_size, 
                   answerPos, 0, NULL, NULL);
if (err != CL_SUCCESS) {
   cout<<getErrorDesc(err)<<endl;
}


//cl mem
clReleaseMemObject(nowPos_mem);
clReleaseMemObject(prevPos_mem);
clReleaseMemObject(rForce_mem);
clReleaseMemObject(mass_mem);
clReleaseMemObject(passive_mem);
clReleaseMemObject(canMove_mem);
clReleaseMemObject(theForces_mem);
clReleaseMemObject(drag_mem);
clReleaseMemObject(answerPos_mem);
clReleaseCommandQueue(cmd_queue);
clReleaseContext(context);
assert(err == CL_SUCCESS);
return err;

}
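
Similarly, every clCreateBuffer call in runCL() passes NULL as the error argument, so a failed allocation only shows up later as a failed write or a bad memory access. A checked version of the first allocation, reusing the getErrorDesc helper already present in this code, might look like this:

Code:

cl_int buf_err = CL_SUCCESS;
nowPos_mem = clCreateBuffer(context, CL_MEM_READ_ONLY, buffer_size, NULL, &buf_err);
if (buf_err != CL_SUCCESS || !nowPos_mem) {
   cout << "clCreateBuffer failed: " << getErrorDesc(buf_err) << endl;
   return buf_err;
}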


Comments (2)

分開簡單 2024-10-13 03:23:11

Problem solved! At the bottom of the runCL() method I was "freeing" all my CL types. I thought I was only releasing some cl_mem objects, but on closer inspection I was also releasing the context and the command queue. An obvious and annoying mistake, as always :).

Thanks to andrew.brownsword on the Khronos forums for spotting this one.

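
In other words, runCL() should only release the per-frame objects (the cl_mem buffers), while the command queue, context, program and kernel stay alive until the simulation shuts down. A minimal sketch of a separate teardown method, assuming the handles are class members as in the code above (the name releaseCL() is illustrative, not from the original project):

Code:

void VPESimulationCloth::releaseCL(){
   // Called once at shutdown, never from runCL()
   clReleaseKernel(kernel[0]);
   clReleaseProgram(program[0]);
   clReleaseCommandQueue(cmd_queue);
   clReleaseContext(context);
}

The clReleaseCommandQueue and clReleaseContext calls at the end of runCL() would then be removed accordingly.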

沉鱼一梦 2024-10-13 03:23:11

Well done on fixing the main issue.

Regarding performance, is numParts a large number? The global work size should be large enough to saturate the device with work, e.g. tens of thousands. Ideally the local work size (when linearised) should be a multiple of 32; the best value depends on your kernel.

It is common to set the local work size to some constant, or to a value derived from the kernel (you can query information such as the maximum work-group size), since numParts/8 could cause launch failures if it becomes too large (the limit depends on the specific kernel and the specific device).

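
As a rough illustration of that suggestion, the work-group size limit for the kernel can be queried and the global size rounded up to a multiple of the chosen local size. This is a sketch under the assumption that the kernel contains a bounds check so that padded work-items return early; the identifiers are the ones from the question's code:

Code:

// Ask how large a work-group this kernel supports on this device
size_t max_wg = 0;
err = clGetKernelWorkGroupInfo(kernel[0], device, CL_KERNEL_WORK_GROUP_SIZE,
                               sizeof(max_wg), &max_wg, NULL);
assert(err == CL_SUCCESS);

// Use a fixed local size no larger than the limit, then round the global
// size up to a multiple of it so the launch parameters are always valid
size_t local_work_size  = (max_wg < 32) ? max_wg : 32;
size_t global_work_size = ((numParts + local_work_size - 1) / local_work_size)
                          * local_work_size;
err = clEnqueueNDRangeKernel(cmd_queue, kernel[0], 1, NULL,
                             &global_work_size, &local_work_size, 0, NULL, NULL);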
