Shared memory between processes and pthread_barrier: how to do it safely?

Published on 2024-10-08 10:36:38

I want a simple solution for an inter-process barrier. Here is one solution: solution

But I am totally lost with mmap... On my first try, it fails about one out of ten times (segfault or deadlock).

I understand my problem comes from a synchronization issue, but I can't find it. I found an example of setting up mmapped memory (example), but I am not sure it applies to an mmapped pthread_barrier.

Here is an extract of my code:

#define MMAP_FILE "/tmp/mmapped_bigdft.bin"

void init_barrier() {
  pthread_barrier_t *shared_mem_barrier;
  pthread_barrierattr_t barattr;
  pthread_barrierattr_setpshared(&barattr, PTHREAD_PROCESS_SHARED);

  hbcast_fd = open(MMAP_FILE, O_RDWR | O_CREAT | O_TRUNC, (mode_t)0600);
  result = lseek(hbcast_fd, sizeof(pthread_barrier_t)-1, SEEK_SET);
  result = write(hbcast_fd, "", 1);
  shared_mem_barrier = (pthread_barrier_t*) mmap(0, sizeof(pthread_barrier_t), PROT_READ | PROT_WRITE, MAP_SHARED, hbcast_fd, 0);
  if (mpi_rank == 0) {
    int err = pthread_barrier_init(shared_mem_barrier, &barattr, host_size);
  }
  MPI_Barrier(some_communicator);
}

Questions:

  • Am I missing something in the mmap initialization?
  • Which operations should be performed by all processes, and which by only one process?

New question

Which is safer for managing a pthread barrier? Or are they based on the same mechanism?

  • shmget
  • shm_open
  • mmap
  • another one

Comments (3)

落花随流水 2024-10-15 10:36:38

As Charles has mentioned, it looks like the truncation is what's getting you. Also, you should initialise the attributes using pthread_barrierattr_init.

As for the other question, just one process should do the initialisation, and then all processes should call pthread_barrier_wait (just like with MPI).

I saw your other question, so I know why you don't want to use MPI. So you'd probably do just a single MPI barrier to initialise your pthread barriers, like so:

if (rank == 0)
{
  /* Create the shared memory segment, initialise the barrier. */
}
MPI_Barrier(communicator);
if (rank != 0)
{
  /* Load the shared memory segment, cast it to a pthread_barrier_t* and store.
   * It's already initialised */
}
若相惜即相离 2024-10-15 10:36:38

You don't want to open the file with O_TRUNC in every process. Every time you do that, you truncate the file again and potentially invalidate the previous mmap operations you've performed (the effect on previous mmaps when changing the file size is generally undefined).

That aside, I don't think you can have a semaphore in mmapped memory and have it function correctly (it may work on some OS platforms, but I doubt it's generally guaranteed to function the way you want).

What you really want to use is shared memory. Run man on shmget and shmat to learn how to create and map shared memory. You'll probably still need a file to pass the shared memory ID around, and you should take care to free the shared memory segment on application crashes by registering signal handlers. Otherwise you can leave zombie shared memory allocations sitting around and overrun your OS resource limits. You'll know that has happened if you get ENOSPC when attempting to create the shared memory segment in your master process.
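
For reference, a minimal System V sketch in the spirit of this answer could look like the following: one designated process creates the segment and initialises the barrier, the others just attach. The key-file path, the is_master flag and the nprocs argument are illustrative placeholders, not part of the original code, and the callers still need some out-of-band synchronization (e.g. an MPI_Barrier) before anyone waits on the barrier.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

pthread_barrier_t *attach_barrier(int is_master, unsigned nprocs)
{
  /* All processes derive the same key from an agreed-upon, existing file. */
  key_t key = ftok("/tmp/barrier_key_file", 42);
  if (key == (key_t)-1) { perror("ftok"); exit(1); }

  /* IPC_CREAT creates the segment if it does not exist yet. ENOSPC here
   * usually means zombie segments are left over from earlier crashes. */
  int shmid = shmget(key, sizeof(pthread_barrier_t), IPC_CREAT | 0600);
  if (shmid == -1) { perror("shmget"); exit(1); }

  pthread_barrier_t *bar = shmat(shmid, NULL, 0);
  if (bar == (void *)-1) { perror("shmat"); exit(1); }

  if (is_master) {
    pthread_barrierattr_t attr;
    pthread_barrierattr_init(&attr);
    pthread_barrierattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_barrier_init(bar, &attr, nprocs);
    pthread_barrierattr_destroy(&attr);
  }
  /* At teardown (and from a signal handler for crashes), one process should
   * call shmctl(shmid, IPC_RMID, NULL) so the segment does not linger. */
  return bar;
}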

黑白记忆 2024-10-15 10:36:38

You should use shm_open to create a shared segment.

  • With the parameter O_CREAT you should be able to detect whether a process is the first to create the segment.
  • Only that process should truncate the segment to the appropriate length, map it, and initialize the barrier.
  • All the others, which detect that they are not the first, should sleep for a while (a second or so should suffice) and then map the segment.
  • After that, all processes may synchronize on the barrier (see the sketch below).
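
Here is a rough sketch of this recipe, assuming POSIX shared memory (on some systems you need to link with -lrt). The name "/bigdft_barrier" and the use of O_CREAT | O_EXCL to detect the creating process are illustrative details, not taken from the question, and the one-second sleep is the crude wait suggested above.

#include <errno.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

pthread_barrier_t *open_barrier(unsigned nprocs)
{
  /* O_CREAT | O_EXCL succeeds only for the first process; everyone else
   * gets EEXIST and falls back to opening the existing segment. */
  int creator = 1;
  int fd = shm_open("/bigdft_barrier", O_RDWR | O_CREAT | O_EXCL, 0600);
  if (fd == -1 && errno == EEXIST) {
    creator = 0;
    sleep(1); /* crude wait so the creator can size and initialise it */
    fd = shm_open("/bigdft_barrier", O_RDWR, 0600);
  }
  if (fd == -1) { perror("shm_open"); exit(1); }

  /* Only the creator sizes the segment. */
  if (creator && ftruncate(fd, sizeof(pthread_barrier_t)) == -1) {
    perror("ftruncate"); exit(1);
  }

  pthread_barrier_t *bar = mmap(NULL, sizeof(pthread_barrier_t),
                                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (bar == MAP_FAILED) { perror("mmap"); exit(1); }
  close(fd);

  if (creator) {
    pthread_barrierattr_t attr;
    pthread_barrierattr_init(&attr);
    pthread_barrierattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_barrier_init(bar, &attr, nprocs);
    pthread_barrierattr_destroy(&attr);
  }
  return bar; /* afterwards every process can pthread_barrier_wait(bar) */
}

At teardown, one process should also call shm_unlink("/bigdft_barrier") so the name does not persist across runs.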