Transient gen_server processes and updating pids

Posted 2024-10-01 21:57:58

I'm currently learning Erlang at a reasonable clip but have a question about gen_server with supervisors. If a gen_server process crashes and is consequentially restarted by a supervisor, it receives a new pid. Now, what if I want other processes to refer to that process by Pid? What are some good idiomatic ways to 'update' the Pid in those processes?

As an exercise with some practical application, I'm writing a lock server where a client can request a lock with an arbitrary key. I ideally would like to have a separate processes handle the locking and releasing of a particular lock, the idea being that I can use the timeout argument in gen_server to terminate the process if no one has requested it after N amount time, so that only currently relevant locks will stay in memory. Now, I have a directory process which maps the lock name to the lock process. When the lock process terminates, it deletes the lock from the directory.

My concern is how to handle the case where a client requests a lock while the lock process is in the middle of terminating. It hasn't shutdown yet, so sniffing that the pid is alive won't work. The lock process hasn't reached the clause that deletes it from the directory yet.

Is there a better way to handle this?

EDIT

There are two gen_servers currently: the 'directory' which maintains an ETS table from LockName -> Lock Process, and the 'lock servers' which are added dynamically to the supervision tree using start_child. Ideally I would like each lock server to handle talking with the clients directly, but am worried about the scenario of a request to acquire/release getting issued with call or cast when the process is in the middle of crashing (and thus won't respond to the message).

Starting with {local} or {global} won't work since there can be N amount of them.
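The setup described above can be sketched roughly as follows. This is a hypothetical fragment, assuming a `simple_one_for_one` supervisor registered as `lock_sup` and an ETS table named by a `?TABLE` macro; it shows the directory looking a lock process up and starting one on demand, but it does not yet solve the termination race the question asks about.

```erlang
%% Hypothetical sketch of the directory's lookup-or-start path.
%% `lock_sup' and `?TABLE' are assumed names, not from the original post.
ensure_lock(Key) ->
    case ets:lookup(?TABLE, Key) of
        [{Key, Pid}] ->
            Pid;
        [] ->
            {ok, Pid} = supervisor:start_child(lock_sup, [Key]),
            true = ets:insert(?TABLE, {Key, Pid}),
            Pid
    end.
```

Note that a Pid returned by the lookup may belong to a lock server that is already shutting down, which is exactly the race discussed in the answers below.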

Comments (3)

北方的韩爷 2024-10-08 21:57:58

The trick is to name the process rather than refer to it by its pid. You generally have three viable options:

  • Use registered names. This is what andreypopp suggests: you refer to the server by its registered name. Locally registered names have to be atoms, which may somewhat limit you. Globally registered names do not have this limitation; you can register any term.

  • The supervisor knows the Pid. Ask it. You will have to pass the supervisor's Pid to the process.

  • Alternatively, use the gproc application (available on http://github.com). It lets you create a generic process registry. You could build one yourself with ETS, but it is better to steal good code than to implement it yourself.

A raw pid is usable only if all the processes are part of the same supervision tree, so that the death of one of them means the death of the others; in that case pid recycling doesn't matter.
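The first option can be sketched with global names, which (unlike local names) may be arbitrary terms and therefore work for N dynamic lock servers. This is a minimal, hypothetical fragment; the `{lock_server, Key}` name term and the `acquire` request are assumptions, not the poster's code.

```erlang
%% Sketch of option 1: register each lock server globally under a term name,
%% so clients never hold a raw pid, and a supervisor restart simply
%% re-registers the same name. Names here are illustrative assumptions.
start_link(Key) ->
    gen_server:start_link({global, {lock_server, Key}}, ?MODULE, Key, []).

acquire(Key) ->
    gen_server:call({global, {lock_server, Key}}, acquire).
```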

旧梦荧光笔 2024-10-08 21:57:58

Don't refer to a gen_server process by pid.

You should provide an API for your gen_server via the gen_server:call/2 or gen_server:call/3 functions. They accept a ServerRef as the first argument, which can be Name | {Name,Node} | {global,GlobalName} | pid(). So your API would look like:

lock(Key) ->
  gen_server:call(?MODULE, {lock, Key}).
release(Key) ->
  gen_server:call(?MODULE, {release, Key}).

Note that this API is defined in the same module as your gen_server, and I assume you start your server with something like:

gen_server:start_link({local, ?MODULE}, ?MODULE, [], [])

So your API functions can look up the server not by pid but by the registered server name, which is equal to ?MODULE.

For more information, please see gen_server documentation.
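Since the question notes that a single `{local, ?MODULE}` name won't work for N lock servers, the same pattern extends to per-key names through a via tuple. A hedged sketch using gproc's via-tuple support (the `{lock, Key}` name term is an assumption):

```erlang
%% Hypothetical variant for N dynamic lock servers: each instance registers
%% through a via tuple (here with gproc), so gen_server:call can still
%% address it by name rather than by pid.
start_link(Key) ->
    gen_server:start_link({via, gproc, {n, l, {lock, Key}}}, ?MODULE, Key, []).

lock(Key) ->
    gen_server:call({via, gproc, {n, l, {lock, Key}}}, lock).
```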

乖乖 2024-10-08 21:57:58

You can completely avoid the "lock_server" processes by using the erlang:monitor/demonitor API.

When a client requests a lock, you issue the lock and call erlang:monitor on the client. This returns a monitor reference, which you can then store along with the lock. The beauty of this is that your directory server WILL be notified when the client dies; you could implement the TIMEOUT logic in the client instead.

Here is a snippet from code I wrote recently:
https://github.com/xslogic/phoebus/blob/master/src/table_manager.erl

Basically, the table_manager is a process that issues a lock on a particular table resource to a client; if the client dies, the table is returned to the pool.
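The monitor-based approach above can be sketched as the following gen_server fragment. This is a minimal illustration, not the linked table_manager code; the `#state{locks = ...}` record field and reply terms are assumptions.

```erlang
%% Sketch: the directory grants locks itself, monitors each holder, and
%% frees the lock when a 'DOWN' message arrives for the holder's monitor.
handle_call({lock, Key}, {ClientPid, _Tag}, #state{locks = Locks} = State) ->
    case maps:is_key(Key, Locks) of
        true ->
            {reply, {error, busy}, State};
        false ->
            Ref = erlang:monitor(process, ClientPid),
            {reply, ok, State#state{locks = Locks#{Key => Ref}}}
    end;
handle_call({release, Key}, _From, #state{locks = Locks} = State) ->
    case maps:take(Key, Locks) of
        {Ref, Rest} ->
            erlang:demonitor(Ref, [flush]),
            {reply, ok, State#state{locks = Rest}};
        error ->
            {reply, {error, not_held}, State}
    end.

handle_info({'DOWN', Ref, process, _Pid, _Reason}, #state{locks = Locks} = State) ->
    Rest = maps:filter(fun(_Key, R) -> R =/= Ref end, Locks),
    {noreply, State#state{locks = Rest}}.
```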
