Resolving a deadlock between two gen_servers

Published on 2024-11-08


While browsing the code of an erlang application, I came across an interesting design problem. Let me describe the situation, but I can't post any code because of PIA, sorry.

The code is structured as an OTP application in which two gen_server modules are responsible for allocating some kind of resource. The application ran perfectly for some time and we didn't really have any big issues.

The tricky part begins when the first gen_server needs to check whether the second has enough resources left. A call is issued to the second gen_server, which itself calls a utility library that (in a very, very special case) issues a call back to the first gen_server.

I'm relatively new to erlang, but I think this situation is going to make the two gen_servers wait for each other.

This is probably a design problem but I just wanted to know if there is any special mechanism built into OTP that can prevent this kind of "hangs".

Any help would be appreciated.

EDIT:
To summarize the answers: if you have a situation where two gen_servers call each other in a cyclic way, you'd better spend some more time on the application design.

Thanks for your help :)


最单纯的乌龟 2024-11-15 05:36:12


This is called a deadlock and can/should be avoided at the design level. Below is a possible workaround and some subjective points that hopefully help you avoid making a mistake.

While there are ways to work around your problem, "waiting" is exactly what the call is doing.

One possible work around would be to spawn a process from inside A which calls B, but does not block A from handling the call from B. This process would reply directly to the caller.

In server A:

handle_call(do_spaghetti_call, From, State) ->
    spawn(fun() -> gen_server:reply(From, call_server_B(more_spaghetti)) end),
    {noreply, State};
handle_call(spaghetti_callback, _From, State) ->
    {reply, foobar, State}.

In server B:

handle_call(more_spaghetti, _From, State) ->
    {reply, gen_server:call(server_a, spaghetti_callback), State}.

For me this is very complex and super hard to reason about. I think you could even call it spaghetti code without offending anyone.

On another note, while the above might solve your problem, you should think hard about what calling like this actually implies. For example, what happens if server A executes this call many times? What happens if at any point there is a timeout? How do you configure the timeouts so they make sense? (The innermost call must have a shorter timeout than the outer calls, etc).
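As a sketch of that timeout nesting (the 5000/1000 ms values and the wrapper function names are assumptions for illustration, not from the original post):

```erlang
%% Caller side: the outer call into server B gets the longer timeout.
check_resources() ->
    gen_server:call(server_b, more_spaghetti, 5000).

%% Inside server B's handle_call: the nested call back into server A
%% must time out sooner (1000 < 5000), otherwise the outer call gives
%% up first and B's eventual reply sits unread in A's mailbox.
nested_check() ->
    gen_server:call(server_a, spaghetti_callback, 1000).
```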

I would change the design, even if it is painful, because when you allow this to exist and work around it, your system becomes very hard to reason about. IMHO, complexity is the root of all evil and should be avoided at all costs.

染火枫林 2024-11-15 05:36:12


It is mostly a design issue: you need to make sure that there are no long blocking calls from gen_server1. This can quite easily be done by spawning a small fun which takes care of your call to gen_server2 and then delivers the result to gen_server1 when done.

You would have to keep track of the fact that gen_server1 is waiting for a response from gen_server2. Something like this maybe:

handle_call(Msg, From, S) ->
  Self = self(),
  spawn(fun() ->
    Res = gen_server:call(gen_server2, Msg),
    gen_server:cast(Self, {reply, Res})
  end),
  {noreply, S#state{from = From}}.

handle_cast({reply, Res}, S = #state{from = From}) ->
  gen_server:reply(From, Res),
  {noreply, S#state{from = undefined}}.

This way gen_server1 can serve requests from gen_server2 without hanging. You would of course also need to do proper error propagation from the small process, but you get the general idea.

神魇的王 2024-11-15 05:36:12


Another way of doing it, which I think is better, is to make this (resource) information passing asynchronous. Each server reacts and does what it is supposed to when it gets an (asynchronous) my_resource_state message from the other server. It can also prompt the other server to send its resource state with a send_me_your_resource_state asynchronous message. As both of these messages are asynchronous they never block, and a server can process other requests while it waits for a my_resource_state message from the other server after prompting it.

Another benefit of making these messages asynchronous is that servers can send off this information unprompted whenever they feel it is necessary, for example "help me, I am running really low!" or "I am overflowing, do you want some?".

The two replies from @Lukas and @knutin actually do it asynchronously, but they do it by spawning a temporary process, which can then make synchronous calls without blocking the servers. It is easier to use asynchronous messages straight off, and the intent is clearer as well.
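A minimal sketch of this asynchronous style (the module, message, and field names are illustrative assumptions, not from the question):

```erlang
-module(resource_server).
-behaviour(gen_server).
-export([init/1, handle_call/3, handle_cast/2]).

init(Peer) ->
    {ok, #{peer => Peer, resources => 100, peer_resources => unknown}}.

%% The peer asked for our resource state: reply with a cast, which
%% never blocks, so this server keeps serving other requests meanwhile.
handle_cast(send_me_your_resource_state,
            State = #{peer := Peer, resources := R}) ->
    gen_server:cast(Peer, {my_resource_state, R}),
    {noreply, State};

%% The peer's (possibly unsolicited) resource report arrives: just
%% record it and react to it as needed.
handle_cast({my_resource_state, PeerR}, State) ->
    {noreply, State#{peer_resources := PeerR}}.

%% Ordinary synchronous requests are still served while waiting.
handle_call(get_resources, _From, State = #{resources := R}) ->
    {reply, R, State}.
```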
