How can I make Fabric execution follow the env.hosts list order?

Published 2024-08-16 04:12:49


I have the following fabfile.py:

from fabric.api import env, run

host1 = '192.168.200.181'
host2 = '192.168.200.182'
host3 = '192.168.200.183'

env.hosts = [host1, host2, host3]

def df_h():
    run("df -h | grep sda3")

And I get the following output:

[192.168.200.181] run: df -h | grep sda3
[192.168.200.181] out: /dev/sda3             365G  180G  185G  50% /usr/local/nwe
[192.168.200.183] run: df -h | grep sda3
[192.168.200.183] out: /dev/sda3             365G   41G  324G  12% /usr/local/nwe
[192.168.200.182] run: df -h | grep sda3
[192.168.200.182] out: /dev/sda3             365G   87G  279G  24% /usr/local/nwe

Done.
Disconnecting from 192.168.200.182... done.
Disconnecting from 192.168.200.181... done.
Disconnecting from 192.168.200.183... done.

Note that the execution order is different from the env.hosts specification.

Why does it work this way? Is there a way to make the execution order the same as specified in the env.hosts list?

2 Answers

绿光 2024-08-23 04:12:49


The exact reason that the order is not preserved from env.hosts is that there are three "levels" at which the hosts to operate on can be specified--env.hosts, the command line, and per function--which are merged together. In fabric/main.py, around line 309, you can see that they use the set() type to remove duplicates across the three possible lists of hosts. Since set() has no order, the hosts come back as a list in "random" order.

There's a pretty good reason this is the method: it's a very efficient mechanism for removing duplicates from a list, and for Fabric it's important that order doesn't matter. You're asking Fabric to perform a series of completely parallel, atomic actions on various hosts. By the very nature of parallel, atomic actions, order does not affect the ability of the actions to be performed successfully. If order did matter, a different strategy would be necessary, and Fabric would no longer be the right tool for the job.

That said, is there a particular reason you need these operations to occur in order? Perhaps if you're having some sort of problem that's a result of execution order, we can help you work that out.
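To illustrate the behavior this answer describes, here is a minimal standalone sketch (the host strings are just the question's example values, with a duplicate added) contrasting the order-losing set() dedup with an order-preserving alternative:

```python
# Hosts with a deliberate duplicate, mirroring what can happen when
# env.hosts, the command line, and per-function host lists are merged.
hosts = ['192.168.200.181', '192.168.200.182', '192.168.200.183',
         '192.168.200.181']

# What old Fabric effectively did: set() removes duplicates but
# returns the hosts in arbitrary order.
deduped_unordered = list(set(hosts))

# Order-preserving alternative: keep the first occurrence of each host.
seen = set()
deduped_ordered = []
for h in hosts:
    if h not in seen:
        seen.add(h)
        deduped_ordered.append(h)

print(deduped_ordered)
# ['192.168.200.181', '192.168.200.182', '192.168.200.183']
```

Both lists contain the same hosts; only the order-preserving version is guaranteed to match the env.hosts ordering.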

儭儭莪哋寶赑 2024-08-23 04:12:49


Just to update: the newest Fabric, 1.1+ (I think even 1.0), now dedupes in an order-preserving way, so this should be a non-issue.
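For anyone stuck on an older Fabric, or merging host lists by hand, a one-line sketch of order-preserving deduplication (the second host source here is hypothetical; this relies on dict preserving insertion order, which is guaranteed since Python 3.7):

```python
# Merge two host lists while dropping duplicates and keeping the
# first-seen order, via dict.fromkeys (insertion-ordered on 3.7+).
env_hosts = ['192.168.200.181', '192.168.200.182', '192.168.200.183']
cli_hosts = ['192.168.200.182']  # hypothetical second source of hosts

merged = list(dict.fromkeys(env_hosts + cli_hosts))
print(merged)
# ['192.168.200.181', '192.168.200.182', '192.168.200.183']
```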
