I have set up a Prefect backend server on a remote machine. I was able to connect local agents from different other machines to the server by modifying the config.toml in the .prefect folder:
[server]
endpoint = "http://server_ip:port/graphql"
[server.ui]
apollo_url = "http://server_ip:port/graphql"
As it stands, I can create a local agent on each machine, register flows and run them on the respective machines. Now I would like to have a central computer where I can develop and register my flows.
Unfortunately, when I run a flow on Machine B that was registered on Machine A, I get a ModuleNotFoundError. I have read that this error occurs because each machine looks for flows only in its own local storage.
Without using Git, GCS, etc., is it possible to use, for example, a NAS where all flows are stored and which all machines can use to access the flows?
If so, how must flows, agents, and storage be configured? Unfortunately, I have not found any good documentation on this.
Many applications use Docker agents and have similar problems, or use remote storage directly.
There is no native NAS storage interface available in the core library, but we provide recipes and guidance on how you may solve the ModuleNotFoundError: check out this Discourse wiki page, which dives into how you may solve that.
I was able to find a solution to my problem. The prerequisite is shared storage (e.g. a NAS) that is accessible on all machines under the same path. The flows are stored in this location as .py files. Neither the flows nor the local agents need any special preparation.
I simply registered my flows via the CLI.
I was then able to deploy all my flows from Machine A and execute or schedule them on any other machine.
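For reference, a hedged sketch of what that CLI registration might look like with the Prefect 1.x `prefect register` command. The project name and the NAS mount path here are illustrative assumptions, not values from the original answer:

```shell
# Assumed setup: the flow's .py file lives on the NAS, which is mounted
# at the SAME path (here /mnt/nas/flows, a made-up example) on every
# machine that runs an agent. Registration happens once, from Machine A.
prefect register --project my-project --path /mnt/nas/flows/my_flow.py
```

Because every agent resolves the identical path, any machine can load the flow's script at run time, which avoids the ModuleNotFoundError described in the question.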