Exchange objects between processes while running Python
I would like to create in Python a process that runs constantly in parallel while the main execution of my code is going on. It should provide a way to deal with Python's sequential execution, which prevents me from doing anything asynchronously.
So I would like a function RunningFunc to run while my main code is doing some other operations.
I tried to use the threading module. However, the computation does not run in parallel: RunningFunc is a highly intensive computation and it heavily slows down my main code.
I also tried the multiprocessing module, and I guess this should be my answer: use a multiprocessing.Manager() to do the computation in a first process while accessing, through shared memory, the data computed over time. But I didn't figure out a way to do that.
For example, RunningFunc increments the Compteur variable.
def RunningFunc(x):
    boolean = True
    Compteur = 0
    while boolean:
        Compteur += 1
Meanwhile, in my main code some computation is running, and from time to time (not necessarily on every while other_bool iteration) I read the Compteur variable of RunningFunc.
other_bool = True
Value = 0
while other_bool:
    ## MAKING SOME COMPUTATION
    Value = Compteur  # Call the variable Compteur that is constantly running
    ## MAKING SOME COMPUTATION
There are many ways to do processing in child processes. Which is best depends on questions such as the size of the data to be shared versus the time spent in the calculation. Following is an example much like your simple increment of a variable, but fleshed out to a slightly larger list of integers to highlight some of the issues you'll bump into.
A multiprocessing.Manager is a convenient way to share data among processes, but it's not particularly fast because it needs to synchronize data among its processes. If the data you want to share is fairly modest and doesn't change that often, it's a good choice.
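For comparison, here is a rough sketch of what the Manager route could look like for a counter like yours (the names, such as running_func, are only illustrative, and the polling is deliberately naive):

import multiprocessing
import time

def running_func(shared):
    # Child process: keep incrementing a counter in the Manager-backed dict.
    # Every access to the proxy goes through the Manager's server process.
    compteur = 0
    while shared.get("run", True):
        compteur += 1
        shared["compteur"] = compteur

if __name__ == "__main__":
    with multiprocessing.Manager() as manager:
        shared = manager.dict(run=True, compteur=0)
        worker = multiprocessing.Process(target=running_func, args=(shared,))
        worker.start()

        for _ in range(5):
            time.sleep(0.5)                      # stand-in for the "main computation"
            print("compteur =", shared["compteur"])

        shared["run"] = False                    # ask the child to stop
        worker.join()

Every read and write of the proxy dict is a round trip to the Manager's server process, which is exactly where the slowness mentioned above comes from.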
But I will just focus on shared memory here.

Most Python objects cannot be created in shared memory. Things like the object header, reference count or the memory heap are not shareable. Some objects, notably numpy arrays, can be shared, but that is a different answer.

What you can do is serialize and write/read to shared memory. This could be done with any serialization mechanism, but converting to fundamental types via struct is a good way to do it. That means that you have to write your code to save its data periodically. You also need to worry about synchronization if you are saving anything bigger than a single CPU-level word to memory: the parent could read while the child is writing, giving you inconsistent data.
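For instance, packing a couple of counters into raw bytes and reading them back looks like this (the format string is arbitrary here):

import struct

packed = struct.pack("qq", 42, 7)        # two signed 64-bit integers -> 16 bytes
print(struct.unpack("qq", packed))       # (42, 7)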
The following example shows one way to handle shared memory:
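This is a sketch of that idea; it assumes Python 3.8+ for multiprocessing.shared_memory, and the number of counters, the struct format and the timings are only illustrative:

import struct
import time
from multiprocessing import Process, Lock
from multiprocessing.shared_memory import SharedMemory

N_COUNTERS = 4                       # a small list of integers, as described above
FMT = "q" * N_COUNTERS               # N signed 64-bit integers
BUF_SIZE = struct.calcsize(FMT)      # bytes needed in the shared block


def running_func(shm_name, lock, duration):
    # Child process: keep incrementing a list of counters and flush them
    # into shared memory. The lock keeps the parent from reading a
    # half-written set of values.
    shm = SharedMemory(name=shm_name)
    counters = [0] * N_COUNTERS
    end = time.monotonic() + duration
    while time.monotonic() < end:
        for i in range(N_COUNTERS):
            counters[i] += i + 1
        with lock:                                   # write the whole snapshot atomically
            struct.pack_into(FMT, shm.buf, 0, *counters)
    shm.close()


if __name__ == "__main__":
    lock = Lock()
    shm = SharedMemory(create=True, size=BUF_SIZE)
    struct.pack_into(FMT, shm.buf, 0, *([0] * N_COUNTERS))   # initialize the block

    worker = Process(target=running_func, args=(shm.name, lock, 3.0))
    worker.start()

    try:
        for _ in range(6):              # the "main computation": just poll now and then
            time.sleep(0.5)
            with lock:                  # read a consistent snapshot
                values = struct.unpack_from(FMT, shm.buf, 0)
            print("current counters:", values)
    finally:
        worker.join()
        shm.close()
        shm.unlink()                    # free the shared memory block

The lock is what prevents the parent from reading a half-written snapshot; without it, the values you read could mix two different iterations of the child's loop.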