Sharing a critical resource between separate Python scripts to prevent race conditions
I am very new to multiprocessing and want to create a Python script such that anyone can SSH to my RPi and play with the GPIOs. The only condition is that while a function is being used by one user, other users must wait 'x' seconds (until the function has finished executing), to keep access synchronized.
To test this I have created two test files on my PC, which will hopefully give you a better idea:
File 1 (test.py):
import multiprocessing
import time

def main1(input2, input1, num, val, lock):
    with lock:  # only one process at a time may run this block
        print(input2)
        print(input1)
        time.sleep(int(input1))
        val.value = val.value + 1

def main3(input2, input1, d, val, lock):
    t1 = multiprocessing.Process(target=main1, args=(input2, input1, d, val, lock))
    t1.start()
    t1.join()
    print(val.value)
File 2:
import multiprocessing
import test  # File 1

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    val = multiprocessing.Value('i', 1)
    while True:
        input3 = input('enter on/off: ')
        if input3 == 'on':
            relno = int(input('enter relay to turn on [1-7]: '))
            d = 0
            test.main3(input3, relno, d, val, lock)
        elif input3 == 'off':
            relno = int(input('enter relay to turn off [1-7]: '))
            d = 0
            test.main3(input3, relno, d, val, lock)
        else:
            print("not working")
            break
    print(val.value)
I am not getting any errors from either file. The only issue is that when I issue commands in parallel from two terminals, my critical resource is not protected and is accessed by both processes (different PIDs) simultaneously.
I hope this gives you an idea of what I am trying to achieve; any suggestions are helpful.
Thanks.
1 Answer
If I understand correctly, you are calling python file2.py in two separate terminals? In that case you have totally separate instances of the main process (file2.py), each with its own subprocess (file1.main1) and its own separate mp.Value(). There is no way for your processes to know about each other or about the other "shared" value; those types of shared values can only be shared with child processes. If there is no relationship between the two processes, you must use another mechanism to share information. There are a couple of ways to do that, but they all boil down to the OS managing some resource which is common to all processes.
First of all, the filesystem is common to all processes, so you could use something like filelock to control access to the relays. The filesystem is also behind multiprocessing.shared_memory, which can be given a fixed filename, allowing communication across unrelated processes (it doesn't provide a similarly easy analog to Lock, but it can be used quite easily for data transfer).
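For example, here is a minimal sketch of the filelock approach (pip install filelock); the lock-file path and the switch_relay() stand-in are my own assumptions, not part of your code:

import time
from filelock import FileLock

# Every script that wants the relays opens the same lock-file path.
relay_lock = FileLock("/tmp/relays.lock")

def switch_relay(relno, state):
    print(relno, state)  # stand-in for the real GPIO call

with relay_lock:  # blocks until no other process (any PID) holds the file
    switch_relay(3, 'on')
    time.sleep(2)  # critical section: other SSH users wait here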
Secondly, you can host a server on a fixed port which controls access to the relays, and the "clients" simply connect to that port (which can be open only to localhost, or you could even allow external connections to avoid the need for SSH). This way only a single process ever has control over your "critical resource".
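A rough sketch of that server approach, assuming a made-up port (50007) and a one-line text protocol such as "on 3"; the actual GPIO call is left as a comment:

import socketserver
import threading
import time

relay_lock = threading.Lock()  # one lock suffices: there is only one server process

class RelayHandler(socketserver.StreamRequestHandler):
    def handle(self):
        command = self.rfile.readline().decode().strip()  # e.g. "on 3"
        with relay_lock:  # serializes clients, whichever terminal they came from
            # ... drive the GPIO according to `command` here ...
            time.sleep(2)
            self.wfile.write(b"done\n")

if __name__ == '__main__':
    # Clients can test with e.g.: echo "on 3" | nc localhost 50007
    with socketserver.ThreadingTCPServer(("127.0.0.1", 50007), RelayHandler) as server:
        server.serve_forever()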
Thirdly, because you mention the RPi, you could install posix_ipc, which allows you to create named Locks, Queues, and shared memory regions. This is very similar to the built-in multiprocessing.shared_memory in that you can refer to the same lock by name in separate files, but it adds OS-native locks and queues (not regular Python queues; they can only send strings).
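A minimal sketch using a named semaphore from posix_ipc (pip install posix-ipc) as a cross-process lock; the semaphore name "/relays" is arbitrary:

import posix_ipc

# O_CREAT creates the semaphore on first use; any script that passes the
# same name gets the same OS-level object, regardless of parentage.
sem = posix_ipc.Semaphore("/relays", flags=posix_ipc.O_CREAT, initial_value=1)

sem.acquire()    # blocks while any other process holds it
try:
    pass         # ... critical section: drive the relay ...
finally:
    sem.release()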