Python shared memory: how do I put random integers into a shared memory block?

Asked on 2025-02-11 00:43:02

I created a memory block with a byte size of 10 and wanted to generate random numbers and put them into the memory block, but it always gives me error messages, so I wonder if I am doing something wrong.

from multiprocessing import shared_memory
import random

shared_mem_1 = shared_memory.SharedMemory(create=True, size=10)
num = (random.sample(range(1, 1000), 10))
for i, c in enumerate(num):
    shared_mem_1.buf[i] = c

The error message:

Traceback (most recent call last):
  File "main.py", line 7, in <module>
    shared_mem_1.buf[i] = c
ValueError: memoryview: invalid value for format 'B'


3 Answers

因为看清所以看轻 2025-02-18 00:43:02

The problem is that num contains values over 255, and when one of those values is assigned to buf, the invalid value for format 'B' error appears. Format 'B' is the format code for an unsigned byte, which can only hold 0–255 (see the format table in the struct module documentation).
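
The same error can be reproduced with a plain memoryview, since shm.buf is itself a memoryview with format 'B'; a minimal demonstration:

buf = memoryview(bytearray(1))
buf[0] = 255  # fine: 255 fits in an unsigned byte
try:
    buf[0] = 300  # too large for a single byte
except ValueError as e:
    print(e)  # memoryview: invalid value for format 'B'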

There are two options:

  1. Restrict the range of the random numbers to 0–255; or,
  2. Convert each integer to bytes with the int.to_bytes method.

Option 1

from multiprocessing import shared_memory
import random

shared_mem_1 = shared_memory.SharedMemory(create=True, size=10)
# sample 10 distinct values that each fit in a single unsigned byte (0-255)
num = random.sample(range(0, 256), 10)
for i, c in enumerate(num):
    shared_mem_1.buf[i] = c
shared_mem_1.close()   # release this process's view of the segment
shared_mem_1.unlink()  # destroy the underlying shared memory block

Option 2

For option 2 you need to decide on a byte order (big-endian or little-endian) and on how many bytes each integer occupies; the amount of memory to allocate also depends on this length. Each write into the buffer must then go to the correct byte offset.

from multiprocessing import shared_memory
import random

int_length = 4  # bytes per integer; 4 bytes comfortably holds values up to 999
shared_mem_1 = shared_memory.SharedMemory(create=True, size=int_length * 10)
num = random.sample(range(1, 1000), 10)
for i, c in enumerate(num):
    pos = i * int_length  # byte offset of the i-th integer
    shared_mem_1.buf[pos:pos + int_length] = c.to_bytes(int_length, 'big')
shared_mem_1.close()
shared_mem_1.unlink()
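
For completeness, a minimal read-back sketch (assuming the same int_length of 4 and big-endian byte order as above) showing how the stored integers can be decoded again with int.from_bytes:

from multiprocessing import shared_memory
import random

int_length = 4
shm = shared_memory.SharedMemory(create=True, size=int_length * 10)
nums = random.sample(range(1, 1000), 10)
for i, c in enumerate(nums):
    pos = i * int_length
    shm.buf[pos:pos + int_length] = c.to_bytes(int_length, 'big')

# decode each 4-byte slice back into an integer
read_back = [
    int.from_bytes(bytes(shm.buf[i * int_length:(i + 1) * int_length]), 'big')
    for i in range(10)
]
assert read_back == nums

shm.close()
shm.unlink()
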
铁憨憨 2025-02-18 00:43:02


I find the most useful way to take advantage of multiprocessing.shared_memory is to create a numpy array that uses the shared memory region as its memory buffer. Numpy handles setting the correct data type (is it an 8-bit integer? a 32-bit float? a 64-bit float? etc.) as well as providing a convenient interface (similar to, but more extensible than, Python's built-in array module). That way any modifications to the array are visible in any process that has the same memory region mapped to an array.

from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory
import numpy as np

def foo(shm, shape, dtype):
    arr = np.ndarray(shape, dtype, buffer = shm.buf) #remote version of arr
    print(arr)
    arr[0] = 20 #modify some data in arr to show modifications cross to the other process
    shm.close() #SharedMemory is internally a file, which needs to be closed.

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=40) #40 bytes for 10 floats
    arr = np.ndarray([10], 'f4', shm.buf) #local version of arr (10 floats)
    arr[:] = np.random.rand(10) #insert some data to arr
    p = Process(target=foo, args=(shm, arr.shape, arr.dtype))
    p.start()
    p.join() #wait for p to finish
    print(arr) #arr should reflect the changes made in foo which occurred in another process.
    shm.close() #close the file
    shm.unlink() #delete the file (happens automatically on windows but not linux)
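
A common variant is to pass only the segment's name and reattach inside the child; a minimal sketch along the same lines as the answer's code:

from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory
import numpy as np

def foo(shm_name, shape, dtype):
    shm = SharedMemory(name=shm_name)  # reattach to the existing segment by name
    arr = np.ndarray(shape, dtype, buffer=shm.buf)
    arr[0] = 20  # visible to the parent process
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=40)  # 40 bytes for 10 float32 values
    arr = np.ndarray([10], 'f4', buffer=shm.buf)
    arr[:] = np.random.rand(10)
    p = Process(target=foo, args=(shm.name, arr.shape, arr.dtype))
    p.start()
    p.join()
    print(arr)  # arr[0] now reflects the child's write
    shm.close()
    shm.unlink()
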
陈年往事 2025-02-18 00:43:02


I have been sharing values between two concurrently running scripts with CSV files for decades without any problem. I was trying to switch to sharing memory directly.
Here are my test codes with shared_memory. I posted the test codes for shared_memory_dict in another thread. A raw shared_memory buffer cannot store negative values directly (each buffer element is an unsigned byte), whereas the _dict can. Source file: SrcArry2.py

from multiprocessing import shared_memory
from time import sleep

# create the named segment; the receiver attaches to it by name
shm_a = shared_memory.SharedMemory(name='Tst2', create=True, size=64)

if __name__ == "__main__":
    while True:
        for i in range(0, 16):
            try:
                print(shm_a.buf[4])  # value written back by the receiver
            except Exception:
                pass
            shm_a.buf[0] = i
            shm_a.buf[1] = i + 10
            shm_a.buf[2] = i + 20
            shm_a.buf[3] = i * 3
        sleep(1)

Receiving file: RcvArry2.py

from multiprocessing import shared_memory
from time import sleep

# attach to the existing segment; when attaching, the size argument is ignored
shm_a = shared_memory.SharedMemory(name='Tst2', create=False)

if __name__ == "__main__":
    while True:
        print(shm_a.buf[0])
        print(shm_a.buf[1])
        print(shm_a.buf[2])
        print(shm_a.buf[3])
        shm_a.buf[4] = shm_a.buf[0] * 10  # write a value back for the source to read
        sleep(1)

buf[4] is changed by the receiving file. The source file has to be started before the receiving file, since the source is the one that creates the segment; until the receiver has written buf[4], it simply reads back as 0, and the print in the source is wrapped in a try/except as a precaution.
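
As a side note, negative values can still be stored in a raw buffer by encoding them as signed bytes; a minimal sketch assuming 4-byte big-endian signed integers:

from multiprocessing import shared_memory

int_length = 4
shm = shared_memory.SharedMemory(create=True, size=int_length * 4)

# encode signed integers into the unsigned byte buffer
values = [-5, 300, -1000, 42]
for i, v in enumerate(values):
    pos = i * int_length
    shm.buf[pos:pos + int_length] = v.to_bytes(int_length, 'big', signed=True)

# decode them back, telling from_bytes the data is signed
decoded = [
    int.from_bytes(bytes(shm.buf[i * int_length:(i + 1) * int_length]), 'big', signed=True)
    for i in range(len(values))
]
assert decoded == values

shm.close()
shm.unlink()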
