How can I use EasyOCR with multiprocessing?

Posted on 2025-01-23 19:48:33


I am trying to read text from images with EasyOCR in Python, and I want to run it in a separate process so that it doesn't hold back other parts of the code. But when I call the function inside a multiprocessing loop, I get a NotImplementedError. Here is an example of the code.

import multiprocessing as mp
import easyocr
import cv2

def ocr_test(q, reader):
    # Drain the queue and run OCR on the test image for each item received.
    while not q.empty():
        q.get()
        img = cv2.imread('unknown.png')
        result = reader.readtext(img)


if __name__ == '__main__':
    q = mp.Queue()
    reader = easyocr.Reader(['en'])  # the Reader is created in the parent process

    p = mp.Process(target=ocr_test, args=(q, reader))  # and passed to the child process
    p.start()
    q.put('start')
    p.join()

and this is the error I get:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Program Files\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Program Files\Python310\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "C:\Python\venv\lib\site-packages\torch\multiprocessing\reductions.py", line 90, in rebuild_tensor
    t = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
  File "C:\Python\venv\lib\site-packages\torch\_utils.py", line 134, in _rebuild_tensor
    t = torch.tensor([], dtype=storage.dtype, device=storage._untyped().device)

NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, Meta, MkldnnCPU, SparseCPU, SparseCsrCPU, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, AutocastCPU, Autocast, Batched, VmapMode, Functionalize].

Is there a way to solve this problem?
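One idea I am considering, sketched below, is to create the Reader inside the worker process and send only image paths through the queue, so the model never has to be pickled when the process is spawned. The worker function, the result queue, and the None stop sentinel are just names I made up for the sketch, and I am not sure this is the right approach:

import multiprocessing as mp
import easyocr
import cv2

def ocr_worker(task_q, result_q):
    # Build the Reader inside the child process so it is never pickled
    # across the process boundary.
    reader = easyocr.Reader(['en'])
    while True:
        path = task_q.get()        # block until an image path (or stop signal) arrives
        if path is None:           # None is used here as a stop sentinel
            break
        img = cv2.imread(path)
        result_q.put(reader.readtext(img))

if __name__ == '__main__':
    task_q, result_q = mp.Queue(), mp.Queue()
    p = mp.Process(target=ocr_worker, args=(task_q, result_q))
    p.start()
    task_q.put('unknown.png')      # send only the image path, not the reader
    print(result_q.get())          # wait for the OCR result
    task_q.put(None)               # ask the worker to exit
    p.join()

Would this be a reasonable way to do it, or is there a better way to share a single Reader with a separate process?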
