Most efficient way to send images across processes

Posted on 2024-08-26 22:10:54

Goal

Pass images generated by one process efficiently and at very high speed to another process. The two processes run on the same machine and on the same desktop. The operating system may be WinXP, Vista, or Win7.

Detailed description

The first process is solely for controlling the communication with a device which produces the images. These images are about 500x300px in size and may be updated up to several hundred times per second. The second process needs these images to process them. The first process uses a third-party API to paint the images from the device to an HDC. This HDC has to be provided by me.

Note: There is already a connection open between the two processes. They are communicating via anonymous pipes and share memory mapped file views.

Thoughts

How would I achieve this goal with as little work as possible? And I mean work for both the computer and me (of course ;)). I am using Delphi, so maybe there is some component available for doing this? I think I could always paint to any image component's HDC, save the content to a memory stream, copy the contents via the memory mapped file, unpack it on the other side and paint it to the destination HDC there. I also read about an IPicture interface which can be used to marshal images. I need it as quick as possible, so the less overhead the better. I don't want the machine to be stressed just by copying some images.
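For reference, the stream-based variant described above might look roughly like this in Delphi (a sketch only: the 500x300 size, the ThirdPartyPaint placeholder and all names are illustrative, and error handling is omitted):

    { Requires Windows, Classes, Graphics in the uses clause.
      SharedView is assumed to point at the already mapped MMF view. }
    procedure SendFrameViaStream(SharedView: Pointer; SharedSize: Cardinal);
    var
      Bmp: TBitmap;
      Stream: TMemoryStream;
    begin
      Bmp := TBitmap.Create;
      Stream := TMemoryStream.Create;
      try
        Bmp.PixelFormat := pf32bit;
        Bmp.Width := 500;                    // device image size from above
        Bmp.Height := 300;
        // ThirdPartyPaint(Bmp.Canvas.Handle);  // the device API paints into this HDC
        Bmp.SaveToStream(Stream);            // copy #1: bitmap -> stream
        if Stream.Size <= SharedSize then
          Move(Stream.Memory^, SharedView^, Stream.Size);  // copy #2: stream -> MMF
      finally
        Stream.Free;
        Bmp.Free;
      end;
    end;

Even before any unpacking on the other side, that is already two full copies of each frame, which is the overhead the approaches below try to avoid.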

What are your ideas? I appreciate every thought on this!

Comments (5)

暮年 2024-09-02 22:10:54

Use a Memory Mapped File.

For a Delphi reference see Memory-mapped Files in Delphi and Shared Memory in Delphi.

For a more versatile approach you can look at using pipes or sending bitmap data via TCP. This would allow you to distribute the image data between nodes more easily, if necessary.
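For illustration, the bare Win32 calls behind those Delphi articles look roughly like this (the mapping name and the single-frame size are made-up placeholders):

    { Requires Windows, SysUtils in the uses clause. }
    const
      MAPPING_NAME = 'Local\ImageTransferMap';   // placeholder name
      MAPPING_SIZE = 500 * 300 * 4;              // one 32-bit 500x300 frame

    // Producer side: create a page-file-backed mapping and map a view of it.
    function CreateSharedView(out Mapping: THandle): Pointer;
    begin
      Mapping := CreateFileMapping(INVALID_HANDLE_VALUE, nil, PAGE_READWRITE,
        0, MAPPING_SIZE, MAPPING_NAME);
      Win32Check(Mapping <> 0);
      Result := MapViewOfFile(Mapping, FILE_MAP_ALL_ACCESS, 0, 0, MAPPING_SIZE);
      Win32Check(Result <> nil);
    end;

    // Consumer side: open the existing mapping by name and map the same view.
    function OpenSharedView(out Mapping: THandle): Pointer;
    begin
      Mapping := OpenFileMapping(FILE_MAP_ALL_ACCESS, False, MAPPING_NAME);
      Win32Check(Mapping <> 0);
      Result := MapViewOfFile(Mapping, FILE_MAP_ALL_ACCESS, 0, 0, MAPPING_SIZE);
      Win32Check(Result <> nil);
    end;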

枯叶蝶 2024-09-02 22:10:54

Use shared memory to pass the image data, and something else (named pipes, sockets, ...) to coordinate the handover.
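As a sketch of what that "something else" could be, a pair of named auto-reset events is enough for a simple handover (the event names are made up; the frame itself is assumed to live in the shared memory):

    { Requires Windows in the uses clause. }
    var
      FrameReady, FrameConsumed: THandle;

    procedure CreateHandoverEvents;
    begin
      // Auto-reset events; FrameConsumed starts signaled so the producer may write.
      FrameReady := CreateEvent(nil, False, False, 'Local\FrameReady');
      FrameConsumed := CreateEvent(nil, False, True, 'Local\FrameConsumed');
    end;

    // Producer: wait until the consumer is done, write the frame, signal it.
    procedure PublishFrame;
    begin
      WaitForSingleObject(FrameConsumed, INFINITE);
      // ... write the pixel data into the shared view here ...
      SetEvent(FrameReady);
    end;

    // Consumer: wait for a frame, read it, then hand the buffer back.
    procedure ConsumeFrame;
    begin
      WaitForSingleObject(FrameReady, INFINITE);
      // ... read the pixel data from the shared view here ...
      SetEvent(FrameConsumed);
    end;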

不知所踪 2024-09-02 22:10:54

In some cases, you can pass HBITMAP handles across processes. I've seen it done before (yes, on XP/Vista), and was as surprised as everyone else on the team when one of my co-workers showed me.

If memory serves me correctly, I believe it will work if the HBITMAP was allocated with one of the GDI functions (CreateBitmap, CreateCompatibleBitmap, CreateDIBitmap, etc.). HBITMAP handles created by LoadBitmap will not work, as they are just pointers to in-proc resources.

Also, when you share the HBITMAP with the other process, I think you shouldn't try to do anything special with it other than normal BitBlt operations.

At least that's what I remember. We got lucky because our graphic libraries were already written to manage all images as HBITMAPs.

YMMV
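For anyone who wants to experiment with this, the receiving side would amount to roughly the following; the HBITMAP value is assumed to arrive over whatever IPC channel already exists, and whether GDI accepts it in the other process is exactly the YMMV part:

    { Requires Windows in the uses clause. ReceivedBitmap is an HBITMAP value
      received from the other process; it is not guaranteed to be valid here. }
    procedure BlitReceivedBitmap(ReceivedBitmap: HBITMAP; DestDC: HDC);
    var
      MemDC: HDC;
      OldBmp: HGDIOBJ;
      Info: Windows.TBitmap;   // the GDI BITMAP record, not Graphics.TBitmap
    begin
      // GetObject fails if the handle means nothing in this process.
      if GetObject(ReceivedBitmap, SizeOf(Info), @Info) = 0 then
        Exit;
      MemDC := CreateCompatibleDC(DestDC);
      try
        OldBmp := SelectObject(MemDC, ReceivedBitmap);
        // Plain BitBlt only, as suggested above.
        BitBlt(DestDC, 0, 0, Info.bmWidth, Info.bmHeight, MemDC, 0, 0, SRCCOPY);
        SelectObject(MemDC, OldBmp);
      finally
        DeleteDC(MemDC);
      end;
    end;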

风筝在阴天搁浅。 2024-09-02 22:10:54

OK, it seems as if memory mapped files and pipes are the right way to go. That is not too bad, because the two processes already share an MMF and two pipes (for bidirectional communication). The only thing left to solve was how to pass the data with as few copy operations as possible.

The design which works quite well looks as follows (sequential flow; a rough sketch of the pipe handshake follows the list):

Process 1 (wants image)

  • give signal to process 2 (via pipe 1) to store image in shared memory
  • go to sleep and wait for response (blocking read from pipe 2)

Process 2 (provides images)

  • on signal (via pipe 1) wake up and tell hardware device to paint to HDC 1 (this is backed by shared memory, see below)
  • give signal to process 1 (via pipe 2)
  • go to sleep and wait for new job (via pipe 1)

Process 1 (wants image)

  • on signal (via pipe 2) wake up and paint from shared memory to destination HDC 2
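A rough Delphi sketch of that handshake, assuming the existing anonymous pipe handles are available as Pipe1Read/Pipe1Write and Pipe2Read/Pipe2Write and that a single byte is enough as a token (in reality some extra data travels with it):

    { Requires Windows in the uses clause. }
    procedure RequestAndPaintFrame(Pipe1Write, Pipe2Read: THandle; DestDC: HDC);
    var
      Token: Byte;
      Written, BytesRead: DWORD;
    begin
      Token := 1;
      // Process 1: ask process 2 for a frame ...
      WriteFile(Pipe1Write, Token, SizeOf(Token), Written, nil);
      // ... and block until it reports that the shared memory holds the image.
      ReadFile(Pipe2Read, Token, SizeOf(Token), BytesRead, nil);
      // Now paint from the shared memory into the destination HDC 2 (see below).
    end;

    procedure ServeFrames(Pipe1Read, Pipe2Write: THandle; SharedDC: HDC);
    var
      Token: Byte;
      Written, BytesRead: DWORD;
    begin
      // Process 2: serve jobs until the pipe is closed.
      while ReadFile(Pipe1Read, Token, SizeOf(Token), BytesRead, nil)
        and (BytesRead > 0) do
      begin
        // ThirdPartyPaint(SharedDC);  // the device API paints into HDC 1
        WriteFile(Pipe2Write, Token, SizeOf(Token), Written, nil);
      end;
    end;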

Now for the image transfer via shared memory (my goal was to use no more than one additional copy operation):

Process 2 creates an HBITMAP via CreateDIBSection, passing it the handle of the file mapping and the offset of the mapped view. Thus the image data lives in the shared memory. The resulting HBITMAP is selected into HDC 1 (which is also created by process 2) and is used by process 2 from then on.
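In Delphi, that step might look roughly like this (500x300 at 32 bpp and an offset of 0 into the mapping are assumptions; Mapping is the shared file-mapping handle):

    { Requires Windows in the uses clause. }
    function CreateSharedFrameDC(Mapping: THandle; out Frame: HBITMAP): HDC;
    var
      Info: TBitmapInfo;
      Bits: Pointer;
    begin
      ZeroMemory(@Info, SizeOf(Info));
      with Info.bmiHeader do
      begin
        biSize := SizeOf(TBitmapInfoHeader);
        biWidth := 500;
        biHeight := -300;          // negative height = top-down rows
        biPlanes := 1;
        biBitCount := 32;
        biCompression := BI_RGB;
      end;
      // The last two parameters are the file-mapping handle and the byte offset
      // of the pixel data inside it; this is what puts the bits in shared memory.
      Frame := CreateDIBSection(0, Info, DIB_RGB_COLORS, Bits, Mapping, 0);
      Result := CreateCompatibleDC(0);   // this becomes "HDC 1"
      SelectObject(Result, Frame);       // the third-party API now paints into it
    end;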

Process 1 uses StretchDIBits with a pointer to the mapped view's memory (as described here). This seems to be the only function for getting bits from memory directly into another HDC (in this case HDC 2). Other functions would copy them first into some intermediary buffer before you could transfer them from there to the final HDC.
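And the corresponding sketch of the process-1 side, with View being the pointer returned by MapViewOfFile in that process (same 500x300 / 32 bpp assumptions as above):

    { Requires Windows in the uses clause. }
    procedure PaintSharedFrame(DestDC: HDC; View: Pointer);
    var
      Info: TBitmapInfo;
    begin
      ZeroMemory(@Info, SizeOf(Info));
      with Info.bmiHeader do
      begin
        biSize := SizeOf(TBitmapInfoHeader);
        biWidth := 500;
        biHeight := -300;          // must match the layout used by process 2
        biPlanes := 1;
        biBitCount := 32;
        biCompression := BI_RGB;
      end;
      // One call copies the bits from shared memory directly into HDC 2.
      StretchDIBits(DestDC, 0, 0, 500, 300, 0, 0, 500, 300,
        View, Info, DIB_RGB_COLORS, SRCCOPY);
    end;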

So in the end it seems about twice as many bits need to be transferred as in the beginning. But I think this is as good as it gets, unless sharing GDI handles between processes were possible.

Note: I used pipes instead of signals because I need to transfer some additional data, too.

三月梨花 2024-09-02 22:10:54

As I see it, you have two options:

  1. Pass only the image handle / pointer to the other process, so both processes work on a single collection of images.
  2. Copy the image content to the other process and work on the copy from then on.

Which approach is best depends on your design. The best tool for either approach would be "memory mapped files" or "named pipes". These are the fastest you can get. Memory mapped files are probably the fastest form of inter-process communication, but have the downside that there is no "client-server" paradigm built into them, so you have to synchronize access to the MMF yourself. Named pipes, on the other hand, are almost as fast but have the client-server paradigm built right in. The difference in speed comes mainly from that.

Now, because of the sheer speed of the updates, the first approach could be better, but then you have to watch out for synchronization between the processes, so they do not read/write a single image at the same time. Also, some sort of caching or other smart techniques could be used to reduce the traffic to a minimum. When facing such a high level of communication, it is always advisable to look for ways to reduce it if possible.
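One simple way to get that synchronization, sketched here with a made-up mutex name: wrap every read and write of the shared view in a named mutex that both processes open.

    { Requires Windows, SysUtils in the uses clause. }
    var
      FrameLock: THandle;

    procedure InitFrameLock;
    begin
      // Both processes call this; the second call just opens the existing mutex.
      FrameLock := CreateMutex(nil, False, 'Local\SharedFrameLock');
      Win32Check(FrameLock <> 0);
    end;

    procedure WithFrameLock(Proc: TProcedure);
    begin
      WaitForSingleObject(FrameLock, INFINITE);
      try
        Proc;   // read or write the shared image inside the lock
      finally
        ReleaseMutex(FrameLock);
      end;
    end;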

For a very fast implementation of IPC based on named pipes you can use my IPC implementation. It is message oriented, so you do not have to worry about the technical details of pipes. It also uses a thread pool behind the scenes and has minimal additional overhead.
You can stress test it and see for yourself (a typical message takes 0.1 ms for a full client-server request-response cycle).
