Layer-backed OpenGLView redraws only when the window is resized

Published 2024-12-07 09:52:08

I have a window with a main view of type NSView and a subview which is a subclass of NSOpenGLView whose name is CustomOpenGLView.
The subclass of NSOpenGLView is obtained through a Custom View in Interface Builder and by setting its class to CustomOpenGLView.
This is done following the Apple sample code "Layer Backed OpenGLView".

The app is made to draw something to the OpenGLContext every, let's say, 0.05 seconds.
With Core Animation Layer disabled I am able to see the moving object in the view, which is the consequence of the continuous redrawing of the view. And everything works flawlessly.

I now want to have a semitransparent view on top of CustomOpenGLView to house control buttons like play/stop/etc.

To do this I have added a subview to CustomOpenGLView and enabled the Core Animation layer on CustomOpenGLView. The control buttons are placed in this new subview.
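In code, the setup described above amounts to something like the following (simplified sketch; the overlay view and its name are just placeholders for the actual control panel):

    // Sketch of the described setup (placeholder names).
    [customOpenGLView setWantsLayer:YES];   // enable the Core Animation layer

    // Semitransparent overlay that holds the play/stop buttons.
    NSView *controlsOverlay = [[NSView alloc] initWithFrame:customOpenGLView.bounds];
    controlsOverlay.autoresizingMask = NSViewWidthSizable | NSViewHeightSizable;
    [customOpenGLView addSubview:controlsOverlay];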

This way the view with control buttons correctly appears on top of CustomOpenGLView but now the view doesn't redraw. It draws only if I resize the window containing all these views.

The result is that I do not see any "animation"... I only see a still image representing the first frame drawn when the drawing loop starts.
If I resize the window, the openGLContext gets redrawn until I stop resizing. After that I once again see a still image showing the last drawing that occurred during the resize.

In addition, when the drawing loop starts, only the first "frame" appears on screen, and if I resize the window, let's say, 5 seconds later, I see in the view exactly what should have been drawn 5 seconds after the start of the drawing loop.
It seems like I need to call [glView setNeedsDisplay:TRUE]. I did that, but nothing changed.

Where is the mistake? Why does adding Core Animation Layer break the redraw? Does it imply something I'm not getting?


Comments (1)

轻拂→两袖风尘 2024-12-14 09:52:08

When you have a normal NSOpenGLView, you can simply draw something via OpenGL and then call -flushBuffer on the NSOpenGLContext to make the rendering appear on screen. If your context is not double buffered (which is not necessary when you render to a window, since in Mac OS X all windows are already double buffered themselves), calling glFlush() is sufficient as well; only for real fullscreen OpenGL rendering do you need double buffering to avoid artifacts. OpenGL will then render directly into the pixel storage of your view (which is in fact the backing storage of the window) or, in the case of double buffering, it will render to the back buffer and then swap it with the front buffer; thus the new content is immediately visible on screen (actually not before the next screen refresh, but such a refresh takes place at least 50-60 times a second).
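For reference, the per-frame code for such a plain (non-layer-backed) NSOpenGLView is essentially just this (sketch, inside the view's own render method):

    // Inside a render method of a plain NSOpenGLView (sketch).
    [[self openGLContext] makeCurrentContext];

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene ...

    // Double-buffered context: swap front/back buffers.
    // Single-buffered context: glFlush() would be sufficient instead.
    [[self openGLContext] flushBuffer];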

Things are a bit different if the NSOpenGLView is layer-backed. When you call -flushBuffer or glFlush(), the rendering actually takes place just as it did before and, again, the image is rendered directly into the pixel storage of the view. However, this pixel storage is no longer the backing storage of the window; it is the "backing layer" of the view. So your OpenGL image is updated, you just don't see it happening, since "drawing into a layer" and "displaying a layer on screen" are two completely different things! To make the new layer content visible, you have to call setNeedsDisplay:YES on your layer-backed NSOpenGLView.

Why didn't it work for you when you called setNeedsDisplay:YES? First of all, make sure you perform this call on the main thread. You can perform it on any thread you like and it will certainly mark the view as dirty, but only when the call is made on the main thread will it also schedule a redraw (without that, the view is marked dirty but won't be redrawn until one of its parent/child views is redrawn). Another problem could be the drawRect: method. When you mark the view as dirty and it is redrawn, this method is called, and whatever this method "draws" overwrites whatever content is currently in the layer. As long as your view wasn't layer-backed, it didn't matter where you rendered your OpenGL content, but for a layer-backed view this is the method where you should perform all your drawing.
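If your render loop runs on a background thread, you can push just the dirty-marking to the main thread, for example with GCD (sketch; glView stands for your layer-backed NSOpenGLView):

    // From the background render loop: schedule the redraw on the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        [glView setNeedsDisplay:YES];
    });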

Try the following: create an NSTimer on your main thread that fires every 20 ms and calls a method which calls setNeedsDisplay:YES on your layer-backed NSOpenGLView. Move all your OpenGL render code into the drawRect: method of your layer-backed NSOpenGLView. That should work pretty well. If you need something more reliable than an NSTimer, try a CVDisplayLink (CV = CoreVideo). A CVDisplayLink is like a timer, yet it fires every time the screen has just been redrawn.
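A minimal sketch of that timer-driven approach (ivar and selector names are placeholders):

    // In the controller owning the layer-backed NSOpenGLView (sketch).
    - (void)startRenderTimer
    {
        // Fires ~50 times per second on the main thread.
        self.renderTimer = [NSTimer scheduledTimerWithTimeInterval:0.02
                                                            target:self
                                                          selector:@selector(renderTick:)
                                                          userInfo:nil
                                                           repeats:YES];
    }

    - (void)renderTick:(NSTimer *)timer
    {
        [self.glView setNeedsDisplay:YES];   // schedules -drawRect: for the view
    }

    // In the NSOpenGLView subclass: all OpenGL drawing now lives in -drawRect:.
    - (void)drawRect:(NSRect)dirtyRect
    {
        [[self openGLContext] makeCurrentContext];
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... render the current frame ...
        [[self openGLContext] flushBuffer];
    }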

Update

Layer-backed NSOpenGLViews are somewhat outdated; starting with 10.6 they are not really needed any longer. Internally, an NSOpenGLView creates an NSOpenGLLayer when you make it layer-backed, so you might as well use such a layer directly yourself and "build" your own NSOpenGLView:

  1. Create your own subclass of NSOpenGLLayer, let's call it MyOpenGLLayer
  2. Create your own subclass of NSView, let's call it MyGLView
  3. Override - (CALayer *)makeBackingLayer to return an autoreleased instance of MyOpenGLLayer
  4. Set wantsLayer:YES for MyGLView

You now have your own layer backed view and it is layer backed by your NSOpenGLLayer subclass. Since it is layer backed, it is absolutely okay to add sub-views to it (e.g. buttons, textfields, etc.).
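A sketch of steps 1-4 (manual reference counting as in the answer above; under ARC you would drop the autorelease):

    #import <Cocoa/Cocoa.h>

    // Step 1: the NSOpenGLLayer subclass that will do the rendering (see the options below).
    @interface MyOpenGLLayer : NSOpenGLLayer
    @end

    // Step 2: a plain NSView subclass backed by that layer.
    @interface MyGLView : NSView
    @end

    @implementation MyGLView

    - (instancetype)initWithFrame:(NSRect)frame
    {
        if ((self = [super initWithFrame:frame])) {
            self.wantsLayer = YES;   // step 4: this triggers -makeBackingLayer
        }
        return self;
    }

    // Step 3: hand AppKit our own backing layer.
    - (CALayer *)makeBackingLayer
    {
        return [[[MyOpenGLLayer alloc] init] autorelease];
    }

    @end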

For your backing layer, you have basically two options.

Option 1
The correct and officially supported way is to keep your rendering on the main thread. Therefore you must do the following:

  • Override canDrawInContext:... to return YES/NO, depending on whether you can/want to draw the next frame or not.
  • Override drawInContext:... to perform your actual OpenGL rendering.
  • Make the layer asynchronous (setAsynchronous:YES)
  • Be sure the layer is "updated" whenever it's resized (setNeedsDisplayOnBoundsChange:YES), otherwise the OpenGL backing surface is not resized when the layer is, and the rendered OpenGL content has to be stretched/shrunk each time the layer redraws

Apple will create a CVDisplayLink for you that calls canDrawInContext:... on the main thread each time it fires, and if this method returns YES, it then calls drawInContext:.... This is the way you should do it.
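A sketch of Option 1; the abbreviated selectors above correspond to NSOpenGLLayer's canDrawInOpenGLContext:pixelFormat:forLayerTime:displayTime: and drawInOpenGLContext:pixelFormat:forLayerTime:displayTime:, and the drawing itself is just a placeholder:

    #import <Cocoa/Cocoa.h>
    #import <OpenGL/gl.h>

    @implementation MyOpenGLLayer

    - (instancetype)init
    {
        if ((self = [super init])) {
            self.asynchronous = YES;                 // Apple drives redraws for you
            self.needsDisplayOnBoundsChange = YES;   // keep the GL surface in sync with the layer size
        }
        return self;
    }

    // Called on the main thread each time the internal display link fires.
    - (BOOL)canDrawInOpenGLContext:(NSOpenGLContext *)context
                       pixelFormat:(NSOpenGLPixelFormat *)pixelFormat
                      forLayerTime:(CFTimeInterval)layerTime
                       displayTime:(const CVTimeStamp *)displayTime
    {
        return YES;   // or: return whether a new frame is ready to be drawn
    }

    // All OpenGL rendering for the frame goes here.
    - (void)drawInOpenGLContext:(NSOpenGLContext *)context
                    pixelFormat:(NSOpenGLPixelFormat *)pixelFormat
                   forLayerTime:(CFTimeInterval)layerTime
                    displayTime:(const CVTimeStamp *)displayTime
    {
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... render the current frame here ...
    }

    @end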

If your rendering is too expensive to happen on the main thread, you can do the following trick: override openGLContextForPixelFormat:... to create a context (Context B) that is shared with another context you created earlier (Context A). Create a framebuffer in Context A (you can do that before or after creating Context B, it doesn't really matter); attach depth and/or stencil renderbuffers if required (of a bit depth of your choice), but instead of a color renderbuffer, attach a "texture" (Texture X) as the color attachment (glFramebufferTexture()). Now all color render output is written to that texture when rendering to that framebuffer. Perform all rendering to this framebuffer using Context A on any thread of your choice! Once the rendering is done, make canDrawInContext:... return YES and in drawInContext:... just draw a simple quad that fills the whole active framebuffer (Apple has already set it up for you, along with the viewport to fill it completely) and that is textured with Texture X. This is possible since shared contexts also share all objects (textures, framebuffers, etc.). So your drawInContext:... method never does more than drawing a single, simple textured quad, that's all. All other (possibly expensive) rendering happens into this texture on a background thread, without ever blocking your main thread.
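A compressed sketch of that trick. Names (workerContext, fbo, textureX, width, height) are illustrative, error handling is omitted, and depending on your OpenGL version you may need the EXT-suffixed framebuffer calls instead:

    // In the NSOpenGLLayer subclass: Context B, shared with the worker Context A.
    - (NSOpenGLContext *)openGLContextForPixelFormat:(NSOpenGLPixelFormat *)pixelFormat
    {
        return [[[NSOpenGLContext alloc] initWithFormat:pixelFormat
                                           shareContext:self.workerContext] autorelease];
    }

    // One-time setup on the worker thread, with Context A current:
    // a framebuffer whose color attachment is a texture instead of a renderbuffer.
    GLuint textureX, fbo;
    glGenTextures(1, &textureX);
    glBindTexture(GL_TEXTURE_2D, textureX);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, textureX, 0);

    // Per frame, on the worker thread: render the expensive scene into this
    // framebuffer (i.e. into textureX), then mark the frame as ready.

    // In drawInOpenGLContext:... (Context B, main thread): draw one textured quad.
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureX);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();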

Option 2
The other option is not officially supported by Apple and may or may not work for you:

  • Don't override canDrawInContext:..., the default implementation always returns YES and that's what you want.
  • Override drawInContext:... to perform your actual OpenGL rendering, all of it.
  • Don't make the layer asynchronous.
  • Don't set needsDisplayOnBoundsChange.

Whenever you want to redraw this layer, call display directly (NOT setNeedsDisplay! It's true, Apple says you shouldn't call it, but "shouldn't" is not "mustn't") and after calling display, call [CATransaction flush]. This will work even when called from a background thread! Your drawInContext:... method is called from the same thread that calls display, which can be any thread. Calling display directly will make sure your OpenGL render code executes, yet the newly rendered content is still only visible in the backing storage of the layer; to bring it to the screen you must force the system to perform layer compositing, and [CATransaction flush] will do exactly that. The class CATransaction, which has only class methods (you will never create an instance of it), is implicitly thread-safe and may always be used from any thread at any time (it performs locking on its own whenever and wherever required).
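In code, Option 2's redraw path boils down to this (sketch; renderQueue and glLayer are placeholders for your own dispatch queue and MyOpenGLLayer instance):

    // May run on any background thread/queue.
    dispatch_async(renderQueue, ^{
        [glLayer display];       // runs -drawInOpenGLContext:... synchronously on this thread
        [CATransaction flush];   // forces layer compositing so the new frame reaches the screen
    });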

While this method is not recommended, since it may cause redraw issues for other views (those may also end up being redrawn on threads other than the main thread, and not all views support that), it is not forbidden either: it uses no private API and it has been suggested on the Apple mailing list without anyone at Apple opposing it.
