DirectShow Push Source, Syncing and Time Stamping

Posted 2024-12-14 01:34:52


I have a filter graph that takes raw audio and video input and then uses the ASF Writer to encode them to a WMV file.

I've written two custom push source filters to provide the input to the graph. The audio filter just uses WASAPI in loopback mode to capture the audio and send the data downstream. The video filter takes raw RGB frames and sends them downstream.
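For context, setting up a WASAPI loopback capture client generally looks like the following. This is a minimal sketch with my own naming and error handling, not the poster's filter code; it assumes COM is already initialized on the calling thread.

```cpp
#include <mmdeviceapi.h>
#include <audioclient.h>

// Minimal sketch: open the default render endpoint in loopback mode so we
// capture whatever the system is playing. The caller must CoInitialize
// first and Release the returned interfaces.
HRESULT OpenLoopbackCapture(IAudioClient **ppClient, IAudioCaptureClient **ppCapture)
{
    IMMDeviceEnumerator *pEnum = nullptr;
    IMMDevice *pDevice = nullptr;
    WAVEFORMATEX *pwfx = nullptr;

    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr,
                                  CLSCTX_ALL, IID_PPV_ARGS(&pEnum));
    // Loopback records the render path, so ask for the render endpoint.
    if (SUCCEEDED(hr))
        hr = pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);
    if (SUCCEEDED(hr))
        hr = pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                               reinterpret_cast<void**>(ppClient));
    if (SUCCEEDED(hr))
        hr = (*ppClient)->GetMixFormat(&pwfx);
    if (SUCCEEDED(hr))
        hr = (*ppClient)->Initialize(AUDCLNT_SHAREMODE_SHARED,
                                     AUDCLNT_STREAMFLAGS_LOOPBACK,
                                     10000000, // 1 second buffer, in 100 ns units
                                     0, pwfx, nullptr);
    if (SUCCEEDED(hr))
        hr = (*ppClient)->GetService(IID_PPV_ARGS(ppCapture));

    CoTaskMemFree(pwfx);
    if (pDevice) pDevice->Release();
    if (pEnum)  pEnum->Release();
    return hr;
}
```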

For both the audio and video frames, I have the performance counter value for the time the frames were captured.
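For reference, a stored performance counter value converts to DirectShow's 100 ns REFERENCE_TIME units roughly as follows. This is a minimal sketch of the unit conversion only; `captureQpc` and `firstQpc` are hypothetical names for the counter readings at frame capture and at the first captured frame.

```cpp
#include <windows.h>
#include <dshow.h>  // REFERENCE_TIME (100 ns units)

// Sketch: convert a QueryPerformanceCounter reading into a media time
// relative to the first captured frame. One second is 10,000,000
// REFERENCE_TIME ticks. Multiply before dividing to preserve precision
// (for very long captures, guard against the intermediate overflowing).
REFERENCE_TIME QpcToReferenceTime(LONGLONG captureQpc, LONGLONG firstQpc)
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);  // counter ticks per second

    LONGLONG elapsed = captureQpc - firstQpc;
    return (REFERENCE_TIME)((elapsed * 10000000LL) / freq.QuadPart);
}
```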

Question 1: If I want to properly timestamp the video and audio, do I need to create a custom reference clock that uses the performance counter, or is there a better way for me to sync the two inputs, i.e. to calculate the stream time?

The video input is captured from a Direct3D buffer somewhere else and I cannot guarantee the framerate, so it behaves like a live source. I always know the start time of a frame, of course, but how do I know the end time?

For instance, let's say the video filter ideally wants to run at 25 FPS, but due to latency and so on, frame 1 starts perfectly at the 1/25th mark but frame 2 starts later than the expected 2/25th mark. That means there's now a gap in the graph since the end time of frame 1 doesn't match the start time of frame 2.

Question 2: Will the downstream filters know what to do with the delay between frames 1 and 2, or do I have to manually decrease the length of frame 2?


Comments (1)

無處可尋 2024-12-21 01:34:52
1. One option is to omit time stamps, but that may end up with downstream filters failing to process the data. The other option is to use the System Reference Clock to generate time stamps; either way, this is preferable to using the performance counter directly as a time stamp source (see the sketch after this list).
    1. Yes, you need to time stamp both the video and the audio in order to keep them in sync; the time stamps are the only way to tell that two pieces of data actually belong to the same moment in time.
    2. Video samples carry no inherent duration: you can omit the stop time or set it equal to the start time. A gap between one frame's stop time and the next frame's start time has no consequences.
2. Renderers are free to choose whether or not to respect time stamps. With audio, of course, you will want a smooth stream without gaps in the time stamps.
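A minimal sketch of that suggestion, using the DirectShow base classes: CMyVideoPin is a hypothetical CSourceStream-derived pin and the frame copy is elided. The stream time comes from the graph's reference clock via CBaseFilter::StreamTime, and the stop time is simply set equal to the start time, as point 1.2 above allows.

```cpp
#include <streams.h>  // DirectShow base classes: CSourceStream, CRefTime

HRESULT CMyVideoPin::FillBuffer(IMediaSample *pSample)
{
    // ... copy the captured RGB frame into pSample's buffer here ...

    // Stream time = reference clock time minus the time the graph started
    // running; CBaseFilter::StreamTime performs that subtraction (it fails
    // with VFW_E_NO_CLOCK if the graph has no reference clock).
    CRefTime rtNow;
    m_pFilter->StreamTime(rtNow);

    // Stop time equal to start time: gaps before the next frame's start
    // are harmless for video.
    REFERENCE_TIME rtStart = rtNow;
    REFERENCE_TIME rtStop  = rtStart;
    pSample->SetTime(&rtStart, &rtStop);

    // Every uncompressed RGB frame is a key frame.
    pSample->SetSyncPoint(TRUE);
    return S_OK;
}
```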