MPEG-2 Decoding

Posted 2024-07-13 23:45:05

I want to understand how video and audio decoding works, especially the timing synchronization (how to get 30fps video, how to couple that with audio, etc.). I don't want to know ALL the details, just the essence of it. I want to be able to write a high-level simplification of an actual video/audio decoder.

Could you give me some pointers? Actual C/C++ source code of an MPEG-2 video/audio decoder would be the fastest way to understand those things, I think.


Comments (6)

我要还你自由 2024-07-20 23:45:05


Reading the source code of a codec that works seems the right way to go.
I suggest the following:

http://www.mpeg.org/MPEG/video/mssg-free-mpeg-software.html

Given that it's mentioned on the mpeg.org website, I'd say you'll find what you need here.

In the past I've spent some time working on decoding MPEG videos (no audio though), and the principles are quite simple. There are some pure, self-contained images (I-pictures), some intermediate images that are described relative to the closest main ones (P-pictures), and the rest are described using the closest main/intermediate images on either side (B-pictures).

One time slot, one image. But recent codecs are much more complicated, I guess!

EDIT: synchronization

I am no expert in synchronizing audio and video, but the issue seems to be dealt with using a sync layer.

烟酉 2024-07-20 23:45:05

For audio/video synchronization, basically, every video and audio frame should be time-stamped. The timestamp is typically known as the PTS (Presentation Time Stamp). Once the video/audio has been decoded by the decoder, the audio/video renderer should schedule each frame to be displayed at the right time so that audio and video stay synchronized.

I think you can refer to the "Timing Model" chapter of the MPEG2 Tutorial (http://www.bretl.com/mpeghtml/MPEGindex.htm) for details.

残疾 2024-07-20 23:45:05

You can browse the source code of FFmpeg (available through SVN), or its API documentation.

呢古 2024-07-20 23:45:05

Depending on how much you already know about the MPEG-2 format, you might want to get a broad overview by reading an article about it first. I mean something like these:

A Beginners Guide for MPEG-2 Standard

MPEG-2 VIDEO COMPRESSION

凉薄对峙 2024-07-20 23:45:05

@Patric and Nils

So you say that there are timestamps, hein... These are for the video part only, I guess. For audio I suppose there is enough information in the header (like "samples per second"). How often are these timestamps needed? I imagine the interleaving of audio and video packets ensures that video data is always ahead of audio data, or something?

EDIT: Found what I needed:
http://www.dranger.com/ffmpeg/tutorial01.html

满身野味 2024-07-20 23:45:05


Helltone,

Timestamps for audio data are still necessary because the audio and video frames may not be aligned at the same points. For example:

V: 1000 1040 1080 1120 ...
A: 990 1013 1036 (lost) 1082

You may need to compensate for the offset between the first video and audio frames. Besides, if packet loss is possible (during video streaming), you need timestamps on both video and audio to keep synchronization accurate.
