Video compression: what takes longer?

Posted 2024-09-29

So, I've always wondered: when it comes to compression, does it take less time to encode a video to a smaller resolution, or to a larger one?

For the sake of being realistic about the question, let's take an example of a near-lossless MOV source (maybe MJPEG or ProRes 422) at 29.97 fps, with the keyframe interval set to whatever the compressor wants, or to 24 if auto is unavailable. I'll do two conversions, one to 480p @ 800 kbps and one to 720p @ 1500 kbps, both 2-pass, to MP4. I realize this may be specific to the encoder, but knowing which encoders take longer to do certain things would be good too. If you want specifics, let's assume it's ffmpeg.
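
Concretely, I'd expect the two jobs to look something like the sketch below (assuming libx264 as the H.264 encoder; input.mov and the output names are placeholders, and -g 24 pins the keyframe interval mentioned above):

    # 480p @ 800 kbps, two-pass: pass 1 only writes the stats log, so audio
    # is dropped and the output goes to /dev/null (NUL on Windows)
    ffmpeg -y -i input.mov -c:v libx264 -b:v 800k -vf scale=-2:480 -g 24 \
           -pass 1 -an -f mp4 /dev/null
    ffmpeg -i input.mov -c:v libx264 -b:v 800k -vf scale=-2:480 -g 24 \
           -pass 2 -c:a aac -b:a 128k out_480p.mp4

    # 720p @ 1500 kbps, same recipe at the higher resolution and bitrate
    ffmpeg -y -i input.mov -c:v libx264 -b:v 1500k -vf scale=-2:720 -g 24 \
           -pass 1 -an -f mp4 /dev/null
    ffmpeg -i input.mov -c:v libx264 -b:v 1500k -vf scale=-2:720 -g 24 \
           -pass 2 -c:a aac -b:a 128k out_720p.mp4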

If I am converting down to a 480p video at 800 kbps, at first it seems like that would be the smaller job, and therefore the faster one, because it is generating less data.

But then I was thinking: maybe compressing each frame less (if that is in fact what happens at the higher bitrate) might be faster. So if I were to convert to 720p at 1500 kbps, maybe that would be faster?

I imagine the time difference between these two specific conversions wouldn't be much, but there would be one. What would negatively affect the speed of conversion? The size of the video? The bitrate? Keyframes? How would you suggest maximizing conversion speed with the least effect on quality?
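
For reference, the one speed lever I know of in ffmpeg/libx264 is -preset, which trades encoder effort for compression efficiency; a single-pass, constant-quality encode also avoids reading the source twice. A sketch (out_fast.mp4 is a placeholder name):

    # Single-pass constant-quality encode: -preset controls speed
    # (ultrafast ... veryslow) and -crf controls quality (lower = better;
    # ~18-28 is the typical range). No second pass, so the source is read
    # and scaled only once.
    ffmpeg -i input.mov -c:v libx264 -preset veryfast -crf 23 \
           -vf scale=-2:720 -c:a aac out_fast.mp4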

This is mostly hypothetical; I can't really think of a situation where I wouldn't be able to let a server chug on conversions all night. But I've always wondered whether there was something I was doing that unnecessarily slowed my conversions down.

1 Answer

蓝梦月影 (answered 2024-10-06):

Speculating wildly, you can (naively) account for compression time as the time it takes to read the input file, the time it takes to process each sample point from the source, and the time it takes to write out the resulting processed output. If the only thing in that (supremely over-simplified) model that changes is the output, and the size of that goes down, then your time goes down. Note that resolution changes the middle term as well: 854×480 is roughly 410,000 pixels per frame, while 1280×720 is roughly 922,000, so the 720p encode has about 2.25 times as many samples to process.

Besides this, the easiest way to settle the question would be to build an encoding benchmark for yourself, making sure to repeat each test multiple times so that outside factors don't skew the results; see the sketch below.
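
A minimal version of such a benchmark, assuming bash, a placeholder input.mov, and an arbitrary set of libx264 presets; it encodes video-only to /dev/null so that disk writes don't muddy the timings:

    #!/usr/bin/env bash
    # Crude encoding benchmark: time each setting three times and print
    # wall-clock seconds, so a one-off outlier is easy to spot.
    set -euo pipefail
    INPUT=input.mov    # placeholder source clip

    for run in 1 2 3; do
      for preset in ultrafast veryfast medium; do
        start=$SECONDS
        ffmpeg -loglevel error -y -i "$INPUT" \
               -c:v libx264 -preset "$preset" -b:v 800k -vf scale=-2:480 \
               -an -f mp4 /dev/null
        echo "preset=$preset run=$run: $((SECONDS - start))s"
      done
    done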
