Recording multiplexed audio/video to a file with JMF
I have a project that uses JMF to record, for a short time (a few seconds to a couple of minutes), both the web camera and the audio input, and then write the result to a file.
The problem with my project is that this file is never produced properly and cannot be played back.
While I've found numerous examples of how to do multiplexed transmission of audio and video over RTP, or how to convert an input file from one format to another, I haven't yet seen a working example that captures audio and video and writes it to a file.
Does anyone have an example of functioning code to do this?
I've found the reason why I was not able to generate a file from two separate capture devices under JMF, and it relates to the ordering of the start commands. In particular, a Processor will take a DataSource (or a merging DataSource), assign and synchronize the time base(s), and start/stop the sources for you, so the extra work I was doing to start the DataSources manually was completely redundant and threw a wrench in the works.
This was a lot of painful trial and error, and I would suggest you read every line of code, understand the sequencing, and understand what has been included, what has been left out, and why, before trying to implement this yourself. JMF is quite the bear if you're not careful.
Oh, and remember to catch exceptions. I had to omit that code due to length restrictions.
Here's my final solution:
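The full listing from the original post is not reproduced here, so what follows is only a minimal sketch of the sequencing described above, assuming the standard JMF pieces (CaptureDeviceManager, Manager.createMergingDataSource, ProcessorModel, DataSink). The device queries, track formats, output file name, and recording length are placeholder assumptions that will need adjusting for your capture hardware and installed codecs. Note that the capture DataSources are never started by hand; the Processor does that.

import java.io.File;
import java.util.Vector;

import javax.media.CaptureDeviceInfo;
import javax.media.CaptureDeviceManager;
import javax.media.DataSink;
import javax.media.Format;
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Processor;
import javax.media.ProcessorModel;
import javax.media.format.AudioFormat;
import javax.media.format.VideoFormat;
import javax.media.protocol.DataSource;
import javax.media.protocol.FileTypeDescriptor;

public class AvRecorderSketch {

    public static void main(String[] args) throws Exception {
        // Find the first audio and video capture devices JMF knows about.
        // The query formats here are assumptions and may need changing for your hardware.
        Vector audioDevices = CaptureDeviceManager.getDeviceList(new AudioFormat(AudioFormat.LINEAR));
        Vector videoDevices = CaptureDeviceManager.getDeviceList(new VideoFormat(VideoFormat.YUV));
        CaptureDeviceInfo audioInfo = (CaptureDeviceInfo) audioDevices.elementAt(0);
        CaptureDeviceInfo videoInfo = (CaptureDeviceInfo) videoDevices.elementAt(0);

        // Create the capture DataSources and merge them, but do NOT start them here:
        // the Processor assigns and synchronizes the time bases and starts/stops the
        // sources itself (the key point of the answer above).
        DataSource audio = Manager.createDataSource(audioInfo.getLocator());
        DataSource video = Manager.createDataSource(videoInfo.getLocator());
        DataSource merged = Manager.createMergingDataSource(new DataSource[] { audio, video });

        // Ask for an AVI multiplex; which track formats actually work depends on the
        // codecs installed on the machine.
        Format[] trackFormats = new Format[] {
                new AudioFormat(AudioFormat.LINEAR),
                new VideoFormat(VideoFormat.YUV)
        };
        Processor processor = Manager.createRealizedProcessor(new ProcessorModel(
                merged, trackFormats, new FileTypeDescriptor(FileTypeDescriptor.MSVIDEO)));

        // Sink the Processor's output DataSource to a file; open and start the sink
        // before starting the Processor.
        MediaLocator dest = new MediaLocator(new File("capture.avi").toURI().toURL());
        DataSink sink = Manager.createDataSink(processor.getDataOutput(), dest);
        sink.open();
        sink.start();

        processor.start();        // recording begins
        Thread.sleep(10000);      // record for roughly 10 seconds
        processor.stop();
        processor.close();

        sink.stop();
        sink.close();
    }
}

The essential ordering shown above is: build and merge the sources, realize the Processor around the merged source, open and start the DataSink, and only then start the Processor, which drives the capture devices itself.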