Decoding a hardware-encoded H264 camera feed from Android in real time with ffmpeg

Published on 2024-12-09 15:31:07

I'm trying to use the hardware H264 encoder on Android to create video from the camera, and use FFmpeg to mux in audio (all on the Android phone itself).

What I've accomplished so far is packetizing the H264 video into rtsp packets, and decoding it using VLC (over UDP), so I know the video is at least correctly formatted. However, I'm having trouble getting the video data to ffmpeg in a format it can understand.

I've tried sending the same rtsp packets to port 5006 on localhost (over UDP), then providing ffmpeg with the sdp file that tells it which local port the video stream is coming in on and how to decode the video, if I understand rtsp streaming correctly. However, this doesn't work and I'm having trouble diagnosing why, as ffmpeg just sits there waiting for input.
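
For reference, an sdp file for this kind of stream would typically look something like the one below; payload type 96 and port 5006 are illustrative values matching the setup described above, and newer ffmpeg builds may additionally need -protocol_whitelist file,udp,rtp when reading such a file.

v=0
o=- 0 0 IN IP4 127.0.0.1
s=Android camera stream
c=IN IP4 127.0.0.1
t=0 0
m=video 5006 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1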

For reasons of latency and scalability I can't just send the video and audio to the server and mux it there, it has to be done on the phone, in as lightweight a manner as possible.

What I guess I'm looking for are suggestions as to how this can be accomplished. The optimal solution would be sending the packetized H264 video to ffmpeg over a pipe, but then I can't send ffmpeg the sdp file parameters it needs to decode the video.

I can provide more information on request, like how ffmpeg is compiled for Android, but I doubt that's necessary.

Oh, and the way I start ffmpeg is through the command line; I would really rather avoid mucking about with JNI if that's at all possible.

Any help would be much appreciated, thanks.

Comments (2)

旧时模样 2024-12-16 15:31:07

Have you tried using java.lang.Runtime?

import java.io.InputStream;
import java.io.OutputStream;

String[] parameters = {"ffmpeg", "other", "args"};
Process program = Runtime.getRuntime().exec(parameters);

InputStream in = program.getInputStream();    // the process's stdout
OutputStream out = program.getOutputStream(); // the process's stdin
InputStream err = program.getErrorStream();   // the process's stderr

Then you write to the process's stdin (its OutputStream) and read from its stdout and stderr. It's not a command-line pipe, but it should be better than going through a network interface.
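
As a rough, untested sketch of how that could be wired up here (the ffmpeg arguments, the output path and getNextEncodedFrame() are placeholders for illustration, not code from a working project): spawn ffmpeg reading raw Annex-B H264 from its stdin, drain stderr on a separate thread, and push the encoder output into the stdin stream.

import java.io.InputStream;
import java.io.OutputStream;

void pumpVideoIntoFfmpeg() throws Exception {
    String[] cmd = {
        "ffmpeg",
        "-f", "h264",      // treat stdin as a raw Annex-B H264 elementary stream (no sdp file needed)
        "-i", "pipe:0",    // read the video from the process's stdin
        "-c:v", "copy",    // don't re-encode the video
        "/sdcard/out.mp4"  // placeholder output; could also be a udp:// or rtmp:// target
    };
    final Process ffmpeg = Runtime.getRuntime().exec(cmd);
    OutputStream ffmpegStdin = ffmpeg.getOutputStream();

    // Drain stderr on another thread so ffmpeg doesn't block once its log buffer fills up.
    new Thread(new Runnable() {
        public void run() {
            byte[] buf = new byte[4096];
            InputStream err = ffmpeg.getErrorStream();
            try {
                while (err.read(buf) != -1) { /* discard or log */ }
            } catch (Exception ignored) {}
        }
    }).start();

    byte[] accessUnit;
    while ((accessUnit = getNextEncodedFrame()) != null) {
        ffmpegStdin.write(accessUnit); // each access unit keeps its 00 00 00 01 start codes
    }
    ffmpegStdin.close(); // EOF lets ffmpeg finalize the output
    ffmpeg.waitFor();
}

// Placeholder: wire this to wherever the hardware encoder's output buffers come from.
byte[] getNextEncodedFrame() { return null; }

One advantage of feeding raw Annex-B H264 through a pipe like this is that no sdp file is needed at all; the sdp only comes into play when the stream arrives as RTP packets.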

漫漫岁月 2024-12-16 15:31:07

A little bit late but I think this is a good question and it doesn't have a good answer yet.

If you want to stream the camera and mic from an Android device, you have two main alternatives: a Java or an NDK implementation.

  1. Java implementation.

I'm only going to mention the idea, but basically it is to implement an RTSP server and the RTP protocol in Java, based on these standards: Real-Time Streaming Protocol Version 2.0 and RTP Payload Format for H.264 Video. This task will be very long and hard. But if you are doing your PhD it could be nice to have a good RTSP Java lib for Android.
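
Just to give an idea of the scope, here is a rough, untested sketch of the simplest packetization mode from the H.264 RTP payload format (one NAL unit per RTP packet); the payload type 96, the fixed SSRC and the class name are arbitrary illustrative choices:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

class RtpH264Sender {
    private final DatagramSocket socket = new DatagramSocket();
    private final InetAddress dest;
    private final int port;
    private int seq = 0;

    RtpH264Sender(String host, int port) throws Exception {
        this.dest = InetAddress.getByName(host);
        this.port = port;
    }

    // nal must be a single NAL unit (without the 00 00 00 01 start code) small enough for one UDP datagram.
    void sendNal(byte[] nal, long timestamp90kHz, boolean lastOfFrame) throws Exception {
        byte[] packet = new byte[12 + nal.length];
        packet[0] = (byte) 0x80;                             // V=2, no padding, no extension, no CSRC
        packet[1] = (byte) ((lastOfFrame ? 0x80 : 0) | 96);  // marker bit + dynamic payload type 96
        packet[2] = (byte) (seq >> 8);                       // 16-bit sequence number, big-endian
        packet[3] = (byte) seq;
        seq = (seq + 1) & 0xFFFF;
        for (int i = 0; i < 4; i++) {                        // 32-bit timestamp on a 90 kHz clock
            packet[4 + i] = (byte) (timestamp90kHz >> (24 - 8 * i));
        }
        packet[8] = 0x12; packet[9] = 0x34; packet[10] = 0x56; packet[11] = 0x78; // arbitrary SSRC
        System.arraycopy(nal, 0, packet, 12, nal.length);
        socket.send(new DatagramPacket(packet, packet.length, dest, port));
    }
}

Fragmenting large NAL units (FU-A), RTCP, and the RTSP session handling on top of this are where most of the real work is.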

  2. NDK implementation.

This alternative includes various solutions. The main idea is to use a powerful C or C++ library in our Android application, in this case FFmpeg. This library can be compiled for Android and can support various architectures.
The problem with this approach is that you may need to learn about the Android NDK, C and C++ to accomplish it.

But there is an alternative: you can use a wrapper around the C library and call FFmpeg from Java. But how?

For example, you can use FFmpeg Android, which has been compiled with x264, libass, fontconfig, freetype and fribidi and supports various architectures. But it is still hard to program: if you want to stream in real time you need to deal with file descriptors and in/out streams.

The best alternative, from a Java programming point of view, is to use JavaCV. JavaCV uses wrappers of commonly used computer vision libraries, including OpenCV, FFmpeg, etc., and provides utility classes to make their functionality easier to use on the Java platform, including (of course) Android.

JavaCV also comes with hardware accelerated full-screen image display (CanvasFrame and GLCanvasFrame), easy-to-use methods to execute code in parallel on multiple cores (Parallel), user-friendly geometric and color calibration of cameras and projectors (GeometricCalibrator, ProCamGeometricCalibrator, ProCamColorCalibrator), detection and matching of feature points (ObjectFinder), a set of classes that implement direct image alignment of projector-camera systems (mainly GNImageAligner, ProjectiveTransformer, ProjectiveColorTransformer, ProCamTransformer, and ReflectanceInitializer), a blob analysis package (Blobs), as well as miscellaneous functionality in the JavaCV class. Some of these classes also have an OpenCL and OpenGL counterpart, their names ending with CL or starting with GL, i.e.: JavaCVCL, GLCanvasFrame, etc.

But how can we use this solution?

Here we have a basic implementation to stream using UDP.

// AV_CODEC_ID_H264 and AV_CODEC_ID_AAC are static imports from the JavaCPP/JavaCV avcodec presets.
String streamURL = "udp://ip_destination:port";
recorder = new FFmpegFrameRecorder(streamURL, frameWidth, frameHeight, 1); // 1 = mono audio channel
recorder.setInterleaved(false);
// video options //
recorder.setFormat("mpegts");                   // MPEG-TS container, convenient for raw UDP streaming
recorder.setVideoOption("tune", "zerolatency"); // x264 low-latency tuning
recorder.setVideoOption("preset", "ultrafast"); // fastest x264 preset, lowest CPU cost
recorder.setVideoBitrate(5 * 1024 * 1024);      // 5 Mbit/s
recorder.setFrameRate(30);
recorder.setSampleRate(AUDIO_SAMPLE_RATE);      // e.g. 44100 Hz
recorder.setVideoCodec(AV_CODEC_ID_H264);
recorder.setAudioCodec(AV_CODEC_ID_AAC);

This part of the code shows how to initialize the FFmpegFrameRecorder object called recorder. This object will capture and encode the frames obtained from the camera and the samples obtained from the microphone.

If you want to capture a preview in the same Android app, then we need to implement a CameraPreview class; this class will convert the raw data served from the camera and will create the Preview and the Frame for the FFmpegFrameRecorder.
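
A minimal sketch of such a class, assuming the old android.hardware.Camera preview callback and JavaCV's AndroidFrameConverter (the class layout and field names are illustrative, not the exact code from the repository), could look like this:

import org.bytedeco.javacv.AndroidFrameConverter;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;

import android.hardware.Camera;

public class CameraPreview implements Camera.PreviewCallback {
    private final AndroidFrameConverter converter = new AndroidFrameConverter();
    private final FFmpegFrameRecorder recorder;
    private final int previewWidth;
    private final int previewHeight;

    public CameraPreview(FFmpegFrameRecorder recorder, int previewWidth, int previewHeight) {
        this.recorder = recorder;
        this.previewWidth = previewWidth;
        this.previewHeight = previewHeight;
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // The default preview format is NV21; AndroidFrameConverter turns it into a JavaCV Frame.
        Frame frame = converter.convert(data, previewWidth, previewHeight);
        try {
            recorder.record(frame);
        } catch (FFmpegFrameRecorder.Exception e) {
            e.printStackTrace();
        }
    }
}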

Remember to replace ip_destination with the IP of the PC or device where you want to send the stream. The port can be 8080, for example.

@Override
public Mat onCameraFrame(Mat mat)
{
    // Until the audio thread exists, just remember when capture started so timestamps line up.
    if (audioRecordRunnable == null) {
        startTime = System.currentTimeMillis();
        return mat;
    }
    if (recording && mat != null) {
        synchronized (semaphore) {
            try {
                Frame frame = converterToMat.convert(mat);
                // Keep the recorder's timestamp monotonic, in microseconds since startTime.
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(frame);
            } catch (FFmpegFrameRecorder.Exception e) {
                LogHelper.i(TAG, e.getMessage());
                e.printStackTrace();
            }
        }
    }
    return mat;
}

This method shows the implementation of the onCameraFrame method, which gets the Mat (picture) from the camera, converts it to a Frame, and records it with the FFmpegFrameRecorder object.

@Override
public void onSampleReady(ShortBuffer audioData)
{
    if (recorder == null) return;
    // Only record while actively recording and when samples are actually available.
    if (!recording || audioData == null) return;

    try {
        // Same microsecond timestamp handling as on the video side.
        long t = 1000 * (System.currentTimeMillis() - startTime);
        if (t > recorder.getTimestamp()) {
            recorder.setTimestamp(t);
        }
        LogHelper.e(TAG, "audioData: " + audioData);
        recorder.recordSamples(audioData);
    } catch (FFmpegFrameRecorder.Exception e) {
        LogHelper.v(TAG, e.getMessage());
        e.printStackTrace();
    }
}

The same goes for the audio: audioData is a ShortBuffer object that will be recorded by the FFmpegFrameRecorder.
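
The audioData samples would typically come from an AudioRecord loop running on its own thread (the audioRecordRunnable referenced above); a minimal sketch of that side, with illustrative class and field names, could be:

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

import java.nio.ShortBuffer;

class AudioRecordRunnable implements Runnable {
    static final int AUDIO_SAMPLE_RATE = 44100; // must match recorder.setSampleRate()
    volatile boolean running = true;

    @Override
    public void run() {
        // Requires the RECORD_AUDIO permission.
        int bufferSize = AudioRecord.getMinBufferSize(AUDIO_SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                AUDIO_SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        short[] samples = new short[bufferSize / 2];
        audioRecord.startRecording();
        while (running) {
            int read = audioRecord.read(samples, 0, samples.length);
            if (read > 0) {
                // Wrap the PCM data in a ShortBuffer, as expected by recordSamples().
                onSampleReady(ShortBuffer.wrap(samples, 0, read));
            }
        }
        audioRecord.stop();
        audioRecord.release();
    }

    void onSampleReady(ShortBuffer audioData) { /* forward to the recorder, as shown above */ }
}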

On the destination PC or device you can run the following command to receive the stream.

ffplay udp://ip_source:port

ip_source is the IP of the smartphone that is streaming the camera and mic. The port must be the same one used on the phone, 8080 in this example.

I created a solution in my github repository here: UDPAVStreamer.

Good luck
