Java - Xuggle - Best way to get a frame

Posted on 2024-11-10 11:21:22

I've been working with Xuggle for a week and I wrote a method to get a frame from a video, but if the video is long this method takes too much time:

public static void getFrameBySec(IContainer container, int videoStreamId, IStreamCoder videoCoder, IVideoResampler resampler, double sec) 
{ 
    BufferedImage javaImage = new BufferedImage(videoCoder.getWidth(), videoCoder.getHeight(), BufferedImage.TYPE_3BYTE_BGR); 
    IConverter converter = ConverterFactory.createConverter(javaImage, IPixelFormat.Type.BGR24); 
    IPacket packet = IPacket.make(); 
    while(container.readNextPacket(packet) >= 0) 
    { 
        if (packet.getStreamIndex() == videoStreamId) 
        { 
            IVideoPicture picture = IVideoPicture.make(videoCoder.getPixelType(), videoCoder.getWidth(), videoCoder.getHeight()); 
            int offset = 0; 
            while(offset < packet.getSize()) 
            { 
                int bytesDecoded = videoCoder.decodeVideo(picture, packet, offset); 
                if (bytesDecoded < 0) 
                    throw new RuntimeException("got error decoding video"); 
                offset += bytesDecoded; 
                if (picture.isComplete()) 
                { 
                    IVideoPicture newPic = picture; 
                    if (resampler != null) 
                    { 
                        newPic = IVideoPicture.make(resampler.getOutputPixelFormat(), picture.getWidth(), picture.getHeight()); 

                        if (resampler.resample(newPic, picture) < 0) 
                            throw new RuntimeException("could not resample video from"); 
                    } 
                    if (newPic.getPixelType() != IPixelFormat.Type.BGR24) 
                            throw new RuntimeException("could not decode video as RGB 32 bit data in"); 

                    javaImage = converter.toImage(newPic); 
                    try 
                    { 
                        double seconds = ((double)picture.getPts()) / Global.DEFAULT_PTS_PER_SECOND; 
                        if (seconds >= sec && seconds <= (sec +(Global.DEFAULT_PTS_PER_SECOND ))) 
                        { 

                            File file = new File(Config.MULTIMEDIA_PATH, "frame_" + sec + ".png"); 
                            ImageIO.write(javaImage, "png", file); 
                            System.out.printf("at elapsed time of %6.3f seconds wrote: %s \n", seconds, file); 
                            return; 
                        } 
                    } 
                    catch (Exception e) 
                    { 
                        e.printStackTrace(); 
                    } 
                } 
            } 
        } 
        else 
        { 
            // This packet isn't part of our video stream, so we just 
            // silently drop it. 
        } 
    } 
    converter.delete(); 
} 

Do you know a better way to do this?

Comments (2)

日久见人心 2024-11-17 11:21:22

Well, just from reading your code I can see a couple of optimizations that could be made.

One option: read through the entire file once and build an index of byte offsets and seconds. The function can then look up the byte offset for the given second, and you can decode the video at that offset and run the rest of your code.

Another option is to keep your approach of reading through the whole file each time, but instead of calling all of the resampler, newPic, and Java image-converter code for every frame, first check whether the seconds match. Only if they do, resample and convert the picture to be displayed.

So:

if (picture.isComplete())
{
    try
    {
        double seconds = ((double) picture.getPts()) / Global.DEFAULT_PTS_PER_SECOND;
        if (seconds >= sec && seconds <= (sec + Global.DEFAULT_PTS_PER_SECOND))
        {
            // resample the picture
            // convert it to a BufferedImage
            // write it out to the file
        }
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
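To make the first suggestion concrete, here is a minimal sketch of how such an index could be built, assuming the same Xuggle classes used in the question; buildSecondsIndex is a hypothetical helper name, and the sketch relies on IPacket.getTimeStamp() (in stream time-base units) and IPacket.getPosition() (the packet's byte offset, which may be -1 for containers that don't report it):

public static java.util.TreeMap<Double, Long> buildSecondsIndex(IContainer container, int videoStreamId)
{
    // map from presentation time in seconds to the packet's byte offset in the file
    java.util.TreeMap<Double, Long> secondsToOffset = new java.util.TreeMap<Double, Long>();
    IStream stream = container.getStream(videoStreamId);
    double timeBase = stream.getTimeBase().getDouble();

    IPacket packet = IPacket.make();
    while (container.readNextPacket(packet) >= 0)
    {
        if (packet.getStreamIndex() != videoStreamId)
            continue;
        double seconds = packet.getTimeStamp() * timeBase;
        secondsToOffset.put(seconds, packet.getPosition());
    }
    return secondsToOffset;
}

A lookup such as secondsToOffset.floorEntry(sec) then returns the nearest earlier packet; you still have to position the container back at that offset (for example by reopening the file, or by seeking) before decoding, which is roughly what the seekKeyFrame approach in the other answer automates.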

蓝眼泪 2024-11-17 11:21:22

Use the seekKeyFrame option. You can use this function to seek to any time in the video file (the time is in milliseconds).

double timeBase = 0;
int videoStreamId = -1;

private void seekToMs(IContainer container, long timeMs) {
    if(videoStreamId == -1) {
        for(int i = 0; i < container.getNumStreams(); i++) {
            IStream stream = container.getStream(i);
            IStreamCoder coder = stream.getStreamCoder();
            if (coder.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO) {
                videoStreamId = i;
                timeBase = stream.getTimeBase().getDouble();
                break;
            }
        }
    }

    long seekTo = (long) (timeMs/1000.0/timeBase);
    container.seekKeyFrame(videoStreamId, seekTo, IContainer.SEEK_FLAG_BACKWARDS);
}

From there, you can use the classic while(container.readNextPacket(packet) >= 0) loop to get the images out to files.

Note: it won't seek to the exact time, only an approximate one, so you'll still need to walk through the packets (but of course far fewer than before).
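Putting it together, the call site could look like the sketch below; sec, container, videoStreamId, videoCoder, and resampler are the same variables as in the question, and getFrameBySec is the question's method left unchanged:

// Jump close to the requested second first, then let the existing decode
// loop walk the few remaining packets up to the target frame.
seekToMs(container, (long) (sec * 1000));
getFrameBySec(container, videoStreamId, videoCoder, resampler, sec);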

