Why can I apply a grayscale effect with AVCaptureSession but not with AVAssetReader?

Posted on 2024-12-12 12:03:53


I'm working on an iPhone app. I want to apply some filters to a video from the library. After some research, I started with this post http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios and its ColorTracking sample source.
From this code, I can apply a grayscale filter in real time with AVCaptureSession. That works. But I want to do the same with a video from the library, so I use AVAssetReader to read the source video. As with AVCaptureSession, I can get a CVImageBufferRef. It has the same width, height, and data size as with AVCaptureSession, but my OpenGL view is always black.

Here is my code for capturing frames:

    - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {


    [self dismissModalViewControllerAnimated:NO];


    /// incoming video
    NSURL *videoURL = [info valueForKey:UIImagePickerControllerMediaURL];
    NSLog(@"Video : %@", videoURL);

    // AVURLAsset to read input movie (i.e. mov recorded to local storage)
    NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    AVURLAsset *inputAsset = [[AVURLAsset alloc] initWithURL:videoURL options:inputOptions];

    // Load the input asset tracks information
    [inputAsset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{


        NSError *error = nil;

        // Check status of "tracks", make sure they were loaded    
        AVKeyValueStatus tracksStatus = [inputAsset statusOfValueForKey:@"tracks" error:&error];
        if (tracksStatus != AVKeyValueStatusLoaded)
            // failed to load
            return;




        /* Read video samples from input asset video track */
        AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:inputAsset error:&error];

        NSMutableDictionary *outputSettings = [NSMutableDictionary dictionary];
        [outputSettings setObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]  forKey: (NSString*)kCVPixelBufferPixelFormatTypeKey];
        AVAssetReaderTrackOutput *readerVideoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[[inputAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] outputSettings:outputSettings];


        // Assign the tracks to the reader and start to read
        [reader addOutput:readerVideoTrackOutput];
        if ([reader startReading] == NO) {
            // Handle error
            NSLog(@"Error reading");
        }

        NSAutoreleasePool *pool = [NSAutoreleasePool new];
        while (reader.status == AVAssetReaderStatusReading) {

            CMSampleBufferRef sampleBufferRef = [readerVideoTrackOutput copyNextSampleBuffer];
            if (sampleBufferRef) {
                CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBufferRef);
                [self processNewCameraFrame:pixelBuffer];

                CMSampleBufferInvalidate(sampleBufferRef);
                CFRelease(sampleBufferRef);
            }
        }
        [pool release];

        NSLog(@"Finished");
    }];
}

Here is the code for processing a frame:

- (void)processNewCameraFrame:(CVImageBufferRef)cameraFrame {


    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
    int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

    NSLog(@"Size : %i %i %zu", bufferWidth, bufferHeight, CVPixelBufferGetDataSize(cameraFrame));


    // Create a new texture from the camera frame data, display that using the shaders
    glGenTextures(1, &videoFrameTexture);
    glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // This is necessary for non-power-of-two textures
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Using BGRA extension to pull in video frame data directly
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        NSLog(@"Error uploading texture. glError: 0x%04X", err);

    [self drawFrame];

    glDeleteTextures(1, &videoFrameTexture);

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}

And here is the code for drawing the frame in the OpenGL view:

- (void)drawFrame {    
    // Replace the implementation of this method to do your own custom drawing.
    static const GLfloat squareVertices[] = {
        -1.0f, -1.0f,
        1.0f, -1.0f,
        -1.0f,  1.0f,
        1.0f,  1.0f,
    };

    static const GLfloat textureVertices[] = {
        1.0f, 1.0f,
        1.0f, 0.0f,
        0.0f,  1.0f,
        0.0f,  0.0f,
    };

    [glView setDisplayFramebuffer];
    glUseProgram(grayScaleProgram);         

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, videoFrameTexture);

    // Update uniform values
    glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);   

    // Update attribute values.
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
    glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);



    [glView presentFramebuffer];
}

I haven't included any more code, but I can add it if that would help. Do you see any way to help me?

Thanks!


2 Answers

靑春怀旧 2024-12-19 12:03:54

When you say a grayscale filter, do you mean that you are using kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange video frames? If so, then I'm assuming that you bind the texture as GL_LUMINANCE. However, in your AVAssetReader you use the kCVPixelFormatType_32BGRA pixel format, and in the processing method you bind the texture as GL_RGBA.

I once stumbled upon such an incorrect combination of video-capture pixel format and texture binding and ended up with a black screen. Check the AVCaptureVideoDataOutput settings in your AVCaptureSession setup code; the pixel format and the texture binding should match.
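
To illustrate, here is a minimal sketch of a matched pair when you stay with BGRA end to end (your AVCaptureSession setup isn't shown in the question, so the output configuration below is an assumption):

// Assumed capture-side configuration: request 32BGRA frames...
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
videoOut.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                     forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];

// ...and the matching upload: a single interleaved plane bound as GL_BGRA.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));

If you request kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange instead, the upload has to change accordingly (e.g. a GL_LUMINANCE texture for the Y plane), which is exactly the mismatch described above.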

Edit:
Another possible reason for the black screen is the fact that an OpenGL context is not shared among threads. If you use a queue other than the main queue (dispatch_get_main_queue()) for the camera video frames, that means the method

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection  

is called on a background thread, from which you cannot update the UI.

You could try setting up the capture session with the main dispatch queue and see what happens.

AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
// set video settings, frame rate .. etc
[videoOut setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
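
There is no capture callback in the AVAssetReader path above, but the same constraint applies: the -loadValuesAsynchronouslyForKeys:completionHandler: block is not guaranteed to run on the main queue, so every GL call made from processNewCameraFrame: may be issued on a thread that has no current EAGLContext. A rough sketch of one way to adapt the reader loop from the question, assuming your context lives on the main thread:

// Decode on the background thread, but do the GL work on the main queue,
// where the EAGLContext was created. dispatch_sync is safe here only because
// this loop runs inside the asynchronous completion handler, i.e. off the
// main thread.
while (reader.status == AVAssetReaderStatusReading) {
    CMSampleBufferRef sampleBufferRef = [readerVideoTrackOutput copyNextSampleBuffer];
    if (sampleBufferRef) {
        dispatch_sync(dispatch_get_main_queue(), ^{
            CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBufferRef);
            [self processNewCameraFrame:pixelBuffer];
        });
        CMSampleBufferInvalidate(sampleBufferRef);
        CFRelease(sampleBufferRef);
    }
}
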
枕头说它不想醒 2024-12-19 12:03:54


It seems that you don't link your cameraFrame with your videoFrameTexture.
CVImageBufferRef image;
glBindTexture( CVOpenGLTextureGetTarget( image ), CVOpenGLTextureGetName( image ) );
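
Note that CVOpenGLTextureGetTarget / CVOpenGLTextureGetName come from the Mac OpenGL flavour of Core Video. On iOS, the closest equivalent (iOS 5 and later) is the CVOpenGLESTextureCache API, which maps a CVPixelBuffer to a GL ES texture without a manual glTexImage2D upload. A minimal sketch, assuming an EAGLContext named glContext and a 32BGRA pixel buffer like the one in the question:

#import <CoreVideo/CVOpenGLESTextureCache.h>

// Created once, e.g. where the OpenGL view is set up.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, glContext, NULL, &textureCache);

// Per frame: wrap the pixel buffer in a texture instead of calling glTexImage2D.
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                             cameraFrame, NULL, GL_TEXTURE_2D,
                                             GL_RGBA, bufferWidth, bufferHeight,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));

// ... draw, then release the per-frame texture and flush the cache ...
CFRelease(texture);
CVOpenGLESTextureCacheFlush(textureCache, 0);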
