Is it possible to draw a rectangle on top of an AVCaptureVideoPreviewLayer?

Posted 2024-10-07 14:36:07 · 2,433 characters · 4 views · 0 comments


I've been banging my head on this for a few days now.

I want to draw a rectangle on top of a CALayer (AVCaptureVideoPreviewLayer), which just happens to be the video feed from the camera on an iPhone4.

here's part of my setup;

    //(in function for initialization)

        -(void)initDevices {
           AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];

           AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
           captureOutput.alwaysDiscardsLateVideoFrames = YES; 
           captureOutput.minFrameDuration = CMTimeMake(1, 30);
           dispatch_queue_t queue;
           queue = dispatch_queue_create("cameraQueue", NULL);
           [captureOutput setSampleBufferDelegate:self queue:queue];
           dispatch_release(queue);

           NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
           NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; 
           NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
           [captureOutput setVideoSettings:videoSettings]; 
           self.captureSession = [[AVCaptureSession alloc] init];
           [self.captureSession addInput:captureInput];
           [self.captureSession addOutput:captureOutput];
           [self.captureSession setSessionPreset:AVCaptureSessionPresetHigh];   
           self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.captureSession];
           self.prevLayer.frame = CGRectMake(0, 0, 400, 400); 
           self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
           self.prevLayer.delegate = self;
           [self.view.layer addSublayer: self.prevLayer];
    }

    - (void)captureOutput:(AVCaptureOutput *)captureOutput 
      didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
      fromConnection:(AVCaptureConnection *)connection { 

       [self performSelectorOnMainThread:@selector(drawme) withObject:nil waitUntilDone:YES];
    }

    - (void)drawme {
        [self.prevLayer setNeedsDisplay];
    }

    //delegate function that draws to a CALayer
    - (void)drawLayer:(CALayer*)layer inContext:(CGContextRef)ctx {
     NSLog(@"hello layer!");
     CGContextSetRGBFillColor (ctx, 1, 0, 0, 1);
            CGContextFillRect (ctx, CGRectMake (0, 0, 200, 100 ));
    }

Is this even possible? From my current code, I get "hello layer" printing, but the camera feed has no filled rectangle.

Any help would be awesome. :)

Comments (2)

面如桃花 2024-10-14 14:36:07


I think you should add another layer on top of the AVCaptureVideoPreviewLayer; I've modified your example code for you. You can try it.

    //(in function for initialization)

    -(void)initDevices {
       AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];

       AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
       captureOutput.alwaysDiscardsLateVideoFrames = YES; 
       captureOutput.minFrameDuration = CMTimeMake(1, 30);
       dispatch_queue_t queue;
       queue = dispatch_queue_create("cameraQueue", NULL);
       [captureOutput setSampleBufferDelegate:self queue:queue];
       dispatch_release(queue);

       NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
       NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; 
       NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
       [captureOutput setVideoSettings:videoSettings]; 
       self.captureSession = [[AVCaptureSession alloc] init];
       [self.captureSession addInput:captureInput];
       [self.captureSession addOutput:captureOutput];
       [self.captureSession setSessionPreset:AVCaptureSessionPresetHigh];   
       self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.captureSession];
       self.prevLayer.frame = CGRectMake(0, 0, 400, 400); 
       self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
       [self.view.layer addSublayer:self.prevLayer];

       // Use a plain CALayer here: CAShapeLayer renders its own path and
       // may never call the delegate's drawLayer:inContext:.
       self.drawLayer = [CALayer layer];
       CGRect parentBox = [self.prevLayer frame];
       [self.drawLayer setFrame:parentBox];
       [self.drawLayer setDelegate:self];
       [self.drawLayer setNeedsDisplay];
       [self.prevLayer addSublayer:self.drawLayer];
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
    fromConnection:(AVCaptureConnection *)connection { 

    [self performSelectorOnMainThread:@selector(drawme) withObject:nil waitUntilDone:YES];
}

- (void)drawme {
    [self.drawLayer setNeedsDisplay];
}

//delegate function that draws to a CALayer
- (void)drawLayer:(CALayer*)layer inContext:(CGContextRef)ctx {
    NSLog(@"hello layer!");
    CGContextSetRGBFillColor (ctx, 1, 0, 0, 1);
    CGContextFillRect (ctx, CGRectMake (0, 0, 200, 100 ));
}
像你 2024-10-14 14:36:07


Or you can insert just an image: you obviously keep using AVCaptureVideoPreviewLayer for the video capture, then create another CALayer() and use layer.insertSublayer(..., above: ...) to place your "custom" layer above the video layer. By "custom" I just mean yet another CALayer with, let's say,

layer.contents = spinner.cgImage

Here are slightly more detailed instructions for Swift.
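A minimal Swift sketch of the overlay approach this answer describes. The view controller, `previewLayer`, and `spinner` names are assumptions for illustration, not from the original answer, and the capture-session setup (inputs, permissions) is omitted:

```swift
import UIKit
import AVFoundation

final class PreviewViewController: UIViewController {
    let session = AVCaptureSession()
    // The camera feed goes into an AVCaptureVideoPreviewLayer, as in the question.
    lazy var previewLayer = AVCaptureVideoPreviewLayer(session: session)
    // The "custom" layer is just another CALayer drawn above it.
    let overlayLayer = CALayer()

    override func viewDidLoad() {
        super.viewDidLoad()

        previewLayer.frame = view.bounds
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        // Insert the overlay above the video layer, as the answer suggests.
        overlayLayer.frame = CGRect(x: 0, y: 0, width: 200, height: 100)
        overlayLayer.backgroundColor = UIColor.red.withAlphaComponent(0.5).cgColor
        view.layer.insertSublayer(overlayLayer, above: previewLayer)

        // Or put an image into the layer instead of a solid fill:
        // overlayLayer.contents = spinner.cgImage  // `spinner` is a UIImage you supply
    }
}
```

Because the overlay is a sibling (or sublayer) of the preview layer rather than the preview layer itself, its contents are composited over the video without being overwritten by the capture session.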
