How can I dynamically change pixel colors in the iPhone camera preview window?

Posted on 2024-10-03 20:24:07

I am using UIImagePickerController to take photos on the iPhone. I'd like to adjust the photo on the fly. It appears that I can use UIImagePickerController to adjust the shape of the photo on the fly, but I am not able to find a way to change the colors on the fly, for example converting everything to black and white.

Thanks.


3 Answers

月野兔 2024-10-10 20:24:07

The best way to do this is with an AVCaptureSession object. I'm doing exactly what you're talking about in my free app "Live Effects Cam".

There are several code examples online that will help you implement this too. Here is a sample chunk of code that might help:

- (void) activateCameraFeed
    {
    videoSettings = nil;

#if USE_32BGRA
    pixelFormatCode = [[NSNumber alloc] initWithUnsignedInt:(unsigned int)kCVPixelFormatType_32BGRA];
    pixelFormatKey = [[NSString alloc] initWithString:(NSString *)kCVPixelBufferPixelFormatTypeKey];
    videoSettings = [[NSDictionary alloc] initWithObjectsAndKeys:pixelFormatCode, pixelFormatKey, nil]; 
#endif

    videoDataOutputQueue = dispatch_queue_create("com.jellyfilledstudios.ImageCaptureQueue", NULL);

    captureVideoOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureVideoOutput setAlwaysDiscardsLateVideoFrames:YES]; 
    [captureVideoOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
    [captureVideoOutput setVideoSettings:videoSettings];
    [captureVideoOutput setMinFrameDuration:kCMTimeZero];

    dispatch_release(videoDataOutputQueue); // AVCaptureVideoDataOutput uses dispatch_retain() & dispatch_release() so we can dispatch_release() our reference now

    if ( useFrontCamera )
        {
        currentCameraDeviceIndex = frontCameraDeviceIndex;
        cameraImageOrientation = UIImageOrientationLeftMirrored;
        }
    else
        {
        currentCameraDeviceIndex = backCameraDeviceIndex;
        cameraImageOrientation = UIImageOrientationRight;
        }

    selectedCamera = [[AVCaptureDevice devices] objectAtIndex:(NSUInteger)currentCameraDeviceIndex];

    captureVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:selectedCamera error:nil];

    captureSession = [[AVCaptureSession alloc] init];

    [captureSession beginConfiguration];

    [self setCaptureConfiguration];

    [captureSession addInput:captureVideoInput];
    [captureSession addOutput:captureVideoOutput];
    [captureSession commitConfiguration];
    [captureSession startRunning];
    }


// AVCaptureVideoDataOutputSampleBufferDelegate
// AVCaptureAudioDataOutputSampleBufferDelegate
//
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection 
    {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    if ( captureOutput==captureVideoOutput )
        {
        [self performImageCaptureFrom:sampleBuffer];
        }

    [pool drain];
    } 



- (void) performImageCaptureFrom:(CMSampleBufferRef)sampleBuffer
    {
    CVImageBufferRef imageBuffer;

    if ( CMSampleBufferGetNumSamples(sampleBuffer) != 1 )
        return;
    if ( !CMSampleBufferIsValid(sampleBuffer) )
        return;
    if ( !CMSampleBufferDataIsReady(sampleBuffer) )
        return;

    imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

    if ( CVPixelBufferGetPixelFormatType(imageBuffer) != kCVPixelFormatType_32BGRA )
        return;

    CVPixelBufferLockBaseAddress(imageBuffer,0); 

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t bufferSize = bytesPerRow * height;

    uint8_t *tempAddress = malloc( bufferSize );
    memcpy( tempAddress, baseAddress, bufferSize );

    baseAddress = tempAddress;

    //
    // Apply effects to the pixels stored in (uint32_t *)baseAddress
    //
    //
    // example: grayScale( (uint32_t *)baseAddress, width, height );
    // example: sepia( (uint32_t *)baseAddress, width, height );
    //

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef newContext = nil;

    if ( cameraDeviceSetting != CameraDeviceSetting640x480 )        // not an iPhone4 or iTouch 5th gen
        newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,  kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
    else
        newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    CGImageRef newImage = CGBitmapContextCreateImage( newContext );
    CGColorSpaceRelease( colorSpace );
    CGContextRelease( newContext );

    free( tempAddress );

    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    if ( newImage == nil )
        {
        return;
        }

    // To be able to display the CGImageRef newImage in your UI you will need to do it like this
    // because you are running on a different thread here…
    //
    [self performSelectorOnMainThread:@selector(newCameraImageNotification:) withObject:(id)newImage waitUntilDone:YES];
    }
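
The grayScale() helper referenced in the comments above is not included in the answer. Here is a minimal sketch of what such a pass over 32BGRA pixel data might look like; the function name comes from the comment, while the luma weights and the assumption that the buffer is tightly packed (bytesPerRow == width * 4) are mine:

// Minimal sketch of a grayScale() pass over 32BGRA pixels.
// Assumes a tightly packed buffer with no row padding.
static void grayScale( uint32_t *pixels, size_t width, size_t height )
    {
    for ( size_t i = 0; i < width * height; i++ )
        {
        uint8_t *p = (uint8_t *)&pixels[i];     // 32BGRA in memory: p[0]=B, p[1]=G, p[2]=R, p[3]=A

        // Rec. 601 luma weights; any reasonable weighting works here.
        uint8_t gray = (uint8_t)( 0.114f * p[0] + 0.587f * p[1] + 0.299f * p[2] );

        p[0] = gray;
        p[1] = gray;
        p[2] = gray;
        // p[3] (alpha) is left untouched
        }
    }

If the copied buffer has a row stride larger than width * 4, you would need to iterate row by row using bytesPerRow instead of the flat loop above.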

难忘№最初的完美 2024-10-10 20:24:07

You can overlay a view on the image and change the blending mode to match a black/white effect.

Check out Apple's QuartzDemo sample, specifically the Blending Modes example in that demo.
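
The QuartzDemo source isn't reproduced here, but the following is a minimal sketch of the blend-mode idea, assuming pre-ARC Objective-C; the BWOverlayView class and its image property are made-up names for illustration, not from QuartzDemo. Core Graphics blend modes apply within a single drawing context, so this sketch draws the image inside drawRect: and then fills over it with kCGBlendModeColor, which keeps the image's luminosity and discards its hue and saturation:

#import <UIKit/UIKit.h>

// Minimal sketch (not the QuartzDemo code): draw an image, then fill over it
// with a zero-saturation color in kCGBlendModeColor to get a black/white rendition.
@interface BWOverlayView : UIView
@property (nonatomic, retain) UIImage *image;   // hypothetical property for illustration
@end

@implementation BWOverlayView

@synthesize image;

- (void)drawRect:(CGRect)rect
    {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Draw the source image normally first.
    [self.image drawInRect:self.bounds];

    // kCGBlendModeColor keeps the luminosity of what is already in the context
    // and takes the hue/saturation of the fill color; black has no saturation,
    // so the result is a grayscale version of the image.
    CGContextSetBlendMode( context, kCGBlendModeColor );
    CGContextSetFillColorWithColor( context, [UIColor blackColor].CGColor );
    CGContextFillRect( context, self.bounds );
    }

- (void)dealloc
    {
    [image release];
    [super dealloc];
    }

@end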

微暖i 2024-10-10 20:24:07

Another way to do this would be to convert each frame using AVFoundation. I don't have a ton of experience with this, but the "Session 409 - Using the Camera with AVFoundation" video from WWDC 2010 and its sample projects should go a long way toward helping you with your problem.

That is, of course, if you're okay with using iOS 4 classes.
