High-quality scaling of UIImage

Posted on 2024-11-08 11:03:01

I need to scale the resolution of an image coming from a view layer in an iPhone application. The obvious way is to specify a scale factor in UIGraphicsBeginImageContextWithOptions, but any time the scale factor is not 1.0 then quality of the image goes to pot -- far more than would be expected from the loss of pixels.
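
For concreteness, here is a minimal sketch of that approach (names are illustrative), with the context's interpolation quality raised explicitly, which is one of the knobs that affects scaling quality:

import UIKit

// Sketch: redraw the image into a smaller context, explicitly asking
// Core Graphics for high-quality interpolation.
func scaledImage(_ image: UIImage, to size: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(size, false, 1.0)
    defer { UIGraphicsEndImageContext() }

    // Raise the interpolation quality; the default may contribute to the
    // soft results described above.
    UIGraphicsGetCurrentContext()?.interpolationQuality = .high

    image.draw(in: CGRect(origin: .zero, size: size))
    return UIGraphicsGetImageFromCurrentImageContext()
}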

I've tried several other scaling techniques, but they all seem to revolve around CGContext stuff and all appear to do the same thing.

Simply changing image "size" (without changing the dot resolution) isn't sufficient, mostly because that info seems to be discarded very quickly by other hands in the pipeline (the image will be converted to a JPG and emailed).

Is there any other way to scale an image on iPhone?


Comments (4)

莫相离 2024-11-15 11:03:01

Swift extension:

extension UIImage {

    // returns a scaled version of the image
    func imageScaledToSize(_ size: CGSize, isOpaque: Bool) -> UIImage {

        // begin a context of the desired size
        // (a scale of 0.0 means "use the main screen's scale")
        UIGraphicsBeginImageContextWithOptions(size, isOpaque, 0.0)

        // draw the image in a rect with zero origin and the size of the context
        let imageRect = CGRect(origin: .zero, size: size)
        draw(in: imageRect)

        // get the scaled image, close the context and return the image
        // (the force unwrap is safe: an image context is current)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        return scaledImage
    }
}

Example:

aUIImageView.image = aUIImage.imageScaledToSize(aUIImageView.bounds.size, isOpaque: false)

Set isOpaque to true if the image has no alpha channel: opaque drawing performs better.
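
On iOS 10 and later, the same helper can be built on UIGraphicsImageRenderer, which manages the context and its format for you. This is a sketch under that assumption; the method name imageScaledWithRenderer is illustrative, not from the original answer:

extension UIImage {

    // Sketch: the same scaling helper using UIGraphicsImageRenderer (iOS 10+).
    func imageScaledWithRenderer(to size: CGSize, isOpaque: Bool) -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.opaque = isOpaque   // same performance hint as above

        let renderer = UIGraphicsImageRenderer(size: size, format: format)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: size))
        }
    }
}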

梦断已成空 2024-11-15 11:03:01

About the UIImage resize problem, this post gives many ways to handle UIImage objects. UIImage also has some orientation problems that need to be fixed; this post and another one address that.


-(UIImage*)resizedImageToSize:(CGSize)dstSize
{
    CGImageRef imgRef = self.CGImage;
    // the below values are regardless of orientation : for UIImages from Camera, width>height (landscape)
    CGSize  srcSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef)); // not equivalent to self.size (which depends on the imageOrientation)!

    /* Don't resize if we already meet the required destination size. */
    if (CGSizeEqualToSize(srcSize, dstSize)) {
        return self;
    }

    CGFloat scaleRatio = dstSize.width / srcSize.width;

    // Handle orientation problem of UIImage
    UIImageOrientation orient = self.imageOrientation;
    CGAffineTransform transform = CGAffineTransformIdentity;
    switch(orient) {

        case UIImageOrientationUp: //EXIF = 1
            transform = CGAffineTransformIdentity;
            break;

        case UIImageOrientationUpMirrored: //EXIF = 2
            transform = CGAffineTransformMakeTranslation(srcSize.width, 0.0);
            transform = CGAffineTransformScale(transform, -1.0, 1.0);
            break;

        case UIImageOrientationDown: //EXIF = 3
            transform = CGAffineTransformMakeTranslation(srcSize.width, srcSize.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;

        case UIImageOrientationDownMirrored: //EXIF = 4
            transform = CGAffineTransformMakeTranslation(0.0, srcSize.height);
            transform = CGAffineTransformScale(transform, 1.0, -1.0);
            break;

        case UIImageOrientationLeftMirrored: //EXIF = 5
            dstSize = CGSizeMake(dstSize.height, dstSize.width);
            transform = CGAffineTransformMakeTranslation(srcSize.height, srcSize.width);
            transform = CGAffineTransformScale(transform, -1.0, 1.0);
            transform = CGAffineTransformRotate(transform, 3.0 * M_PI_2);
            break;  

        case UIImageOrientationLeft: //EXIF = 6  
            dstSize = CGSizeMake(dstSize.height, dstSize.width);
            transform = CGAffineTransformMakeTranslation(0.0, srcSize.width);
            transform = CGAffineTransformRotate(transform, 3.0 * M_PI_2);
            break;  

        case UIImageOrientationRightMirrored: //EXIF = 7  
            dstSize = CGSizeMake(dstSize.height, dstSize.width);
            transform = CGAffineTransformMakeScale(-1.0, 1.0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;  

        case UIImageOrientationRight: //EXIF = 8  
            dstSize = CGSizeMake(dstSize.height, dstSize.width);
            transform = CGAffineTransformMakeTranslation(srcSize.height, 0.0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;  

        default:  
            [NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];  

    }  

    /////////////////////////////////////////////////////////////////////////////
    // The actual resize: draw the image on a new context, applying a transform matrix
    UIGraphicsBeginImageContextWithOptions(dstSize, NO, self.scale);

    CGContextRef context = UIGraphicsGetCurrentContext();
    if (!context) {
        return nil;
    }

    if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
        CGContextScaleCTM(context, -scaleRatio, scaleRatio);
        CGContextTranslateCTM(context, -srcSize.height, 0);
    } else {  
        CGContextScaleCTM(context, scaleRatio, -scaleRatio);
        CGContextTranslateCTM(context, 0, -srcSize.height);
    }

    CGContextConcatCTM(context, transform);

    // we use srcSize (and not dstSize) as the size to specify is in user space (and we use the CTM to apply a scaleRatio)
    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, srcSize.width, srcSize.height), imgRef);
    UIImage* resizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return resizedImage;
}
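
Note that scaleRatio above is computed from the width alone, so the dstSize you pass in should preserve the source aspect ratio. A small Swift helper for computing such a size from a bounding box might look like this (a sketch; the function name is mine):

import UIKit

// Computes a destination size that fits sourceSize inside boundingSize
// while preserving the aspect ratio.
func aspectFitSize(for sourceSize: CGSize, in boundingSize: CGSize) -> CGSize {
    let ratio = min(boundingSize.width / sourceSize.width,
                    boundingSize.height / sourceSize.height)
    return CGSize(width: floor(sourceSize.width * ratio),
                  height: floor(sourceSize.height * ratio))
}
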
国际总奸 2024-11-15 11:03:01

I came up with this algorithm to create a half-size image:



// Data-provider release callback so the pixel buffer is freed once Core
// Graphics no longer needs it (see CGDataProviderCreateWithData below).
static void releasePixelBuffer(void *info, const void *data, size_t size) {
    free((void *) data);
}

- (UIImage*) halveImage:(UIImage*)sourceImage {

    // Compute the target size (in pixels, taken from the CGImage so that
    // images whose scale is not 1.0 are handled correctly)
    CGSize sourceSize = CGSizeMake(CGImageGetWidth(sourceImage.CGImage), CGImageGetHeight(sourceImage.CGImage));
    CGSize targetSize;
    targetSize.width = (int) (sourceSize.width / 2);
    targetSize.height = (int) (sourceSize.height / 2);

    // Access the source data bytes (CFBridgingRelease transfers ownership of
    // the copied buffer to NSData, so it is not leaked)
    NSData* sourceData = (NSData*) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(sourceImage.CGImage)));
    unsigned char* sourceBytes = (unsigned char *)[sourceData bytes];

    // Some info we'll need later
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(sourceImage.CGImage);
    int bitsPerComponent = CGImageGetBitsPerComponent(sourceImage.CGImage);
    int bitsPerPixel = CGImageGetBitsPerPixel(sourceImage.CGImage);
    int __attribute__((unused)) bytesPerPixel = bitsPerPixel / 8;
    int sourceBytesPerRow = CGImageGetBytesPerRow(sourceImage.CGImage);
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(sourceImage.CGImage);

    assert(bytesPerPixel == 4);
    assert(bitsPerComponent == 8);

    // Bytes per row is (apparently) rounded to some boundary
    assert(sourceBytesPerRow >= ((int) sourceSize.width) * 4);
    assert([sourceData length] == ((int) sourceSize.height) * sourceBytesPerRow);

    // Allocate target data bytes
    int targetBytesPerRow = ((int) targetSize.width) * 4;
    // Algorithm is happier if bytes/row is a multiple of 16
    targetBytesPerRow = (targetBytesPerRow + 15) & 0xFFFFFFF0;
    int targetBytesSize = ((int) targetSize.height) * targetBytesPerRow;
    unsigned char* targetBytes = (unsigned char*) malloc(targetBytesSize);
    UIImage* targetImage = nil;

    // Copy source to target, averaging 4 pixels into 1
    for (int row = 0; row < targetSize.height; row++) {
        unsigned char* sourceRowStart = sourceBytes + (2 * row * sourceBytesPerRow);
        unsigned char* targetRowStart = targetBytes + (row * targetBytesPerRow);
        for (int column = 0; column < targetSize.width; column++) {

            int sourceColumnOffset = 2 * column * 4;
            int targetColumnOffset = column * 4;

            unsigned char* sourcePixel = sourceRowStart + sourceColumnOffset;
            unsigned char* nextRowSourcePixel = sourcePixel + sourceBytesPerRow;
            unsigned char* targetPixel = targetRowStart + targetColumnOffset;

            uint32_t* sourceWord = (uint32_t*) sourcePixel;
            uint32_t* nextRowSourceWord = (uint32_t*) nextRowSourcePixel;
            uint32_t* targetWord = (uint32_t*) targetPixel;

            uint32_t sourceWord0 = sourceWord[0];
            uint32_t sourceWord1 = sourceWord[1];
            uint32_t sourceWord2 = nextRowSourceWord[0];
            uint32_t sourceWord3 = nextRowSourceWord[1];

            // This apparently bizarre sequence divides each channel byte by 4 (masking first so bits don't bleed between channels) so that when added together we'll get an average.  We do lose the least significant bits this way, and thus about half a bit of resolution.
            sourceWord0 = (sourceWord0 & 0xFCFCFCFC) >> 2;
            sourceWord1 = (sourceWord1 & 0xFCFCFCFC) >> 2;
            sourceWord2 = (sourceWord2 & 0xFCFCFCFC) >> 2;
            sourceWord3 = (sourceWord3 & 0xFCFCFCFC) >> 2;

            uint32_t resultWord = sourceWord0 + sourceWord1 + sourceWord2 + sourceWord3;
            targetWord[0] = resultWord;
        }
    }

    // Convert the bits to an image.  The release callback above frees
    // targetBytes once the data provider is no longer referenced.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, targetBytes, targetBytesSize, releasePixelBuffer);
    CGImageRef targetRef = CGImageCreate(targetSize.width, targetSize.height, bitsPerComponent, bitsPerPixel, targetBytesPerRow, colorSpace, bitmapInfo, provider, NULL, FALSE, kCGRenderingIntentDefault);
    targetImage = [UIImage imageWithCGImage:targetRef];

    // Clean up.  colorSpace came from CGImageGetColorSpace (a "Get" rule
    // reference we don't own), so it must NOT be released here.
    CGImageRelease(targetRef);
    CGDataProviderRelease(provider);

    // Return result
    return targetImage;
}

I tried just taking every other pixel of every other row, instead of averaging, but it resulted in an image about as bad as the default algorithm.
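
For comparison, the same 2:1 downscale can be delegated to Accelerate's vImage, which performs high-quality resampling in vectorized code. This is a sketch, not the poster's code: the function name is mine, and it assumes an 8-bit-per-channel, 4-channel bitmap (the same layout the hand-rolled version asserts):

import Accelerate
import UIKit

// Sketch: downscale a CGImage to half size with vImage's high-quality resampler.
func halveImageWithVImage(_ source: CGImage) -> CGImage? {
    guard let data = source.dataProvider?.data,
          let srcPtr = CFDataGetBytePtr(data),
          let colorSpace = source.colorSpace else { return nil }

    var src = vImage_Buffer(data: UnsafeMutableRawPointer(mutating: srcPtr),
                            height: vImagePixelCount(source.height),
                            width: vImagePixelCount(source.width),
                            rowBytes: source.bytesPerRow)

    // Let Core Graphics own the destination pixels; vImage writes into them.
    guard let ctx = CGContext(data: nil,
                              width: source.width / 2, height: source.height / 2,
                              bitsPerComponent: 8, bytesPerRow: 0,
                              space: colorSpace,
                              bitmapInfo: source.bitmapInfo.rawValue),
          let dstData = ctx.data else { return nil }

    var dst = vImage_Buffer(data: dstData,
                            height: vImagePixelCount(ctx.height),
                            width: vImagePixelCount(ctx.width),
                            rowBytes: ctx.bytesPerRow)

    guard vImageScale_ARGB8888(&src, &dst, nil,
                               vImage_Flags(kvImageHighQualityResampling))
            == vImage_Error(kvImageNoError) else { return nil }

    return ctx.makeImage()
}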

柠北森屋 2024-11-15 11:03:01

I suppose you could use something like ImageMagick. Apparently it's been successfully ported to the iPhone: http://www.imagemagick.org/discourse-server/viewtopic.php?t=14089

I've always been satisfied with the quality of images scaled by this library, so I think you'll be satisfied with the result.
