Watermark on a video recorded on iPhone

Posted 2024-12-01 17:15:34

In my application I need to capture a video and put a watermark on it. The watermark should be text (a timestamp and notes). I saw some code that uses the QTKit framework; however, I read that QTKit is not available on the iPhone.

Thanks in advance.

6 Answers

戏蝶舞 2024-12-08 17:15:34

Adding a watermark is quite simple. You just need a CALayer and an AVVideoCompositionCoreAnimationTool. The code below can be copied and assembled in the same order; I have inserted some comments in between for better understanding.

Let's assume you have already recorded the video, so we will create the AVURLAsset first:

AVURLAsset* videoAsset = [[AVURLAsset alloc]initWithURL:outputFileURL options:nil];
AVMutableComposition* mixComposition = [AVMutableComposition composition];

AVMutableCompositionTrack *compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo  preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration) 
                               ofTrack:clipVideoTrack
                                atTime:kCMTimeZero error:nil];

[compositionVideoTrack setPreferredTransform:clipVideoTrack.preferredTransform]; // same track we fetched above

With just this code you would be able to export the video, but we want to add the watermark layer first. Note that some of the code may seem redundant, but it is necessary for everything to work.

First we create the layer with the watermark image:

UIImage *myImage = [UIImage imageNamed:@"icon.png"];
CALayer *aLayer = [CALayer layer];
aLayer.contents = (id)myImage.CGImage;
aLayer.frame = CGRectMake(5, 25, 57, 57); //Needed for proper display. We are using the app icon (57x57). If you use 0,0 you will not see it
aLayer.opacity = 0.65; //Feel free to alter the alpha here

If we don't want an image and want text instead:

// NOTE: videoSize is defined in the next snippet; declare it before this
// block (or move this block below it), otherwise this will not compile here.
CATextLayer *titleLayer = [CATextLayer layer];
titleLayer.string = @"Text goes here";
titleLayer.font = @"Helvetica";
titleLayer.fontSize = videoSize.height / 6;
//titleLayer.shadowOpacity = 0.5; // optional: uncomment for a drop shadow
titleLayer.alignmentMode = kCAAlignmentCenter;
titleLayer.bounds = CGRectMake(0, 0, videoSize.width, videoSize.height / 6); //You may need to adjust this for proper display
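
Since the question asks for a timestamp plus notes, here is a minimal sketch of building that string, shown in Swift for brevity (an Objective-C version is a direct translation); the formatter pattern, the `notes` input, and the sizing divisors are assumptions, not part of the original answer:

    import QuartzCore
    import UIKit

    // Minimal sketch: a text layer showing the current time plus a note.
    // `videoSize` and `notes` are assumed inputs.
    func makeTimestampLayer(videoSize: CGSize, notes: String) -> CATextLayer {
        let formatter = DateFormatter()
        formatter.dateFormat = "yyyy-MM-dd HH:mm:ss" // adjust the format to taste

        let titleLayer = CATextLayer()
        titleLayer.string = "\(formatter.string(from: Date()))  \(notes)"
        titleLayer.font = "Helvetica" as CFString
        titleLayer.fontSize = videoSize.height / 16
        titleLayer.alignmentMode = .center
        titleLayer.foregroundColor = UIColor.white.cgColor
        titleLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height / 12)
        return titleLayer
    }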

The following code stacks the layers in the proper order:

CGSize videoSize = [clipVideoTrack naturalSize]; // use the track's size ([videoAsset naturalSize] is deprecated)
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];   
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
videoLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:aLayer];
[parentLayer addSublayer:titleLayer]; //ONLY IF WE ADDED TEXT

Now we create the video composition and add the instruction that inserts the layer:

AVMutableVideoComposition* videoComp = [[AVMutableVideoComposition videoComposition] retain];
videoComp.renderSize = videoSize;
videoComp.frameDuration = CMTimeMake(1, 30);
videoComp.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];

/// instruction
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mixComposition duration]);
AVAssetTrack *videoTrack = [[mixComposition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComp.instructions = [NSArray arrayWithObject: instruction];

And now we are ready to export:

_assetExport = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetMediumQuality];//AVAssetExportPresetPassthrough   
_assetExport.videoComposition = videoComp;

NSString* videoName = @"mynewwatermarkedvideo.mov";

NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:videoName];
NSURL    *exportUrl = [NSURL fileURLWithPath:exportPath];

if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath]) 
{
    [[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}

_assetExport.outputFileType = AVFileTypeQuickTimeMovie; 
_assetExport.outputURL = exportUrl;
_assetExport.shouldOptimizeForNetworkUse = YES;

[strRecordedFilename setString: exportPath]; // assumes strRecordedFilename is an NSMutableString ivar tracking the output path

[_assetExport exportAsynchronouslyWithCompletionHandler:
 ^(void ) {
     [_assetExport release];
     //YOUR FINALIZATION CODE HERE
 }       
 ];   

[videoAsset release]; // (this snippet creates no audioAsset; release one only if you added audio)
红焚 2024-12-08 17:15:34

Use AVFoundation. I would suggest grabbing frames with AVCaptureVideoDataOutput, then overlaying each captured frame with the watermark image, and finally writing the captured and processed frames to a file using AVAssetWriter.

Search around Stack Overflow; there are a ton of fantastic examples detailing how to do each of the things I have mentioned. I haven't seen any that give code for exactly the effect you want, but you should be able to mix and match pretty easily.

EDIT:

Take a look at these links:

iPhone: AVCaptureSession capture output crashing (AVCaptureVideoDataOutput) - this post might be helpful just by nature of containing relevant code.

AVCaptureVideoDataOutput will return frames as CMSampleBufferRefs.
Convert them to CGImageRefs using this code:

- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    // Assumes the capture output is configured for kCVPixelFormatType_32BGRA.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);       // Lock the image buffer

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); // GetBaseAddressOfPlane returns NULL for non-planar (BGRA) buffers
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);

    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    return newImage; // caller is responsible for CGImageRelease()
}

From there you would convert to a UIImage,

  UIImage *img = [UIImage imageWithCGImage:yourCGImage];  

Then use

[img drawInRect:CGRectMake(x, y, width, height)]; 

to draw the frame to a context, draw a PNG of the watermark over it, and then add the processed images to your output video using AVAssetWriter. I would suggest adding them in real time so you're not filling up memory with tons of UIImages.
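
As a rough illustration of that overlay step, here is a minimal Swift sketch (the Objective-C version is a direct translation); the frame and watermark images and the corner placement are assumed inputs:

    import UIKit

    // Minimal sketch: composite a watermark image over one decoded frame.
    // `frame` and `watermark` are assumed inputs.
    func watermarked(frame: UIImage, watermark: UIImage) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: frame.size)
        return renderer.image { _ in
            frame.draw(in: CGRect(origin: .zero, size: frame.size))
            let inset: CGFloat = 16
            let rect = CGRect(x: frame.size.width - watermark.size.width - inset,
                              y: frame.size.height - watermark.size.height - inset,
                              width: watermark.size.width,
                              height: watermark.size.height)
            watermark.draw(in: rect, blendMode: .normal, alpha: 0.65) // semi-transparent watermark
        }
    }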

How do I export UIImage array as a movie? - this post shows how to add the UIImages you have processed to a video for a given duration.
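
For the writing side, a minimal AVAssetWriter sketch might look like the Swift below; the pixel-buffer drawing and the crude back-pressure loop are simplified assumptions, so see the linked post for a complete treatment:

    import AVFoundation
    import CoreVideo

    // Minimal sketch: append processed frames to a QuickTime movie.
    // `frames`, `size`, `fps`, and `url` are assumed inputs.
    func writeFrames(_ frames: [CGImage], size: CGSize, fps: Int32, to url: URL) throws {
        let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: Int(size.width),
            AVVideoHeightKey: Int(size.height)
        ])
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input,
            sourcePixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
                kCVPixelBufferWidthKey as String: Int(size.width),
                kCVPixelBufferHeightKey as String: Int(size.height)
            ])
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

        for (index, image) in frames.enumerated() {
            while !input.isReadyForMoreMediaData { usleep(10_000) } // crude back-pressure
            var buffer: CVPixelBuffer?
            CVPixelBufferPoolCreatePixelBuffer(nil, adaptor.pixelBufferPool!, &buffer)
            guard let pixelBuffer = buffer else { continue }

            // Draw the frame into the pixel buffer with a bitmap context.
            CVPixelBufferLockBaseAddress(pixelBuffer, [])
            let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                    width: Int(size.width), height: Int(size.height),
                                    bitsPerComponent: 8,
                                    bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                    space: CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                        | CGBitmapInfo.byteOrder32Little.rawValue)
            context?.draw(image, in: CGRect(origin: .zero, size: size))
            CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

            // Presentation time of frame `index` is index / fps seconds.
            adaptor.append(pixelBuffer, withPresentationTime: CMTime(value: CMTimeValue(index), timescale: fps))
        }

        input.markAsFinished()
        writer.finishWriting { /* NB: asynchronous; report completion here in real code */ }
    }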

This should get you well on your way to watermarking your videos. Remember to practice good memory management, because leaking images that are coming in at 20-30fps is a great way to crash the app.

北城孤痞 2024-12-08 17:15:34

The answer given by @Julio already works fine for Objective-C. Here is the same code base for Swift 3.0:

Watermark & generating a square or cropped video, like Instagram

Get the output file path from the Documents directory and create the AVURLAsset:

    //output file
    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
    let outputPath = documentsURL?.appendingPathComponent("squareVideo.mov")
    if FileManager.default.fileExists(atPath: (outputPath?.path)!) {
        do {
           try FileManager.default.removeItem(atPath: (outputPath?.path)!)
        }
        catch {
            print ("Error deleting file")
        }
    }



    //input file
    let asset = AVAsset(url: filePath)
    print(asset)
    // Note: this composition is never passed to the exporter below; the
    // video composition is applied to the asset directly.
    let composition = AVMutableComposition()
    composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

    //input clip
    let clipVideoTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]

Create the layer with the watermark image:

    //adding the image layer
    let imglogo = UIImage(named: "video_button")
    let watermarkLayer = CALayer()
    watermarkLayer.contents = imglogo?.cgImage
    watermarkLayer.frame = CGRect(x: 5, y: 25 ,width: 57, height: 57)
    watermarkLayer.opacity = 0.85

Create the layer with Text as watermark instead of image:

    let textLayer = CATextLayer()
    textLayer.string = "Nodat"
    textLayer.foregroundColor = UIColor.red.cgColor
    textLayer.font = UIFont.systemFont(ofSize: 50) // note: CATextLayer ignores the UIFont's point size; set textLayer.fontSize if the text renders at the wrong size
    textLayer.alignmentMode = kCAAlignmentCenter
    textLayer.bounds = CGRect(x: 5, y: 25, width: 100, height: 20)

Add the layers over the video in the proper order for the watermark:

    let videoSize = clipVideoTrack.naturalSize
    let parentlayer = CALayer()
    let videoLayer = CALayer()

    parentlayer.frame = CGRect(x: 0, y: 0, width: videoSize.height, height: videoSize.height)
    videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.height, height: videoSize.height)
    parentlayer.addSublayer(videoLayer)
    parentlayer.addSublayer(watermarkLayer)
    parentlayer.addSublayer(textLayer) //for text layer only

Crop the video to a square format, 300×300 in size:

    //make it square
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: 300, height: 300) //change it as per your needs
    videoComposition.frameDuration = CMTimeMake(1, 30)
    videoComposition.renderScale = 1.0

    //Magic line for adding watermark to the video
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videoLayer], in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30))

Rotate to Portrait

    //rotate to portrait
    let transformer = AVMutableVideoCompositionLayerInstruction(assetTrack: clipVideoTrack)
    let t1 = CGAffineTransform(translationX: clipVideoTrack.naturalSize.height, y: -(clipVideoTrack.naturalSize.width - clipVideoTrack.naturalSize.height) / 2)
    let t2: CGAffineTransform = t1.rotated(by: .pi/2)
    let finalTransform: CGAffineTransform = t2
    transformer.setTransform(finalTransform, at: kCMTimeZero)
    instruction.layerInstructions = [transformer]
    videoComposition.instructions = [instruction]

Finally, export the video:

    let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetMediumQuality)
    exporter?.outputFileType = AVFileTypeQuickTimeMovie
    exporter?.outputURL = outputPath
    exporter?.videoComposition = videoComposition

    exporter?.exportAsynchronously {
        if exporter?.status == .completed {
            print("Export complete")
            DispatchQueue.main.async {
                completion(outputPath)
            }
            return
        } else if exporter?.status == .failed {
            print("Export failed - \(String(describing: exporter?.error))")
        }
        completion(nil)
        return
    }

This will export the video at a square size, with the watermark as text or an image.

Thanks

寒江雪… 2024-12-08 17:15:34

Simply download the code and use it. It is on the Apple developer documentation page:

http://developer.apple.com/library/ios/#samplecode/AVSimpleEditoriOS/Listings/AVSimpleEditor_AVSERotateCommand_m.html

恏ㄋ傷疤忘ㄋ疼 2024-12-08 17:15:34

While adding a CALayer to a video using the Swift example code from mikitamanko's blog, I made a few small changes to fix the following error:

Error Domain=AVFoundationErrorDomain Code=-11841 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The video could not be composed., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x2830559b0 {Error Domain=NSOSStatusErrorDomain Code=-17390 "(null)"}}

The solution is to use the composition's video track instead of the original video track when setting the layer instruction, as in the following Swift 5 code:

    static func addSketchLayer(url: URL, sketchLayer: CALayer, block: @escaping (Result<URL, VideoExportError>) -> Void) {
        let composition = AVMutableComposition()
        let vidAsset = AVURLAsset(url: url)
        
        let videoTrack = vidAsset.tracks(withMediaType: AVMediaType.video)[0]
        let duration = vidAsset.duration
        let vid_timerange = CMTimeRangeMake(start: CMTime.zero, duration: duration)
        
        let videoRect = CGRect(origin: .zero, size: videoTrack.naturalSize)
        let transformedVideoRect = videoRect.applying(videoTrack.preferredTransform)
        let size = transformedVideoRect.size
                
        let compositionvideoTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))!
        
        try? compositionvideoTrack.insertTimeRange(vid_timerange, of: videoTrack, at: CMTime.zero)
        compositionvideoTrack.preferredTransform = videoTrack.preferredTransform
        
        let videolayer = CALayer()
        videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        videolayer.opacity = 1.0
        sketchLayer.contentsScale = 1
        
        let parentlayer = CALayer()
        parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        sketchLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        parentlayer.addSublayer(videolayer)
        parentlayer.addSublayer(sketchLayer)
        
        let layercomposition = AVMutableVideoComposition()
        layercomposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
        layercomposition.renderScale = 1.0
        layercomposition.renderSize = CGSize(width: size.width, height: size.height)

        layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videolayer], in: parentlayer)
        
        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: composition.duration)
        let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionvideoTrack)
        layerinstruction.setTransform(compositionvideoTrack.preferredTransform, at: CMTime.zero)
        instruction.layerInstructions = [layerinstruction] as [AVVideoCompositionLayerInstruction]
        layercomposition.instructions = [instruction] as [AVVideoCompositionInstructionProtocol]
        
        let compositionAudioTrack:AVMutableCompositionTrack? = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
        let audioTracks = vidAsset.tracks(withMediaType: AVMediaType.audio)
        for audioTrack in audioTracks {
            try? compositionAudioTrack?.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: CMTime.zero)
        }
        
        let movieDestinationUrl = URL(fileURLWithPath: NSTemporaryDirectory() + "/exported.mp4")
        try? FileManager().removeItem(at: movieDestinationUrl)
        
        let assetExport = AVAssetExportSession(asset: composition, presetName:AVAssetExportPresetHighestQuality)!
        assetExport.outputFileType = AVFileType.mp4
        assetExport.outputURL = movieDestinationUrl
        assetExport.videoComposition = layercomposition
        
        assetExport.exportAsynchronously(completionHandler: {
            switch assetExport.status {
            case .failed:
                print(assetExport.error ?? "unknown error")
                block(.failure(.failed))
            case .cancelled:
                print(assetExport.error ?? "unknown error")
                block(.failure(.canceled))
            default:
                block(.success(movieDestinationUrl))
            }
        })
    }

enum VideoExportError: Error {
    case failed
    case canceled
}

Note that, according to "AVFoundation Crash on Exporting Video With Text Layer", this code crashes only on the Simulator but works on a real device.

Also note that the width and height are used after applying the preferred video transform.
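
For instance (a hypothetical portrait recording), applying the transform is what swaps the stored dimensions into display dimensions:

    import CoreGraphics

    // A portrait iPhone clip is typically stored as 1920x1080 with a
    // 90-degree preferredTransform; applying it yields 1080x1920.
    let naturalSize = CGSize(width: 1920, height: 1080)
    let transform = CGAffineTransform(rotationAngle: .pi / 2) // stand-in for videoTrack.preferredTransform
    let size = CGRect(origin: .zero, size: naturalSize).applying(transform).size
    print(size) // (1080.0, 1920.0)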

遗弃M 2024-12-08 17:15:34

Here's an example in Swift 3 of how to insert both animated (an array of images/slides/frames) and static image watermarks into a recorded video.

It uses CAKeyframeAnimation to animate the frames, and AVMutableCompositionTrack, AVAssetExportSession, and AVMutableVideoComposition together with AVMutableVideoCompositionInstruction to combine everything.
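
As a rough sketch of the keyframe part (the frame images, overlay geometry, and timing below are assumptions), the animated overlay layer might be built like this in Swift:

    import AVFoundation
    import QuartzCore

    // Minimal sketch: an overlay layer whose contents cycle through a set
    // of frames via CAKeyframeAnimation. `frames` and `videoDuration` are
    // assumed inputs.
    func makeAnimatedWatermark(frames: [CGImage], videoDuration: CFTimeInterval) -> CALayer {
        let overlay = CALayer()
        overlay.frame = CGRect(x: 16, y: 16, width: 120, height: 120)

        let animation = CAKeyframeAnimation(keyPath: "contents")
        animation.values = frames             // one keyframe per image
        animation.calculationMode = .discrete // jump between frames, no cross-fade
        animation.duration = videoDuration
        animation.repeatCount = 1
        animation.isRemovedOnCompletion = false
        // With AVVideoCompositionCoreAnimationTool the animation runs on the
        // video timeline, so it must start at AVCoreAnimationBeginTimeAtZero.
        animation.beginTime = AVCoreAnimationBeginTimeAtZero
        overlay.add(animation, forKey: "contents")
        return overlay
    }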
