How do I use AVAssetReader and AVAssetWriter simultaneously with multiple tracks (audio and video)?

Posted 2024-10-20 17:10:45


I know how to use AVAssetReader and AVAssetWriter, and have successfully used them to grab a video track from one movie and transcode it into another. However, I'd like to do this with audio as well. Do I have to create an AVAssetExportSession after I'm done with the initial transcode, or is there some way to switch between tracks in the midst of a writing session? I'd hate to have to deal with the overhead of an AVAssetExportSession.

I ask because the pull-style method - while ([assetWriterInput isReadyForMoreMediaData]) {...} - assumes only one track. How could it be used for more than one track, i.e. both an audio and a video track?


3 Answers

冬天旳寂寞 2024-10-27 17:10:45


AVAssetWriter will automatically interleave requests on its associated AVAssetWriterInputs in order to integrate different tracks into the output file. Just add an AVAssetWriterInput for each of the tracks that you have, and then call requestMediaDataWhenReadyOnQueue:usingBlock: on each of your AVAssetWriterInputs.

Here's a method I have that calls requestMediaDataWhenReadyOnQueue:usingBlock:. I call this method from a loop over the number of output/input pairs I have. (A separate method is good both for code readability and also because, unlike a loop, each call sets up a separate stack frame for the block.)

You only need one dispatch_queue_t and can reuse it for all of the tracks. Note that you definitely should not call dispatch_async from your block, because requestMediaDataWhenReadyOnQueue:usingBlock: expects the block to, well, block until it has filled in as much data as the AVAssetWriterInput will take. You don't want to return before then.

- (void)requestMediaDataForTrack:(int)i {
  AVAssetReaderOutput *output = [[_reader outputs] objectAtIndex:i];
  AVAssetWriterInput *input = [[_writer inputs] objectAtIndex:i];

  [input requestMediaDataWhenReadyOnQueue:_processingQueue usingBlock:
    ^{
      // (the copied block captures and retains self; an unbalanced [self retain] here would leak)
      while ([input isReadyForMoreMediaData]) {
        CMSampleBufferRef sampleBuffer;
        if ([_reader status] == AVAssetReaderStatusReading &&
            (sampleBuffer = [output copyNextSampleBuffer])) {

          BOOL result = [input appendSampleBuffer:sampleBuffer];
          CFRelease(sampleBuffer);

          if (!result) {
            [_reader cancelReading];
            break;
          }
        } else {
          [input markAsFinished];

          switch ([_reader status]) {
            case AVAssetReaderStatusReading:
              // the reader has more for other tracks, even if this one is done
              break;

            case AVAssetReaderStatusCompleted:
              // your method for when the conversion is done
              // should call finishWriting on the writer
              [self readingCompleted];
              break;

            case AVAssetReaderStatusCancelled:
              [_writer cancelWriting];
              [_delegate converterDidCancel:self];
              break;

            case AVAssetReaderStatusFailed:
              [_writer cancelWriting];
              break;

            default:
              break;
          }

          break;
        }
      }
    }
  ];
}
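
For completeness, the per-track setup that drives the method above might look like the following. This is a minimal sketch, not code from the answer: the _reader, _writer, and _processingQueue ivars are assumed from the answer's code, the queue label is hypothetical, and nil output settings are used for pass-through.

- (void)startTranscodingAsset:(AVAsset *)asset {
  // Hypothetical setup: one reader-output/writer-input pair per track,
  // kept at the same index so requestMediaDataForTrack: can pair them.
  _processingQueue = dispatch_queue_create("com.example.transcode", NULL);

  for (AVAssetTrack *track in [asset tracks]) {
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                   outputSettings:nil]; // pass-through
    [_reader addOutput:output];

    AVAssetWriterInput *input =
        [AVAssetWriterInput assetWriterInputWithMediaType:[track mediaType]
                                           outputSettings:nil]; // pass-through
    [_writer addInput:input];
  }

  [_reader startReading];
  [_writer startWriting];
  [_writer startSessionAtSourceTime:kCMTimeZero];

  // One request per output/input pair, as described above.
  for (int i = 0; i < (int)[[_reader outputs] count]; i++) {
    [self requestMediaDataForTrack:i];
  }
}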
爱的那么颓废 2024-10-27 17:10:45


Have you tried using two AVAssetWriterInputs and pushing the samples through a worker queue? Here is a rough sketch.

processing_queue = dispatch_queue_create("com.mydomain.gcdqueue.mediaprocessor", NULL);

[videoAVAssetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
    dispatch_async(processing_queue, ^{ /* process video */ });
}];

[audioAVAssetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
    dispatch_async(processing_queue, ^{ /* process audio */ });
}];
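
Note that, as the first answer points out, the appending should normally happen synchronously on the callback queue rather than being bounced elsewhere with dispatch_async. A synchronous version of this sketch might look like the following, where videoOutput is a hypothetical AVAssetReaderTrackOutput paired with the input:

[videoAVAssetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
    // Pull samples until the input stops accepting data or the track runs out.
    while ([videoAVAssetWriterInput isReadyForMoreMediaData]) {
        CMSampleBufferRef buffer = [videoOutput copyNextSampleBuffer]; // hypothetical reader output
        if (!buffer) {
            [videoAVAssetWriterInput markAsFinished];
            break;
        }
        [videoAVAssetWriterInput appendSampleBuffer:buffer];
        CFRelease(buffer);
    }
}];

The same shape would apply to the audio input.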
极度宠爱 2024-10-27 17:10:45


You can use dispatch groups!

Check out the AVReaderWriter example for MacOSX...

I am quoting directly from the sample RWDocument.m:

- (BOOL)startReadingAndWritingReturningError:(NSError **)outError
{
    BOOL success = YES;
    NSError *localError = nil;

    // Instruct the asset reader and asset writer to get ready to do work
    success = [assetReader startReading];
    if (!success)
        localError = [assetReader error];
    if (success)
    {
        success = [assetWriter startWriting];
        if (!success)
            localError = [assetWriter error];
    }

    if (success)
    {
        dispatch_group_t dispatchGroup = dispatch_group_create();

        // Start a sample-writing session
        [assetWriter startSessionAtSourceTime:[self timeRange].start];

        // Start reading and writing samples
        if (audioSampleBufferChannel)
        {
            // Only set audio delegate for audio-only assets, else let the video channel drive progress
            id <RWSampleBufferChannelDelegate> delegate = nil;
            if (!videoSampleBufferChannel)
                delegate = self;

            dispatch_group_enter(dispatchGroup);
            [audioSampleBufferChannel startWithDelegate:delegate completionHandler:^{
                dispatch_group_leave(dispatchGroup);
            }];
        }
        if (videoSampleBufferChannel)
        {
            dispatch_group_enter(dispatchGroup);
            [videoSampleBufferChannel startWithDelegate:self completionHandler:^{
                dispatch_group_leave(dispatchGroup);
            }];
        }

        // Set up a callback for when the sample writing is finished
        dispatch_group_notify(dispatchGroup, serializationQueue, ^{
            BOOL finalSuccess = YES;
            NSError *finalError = nil;

            if (cancelled)
            {
                [assetReader cancelReading];
                [assetWriter cancelWriting];
            }
            else
            {
                if ([assetReader status] == AVAssetReaderStatusFailed)
                {
                    finalSuccess = NO;
                    finalError = [assetReader error];
                }

                if (finalSuccess)
                {
                    finalSuccess = [assetWriter finishWriting];
                    if (!finalSuccess)
                        finalError = [assetWriter error];
                }
            }

            [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
        });

        dispatch_release(dispatchGroup);
    }

    if (outError)
        *outError = localError;

    return success;
}