Capturing video with MonoTouch on iOS

I have the code to create, configure and start a video capture session in Objective-C running without problems. I'm porting the sample to C# and MonoTouch 4.0.3 and have a few problems; here is the code:

    void Initialize ()
    {   
        // Create notifier delegate class 
        captureVideoDelegate = new CaptureVideoDelegate(this);

        // Create capture session
        captureSession = new AVCaptureSession();
        captureSession.SessionPreset = AVCaptureSession.Preset640x480;

        // Create capture device
        captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);

        // Create capture device input
        NSError error;
        captureDeviceInput = new AVCaptureDeviceInput(captureDevice, out error);
        captureSession.AddInput(captureDeviceInput);

        // Create capture device output
        captureVideoOutput = new AVCaptureVideoDataOutput();
        captureSession.AddOutput(captureVideoOutput);
        captureVideoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV32BGRA;
        captureVideoOutput.MinFrameDuration = new CMTime(1, 30);
        //
        // ISSUE 1
        // In the original Objective-C code I was creating a dispatch_queue_t object, passing it to
        // setSampleBufferDelegate:queue message and worked, here I could not find an equivalent to 
        // the queue mechanism. Also not sure if the delegate should be used like this).
        //
        captureVideoOutput.SetSampleBufferDelegatequeue(captureVideoDelegate, ???????);

        // Create preview layer
        previewLayer = AVCaptureVideoPreviewLayer.FromSession(captureSession);
        previewLayer.Orientation = AVCaptureVideoOrientation.LandscapeRight;
        //
        // ISSUE 2:
        // Didn't find any VideoGravity related enumeration in MonoTouch (not sure if string will work)
        //
        previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill";
        previewLayer.Frame = new RectangleF(0, 0, 1024, 768);
        this.View.Layer.AddSublayer(previewLayer);

        // Start capture session
        captureSession.StartRunning();

    }

    #endregion

    public class CaptureVideoDelegate : AVCaptureVideoDataOutputSampleBufferDelegate
    {
        private VirtualDeckViewController mainViewController;

        public CaptureVideoDelegate(VirtualDeckViewController viewController)
        {
            mainViewController = viewController;
        }

        public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
        {
            // TODO: Implement - see: http://go-mono.com/docs/index.aspx?link=T%3aMonoTouch.Foundation.ModelAttribute

        }
    }

Issue 1:
I'm not sure how to correctly use the delegate in the SetSampleBufferDelegatequeue method. I also haven't found an equivalent of the dispatch_queue_t object, which works fine in Objective-C, to pass as the second parameter.
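
(For reference, the answers below resolve this: MonoTouch wraps dispatch_queue_t as MonoTouch.CoreFoundation.DispatchQueue, and the setter is bound as SetSampleBufferDelegateAndQueue in the MonoTouch version the answerers use. A minimal sketch, assuming those bindings:)

    // dispatch_queue_t is exposed as MonoTouch.CoreFoundation.DispatchQueue
    DispatchQueue dispatchQueue = new DispatchQueue("VideoCaptureQueue");

    // The binding is named SetSampleBufferDelegateAndQueue here; some
    // releases spelled it SetSampleBufferDelegatequeue
    captureVideoOutput.SetSampleBufferDelegateAndQueue(captureVideoDelegate, dispatchQueue);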

Issue 2:
I did not find any VideoGravity enumeration in the MonoTouch libraries, and I'm not sure whether passing a string with the constant's value will work.
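
(Again previewing the answers below: there was no VideoGravity enumeration in MonoTouch at the time, and assigning the Objective-C constant's value as a plain string does work:)

    // No enum binding exists; pass the Objective-C constant's value as a string
    previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill";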

I have looked for clues to solve this, but there are no clear samples around. Any sample or information on how to do the same in MonoTouch would be highly appreciated.

Many thanks.

2 Answers

Answer from 瞎闹 (2024-11-13):

This is my code, use it well. I've cut it down to just the important stuff: all the initialisation is there, as well as the reading of the sample output buffer.

I then have code that processes the CVImageBuffer from a linked custom ObjC library. If you need to process it in MonoTouch, you need to go the extra mile and convert it to a CGImage or UIImage. There is no function for that in MonoTouch (AFAIK), so you need to bind it yourself from plain ObjC. A sample in ObjC is here: how to convert a CVImageBufferRef to UIImage

    // Fields such as captureInput, captureOutput, captureSession, prevLayer,
    // avBufferDelegate, dispatchQueue, liveView and isScanning are declared
    // elsewhere in the class (cut out of this excerpt).
    public void InitCapture ()
    {
        try
        {
            // Setup the input
            NSError error;
            captureInput = new AVCaptureDeviceInput (AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video), out error);

            // Setup the output
            captureOutput = new AVCaptureVideoDataOutput ();
            captureOutput.AlwaysDiscardsLateVideoFrames = true;
            captureOutput.SetSampleBufferDelegateAndQueue (avBufferDelegate, dispatchQueue);
            captureOutput.MinFrameDuration = new CMTime (1, 10);

            // Set the video output to store frames in BGRA (compatible across devices)
            captureOutput.VideoSettings = new AVVideoSettings (CVPixelFormatType.CV32BGRA);

            // Create a capture session
            captureSession = new AVCaptureSession ();
            captureSession.SessionPreset = AVCaptureSession.PresetMedium;
            captureSession.AddInput (captureInput);
            captureSession.AddOutput (captureOutput);

            // Setup the preview layer
            prevLayer = new AVCaptureVideoPreviewLayer (captureSession);
            prevLayer.Frame = liveView.Bounds;
            prevLayer.VideoGravity = "AVLayerVideoGravityResize"; // image may be slightly distorted, but the red bar position will be accurate

            liveView.Layer.AddSublayer (prevLayer);

            StartLiveDecoding ();
        }
        catch (Exception ex)
        {
            Console.WriteLine (ex.ToString ());
        }
    }

    public void DidOutputSampleBuffer (AVCaptureOutput captureOutput, MonoTouch.CoreMedia.CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
    {
        Console.WriteLine ("DidOutputSampleBuffer: enter");

        if (isScanning)
        {
            CVImageBuffer imageBuffer = sampleBuffer.GetImageBuffer ();

            Console.WriteLine ("DidOutputSampleBuffer: calling decode");

            // NSLog(@"got image w=%d h=%d bpr=%d", CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer), CVPixelBufferGetBytesPerRow(imageBuffer));
            // call the decoder
            DecodeImage (imageBuffer);
        }
        else
        {
            Console.WriteLine ("DidOutputSampleBuffer: not scanning");
        }

        Console.WriteLine ("DidOutputSampleBuffer: quit");
    }
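
For the CVImageBuffer-to-UIImage conversion mentioned above, here is a hedged sketch adapted from the Miguel de Icaza sample that the next answer links to. It assumes the output is configured for CVPixelFormatType.CV32BGRA (as in the code above) and that MonoTouch.CoreGraphics, MonoTouch.CoreMedia, MonoTouch.CoreVideo and MonoTouch.UIKit are imported; ImageFromSampleBuffer is a hypothetical helper name:

    // Hypothetical helper: converts a BGRA sample buffer into a UIImage
    UIImage ImageFromSampleBuffer (CMSampleBuffer sampleBuffer)
    {
        using (CVPixelBuffer pixelBuffer = sampleBuffer.GetImageBuffer () as CVPixelBuffer)
        {
            // Lock the base address so the raw pixel data can be read
            pixelBuffer.Lock (0);

            // Wrap the BGRA pixel data in a bitmap context and snapshot it as a CGImage
            var flags = CGBitmapFlags.PremultipliedFirst | CGBitmapFlags.ByteOrder32Little;
            using (var colorSpace = CGColorSpace.CreateDeviceRGB ())
            using (var context = new CGBitmapContext (pixelBuffer.BaseAddress,
                       pixelBuffer.Width, pixelBuffer.Height, 8, pixelBuffer.BytesPerRow,
                       colorSpace, (CGImageAlphaInfo) flags))
            using (var cgImage = context.ToImage ())
            {
                pixelBuffer.Unlock (0);
                return UIImage.FromImage (cgImage);
            }
        }
    }
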
Answer from 萌无敌 (2024-11-13):

All issues are solved and it's finally working fine. The freezing was happening because in my test I was not yet disposing the sampleBuffer in the DidOutputSampleBuffer method. The final code for my view is here:

UPDATE 1: Changed the assignment of the VideoSettings CVPixelFormat; it was incorrect and would cause a wrong BytesPerPixel in the sampleBuffer.
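
(The likely reason, as an assumption rather than something the answer spells out: the VideoSettings getter appears to hand back a copy of the settings, so mutating its PixelFormat never reaches the output; the whole AVVideoSettings object has to be assigned instead:)

    // Wrong: mutates a copy returned by the getter (assumption: getter copies),
    // so the output keeps its default format and BytesPerPixel comes out wrong
    //captureVideoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV32BGRA;

    // Correct: assign a fresh settings object so the BGRA format actually sticks
    captureVideoOutput.VideoSettings = new AVVideoSettings(CVPixelFormatType.CV32BGRA);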

    public partial class VirtualDeckViewController : UIViewController
    {
        public CaptureVideoDelegate captureVideoDelegate;

        public AVCaptureVideoPreviewLayer previewLayer;
        public AVCaptureSession captureSession;
        public AVCaptureDevice captureDevice;
        public AVCaptureDeviceInput captureDeviceInput;
        public AVCaptureVideoDataOutput captureVideoOutput;

        ...

        public override void ViewDidLoad ()
        {
            base.ViewDidLoad ();

            SetupVideoCaptureSession();
        }

        public void SetupVideoCaptureSession()
        {
            // Create notifier delegate class
            captureVideoDelegate = new CaptureVideoDelegate();

            // Create capture session
            captureSession = new AVCaptureSession();
            captureSession.BeginConfiguration();
            captureSession.SessionPreset = AVCaptureSession.Preset640x480;

            // Create capture device
            captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);

            // Create capture device input
            NSError error;
            captureDeviceInput = new AVCaptureDeviceInput(captureDevice, out error);
            captureSession.AddInput(captureDeviceInput);

            // Create capture device output
            captureVideoOutput = new AVCaptureVideoDataOutput();
            captureVideoOutput.AlwaysDiscardsLateVideoFrames = true;
            // UPDATE: wrong VideoSettings assignment
            //captureVideoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV32BGRA;
            // UPDATE: correct VideoSettings assignment
            captureVideoOutput.VideoSettings = new AVVideoSettings(CVPixelFormatType.CV32BGRA);
            captureVideoOutput.MinFrameDuration = new CMTime(1, 30);
            DispatchQueue dispatchQueue = new DispatchQueue("VideoCaptureQueue");
            captureVideoOutput.SetSampleBufferDelegateAndQueue(captureVideoDelegate, dispatchQueue);
            captureSession.AddOutput(captureVideoOutput);

            // Create preview layer
            previewLayer = AVCaptureVideoPreviewLayer.FromSession(captureSession);
            previewLayer.Orientation = AVCaptureVideoOrientation.LandscapeLeft;
            previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill";
            previewLayer.Frame = new RectangleF(0, 0, 1024, 768);
            this.View.Layer.AddSublayer(previewLayer);

            // Commit the configuration and start the capture session
            captureSession.CommitConfiguration();
            captureSession.StartRunning();
        }

        public class CaptureVideoDelegate : AVCaptureVideoDataOutputSampleBufferDelegate
        {
            public CaptureVideoDelegate() : base()
            {
            }

            public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
            {
                // TODO: Implement buffer processing

                // Very important: the buffer needs to be disposed or the capture will freeze
                sampleBuffer.Dispose();
            }
        }
    }

The final piece of the puzzle was answered by the Miguel de Icaza sample I finally found here: link

Thanks to Miguel and Pavel
