How to add Picture in Picture (PIP) for WebRTC video calls in iOS Swift
We used the following steps to integrate PIP (Picture in Picture) for a WebRTC video call:
- We enabled the Audio, AirPlay, and Picture in Picture background mode in the project.
- We added an entitlements file for camera access while multitasking; see "Accessing the Camera While Multitasking". (A configuration sketch follows this list.)
- From the linked documentation, we followed:

  Configure your app
  After your account has permission to use the entitlement, you can create a new provisioning profile with it by following these steps:
  1. Sign in to your Apple Developer account.
  2. Go to Certificates, Identifiers & Profiles.
  3. Generate a new provisioning profile for your app.
  4. Select the Multitasking Camera Access entitlement from the additional entitlements for your account.

- We also integrated the following link, but we have no concrete hint on how to add a video rendering layer view to this SampleBufferVideoCallView: https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_for_video_calls?changes=__8
- Also, RTCMTLVideoView is not supported because it creates an MTKView, so we used WebRTC's default video rendering view, RTCEAGLVideoView, which is backed by GLKView.
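For reference, the entitlements file from the entitlement step looks roughly like this (a sketch only; the Multitasking Camera Access entitlement must first be granted to your developer account by Apple):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Multitasking Camera Access; requires Apple's approval for the account -->
    <key>com.apple.developer.avfoundation.multitasking-camera-access</key>
    <true/>
</dict>
</plist>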
PIP integration code with WebRTC for iOS in Swift:
import UIKit
import AVKit
import WebRTC

class SampleBufferVideoCallView: UIView {
    // Back this view with an AVSampleBufferDisplayLayer so decoded
    // frames can be enqueued on it directly.
    override class var layerClass: AnyClass {
        return AVSampleBufferDisplayLayer.self
    }

    var sampleBufferDisplayLayer: AVSampleBufferDisplayLayer {
        return layer as! AVSampleBufferDisplayLayer
    }
}

func startPIP() {
    if #available(iOS 15.0, *) {
        let sampleBufferVideoCallView = SampleBufferVideoCallView()

        let pipVideoCallViewController = AVPictureInPictureVideoCallViewController()
        pipVideoCallViewController.preferredContentSize = CGSize(width: 1080, height: 1920)
        sampleBufferVideoCallView.frame = pipVideoCallViewController.view.bounds
        sampleBufferVideoCallView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        pipVideoCallViewController.view.addSubview(sampleBufferVideoCallView)

        // Full-screen remote renderer (GLKView-backed RTCEAGLVideoView).
        let remoteVideoRenderer = RTCEAGLVideoView()
        remoteVideoRenderer.contentMode = .scaleAspectFill
        remoteVideoRenderer.frame = viewUser.frame
        viewUser.addSubview(remoteVideoRenderer)

        let pipContentSource = AVPictureInPictureController.ContentSource(
            activeVideoCallSourceView: self.viewUser,
            contentViewController: pipVideoCallViewController)

        let pipController = AVPictureInPictureController(contentSource: pipContentSource)
        pipController.canStartPictureInPictureAutomaticallyFromInline = true
        pipController.delegate = self
        // NOTE: keep a strong reference to pipController (e.g. store it in a
        // property); a local constant is deallocated when startPIP returns.
    } else {
        // Fallback on earlier versions
    }
}
How do we add the viewUser GLKView to the pipContentSource, and how do we integrate the remote video buffer view into SampleBufferVideoCallView?
Is it possible to render the video buffer layer view on the AVSampleBufferDisplayLayer this way, or in any other way?
Code-Level Support
Apple gave the following advice when asked about this problem:
To display a picture-in-picture (PIP) with WebRTC in a video call using the provided code, follow these steps:
Step 1: Initialize the WebRTC video call
Make sure you have already set up the WebRTC video call with the necessary signaling and peer connection establishment. This code assumes you already have a remoteVideoTrack that represents the video stream received from the remote user.
Step 2: Create a FrameRenderer object
Instantiate the FrameRenderer object, which will be responsible for rendering the video frames received from the remote user for the PIP display.
// Add this code where you initialize your video call (before rendering starts)
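A minimal sketch of that initialization, assuming FrameRenderer is the class defined in Step 5 and that it is stored as a property of the object that owns the call:

// Created once, before rendering starts, and kept for the lifetime of
// the call so WebRTC can keep delivering frames to it.
let frameRenderer = FrameRenderer()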
Step 3: Render remote video to the FrameRenderer
In the renderRemoteVideo function, add the video frames from the remoteVideoTrack to the FrameRenderer object to render them in the PIP view.
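A sketch of what that function might look like; remoteVideoTrack (an RTCVideoTrack) and frameRenderer come from the earlier steps:

func renderRemoteVideo(to track: RTCVideoTrack) {
    // RTCVideoTrack delivers every decoded frame to each attached renderer,
    // so the PIP layer is fed in parallel with the full-screen view.
    track.add(frameRenderer)
}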
Step 4: Remove the FrameRenderer from rendering remote video
In the removeRenderRemoteVideo function, remove the FrameRenderer object from rendering the video frames when you want to stop the PIP display.
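And the matching teardown, under the same assumptions:

func removeRenderRemoteVideo(from track: RTCVideoTrack) {
    // Detach the PIP renderer, e.g. when PIP stops or the call ends.
    track.remove(frameRenderer)
}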
Step 5: Define the FrameRenderer class
The FrameRenderer class is responsible for rendering video frames received from WebRTC in the PIP view.
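Apple's advice does not include the class body, so the following is only a sketch of one possible implementation. It assumes decoded frames arrive as RTCCVPixelBuffer (hardware-decoded streams usually do; I420 buffers would need an extra conversion step), and the displayLayer property is an assumption that gets wired up in Step 6:

import AVFoundation
import WebRTC

final class FrameRenderer: NSObject, RTCVideoRenderer {
    // The AVSampleBufferDisplayLayer backing SampleBufferVideoCallView.
    weak var displayLayer: AVSampleBufferDisplayLayer?

    func setSize(_ size: CGSize) {
        // Scaling is handled by the layer's videoGravity; nothing to do here.
    }

    func renderFrame(_ frame: RTCVideoFrame?) {
        guard let frame = frame,
              let rtcBuffer = frame.buffer as? RTCCVPixelBuffer,
              let sampleBuffer = Self.sampleBuffer(from: rtcBuffer.pixelBuffer) else { return }
        DispatchQueue.main.async { [weak self] in
            self?.displayLayer?.enqueue(sampleBuffer)
        }
    }

    // Wraps a CVPixelBuffer in a CMSampleBuffer stamped with the host clock.
    private static func sampleBuffer(from pixelBuffer: CVPixelBuffer) -> CMSampleBuffer? {
        var format: CMVideoFormatDescription?
        guard CMVideoFormatDescriptionCreateForImageBuffer(
                allocator: kCFAllocatorDefault,
                imageBuffer: pixelBuffer,
                formatDescriptionOut: &format) == noErr,
              let videoFormat = format else { return nil }

        var timing = CMSampleTimingInfo(
            duration: .invalid,
            presentationTimeStamp: CMClockGetTime(CMClockGetHostTimeClock()),
            decodeTimeStamp: .invalid)

        var sampleBuffer: CMSampleBuffer?
        guard CMSampleBufferCreateReadyWithImageBuffer(
                allocator: kCFAllocatorDefault,
                imageBuffer: pixelBuffer,
                formatDescription: videoFormat,
                sampleTiming: &timing,
                sampleBufferOut: &sampleBuffer) == noErr,
              let buffer = sampleBuffer else { return nil }

        // Ask the layer to show the frame as soon as it is enqueued.
        if let rawAttachments = CMSampleBufferGetSampleAttachmentsArray(buffer, createIfNecessary: true),
           let attachments = (rawAttachments as NSArray).firstObject as? NSMutableDictionary {
            attachments[kCMSampleAttachmentKey_DisplayImmediately as String] = true
        }
        return buffer
    }
}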
Step 6: Implement the PIP functionality
Based on the provided code, it seems you already have PIP functionality implemented using AVPictureInPictureController. Ensure that the startPIP function is called when you want to enable PIP during the video call. The SampleBufferVideoCallView is used to display the PIP video frames received from the frameRenderer.
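Under those assumptions, the glue between the question's startPIP code and the renderer might look like this (names follow the sketches above):

// Inside startPIP, after creating sampleBufferVideoCallView:
sampleBufferVideoCallView.sampleBufferDisplayLayer.videoGravity = .resizeAspectFill
frameRenderer.displayLayer = sampleBufferVideoCallView.sampleBufferDisplayLayer

// Feed the PIP layer from the remote track:
renderRemoteVideo(to: remoteVideoTrack)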
Note: The FrameRenderer object should be defined in your application, and you should ensure that the PIP view's position and size are appropriately set up to achieve the desired PIP effect. Additionally, remember to handle the call-end scenario and release the frameRenderer and WebRTC connections gracefully.
Keep in mind that the code provided assumes you already have the necessary WebRTC setup, and this code focuses on the PIP rendering aspect only. Additionally, PIP is supported from iOS 15.0 onwards, so make sure to handle devices running earlier versions appropriately.