How is device emulation done in KVM?

Asked 2024-09-06 19:15:04



I know that qemu-kvm does the device emulation in KVM. Does qemu-kvm execute in the userspace of the host? So when a kick function is encountered, the VM exits through a hypercall into the hypervisor, the hypervisor then hands over to qemu-kvm in host userspace, and after doing the needed things, qemu-kvm transitions back to the hypervisor, which then returns to the VM. Does that mean there are two system calls, one from VM --> hypervisor and one from qemu-kvm --> hypervisor? Are these the steps that take place, or am I wrong? If there is any documentation about this kind of thing, please give me the link. Thank you very much...

Thanks,
Bala
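
For example, I imagine the guest-side "kick" is roughly something like the sketch below: a single I/O-port write that the hypervisor has configured to trap. This is just my guess at a minimal example; the port number is made up, not any real device's register.

    #include <stdint.h>

    /* Hypothetical doorbell port; a real virtio device would use the
     * queue-notify register from its PCI configuration instead. */
    #define KICK_PORT 0x510

    /* Guest-side "kick": one OUT instruction. If the hypervisor has set
     * up the CPU to trap I/O-port accesses, this single instruction
     * causes a VM exit -- no explicit hypercall or system call is made
     * by the guest. */
    static inline void kick(uint16_t port, uint32_t queue_index)
    {
        __asm__ volatile("outl %0, %1" : : "a"(queue_index), "Nd"(port));
    }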


Comments (4)

煞人兵器 2024-09-13 19:15:04


I am more familiar with the parts of KVM that target the x86 architecture, so I will try to explain this in terms of KVM's x86 implementation.

On the x86 architecture, KVM leverages the CPU's hardware virtualization support to separate hypervisor and guest modes. In Intel terms, these are VMX root mode and non-root mode, respectively.

A VM entry (hypervisor -> VM) is triggered by KVM with the VMLAUNCH instruction, with all the guest state the CPU needs filled into the VMCS in kernel mode. Only one system call is involved, from qemu-kvm into the kvm kernel module.
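
To make that concrete, here is a minimal sketch of that single syscall boundary, using the documented /dev/kvm ioctl interface (error handling and guest memory setup omitted):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm  = open("/dev/kvm", O_RDWR);       /* the kvm kernel module */
        int vm   = ioctl(kvm, KVM_CREATE_VM, 0);   /* a new, empty VM       */
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);  /* one virtual CPU       */

        /* The kernel shares a kvm_run structure with userspace; on every
         * VM exit it records the exit reason and details there. */
        int sz = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
        struct kvm_run *run = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpu, 0);

        /* ioctl(vcpu, KVM_RUN, 0) is the single qemu-kvm -> kvm system
         * call; inside it, KVM executes VMLAUNCH/VMRESUME to enter the
         * guest. See the loop in the next sketch. */
        (void)run;
        return 0;
    }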

A VM exit happens when the guest OS does something outside its privilege, such as accessing physical hardware, or when an interrupt arrives. Afterwards, a VM entry is issued and the CPU switches to non-root mode again to execute guest code. In summary, a VM exit (VM -> hypervisor) is done by the hardware automatically, and the corresponding exit reason and information are recorded in the VMCS. KVM then checks the VMCS to determine its next step. There is no system call for VM -> hypervisor.
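
From qemu-kvm's point of view, that whole cycle is just a loop around the one KVM_RUN ioctl. A sketch, continuing the setup above (the actual device emulation is left out):

    /* vcpu and run come from the setup sketch above. */
    static int run_vcpu(int vcpu, struct kvm_run *run)
    {
        for (;;) {
            ioctl(vcpu, KVM_RUN, 0);     /* enter guest; returns on a VM
                                            exit the kernel can't handle */
            switch (run->exit_reason) {  /* decoded by KVM from the VMCS */
            case KVM_EXIT_IO:            /* guest IN/OUT, e.g. a kick    */
                /* emulate the port access; the data sits at
                 * (char *)run + run->io.data_offset */
                break;
            case KVM_EXIT_MMIO:          /* emulated memory-mapped I/O   */
                /* emulate using run->mmio.phys_addr, run->mmio.data */
                break;
            case KVM_EXIT_HLT:           /* guest halted                 */
                return 0;
            }
            /* Looping back re-enters the guest: the next KVM_RUN is the
             * only syscall, and inside it KVM issues VMRESUME to switch
             * the CPU back to non-root mode. */
        }
    }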

Most device emulation is done in userspace, where qemu-kvm can leverage the existing QEMU code. However, some device passthrough technologies, such as Intel VT-d, allow the guest to access hardware directly through the IOMMU or other means, which can bring much better performance, especially for high-speed networking devices.

If you want to dig into the source code, I recommend focusing on CPU virtualization (Intel VT-x) first, which lives in linux/arch/x86/kvm/vmx.c. The Intel Software Developer's Manual also has a comprehensive introduction to VT.

栀子花开つ 2024-09-13 19:15:04


KVM was started by an Israeli firm called Qumranet. These introductory papers were written by those guys and are recommended reading:

Kernel-based Virtual Machine Technology: http://www.fujitsu.com/downloads/MAG/vol47-3/paper18.pdf
KVM: Kernel-based Virtualization Driver: http://www.linuxinsight.com/files/kvm_whitepaper.pdf

KVM uses QEMU for I/O emulation, which is explained in the papers. They will help you understand how a switch from guest to host mode works, the reasons behind the switch, how I/O emulation is done by QEMU in userspace, and how execution switches back to the guest. These are excellent, brief papers.

萌能量女王 2024-09-13 19:15:04


I found this good. At least for the basics. Hope it helps.

征棹 2024-09-13 19:15:04


Is the qemu-kvm being executed in the userspace of the host? Yes, and this is a performance bottleneck too; ways around it are being developed. Look at PCI SR-IOV NICs for networking and NPIV for Fibre Channel. Both are special hardware designed to subdivide I/O controllers so that KVM/qemu can attach the VM to a private channel on the controller.

So it means there are two system calls, one from VM-->Hypervisor and one from qemu-kvm-->Hypervisor? I don't know for certain, but I think what crosses the user/kernel space boundary are device interrupts, not system calls.

Perhaps this document will help you a bit:

http://www.linux-kvm.org/wiki/images/4/42/Kvm-device-assignment.pdf
