iSCSI / MPIO disks stick together after xcopy-style deployment of targets

Posted 2024-08-23 03:37:22


We have the following infrastructure: WUDSS 2003 R2 provides iSCSI targets, which are consumed by a Server 2008 R2 cluster and forwarded as pass-through disks to Hyper-V guests. We do not use VHDs for Hyper-V and, until recently, we did not use MPIO for iSCSI.

For OS deployment, we chose the following scenario: we pre-configured "master" guests with an OS and software installed. Every time we needed to deploy a new guest system, we copied the virtual disk (on the WUDSS box) corresponding to one of those "master" guests. Once the new disk was copied, we imported it into WinTarget and created a new iSCSI target for the new virtual machine. Finally, we created a new guest machine with the new target and sysprepped it.

So far this worked wonderfully: provisioning time for a new guest machine was just a few minutes. Then we installed MPIO for iSCSI traffic balancing, and a deployment problem appeared.

Now, with MPIO enabled, when two or more such "cloned" images are connected via the iSCSI Initiator, the initiator assigns them all to a single physical drive (e.g. \\.\PhysicalDrive5). Each connected target has its own LUN, but the MPIO paths attach to whichever target connected first, and only one disk is visible to the Hyper-V host.

It's clear that iSCSI/MPIO keys off some information stored on the disk, and our first thought was the disk ID. However, we tried changing the disk ID with the diskpart tool, and the disk ID does not seem to play a role.
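For reference, the diskpart attempt described above would have looked roughly like the following transcript (the disk number and the new signature value are placeholders, not values from the original post):

```text
DISKPART> select disk 5
DISKPART> uniqueid disk
Disk ID: 12345678
DISKPART> uniqueid disk id=1A2B3C4D
```

On an MBR disk `uniqueid disk` rewrites the 4-byte disk signature in the boot sector (on GPT disks it takes a GUID instead). As the post notes, this on-disk identifier is not what MPIO uses to group paths, which is why the change had no effect.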

For now we have had to switch to WIM/ImageX-based deployment, but it takes more time, so we would like to know whether there is any way to prevent the "stick together" behavior described above and keep the option of deploying new iSCSI targets/VM guests with the xcopy approach.


1 Answer

别念他 2024-08-30 03:37:22


Ok, the issue is solved. The problem is related to the VHD file's unique ID, which seems to be passed to the initiator via the SCSI INQUIRY command. I have no idea why it worked fine without MPIO.

Anyway, the VHD specification is open, and in a few lines of code I wrote a tool to change this ID.
