Advanced: Link Aggregation, MPIO, iSCSI MC/S

Posted 2025-01-01 14:03:07


I am trying to find the proper way of accomplishing the following.

I would like to provide 2Gb/s access for clients accessing a fileserver guest VM on an ESXi server, which itself accesses its datastore over iSCSI. The ESXi server therefore needs a 2Gbps connection to the NAS. I would also like to provide 2Gbps directly on the NAS.

It looks like there are three technologies that can help: link aggregation (802.3ad, LAG, trunking), Multipath I/O (MPIO), and iSCSI Multiple Connections per Session (MC/S).

However, each has its own purpose and drawbacks. Aggregation provides 2Gbps in total, but a single connection (I believe the link is chosen by hashing the source/destination MAC addresses) can only get 1Gbps, which is useless for iSCSI, for example, since it is a single stream. MPIO seems a good option for iSCSI because it balances any traffic across the two links, but it seems to require two IPs on the source and two IPs on the destination. I am unsure about MC/S.
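The "single connection gets only 1Gbps" behaviour follows from how 802.3ad distributes frames: the switch hashes header fields (often the source/destination MAC pair) to pick one member link, so every frame of a given flow lands on the same physical port. A toy sketch of the idea (the real hash is vendor-specific; this is just an illustration):

```python
# Toy model of 802.3ad member-link selection (illustrative only; real
# switches use vendor-specific hashes over MAC/IP/port fields).
def lag_member(src_mac: str, dst_mac: str, n_links: int = 2) -> int:
    """Pick the member link for a frame by hashing the MAC pair."""
    digest = 0
    for byte in (src_mac + dst_mac).encode():
        digest = (digest * 31 + byte) % 2**32  # simple deterministic hash
    return digest % n_links

# Every frame between the same two hosts maps to the same 1Gbps link,
# so one iSCSI session never exceeds a single link's bandwidth.
flow_links = {lag_member("00:11:22:33:44:55", "66:77:88:99:aa:bb")
              for _ in range(1000)}
print(flow_links)  # a single member link, e.g. {0} or {1}
```

Different MAC pairs may land on different links, which is why aggregation helps many clients in aggregate but never a single flow.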

Here is what I would like to achieve; however, I am not sure which technology to employ on each pair of 1Gbps NICs.

I also think this design is flawed, because doing link aggregation between the NAS and the switch would prevent me from using MPIO on the ESX host: MPIO also requires two IPs on the NAS, and I think link aggregation will give me a single IP.

Maybe using MC/S instead of MPIO would work?

Here is a diagram:

[diagram image not included in this capture]


Answer by 一百个冬季, 2025-01-08 14:03:07:


If you want to achieve 2Gbps to a VM in ESX, it is possible using MPIO and iSCSI, but as you say you will need two adapters on the ESX host and two on the NAS. The drawback is that your NAS will need to support multiple connections from the same initiator, and not all of them do. The path policy will need to be set to round-robin so you can use active-active connections. To get ESX to use both paths at over 50% each, you will need to adjust the round-robin balancing mode to switch paths every 1 I/O instead of every 1000. You can do this by SSHing to the host and using esxcli (if you need full instructions on how to do that, I can provide them).
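For reference, the esxcli sequence looks roughly like this on ESXi 5.x and later; the `naa.6001xxxxxxxx` device ID below is a placeholder, so substitute your own LUN's identifier from the device list:

```shell
# Sketch for ESXi 5.x+; "naa.6001xxxxxxxx" is a placeholder device ID.
# Show devices and their current path selection policy:
esxcli storage nmp device list

# Set the path selection policy to round-robin for the iSCSI LUN:
esxcli storage nmp device set --device naa.6001xxxxxxxx --psp VMW_PSP_RR

# Switch paths every 1 I/O instead of the default 1000:
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.6001xxxxxxxx --type iops --iops 1
```

Note the IOPS setting is per device, so repeat it for each iSCSI LUN you want balanced.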

After this you should be able to run IOMeter on a VM and see a data rate above 1Gbps: maybe 150MB/s with a 1500-byte MTU, and around 200MB/s if you use jumbo frames.
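Those figures sit below a back-of-the-envelope ceiling for 2Gbps of TCP payload. Assuming 38 bytes of Ethernet framing per packet (preamble, header, FCS, inter-frame gap) plus 40 bytes of IP+TCP headers inside the MTU, and ignoring iSCSI PDU overhead and path-switching costs (which is why measured numbers come in lower):

```python
# Rough wire-efficiency ceiling for iSCSI over TCP on 2 x 1Gbps links.
# Assumptions: 38 B Ethernet framing per packet (preamble 8, header 14,
# FCS 4, inter-frame gap 12) and 40 B of IP + TCP headers inside the MTU.
def tcp_payload_rate_mbps(link_gbps: float, mtu: int) -> float:
    frame_overhead = 8 + 14 + 4 + 12      # bytes on the wire outside the MTU
    payload = mtu - 20 - 20               # IP + TCP headers inside the MTU
    efficiency = payload / (mtu + frame_overhead)
    return link_gbps * 1e9 / 8 * efficiency / 1e6  # MB/s

print(round(tcp_payload_rate_mbps(2, 1500), 1))   # ~237.3 MB/s ceiling
print(round(tcp_payload_rate_mbps(2, 9000), 1))   # ~247.8 MB/s with jumbo frames
```

Jumbo frames narrow the header tax only slightly; the bigger wins in practice come from fewer packets to process and better balancing across the two paths.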

On another note (which might prove useful to your setups in the future), it is possible to achieve 2Gbps with two adapters on the source and a bonded adapter on the NAS (so 2 → 1) when using the MPIO iSCSI initiator that comes with Server 2008. This initiator works slightly differently from VMware's and doesn't require your NAS to support many connections from one initiator; from what I can tell, it spawns multiple initiators instead of multiple sessions.
