Optimize HDX bandwidth over high latency connections

July 20, 2022 Contributed by:  C L

Important:

This information applies to deployments with Citrix Virtual Apps and Desktops 7 1808, XenApp and XenDesktop 7.18, and earlier, and Citrix Workspace app 1808 for Windows and earlier.

Citrix Virtual Apps and Desktops administrators can configure HDX to get better network throughput over high latency connections. By configuring the right number of buffers used to send data, you can make HDX use all the available bandwidth over high latency connections.


Tune buffers

By default, HDX successfully uses the available bandwidth as long as the optimal TCP receive window size for the connection is 64 Kilobytes. To use all of the available bandwidth when the optimal TCP Receive Window is above 64 Kilobytes, you must increase the number of buffers. This involves calculating the optimal TCP receive window for the HDX connection and then using it to determine how many buffers are required to fully use the available bandwidth.


Calculate optimal TCP receive window

If you know the bandwidth and round-trip time (RTT) latency of the HDX session between the client and the server, you can use the following formula to calculate the optimal TCP receive window size:

Optimal TCP Receive Window in Bytes = Bandwidth (kbps) / 8 × Latency (ms)

Then, round it up to a multiple of the TCP MSS (Maximum Segment Size):

TCP MSS = MTU (1500) − IP+TCP headers (40) = 1460 (1428 if TCP timestamps are enabled)

In this release, the default window size is increased from 64 Kilobytes to 146 Kilobytes by raising the default buffer count from 44 to 100. Adjust your calculations to account for the new default window size and OutBuf count.

For example, with 6 Megabits per second (6144 Kilobits per second) bandwidth and 200 milliseconds RTT latency:

Optimal TCP receive window = 6144 / 8 × 200 = 153600 bytes

Rounded up to a multiple of the MSS (1460): 154760 bytes
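As a quick sanity check, the window calculation can be sketched in a few lines of Python (the function name `optimal_tcp_window` is illustrative, not part of any Citrix tooling):

```python
import math

MSS = 1460  # TCP Maximum Segment Size: MTU (1500) minus 40-byte IP+TCP headers

def optimal_tcp_window(bandwidth_kbps: int, latency_ms: int) -> int:
    """Optimal TCP receive window in bytes, rounded up to a multiple of the MSS."""
    window = bandwidth_kbps / 8 * latency_ms  # bandwidth-delay product in bytes
    return math.ceil(window / MSS) * MSS      # round up to the next MSS multiple

# 6 Mbps (6144 kbps) bandwidth, 200 ms RTT latency
print(optimal_tcp_window(6144, 200))  # 154760
```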

If the optimal receive window is above 146 Kilobytes, HDX cannot use the full 6 Megabits per second of server-to-client bandwidth with the default settings. Tests confirm that only about 2.5 of the 6 Megabits per second are used, which degrades HDX session performance in this network scenario.

Calculate the number of buffers

After rounding the TCP receive window up to a multiple of the TCP MSS, use the following formula to calculate the number of buffers required:

Number of Buffers = TCP Receive Window / TCP MSS

Caution:

Using Registry Editor incorrectly can cause serious problems that might require you to reinstall your operating system. Citrix cannot guarantee that problems resulting from the incorrect use of Registry Editor can be solved. Use Registry Editor at your own risk. Be sure to back up the registry before you edit it.

Ensure that the TCP MSS size used for rounding matches the OutBufLength value under the following registry key on the client side (default is 1460):

HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\ICA Client\Engine\Configuration\Advanced\Modules\TCP/IP
"OutBufLength"="1460"

After calculating the number of buffers, update the client side registry with the obtained values:

HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\ICA Client\Engine\Configuration\Advanced\Modules\TCP/IP
"OutBufCountClient2"= Number of OutBufs (default is 100)
"OutBufCountHost2"= Number of OutBufs (default is 100)
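For reference, these values could be applied with a .reg file such as the following. This is a sketch only: the REG_SZ value type mirrors the quoting shown above, and 106 is the OutBuf count computed for the 6 Mbps / 200 ms example in this article; verify the key path and value type against your Citrix Workspace app version before importing.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\ICA Client\Engine\Configuration\Advanced\Modules\TCP/IP]
"OutBufLength"="1460"
"OutBufCountClient2"="106"
"OutBufCountHost2"="106"
```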

For 6 Megabits per second bandwidth and 200 milliseconds RTT latency: Number of OutBufs = 154760 / 1460 = 106
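The same arithmetic for the buffer count can be expressed in Python (again, the function name is illustrative only):

```python
import math

MSS = 1460  # must match the OutBufLength registry value

def outbuf_count(bandwidth_kbps: int, latency_ms: int) -> int:
    """Number of OutBufs needed to cover the optimal TCP receive window."""
    window = math.ceil(bandwidth_kbps / 8 * latency_ms / MSS) * MSS  # rounded-up window
    return window // MSS                                             # buffers = window / MSS

# 6 Mbps (6144 kbps) bandwidth, 200 ms RTT latency
print(outbuf_count(6144, 200))  # 106
```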

Then disconnect and reconnect the sessions.

Note:

On StoreFront, use the default ICA file to set these client side OutBuf settings.


Additional resources

Recommended buffers for WAN

| Bandwidth | RTT Latency | Approximate Optimal TCP Window | Recommended Buffers to use all available bandwidth | TCP Window with recommended Buffers (Buffers × TCP MSS, multiples of 64 KB) |
|---|---|---|---|---|
| 2 Mbps | 100 ms | 26 KB | 44 | 64 KB |
| 2 Mbps | 200 ms | 52 KB | 44 | 64 KB |
| 2 Mbps | 300 ms | 77 KB | 88 | 128 KB |
| 3 Mbps | 100 ms | 39 KB | 44 | 64 KB |
| 3 Mbps | 200 ms | 78 KB | 88 | 128 KB |
| 3 Mbps | 300 ms | 116 KB | 88 | 128 KB |
| 6 Mbps | 100 ms | 77 KB | 88 | 128 KB |
| 6 Mbps | 200 ms | 153 KB | 176 | 256 KB |
| 6 Mbps | 300 ms | 230 KB | 176 | 256 KB |

With the release of XenApp and XenDesktop 7.12, adaptive transport for XenApp and XenDesktop optimizes data transport by applying a new Citrix protocol called Enlightened Data Transport (EDT) in preference to TCP whenever possible.

Compared to TCP, EDT delivers a superior user experience on challenging long-haul WAN and internet connections. However, the settings shown above can be used to further optimize the performance of EDT when Session Reliability is enabled.

In XenApp and XenDesktop 7.16 and later / Citrix Virtual Apps and Desktops 7 1808 and later, HDX adaptive transport is set to Preferred by default. Versions of Citrix Receiver and Citrix Workspace app that support Enlightened Data Transport use EDT whenever possible.
