Optimize HDX bandwidth over high latency connections
Important:
This information applies to deployments of Citrix Virtual Apps and Desktops 7 1808 and earlier (including XenApp and XenDesktop 7.18), and Citrix Workspace app 1808 for Windows and earlier.
Citrix Virtual Apps and Desktops administrators can configure HDX for better network throughput over high latency connections. Configuring the right number of buffers used to send the data allows HDX to use all the available bandwidth over high latency connections.
Tune buffers
By default, HDX successfully uses the available bandwidth as long as the optimal TCP receive window size for the connection is 64 KB or less. To use all of the available bandwidth when the optimal TCP receive window is above 64 KB, you must increase the number of buffers. This involves calculating the optimal TCP receive window for the HDX connection and then using it to determine how many buffers are required to fully use the available bandwidth.
Calculate optimal TCP receive window
If you know the bandwidth and round-trip time (RTT) latency of the HDX session between the client and the server, you can calculate the optimal TCP receive window size with the following formula:

Optimal TCP Receive Window (bytes) = Bandwidth (kbps) / 8 × RTT latency (ms)

Then round the result up to a multiple of the TCP MSS (Maximum Segment Size):

TCP MSS = MTU (1500) − IP and TCP headers (40) = 1460 (1448 if TCP timestamps are enabled)
In this release, the default window size is increased from 64 KB to 146 KB by raising the default buffer count from 44 to 100 (100 × 1460 bytes ≈ 146 KB). Base your calculations on these new default window size and OutBuf count values.
For example, with 6 Mbps (6144 kbps) bandwidth and 200 ms RTT latency:

Optimal TCP receive window = 6144 / 8 × 200 = 153600 bytes

Rounded up to a multiple of the 1460-byte MSS: 106 × 1460 = 154760 bytes
Because this optimal receive window (about 151 KB) is above the 146 KB default, HDX cannot use all the available 6 Mbps of server-to-client bandwidth with the default settings. Tests confirm that it can use only about 2.5 Mbps of the 6 Mbps, which degrades the performance of the HDX session in this network scenario.
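To make the arithmetic concrete, here is a minimal Python sketch of the two steps above; the function name is illustrative and not part of any Citrix tool:

```python
import math

def optimal_tcp_receive_window(bandwidth_kbps: float, rtt_ms: float,
                               mss: int = 1460) -> int:
    """Optimal TCP receive window in bytes, rounded up to a multiple
    of the TCP MSS."""
    window = bandwidth_kbps / 8 * rtt_ms      # bandwidth-delay product in bytes
    return math.ceil(window / mss) * mss      # round up to an MSS multiple

# 6 Mbps (6144 kbps) bandwidth, 200 ms RTT latency:
print(optimal_tcp_receive_window(6144, 200))  # -> 154760
```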
Calculate the number of buffers
After rounding the TCP receive window up to a multiple of the TCP MSS, use the following formula to calculate the number of buffers required:

Number of Buffers = TCP Receive Window / TCP MSS
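Continuing the sketch above (again with illustrative names), the buffer count follows directly, because the rounded-up window divides evenly by the MSS:

```python
def buffers_needed(rounded_window_bytes: int, mss: int = 1460) -> int:
    """Number of OutBufs for a receive window that is already rounded
    up to a multiple of the TCP MSS."""
    return rounded_window_bytes // mss

# 6 Mbps / 200 ms example: a 154760-byte window needs 106 OutBufs.
print(buffers_needed(154760))  # -> 106
```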
Caution:
Using Registry Editor incorrectly can cause serious problems that might require you to reinstall your operating system. Citrix cannot guarantee that problems resulting from the incorrect use of Registry Editor can be solved. Use Registry Editor at your own risk. Be sure to back up the registry before you edit it.
Ensure that the TCP MSS size used for rounding matches the OutBufLength value under the following registry key on the client side (the default is 1460):
HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\ICA Client\Engine\Configuration\Advanced\Modules\TCP/IP
"OutBufLength"="1460"
After calculating the number of buffers, update the client-side registry with the calculated values:
HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\ICA Client\Engine\Configuration\Advanced\Modules\TCP/IP
"OutBufCountClient2" = Number of OutBufs (default is 100)
"OutBufCountHost2" = Number of OutBufs (default is 100)
For 6 Mbps bandwidth and 200 ms RTT latency: Number of OutBufs = 154760 / 1460 = 106
Then disconnect and reconnect the sessions.
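If you want to script these client-side registry changes, the following minimal Python sketch uses the standard winreg module; it is an illustration, not a Citrix-provided tool. It assumes an elevated session on the Windows client and that the registry has been backed up first; on 64-bit Windows the ICA client keys may sit under WOW6432Node, so adjust the path if necessary.

```python
import winreg  # Windows-only standard library module

# Adjust to r"SOFTWARE\WOW6432Node\Citrix\..." on 64-bit Windows if needed.
KEY_PATH = r"SOFTWARE\Citrix\ICA Client\Engine\Configuration\Advanced\Modules\TCP/IP"

def set_outbuf_counts(count: int) -> None:
    """Write OutBufCountClient2/OutBufCountHost2 as REG_SZ strings,
    matching the registry value format shown above."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "OutBufCountClient2", 0, winreg.REG_SZ, str(count))
        winreg.SetValueEx(key, "OutBufCountHost2", 0, winreg.REG_SZ, str(count))

set_outbuf_counts(106)  # 6 Mbps / 200 ms RTT example
```

After running the script, disconnect and reconnect the session as described above.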
Note:
On StoreFront, use the default ICA file to set these client-side OutBuf settings.
Additional resources
Recommended buffers for WAN
| Bandwidth | RTT Latency | Approximate Optimal TCP Window | Recommended Buffers to use all the available bandwidth | TCP Window with the recommended Buffers (Buffers × TCP MSS, a multiple of 64 KB) |
| --- | --- | --- | --- | --- |
| 2 Mbps | 100 ms | 26 KB | 44 | 64 KB |
| 2 Mbps | 200 ms | 52 KB | 44 | 64 KB |
| 2 Mbps | 300 ms | 77 KB | 88 | 128 KB |
| 3 Mbps | 100 ms | 39 KB | 44 | 64 KB |
| 3 Mbps | 200 ms | 78 KB | 88 | 128 KB |
| 3 Mbps | 300 ms | 116 KB | 88 | 128 KB |
| 6 Mbps | 100 ms | 77 KB | 88 | 128 KB |
| 6 Mbps | 200 ms | 153 KB | 176 | 256 KB |
| 6 Mbps | 300 ms | 230 KB | 176 | 256 KB |
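The recommendations in the table are consistent with rounding the optimal window up to the next power-of-two multiple of the default 44-buffer (roughly 64 KB) window; the following Python sketch reproduces the table under that assumed rounding rule:

```python
import math

DEFAULT_BUFFERS = 44  # one ~64 KB window at a 1460-byte MSS

def recommended_buffers(bandwidth_kbps: float, rtt_ms: float) -> int:
    """Buffer count from the table's apparent rule: round the optimal
    window up to the next power-of-two multiple of 64 KB."""
    optimal = bandwidth_kbps / 8 * rtt_ms          # optimal window in bytes
    windows = math.ceil(optimal / (64 * 1024))     # 64 KB windows needed
    return DEFAULT_BUFFERS * 2 ** math.ceil(math.log2(windows))

for kbps, rtt in [(2048, 100), (3072, 200), (6144, 300)]:
    print(f"{kbps // 1024} Mbps, {rtt} ms -> {recommended_buffers(kbps, rtt)} buffers")
# -> 44, 88, and 176 buffers, matching the table rows
```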
Starting with XenApp and XenDesktop 7.12, adaptive transport optimizes data transport by applying a Citrix protocol called Enlightened Data Transport (EDT) in preference to TCP whenever possible.
Compared to TCP, EDT delivers a superior user experience on challenging long-haul WAN and internet connections. However, the settings shown above can be used to further optimize the performance of EDT when Session Reliability is enabled.
In XenApp and XenDesktop 7.16 and later / Citrix Virtual Apps and Desktops 7 1808 and later, HDX adaptive transport is set to Preferred by default. Versions of Citrix Receiver and Citrix Workspace app that support Enlightened Data Transport use EDT whenever possible.