Autoscale architecture for Google Cloud
Citrix ADM handles client traffic distribution using Google Network Load Balancer. The following diagram illustrates how autoscaling occurs with the Google Network Load Balancer as the traffic distributor:
Google Network Load Balancer is the distribution tier for the cluster nodes. It manages the client traffic and distributes it to the Citrix ADC VPX cluster nodes that are available in the Citrix ADM Autoscale group across zones.
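To picture the distribution tier, the following sketch shows how a passthrough load balancer can spread new client connections across the VPX cluster node IPs in the Autoscale group. It is a conceptual illustration only: the node list, zones, and hashing scheme are hypothetical and do not represent Google's actual load-balancing implementation.

```python
# Conceptual sketch only: spreads new client connections across the VPX
# cluster node IPs registered in the Autoscale group. Node IPs, zones, and
# the hashing scheme are hypothetical, not Google's implementation.
from dataclasses import dataclass
from typing import List
import hashlib

@dataclass
class VpxNode:
    ip: str
    zone: str
    healthy: bool = True

# Hypothetical cluster nodes spread across zones in the Autoscale group.
NODES: List[VpxNode] = [
    VpxNode("10.0.1.10", "us-central1-a"),
    VpxNode("10.0.2.10", "us-central1-b"),
    VpxNode("10.0.3.10", "us-central1-c"),
]

def pick_backend(src_ip: str, src_port: int, dst_ip: str, dst_port: int, proto: str) -> VpxNode:
    """Pick a healthy VPX node for a new client connection using a 5-tuple hash."""
    healthy = [n for n in NODES if n.healthy]
    if not healthy:
        raise RuntimeError("no healthy VPX cluster nodes available")
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    index = int(hashlib.sha256(key).hexdigest(), 16) % len(healthy)
    return healthy[index]

print(pick_backend("203.0.113.7", 51000, "198.51.100.5", 443, "tcp"))
```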
Citrix ADM triggers the scale-out or scale-in action at the cluster level. When a scale-out is triggered, the registered virtual machines are provisioned and added to the cluster. Similarly, when a scale-in is triggered, nodes are removed from the Citrix ADC VPX clusters and de-provisioned.
A Citrix ADM Autoscale group is a group of Citrix ADC instances that load balance applications as a single entity and trigger autoscaling based on the configured threshold parameter values.
How the autoscaling works
The following flowchart illustrates the autoscaling workflow:
Citrix ADM collects the statistics (CPU, memory, and throughput) from the Autoscale-provisioned clusters every minute.
The statistics are evaluated against the configured thresholds. A scale-out is triggered when the statistics exceed the maximum threshold, and a scale-in is triggered when the statistics remain below the minimum threshold.
If a scale-out is triggered:
A new node is provisioned.
The node is attached to the cluster, and the configuration is synchronized from the cluster to the new node.
The node is registered with Citrix ADM.
The new node's IP addresses are updated in the Google Network Load Balancer.
If a scale-in is triggered:
A node is identified for removal.
New connections to the selected node are stopped.
The node is detached from the cluster, deregistered from Citrix ADM, and then de-provisioned from Google Cloud.
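The scale-out and scale-in steps above can be summarized as the following sketch. All classes and helpers in it are hypothetical placeholders for the corresponding Citrix ADM and Google Cloud operations; they are not actual product APIs.

```python
# Minimal, self-contained sketch of the scale-out and scale-in steps.
# Every class and call below is a hypothetical stub, not a product API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    ips: List[str]

@dataclass
class Cluster:
    nodes: List[Node] = field(default_factory=list)

class Adm:
    def register(self, node): print(f"ADM: registered {node.name}")
    def deregister(self, node): print(f"ADM: deregistered {node.name}")

class Nlb:
    def add_backends(self, ips): print(f"NLB: added backends {ips}")
    def remove_backends(self, ips): print(f"NLB: removed backends {ips}")

def scale_out(cluster: Cluster, adm: Adm, nlb: Nlb) -> None:
    node = Node("vpx-new", ["10.0.1.20"])      # provision a new VPX instance (stubbed)
    cluster.nodes.append(node)                 # attach to the cluster; config syncs to the new node
    adm.register(node)                         # register the node with Citrix ADM
    nlb.add_backends(node.ips)                 # update the node IPs in the Network Load Balancer

def scale_in(cluster: Cluster, adm: Adm, nlb: Nlb) -> None:
    node = cluster.nodes[-1]                   # identify the node to remove
    nlb.remove_backends(node.ips)              # stop new connections to the selected node
    cluster.nodes.remove(node)                 # detach from the cluster after connections drain
    adm.deregister(node)                       # deregister from Citrix ADM
    print(f"GCP: de-provisioned {node.name}")  # delete the VM from Google Cloud (stubbed)

cluster, adm, nlb = Cluster(), Adm(), Nlb()
scale_out(cluster, adm, nlb)
scale_in(cluster, adm, nlb)
```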
Note
When the application is deployed, an IP set is created on clusters in every availability zone. Then, the domain and instance IP addresses are registered with the Google Network Load Balancer. When the application is removed, the domain and instance IP addresses are deregistered from the Google Network Load Balancer. Then, the IP set is deleted.
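The ordering described in this note can be sketched as follows. The class and function names are hypothetical placeholders, not Citrix ADM or Google Cloud APIs.

```python
# Sketch of the note above: ordering of IP set creation and load-balancer
# registration when an application is deployed or removed. All names are
# hypothetical stand-ins.
class ClusterStub:
    """Hypothetical stand-in for a VPX cluster in one availability zone."""
    def __init__(self, zone): self.zone = zone
    def create_ip_set(self, app): print(f"{self.zone}: IP set created for {app}")
    def delete_ip_set(self, app): print(f"{self.zone}: IP set deleted for {app}")

class NlbStub:
    """Hypothetical stand-in for the Google Network Load Balancer front end."""
    def register(self, domain, ips): print(f"NLB: registered {domain} -> {ips}")
    def deregister(self, domain, ips): print(f"NLB: deregistered {domain} -> {ips}")

def deploy_application(clusters, nlb, domain, instance_ips):
    for cluster in clusters:                  # one cluster per availability zone
        cluster.create_ip_set(domain)         # 1. create the IP set on each cluster
    nlb.register(domain, instance_ips)        # 2. then register the domain and instance IPs

def remove_application(clusters, nlb, domain, instance_ips):
    nlb.deregister(domain, instance_ips)      # 1. deregister from the load balancer first
    for cluster in clusters:
        cluster.delete_ip_set(domain)         # 2. then delete the IP set

zones = [ClusterStub("us-central1-a"), ClusterStub("us-central1-b")]
deploy_application(zones, NlbStub(), "app.example.com", ["10.0.1.10", "10.0.2.10"])
```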
Example autoscaling scenario
Consider that you have created an Autoscale group named asg_arn in a single availability zone with the following configuration:
Selected threshold parameter – Memory usage.
Threshold limits set for memory:
Minimum limit: 40
Maximum limit: 85
Watch time – 2 minutes.
Cooldown period – 10 minutes.
Time to wait during de-provision – 10 minutes.
DNS time to live – 10 seconds.
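For reference, the same example configuration can be written out as a simple structure. The sketch below uses hypothetical key names that do not reflect the Citrix ADM API or UI field names.

```python
# Illustrative representation of the example Autoscale group configuration.
# Key names are hypothetical and do not reflect the Citrix ADM API or UI fields.
autoscale_group = {
    "name": "asg_arn",
    "availability_zones": 1,
    "threshold_parameter": "memory",
    "memory_threshold": {"minimum": 40, "maximum": 85},  # memory usage limits
    "watch_time_minutes": 2,
    "cooldown_minutes": 10,
    "deprovision_wait_minutes": 10,
    "dns_ttl_seconds": 10,
}
```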
After the Autoscale group is created, statistics are collected from the Autoscale group. The Autoscale policy also evaluates whether an Autoscale event is in progress. If autoscaling is in progress, Citrix ADM waits for that event to complete before collecting the statistics again.
The sequence of events
Memory usage exceeds the maximum threshold limit at T2. However, a scale-out is not triggered because the breach did not persist for the specified watch time.
Scale-out is triggered at T5 after the maximum threshold is breached continuously for 2 minutes (watch time).
No action is taken for the breach between T5 and T10 because node provisioning is in progress.
A node is provisioned at T10 and added to the cluster. The cooldown period starts.
No action is taken for the breach between T10 and T20 because of the cooldown period. This period ensures the organic growth of the Autoscale group. Before triggering the next scaling decision, Citrix ADM waits for the current traffic to stabilize and average out on the current set of instances.
Memory usage drops below the minimum threshold limit at T23. However, a scale-in is not triggered because the breach did not persist for the specified watch time.
Scale-in is triggered at T26 after the minimum threshold is breached continuously for 2 minutes (watch time). A node in the cluster is identified for de-provisioning.
No action is taken between T26 and T36 because Citrix ADM waits for existing connections to drain. For DNS-based autoscaling, the TTL is also in effect.
Note
For DNS-based autoscaling, Citrix ADM waits for the specified time-to-live (TTL) period and then waits for existing connections to drain before initiating node de-provisioning.
No action is taken between T37 and T39 because node de-provisioning is in progress.
The node is removed from the cluster and de-provisioned at T40.
All connections to the selected node are drained before node de-provisioning is initiated. Therefore, the cooldown period is skipped after the node is de-provisioned.
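The timing rules in this scenario (watch time, cooldown, and the de-provision wait) can be expressed as a small decision loop. The following sketch is illustrative only: the memory trace is made up and the logic is a simplification, not Citrix ADM's actual scheduler.

```python
# Illustrative decision loop for the scenario above: watch time, cooldown,
# and the de-provision wait. The memory trace is made up, and the logic is a
# simplification of the behavior described in the example.

MIN_THRESHOLD, MAX_THRESHOLD = 40, 85   # memory usage limits from the example
WATCH_TIME = 2                          # minutes a breach must persist
COOLDOWN = 10                           # minutes to wait after a scale-out
DEPROVISION_WAIT = 10                   # minutes to drain connections before removal

def evaluate(memory_trace):
    """Walk a per-minute memory trace and report scale decisions."""
    breach_minutes = 0          # how long the current breach has lasted
    blocked_until = -1          # minute until which no new action may start
    for minute, memory in enumerate(memory_trace):
        if minute <= blocked_until:
            continue                                    # provisioning, cooldown, or drain in progress
        breached = memory > MAX_THRESHOLD or memory < MIN_THRESHOLD
        breach_minutes = breach_minutes + 1 if breached else 0
        if breach_minutes >= WATCH_TIME:
            breach_minutes = 0
            if memory > MAX_THRESHOLD:
                print(f"T{minute}: scale-out triggered")
                blocked_until = minute + COOLDOWN       # provisioning + cooldown (simplified)
            else:
                print(f"T{minute}: scale-in triggered")
                blocked_until = minute + DEPROVISION_WAIT  # drain, then de-provision; no cooldown

# Made-up trace: high memory early on, low memory later.
evaluate([50, 90, 91, 92, 93, 95, 88, 70, 60, 50] + [30] * 20)
```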