Scaling IO performance in Windows Azure
Windows Azure advertises three types of IO performance levels:
- Extra Small: Low
- Small: Moderate
- Medium and above: High
So, if I have an IO-bound application (rather than CPU- or memory-bound) and need at least 6 CPUs to process my workload - will I get better IO performance with 12-15 Extra Smalls, 6 Smalls, or 3 Mediums?
I'm sure this varies based on the application - is there an easy way to go about testing this? Are there any numbers that give a better picture of how much of an IO performance increase you get as you move to larger instance roles?
It seems like the IO performance of smaller roles could be equivalent to that of the larger ones, and they are just the ones that get throttled down first if the overall load becomes too great. Does that sound right?
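For the "easy way to test this" part, one rough approach is to run the same timed bulk write/read on each candidate instance size and compare the MB/s figures. The sketch below is a generic Python probe, not an Azure-specific tool; `FILE_PATH`, `CHUNK`, and `CHUNKS` are arbitrary values you would adjust, and for network-bound work you would time a transfer against blob storage instead of the local disk.

```python
# Rough sequential-throughput check: time a bulk write and read of a scratch
# file, then compare the MB/s numbers across instance sizes.
import os
import time

FILE_PATH = "io_probe.bin"   # hypothetical scratch file on the instance's local disk
CHUNK = 4 * 1024 * 1024      # 4 MiB per write
CHUNKS = 256                 # ~1 GiB total

def timed(fn):
    start = time.time()
    fn()
    return time.time() - start

def write_probe():
    data = os.urandom(CHUNK)
    with open(FILE_PATH, "wb") as f:
        for _ in range(CHUNKS):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the bytes to disk so the timing is honest

def read_probe():
    with open(FILE_PATH, "rb") as f:
        while f.read(CHUNK):
            pass

total_mb = CHUNK * CHUNKS / (1024 * 1024)
write_s = timed(write_probe)
read_s = timed(read_probe)
print(f"write: {total_mb / write_s:.1f} MB/s, read: {total_mb / read_s:.1f} MB/s")
os.remove(FILE_PATH)
```

The read figure can be inflated by the OS page cache, so treat it as an upper bound; the comparison across instance sizes is what matters, not the absolute numbers.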
Comments (3)
Windows Azure compute sizes offer approx. 100Mbps per core. Extra Small instances are much lower, at 5Mbps. See this blog post for more details. If you're IO-bound, the 6-Small setup is going to offer far greater bandwidth than 12 Extra-Smalls.
When you talk about processing your workload, are you working off a queue? If so, multiple worker roles, each being a Small instance, could then each work with a 100Mbps pipe. You'd have to do some benchmarking to determine if 3 Mediums gives you enough of a performance boost to justify the larger VM size, knowing that when the workload is down, your "idle" cost footprint per hour is now 2 cores (Medium, $0.24) vs 1 (Small, $0.12).
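To make the bandwidth comparison concrete, here is a back-of-the-envelope calculation using only the figures quoted in this answer (~100 Mbps per core, ~5 Mbps for an Extra Small). The assumption that a 2-core Medium gets roughly 200 Mbps is an extrapolation from the per-core figure, not an official number.

```python
# Aggregate bandwidth per configuration, based on the answer's quoted figures.
SIZES = {
    # name: (instances, cores per instance, Mbps per instance)
    "12 x Extra Small": (12, 1, 5),
    "6 x Small":        (6, 1, 100),
    "3 x Medium":       (3, 2, 200),  # assumes 100 Mbps/core scales to 2 cores
}

for name, (count, cores, mbps) in SIZES.items():
    print(f"{name:17s} -> {count * cores} cores, ~{count * mbps} Mbps aggregate")
```

By this arithmetic, 6 Smalls and 3 Mediums both land around 600 Mbps aggregate, while 12 Extra Smalls top out near 60 Mbps, which is why the Extra Small option looks poor for an IO-bound workload.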
As I understand it, the amount of IO allowed per core is constant and supposed to be dedicated, but I haven't been able to get formal confirmation of this. This is likely different for X-Small instances, which operate in a shared mode and are not dedicated like the other Windows Azure VM instances.
I'd imagine what you suspect is in fact true: even being IO-bound varies by application. I think you could accomplish your goal of timing by using Timers and writing the output to a file on storage that you could then retrieve. Do some math to figure out how many work units per hour you can process by cramming as many as possible through a Small and then a Medium instance. If your work unit size fluctuates drastically, you might have to do some averaging too. I would always prefer smaller instances if possible and just spin up more copies when you need more firepower.
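A minimal sketch of that timing approach, assuming a generic Python worker: `next_work_unit()` and `process_work_unit()` are placeholders for your own application code, and the rate is appended to a local log file (in the answer it would be written to a file on Azure storage and retrieved later).

```python
# Measure work units per hour on a given instance size by processing as many
# units as possible for a fixed window, then logging the rate.
import time

def next_work_unit():
    ...  # placeholder: pull the next unit of work (e.g. from a queue)

def process_work_unit(unit):
    ...  # placeholder: your IO-bound processing

def benchmark(duration_seconds=600, log_path="throughput.log"):
    start = time.time()
    done = 0
    while time.time() - start < duration_seconds:
        unit = next_work_unit()
        if unit is None:
            break
        process_work_unit(unit)
        done += 1
    elapsed = time.time() - start
    per_hour = done / elapsed * 3600 if elapsed else 0
    # Append the result so it can be collected after the run.
    with open(log_path, "a") as f:
        f.write(f"{done} units in {elapsed:.0f}s -> ~{per_hour:.0f} units/hour\n")
    return per_hour
```

Running the same benchmark window on a Small and then a Medium gives you the units-per-hour figures to plug into the cost math above; if unit sizes vary a lot, run it several times and average, as the answer suggests.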