What reasons do you have for scaling out vs. scaling up?
As a programmer, I make revolutionary findings every few years. I'm either ahead of the curve or behind it by about π in phase. One hard lesson I learned was that scaling OUT is not always better; quite often the biggest performance gains came when we regrouped and scaled up.
What reasons do you have for scaling out vs. up? Price, performance, vision, projected usage? If so, how did this work for you?
We once scaled out to several hundred nodes that would serialize and cache the necessary data out to each node and run maths processes on the records. Many, many billions of records needed to be (cross-)analyzed. It was the perfect business and technical case to employ scale-out. We kept optimizing until we processed about 24 hours of data in 26 hours of wall-clock time. Long story short, we leased a gigantic (for the time) IBM pSeries, put Oracle Enterprise on it, indexed our data, and ended up processing the same 24 hours of data in about 6 hours. A revolution for me.
So many enterprise systems are OLTP and the data are not sharded, yet the desire of many is to cluster or scale out. Is this a reaction to new techniques or to perceived performance?
Do applications in general today, or our programming models, lend themselves better to scale-out? Do we, or should we, always take this trend into account in the future?
Comments (4)
Because scaling up
And also to some extent, because that's what Google do.
Scaling out is best for embarrassingly parallel problems. It takes some work, but a number of web services fit that category (hence the current popularity). Otherwise you run into Amdahl's law, which means that to gain speed you have to scale up, not out. I suspect you ran into that problem. IO-bound operations also tend to do well with scaling out, largely because waiting for IO increases the fraction of the work that is parallelizable.
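A minimal sketch of that limit, assuming the standard form of Amdahl's law with a parallelizable fraction p of the work spread over n nodes; the serial remainder caps the speedup no matter how many boxes you add:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
# p = fraction of the work that parallelizes, n = number of nodes.

def amdahl_speedup(p: float, n: int) -> float:
    """Upper bound on speedup when only a fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    # Even with 90% of the work parallelizable, the serial 10% caps the
    # speedup at 10x, so piling on nodes quickly stops paying off.
    for n in (1, 4, 16, 64, 256, 1024):
        print(f"{n:5d} nodes -> {amdahl_speedup(0.90, n):5.2f}x")
```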
The blog post Scaling Up vs. Scaling Out: Hidden Costs by Jeff Atwood has some interesting points to consider, such as software licensing and power costs.
Not surprisingly, it all depends on your problem. If you can easily partition it into subproblems that don't communicate much, scaling out gives you speedups almost trivially. For instance, searching for a word in 1B web pages can be done by one machine searching 1B pages, or by 1M machines doing 1,000 pages each, without a significant loss in efficiency (so with a 1,000,000x speedup). This is called "embarrassingly parallel".
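A minimal sketch of that sort of partitioning, assuming a small in-memory corpus and Python's ProcessPoolExecutor standing in for a fleet of machines: each worker scans only its own shard, and only the tiny per-shard counts are merged at the end, so communication stays negligible and adding workers scales almost linearly.

```python
from concurrent.futures import ProcessPoolExecutor

def count_matches(pages: list[str], word: str) -> int:
    # Scan one shard independently; no communication with other shards.
    return sum(1 for page in pages if word in page)

def parallel_count(pages: list[str], word: str, workers: int) -> int:
    # Partition the corpus into `workers` shards and merge only the counts.
    shards = [pages[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_matches, shards, [word] * workers))

if __name__ == "__main__":
    corpus = ["the quick brown fox", "scaling out, not up", "fox or big iron"]
    print(parallel_count(corpus, "fox", workers=2))  # -> 2
```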
Other algorithms, however, require much more intensive communication between the subparts. Your case requiring cross-analysis is a perfect example of where communication can drown out the performance gains of adding more boxes. In those cases, you'll want to keep communication inside a (bigger) box, going over high-speed interconnects rather than something as 'common' as (10-)Gig-E.
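A toy cost model of that trade-off, assuming made-up constants rather than measurements: the compute term shrinks as nodes are added while the communication term grows, so total time bottoms out and then climbs again.

```python
# Toy cost model (illustrative constants, not measurements):
# total_time(n) = compute_work / n + comm_overhead * n
# The compute term shrinks as nodes are added, the communication term
# grows, so past some point the cluster gets slower, not faster.

def total_time(n: int, compute_work: float = 1000.0, comm_overhead: float = 1.0) -> float:
    return compute_work / n + comm_overhead * n

if __name__ == "__main__":
    best = min(range(1, 201), key=total_time)  # sweet spot near sqrt(1000) ~ 32 nodes
    for n in (1, 8, best, 64, 128, 200):
        print(f"{n:3d} nodes -> {total_time(n):7.1f} time units")
```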
Of course, this is a fairly theoretical point of view. Other factors, such as I/O, reliability, and ease of programming (one big shared-memory machine usually gives far fewer headaches than a cluster), can also have a big influence.
Finally, due to the (often extreme) cost benefits of scaling out using cheap commodity hardware, the cluster/grid approach has recently attracted much more (algorithmic) research. As a result, new ways of parallelizing have been developed that minimize communication and thus do much better on a cluster -- whereas common knowledge used to dictate that these types of algorithms could only run effectively on big iron machines...