How do you create a system with 1500 servers that delivers results instantly?

Published 2024-08-09 23:32:16

I want to create a system that delivers user interface response within 100ms, but which requires minutes of computation. Fortunately, I can divide it up into very small pieces, so that I could distribute this to a lot of servers, let's say 1500 servers. The query would be delivered to one of them, which then redistributes to 10-100 other servers, which then redistribute etc., and after doing the math, results propagate back again and are returned by a single server. In other words, something similar to Google Search.

The problem is, what technology should I use? Cloud computing sounds obvious, but the 1500 servers need to be prepared for their task by having task-specific data available. Can this be done using any of the existing cloud computing platforms? Or should I create 1500 different cloud computing applications and upload them all?

Edit: Dedicated physical servers do not make sense, because the average load will be very, very small. Therefore, it also does not make sense for us to run the servers ourselves - it needs to be some kind of shared servers at an external provider.

Edit2: I basically want to buy 30 CPU minutes in total, and I'm willing to spend up to $3000 on it, equivalent to $144,000 per CPU-day. The only criterion is that those 30 CPU minutes are spread across 1500 responsive servers.

Edit3: I expect the solution to be something like "Use Google Apps, create 1500 apps and deploy them" or "Contact XYZ and write an asp.net script which their service can deploy, and you pay them based on the amount of CPU time you use" or something like that.

Edit4: A low-end webservice provider, offering asp.net at $1/month, would actually solve the problem (!) - I could create 1500 accounts, and the latency is ok (I checked), and everything would be ok - except that I need the 1500 accounts to be on different servers, and I don't know of any provider that has enough servers and is able to distribute my accounts across different servers. I am fully aware that the latency will differ from server to server, and that some may be unreliable - but that can be solved in software by retrying on different servers.

Edit5: I just tried it and benchmarked a low-end webservice provider at $1/month. They can do the node calculations and deliver results to my laptop in 15ms, if preloaded. Preloading can be done by making a request shortly before the actual performance is needed. If a node does not respond within 15ms, that node's part of the task can be distributed to a number of other servers, of which one will most likely respond within 15ms. Unfortunately, they don't have 1500 servers, and that's why I'm asking here.
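
Below is a minimal sketch, in Python, of the scatter/gather-with-retry pattern the question and Edit5 describe: fan a query out to many nodes, give each a short deadline, and re-dispatch any subtask whose node misses it. The node pool, the 15 ms deadline, the five-candidate hedging and the dummy `query_node` are illustrative assumptions, not any provider's actual API.

```python
import asyncio
import random

NODE_DEADLINE = 0.015   # 15 ms per-node budget, as benchmarked in Edit5
TOTAL_NODES = 1500      # hypothetical pool of pre-warmed worker accounts

async def query_node(node_id: int, subtask: int) -> int:
    """Stand-in for the HTTP call to one worker; swap in a real client here."""
    await asyncio.sleep(random.uniform(0.005, 0.020))  # simulated network + compute time
    return subtask * subtask                            # dummy partial result

async def query_with_retry(subtask: int, candidates: list) -> int:
    """Ask one node; on a deadline miss, re-dispatch the subtask to another candidate."""
    for node_id in candidates[:-1]:
        try:
            return await asyncio.wait_for(query_node(node_id, subtask), NODE_DEADLINE)
        except asyncio.TimeoutError:
            continue  # slow or dead node - try the next one
    # last resort: wait for the final candidate without a deadline
    return await query_node(candidates[-1], subtask)

async def scatter_gather(subtasks: list) -> int:
    pool = range(TOTAL_NODES)
    jobs = [query_with_retry(t, random.sample(pool, 5)) for t in subtasks]
    partials = await asyncio.gather(*jobs)   # fan-in
    return sum(partials)                     # final aggregation is just a sum here

if __name__ == "__main__":
    print(asyncio.run(scatter_gather(list(range(200)))))
```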

Comments (12)

童话里做英雄 2024-08-16 23:32:16

[in advance, apologies to the group for using part of the response space for meta-like matters]

From the OP, Lars D:
I do not consider [this] answer to be an answer to the question, because it does not bring me closer to a solution. I know what cloud computing is, and I know that the algorithm can be perfectly split into more than 300,000 servers if needed, although the extra costs wouldn't give much extra performance because of network latency.

Lars,
I sincerely apologize for reading and responding to your question at a naive and generic level. I hope you can see how both the lack of specificity in the question itself, particularly in its original form, and also the somewhat unusual nature of the problem (1) would prompt me to respond to the question in like fashion. This, and the fact that such questions on SO typically emanate from hypotheticals by folks who have put but little thought and research into the process, are my excuses for believing that I, a non-practitioner [of massively distributed systems], could help your quest. The many similar responses (some of which had the benefit of the extra insight you provided) and also the many remarks and additional questions addressed to you show that I was not alone in this mindset.

(1) Unusual problem: an [apparently] mostly computational process (no mention of distributed/replicated storage structures), very highly parallelizable (1,500 servers), split into fifty-millisecond-sized tasks which collectively provide a sub-second response (? for human consumption?). And yet, a process that would only be required a few times [daily..?].

Enough looking back!
In practical terms, you may consider some of the following to help improve this SO question (or move it to other/alternate questions), and hence foster help from experts in the domain.

  • Re-post as a distinct (more specific) question. In fact, probably several questions: e.g. on the [likely] poor latency and/or overhead of MapReduce processes, on current prices (for specific TOS and volume details), on the rack-awareness of distributed processes at various vendors, etc.
  • Change the title
  • Add details about the process you have at hand (see many questions in the notes of both the question and of many of the responses)
  • In some of the questions, add tags specific to a given vendor or technique (EC2, Azure...), as this may bring in commentary from agents at these companies - possibly not quite unbiased, but helpful all the same
  • Show that you understand that your quest is somewhat of a tall order
  • Explicitly state that you wish for responses from effective practitioners of the underlying technologies (maybe also include folks that are "getting their feet wet" with these technologies as well, since, with the exception of the physics/high-energy folks and such - who BTW traditionally worked with clusters rather than clouds - many of the technologies and practices are relatively new)

Also, I'll be pleased to take the hint from you (with the implicit non-veto from other folks on this page), to delete my response, if you find that doing so will help foster better responses.

-- original response--

Warning: Not all processes or mathematical calculations can readily be split in individual pieces that can then be run in parallel...

Maybe you can check Wikipedia's entry on Cloud Computing, understanding that cloud computing is, however, not the only architecture which allows parallel computing.

If your process/calculation can effectively be chunked into parallelizable pieces, maybe you can look into Hadoop, or other implementations of MapReduce, for a general understanding of these parallel processes. Also (and I believe utilizing the same or similar algorithms), there exist commercially available frameworks such as EC2 from Amazon.

Beware, however, that the above systems are not particularly well suited to very quick response times. They fare better with hour-long (and then some) data/number-crunching and similar jobs than with minute-long calculations such as the one you wish to parallelize so that it provides results in 1/10 of a second.

The above frameworks are generic, in the sense that they could run processes of most any nature (again, ones that can at least in part be chunked), but there also exist various offerings for specific applications such as searching or DNA matching etc. The search applications in particular can have very short response times (cf Google for example), and BTW this is in part tied to the fact that such jobs can very easily and quickly be chunked for parallel processing.

九厘米的零° 2024-08-16 23:32:16

Sorry, but you are expecting too much.

The problem is that you are expecting to pay for processing power only. Yet your primary constraint is latency, and you expect that to come for free. That doesn't work out. You need to figure out what your latency budgets are.

The mere aggregation of data from multiple compute servers will take several milliseconds per level. There will be a Gaussian distribution here, so with 1500 servers the slowest server will respond after 3σ. Since there's going to be a need for a hierarchy, there will be a second level of, say, 40 servers, where again you'll be waiting for the slowest server.

Internet round trips also add up quickly; those too will take 20 to 30 ms out of your latency budget.

Another consideration is that these hypothetical servers will spend much of their time idle. That means they're powered on, drawing electricity yet not generating revenue. Any party with that many idle servers would turn them off, or at the very least put them in sleep mode, just to conserve electricity.
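
To make the "slowest of N" effect concrete, here is a small simulation sketch; the 10 ms mean and 3 ms standard deviation are made-up numbers, not measurements of any real provider.

```python
import random

def fan_out_latency(n_servers, mean_ms=10.0, sd_ms=3.0):
    """One fan-out level is as slow as the slowest of n roughly Gaussian responses."""
    return max(random.gauss(mean_ms, sd_ms) for _ in range(n_servers))

trials = sorted(fan_out_latency(1500) for _ in range(1000))
print("median slowest-of-1500 latency: %.1f ms" % trials[len(trials) // 2])
# With a 10 ms mean and 3 ms standard deviation per server, the slowest of 1500
# typically comes in past mean + 3 sigma, i.e. around 20 ms for a single level -
# before adding the second fan-out level and the Internet round trip.
```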

玩世 2024-08-16 23:32:16

MapReduce is not the solution! MapReduce is used at Google, Yahoo and Microsoft for creating indexes out of the huge data (the whole Web!) they have on their disks. This task is enormous, and MapReduce was built to make it happen in hours instead of years, but starting a MapReduce master controller already takes about 2 seconds, so for your 100ms this is not an option.

Now, from Hadoop you may get advantages out of the distributed file system. It may allow you to distribute the tasks close to where the data physically is, but that's it. BTW: setting up and managing a Hadoop Distributed File System means controlling your 1500 servers!

Frankly, within your budget I don't see any "cloud" service that will allow you to rent 1500 servers. The only viable solution is renting time on a grid computing offering like the ones Sun and IBM provide, but from what I know they want you to commit to hours of CPU time.

BTW: On Amazon EC2 you can have a new server up in a couple of minutes, but you need to keep it for an hour minimum!

Hope you'll find a solution!

夜血缘 2024-08-16 23:32:16

I don't get why you would want to do that, only because "Our user interfaces generally aim to do all actions in less than 100ms, and that criteria should also apply to this".

First, 'aim to' != 'have to'; it's a guideline, so why would you introduce this massive process just because of it? Consider 1500 ms x 100 = 150 secs = 2.5 mins. Reducing the 2.5 minutes to a few seconds is a much healthier goal. There is a place for a 'we are processing your request' message along with an animation.

So my answer to this is: post a modified version of the question with reasonable goals - a few seconds, 30-50 servers. I don't have the answer for that one, but the question as posted here feels wrong. It could even be 6-8 multi-processor servers.

旧故 2024-08-16 23:32:16

Google does it by having a gigantic farm of small Linux servers, networked together. They use a flavor of Linux that they have custom modified for their search algorithms. Costs are software development and cheap PC's.

↙温凉少女 2024-08-16 23:32:16

It would seem that you are indeed expecting at least 1000-fold speedup from distributing your job to a number of computers. That may be ok. Your latency requirement seems tricky, though.

Have you considered the latencies inherent in distributing the job? Essentially the computers would have to be fairly close together in order to not run into speed-of-light issues. Also, the data center housing the machines would again have to be fairly close to your client so that you can get your request to them and back in less than 100 ms. On the same continent, at least.

Also note that any extra latency requires you to have many more nodes in the system. Losing 50% of available computing time to latency or anything else that doesn't parallelize requires you to double the computing capacity of the parallel portions just to keep up.

I doubt a cloud computing system would be the best fit for a problem like this. My impression at least is that the proponents of cloud computing would prefer to not even tell you where your machines are. Certainly I haven't seen any latency terms in the SLAs that are available.
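
As a rough sanity check on the geography argument, one can bound how far the request can travel within the budget; the 200 km/ms figure for fibre and the 70 ms non-network share are approximations, not measurements.

```python
# Rough bound on how far away the servers can be and still fit a 100 ms round trip.
FIBER_KM_PER_MS = 200        # light in optical fibre covers roughly 200 km per ms
BUDGET_MS = 100              # the UI response target
NON_NETWORK_MS = 70          # assumed share spent on fan-out, compute and aggregation

wire_ms = BUDGET_MS - NON_NETWORK_MS
print(f"max one-way distance: ~{wire_ms / 2 * FIBER_KM_PER_MS:.0f} km")
# ~3000 km one way: the data center must be on the same continent as the client.
```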

听,心雨的声音 2024-08-16 23:32:16

You have conflicting requirements. Your requirement for 100ms latency is directly at odds with your desire to only run your program sporadically.

One of the characteristics of the Google-search type approach you mentioned in your question is that the latency of the cluster is dependent on the slowest node. So you could have 1499 machines respond in under 100ms, but if one machine took longer, say 1s - whether due to a retry, or because it needed to page your application in, or bad connectivity - your whole cluster would take 1s to produce an answer. It's inescapable with this approach.

The only way to achieve the kinds of latencies you're seeking would be to have all of the machines in your cluster keep your program loaded in RAM - along with all the data it needs - all of the time. Having to load your program from disk, or even having to page it in from disk, is going to take well over 100ms. As soon as one of your servers has to hit the disk, it is game over for your 100ms latency requirement.

In a shared server environment, which is what we're talking about here given your cost constraints, it is a near certainty that at least one of your 1500 servers is going to need to hit the disk in order to activate your app.

So you are either going to have to pay enough to convince someone to keep your program active and in memory at all times, or you're going to have to loosen your latency requirements.
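
The "slowest node wins" point can be put into numbers with a one-line calculation; the per-node miss probabilities below are assumed values for illustration only.

```python
# Probability that a whole 1500-node fan-out meets its deadline, assuming each node
# independently misses the deadline with probability p_miss (illustrative values only).
def cluster_hit_rate(n_nodes: int, p_miss: float) -> float:
    return (1.0 - p_miss) ** n_nodes

for p_miss in (0.0001, 0.001, 0.01):
    print(f"p_miss = {p_miss:.2%} -> cluster makes the deadline "
          f"{cluster_hit_rate(1500, p_miss):.1%} of the time")
# 0.01% -> ~86%, 0.1% -> ~22%, 1% -> ~0%: even a rare page-in from disk on a single
# node ruins the deadline unless the software retries or hedges, as the OP suggests.
```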

留一抹残留的笑 2024-08-16 23:32:16

Two trains of thought:

a) If those constraints are really, absolutely, truly founded in common sense, and doable in the way you propose in the nth edit, it seems the pre-supplied data is not huge. So how about trading storage and precomputation for time? How big would the table(s) be? Terabytes are cheap!

b) This sounds a lot like an employer / customer request that is not well founded in common sense. (from my experience)

Let's assume 15 minutes of computation time on one core. I guess that's what you're saying.
For a reasonable amount of money, you can buy a system with 16 physical cores (32 with hyperthreading) and 48 GB RAM.

This should bring us into the 30-second range.
Add a dozen Terabytes of storage, and some precomputation.
Maybe a 10x increase is reachable there.
3 secs.
Are 3 secs too slow? If yes, why?
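
For completeness, a hedged sketch of the single-box alternative suggested here: split the job across local cores with multiprocessing. `compute_chunk` is a placeholder for the real math, and the 32-way split assumes the hyperthreaded core count mentioned above.

```python
from multiprocessing import Pool

def compute_chunk(chunk_id: int) -> float:
    """Placeholder for one independent slice of the ~15 CPU-minute computation."""
    return sum(i * i for i in range(1_000_000)) * (chunk_id + 1) * 1e-15

if __name__ == "__main__":
    n_chunks = 32                                  # one chunk per hardware thread
    with Pool(processes=n_chunks) as pool:
        partials = pool.map(compute_chunk, range(n_chunks))
    print("combined result:", sum(partials))
# If the work splits evenly, 15 CPU-minutes / 32 threads is roughly 28 s of wall-clock
# time, which is where the "30 second range" estimate above comes from.
```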

梦初启 2024-08-16 23:32:16

Sounds like you need to utilise an algorithm like MapReduce: Simplified Data Processing on Large Clusters

Wiki.

紫轩蝶泪 2024-08-16 23:32:16

Check out Parallel computing and related articles in this WikiPedia-article - "Concurrent programming languages, libraries, APIs, and parallel programming models have been created for programming parallel computers." ... http://en.wikipedia.org/wiki/Parallel_computing

等风来 2024-08-16 23:32:16

Although Cloud Computing is the cool new kid in town, your scenario sounds more like you need a cluster, i.e. how to use parallelism to solve a problem in a shorter time.
My solution would be:

  1. Understand that having a problem that can be solved in n time steps on one CPU does not guarantee that it can be solved in n/m steps on m CPUs. Actually, n/m is the theoretical lower limit. Parallelism usually forces you to communicate more, so you'll hardly ever achieve this limit (a worked example follows this list).
  2. Parallelize your sequential algorithm, make sure it is still correct and you don't get any race conditions
  3. Find a provider, see what he can offer you in terms of programming languages / APIs (no experience with that)
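
The n/m point in step 1 can be quantified with Amdahl's law; the serial fractions below are arbitrary example values, a sketch rather than a statement about the OP's actual algorithm.

```python
# Amdahl's law: the speedup on m CPUs when a fraction s of the work stays serial
# (communication, aggregation of partial results, the final merge, ...).
def speedup(m: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / m)

for s in (0.0, 0.001, 0.01):
    print(f"serial fraction {s:.1%}: speedup on 1500 CPUs = {speedup(1500, s):.0f}x")
# 0.0%  -> 1500x  (the theoretical n/m limit)
# 0.1%  -> ~600x
# 1.0%  -> ~94x
```
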
孤单情人 2024-08-16 23:32:16

What you're asking for doesn't exist, for the simple reason that doing this would require having 1500 instances of your application (likely with substantial in-memory data) idle on 1500 machines - consuming resources on all of them. None of the existing cloud computing offerings bill on such a basis. Platforms like App Engine and Azure don't give you direct control over how your application is distributed, while platforms like Amazon's EC2 charge by the instance-hour, at a rate that would cost you over $2000 a day.
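
For a rough sense of scale, assuming a hypothetical rate of $0.06-0.10 per instance-hour (roughly the small-instance pricing of that era), the daily bill works out as follows:

```python
# Back-of-the-envelope cost of keeping 1500 EC2-style instances warm around the clock,
# at an assumed $0.06-0.10 per instance-hour (not a quoted price).
INSTANCES = 1500
for rate in (0.06, 0.10):
    print(f"${rate:.2f}/instance-hour -> ${INSTANCES * rate * 24:,.0f} per day")
# $2,160 to $3,600 per day - the entire $3,000 budget gone in about a day of idling.
```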
