Architecture design and communication between roles in a file-bound application on Azure

Published 2024-10-25 02:02:12

I am considering moving my web application to Windows Azure for scalability purposes but I am wondering how best to partition my application.

I expect my scenario is typical and is as follows: my application allows users to upload raw data, this is processed and a report is generated. The user can then review their raw data and view their report.

So far I’m thinking a web role and a worker role. However, I understand that a VHD can be mounted to a single instance with read/write access so really both my web role and worker role need access to a common file store. So perhaps I need a web role and two separate worker roles, one worker role for the processing and the other for reading and writing to a file store. Is this a good approach?

I am having difficulty picturing the plumbing between the roles, and I am concerned about the overhead caused by the communication across this partitioning, so I would welcome any input here.

Comments (2)

魂归处 2024-11-01 02:02:12

Adding to Stuart's excellent answer: Blobs can store anything, with sizes up to 200GB. If you needed / wanted to persist an entire directory structure that's durable, you can mount a VHD with just a few lines of code. It's an NTFS volume that your app can interact with, just like any other drive.

In your case, a vhd doesn't fit well, because your web app would have to mount a vhd and be the sole writer to it. And if you have more than one web role instance (which you would if you wanted the SLA and wanted to scale), you could only have one writer. In this case, individual blobs fit MUCH better.

As Stuart stated, this is a very normal and common pattern. And again, with only a few lines of code, you can call the storage SDK to copy a file from blob storage to your instance's local disk. Then you can process the file using regular file IO operations. When your report is complete, another few lines of code lets you copy your report into a new blob (most likely in a well-known container that the web role knows to look in).

You can take this a step further and insert rows into an Azure table that are partitioned by customer, with row key identifying the individual uploaded file, and a 3rd field representing the URI to the completed report. This makes it trivial for the web app to display a customer's completed reports.
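The download-process-upload flow plus the per-customer table row might be sketched as follows. This is a minimal sketch using the modern Python `azure-storage-blob` SDK rather than the classic .NET storage client the answer assumes; the connection string, container names ("raw-uploads", "reports"), and the placeholder "processing" step are all illustrative, not part of the original answer:

```python
def make_report_entity(customer_id, file_id, report_uri):
    """Table row as described above: partitioned by customer, row key
    identifying the uploaded file, and a third field with the report URI."""
    return {
        "PartitionKey": customer_id,
        "RowKey": file_id,
        "ReportUri": report_uri,
    }

def process_upload(conn_str, customer_id, file_id):
    # Lazy import so the module loads even without the SDK installed.
    from azure.storage.blob import BlobServiceClient

    svc = BlobServiceClient.from_connection_string(conn_str)

    # 1. Copy the raw file from blob storage to the instance's local disk.
    raw = svc.get_blob_client("raw-uploads", file_id)
    local_path = f"/tmp/{file_id}"
    with open(local_path, "wb") as f:
        f.write(raw.download_blob().readall())

    # 2. Process it with regular file IO (real report generation elided).
    with open(local_path, "rb") as f:
        report_bytes = f.read().upper()  # placeholder "processing"

    # 3. Copy the report into a well-known container the web role reads.
    report = svc.get_blob_client("reports", file_id + ".report")
    report.upload_blob(report_bytes, overwrite=True)

    # 4. Record the row the web app uses to list completed reports.
    return make_report_entity(customer_id, file_id, report.url)
```

The entity-building step is pure, so the web tier's listing query reduces to a partition scan on the customer's `PartitionKey`.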

看海 2024-11-01 02:02:12

Blob storage is the easiest place to store files which lots of roles and role instances can then access - with none of them requiring special access.

The normal pattern suggested seems to be:

  • allow the raw files to be uploaded using instances of a web role
  • these web role instances return the HTTP call without doing processing - they store the raw files in blob storage, and add a "do this work message" to a queue.
  • the worker role instances pick up the message from the queue, read the raw blob, do the work, store the report result, then delete the message from the queue
  • all the web roles can then access the report when the user asks for it

That's the "normal pattern suggested" and you can see it implemented in things like the photo upload/thumbnail generation apps from the very first Azure PDC - it's also used in this training course - follow through to the second page.

Of course, in practice you may need to build on this pattern depending on the size and type of data you are processing.
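The queue-based plumbing in the bullets above can be illustrated end to end with in-memory stand-ins - a plain dict for blob storage and Python's `queue.Queue` for the Azure queue. This is only a sketch of the pattern's shape; a real deployment would use the storage SDK's blob and queue clients, and the message would carry the blob name rather than live in process memory:

```python
import queue

# In-memory stand-ins for Azure blob storage and an Azure queue,
# used only to illustrate the plumbing between the roles.
blob_store = {}             # blob name -> bytes
work_queue = queue.Queue()  # "do this work" messages

def web_role_upload(file_id, raw_bytes):
    """Web role instance: store the raw file in blob storage, add a
    work message to the queue, and return without doing the processing."""
    blob_store[f"raw/{file_id}"] = raw_bytes
    work_queue.put(file_id)

def worker_role_step():
    """Worker role instance: pick up a message, read the raw blob,
    do the work, store the report, then remove the message."""
    file_id = work_queue.get()
    raw = blob_store[f"raw/{file_id}"]
    report = raw.upper()  # placeholder for the real processing
    blob_store[f"reports/{file_id}"] = report
    work_queue.task_done()  # stands in for deleting the queue message

def web_role_fetch_report(file_id):
    """Any web role instance can serve the report once it exists."""
    return blob_store.get(f"reports/{file_id}")

web_role_upload("data1", b"raw measurements")
worker_role_step()
report = web_role_fetch_report("data1")
```

Because the roles only share the queue and the blob store, no role instance needs exclusive access to anything, which is exactly why this avoids the single-writer VHD problem from the question.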
