WCF Pub/Sub with a Subscriber Cache

Posted on 2024-07-10 23:25:18


Problem: how to provide a distributed, scalable and disaster-resistant pub/sub service with WCF.

Details:

Note that this approach is being considered in addition to messaging/middleware solutions such as Tibco EMS.

I've been looking into WCF, particularly how it may be used to offer pub/sub. On this subject, this article is very good: WCF pub-sub.

In the article the author attempts to tackle the problem of having multiple publishers (as one would have with a service layer scaled across several boxes). The problem is that if Client A registers with Publisher A but Publisher B wishes to publish an event, then Publisher B won't know about Client A; i.e. no one told Publisher B that Client A wanted to be notified about events. The author suggests a pub/sub service as a solution: the pub/sub service would centrally store subscriptions. However, if I wanted to make the pub/sub service disaster-resistant by having a secondary/dual pub/sub service, then I'd have the same original problem.
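For context, the single-publisher version of this pattern boils down to something like the duplex contract below, where one publisher keeps its own in-memory list of callback channels. This is only a sketch; the contract and type names (IEventCallback, IEventService, and so on) are illustrative, not taken from the article.

```csharp
using System.Collections.Generic;
using System.ServiceModel;

// Callback contract implemented by each subscribing client.
public interface IEventCallback
{
    [OperationContract(IsOneWay = true)]
    void OnEvent(string payload);
}

// Publisher-side contract; requires a duplex binding such as netTcpBinding.
[ServiceContract(CallbackContract = typeof(IEventCallback))]
public interface IEventService
{
    [OperationContract]
    void Subscribe();

    [OperationContract]
    void Unsubscribe();

    [OperationContract(IsOneWay = true)]
    void Publish(string payload);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class EventService : IEventService
{
    // The weak point: this list exists only inside this one publisher process.
    private readonly List<IEventCallback> subscribers = new List<IEventCallback>();

    public void Subscribe()
    {
        var callback = OperationContext.Current.GetCallbackChannel<IEventCallback>();
        lock (subscribers)
        {
            if (!subscribers.Contains(callback))
                subscribers.Add(callback);
        }
    }

    public void Unsubscribe()
    {
        var callback = OperationContext.Current.GetCallbackChannel<IEventCallback>();
        lock (subscribers)
        {
            subscribers.Remove(callback);
        }
    }

    public void Publish(string payload)
    {
        lock (subscribers)
        {
            foreach (var subscriber in subscribers)
                subscriber.OnEvent(payload); // pushed over each caller's duplex channel
        }
    }
}
```

The `subscribers` list is exactly the problem described above: it lives inside one publisher's process, which is why Publisher B never hears about Client A.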

So, I think there are a couple of solutions to the problem (sketched in code after the list):

  1. Store subscriber details in a distributed cache (see questions: q1 and q2).
  2. Store subscriber details in a database/central file system.
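One practical consequence of either option is that a live WCF callback channel can't be serialised into a cache or a database; what can be shared between publishers is the address of a notification endpoint that each subscriber hosts itself. A rough sketch of that shape, with every name (INotification, ISubscriptionStore, Publisher) invented for illustration and the storage backend left abstract so it could be either the cache from option 1 or the database from option 2:

```csharp
using System.Collections.Generic;
using System.ServiceModel;

// Hosted BY each subscriber, so any publisher instance can reach it.
[ServiceContract]
public interface INotification
{
    [OperationContract(IsOneWay = true)]
    void Notify(string payload);
}

// Abstraction over the shared state: a distributed cache (option 1)
// or a database / central file system (option 2).
public interface ISubscriptionStore
{
    void Add(string topic, string subscriberAddress);
    void Remove(string topic, string subscriberAddress);
    IEnumerable<string> GetSubscribers(string topic);
}

public class Publisher
{
    private readonly ISubscriptionStore store;

    public Publisher(ISubscriptionStore store)
    {
        this.store = store;
    }

    // Any publisher box can run this, because the subscriber list comes
    // from the shared store rather than from local memory.
    public void Publish(string topic, string payload)
    {
        foreach (string address in store.GetSubscribers(topic))
        {
            var factory = new ChannelFactory<INotification>(
                new NetTcpBinding(), new EndpointAddress(address));
            INotification channel = factory.CreateChannel();
            try
            {
                channel.Notify(payload);
                ((IClientChannel)channel).Close();
            }
            catch
            {
                // Sketch only: real code needs a retry / dead-subscriber policy.
                ((IClientChannel)channel).Abort();
            }
        }
    }
}
```

Whichever backend sits behind ISubscriptionStore becomes the thing that has to be made disaster-resistant, which is really the same question pushed down a layer.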

Can anyone think of any other solutions (i.e. is there some fantastic magical feature of WCF that I've missed)?
Any comments are appreciated.


Comments (2)

So尛奶瓶 2024-07-17 23:25:18


I had the same problem and I did a lot of research on the issue. The problem is actually simple: you want to keep some centralized state, but in a distributed way. I found that the best way to achieve this is by using a distributed cache. Look at Velocity, for example. There is no native WCF solution that I know of that can solve the state-management issue. I have even looked into durable services, where state management is handled by WCF, but they are not suitable for a pub/sub service, because the state needs to be centralized for all client connections. Storing data in a database is also an option, but the cost is the need for a database, and even with a database you can have a single point of failure if the database is not clustered across multiple machines.

In the end, I figured it is actually expensive to implement something with zero points of failure, and if you do decide to go there then take a look at Azure. The future of storage is in the cloud; Azure services will be fully scalable and distributed, but we are not there yet.
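If you go the Velocity route (it later shipped as AppFabric Caching), the per-topic subscriber list can live in the cache rather than in any one publisher. A minimal sketch, assuming the AppFabric caching client assemblies and a pre-configured cache named "pubsub" (both assumptions, not something stated in this answer):

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching; // Velocity / AppFabric caching client

// Keeps the per-topic subscriber list in the distributed cache instead of
// inside any single publisher process.
public class CacheSubscriptionStore
{
    private readonly DataCache cache;

    public CacheSubscriptionStore()
    {
        // Cache name is illustrative and must exist in the cluster configuration.
        cache = new DataCacheFactory().GetCache("pubsub");
    }

    public void Add(string topic, string subscriberAddress)
    {
        var list = (List<string>)cache.Get(topic) ?? new List<string>();
        if (!list.Contains(subscriberAddress))
            list.Add(subscriberAddress);

        // Sketch only: a real store needs optimistic concurrency so that two
        // publishers updating the same topic don't overwrite each other.
        cache.Put(topic, list);
    }

    public IEnumerable<string> GetSubscribers(string topic)
    {
        return (List<string>)cache.Get(topic) ?? new List<string>();
    }
}
```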

瘫痪情歌 2024-07-17 23:25:18


I think WCF is just not there yet. What you need is a broker that handles all these details for you so that you can just implement your business logic. There are a few very good ones out there, like ActiveMQ. If you need orchestration then you'll probably want to use a bus, which can also sit on top of a broker. I think WCF is wonderful, but trying to make it into something that it is not is not a good idea.
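For comparison, this is roughly what the broker route looks like from .NET. The sketch assumes the Apache.NMS / Apache.NMS.ActiveMQ client libraries and a broker listening on the default port; the topic name is made up.

```csharp
using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class BrokerPubSubSketch
{
    static void Main()
    {
        // The broker, not our code, tracks who is subscribed to the topic.
        IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            connection.Start();
            ITopic topic = session.GetTopic("orders.events"); // illustrative topic name

            // Subscriber side: the broker pushes messages to this listener.
            IMessageConsumer consumer = session.CreateConsumer(topic);
            consumer.Listener += message =>
            {
                var text = message as ITextMessage;
                if (text != null)
                    Console.WriteLine("received: " + text.Text);
            };

            // Publisher side: fire and forget, no knowledge of subscribers needed.
            using (IMessageProducer producer = session.CreateProducer(topic))
            {
                producer.Send(session.CreateTextMessage("order 42 shipped"));
            }

            Console.ReadLine(); // keep the process alive long enough to receive the message
        }
    }
}
```

The point of the answer stands either way: the broker owns the subscriber list and its durability, so none of the publishers have to.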
