C#/SQL Database Listener

Posted on 2024-08-19 14:26:48


I have a requirement to monitor database rows continuously to check for changes (updates). If there are changes or updates from other sources, an event should be fired in my application (I am using WCF). Is there any way to listen to the database rows continuously for changes?

I may have a number of events monitoring different rows in the same table. Is there any problem in terms of performance? I am using a C# web service to monitor the SQL Server back end.

Comments (6)

我不在是我 2024-08-26 14:26:48

You could use an AFTER UPDATE trigger on the respective tables to add an item to a SQL Server Service Broker queue. Then have the queued notifications sent to your web service.

Another poster mentioned SqlDependency, which I also thought of mentioning, but the MSDN documentation is a little strange in that it provides a Windows client example yet also offers this advice:

SqlDependency was designed to be used in ASP.NET or middle-tier services where there is a relatively small number of servers having dependencies active against the database. It was not designed for use in client applications, where hundreds or thousands of client computers would have SqlDependency objects set up for a single database server.

Ref.
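
For reference, a minimal sketch of how SqlDependency could be wired up from a middle-tier service. The connection string and the table/column names (dbo.Orders, OrderId, Status) are placeholders, and query notifications also require Service Broker to be enabled on the target database:

```csharp
using System;
using System.Data.SqlClient;

class ChangeListener
{
    // Placeholder connection string; the database must have Service Broker enabled.
    const string ConnStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=true";

    public static void Start()
    {
        SqlDependency.Start(ConnStr);   // one listener per AppDomain/connection string
        Subscribe();
    }

    static void Subscribe()
    {
        using (var conn = new SqlConnection(ConnStr))
        // Notification queries need two-part table names and an explicit column list (no SELECT *).
        using (var cmd = new SqlCommand("SELECT OrderId, Status FROM dbo.Orders", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += OnDependencyChange;

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* capture the current snapshot here if you need it */ }
            }
        }
    }

    static void OnDependencyChange(object sender, SqlNotificationEventArgs e)
    {
        // A notification fires only once, so re-subscribe before acting on it,
        // then raise the event to the WCF/application layer.
        Subscribe();
        Console.WriteLine($"Change detected: {e.Info}");
    }

    public static void Stop() => SqlDependency.Stop(ConnStr);
}
```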

却一份温柔 2024-08-26 14:26:48

I had a very similar requirement some time ago, and I solved it using a CLR SP to push the data into a message queue.

To ease deployment, I created a CLR SP with a tiny function called SendMessage that simply pushed a message into a message queue, and tied it to my tables using an AFTER INSERT trigger (a normal trigger, not a CLR trigger).

Performance was my main concern in this case, but I stress tested it and it greatly exceeded my expectations. Compared to SQL Server Service Broker, it is a very easy-to-deploy solution, and the code in the CLR SP is trivial as well.
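
A rough sketch of what such a CLR stored procedure could look like. The class and procedure names are illustrative, and since System.Messaging is not on the approved SQL CLR library list, the assembly would have to be cataloged with UNSAFE permission:

```csharp
using System.Data.SqlTypes;
using System.Messaging;
using Microsoft.SqlServer.Server;

public static class QueueNotifier
{
    // Exposed to T-SQL via CREATE PROCEDURE dbo.SendMessage ... EXTERNAL NAME;
    // a plain AFTER INSERT trigger can then EXEC dbo.SendMessage with the changed key.
    [SqlProcedure]
    public static void SendMessage(SqlString queuePath, SqlString body)
    {
        // e.g. queuePath = @".\private$\row-changes"
        using (var queue = new MessageQueue(queuePath.Value))
        {
            queue.Send(body.Value);
        }
    }
}
```

The trigger itself stays ordinary T-SQL; all of the queue plumbing lives in this one procedure, which is what keeps the deployment simple.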

半暖夏伤 2024-08-26 14:26:48

Monitoring "continuously" could mean every few hours, minutes, seconds or even milliseconds. This solution might not work for millisecond updates: but if you only have to "monitor" a table a few times a minute you could simply have an external process check a table for updates. (If there is a DateTime column present.) You could then process the changed or newly added rows and perform whatever notification you need to. So you wouldn't be listening for changes, you'd be checking for them. One benefit of doing the checking in this manner would be that you wouldn't risk as much of a performance hit if a lot of rows were updated during a given quantum of time since you'd bulk them together (as opposed to responding to each and every change individually.)

听,心雨的声音 2024-08-26 14:26:48

I pondered the idea of a CLR function or something of the sort that calls the service after successfully inserting/updating/deleting data from the tables. Is that even good in this situation?

Probably it's not a good idea, but I guess it's still better than getting into table trigger hell.

I assume your problem is you want to do something after every data modification, let's say, recalculate some value or whatever. Letting the database be responsible for this is not a good idea because it can have severe impacts on performance.

You mentioned you want to detect inserts, updates and deletes on different tables. Doing it the way you are leaning towards would require you to set up three triggers/CLR functions per table and have them post an event to your WCF service (is that even supported in the subset of .NET available inside SQL Server?). The WCF service then takes the appropriate action based on the events received.

A better solution for the problem would be moving the responsibility for detecting data modification from your database to your application. This can actually be implemented very easily and efficiently.

Each table has a primary key (int, GUID or whatever) and a timestamp column indicating when the entry was last updated. This is a setup you'll see very often in optimistic concurrency scenarios, so it may not even be necessary to update your schema definitions. If you do need to add this column and can't have the application maintain the timestamp, you just need to write a single UPDATE trigger per table that sets the timestamp after each update.

To detect modifications, your WCF service/monitoring application builds up a local dictionary (preferably a hashtable) of primary key/timestamp pairs at a given time interval. Using a covering index in the database, this operation should be really fast. The next step is to compare the two dictionaries and voilà, there you go.
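
A sketch of that snapshot comparison, assuming an int primary key and a rowversion column; dbo.Orders, OrderId and RowVer are placeholder names:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

static class SnapshotComparer
{
    const string ConnStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=true";

    // Key/version snapshot; a rowversion cast to BIGINT compares cleanly.
    public static Dictionary<int, long> TakeSnapshot()
    {
        var snapshot = new Dictionary<int, long>();
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT OrderId, CAST(RowVer AS BIGINT) FROM dbo.Orders", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    snapshot[reader.GetInt32(0)] = reader.GetInt64(1);
        }
        return snapshot;
    }

    public static void Compare(Dictionary<int, long> previous, Dictionary<int, long> current)
    {
        foreach (var kv in current)
        {
            if (!previous.TryGetValue(kv.Key, out var oldVersion))
                Console.WriteLine($"Inserted: {kv.Key}");
            else if (oldVersion != kv.Value)
                Console.WriteLine($"Updated:  {kv.Key}");
        }
        foreach (var key in previous.Keys.Except(current.Keys))
            Console.WriteLine($"Deleted:  {key}");
    }
}
```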

There are some caveats to this approach, though. One is the total number of records per table, another is the update frequency (if it gets too low it's ineffective), and yet another is whether you need access to the data as it was before the modification/insertion.

Hope this helps.

霞映澄塘 2024-08-26 14:26:48

Why don't you use SQL Server Notification Services? I think that's exactly what you are looking for. Go through the documentation for Notification Services and see if it fits your requirement.

败给现实 2024-08-26 14:26:48

I think there are some great ideas here; from a scalability perspective I'd say that externalizing the check (e.g. Paul Sasik's answer) is probably the best one so far (+1 to him).

If, for some reason, you don't want to externalize the check, then another option would be to use the HttpCache to store a watcher and a callback.

In short, when you put the record that you want to watch into the DB, you also add it to the cache (using the .Add method), set a SqlCacheDependency on it, and register a callback to whatever logic you want to run when the dependency is invoked and the item is ejected from the cache.
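
A hedged sketch of that cache-plus-callback idea; the table, cache key format and connection string are placeholders, and the query-notification form of SqlCacheDependency used here also requires SqlDependency.Start to have been called and Service Broker to be enabled:

```csharp
using System;
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

static class WatchedRecords
{
    const string ConnStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=true";

    public static void Watch(int orderId)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT OrderId, Status FROM dbo.Orders WHERE OrderId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", orderId);
            var dependency = new SqlCacheDependency(cmd);   // query-notification based

            conn.Open();
            using (cmd.ExecuteReader()) { }   // executing the command registers the notification

            HttpRuntime.Cache.Add(
                "order:" + orderId,
                DateTime.UtcNow,                  // the cached value itself is incidental
                dependency,
                Cache.NoAbsoluteExpiration,
                Cache.NoSlidingExpiration,
                CacheItemPriority.Normal,
                OnRecordChanged);                 // fires when the row changes and the item is ejected
        }
    }

    static void OnRecordChanged(string key, object value, CacheItemRemovedReason reason)
    {
        if (reason == CacheItemRemovedReason.DependencyChanged)
        {
            // Raise the WCF/application event here and, if needed, re-add the watcher.
            Console.WriteLine($"{key} changed");
        }
    }
}
```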
