Multiple servers with a FileSystemWatcher on the same directory
I would like to have the same service on multiple servers watching a single directory (on a shared server or SAN). When a file appears in that directory I want one, and only one of those services to pick up that file and process its contents.
I attempted to program this by moving the file out of the shared directory before processing it. I am fine with simply handling the exception on whichever server fails to move the file. The problem is that conflicts occur and cause the file not to be processed by either server.
It is likely that files will arrive in batches, not one by one. Does anyone know an approach to this that will work in a guaranteed way?
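For reference, a minimal sketch of the move-first attempt described above, assuming each server tries to claim a file by moving it into a server-local working folder and treats a failed move as "another server got it"; the paths and the ProcessFile method are placeholders:

    using System;
    using System.IO;

    class MoveFirstWatcher
    {
        // Placeholder paths for this sketch; point them at your share and a local working folder.
        const string SharedDir = @"\\san\incoming";
        const string LocalDir = @"C:\processing";

        static void OnCreated(object sender, FileSystemEventArgs e)
        {
            string target = Path.Combine(LocalDir, Path.GetFileName(e.FullPath));
            try
            {
                // File.Move throws if another server has already moved the file,
                // or if the writer still has it open.
                File.Move(e.FullPath, target);
            }
            catch (IOException)
            {
                // Another server claimed it, or the file is still being written; skip it.
                return;
            }
            ProcessFile(target);
        }

        static void ProcessFile(string path) { /* parse the file contents here */ }

        static void Main()
        {
            using var watcher = new FileSystemWatcher(SharedDir);
            watcher.Created += OnCreated;
            watcher.EnableRaisingEvents = true;
            Console.ReadLine(); // keeps this sketch alive; a real service would not block like this
        }
    }

As the question notes, this can still misbehave when files arrive in batches, which is what the answers below try to avoid.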
3 Answers
Try having one 'master' service with the FSW. The master service handles all FSW events and tells 'slave' services on remote machines which file to process. Easy load balancing, no multiple-FSW issues, no database needed. You could accomplish this easily with WCF.
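A rough sketch of the shape of that design, assuming a hypothetical WCF contract named IFileProcessor hosted by each slave and simple round-robin dispatch from the single master that owns the FSW (endpoint configuration and error handling omitted):

    using System;
    using System.IO;
    using System.ServiceModel;

    // Hypothetical contract implemented and hosted by each 'slave' service.
    [ServiceContract]
    public interface IFileProcessor
    {
        [OperationContract]
        void ProcessFile(string path);
    }

    class MasterWatcher
    {
        // Assumed slave endpoints; in practice these would come from configuration.
        static readonly string[] SlaveUrls =
        {
            "net.tcp://server1:9000/files",
            "net.tcp://server2:9000/files"
        };
        static int _next;

        static void OnCreated(object sender, FileSystemEventArgs e)
        {
            // Only the master runs a FileSystemWatcher; work is round-robined across slaves.
            string url = SlaveUrls[_next++ % SlaveUrls.Length];
            var factory = new ChannelFactory<IFileProcessor>(
                new NetTcpBinding(), new EndpointAddress(url));
            IFileProcessor proxy = factory.CreateChannel();
            proxy.ProcessFile(e.FullPath);
            factory.Close();
        }

        static void Main()
        {
            using var watcher = new FileSystemWatcher(@"\\san\incoming");
            watcher.Created += OnCreated;
            watcher.EnableRaisingEvents = true;
            Console.ReadLine();
        }
    }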
I would have only one server/service monitoring that folder and writing the file path, change date, and event type (copy, rename, ...) to a database. Then as many services as you want can each grab one of the recent new records in that table, lock it, and process it. Basically, since FSW handles concurrency badly, we move the concurrency handling to the database.
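A sketch of the claim step this describes, assuming a hypothetical SQL Server table IncomingFiles(Id, FilePath, Status) that the single watcher fills in; the atomic UPDATE ... OUTPUT with READPAST plays the role of the lock, so each row is handed to exactly one service:

    using System.Data.SqlClient;

    class FileClaimer
    {
        // Returns the path of a claimed file, or null when there is nothing new.
        public static string TryClaimFile(SqlConnection conn)
        {
            // Atomically claim one unprocessed row. READPAST skips rows that other
            // services hold locked, so each file is handed out exactly once.
            const string sql = @"
                UPDATE TOP (1) IncomingFiles WITH (READPAST, ROWLOCK)
                SET Status = 'Processing'
                OUTPUT inserted.FilePath
                WHERE Status = 'New';";

            using var cmd = new SqlCommand(sql, conn);
            return (string)cmd.ExecuteScalar();
        }
    }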
I would also suggest the solution from Davide Piras.

But if you still want the file watcher application running on multiple servers watching the shared directory, one solution that comes to mind is to create TCP connections between your file watcher applications. When a file arrives in the shared directory, every file watcher application generates a random number and shares it with the others; the application that generated the largest (or smallest) number processes the file. This way only one file watcher application will process each file.
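A sketch of just the election decision described here, assuming the TCP exchange already exists and has delivered every peer's random number for a given file; the transport code is omitted, and ties would still need a tiebreaker such as comparing server names:

    using System;
    using System.Linq;

    class FileElection
    {
        static readonly Random Rng = new Random();

        // Draw this instance's ticket for a newly seen file; send it to every peer over TCP.
        public static int DrawTicket() => Rng.Next();

        // Once every peer's ticket for the same file has arrived, only the instance
        // holding the strictly largest ticket processes the file.
        public static bool ShouldProcess(int myTicket, int[] peerTickets)
            => peerTickets.All(t => myTicket > t);
    }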