FileSystemWatcher Changed event raised twice
I have an application where I am looking for a text file, and if any changes are made to the file I use the OnChanged
event handler to handle the event. I am using NotifyFilters.LastWrite,
but the event still gets fired twice. Here is the code.
public void Initialize()
{
    FileSystemWatcher _fileWatcher = new FileSystemWatcher();
    _fileWatcher.Path = "C:\\Folder";
    _fileWatcher.NotifyFilter = NotifyFilters.LastWrite;
    _fileWatcher.Filter = "Version.txt";
    _fileWatcher.Changed += new FileSystemEventHandler(OnChanged);
    _fileWatcher.EnableRaisingEvents = true;
}

private void OnChanged(object source, FileSystemEventArgs e)
{
    .......
}
In my case the OnChanged is called twice when I change the text file version.txt and save it.
One possible 'hack' would be to throttle the events, using Reactive Extensions for example:
In this case I'm throttling to 50 ms; on my system that was enough, but higher values should be safer. (And like I said, it's still a 'hack'.)
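A minimal sketch of what that could look like, assuming the System.Reactive NuGet package; the class name and wiring here are illustrative, not the answer's original code:

using System;
using System.IO;
using System.Reactive.Linq;

public class ThrottledWatcher
{
    private FileSystemWatcher _fileWatcher;
    private IDisposable _subscription;

    public void Initialize()
    {
        _fileWatcher = new FileSystemWatcher("C:\\Folder", "Version.txt");
        _fileWatcher.NotifyFilter = NotifyFilters.LastWrite;

        // Turn the Changed event into an observable and throttle it:
        // a burst of events within 50 ms collapses into just the last one.
        _subscription = Observable
            .FromEventPattern<FileSystemEventHandler, FileSystemEventArgs>(
                h => _fileWatcher.Changed += h,
                h => _fileWatcher.Changed -= h)
            .Throttle(TimeSpan.FromMilliseconds(50))
            .Subscribe(pattern => OnChanged(pattern.Sender, pattern.EventArgs));

        _fileWatcher.EnableRaisingEvents = true;
    }

    private void OnChanged(object source, FileSystemEventArgs e)
    {
        Console.WriteLine($"{e.ChangeType}: {e.FullPath}");
    }
}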
I spent a significant amount of time using the FileSystemWatcher, and some of the approaches here will not work. I really liked the disabling-events approach, but unfortunately it doesn't work if more than one file is dropped: the second file will be missed most, if not all, of the time.
So I use the following approach:
I have a very quick and simple workaround here. It does work for me, regardless of whether the event is occasionally triggered once, twice, or more times. Check it out:
Here is a new solution you can try. It works well for me. In the event handler for the Changed event, programmatically remove the handler (the one wired up by the designer), output a message if desired, then programmatically add the handler back. Example:
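A rough sketch of that detach/re-attach pattern; the watcher field name, the message, and the processing are placeholders, not the answer's original code:

private void OnChanged(object source, FileSystemEventArgs e)
{
    // Detach the handler so the duplicate Changed notification from the same save is ignored.
    watcher.Changed -= OnChanged;
    try
    {
        Console.WriteLine("Changed: " + e.FullPath);
        // ... process the file ...
    }
    finally
    {
        // Re-attach the handler so future changes are picked up again.
        watcher.Changed += OnChanged;
    }
}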
I wanted to react only to the last event, just in case. Also, on a Linux file change it seemed that the file was empty on the first call and then filled again on the next, and I did not mind losing some time in case the OS decided to do some file/attribute change.
I am using .NET async here to help me do the threading.
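One possible shape of that "react only to the last event" idea, using async/await; the 200 ms delay and the names are assumptions for illustration:

// requires: using System; using System.IO; using System.Threading; using System.Threading.Tasks;
private CancellationTokenSource _debounceCts;

private async void OnChanged(object source, FileSystemEventArgs e)
{
    // Each new event cancels the handling scheduled by the previous one,
    // so only the last event of a burst survives the delay below.
    var cts = new CancellationTokenSource();
    Interlocked.Exchange(ref _debounceCts, cts)?.Cancel();

    try
    {
        await Task.Delay(200, cts.Token);           // wait out the burst
        string text = File.ReadAllText(e.FullPath); // content should be settled by now
        Console.WriteLine($"Handled {e.FullPath} ({text.Length} chars)");
    }
    catch (TaskCanceledException)
    {
        // Superseded by a newer event; ignore.
    }
}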
The main reason was that the first event's last access time was the current time (the file write or change time), while the second event carried the file's original last access time. I solved it with the code below.
This code worked for me.
mostly for future me :)
I wrote a wrapper using Rx:
Usage:
Try this, it's working fine:
I think the best solution to the issue is to use Reactive Extensions.
When you transform the event into an observable, you can just add Throttle(..) (originally called Debounce(..)).
Sample code here:
Here is another approach. Instead of propagating the first event of a quick succession of events and suppressing all that follow, here all events are suppressed except for the last one. I think that the scenarios that can benefit from this approach are more common. The technical term for this strategy is debouncing.
To make this happen we must use a sliding delay. Every incoming event cancels the timer that would fire the previous event, and starts a new timer. This opens the possibility that a never-ending series of events will delay the propagation forever. To keep things simple, there is no provision for this abnormal case in the extension methods below.
Usage example:
This line combines the subscription to two events, the Created and the Changed. So it is roughly equivalent to these:
The difference is that the two events are regarded as a single type of event, and in case of a quick succession of these events only the last one will be propagated. For example, if a Created event is followed by two Changed events, and there is no time gap larger than 100 msec between these three events, only the second Changed event will be propagated by invoking the MyFileSystemWatcher_Event handler; the previous events will be discarded.
The IDisposable return value can be used to unsubscribe from the events. Calling subscription.Dispose() cancels and discards all recorded events, but it doesn't stop or wait for any handlers that are in the midst of their execution.
Specifically for the Renamed event, the FileSystemEventArgs argument can be cast to RenamedEventArgs in order to access the extra information of this event. For example:
The debounced events are invoked on the FileSystemWatcher.SynchronizingObject, if it has been configured, otherwise on the ThreadPool. The invocation logic has been copy-pasted from the .NET 7 source code.
You could try to open it for write, and if successful then you could assume the other application is done with the file.
Just opening it for write appears not to raise the changed event. So it should be safe.
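A small sketch of that probe, assuming all we need is a yes/no answer about whether the other application has released the file:

// requires: using System.IO;
private static bool IsFileReady(string path)
{
    try
    {
        // Opening for exclusive write access fails while another process
        // still holds the file open for writing.
        using (File.Open(path, FileMode.Open, FileAccess.Write, FileShare.None))
        {
            return true;
        }
    }
    catch (IOException)
    {
        return false;
    }
}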
Sorry for the grave dig, but I've been battling this issue for a while now and finally came up with a way to handle these multiple fired events. I would like to thank everyone in this thread as I have used it in many references when battling this issue.
Here is my complete code. It uses a dictionary to track the date and time of the last write of the file. It compares that value, and if it is the same, it suppresses the events. It then sets the value after starting the new thread.
Even if not asked, it is a shame there are no ready solution samples for F#.
To fix this, here is my recipe, just because I can and F# is a wonderful .NET language.
Duplicated events are filtered out using the FSharp.Control.Reactive package, which is just an F# wrapper for Reactive Extensions. All that can be targeted to the full framework or netstandard2.0:
In my case I need to get the last line of a text file that is inserted by another application, as soon as the insertion is done. Here is my solution. When the first event is raised, I disable the watcher from raising others, then I rely on a timer's Elapsed event, because when my handler function OnChanged is called I need the size of the text file, but the size at that time is not the actual size; it is the size of the file immediately before the insertion. So I wait for a while before proceeding with the right file size.
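A sketch of that idea; the 500 ms delay, the field names, and reading the last line are illustrative assumptions rather than the original code:

// requires: using System; using System.IO; using System.Linq; using System.Timers;
private FileSystemWatcher _watcher;
private string _pendingPath;

private void OnChanged(object source, FileSystemEventArgs e)
{
    // Suppress further events until the delayed read has run.
    _watcher.EnableRaisingEvents = false;
    _pendingPath = e.FullPath;

    var timer = new Timer(500) { AutoReset = false };
    timer.Elapsed += (s, args) =>
    {
        // By now the insertion should have finished and the size is final.
        string lastLine = File.ReadLines(_pendingPath).Last();
        Console.WriteLine(lastLine);
        _watcher.EnableRaisingEvents = true;
        timer.Dispose();
    };
    timer.Start();
}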
We can make it simple like this. It works for me.
Skipping the duplicates is too risky, as the code could see an incomplete version of the data.
Instead, we can wait until there are no changes for a specified merge milliseconds.
Important: the examples below are only really suited when you want a single notification when one or multiple files change. One of the reasons is it hardcodes the NotifyFilter, which could be modified to allow it to be changed. The other reason is it does not tell you which files changed or the type of change (FileSystemEventArgs), which could also be modified to provide a list of all the changes detected.
A big downside is it does not fire if the files are updated more often than the merge milliseconds.
A solution to the above limitation is to always trigger after merge milliseconds of the first change. Code is only slightly more complex.
One downside to this one is that if mergeMilliseconds is set very low, you may get plenty of extra firings.
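A sketch of that second variant, where a timer is armed on the first change of a burst and deliberately not reset by later ones; the 500 ms window and the callback body are placeholders:

// requires: using System; using System.IO; using System.Threading;
private Timer _mergeTimer;
private int _armed; // 0 = idle, 1 = a merged notification is pending

private void OnChanged(object source, FileSystemEventArgs e)
{
    // Only the first change of a burst arms the timer; later changes within
    // the window are absorbed and do not postpone the notification.
    if (Interlocked.CompareExchange(ref _armed, 1, 0) == 0)
    {
        _mergeTimer = new Timer(_ =>
        {
            Interlocked.Exchange(ref _armed, 0);
            Console.WriteLine("One or more files changed (merged notification)");
        }, null, 500, Timeout.Infinite);
    }
}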
I have changed the way I monitor files in directories. Instead of using the FileSystemWatcher, I poll locations on another thread and then look at the LastWriteTime of the file.
Using this information, and keeping an index of each file path and its latest write time, I can determine files that have changed or been created in a particular location. This removes me from the oddities of the FileSystemWatcher. The main downside is that you need a data structure to store the LastWriteTime and the reference to the file, but it is reliable and easy to implement.
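A minimal sketch of such a polling loop, keeping a dictionary of path to last write time; the one-second interval and the structure are assumptions, not the answer's actual implementation:

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

public class PollingWatcher
{
    private readonly Dictionary<string, DateTime> _index = new Dictionary<string, DateTime>();

    public void Start(string folder)
    {
        var thread = new Thread(() =>
        {
            while (true)
            {
                foreach (string path in Directory.GetFiles(folder))
                {
                    DateTime lastWrite = File.GetLastWriteTime(path);
                    if (!_index.TryGetValue(path, out DateTime known))
                    {
                        _index[path] = lastWrite;           // newly seen file
                        Console.WriteLine("Created: " + path);
                    }
                    else if (lastWrite != known)
                    {
                        _index[path] = lastWrite;           // existing file changed
                        Console.WriteLine("Changed: " + path);
                    }
                }
                Thread.Sleep(1000); // poll once per second
            }
        });
        thread.IsBackground = true;
        thread.Start();
    }
}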
I was able to do this by adding a function that checks for duplicates in a buffer array.
Then perform the action after the array has not been modified for X time, using a timer:
- Reset the timer every time something is written to the buffer
- Perform the action on tick
This also catches another duplication type. If you modify a file inside a folder, the folder also raises a Changed event.
This solution worked for me in a production application:
Environment:
VB.Net, Framework 4.5.2
Manually set the object property: NotifyFilter = Size
Then use this code:
I am afraid that this is a well-known bug/feature of the FileSystemWatcher class. This is from the documentation of the class:
Now this bit of text is about the Created event, but the same thing applies to other file events as well. In some applications you might be able to get around this by using the NotifyFilter property, but my experience says that sometimes you have to do some manual duplicate filtering (hacks) as well.
A while ago I bookmarked a page with a few FileSystemWatcher tips (archived). You might want to check it out.
I've "fixed" that problem using the following strategy in my delegate:
Any duplicated OnChanged events from the FileSystemWatcher can be detected and discarded by checking the File.GetLastWriteTime timestamp on the file in question. Like so:
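Roughly like this (a sketch rather than the answer's exact code); as the answer describes, the duplicates can be detected because they report the same last-write time:

// requires: using System; using System.IO;
private DateTime _lastRead = DateTime.MinValue;

private void OnChanged(object source, FileSystemEventArgs e)
{
    DateTime lastWrite = File.GetLastWriteTime(e.FullPath);
    if (lastWrite != _lastRead)
    {
        _lastRead = lastWrite;
        Console.WriteLine("Changed: " + e.FullPath);
        // ... handle the change once ...
    }
    // else: duplicate event for the same write, ignore it
}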
Here is my solution which helped me to stop the event being raised twice:
Here I have set the NotifyFilter property with only FileName and Size. watcher is my object of FileSystemWatcher. Hope this will help.
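For reference, that configuration is just a matter of combining the two flags (watcher being the FileSystemWatcher instance from the answer):

// Watch only file-name and size changes, so a single save does not also
// produce a duplicate LastWrite/LastAccess notification.
watcher.NotifyFilter = NotifyFilters.FileName | NotifyFilters.Size;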
I have created a Git repo with a class that extends FileSystemWatcher to trigger the events only when the copy is done. It discards all the Changed events except the last one, and raises it only when the file becomes available for reading.
Download FileSystemSafeWatcher and add it to your project.
Then use it as a normal FileSystemWatcher and monitor when the events are triggered.
Here's my approach:
This is the solution I used to solve this issue on a project where I was sending the file as an attachment in a mail.
It easily avoids the twice-fired event even with a smaller timer interval, but in my case 1000 was alright, since I was happier missing a few changes than flooding the mailbox with more than one message per second.
At least it works just fine in case several files are changed at the exact same time.
Another solution I've thought of would be to replace the list with a dictionary mapping files to their respective MD5, so you wouldn't have to choose an arbitrary interval, since you wouldn't have to delete the entry but just update its value, and you could cancel your work if it hasn't changed.
It has the downside of a dictionary growing in memory as files are monitored and eating more and more memory, but I've read somewhere that the number of files monitored depends on the FSW's internal buffer, so maybe it's not that critical.
Don't know how MD5 computing time would affect your code's performance either, careful =\
My scenario is that I have a virtual machine with a Linux server in it. I am developing files on the Windows host. When I change something in a folder on the host, I want all the changes to be uploaded and synced onto the virtual server via FTP. This is how I eliminate the duplicate change event when I write to a file (which flags the folder containing the file as modified as well):
Mainly I create a hashtable to store file write time information. Then, if the hashtable has the file path that was modified and its time value is the same as the currently notified file's change time, I know it is a duplicate of the event and ignore it.
Try with this code:
I know this is an old issue, but I had the same problem and none of the above solutions really did the trick for the problem I was facing. I created a dictionary which maps the file name to the LastWriteTime. So if the file is not in the dictionary it will go ahead with the process; otherwise it checks to see when the last modified time was and, if it is different from what is in the dictionary, runs the code.