Fastest way to create a checksum for large files in Python

Posted on 2024-08-07 04:41:52

I need to transfer large files across the network and need to create checksums for them on an hourly basis, so the speed of generating the checksums is critical for me.

Somehow I can't make zlib.crc32 and zlib.adler32 work with files larger than 4GB on a Windows XP Pro 64-bit machine. I suspect I've hit the 32-bit limitation here? Using hashlib.md5 I can get a result, but the problem is the speed: it takes roughly 5 minutes to generate an md5 for a 4.8GB file, and Task Manager shows that the process is using only one core.

My questions are:

  1. Is there a way to make CRC work on large files? I prefer to use CRC rather than md5.
  2. If not, is there a way to speed up md5.hexdigest()/md5.digest(), or any hashlib hexdigest/digest in this case? Maybe by splitting it into a multi-threaded process? How do I do that?

PS: I'm working on something similar to an "Asset Management" system, kind of like svn, but the assets consist of large compressed image files. The files have small incremental changes. The hashing/checksum is needed for detecting changes and for error detection.

Comments (6)

乖不如嘢 2024-08-14 04:41:52

It's an algorithm selection problem, rather than a library/language selection problem!

There appear to be two main points to consider:

  • how much would the disk I/O affect the overall performance?
  • what is the expected reliability of the error detection feature?

Apparently, the answer to the second question is something like 'a few false negatives are allowed', since the reliability of any 32-bit hash, relative to a 4GB message, is nowhere near absolute even over a moderately noisy channel.

Assuming that I/O can be improved through multithreading, we may choose a hash that doesn't require a sequential scan of the complete message. Instead we can work through the file in parallel, hashing individual sections and either combining the hash values or appending them, to form a longer, more reliable error-detection device.
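A minimal sketch of that idea, assuming a hypothetical file name and an arbitrary 64 MB section size: each worker hashes one section of the file independently, and the per-section digests are appended to form one longer checksum. Whether the threads actually overlap usefully depends on whether the disk or the hashing dominates.

```python
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

SECTION_SIZE = 64 * 1024 * 1024  # 64 MB per section (an arbitrary choice)

def hash_section(path, offset, length):
    """Hash one section of the file, reading it in smaller buffered chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        f.seek(offset)
        remaining = length
        while remaining > 0:
            chunk = f.read(min(1024 * 1024, remaining))
            if not chunk:
                break
            h.update(chunk)
            remaining -= len(chunk)
    return h.digest()

def parallel_checksum(path, section_size=SECTION_SIZE, workers=4):
    size = os.path.getsize(path)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        digests = pool.map(
            lambda offset: hash_section(path, offset, min(section_size, size - offset)),
            range(0, size, section_size),
        )
        # Appending the per-section digests gives a longer, more reliable checksum.
        return b"".join(digests)

# checksum = parallel_checksum("huge_asset.bin")  # hypothetical file name
```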

The next step could be to formalize this handling of files as ordered sections, and to transmit them as such (to be re-glued together at the recipient's end). This approach, along with additional information about the way the files are produced (for example, they may be modified exclusively by appending, like log files), may even allow limiting the amount of hash calculation required. The added complexity of this approach needs to be weighed against the desire for zippy-fast CRC calculation.

Side note: Adler32 is not limited to message sizes below a particular threshold; that may just be a limit of the zlib API. (BTW, the reference I found about zlib.adler32 used a single buffer, and that approach is to be avoided in the context of our huge messages, in favor of streamed processing: read a little from the file, calculate, repeat.)
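For reference, a minimal sketch of that streamed approach with zlib, assuming a hypothetical file name: both zlib.crc32 and zlib.adler32 accept the previous running value as a second argument, so the file can be read a little at a time instead of being passed as one huge buffer.

```python
import zlib

def crc32_of_file(path, chunk_size=1024 * 1024):
    """Compute a CRC-32 by streaming the file in chunks, never loading it whole."""
    crc = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            crc = zlib.crc32(chunk, crc)  # feed the running value back in
    return crc & 0xFFFFFFFF  # normalize to an unsigned 32-bit value

# print(hex(crc32_of_file("huge_asset.bin")))  # hypothetical file name
```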

晨与橙与城 2024-08-14 04:41:52

First, there is nothing inherent in any of the CRC algorithms that would prevent them working on an arbitrary length of data (however, a particular implementation might well impose a limit).

However, in a file-syncing application, that probably doesn't matter, as you may not want to hash the entire file when it gets large, just chunks anyway. If you hash the entire file and the hashes at each end differ, you have to copy the entire file. If you hash fixed-size chunks, then you only have to copy the chunks whose hash has changed. If most of the changes to the files are localized (e.g. a database), then this will likely require much less copying (and it's easier to spread per-chunk calculations across multiple cores).
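A minimal sketch of that per-chunk idea, with hypothetical file names and an arbitrary chunk size: each side computes a list of per-chunk digests, and only the chunks whose digests differ need to be copied.

```python
import hashlib

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MB chunks (an arbitrary choice)

def chunk_hashes(path, chunk_size=CHUNK_SIZE):
    """Return a list of digests, one per fixed-size chunk of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            hashes.append(hashlib.md5(chunk).digest())
    return hashes

def changed_chunks(old_hashes, new_hashes):
    """Indices of chunks that differ (or exist on only one side) and must be copied."""
    length = max(len(old_hashes), len(new_hashes))
    return [
        i for i in range(length)
        if i >= len(old_hashes) or i >= len(new_hashes) or old_hashes[i] != new_hashes[i]
    ]

# local = chunk_hashes("asset_v1.bin")    # hypothetical file names
# remote = chunk_hashes("asset_v2.bin")
# print(changed_chunks(local, remote))
```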

As for the hash algorithm itself, the basic tradeoff is speed vs. lack of collisions (two different data chunks yielding the same hash). CRC-32 is fast, but with only 2^32 unique values collisions may be seen. MD5 is much slower, but has 2^128 unique values, so collisions will almost never be seen (though they remain theoretically possible). The larger hashes (SHA1, SHA256, ...) have even more unique values, but are slower still; I doubt you need them: you're worried about accidental collisions, unlike digital-signature applications, where you're worried about deliberately (maliciously) engineered collisions.
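A rough micro-benchmark sketch of the speed side of that tradeoff, using an arbitrary in-memory buffer of zeros rather than a real file (buffer size and output format are assumptions, not measurements):

```python
import hashlib
import time
import zlib

data = b"\x00" * (64 * 1024 * 1024)  # 64 MB buffer of zeros, purely for timing

def bench(name, fn):
    start = time.perf_counter()
    fn(data)
    elapsed = time.perf_counter() - start
    print(f"{name:8s} {len(data) / elapsed / 1e6:8.1f} MB/s")

bench("crc32", zlib.crc32)
bench("adler32", zlib.adler32)
bench("md5", lambda buf: hashlib.md5(buf).digest())
bench("sha256", lambda buf: hashlib.sha256(buf).digest())
```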

It sounds like you're trying to do something very similar to what the rsync utility does. Can you just use rsync?

我还不会笑 2024-08-14 04:41:52

md5 itself can't be run in parallel. However, you can md5 the file in sections (in parallel) and then take an md5 of the list of hashes.
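A minimal sketch of that hash-of-hashes scheme, assuming a hypothetical file name and an arbitrary 64 MB section size; note that the result is not the same as a plain md5 of the whole file, so both ends would have to agree on the scheme.

```python
import hashlib
import os
from multiprocessing import Pool

SECTION_SIZE = 64 * 1024 * 1024  # 64 MB sections (an arbitrary choice)

def _md5_of_section(args):
    """Read one section of the file and return its md5 digest."""
    path, offset, length = args
    with open(path, "rb") as f:
        f.seek(offset)
        return hashlib.md5(f.read(length)).digest()

def md5_of_md5s(path, section_size=SECTION_SIZE, workers=4):
    size = os.path.getsize(path)
    jobs = [
        (path, offset, min(section_size, size - offset))
        for offset in range(0, size, section_size)
    ]
    with Pool(processes=workers) as pool:
        section_digests = pool.map(_md5_of_section, jobs)
    # The final checksum is the md5 of the concatenated section digests.
    return hashlib.md5(b"".join(section_digests)).hexdigest()

if __name__ == "__main__":
    print(md5_of_md5s("huge_asset.bin"))  # hypothetical file name
```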

However, that assumes the hashing is not I/O-limited, which I suspect it is. As Anton Gogolev suggests, make sure that you're reading the file efficiently (in large power-of-2 chunks). Once you've done that, make sure the file isn't fragmented.

Also, for new projects a hash such as sha256 should be selected rather than md5.

Are the zlib checksums much faster than md5 for 4Gb files?

み青杉依旧 2024-08-14 04:41:52

You might be hitting a size limit for files in XP. Being 64-bit gives you more addressing space (removing the 2GB-or-so per-application address space limit), but it probably does nothing for the file size problem.

池木 2024-08-14 04:41:52

You cannot possibly use more than one core to calculate the MD5 hash of a large file because of the very nature of MD5: it expects a message to be broken up into chunks and fed into the hashing function in strict sequence. However, you can use one thread to read the file into an internal queue and then calculate the hash in a separate thread, so that reading and hashing overlap. I do not think, though, that this will give you any significant performance boost.

The fact that it takes so long to process a big file might be due to "unbuffered" reads. Try reading, say, 16 KB at a time and then feeding the content in chunks to the hashing function.
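A minimal sketch of that reader-thread arrangement, assuming a hypothetical file name: one thread reads 16 KB chunks into a bounded queue while the main thread feeds them to the hash.

```python
import hashlib
import queue
import threading

CHUNK_SIZE = 16 * 1024  # 16 KB reads, as suggested above

def md5_with_reader_thread(path, chunk_size=CHUNK_SIZE):
    chunks = queue.Queue(maxsize=64)  # bounded queue so the reader can't run ahead unbounded

    def reader():
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                chunks.put(chunk)  # an empty bytes object signals end of file
                if not chunk:
                    break

    t = threading.Thread(target=reader, daemon=True)
    t.start()

    h = hashlib.md5()
    while True:
        chunk = chunks.get()
        if not chunk:
            break
        h.update(chunk)
    t.join()
    return h.hexdigest()

# print(md5_with_reader_thread("huge_asset.bin"))  # hypothetical file name
```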

满地尘埃落定 2024-08-14 04:41:52

Did you try the crc-generator module?
