Get the MD5 hash of large files in Python

I have used hashlib (which replaces md5 in Python 2.6/3.0), and it worked fine if I opened a file and put its content in the hashlib.md5() function.

The problem is with very big files whose sizes could exceed the RAM size.

How can I get the MD5 hash of a file without loading the whole file into memory?

14 Answers

始于初秋 2024-08-02 14:57:50

You need to read the file in chunks of suitable size:

import hashlib

def md5_for_file(f, block_size=2**20):
    md5 = hashlib.md5()
    while True:
        data = f.read(block_size)
        if not data:
            break
        md5.update(data)
    return md5.digest()

Note: Make sure you open the file in binary mode ('rb') - otherwise you will get the wrong result.
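
To see why (a small illustration, assuming Python 3, where a text-mode read() returns str rather than bytes; "example.txt" is just a placeholder filename):

import hashlib

md5 = hashlib.md5()
with open("example.txt") as f:   # text mode: read() returns str, not bytes
    try:
        md5.update(f.read())     # hashlib update() only accepts bytes-like objects
    except TypeError as err:
        print(err)               # on Python 3 this raises TypeError for str input

On Python 2, a text-mode open on Windows instead silently translates line endings, which changes the bytes being hashed and therefore the digest.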

So to do the whole lot in one method - use something like:

import hashlib
import os

def generate_file_md5(rootdir, filename, blocksize=2**20):
    m = hashlib.md5()
    with open(os.path.join(rootdir, filename), "rb") as f:
        while True:
            buf = f.read(blocksize)
            if not buf:
                break
            m.update(buf)
    return m.hexdigest()

The update above is based on the comments provided by Frerich Raabe. I tested this and found it to be correct on my Python 2.7.2 Windows installation.

I cross-checked the results using the jacksum tool.

jacksum -a md5 <filename>

尘曦 2024-08-02 14:57:50

Break the file into 8192-byte chunks (or some other multiple of 64 bytes) and feed them to MD5 consecutively using update().

This takes advantage of the fact that MD5 processes its input in 64-byte blocks (8192 is 64×128). Since you're not reading the entire file into memory, this won't use much more than 8192 bytes of memory.

In Python 3.8+ you can do

import hashlib
with open("your_filename.txt", "rb") as f:
    file_hash = hashlib.md5()
    while chunk := f.read(8192):
        file_hash.update(chunk)
print(file_hash.digest())
print(file_hash.hexdigest())  # to get a printable str instead of bytes

生死何惧 2024-08-02 14:57:50

Python 3.7 and below

import hashlib

def checksum(filename, hash_factory=hashlib.md5, chunk_num_blocks=128):
    h = hash_factory()
    with open(filename,'rb') as f: 
        for chunk in iter(lambda: f.read(chunk_num_blocks*h.block_size), b''): 
            h.update(chunk)
    return h.digest()

Python 3.8 and above

import hashlib

def checksum(filename, hash_factory=hashlib.md5, chunk_num_blocks=128):
    h = hash_factory()
    with open(filename,'rb') as f: 
        while chunk := f.read(chunk_num_blocks*h.block_size): 
            h.update(chunk)
    return h.digest()

Original post

If you want a more Pythonic (no while True) way of reading the file, check this code:

import hashlib

def checksum_md5(filename):
    md5 = hashlib.md5()
    with open(filename,'rb') as f: 
        for chunk in iter(lambda: f.read(8192), b''): 
            md5.update(chunk)
    return md5.digest()

Note that the iter() function needs an empty byte string for the returned iterator to halt at EOF, since read() returns b'' (not just '').
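
A quick way to see this behaviour (a small sketch; the filename is just a placeholder):

with open("your_filename.txt", "rb") as f:
    f.read()          # consume the whole file
    print(f.read())   # prints b'': read() at EOF returns an empty bytes object,
                      # which is exactly the sentinel passed to iter() above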

冷情 2024-08-02 14:57:50

Here's my version of Piotr Czapla's method:

import hashlib

def md5sum(filename):
    md5 = hashlib.md5()
    with open(filename, 'rb') as f:
        for chunk in iter(lambda: f.read(128 * md5.block_size), b''):
            md5.update(chunk)
    return md5.hexdigest()

初熏 2024-08-02 14:57:50

Using multiple comments/answers to this question, here is my solution:

import hashlib

def md5_for_file(path, block_size=256*128, hr=False):
    '''
    Block size directly depends on the block size of your filesystem
    to avoid performance issues.
    The default of 256*128 = 32768 bytes is a multiple of 4096 octets
    (the default NTFS block size).
    '''
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(block_size), b''):
            md5.update(chunk)
    if hr:
        return md5.hexdigest()
    return md5.digest()
  • This is Pythonic
  • This is a function
  • It avoids implicit values: always prefer explicit ones.
  • It allows (very important) performance optimizations
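
A small usage sketch of the function above (the path is just a placeholder):

print(md5_for_file("some_large_file.bin", hr=True))   # printable hex string
print(md5_for_file("some_large_file.bin"))            # raw bytes digest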

┈┾☆殇 2024-08-02 14:57:50

A Python 2/3 portable solution

To calculate a checksum (md5, sha1, etc.), you must open the file in binary mode, because you'll be hashing raw byte values.

To be portable across Python 2.7 and Python 3, you ought to use the io package, like this:

import hashlib
import io


def md5sum(src):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        content = fd.read()
        md5.update(content)
    return md5

If your files are big, you may prefer to read the file by chunks to avoid storing the whole file content in memory:

def md5sum(src, length=io.DEFAULT_BUFFER_SIZE):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
    return md5

The trick here is to use the iter() function with a sentinel (the empty byte string b'').

The iterator created in this case will call the lambda function with no arguments for each call to its next() method; if the value returned is equal to the sentinel, StopIteration will be raised, otherwise the value will be returned.

If your files are really big, you may also need to display progress information. You can do that by calling a callback function which prints or logs the amount of calculated bytes:

def md5sum(src, callback, length=io.DEFAULT_BUFFER_SIZE):
    calculated = 0
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
            calculated += len(chunk)
            callback(calculated)
    return md5
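
For example, the callback could simply log the running byte count (a minimal usage sketch; the filename and the print-based callback are just placeholders):

def print_progress(num_bytes):
    print("hashed {} bytes so far".format(num_bytes))

checksum = md5sum("some_big_file.iso", print_progress)
print(checksum.hexdigest())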

绮筵 2024-08-02 14:57:50

A remix of Bastien Semene's code that takes Hawkwing's comment about a generic hashing function into consideration:

import hashlib

def hash_for_file(path, algorithm=hashlib.algorithms[0], block_size=256*128, human_readable=True):
    """
    Block size directly depends on the block size of your filesystem
    to avoid performance issues.
    The default of 256*128 = 32768 bytes is a multiple of 4096 octets
    (the default NTFS block size).

    Linux Ext4 block size
    sudo tune2fs -l /dev/sda5 | grep -i 'block size'
    > Block size:               4096

    Input:
        path: a path
        algorithm: an algorithm in hashlib.algorithms
                   ATM: ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
        block_size: a multiple of 128 corresponding to the block size of your filesystem
        human_readable: switch between digest() or hexdigest() output, default hexdigest()
    Output:
        hash
    """
    # Note: hashlib.algorithms exists only on Python 2.7; on Python 3 use
    # hashlib.algorithms_guaranteed or hashlib.algorithms_available instead.
    if algorithm not in hashlib.algorithms:
        raise NameError('The algorithm "{algorithm}" you specified is '
                        'not a member of "hashlib.algorithms"'.format(algorithm=algorithm))

    hash_algo = hashlib.new(algorithm)  # According to the hashlib documentation, using new()
                                        # is slower than calling a named
                                        # constructor, e.g. hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(block_size), b''):
            hash_algo.update(chunk)
    if human_readable:
        file_hash = hash_algo.hexdigest()
    else:
        file_hash = hash_algo.digest()
    return file_hash
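
A small usage sketch, assuming a Python 2.7 environment where hashlib.algorithms exists (the path is just a placeholder):

print(hash_for_file("some_large_file.bin", algorithm="sha256"))                     # hex string
print(hash_for_file("some_large_file.bin", algorithm="md5", human_readable=False))  # raw bytes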

狼性发作 2024-08-02 14:57:50

You can't get its MD5 without reading the full content, but you can use the update() function to feed in the file's content block by block.

m.update(a); m.update(b) is equivalent to m.update(a+b).
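
A quick demonstration of that equivalence (a minimal sketch):

import hashlib

a, b = b"hello ", b"world"

m = hashlib.md5()
m.update(a)
m.update(b)

# Feeding the data in two pieces yields the same digest as hashing it in one go
print(m.hexdigest() == hashlib.md5(a + b).hexdigest())   # True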

梦幻的味道 2024-08-02 14:57:50

I think the following code is more Pythonic:

from hashlib import md5

def get_md5(fname):
    m = md5()
    with open(fname, 'rb') as fp:
        # Note: iterating a binary file yields b'\n'-terminated "lines",
        # so the chunk sizes here are arbitrary rather than fixed.
        for chunk in fp:
            m.update(chunk)
    return m.hexdigest()

暗恋未遂 2024-08-02 14:57:50

I don't like loops. Based on Nathan Feger's answer:

import functools
import hashlib

md5 = hashlib.md5()
with open(filename, 'rb') as f:
    functools.reduce(lambda _, c: md5.update(c), iter(lambda: f.read(md5.block_size * 128), b''), None)
md5.hexdigest()

雨落星ぅ辰 2024-08-02 14:57:50

Implementation of Yuval Adam's answer for Django:

import hashlib
from django.db import models

class MyModel(models.Model):
    file = models.FileField()  # Any field based on django.core.files.File

    def get_hash(self):
        hash = hashlib.md5()
        for chunk in self.file.chunks(chunk_size=8192):
            hash.update(chunk)
        return hash.hexdigest()

夏九 2024-08-02 14:57:50

As mentioned in @pseyfert's comment, in Python 3.11 and above hashlib.file_digest() can be used. While not explicitly documented, internally the function uses a chunking approach similar to the one in the accepted answer, as can be seen from its source code (lines 230–236).

The function also provides a keyword-only argument _bufsize with a default value of 2^18 = 262,144 bytes that controls the buffer size for chunking; however, given its leading underscore and lack of documentation, it should probably be treated as an implementation detail.

In any case, the following code equivalently reproduces the accepted answer in Python 3.11+ (apart from the different chunk size):

import hashlib
with open("your_filename.txt", "rb") as f:
    file_hash = hashlib.file_digest(f, "md5")  # or `hashlib.md5` as 2nd arg
print(file_hash.digest())
print(file_hash.hexdigest())  # to get a printable str instead of bytes

梦巷 2024-08-02 14:57:50

I'm not sure that there isn't a bit too much fussing around here. I recently had problems with md5 and files stored as blobs in MySQL, so I experimented with various file sizes and the straightforward Python approach, viz:

FileHash = hashlib.md5(FileData).hexdigest()

I couldn't detect any noticeable performance difference across file sizes from 2 KB to 20 MB, and therefore saw no need to 'chunk' the hashing. In any case, if Linux has to go to disk, it will probably do it at least as well as the average programmer can prevent it from doing so. As it happened, the problem had nothing to do with md5. If you're using MySQL, don't forget the md5() and sha1() functions that are already there.

绝不服输 2024-08-02 14:57:50

import hashlib

# Note: this hashes each line of the file separately; it does not hash the file as a whole.
with open('/home/parrot/pass.txt', 'r') as opened:
    for line in opened.readlines():
        strip1 = line.strip('\n')
        hash_object = hashlib.md5(strip1.encode())
        hash2 = hash_object.hexdigest()
        print(hash2)