Is hashing a password twice before storage any more or less secure than just hashing it once?
What I'm talking about is doing this:
$hashed_password = hash(hash($plaintext_password));
instead of just this:
$hashed_password = hash($plaintext_password);
If it is less secure, can you provide a good explanation (or a link to one)?
Also, does the hash function used make a difference? Does it make any difference if you mix md5 and sha1 (for example) instead of repeating the same hash function?
Note 1: When I say "double hashing" I'm talking about hashing a password twice in an attempt to make it more obscured. I'm not talking about the technique for resolving collisions.
Note 2: I know I need to add a random salt to really make it secure. The question is whether hashing twice with the same algorithm helps or hurts the hash.
Hashing a password once is insecure
No, multiple hashes are not less secure; they are an essential part of secure password use.
Iterating the hash increases the time it takes for an attacker to try each password in their list of candidates. You can easily increase the time it takes to attack a password from hours to years.
Simple iteration is not enough
Merely chaining hash output to input isn't sufficient for security. The iteration should take place in the context of an algorithm that preserves the entropy of the password. Luckily, there are several published algorithms that have had enough scrutiny to give confidence in their design.
A good key derivation algorithm like PBKDF2 injects the password into each round of hashing, mitigating concerns about collisions in hash output. PBKDF2 can be used for password authentication as-is. Bcrypt follows the key derivation with an encryption step; that way, if a fast way to reverse the key derivation is discovered, an attacker still has to complete a known-plaintext attack.
How to break a password
Stored passwords need protection from an offline attack. If passwords aren't salted, they can be broken with a pre-computed dictionary attack (for example, using a Rainbow Table). Otherwise, the attacker must spend time to compute a hash for each password and see if it matches the stored hash.
All passwords are not equally likely. Attackers might exhaustively search all short passwords, but they know that their chances for brute-force success drop sharply with each additional character. Instead, they use an ordered list of the most likely passwords. They start with "password123" and progress to less frequently used passwords.
Let's say an attacker's list is long, with 10 billion candidates; suppose also that a desktop system can compute 1 million hashes per second. The attacker can test her whole list in less than three hours if only one iteration is used. But if just 2000 iterations are used, that time extends to almost 8 months. To defeat a more sophisticated attacker (one capable of downloading a program that can tap the power of their GPU, for example), you need more iterations.
How much is enough?
The number of iterations to use is a trade-off between security and user experience. Specialized hardware that can be used by attackers is cheap, but it can still perform hundreds of millions of iterations per second. The performance of the attacker's system determines how long it takes to break a password given a number of iterations. But your application is not likely to use this specialized hardware. How many iterations you can perform without aggravating users depends on your system.
You can probably let users wait an extra ¾ second or so during authentication. Profile your target platform, and use as many iterations as you can afford. Platforms I've tested (one user on a mobile device, or many users on a server platform) can comfortably support PBKDF2 with between 60,000 and 120,000 iterations, or bcrypt with cost factor of 12 or 13.
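In PHP, for instance, bcrypt at such a cost factor is one call to the built-in password_hash(); a sketch, with the cost value mirroring the numbers above:

$hashed_password = password_hash($plaintext_password, PASSWORD_BCRYPT, ['cost' => 12]);
// Later, at login time:
if (password_verify($plaintext_password, $hashed_password)) { /* authenticated */ }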
More background
Read PKCS #5 for authoritative information on the role of salt and iterations in hashing. Even though PBKDF2 was meant for generating encryption keys from passwords, it works well as a one-way-hash for password authentication. Each iteration of bcrypt is more expensive than a SHA-2 hash, so you can use fewer iterations, but the idea is the same. Bcrypt also goes a step beyond most PBKDF2-based solutions by using the derived key to encrypt a well-known plain text. The resulting cipher text is stored as the "hash," along with some meta-data. However, nothing stops you from doing the same thing with PBKDF2.
Those who say it's secure are correct, in general. "Double" hashing (or the logical expansion of that, iterating a hash function) is absolutely secure, if done right, for a specific concern.
Those who say it's insecure are correct in this case. The code that is posted in the question is insecure. Let's talk about why:
There are three fundamental properties of a hash function that we're concerned about:

Pre-Image Resistance - Given a hash $h, it should be difficult to find a message $m such that $h === hash($m).

Second-Pre-Image Resistance - Given a message $m1, it should be difficult to find a different message $m2 such that hash($m1) === hash($m2).

Collision Resistance - It should be difficult to find a pair of messages ($m1, $m2) such that hash($m1) === hash($m2) (note that this is similar to Second-Pre-Image Resistance, but different in that here the attacker has control over both messages).

For the storage of passwords, all we really care about is Pre-Image Resistance. The other two would be moot, because $m1 is the user's password we're trying to keep safe. So if the attacker already has it, the hash has nothing to protect.

DISCLAIMER

Everything that follows is based on the premise that all we care about is Pre-Image Resistance. The other two fundamental properties of hash functions may not (and typically don't) hold up in the same way. So the conclusions in this post are only applicable when using hash functions for the storage of passwords. They are not applicable in general.
Let's Get Started
For the sake of this discussion, let's invent our own hash function:
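A minimal PHP sketch of such a function, matching the description below (it returns a string so the output can be fed straight back in):

function ourHash($input) {
    // Sum the ASCII value of every character of the input...
    $result = 0;
    for ($i = 0; $i < strlen($input); $i++) {
        $result += ord($input[$i]);
    }
    // ...then reduce modulo 256, returning the result as a string.
    return (string) ($result % 256);
}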
Now it should be pretty obvious what this hash function does. It sums together the ASCII values of each character of input, and then takes the modulo of that result with 256.
So let's test it out:
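For example, with the expected value worked out by hand:

var_dump(ourHash('abc')); // string(2) "38", because 97 + 98 + 99 = 294 and 294 % 256 = 38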
Now, let's see what happens if we run it a few times around a function:
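A sketch of that experiment, feeding each hash straight back into ourHash() (the particular inputs here are arbitrary):

foreach (array('abc', 'def', 'ghi', 'jkl') as $input) {
    $hash = ourHash($input);
    for ($i = 0; $i < 100; $i++) {
        $hash = ourHash($hash); // note: only the previous output goes back in
    }
    echo $input . ' => ' . $hash . "\n";
}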
That outputs:
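abc => 152
def => 152
ghi => 152
jkl => 152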
Hrm, wow. We've generated collisions!!! Let's try to look at why:
Here's the output of hashing a string of each and every possible hash output:
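That listing runs to 256 lines, so it's easier to regenerate than to read; a loop along these lines produces it:

foreach (range(0, 255) as $i) {
    echo $i . ' => ' . ourHash((string) $i) . "\n";
}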
Notice the tendency towards higher numbers. That turns out to be our downfall. Running the hash 4 times ($hash = ourHash($hash), for each element) winds up narrowing us down to just 8 values... That's bad... Our original function mapped S(∞) onto S(256). That is, we've created a Surjective Function mapping $input to $output.

Since we have a Surjective Function, we have no guarantee the mapping for any subset of the input won't have collisions (in fact, in practice they will).
That's what happened here! Our function was bad, but that's not why this worked (that's why it worked so quickly and so completely).
The same thing happens with MD5. It maps S(∞) onto S(2^128), and there's no guarantee that running MD5(S(output)) will be Injective, meaning that it won't have collisions.

TL/DR Section
Therefore, since feeding the output back to md5 directly can generate collisions, every iteration will increase the chance of collisions. This is a linear increase however, which means that while the result set of 2^128 is reduced, it's not reduced fast enough to be a critical flaw.

So,
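with a scheme like this, where each round sees only the previous output (the iteration count is arbitrary):

$hash = md5($plaintext_password);
for ($i = 0; $i < 1000; $i++) {
    $hash = md5($hash); // nothing but the prior output is fed back in
}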
The more times you iterate, the further the reduction goes.
The Fix
Fortunately for us, there's a trivial way to fix this: Feed back something into the further iterations:
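For example (same arbitrary iteration count as before):

$hash = md5($input);
for ($i = 0; $i < 1000; $i++) {
    $hash = md5($input . $hash); // the original input is mixed back into every round
}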
Note that the further iterations aren't 2^128 for each individual value of $input. Meaning that we may be able to generate $input values that still collide down the line (and hence will settle or resonate at far less than 2^128 possible outputs). But the general case for $input is still as strong as it was for a single round.

Wait, was it? Let's test this out with our ourHash() function, switching to $hash = ourHash($input . $hash); for 100 iterations:
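A sketch of that test over each value 0 through 255 (the driver loop is an assumption; the inner feedback line is the part that matters):

foreach (range(0, 255) as $i) {
    $input = (string) $i;
    $hash = ourHash($input);
    for ($n = 0; $n < 100; $n++) {
        $hash = ourHash($input . $hash); // input rides along on every round
    }
    echo $input . ' => ' . $hash . "\n";
}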
There's still a rough pattern there, but note that it's no more of a pattern than our underlying function (which was already quite weak).
Notice however that 0 and 3 became collisions, even though they weren't in the single run. That's an application of what I said before (that the collision resistance stays the same for the set of all inputs, but specific collision routes may open up due to flaws in the underlying algorithm).

TL/DR Section
By feeding back the input into each iteration, we effectively break any collisions that may have occurred in the prior iteration.
Therefore, md5($input . md5($input)); should be (theoretically at least) as strong as md5($input).

Is This Important?
Yes. This is one of the reasons that PBKDF2 replaced PBKDF1 in RFC 2898. Consider the inner loops of the two:
PBKDF1:
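(as given in RFC 2898)

T_1 = Hash(P || S)
T_2 = Hash(T_1)
...
T_c = Hash(T_{c-1})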
where c is the iteration count, P is the Password and S is the salt.

PBKDF2:
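(also from RFC 2898, for a single output block with index i)

U_1 = PRF(P, S || INT(i))
U_2 = PRF(P, U_1)
...
U_c = PRF(P, U_{c-1})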
where PRF is really just an HMAC. But for our purposes here, let's just say that PRF(P, S) = Hash(P || S) (that is, the PRF of 2 inputs is roughly the same as a hash of the two concatenated together). It's very much not, but for our purposes it is.

So PBKDF2 maintains the collision resistance of the underlying Hash function, where PBKDF1 does not.

Tying All Of It Together:
We know of secure ways of iterating a hash. In fact:
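a loop along these lines (SHA-256 and the 100,000 count here are illustrative stand-ins; any strong hash and a suitably large count will do):

$hash = hash('sha256', $input);
for ($i = 0; $i < 100000; $i++) {
    $hash = hash('sha256', $input . $hash); // the input is fed back into every round
}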
Is typically safe.
Now, to go into why we would want to iterate the hash in the first place, let's analyze the entropy movement.
A hash takes in the infinite set S(∞) and produces a smaller, consistently sized set S(n). The next iteration (assuming the input is passed back in) maps S(∞) onto S(n) again.

Notice that the final output has exactly the same amount of entropy as the first one. Iterating will not "make it more obscured". The entropy is identical. There's no magic source of unpredictability (it's a Pseudo-Random-Function, not a Random Function).
There is however a gain to iterating. It makes the hashing process artificially slower. And that's why iterating can be a good idea. In fact, it's the basic principle of most modern password hashing algorithms (the fact that doing something over-and-over makes it slower).
Slow is good, because it's combating the primary security threat: brute forcing. The slower we make our hashing algorithm, the harder attackers have to work to attack password hashes stolen from us. And that's a good thing!!!
Yes, re-hashing reduces the search space, but no, it doesn't matter - the effective reduction is insignificant.
Re-hashing increases the time it takes to brute-force, but doing so only twice is also suboptimal.
What you really want is to hash the password with PBKDF2 - a proven method of using a secure hash with salt and iterations. Check out this SO response.
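In PHP that's one call to the built-in hash_pbkdf2(); a sketch, where the salt handling and iteration count are illustrative:

$salt = random_bytes(16); // generate once per password and store it alongside the hash
$hashed_password = hash_pbkdf2('sha256', $plaintext_password, $salt, 100000, 64);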
EDIT: I almost forgot - DON'T USE MD5!!!! Use a modern cryptographic hash such as the SHA-2 family (SHA-256, SHA-384, and SHA-512).
Yes - it reduces the number of possible strings that match the hash.
As you have already mentioned, salted hashes are much better.
An article here: http://websecurity.ro/blog/2007/11/02/md5md5-vs-md5/ attempts a proof of why it is equivalent, but I'm not sure about the logic. Partly they assume that there isn't software available to analyse md5(md5(text)), but obviously it's fairly trivial to produce the rainbow tables.
I'm still sticking with my answer that there is a smaller number of md5(md5(text))-type hashes than md5(text) hashes, increasing the chance of collision (even if still an unlikely probability) and reducing the search space.
Most answers are by people without a background in cryptography or security. And they are wrong. Use a salt, if possible unique per record. MD5/SHA/etc. are too fast, the opposite of what you want. PBKDF2 and bcrypt are slower (which is good) but can be defeated with ASICs/FPGAs/GPUs (very affordable nowadays). So a memory-hard algorithm is needed: enter scrypt.
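If PHP's sodium extension is available, its scrypt-based password hash is one option; a sketch, where the _INTERACTIVE limits are the library's baseline tuning:

$hashed_password = sodium_crypto_pwhash_scryptsalsa208sha256_str(
    $plaintext_password,
    SODIUM_CRYPTO_PWHASH_SCRYPTSALSA208SHA256_OPSLIMIT_INTERACTIVE,
    SODIUM_CRYPTO_PWHASH_SCRYPTSALSA208SHA256_MEMLIMIT_INTERACTIVE
);
// To check a login attempt against the stored string:
if (sodium_crypto_pwhash_scryptsalsa208sha256_str_verify($hashed_password, $plaintext_password)) { /* ok */ }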
Here's a layman's explanation of salts and speed (but not of memory-hard algorithms).
In general, it provides no additional security to double hash or double encrypt something. If you can break the hash once, you can break it again. It usually doesn't hurt security to do this, though.
In your example of using MD5, as you probably know there are some collision issues. "Double Hashing" doesn't really help protect against this, since the same collisions will still result in the same first hash, which you can then MD5 again to get the second hash.
This does protect against dictionary attacks, like those "reverse MD5-databases", but so does salting.
On a tangent, double encrypting something doesn't provide any additional security, because all it does is result in a different key which is a combination of the two keys actually used. So the effort to find the "key" is not doubled, because two keys do not actually need to be found. This isn't true for hashing, because the result of the hash is not usually the same length as the original input.
I just look at this from a practical standpoint. What is the hacker after? Why, the combination of characters that, when put through the hash function, generates the desired hash.
You are only saving the last hash, therefore, the hacker only has to bruteforce one hash. Assuming you have roughly the same odds of stumbling across the desired hash with each bruteforce step, the number of hashes is irrelevant. You could do a million hash iterations, and it would not increase or reduce security one bit, since at the end of the line there's still only one hash to break, and the odds of breaking it are the same as any hash.
Maybe the previous posters think that the input is relevant; it's not. As long as whatever you put into the hash function generates the desired hash, it will get you through, correct input or incorrect input.
Now, rainbow tables are another story. Since a rainbow table only carries raw passwords, hashing twice may be a good security measure, since a rainbow table that contains every hash of every hash would be too large.
Of course, I'm only considering the example the OP gave, where it's just a plain-text password being hashed. If you include the username or a salt in the hash, it's a different story; hashing twice is entirely unnecessary, since the rainbow table would already be too large to be practical and contain the right hash.
Anyway, not a security expert here, but that's just what I've figured from my experience.
From what I've read, it may actually be recommended to re-hash the password hundreds or thousands of times.
The idea is that if you can make it take more time to encode the password, it's more work for an attacker to run through many guesses to crack the password. That seems to be the advantage to re-hashing -- not that it's more cryptographically secure, but it simply takes longer to generate a dictionary attack.
Of course computers get faster all the time, so this advantage diminishes over time (or requires you to increase the iterations).
Double hashing makes sense to me only if I hash the password on the client, and then save the hash (with different salt) of that hash on the server.
That way even if someone hacked his way into the server (thereby ignoring the safety SSL provides), he still can't get to the clear passwords.
Yes he will have the data required to breach into the system, but he wouldn't be able to use that data to compromise outside accounts the user has. And people are known to use the same password for virtually anything.
The only way he could get to the clear passwords is by installing a keylogger on the client - and that's not your problem anymore.
So in short:
Personally I wouldn't bother with multiple hashes, but I'd make sure to also hash the UserName (or another user ID field) as well as the password so two users with the same password won't end up with the same hash. Also I'd probably throw some other constant string into the input string too for good measure.
Let us assume you use the hashing algorithm: compute rot13, take the first 10 characters. If you do that twice (or even 2000 times) it is possible to make a function that is faster, but which gives the same result (namely just take the first 10 chars).
Likewise it may be possible to make a faster function that gives the same output as a repeated hashing function. So your choice of hashing function is very important: as with the rot13 example it is not given that repeated hashing will improve security. If there is no research saying that the algorithm is designed for recursive use, then it is safer to assume that it will not give you added protection.
That said: For all but the simplest hashing functions it will most likely take cryptography experts to compute the faster functions, so if you are guarding against attackers that do not have access to cryptography experts it is probably safer in practice to use a repeated hashing function.
The concern about reducing the search space is mathematically correct, although the search space remains large enough for all practical purposes (assuming you use salts), at 2^128. However, since we are talking about passwords, the number of possible 16-character strings (alphanumeric, case-sensitive, a few symbols thrown in) is roughly 2^98, according to my back-of-the-envelope calculations. So the perceived decrease in the search space is not really relevant.
Aside from that, there really is no difference, cryptographically speaking.
Although there is a crypto primitive called a "hash chain" -- a technique that allows you to do some cool tricks, like disclosing a signature key after it's been used, without sacrificing the integrity of the system -- given minimal time synchronization, this allows you to cleanly sidestep the problem of initial key distribution. Basically, you precompute a large set of hashes of hashes - h(h(h(h....(h(k))...))) - use the nth value to sign, and after a set interval, you send out the key and sign it using key (n-1). The recipients can now verify that you sent all the previous messages, and no one can fake your signature since the time period for which it is valid has passed.
Re-hashing hundreds of thousands of times like Bill suggests is just a waste of your CPU. Use a longer key if you are concerned about people breaking 128 bits.
As several responses in this article suggest, there are some cases where it may improve security and others where it definitely hurts it. There is a better solution that will definitely improve security. Instead of doubling the number of times you calculate the hash, double the size of your salt, or double the number of bits used in the hash, or do both! Instead of SHA-256, jump up to SHA-512.
Double hashing is ugly because it's more than likely an attacker has built a table to come up with most hashes. Better is to salt your hashes, and mix hashes together. There are also new schemes to "sign" hashes (basically salting), but in a more secure manner.
Yes.
Absolutely do not use multiple iterations of a conventional hash function, like md5(md5(md5(password))). At best you will be getting a marginal increase in security (a scheme like this offers hardly any protection against a GPU attack; just pipeline it). At worst, you're reducing your hash space (and thus security) with every iteration you add. In security, it's wise to assume the worst.

Do use a password hash that's been designed by a competent cryptographer to be an effective password hash, and resistant to both brute-force and time-space attacks. These include bcrypt, scrypt, and in some situations PBKDF2. The glibc SHA-256-based hash is also acceptable.
I'm going to go out on a limb and say it's more secure in certain circumstances... don't downvote me yet though!
From a mathematical / cryptographical point of view, it's less secure, for reasons that I'm sure someone else will give you a clearer explanation of than I could.
However, there exist large databases of MD5 hashes, which are more likely to contain the "password" text than the MD5 of it. So by double-hashing you're reducing the effectiveness of those databases.
Of course, if you use a salt then this advantage (disadvantage?) goes away.