Secure mixing of entropy sources

Posted on 2024-09-13 19:41:39

Let us assume we're generating very large (e.g. 128- or 256-bit) numbers to serve as keys for a block cipher.

Let us further assume that we wear tinfoil hats (at least when outside).

Being so paranoid, we want to be sure of our available entropy, but we don't entirely trust any particular source. Maybe the government is rigging our coins. Maybe these dice are ever so subtly weighted. What if the hardware interrupts feeding into /dev/random are just a little too consistent? (Besides being paranoid, we're lazy enough that we don't want to generate it all by hand...)

So, let's mix them all up.

What are the secure method(s) for doing this? Presumably just concatenating a few bytes from each source isn't entirely secure -- if one of the sources is biased, it might, in theory, lend itself to something like a related-key attack.

Is running SHA-256 over the concatenated bytes sufficient?

(And yes, at some point soon I am going to pick up a copy of Cryptography Engineering. :))


Comments (4)

沒落の蓅哖 2024-09-20 19:41:39

Since you mention /dev/random -- on Linux at least, /dev/random is fed by an algorithm that does very much what you're describing. It takes several variously-trusted entropy sources and mixes them into an "entropy pool" using a polynomial function -- for each new byte of entropy that comes in, it's xor'd into the pool, and then the entire pool is stirred with the mixing function. When it's desired to get some randomness out of the pool, the entire pool is hashed with SHA-1 to get the output, then the pool is mixed again (and actually there's some more hashing, folding, and mutilating going on to make sure that reversing the process is about as hard as reversing SHA-1). At the same time, there's a bunch of accounting going on -- each time some entropy is added to the pool, an estimate of the number of bits of entropy it's worth is added to the account, and each time some bytes are extracted from the pool, that number is subtracted, and the random device will block (waiting on more external entropy) if the account would go below zero. Of course, if you use the "urandom" device, the blocking doesn't happen and the pool simply keeps getting hashed and mixed to produce more bytes, which turns it into a PRNG instead of an RNG.

Anyway... it's actually pretty interesting and pretty well commented -- you might want to study it. drivers/char/random.c in the linux-2.6 tree.
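To give a feel for that design, here's a toy Python sketch of such a pool: XOR new bytes in, stir, hash to extract, with the entropy accounting described above. This is a simplification invented for illustration -- the real driver's mixing function is a polynomial over GF(2) rather than a rehash, and every name here is hypothetical.

    import hashlib

    class ToyEntropyPool:
        # Toy model of the /dev/random design: XOR input in, stir the pool,
        # hash to extract, and keep an entropy "account". Illustration only;
        # the actual kernel mixing function is different and more careful.
        def __init__(self, size=64):
            self.pool = bytearray(size)
            self.pos = 0
            self.entropy_bits = 0

        def add_entropy(self, data, est_bits):
            for b in data:
                self.pool[self.pos] ^= b              # XOR each new byte into the pool
                self.pos = (self.pos + 1) % len(self.pool)
            self._stir()                              # then stir the whole pool
            self.entropy_bits += est_bits             # credit the account

        def _stir(self):
            # Stand-in for the kernel's polynomial mixing: fold a hash of
            # the whole pool back into it.
            digest = hashlib.sha1(bytes(self.pool)).digest()
            for i, b in enumerate(digest):
                self.pool[i % len(self.pool)] ^= b

        def extract(self, n):
            if self.entropy_bits < n * 8:
                raise BlockingIOError("would block, waiting on more external entropy")
            out = b""
            while len(out) < n:
                out += hashlib.sha1(bytes(self.pool)).digest()  # hash the pool for output
                self._stir()                                    # mix again afterwards
            self.entropy_bits -= n * 8                          # debit the account
            return out[:n]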

赠我空喜 2024-09-20 19:41:39

Using a hash function is a good approach - just make sure you underestimate the amount of entropy each source contributes, so that if you are right about one or more of them being less than totally random, you haven't weakened your key unduly.

This isn't dissimilar to the approach used in key stretching (though you have no need for multiple iterations here).
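A minimal sketch of that approach, assuming each source has already been collected as a byte string (all names here are hypothetical): feed every source into one SHA-256, length-prefixed so the concatenation is unambiguous, and gather well more raw input than the 256 bits you need so an overestimated source doesn't leave the key short.

    import hashlib, os

    def mix_sources(sources):
        # SHA-256 over the length-prefixed concatenation of all sources.
        h = hashlib.sha256()
        for s in sources:
            h.update(len(s).to_bytes(4, "big"))  # length prefix: unambiguous framing
            h.update(s)
        return h.digest()                        # 256-bit key

    # Hypothetical inputs; collect well over 256 bits of *estimated* entropy
    # in total, so a weaker-than-hoped source doesn't unduly weaken the key.
    coin_flips = bytes([0b10110010, 0b01011101])   # hand-transcribed coin flips
    os_bytes = os.urandom(32)                      # the OS entropy source
    key = mix_sources([coin_flips, os_bytes])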

゛清羽墨安 2024-09-20 19:41:39

I've done this before, and my approach was just to XOR them, byte-by-byte, against each other.

Running them through some other algorithm, like SHA-256, is terribly inefficient, so it's not practical, and I think it would not really be useful and could possibly be harmful.

If you do happen to be incredibly paranoid, and have a tiny bit of money, it might be fun to buy a "true" (depending on how convinced you are by Quantum Mechanics) Quantum Random Number Generator.

-- Edit:

FWIW, I think the method I describe above (or something similar) is effectively a One-Time Pad from the point of view of either source, assuming one of them is random, and therefore unattackable assuming they are independent (even if one of them is out to get you). I'm happy to be corrected on this if someone takes issue with it, and I encourage anyone not taking issue with it to question it anyway, and find out for yourself.
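For concreteness, a minimal sketch of that XOR approach (a hypothetical helper, not code from the answer): every source must contribute the same number of bytes, and the one-time-pad argument only holds if the sources really are independent of one another.

    def xor_mix(*sources):
        # XOR equal-length byte strings together. If at least one source is
        # uniformly random and independent of the rest, so is the result.
        n = len(sources[0])
        if any(len(s) != n for s in sources):
            raise ValueError("all sources must contribute the same number of bytes")
        out = bytearray(n)
        for s in sources:
            for i, b in enumerate(s):
                out[i] ^= b
        return bytes(out)

    key = xor_mix(coin_flip_bytes, dice_bytes, os_bytes)  # hypothetical 32-byte inputs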

肩上的翅膀 2024-09-20 19:41:39

If you have a source of randomness but you're not sure whether it is biased or not, then there are a lot of different algorithms. Depending on how much work you want to do, the amount of entropy you waste from the original source differs.

The easiest algorithm is the (improved) von Neumann algorithm. You can find the details in this pdf:
http://security1.win.tue.nl/~bskoric/physsec/files/PhysSec_LectureNotes.pdf
on page 27.

I also recommend reading this document if you're interested in how to produce uniform randomness from a given source, how true random number generators work, etc.!
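For reference, here's the basic (unimproved) von Neumann extractor in Python; the improved variants in those lecture notes recycle the discarded pairs, but the core idea is just this pairing trick.

    def von_neumann(bits):
        # Read bits in pairs: 01 -> 0, 10 -> 1, discard 00 and 11. For
        # independent flips of a biased coin, P(01) == P(10), so the output
        # is unbiased -- at the cost of throwing away most of the input.
        out = []
        it = iter(bits)
        for a in it:
            b = next(it, None)
            if b is None:
                break
            if a != b:
                out.append(a)
        return out

    # A coin that lands 1 three times out of four still gives unbiased output:
    biased = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1]
    print(von_neumann(biased))  # pairs (1,1)(0,1)(1,0)(1,1)(1,0)(1,1) -> [0, 1, 1]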
