Fast and secure random numbers

Posted 2024-09-16 15:21:45


I was searching for a faster alternative to /dev/urandom when I stumbled across this interesting tidbit:

One good trick for generating very good non-random-but-nearly-random bits is to use /dev/random's entropy to seed a fast symmetric stream cipher (my favorite is blowfish), and redirect its output to the application that needs it.

That's not a beginner's technique, but it's easy to set up with a two- or three-line shell script and some creative pipes.

Further research yielded this comment from Schneier on Security:

If you are going to "inject entropy" there are a number of ways to do it, but one of the better ways is to "spread" it across a high-speed stream cipher and couple it with a non-deterministic sampling system.

Correct me if I'm wrong, but it appears that this method of generating random bits is simply better than /dev/urandom in terms of speed and security.

So, here is my take on the actual code:

time dd if=/dev/zero bs=1M count=400 | openssl bf-ofb -pass "pass:$(tr -dc '[:graph:]' < /dev/urandom | head -c56)" > /dev/null

This speed test takes 400 MB of zeroes and encrypts it using blowfish with a 448-bit key made of pseudo-random, printable characters. Here's the output on my netbook:

400+0 records in
400+0 records out
419430400 bytes (419 MB) copied, 14.0068 s, 29.9 MB/s

real 0m14.025s
user 0m12.909s
sys 0m2.004s

That's great! But how random is it? Let's pipe the results to ent:

Entropy = 8.000000 bits per byte.

Optimum compression would reduce the size
of this 419430416 byte file by 0 percent.

Chi square distribution for 419430416 samples is 250.92, and randomly
would exceed this value 50.00 percent of the times.

Arithmetic mean value of data bytes is 127.5091 (127.5 = random).
Monte Carlo value for Pi is 3.141204882 (error 0.01 percent).
Serial correlation coefficient is -0.000005 (totally uncorrelated = 0.0).
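The first number in that report, entropy per byte, can be approximated with a few lines of shell — a rough stand-in for ent's first metric, assuming only POSIX od, tr, grep, and awk:

```shell
# Shannon entropy in bits per byte of stdin (8.000000 = ideal random data).
entropy() {
  od -An -v -tu1 |              # dump input as one decimal byte value per field
  tr -s ' ' '\n' | grep -v '^$' |
  awk '{ c[$1]++; n++ }
       END { for (b in c) { p = c[b]/n; H -= p * log(p)/log(2) }
             printf "%.6f bits per byte\n", H }'
}

# A megabyte of /dev/urandom should score very close to 8.000000:
head -c 1048576 /dev/urandom | entropy
```

This only reproduces the entropy line; ent's chi-square, Monte Carlo, and serial-correlation tests catch non-uniformity that a flat byte histogram alone misses.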

It looks good. However, my code has some obvious flaws:

  1. It uses /dev/urandom for the initial entropy source.
  2. Key strength is not equivalent to 448 bits because only printable characters are used.
  3. The cipher should be periodically re-seeded to "spread" out the entropy.
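A sketch addressing flaws 1 and 2: take the key and IV from /dev/random rather than /dev/urandom, and pass them to openssl as raw hex via -K/-iv so no entropy is lost to the printable-character filter. Note that Blowfish is a legacy cipher in OpenSSL 3, so this sketch swaps in AES-256-CTR, whose full 256-bit key can be supplied directly:

```shell
# Key and IV drawn from /dev/random, hex-encoded with no loss of entropy.
KEY=$(head -c 32 /dev/random | od -An -v -tx1 | tr -d ' \n')   # 256-bit key
IV=$(head -c 16 /dev/random | od -An -v -tx1 | tr -d ' \n')    # 128-bit CTR IV

# Same keystream construction as the original one-liner, with full-strength keying.
dd if=/dev/zero bs=1M count=400 | openssl enc -aes-256-ctr -K "$KEY" -iv "$IV" > /dev/null
```

Flaw 3 would still need a wrapper loop that re-runs the pipeline with a fresh key every N megabytes.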

So, I was wondering if I am on the right track. And if anyone knows how to fix any of these flaws that would be great. Also, could you please share what you use to securely wipe disks if it's anything other than /dev/urandom, sfill, badblocks, or DBAN?

Thank you!

Edit: Updated code to use blowfish as a stream cipher.


Comment from 熊抱啵儿 (2024-09-23 15:21:45):


If you're simply seeking to erase disks securely, you really don't have to worry that much about the randomness of the data you write. The important thing is to write to everything you possibly can - maybe a couple of times. Anything much more than that is overkill unless your 'opponent' is a large government organization with the resources to spare to indulge in data recovery (and it is not clear cut that they can read it even so - not these days with the disk densities now used). I've used the GNU 'shred' program - but I'm only casually concerned about it. When I did that, I formatted a disk system onto the disk drive, then filled it with a single file containing quasi-random data, then shredded that. I think it was mostly overkill.
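For reference, a minimal run of the shred approach mentioned above, demonstrated on a scratch file; for a whole disk you would target the block device itself (shown below only as the placeholder /dev/sdX):

```shell
# GNU shred demo on a scratch file; the overwrite-then-delete behavior is the
# same one used on a device.
tmp=$(mktemp)
printf 'sensitive data' > "$tmp"
shred -v -n 2 -u "$tmp"   # two pseudo-random overwrite passes, then unlink

# For a whole disk (DESTRUCTIVE - /dev/sdX is a placeholder, double-check
# the device name first):
# shred -v -n 2 /dev/sdX
```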

Maybe you should read Schneier's 'Cryptography Engineering' book?
