Caching SHA1 digest results?

Published 2024-08-27 08:35:35 · 211 characters · 11 views · 0 comments


I'm storing several versions of a file based on a digest of the original filename and its version, like this:

$filename = sha1($original . ':' . $version);

Would it be worthwhile to cache the digest ($filename) in memcache as a key/value pair (the key being original + version, the value the SHA1 hash), or is generating the digest fast enough for a high-traffic PHP web app?
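One way to settle the question is to time sha1() directly. A minimal micro-benchmark sketch (the file name and version are made-up examples) that hashes a short name:version string 100,000 times:

```php
<?php
// Hypothetical inputs, mirroring the pattern in the question.
$original = 'report.pdf';
$version  = 42;

$start = microtime(true);
for ($i = 0; $i < 100000; $i++) {
    // Same expression as in the question: digest of "original:version".
    $filename = sha1($original . ':' . $version);
}
$elapsed = microtime(true) - $start;

printf("Total: %.4f s, per hash: %.2f microseconds\n",
       $elapsed, $elapsed * 1e6 / 100000);
```

On typical hardware the per-hash cost lands in the low microseconds, which is why a memcache lookup (a network round-trip) is unlikely to be cheaper than just recomputing the hash.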

Thanks,

Johnathan


2 Answers

禾厶谷欠 2024-09-03 08:35:35


You're much better off not caching the hashes. Computing 100,000 hashes on short filenames takes around 1/2 a second on my laptop (a reasonably fast Core 2 Duo):

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        byte[][] fileNames = Enumerable.Range(0, 100).Select(i => new UnicodeEncoding().GetBytes(System.IO.Path.GetRandomFileName())).ToArray();
        Stopwatch stopWatch = new Stopwatch();

        using (SHA1CryptoServiceProvider sha1 = new SHA1CryptoServiceProvider())
        {
            stopWatch.Start();
            for (int j = 0; j < 1000; j++)
            {
                for (int i = 0; i < 100; i++)
                {
                    sha1.ComputeHash(fileNames[i]);
                }
            }
            stopWatch.Stop();
            Console.WriteLine("Total: {0}", stopWatch.Elapsed);
            Console.WriteLine("Time per hash: {0}", new TimeSpan(stopWatch.ElapsedTicks / 100000));
        }

Total: 00:00:00.5186110
Time per hash: 00:00:00.0000014

樱娆 2024-09-03 08:35:35


Hashes are extremely fast, especially for small inputs (such as the name and version of a file).

Now, if you were hashing the files themselves, and they were very large, that would be a different story (simply because it would take so long to read the entire file off the disk).
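In that large-file case, caching the digest is exactly what pays off, since the disk read dominates. A sketch of one approach (the helper name and the in-memory array cache are hypothetical; a real app might use memcache here), keyed by path plus mtime so the cache invalidates when the file changes:

```php
<?php
// Hypothetical helper: cache the SHA1 of a file's *contents*, keyed by
// path + mtime, so a changed file gets a fresh digest.
function cached_file_sha1(string $path, array &$cache): string {
    $key = $path . ':' . filemtime($path);
    if (!isset($cache[$key])) {
        $cache[$key] = sha1_file($path); // reads the whole file from disk
    }
    return $cache[$key];
}
```

Repeat calls for an unchanged file then skip the disk read entirely, which matters far more than the few microseconds the hash computation itself costs.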
