Hash functions for strings

Posted 2024-12-08 12:22:21

I'm working on a hash table in C and I'm testing hash functions for strings.

The first function I tried adds up the ASCII codes and takes the sum modulo 100, but I got poor results with the first test data set: 40 collisions for 130 words (a sketch of that naive hash is below).

The final input data will contain 8000 words (it's a dictionary stored in a file). The hash table is declared as int table[10000] and contains the position of each word in a .txt file.

  • Which is the best algorithm for hashing strings?
  • And how do I determine the size of the hash table?
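
For reference, the naive function described above was presumably something like this (a reconstruction for illustration, not the asker's actual code):

/* Reconstructed naive hash: sum the ASCII codes, reduce modulo 100.
   Anagrams such as "ab" and "ba" collide by construction, which helps
   explain the poor results. */
unsigned additive_hash(const char *s)
{
    unsigned sum = 0;
    while (*s)
        sum += (unsigned char)*s++;
    return sum % 100;
}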


Comments (11)

柠北森屋 2024-12-15 12:22:21

I've had nice results with djb2 by Dan Bernstein.

unsigned long
hash(unsigned char *str)
{
    unsigned long hash = 5381;
    int c;

    while ((c = *str++))
        hash = ((hash << 5) + hash) + c; /* hash * 33 + c */

    return hash;
}
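
Note that djb2 returns a full unsigned long; you reduce it to a table index at lookup time. A minimal usage sketch building on hash() above (TABLE_SIZE is a hypothetical name, not part of the answer):

#define TABLE_SIZE 16384    /* example size; a power of 2 allows masking */

unsigned long index_of(char *word)
{
    unsigned long h = hash((unsigned char *)word);
    return h % TABLE_SIZE;          /* or: h & (TABLE_SIZE - 1) */
}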
画骨成沙 2024-12-15 12:22:21

First, you generally do not want to use a cryptographic hash for a hash table. An algorithm that's very fast by cryptographic standards is still excruciatingly slow by hash table standards.

Second, you want to ensure that every bit of the input can/will affect the result. One easy way to do that is to rotate the current result by some number of bits, then XOR the current hash code with the current byte. Repeat until you reach the end of the string. Note that you generally do not want the rotation to be an even multiple of the byte size either.

For example, assuming the common case of 8-bit bytes, you might rotate by 5 bits:

static inline unsigned rol(unsigned x, int bits) {
    /* rotate left: this idiom compiles to a single instruction */
    return (x << bits) | (x >> (sizeof(x) * 8 - bits));
}

unsigned hash(char const *input) {
    unsigned result = 0x55555555;

    while (*input) {
        result ^= (unsigned char)*input++;  /* avoid sign extension */
        result = rol(result, 5);
    }
    return result;
}

Edit: Also note that 10000 slots is rarely a good choice for a hash table size. You usually want one of two things: you either want a prime number as the size (required to ensure correctness with some types of hash resolution) or else a power of 2 (so reducing the value to the correct range can be done with a simple bit-mask).
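
Concretely, the two choices lead to two different reduction idioms (a sketch; the sizes are examples, not from the answer):

#include <stdint.h>
#include <stddef.h>

#define SIZE_POW2  16384    /* 2^14 >= 8000 words: reduce with a mask  */
#define SIZE_PRIME 10007    /* smallest prime >= 10000: reduce with %  */

/* Mask: a single AND, but relies on the hash mixing its low bits well. */
size_t index_pow2(uint32_t h)  { return h & (SIZE_POW2 - 1); }

/* Modulo a prime: slower, but required by some probing schemes
   (e.g. double hashing) and more forgiving of weak hashes. */
size_t index_prime(uint32_t h) { return h % SIZE_PRIME; }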

余生共白头 2024-12-15 12:22:21

I wanted to verify Xiaoning Bian's answer, but unfortunately he didn't post his code. So I implemented a little test suite and ran different small hashing functions on the list of 466K English words to see the number of collisions for each:

Hash function      |     Collisions | Time (words) | Time (file) | Avalanching
===============================================================================
CRC32              |    23 (0.005%) |     96.1 ms  |   120.9 ms  |      99.8%
MurmurOAAT         |    26 (0.006%) |     17.2 ms  |    19.4 ms  |     100.0%
FNV hash           |    32 (0.007%) |     16.4 ms  |    14.0 ms  |      98.6%
Jenkins OAAT       |    36 (0.008%) |     19.9 ms  |    18.5 ms  |     100.0%
DJB2 hash          |   344 (0.074%) |     18.2 ms  |    12.7 ms  |      92.9%
K&R V2             |   356 (0.076%) |     18.0 ms  |    11.9 ms  |      92.8%
Coffin             |   763 (0.164%) |     16.3 ms  |    11.7 ms  |      91.4%
x17 hash           |  2242 (0.481%) |     18.6 ms  |    20.0 ms  |      91.5%
-------------------------------------------------------------------------------
large_prime        |    25 (0.005%) |     16.5 ms  |    18.6 ms  |      96.6%
MurmurHash3_x86_32 |    19 (0.004%) |     20.5 ms  |     4.4 ms  |     100.0%

I included times for both: hashing all words individually and hashing the entire file of all English words once. I also included the more complex MurmurHash3_x86_32 in the test for reference. Additionally, I calculated "avalanching", which is a measure of how unpredictable the hashes of consecutive words are. Anything below 100% means "quite predictable" (which might be fine for hash tables, but is bad for other uses).

Conclusions:

  • There is almost no point in using the popular DJB2 hash function for strings on Intel x86-64 (or AArch64, for that matter): it has many more collisions than similar functions (FNV, Jenkins OAAT and MurmurOAAT) while offering comparable throughput. Bernstein's DJB2 performs especially badly on short strings. Example collisions: Liz/MHz, Bon/COM, Rey/SEX.
  • If you have to use a "multiply by an odd number" method (such as DJB2 = x33, K&R V2 = x31 or SDBM = x65599), choose a large 32-bit prime instead of a small 8-bit one; it makes a big difference in reducing collisions. For example, hash = 17000069 * hash + s[i] produces only 25 collisions, compared to DJB2's 344.

Test code (compiled with gcc -O2):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

#define MODE     1          // 1 = hash each word individually; 2 = hash the entire file
#define MAXLEN   50         // maximum length of a word
#define MAXWORDS 500000     // maximum number of words in a file
#define SEED     0x12345678

#if MODE == 1
char words[MAXWORDS][MAXLEN];
#else
char file[MAXWORDS * MAXLEN];
#endif

uint32_t DJB2_hash(const uint8_t *str)
{
    uint32_t hash = 5381;
    uint8_t c;
    while ((c = *str++))
        hash = ((hash << 5) + hash) + c; /* hash * 33 + c */
    return hash;
}

uint32_t FNV(const void* key, uint32_t h)
{
    // See: https://github.com/aappleby/smhasher/blob/master/src/Hashes.cpp
    h ^= 2166136261UL;
    const uint8_t* data = (const uint8_t*)key;
    for(int i = 0; data[i] != '\0'; i++)
    {
        h ^= data[i];
        h *= 16777619;
    }
    return h;
}

uint32_t MurmurOAAT_32(const char* str, uint32_t h)
{
    // One-byte-at-a-time hash based on Murmur's mix
    // Source: https://github.com/aappleby/smhasher/blob/master/src/Hashes.cpp
    for (; *str; ++str) {
        h ^= *str;
        h *= 0x5bd1e995;
        h ^= h >> 15;
    }
    return h;
}

uint32_t KR_v2_hash(const char *s)
{
    // Source: https://stackoverflow.com/a/45641002/5407270
    // a.k.a. Java String hashCode()
    uint32_t hashval = 0;
    for (hashval = 0; *s != '\0'; s++)
        hashval = *s + 31*hashval;
    return hashval;
}

uint32_t Jenkins_one_at_a_time_hash(const char *str)
{
    uint32_t hash, i;
    for (hash = i = 0; str[i] != '\0'; ++i)
    {
        hash += str[i];
        hash += (hash << 10);
        hash ^= (hash >> 6);
    }
    hash += (hash << 3);
    hash ^= (hash >> 11);
    hash += (hash << 15);
    return hash;
}

uint32_t crc32b(const uint8_t *str) {
    // Source: https://stackoverflow.com/a/21001712
    unsigned int byte, crc, mask;
    int i = 0, j;
    crc = 0xFFFFFFFF;
    while (str[i] != 0) {
        byte = str[i];
        crc = crc ^ byte;
        for (j = 7; j >= 0; j--) {
            mask = -(crc & 1);
            crc = (crc >> 1) ^ (0xEDB88320 & mask);
        }
        i = i + 1;
    }
    return ~crc;
}

static inline uint32_t _rotl32(uint32_t x, int32_t bits)
{
    return x<<bits | x>>(32-bits);      // C idiom: will be optimized to a single operation
}

uint32_t Coffin_hash(char const *input) { 
    // Source: https://stackoverflow.com/a/7666668/5407270
    uint32_t result = 0x55555555;
    while (*input) { 
        result ^= *input++;
        result = _rotl32(result, 5);
    }
    return result;
}

uint32_t x17(const void * key, uint32_t h)
{
    // See: https://github.com/aappleby/smhasher/blob/master/src/Hashes.cpp
    const uint8_t * data = (const uint8_t*)key;
    for (int i = 0; data[i] != '\0'; ++i)
    {
        h = 17 * h + (data[i] - ' ');
    }
    return h ^ (h >> 16);
}

uint32_t large_prime(const uint8_t *s, uint32_t hash)
{
    // Better alternative to x33: multiply by a large co-prime instead of a small one
    while (*s != '\0')
        hash = 17000069*hash + *s++;
    return hash;
}

void readfile(FILE* fp, char* buffer) {
    fseek(fp, 0, SEEK_END);
    size_t size = ftell(fp);
    rewind(fp);
    fread(buffer, size, 1, fp);
    buffer[size] = '\0';
}

struct timeval stop, start;
void timing_start() {
    gettimeofday(&start, NULL);
}

void timing_end() {
    gettimeofday(&stop, NULL);
    printf("took %lu us\n", (stop.tv_sec - start.tv_sec) * 1000000 + stop.tv_usec - start.tv_usec);
}

uint32_t apply_hash(int hash, const char* line)
{
    uint32_t result = 0;
#if MODE == 2
    timing_start();
#endif
    switch (hash) {
        case 1: result = crc32b((const uint8_t*)line); break;
        case 2: result = large_prime((const uint8_t *)line, SEED); break;
        case 3: result = MurmurOAAT_32(line, SEED); break;
        case 4: result = FNV(line, SEED); break;
        case 5: result = Jenkins_one_at_a_time_hash(line); break;
        case 6: result = DJB2_hash((const uint8_t*)line); break;
        case 7: result = KR_v2_hash(line); break;
        case 8: result = Coffin_hash(line); break;
        case 9: result = x17(line, SEED); break;
        default: break;
    }
#if MODE == 2
    timing_end();
#endif
    return result;
}

int main(int argc, char* argv[])
{
    // Read arguments
    const int hash_choice = atoi(argv[1]);
    char const* const fname = argv[2];

    printf("Algorithm: %d\n", hash_choice);
    printf("File name: %s\n", fname);

    // Open file
    FILE* f = fopen(fname, "r");

#if MODE == 1
    char line[MAXLEN];
    size_t word_count = 0;
    uint32_t hashes[MAXWORDS];

    // Read file line by line
    while (fgets(line, sizeof(line), f)) {
        line[strcspn(line, "\n")] = '\0';   // strip newline
        strcpy(words[word_count], line);
        word_count++;
    }
    fclose(f);

    // Calculate hashes
    timing_start();
    for (size_t i = 0; i < word_count; i++) {
        hashes[i] = apply_hash(hash_choice, words[i]);
    }
    timing_end();

    // Output hashes
    for (size_t i = 0; i < word_count; i++) {
        printf("%08x\n", hashes[i]);
    }

#else 
    // Read entire file
    readfile(f, file);
    fclose(f);
    printf("Len: %lu\n", strlen(file)); 

    // Calculate hash
    uint32_t hash = apply_hash(hash_choice, file);
    printf("%08x\n", hash);
#endif

    return 0;
}

P.S. A more comprehensive review of the speed and quality of modern hash functions can be found in the SMHasher repository of Reini Urban (rurban). Note the "Quality problems" column in the table.

枫以 2024-12-15 12:22:21

Wikipedia shows a nice string hash function called Jenkins One At A Time Hash. It also quotes improved versions of this hash.

uint32_t jenkins_one_at_a_time_hash(char *key, size_t len)
{
    uint32_t hash, i;
    for(hash = i = 0; i < len; ++i)
    {
        hash += key[i];
        hash += (hash << 10);
        hash ^= (hash >> 6);
    }
    hash += (hash << 3);
    hash ^= (hash >> 11);
    hash += (hash << 15);
    return hash;
}
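
Unlike the NUL-terminated variants elsewhere in this thread, this version takes an explicit length, so a typical call passes strlen (a usage sketch, assuming the function above is in scope):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char word[] = "example";
    uint32_t h = jenkins_one_at_a_time_hash(word, strlen(word));
    printf("%08x\n", h);
    return 0;
}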
栖竹 2024-12-15 12:22:21

djb2 is good

Though djb2, as presented on stackoverflow by cnicutar, is almost certainly better, I think it's worth showing the K&R hashes too:

One of the K&R hashes is terrible, one is probably pretty good:

  1. Apparently a terrible hash algorithm, as presented in K&R 1st edition. This is simply a summation of all bytes in the string:
    unsigned long hash(unsigned char *str)
    {
        unsigned long hash = 0;
        int c;
    
        while ((c = *str++))
            hash += c;
    
        return hash;
    }
    
  2. Probably a pretty decent hash algorithm, as presented in K&R version 2 (verified by me on pg. 144 of the book). NB: be sure to remove % HASHSIZE from the return statement if you plan on doing the modulo sizing-to-your-array-length outside the hash algorithm. Also, I recommend making the return and "hashval" type unsigned long, or even better uint32_t or uint64_t, instead of plain unsigned (int). This simple algorithm takes the order of the characters into account by applying the recurrence hashvalue = new_byte + 31*hashvalue to all bytes in the string:
    unsigned hash(char *s)
    {
        unsigned hashval;
    
        for (hashval = 0; *s != '\0'; s++)
            hashval = *s + 31*hashval;
        return hashval % HASHSIZE;
    }
    

Note from the two algorithms that one reason the 1st edition hash is so terrible is that it does NOT take string character order into consideration: hash("ab") returns the same value as hash("ba"). The 2nd edition hash, however, (much better!) returns two different values for those strings, as the quick check below shows.
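
A quick worked check of that claim (not part of the original answer): with the 1st edition hash, hash("ab") = 97 + 98 = 195 = hash("ba"); with the 2nd edition recurrence, "ab" gives 98 + 31*97 = 3105 while "ba" gives 97 + 31*98 = 3135.

#include <stdio.h>

/* 2nd edition K&R hash without the modulo, for demonstration. */
static unsigned kr2(const char *s)
{
    unsigned hashval = 0;
    for (; *s != '\0'; s++)
        hashval = *s + 31 * hashval;
    return hashval;
}

int main(void)
{
    printf("%u %u\n", kr2("ab"), kr2("ba"));    /* prints: 3105 3135 */
    return 0;
}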

The GCC C++11 hashing function used by the std::unordered_map<> template container hash table is excellent.

The GCC C++11 hashing functions used for unordered_map (a hash table template) and unordered_set (a hash set template) appear to be as follows.

Code:

// Implementation of Murmur hash for 32-bit size_t.

// Helper used below; paraphrased from libstdc++'s hash_bytes.cc: read a
// size_t from a possibly unaligned pointer without undefined behaviour.
inline std::size_t unaligned_load(const char* p)
{
  std::size_t result;
  __builtin_memcpy(&result, p, sizeof(result));
  return result;
}

size_t _Hash_bytes(const void* ptr, size_t len, size_t seed)
{
  const size_t m = 0x5bd1e995;
  size_t hash = seed ^ len;
  const char* buf = static_cast<const char*>(ptr);

  // Mix 4 bytes at a time into the hash.
  while (len >= 4)
  {
    size_t k = unaligned_load(buf);
    k *= m;
    k ^= k >> 24;
    k *= m;
    hash *= m;
    hash ^= k;
    buf += 4;
    len -= 4;
  }

  // Handle the last few bytes of the input array.
  switch (len)
  {
    case 3:
      hash ^= static_cast<unsigned char>(buf[2]) << 16;
      [[gnu::fallthrough]];
    case 2:
      hash ^= static_cast<unsigned char>(buf[1]) << 8;
      [[gnu::fallthrough]];
    case 1:
      hash ^= static_cast<unsigned char>(buf[0]);
      hash *= m;
  };

  // Do a few final mixes of the hash.
  hash ^= hash >> 13;
  hash *= m;
  hash ^= hash >> 15;
  return hash;
}

MurmurHash3 by Austin Appleby is the best! It's an improvement even over the GCC C++11 std::unordered_map<> hash used above.

Not only is it the best of all of these, but Austin released MurmurHash3 into the public domain. See my other answer on this here: What is the default hash function used in C++ std::unordered_map?.

See also

  1. Other hash table algorithms to try out and test: http://www.cse.yorku.ca/~oz/hash.html. Hash algorithms mentioned there:
    1. djb2
    2. sdbm
    3. lose lose (K&R 1st edition)
尛丟丟 2024-12-15 12:22:21

djb2 has 317 collisions for this 466k English dictionary, while MurmurHash has none for 64-bit hashes and 21 for 32-bit hashes (around 25 is to be expected for 466k random 32-bit hashes).
My recommendation is to use MurmurHash if available; it is very fast because it consumes several bytes at a time. But if you need a simple and short hash function to copy and paste into your project, I'd recommend Murmur's one-byte-at-a-time version:

static inline uint32_t MurmurOAAT32(const char *key)
{
  uint32_t h = 3323198485u;              /* seed */
  for (; *key; ++key) {
    h ^= (unsigned char)*key;
    h *= 0x5bd1e995;
    h ^= h >> 15;
  }
  return h;
}

static inline uint64_t MurmurOAAT64(const char *key)
{
  uint64_t h = 525201411107845655ull;    /* seed */
  for (; *key; ++key) {
    h ^= (unsigned char)*key;
    h *= 0x5bd1e9955bd1e995ull;
    h ^= h >> 47;
  }
  return h;
}

The optimal size of a hash table is, in short, as large as possible while still fitting into memory. Because we don't usually know, or want to look up, how much memory we have available, and it might even change, the optimal hash table size is roughly 2x the expected number of elements to be stored in the table. Allocating much more than that will make your hash table faster, but at rapidly diminishing returns; making it smaller than that will make it exponentially slower. This is because there is a non-linear trade-off between space and time complexity for hash tables, with an optimal load factor of 2-sqrt(2) = 0.58... apparently.
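
Applying that rule of thumb to the original question (a sketch; the 2x factor is this answer's recommendation, and rounding to a power of 2 follows the earlier answer about table sizes): 8000 words suggest roughly 16000 slots, rounded up to 16384.

#include <stddef.h>

/* Smallest power of 2 that is >= 2 * expected_elements.
   For the asker's 8000 words: 2*8000 = 16000 -> 16384 slots. */
static size_t table_size_for(size_t expected_elements)
{
    size_t size = 1;
    while (size < 2 * expected_elements)
        size <<= 1;
    return size;
}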

ゞ花落谁相伴 2024-12-15 12:22:21

There are a number of existing hash table implementations for C, from the C standard library's hcreate/hdestroy/hsearch to those in APR and glib, which also provide prebuilt hash functions. I'd highly recommend using those rather than inventing your own hash table or hash function; they've been heavily optimized for common use-cases.

If your dataset is static, however, your best solution is probably to use a perfect hash. gperf will generate a perfect hash for you for a given dataset.
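
A minimal gperf session might look like this (hypothetical file names; by default the generated code exposes an in_word_set() lookup function, but check your gperf version's manual):

$ cat words.gperf
%%
apple
banana
cherry
$ gperf words.gperf > perfect_hash.c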

放低过去 2024-12-15 12:22:21

First, is 40 collisions for 130 words hashed into 0..99 actually bad? You can't expect perfect hashing if you are not taking steps specifically for it to happen. Most of the time, an ordinary hash function won't have fewer collisions than a random generator (see the estimate below).
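
To put a number on that (a back-of-the-envelope estimate, not from the original answer): throwing 130 keys uniformly at random into 100 buckets leaves about 100 * (1 - 0.99^130) ≈ 73 buckets occupied, i.e. about 57 colliding insertions, so 40 collisions is actually better than random.

#include <stdio.h>
#include <math.h>

int main(void)    /* compile with -lm */
{
    double buckets = 100.0, keys = 130.0;
    /* Expected occupied buckets under uniformly random hashing. */
    double occupied = buckets * (1.0 - pow(1.0 - 1.0 / buckets, keys));
    printf("occupied: %.1f, collisions: %.1f\n",
           occupied, keys - occupied);          /* ~72.9 and ~57.1 */
    return 0;
}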

A hash function with a good reputation is MurmurHash3.

Finally, regarding the size of the hash table, it really depends on what kind of hash table you have in mind, especially whether buckets are extensible or one-slot. If buckets are extensible, there is again a choice: you choose the average bucket length for the memory/speed constraints that you have.

半衾梦 2024-12-15 12:22:21

I have tried these hash functions and got the following results. I have about 960^3 entries, each 64 bytes long, containing 64 characters in different orders; the hash value is 32-bit. The code is from here.

Hash function    | collision rate | how many minutes to finish
==============================================================
MurmurHash3      |           6.?% |                      4m15s
Jenkins One..    |           6.1% |                      6m54s   
Bob, 1st in link |          6.16% |                      5m34s
SuperFastHash    |            10% |                      4m58s
bernstein        |            20% |       14s only finish 1/20
one_at_a_time    |          6.16% |                       7m5s
crc              |          6.16% |                      7m56s

One strange thing is that almost all the hash functions have a ~6% collision rate on my data.
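
That plateau is less strange than it looks: with hundreds of millions of keys, even a perfectly random 32-bit hash collides at a few-percent rate simply because the output space is saturated, which also makes bernstein's 20% stand out as a genuinely weak hash. A rough birthday-bound estimate (assuming the ~960^3 entry count above; the exact rate depends on the true key count):

#include <stdio.h>
#include <math.h>

int main(void)    /* compile with -lm */
{
    double m = 4294967296.0;    /* 2^32 possible hash values */
    double n = 884736000.0;     /* ~960^3 keys, as stated above */
    /* Expected distinct values when n keys hash uniformly into m slots. */
    double distinct = m * (1.0 - exp(-n / m));
    printf("expected collision rate: %.1f%%\n", 100.0 * (n - distinct) / n);
    return 0;                   /* prints ~9.6% for these inputs */
}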

拥抱影子 2024-12-15 12:22:21

One thing I've used with good results is the following (I don't know if it's been mentioned already because I can't remember its name).

You precompute a table T with a random number for each character in your key's alphabet [0,255]. You hash your key 'k0 k1 k2 ... kN' by taking T[k0] xor T[k1] xor ... xor T[kN]. You can easily show that this is as random as your random number generator, and it's computationally very feasible. If you really run into a very bad instance with lots of collisions, you can repeat the whole thing using a fresh batch of random numbers; a minimal sketch follows.
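
A minimal sketch of that scheme (illustrative; the names are mine). Note that plain XOR is order-independent, so anagrams collide; rotating the accumulator between steps is a common fix:

#include <stdint.h>
#include <stdlib.h>

static uint32_t T[256];     /* one random word per possible byte value */

/* Fill the table once at startup; any decent RNG will do. */
static void init_table(unsigned seed)
{
    srand(seed);
    for (int i = 0; i < 256; i++)
        T[i] = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
}

/* Hash as described: XOR together the table entries of all key bytes. */
static uint32_t xor_table_hash(const char *key)
{
    uint32_t h = 0;
    for (; *key; key++)
        h ^= T[(unsigned char)*key];
    return h;
}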

南巷近海 2024-12-15 12:22:21

I want to summarize it all for newbies to C like me. According to Andriy Makukha's meticulous benchmarking, MurmurHash3 is the best.

A decent C port can be found in murmurhash.c.
