How to decode Huffman codes quickly?
I have implemented a simple compressor using pure Huffman coding under Windows. But I do not know much about how to decode the compressed file quickly. My naive algorithm is:
Enumerate all the Huffman codes in the code table, then compare each one with the bits in the compressed file. The result is horrible: decompressing a 3 MB file takes 6 hours.
Could you provide a much more efficient algorithm? Should I use a hash table or something else?
Update:
I have implemented the decoder with a state table, based on my friend Lin's advice. This method should be better than traversing the Huffman tree: the 3 MB file now decompresses within 6 seconds.
Thanks.
One way to optimise the binary-tree approach is to use a lookup table. You arrange the table so that you can look up a particular encoded bit-pattern directly, allowing for the maximum possible bit-width of any code.
Since most codes don't use the full maximum width, they are included at multiple locations in the table - one location for each combination of the unused bits. The table indicates how many bits to discard from the input as well as the decoded output.
If the longest code is so long that the table is impractical, a compromise is to use a tree of smaller fixed-width-subscript lookups. For example, you can use a 256-item table to handle a byte. If the input code is more than 8 bits, the table entry indicates that decoding is incomplete and directs you to a table that handles the next (up to) 8 bits. Larger tables trade memory for speed - 256 items is probably too small.
I believe this general approach is called "prefix tables", and is what BobMcGee's quoted code is doing. A likely difference is that some compression algorithms require the prefix table to be updated during decompression - this is not needed for simple Huffman. IIRC, I first saw it in a book about bitmapped graphics file formats which included GIF, some time before the patent panic.
It should be easy to precalculate either a full lookup table, a hashtable equivalent, or a tree-of-small-tables from a binary tree model. The binary tree is still the key representation (mental model) of how the code works - this lookup table is just an optimised way to implement it.
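As a concrete illustration of the single-level table, here is a minimal sketch in C. The names, the 8-bit maximum code length, and the table layout are my own choices for illustration, not code from the question:

```c
#include <stdint.h>
#include <string.h>

#define MAX_BITS 8  /* maximum code length; kept small for illustration */

typedef struct {
    uint8_t symbol; /* decoded value */
    uint8_t length; /* bits the code actually uses (0 = invalid entry) */
} Entry;

/* Build a 2^MAX_BITS table: each code of length L fills 2^(MAX_BITS-L)
 * slots, one for every combination of its unused low-order bits. */
static void build_table(Entry table[1 << MAX_BITS],
                        const uint16_t *codes, const uint8_t *lengths,
                        const uint8_t *symbols, int n)
{
    memset(table, 0, sizeof(Entry) << MAX_BITS);
    for (int i = 0; i < n; i++) {
        int base = codes[i] << (MAX_BITS - lengths[i]);
        int fill = 1 << (MAX_BITS - lengths[i]);
        for (int j = 0; j < fill; j++) {
            table[base + j].symbol = symbols[i];
            table[base + j].length = lengths[i];
        }
    }
}

/* Decode: peek MAX_BITS bits (zero-padded past the end), look up the
 * entry, then consume only the bits the matched code really used. */
static size_t decode(const Entry *table, const uint8_t *in, size_t nbits,
                     uint8_t *out)
{
    size_t bitpos = 0, count = 0;
    while (bitpos < nbits) {
        unsigned idx = 0;
        for (int b = 0; b < MAX_BITS; b++) {
            size_t p = bitpos + b;
            unsigned bit = (p < nbits) ? (in[p >> 3] >> (7 - (p & 7))) & 1 : 0;
            idx = (idx << 1) | bit;
        }
        Entry e = table[idx];
        if (e.length == 0) break;   /* pattern not covered by any code */
        out[count++] = e.symbol;
        bitpos += e.length;
    }
    return count;
}
```

Each output symbol costs one table probe instead of one tree step per bit, which is where the speedup over the naive approach comes from.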
Why not take a look at how the GZIP source does it, specifically the Huffman decompression code in unpack.c? It's doing exactly what you are, except it's doing it much, much faster.
From what I can tell, it's using a lookup array and shift/mask operations on whole words to run faster. Pretty dense code though.
EDIT: here is the complete source
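The whole-word shift/mask idea can be sketched like this; the identifiers are illustrative and are not the actual names used in gzip's source:

```c
#include <stdint.h>
#include <stddef.h>

/* A gzip-style bit buffer: accumulate input bytes into one machine word
 * and extract code bits with shifts and masks instead of bit-by-bit reads. */
typedef struct {
    const uint8_t *src;
    size_t pos, len;
    uint32_t bitbuf;   /* pending bits, least-significant-bit first */
    int bitcount;      /* number of valid bits currently in bitbuf */
} BitReader;

/* Top up the buffer so that at least 25 bits are available (or input ends). */
static void refill(BitReader *br)
{
    while (br->bitcount <= 24 && br->pos < br->len) {
        br->bitbuf |= (uint32_t)br->src[br->pos++] << br->bitcount;
        br->bitcount += 8;
    }
}

/* Look at the next n bits without consuming them. */
static uint32_t peek_bits(BitReader *br, int n)
{
    refill(br);
    return br->bitbuf & ((1u << n) - 1);
}

/* Discard n bits once a code has been matched. */
static void drop_bits(BitReader *br, int n)
{
    br->bitbuf >>= n;
    br->bitcount -= n;
}
```

The decoder then pairs `peek_bits` with a lookup table and calls `drop_bits` with the matched code's real length, so the inner loop touches memory once per symbol rather than once per bit.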
The typical way to decompress a Huffman code is using a binary tree. You insert your codes in the tree, so that each bit in a code represents a branch either to the left (0) or right (1), with decoded bytes (or whatever values you have) in the leaves.
Decoding is then just a case of reading bits from the coded content, walking the tree for each bit. When you reach a leaf, emit that decoded value, and keep reading until the input is exhausted.
Update: this page describes the technique, and has fancy graphics.
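A minimal sketch of this tree-walking decoder in C; the names are illustrative, and it assumes a well-formed, prefix-free code so every bit path leads to a valid node:

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct Node {
    struct Node *child[2];  /* child[0] = bit 0 (left), child[1] = bit 1 (right) */
    int symbol;             /* decoded value at a leaf, -1 elsewhere */
} Node;

static Node *new_node(void)
{
    Node *n = calloc(1, sizeof *n);
    n->symbol = -1;
    return n;
}

/* Insert one code into the tree, branching on each bit from MSB to LSB. */
static void insert(Node *root, unsigned code, int length, int symbol)
{
    for (int i = length - 1; i >= 0; i--) {
        int bit = (code >> i) & 1;
        if (!root->child[bit]) root->child[bit] = new_node();
        root = root->child[bit];
    }
    root->symbol = symbol;
}

/* Walk the tree one bit at a time; emit a value at every leaf reached. */
static size_t decode_tree(const Node *root, const uint8_t *in, size_t nbits,
                          int *out)
{
    const Node *node = root;
    size_t count = 0;
    for (size_t p = 0; p < nbits; p++) {
        int bit = (in[p >> 3] >> (7 - (p & 7))) & 1;
        node = node->child[bit];
        if (node->symbol >= 0) { out[count++] = node->symbol; node = root; }
    }
    return count;
}
```

This is the simple O(bits) baseline; the lookup-table answers above are optimisations of exactly this walk.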
You can perform a kind of batch lookup on the usual Huffman tree lookup:
Choosing a depth that is a multiple of four, e.g., depth 8, is a good fit for bit shifting operations.
Postscript: This differs from the idea in potatoswatter's comment on unwind's answer, and from Steve314's answer, in using multiple tables: it means that all of each n-bit lookup is put to use, so it should be faster, but it makes table construction and lookup significantly trickier, and it will consume much more space for a given depth.
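A hedged sketch of the batch idea in C: every 8-bit chunk is pre-decoded at table-build time, and each probe emits all symbols that complete inside the chunk. This simplified variant restarts at the root after each probe rather than carrying tree state between chunks, and all names are my own:

```c
#include <stdint.h>
#include <stddef.h>

#define BATCH_BITS 8

typedef struct {
    uint8_t symbols[BATCH_BITS]; /* symbols fully decoded in this chunk */
    uint8_t ends[BATCH_BITS];    /* cumulative bits consumed after each one */
    uint8_t count;
} BatchEntry;

/* Pre-decode every possible BATCH_BITS-bit pattern. Because the code is
 * prefix-free, at most one code matches at any bit position. */
static void build_batch(BatchEntry table[1 << BATCH_BITS],
                        const uint16_t *codes, const uint8_t *lengths,
                        const uint8_t *symbols, int n)
{
    for (unsigned idx = 0; idx < (1u << BATCH_BITS); idx++) {
        BatchEntry *e = &table[idx];
        e->count = 0;
        int pos = 0;
        while (pos < BATCH_BITS) {
            int match = -1;
            for (int i = 0; i < n; i++) {
                if (lengths[i] > BATCH_BITS - pos) continue;
                unsigned window = (idx >> (BATCH_BITS - pos - lengths[i]))
                                  & ((1u << lengths[i]) - 1);
                if (window == codes[i]) { match = i; break; }
            }
            if (match < 0) break;   /* next code spills past this chunk */
            e->symbols[e->count] = symbols[match];
            pos += lengths[match];
            e->ends[e->count++] = (uint8_t)pos;
        }
    }
}

/* One table probe per chunk emits several symbols at once. */
static size_t decode_batch(const BatchEntry *table, const uint8_t *in,
                           size_t nbits, uint8_t *out)
{
    size_t bitpos = 0, count = 0;
    while (bitpos < nbits) {
        unsigned idx = 0;
        for (int b = 0; b < BATCH_BITS; b++) {
            size_t p = bitpos + b;
            unsigned bit = (p < nbits) ? (in[p >> 3] >> (7 - (p & 7))) & 1 : 0;
            idx = (idx << 1) | bit;
        }
        const BatchEntry *e = &table[idx];
        int i;
        for (i = 0; i < e->count; i++) {
            if (bitpos + e->ends[i] > nbits) break;  /* ignore padding symbols */
            out[count++] = e->symbols[i];
        }
        if (i == 0) break;          /* nothing complete: done (or bad input) */
        bitpos += e->ends[i - 1];
    }
    return count;
}
```

With short, frequent codes this emits several symbols per probe, which is the payoff the answer describes; the price is the larger, more elaborate table.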
Why not use the decompress algorithm in the same source module? It appears to be a decent algorithm.
The other answers are right, but here is some code in Rust I wrote recently to make the ideas concrete. This is the key routine:
The tricky bit is setting up the lookup table; see BitDecoder::setup_code in this complete RFC 1951 decoder in Rust:
GitHub repository here