zlib memory usage/performance with 500 KB of data
Is zlib worth it? Are there other better-suited compressors?
I am working on an embedded system. Frequently, I have only 3 MB of RAM or less available to my application, so I am considering using zlib to compress my buffers. However, I am concerned about the overhead.
The buffers' average size will be 30 KB. That probably won't compress well with zlib. Does anyone know of a good compressor for extremely limited memory environments?
However, I will occasionally see maximum buffer sizes of 700 KB, with 500 KB much more common. Is zlib worth it in this case, or is the overhead too much to justify?
My sole considerations for compression are the algorithm's RAM overhead and performance at least as good as zlib's.
LICENSE: I prefer the compressor be licensed under a BSD, zlib, or equivalent license.
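Whether zlib pays off at these sizes is easy to estimate empirically before committing to the integration. Below is a minimal sketch using Python's zlib binding (the same DEFLATE engine as the C library); the sample payload is an assumption standing in for the real buffer contents, and actual ratios depend entirely on how repetitive the embedded data is:

```python
import zlib

# Hypothetical stand-in for a real 500 KB buffer; replace with
# representative data captured from the device.
buf = (b"sensor-reading: 12345; status=OK\n" * 16000)[:500 * 1024]

for level in (1, 6, 9):
    out = zlib.compress(buf, level)
    print("level %d: %d -> %d bytes" % (level, len(buf), len(out)))
```

Level 1 uses the least CPU and the fast deflate path; if even level 1 does not shrink your representative buffers meaningfully, the overhead is not worth paying.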
If you initialize zlib via lm_init() with compression level 1, 2, or 3, the deflate_fast() routine will be used instead of deflate(), which uses smaller runtime buffers and a faster algorithm. The tradeoff is worse compression. It is probably worth it.
If you compile zlib with SMALL_MEM defined, it will use smaller hash buckets when hashing input strings. The documentation (in deflate.c) claims:
Hopefully, these two techniques combined can bring zlib into range for your application. It's a ubiquitous standard, and being able to re-use well-worn components may be worth sacrifices elsewhere in the application. But if you know something about the distribution of your data that allows you to write your own compression routines, you may be able to do better -- still, zlib can be dropped in place quickly, while writing and testing your own might take more time.
Update
Here's some output on a zlib built with
SMALL_MEM
, using different compression level settings, on the first 600k file I found:The entire
gzip
program takes around 2.6 megabytes of memory, regardless of the compression level asked for; perhaps just using the specific functions you need rather than the entiregzip
program would bring that number down some, but it might be too expensive for your little machine.看看 LZO。
从文档中:
Have a look at LZO.
From the documentation:
LZS is a very simple sliding-window compressor and decompressor, specified for use in various Internet protocols. It could be a good technical solution.
I've written some C and Python code for LZS compression and decompression.
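To make the sliding-window idea concrete, here is a toy LZ77-style codec in Python -- not the actual LZS bit format (LZS, per RFC 2395, adds a specific variable-length encoding of offsets and lengths on top of this), just the window/match mechanism it is built on:

```python
# Toy LZ77-style sliding-window codec, for illustration only.
def lz77_compress(data, window=2048, max_len=255):
    i, out = 0, []
    while i < len(data):
        start = max(0, i - window)
        best_off, best_len = 0, 0
        # Naive longest-match search over the window (O(n * window)).
        for j in range(start, i):
            length = 0
            while (length < max_len and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        if best_len >= 3:
            out.append((best_off, best_len))   # back-reference token
            i += best_len
        else:
            out.append(data[i])                # literal byte
            i += 1
    return out

def lz77_decompress(tokens):
    buf = bytearray()
    for tok in tokens:
        if isinstance(tok, tuple):
            off, length = tok
            for _ in range(length):            # byte-by-byte copy allows
                buf.append(buf[-off])          # overlapping matches
        else:
            buf.append(tok)
    return bytes(buf)

tokens = lz77_compress(b"abracadabra abracadabra")
assert lz77_decompress(tokens) == b"abracadabra abracadabra"
```

The appeal for tiny machines is that the only state is the window itself, so memory use is bounded by the window size you pick.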