zlib memory usage / performance, with 500 KB of data

Is zlib worth it? Are there other, better-suited compressors?

I am working on an embedded system. Frequently, I have only 3 MB of RAM or less available to my application, so I am considering using zlib to compress my buffers. However, I am concerned about the overhead.

The buffers' average size will be 30 KB; this probably won't get compressed by zlib. Does anyone know of a good compressor for extremely memory-limited environments?

However, I will occasionally see maximum buffer sizes of 700 KB, with 500 KB much more common. Is zlib worth it in this case, or is the overhead too much to justify?

My only considerations for a compressor are the algorithm's RAM overhead and performance at least as good as zlib's.

LICENSE: I would prefer the compressor to be licensed under a BSD, zlib, or equivalent license.

記憶穿過時間隧道 2024-10-29 20:08:00

If you initialize zlib via lm_init() with compression level 1, 2, or 3, the deflate_fast() routine will be used instead of deflate(); it uses smaller runtime buffers and a faster algorithm. The tradeoff is worse compression, but it is probably worth it.
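
lm_init() is an internal routine in deflate.c; through zlib's public API the same choice is made by the compression level you pass to deflateInit() or deflateInit2(), and deflateInit2() additionally lets you shrink the sliding window (windowBits) and the hash table (memLevel). Below is a minimal sketch, assuming zlib 1.2 or later; the windowBits/memLevel values are illustrative only. zconf.h puts deflate's footprint at roughly (1 << (windowBits+2)) + (1 << (memLevel+9)) bytes plus a few kilobytes of small objects:

/* Minimal sketch (assumes zlib >= 1.2): one-shot compression of an
 * in-memory buffer with reduced memory settings.  windowBits = 11 and
 * memLevel = 2 cut deflate's state to roughly (1 << 13) + (1 << 11),
 * about 10 KB, versus ~256 KB at the defaults.  Illustrative values only. */
#include <string.h>
#include <zlib.h>

int compress_small(const unsigned char *in, size_t in_len,
                   unsigned char *out, size_t *out_len)
{
    z_stream strm;
    memset(&strm, 0, sizeof(strm));

    /* Level 1 (or 2, 3) selects the deflate_fast() path mentioned above. */
    int rc = deflateInit2(&strm, 1, Z_DEFLATED, 11, 2, Z_DEFAULT_STRATEGY);
    if (rc != Z_OK)
        return rc;

    strm.next_in   = (Bytef *)in;
    strm.avail_in  = (uInt)in_len;
    strm.next_out  = out;
    strm.avail_out = (uInt)*out_len;

    rc = deflate(&strm, Z_FINISH);   /* all input supplied at once */
    *out_len = strm.total_out;
    deflateEnd(&strm);

    return rc == Z_STREAM_END ? Z_OK : Z_BUF_ERROR;
}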

If you compile zlib with SMALL_MEM defined, it will use smaller hash buckets when hashing input strings. The documentation (in deflate.c) claims:

/* Compile with MEDIUM_MEM to reduce the memory requirements or
 * with SMALL_MEM to use as little memory as possible. Use BIG_MEM if the
 * entire input file can be held in memory (not possible on 16 bit systems).
 * Warning: defining these symbols affects HASH_BITS (see below) and thus
 * affects the compression ratio. The compressed output
 * is still correct, and might even be smaller in some cases.
 */

Hopefully these two techniques combined can bring zlib within your application's budget. It's a ubiquitous standard, and being able to reuse well-worn components may be worth sacrifices elsewhere in the application. If you know something about the distribution of your data, you may be able to do better by writing your own compression routines, but zlib can be dropped in place quickly, while writing and testing your own might take more time.

Update

Here's some output on a zlib built with SMALL_MEM, using different compression level settings, on the first 600k file I found:

$ ls -l abi-2.6.31-14-generic
-rw-r--r-- 1 sarnold sarnold 623709 2011-03-18 18:09 abi-2.6.31-14-generic
$ for i in `seq 1 9` ; do /usr/bin/time ./gzip -c -${i} abi-2.6.31-14-generic | wc -c ; done
0.02user 0.00system 0:00.02elapsed 76%CPU (0avgtext+0avgdata 2816maxresident)k
0inputs+0outputs (0major+213minor)pagefaults 0swaps
162214
0.01user 0.00system 0:00.01elapsed 52%CPU (0avgtext+0avgdata 2800maxresident)k
0inputs+0outputs (0major+212minor)pagefaults 0swaps
158817
0.02user 0.00system 0:00.02elapsed 95%CPU (0avgtext+0avgdata 2800maxresident)k
0inputs+0outputs (0major+212minor)pagefaults 0swaps
156708
0.02user 0.00system 0:00.02elapsed 76%CPU (0avgtext+0avgdata 2784maxresident)k
0inputs+0outputs (0major+211minor)pagefaults 0swaps
143843
0.03user 0.00system 0:00.03elapsed 96%CPU (0avgtext+0avgdata 2784maxresident)k
0inputs+0outputs (0major+212minor)pagefaults 0swaps
140706
0.03user 0.00system 0:00.03elapsed 81%CPU (0avgtext+0avgdata 2784maxresident)k
0inputs+0outputs (0major+211minor)pagefaults 0swaps
140126
0.04user 0.00system 0:00.04elapsed 95%CPU (0avgtext+0avgdata 2784maxresident)k
0inputs+0outputs (0major+211minor)pagefaults 0swaps
138801
0.05user 0.00system 0:00.05elapsed 84%CPU (0avgtext+0avgdata 2784maxresident)k
0inputs+0outputs (0major+212minor)pagefaults 0swaps
138446
0.06user 0.00system 0:00.06elapsed 96%CPU (0avgtext+0avgdata 2768maxresident)k
0inputs+0outputs (0major+210minor)pagefaults 0swaps
138446

The entire gzip program takes around 2.6 megabytes of memory, regardless of the compression level asked for; perhaps just using the specific functions you need rather than the entire gzip program would bring that number down some, but it might be too expensive for your little machine.
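
If you do pull in just the deflate routines, it also helps to size the output buffer once, up front, so that a 500-700 KB input never forces a second allocation. A small sketch around zlib's compressBound(); the function is real, the surrounding buffer handling is only an illustration:

#include <stdlib.h>
#include <zlib.h>

/* Reserve a worst-case output buffer for an input of src_len bytes.
 * compressBound() returns an upper bound on the compressed size at the
 * default deflate parameters (a little larger than the input); with
 * non-default windowBits/memLevel, deflateBound() on an initialized
 * stream gives a parameter-aware bound instead. */
unsigned char *alloc_worst_case(size_t src_len, size_t *cap)
{
    uLong bound = compressBound((uLong)src_len);
    *cap = (size_t)bound;
    return malloc(*cap);
}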

傾旎 2024-10-29 20:08:00

Have a look at LZO.

From the documentation:

  • Requires no memory for decompression.
  • Requires 64 kB of memory for compression.

If you cleverly arrange your data, you can do an overlapping (in-place)
decompression which means that you can decompress to the same
block where the compressed data resides.

You can also partly overlay the buffers when doing compression.
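
For reference, the usual miniLZO calling sequence looks roughly like the sketch below; the buffer names and the round-trip framing are made up for illustration, and the overlapping in-place layout mentioned above needs the extra spacing rules from LZO's documentation, which are not shown here:

#include "minilzo.h"

/* Work memory is needed for compression only; decompression needs none.
 * On 32-bit targets LZO1X_1_MEM_COMPRESS is the 64 kB figure quoted above. */
static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                          / sizeof(lzo_align_t)];

int lzo_roundtrip(unsigned char *buf, lzo_uint len,
                  unsigned char *scratch, lzo_uint scratch_cap)
{
    lzo_uint c_len = scratch_cap;
    lzo_uint d_len = len;

    if (lzo_init() != LZO_E_OK)        /* one-time library initialization */
        return -1;

    /* Compress buf into scratch... */
    if (lzo1x_1_compress(buf, len, scratch, &c_len, wrkmem) != LZO_E_OK)
        return -1;

    /* ...and decompress back over buf; note the NULL work-memory argument. */
    if (lzo1x_decompress(scratch, c_len, buf, &d_len, NULL) != LZO_E_OK)
        return -1;

    return (d_len == len) ? 0 : -1;
}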

安稳善良 2024-10-29 20:08:00

LZS is a very simple sliding-window compressor and decompressor, specified for use in various Internet protocols. It could be a good technical solution.

I've written some C and Python code for LZS compression and decompression.
