Matlab, image compression

Posted on 2024-12-22 23:33:20

I am unsure what this is asking me to do in MATLAB. What does it mean to encode? What format should the answer be? Can anyone help me work it out, please?

The task is: "Encode the 8x8 image patch and print out the results."

I have got an 8x8 image and this code so far:

symbols=[0 20 50 99];
p=[32 8 16 8];
p = p/sum(p);
[dict, avglen] = huffmandict(symbols, p);
A = ...
[99 99 99 99 99 99 99 99 ...
20 20 20 20 20 20 20 20 ...
0 0 0 0 0 0 0 0 ...
0 0 50 50 50 50 0 0 ...
0 0 50 50 50 50 0 0 ...
0 0 50 50 50 50 0 0 ...
0 0 50 50 50 50 0 0 ...
0 0 0 0 0 0 0 0];
comp=huffmanenco(A,dict);
ratio=(8*8*8)/length(comp)

Comments (1)

一张白纸 2024-12-29 23:33:20

Do you understand the principle of Huffman coding?

To put it simply, it is an algorithm used to compress data (like images in your case). This means that the input of the algorithm is an image and the output is a numeric code that is smaller in size than the input: hence the compression.

The principle of Huffman coding is (roughly) to replace the symbols in the original data (in your case, the value of each pixel of the image) with numeric codes assigned according to the probability of each symbol. The most probable (i.e. most common) symbols are replaced by the shortest codes, which is what compresses the data.

To solve your problem, Matlab has two functions in the Communications Toolbox: huffmandict and huffmanenco.

huffmandict: this function builds a dictionary that is used to translate symbols from the original data into their numeric Huffman codewords. To build this dictionary, huffmandict needs the list of symbols used in the data and their probabilities of appearance, i.e. the number of times each symbol is used divided by the total number of symbols in your data.
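
For instance, here is a minimal sketch (purely illustrative; the inspection loop is mine and not part of the exercise) that builds the dictionary for your four pixel values and prints the codeword length assigned to each symbol:

% Build the Huffman dictionary for the four pixel values of the image
symbols = [0 20 50 99];
p = [32 8 16 8] / 64;        % probability = count / total number of pixels (64)
[dict, avglen] = huffmandict(symbols, p);
% dict is an N-by-2 cell array: column 1 holds the symbols, column 2 the codewords
for k = 1:size(dict, 1)
    fprintf('symbol %3d (p = %.3f) -> codeword of %d bits\n', ...
            dict{k,1}, p(symbols == dict{k,1}), numel(dict{k,2}));
end
% The most frequent symbol (0, probability 0.5) gets the shortest codeword,
% and avglen is the average number of bits per pixel after encoding.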

huffmanenco: this function is used to translate your original data using the dictionary built by huffmandict. Each symbol in the original data is translated into a numeric Huffman code. To measure the gain in size of this compression method, you can compute the compression ratio, which is the ratio between the number of bits used to describe your original data and the number of bits of the corresponding Huffman code. In your case, inferring from your computation of the compression ratio, you have an 8 by 8 image using an 8-bit integer to describe each pixel, and the corresponding Huffman code uses length(comp) bits.
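
For this particular image you can even predict the numbers: the probabilities 32/64, 16/64, 8/64 and 8/64 are all powers of two, so the Huffman codewords come out 1, 2, 3 and 3 bits long, and avglen = 0.5*1 + 0.25*2 + 0.125*3 + 0.125*3 = 1.75 bits per pixel. The encoded image should therefore be about 64 * 1.75 = 112 bits, giving a ratio of 512 / 112 ≈ 4.57.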

With all this in mind, you could read your code in this way:

% Original image
A = ...
[99 99 99 99 99 99 99 99 ...
20 20 20 20 20 20 20 20 ...
0 0 0 0 0 0 0 0 ...
0 0 50 50 50 50 0 0 ...
0 0 50 50 50 50 0 0 ...
0 0 50 50 50 50 0 0 ...
0 0 50 50 50 50 0 0 ...
0 0 0 0 0 0 0 0];

% First step: extract the symbols used in the original image
% and their probability (number of occurrences / total number of symbols)
symbols=[0 20 50 99];
p=[32 8 16 8];
p=p/sum(p);
% To do this you could also use the following, which automatically extracts
% the symbols and their counts (hist returns the counts first and the bin
% centers second), then normalizes the counts into probabilities
[p,symbols]=hist(A,unique(A));
p=p/sum(p);

% Second step: build the Huffman dictionary
[dict,avglen]=huffmandict(symbols,p);

% Third step: encode your original image with the dictionary you just built
comp=huffmanenco(A,dict);

% Finally you can compute the compression ratio
ratio=(8*8*8)/length(comp)
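
As a final sanity check (not required by the exercise, just an illustration; the variable name decoded is mine), you can reverse the encoding with huffmandeco and confirm that no information was lost. Continuing from the code above:

% Decode the Huffman bit stream and verify the compression is lossless
decoded = huffmandeco(comp, dict);
isequal(decoded, A)      % should print 1 (true)
% length(comp) should also match avglen * numel(A), i.e. about 1.75 * 64 = 112 bits here.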