VsampFactor and HsampFactor in the FJCore library

Posted 2024-08-24 14:44:40


I've been using the FJCore library in a Silverlight project to help with some realtime image processing, and I'm trying to figure out how to get a tad more compression and performance out of the library. Now, as I understand it, the JPEG standard allows you to specify a chroma subsampling ratio (see http://en.wikipedia.org/wiki/Chroma_subsampling and http://en.wikipedia.org/wiki/Jpeg); and it appears that this is supposed to be implemented in the FJCore library using the HsampFactor and VsampFactor arrays:

    public static readonly byte[] HsampFactor = { 1, 1, 1 };
    public static readonly byte[] VsampFactor = { 1, 1, 1 };

However, I'm having a hard time figuring out how to use them. It looks to me like the current values are supposed to represent 4:4:4 subsampling (i.e., no subsampling at all), and that if I wanted to get 4:1:1 subsampling, the right values would be something like this:

    public static readonly byte[] HsampFactor = { 2, 1, 1 };
    public static readonly byte[] VsampFactor = { 2, 1, 1 };

At least, that's the way that other similar libraries use these values (for instance, see the example code here for libjpeg).
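For reference, the JPEG factors work the opposite way from divisors: each component declares a pair (Hi, Vi), and its raster is scaled by Hi/Hmax and Vi/Vmax relative to the full image, where Hmax/Vmax are the largest factors among all components (usually Y's). A minimal sketch of that size arithmetic (in Java, since FJCore descends from an established Java encoder; the class and method names here are illustrative, not part of any library):

```java
// Sketch: compute each component's raster size from JPEG sampling factors.
// A component with factors (hi, vi) is stored at
// ceil(width * hi / hMax) x ceil(height * vi / vMax).
public class SamplingDemo {
    static int scaled(int full, int factor, int max) {
        return (full * factor + max - 1) / max; // ceiling division
    }

    static int[] componentSize(int w, int h, int hi, int vi, int hMax, int vMax) {
        return new int[] { scaled(w, hi, hMax), scaled(h, vi, vMax) };
    }

    public static void main(String[] args) {
        // 4:2:0: Y = (2,2), Cb = Cr = (1,1)
        int[] y  = componentSize(640, 480, 2, 2, 2, 2);
        int[] cb = componentSize(640, 480, 1, 1, 2, 2);
        System.out.println(y[0] + "x" + y[1]);   // 640x480
        System.out.println(cb[0] + "x" + cb[1]); // 320x240
    }
}
```

So with {2, 1, 1}, the Y raster stays full size and the chroma rasters are stored at half width: the factors describe relative raster sizes, not a downsampling step the encoder performs for you.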

However, neither the above values of {2, 1, 1} nor any other set of values that I've tried besides {1, 1, 1} produce a legible image. Nor, in looking at the code, does it seem like that's the way it's written. But for the life of me, I can't figure out what the FJCore code is actually trying to do. It seems like it's just using the sample factors to repeat operations that it's already done -- i.e., if I didn't know better, I'd say that it was a bug. But this is a fairly established library, based on some fairly well established Java code, so I'd be surprised if that were the case.

Does anybody have any suggestions for how to use these values to get 4:2:2 or 4:1:1 chroma subsampling?

For what it's worth, here's the relevant code from the JpegEncoder class:

for (comp = 0; comp < _input.Image.ComponentCount; comp++)
{
    Width = _input.BlockWidth[comp];
    Height = _input.BlockHeight[comp];

    inputArray = _input.Image.Raster[comp];

    for (i = 0; i < _input.VsampFactor[comp]; i++)
    {
        for (j = 0; j < _input.HsampFactor[comp]; j++)
        {
            xblockoffset = j * 8;
            yblockoffset = i * 8;
            for (a = 0; a < 8; a++)
            {
                // set Y value.  check bounds
                int y = ypos + yblockoffset + a; if (y >= _height) break;

                for (b = 0; b < 8; b++)
                {
                    int x = xpos + xblockoffset + b; if (x >= _width) break;
                    dctArray1[a, b] = inputArray[x, y];
                }
            }
            dctArray2 = _dct.FastFDCT(dctArray1);
            dctArray3 = _dct.QuantizeBlock(dctArray2, FrameDefaults.QtableNumber[comp]);
            _huf.HuffmanBlockEncoder(buffer, dctArray3, lastDCvalue[comp], FrameDefaults.DCtableNumber[comp], FrameDefaults.ACtableNumber[comp]);
            lastDCvalue[comp] = dctArray3[0];
        }
    }
}

And notice that in the i & j loops, they're not controlling any kind of pixel skipping: if HsampFactor[0] is set to two, it's just grabbing two blocks instead of one.
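That loop shape is actually consistent with how a JPEG interleaved scan works once the rasters are already subsampled: each MCU carries Hi x Vi blocks of every component, so the encoder grabs extra adjacent blocks for the high-factor component rather than skipping pixels. A small sketch of the per-MCU block count (hedged, in Java; the class name is illustrative):

```java
// Sketch: in an interleaved JPEG scan, one MCU contains hi * vi 8x8 blocks
// of each component. This is why FJCore's i/j loops fetch extra adjacent
// blocks for the high-factor component instead of skipping pixels: the
// subsampling itself must already have happened in the raster data.
public class McuDemo {
    static int blocksPerMcu(int hi, int vi) {
        return hi * vi;
    }

    public static void main(String[] args) {
        // 4:2:0: Y = (2,2), Cb = Cr = (1,1) -> 4 + 1 + 1 = 6 blocks per MCU
        int total = blocksPerMcu(2, 2) + blocksPerMcu(1, 1) + blocksPerMcu(1, 1);
        System.out.println(total); // 6
    }
}
```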

Answers (1)

海之角 · answered 2024-08-31 14:44:40


I figured it out. I thought that by setting the sampling factors, you were telling the library to subsample the raster components itself. Turns out that when you set the sampling factors, you're actually telling the library the relative size of the raster components that you're providing. In other words, you need to do the chroma subsampling of the image yourself, before you ever submit it to the FJCore library for compression. Something like this is what it's looking for:

    private byte[][,] GetSubsampledRaster()
    {
        byte[][,] raster = new byte[3][,];
        // Note: here hSampleFactor/vSampleFactor are per-component *divisors*
        // relative to full resolution (e.g. {1, 2, 2} for 4:2:0), which is
        // the inverse of the JPEG convention, where Y carries the largest factor.
        raster[Y] = new byte[width / hSampleFactor[Y], height / vSampleFactor[Y]];
        raster[Cb] = new byte[width / hSampleFactor[Cb], height / vSampleFactor[Cb]];
        raster[Cr] = new byte[width / hSampleFactor[Cr], height / vSampleFactor[Cr]];

        int rgbaPos = 0;
        for (short y = 0; y < height; y++)
        {
            int Yy = y / vSampleFactor[Y];
            int Cby = y / vSampleFactor[Cb];
            int Cry = y / vSampleFactor[Cr];
            int Yx = 0, Cbx = 0, Crx = 0;
            for (short x = 0; x < width; x++)
            {
                // Convert to YCbCr colorspace. Note: the buffer is consumed
                // in BGRA byte order (typical of Silverlight bitmaps),
                // despite the "Rgba" name.
                byte b = RgbaSample[rgbaPos++];
                byte g = RgbaSample[rgbaPos++];
                byte r = RgbaSample[rgbaPos++];
                YCbCr.fromRGB(ref r, ref g, ref b);

                // After fromRGB, r/g/b hold Y/Cb/Cr. Include a sample in a
                // component's raster only if it survives that component's
                // subsampling factor.
                if (IncludeInSample(Y, x, y))
                {
                    raster[Y][Yx++, Yy] = r;
                }
                if (IncludeInSample(Cb, x, y))
                {
                    raster[Cb][Cbx++, Cby] = g;
                }
                if (IncludeInSample(Cr, x, y))
                {
                    raster[Cr][Crx++, Cry] = b;
                }

                // For YCbCr, we ignore the Alpha byte of the RGBA byte structure, so advance beyond it.
                rgbaPos++;
            }
        }
        return raster;
    }

    private static bool IncludeInSample(int slice, short x, short y)
    {
        // Hopefully this gets inlined . . . 
        return ((x % hSampleFactor[slice]) == 0) && ((y % vSampleFactor[slice]) == 0);
    }

There might be additional ways to optimize this, but it's working for now.
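One such optimization (my suggestion, not from the original post): instead of point-sampling every Nth pixel the way `IncludeInSample` does, average each h x v neighborhood of the chroma plane, which reduces aliasing in the downsampled channels at the cost of a few additions per pixel. A hedged sketch in Java (FJCore's ancestry; the class name is illustrative):

```java
// Sketch: box-filter chroma downsampling as an alternative to the
// point-sampling IncludeInSample approach. Each h x v neighborhood of the
// full-resolution plane is averaged into one output sample.
public class BoxDownsample {
    static int[][] downsample(int[][] plane, int h, int v) {
        int w = plane.length, ht = plane[0].length;
        int[][] out = new int[(w + h - 1) / h][(ht + v - 1) / v];
        for (int ox = 0; ox < out.length; ox++) {
            for (int oy = 0; oy < out[0].length; oy++) {
                int sum = 0, count = 0;
                // Clamp the neighborhood at the right/bottom edges.
                for (int dx = 0; dx < h && ox * h + dx < w; dx++) {
                    for (int dy = 0; dy < v && oy * v + dy < ht; dy++) {
                        sum += plane[ox * h + dx][oy * v + dy];
                        count++;
                    }
                }
                out[ox][oy] = sum / count; // average of the neighborhood
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] plane = { { 10, 30 }, { 20, 40 } }; // 2x2 chroma plane
        int[][] half = downsample(plane, 2, 2);     // 4:2:0-style reduction
        System.out.println(half[0][0]); // 25
    }
}
```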
