Analyzing the bitmap generated by NSAffineTransform and the CILineOverlay filter

Published 2024-09-05 14:29:20


I am trying to manipulate an image using a chain of CIFilters, and then examine each byte of the resulting image (bitmap). Long term, I do not need to display the resulting image (bitmap) -- I just need to "analyze" it in memory. But near-term I am displaying it on screen, to help with debugging.

I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, then outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create the outputBitmap, none of the bytes in the bitmap have any data in them. I believe both of these filters are working as expected, because when I display their results on screen (via outputImageView), they look correct. Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is "empty" while the one created from NSAffineTransform contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. That seems very strange to me.

My understanding (from reading the CI programming guide) is that the CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until it is "drawn." Given that, it would make sense that the CIImage bitmap doesn't have data -- but I don't understand why it has data after I run the NSAffineTransform, yet doesn't after running the CILineOverlay transform. Basically, I am trying to determine whether creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage -- in other words, whether that should force the bitmap to be populated. If someone knows the answer to this, please let me know -- it will save me a few hours of trial-and-error experimenting!

Finally, if the answer is "you must draw to a graphics context" ... then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide: Graphics Contexts, Listings 2-7 and 2-8 (drawing to a bitmap graphics context)? That is the path down which I am about to head ... but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. So if there is an easier or better way, please let me know. I just want to take the data that (should be) in myResult and put it into a bitmap array where I can access it at the byte level. And since I already have code that works with an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me, I would prefer to "convert" myResult into an NSBitmapImageRep.

CIImage *myResult = [transform valueForKey:@"outputImage"];
// imageRepWithCIImage: returns an autoreleased rep; the separate alloc was unnecessary
NSCIImageRep *ir = [NSCIImageRep imageRepWithCIImage:myResult];
NSImage *outputImage = [[[NSImage alloc] initWithSize:
                            NSMakeSize(inputImage.size.width, inputImage.size.height)]
                           autorelease];
[outputImage addRepresentation:ir];
[outputImageView setImage:outputImage];
NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage:myResult];

Thanks,
Adam

Edit #1 -- for Peter H. comment:
Sample code accessing bitmap data...

for (row = 0; row < heightInPixels; row++)
  for (column = 0; column < widthInPixels; column++) {
    if (row == 1340) {  //just check this one row, that I know what to expect
        NSLog(@"Row 1340 column %d pixel redByte of pixel is %d",column,thisPixel->redByte);
    }
}

Results from above (all columns contain the same zero/null value, which is what I called "empty")...

2010-06-13 10:39:07.765 ImageTransform[5582:a0f] Row 1340 column 1664 pixel redByte of pixel is 0
2010-06-13 10:39:07.765 ImageTransform[5582:a0f] Row 1340 column 1665 pixel redByte of pixel is 0
2010-06-13 10:39:07.766 ImageTransform[5582:a0f] Row 1340 column 1666 pixel redByte of pixel is 0

If I change the %d to %h nothing prints at all (blank rather than "0"). If I change it to %@ I get "(null)" on every line, instead of the "0" shown above. On the other hand ... when I run just the NSAffineTransform filter and then execute this code the bytes printed contain the data I would expect (regardless of how I format the NSLog output, something prints).
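An aside on the specifiers: a one-byte channel value passed through NSLog's varargs is promoted to int, so %d is the correct choice; %@ makes NSLog interpret the argument as an Objective-C object pointer, which is why it prints "(null)" when the byte is zero. A minimal illustration (the variable name is just for the example):

```objectivec
unsigned char redByte = 0;            // a zeroed channel byte
NSLog(@"redByte is %d", redByte);     // correct: the byte is promoted to int
NSLog(@"redByte is %u", redByte);     // also fine
// NSLog(@"redByte is %@", redByte);  // wrong: the zero byte becomes a nil
//                                    // object pointer, so this logs "(null)"
```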

Adding more code on 6/14 ...

// prior code retrieves JPG image from disk and loads into NSImage
CIImage * inputCIimage = [[CIImage alloc] initWithBitmapImageRep:inputBitmap];
if (inputCIimage == nil) {
  NSLog(@"Bailing out.  Could not create CI Image");
  return;
}
NSLog (@"CI Image created.  working on transforms...");

The filter that rotates the image.... This was previously in a method, but I have since moved it inline while trying to figure out what is wrong...

// rotate imageIn by degreesToRotate, using an AffineTransform
CIFilter *transform = [CIFilter filterWithName:@"CIAffineTransform"];
[transform setDefaults];
[transform setValue:inputCIimage forKey:@"inputImage"];
NSAffineTransform *affineTransform = [NSAffineTransform transform];
// Note: transformPoint: only computes a point and discards the result; to
// rotate about the image center, translate to the center, rotate, translate back
[affineTransform translateXBy:inputImage.size.width / 2.0
                          yBy:inputImage.size.height / 2.0];
[affineTransform rotateByDegrees:3.75];
[affineTransform translateXBy:-inputImage.size.width / 2.0
                          yBy:-inputImage.size.height / 2.0];
[transform setValue:affineTransform forKey:@"inputTransform"];
CIImage *myResult2 = [transform valueForKey:@"outputImage"];

The code that applies the CILineOverlay filter... (this was also previously in a method)

CIFilter *lineOverlay = [CIFilter filterWithName:@"CILineOverlay"];
[lineOverlay setDefaults];
[lineOverlay setValue: inputCIimage forKey:@"inputImage"];
// start off with default values, then tweak the ones needed to achieve desired results
[lineOverlay setValue: [NSNumber numberWithFloat: .07] forKey:@"inputNRNoiseLevel"]; //.07 (0-1)
[lineOverlay setValue: [NSNumber numberWithFloat: .71] forKey:@"inputNRSharpness"]; //.71 (0-2)
[lineOverlay setValue: [NSNumber numberWithFloat: 1] forKey:@"inputEdgeIntensity"]; //1 (0-200)
[lineOverlay setValue: [NSNumber numberWithFloat: .1] forKey:@"inputThreshold"]; //.1 (0-1)
[lineOverlay setValue: [NSNumber numberWithFloat: 50] forKey:@"inputContrast"]; //50 (.25-200)
CIImage *myResult2 = [lineOverlay valueForKey:@"outputImage"];  //apply the filter to the CIImage object and return it

Finally ... the code that uses the results...

if (myResult2 == nil)
    NSLog(@"Transformations failed");
else {
    NSLog(@"Finished transformations successfully ... now render final image");
    // make an NSImage from the CIImage (to display it, during initial development)
    // imageRepWithCIImage: returns an autoreleased rep; no separate alloc needed
    NSCIImageRep *ir = [NSCIImageRep imageRepWithCIImage:myResult2];
    // show the transformed output on screen...
    NSImage *outputImage = [[[NSImage alloc] initWithSize:
                                NSMakeSize(inputImage.size.width, inputImage.size.height)]
                               autorelease];
    [outputImage addRepresentation:ir];
    [outputImageView setImage:outputImage];  //rotatedImage
}

At this point the transformed image displays on screen just fine, regardless of which transform I apply and which one I leave commented out. It even works just fine if I "chain" the transforms together so that the output from #1 goes into #2. So, to me, this seems to indicate that the filters are working.

However ... the code that I really need to use is the "bitmap analysis" code that examines the bitmap that is in (or "should be" in) myResult2. And that code works only on the bitmap resulting from the CIAffineTransform filter. When I use it to examine the bitmap resulting from the CILineOverlay, the entire bitmap seems to contain only zeroes.

So here is the code used for that analysis...

// this is the next line after the [outputImageView ...] shown above
[self findLeftEdge:myResult2];

And then this is the code from the findLeftEdge method...

- (void) findLeftEdge :(CIImage*)imageInCI {
    // find the left edge of the input image, assuming it will be the first non-white pixel
    // because we have already applied the Threshold filter

    NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage: imageInCI];
    if (outputBitmap == nil)
        NSLog(@"unable to create outputBitmap");
    else 
        NSLog(@"outputBitmap image rep created -- samples per pixel = %d", [outputBitmap samplesPerPixel]);

    RGBAPixel 
        *thisPixel, 
        *bitmapPixels = (RGBAPixel *)[outputBitmap bitmapData];  

    int 
    row, 
    column,
    widthInPixels = [outputBitmap pixelsWide], 
    heightInPixels = [outputBitmap pixelsHigh];

    //RGBAPixel *leftEdge [heightInPixels];
    struct { 
        int pixelNumber;
        unsigned char pixelValue;
    } leftEdge[heightInPixels];

    // Is this necessary, or does Objective-C always initialize it to zero for me?
    for (row = 0; row < heightInPixels; row++) {
        leftEdge[row].pixelNumber = 0;
        leftEdge[row].pixelValue = 0;
    }

    for (row = 0; row < heightInPixels; row++)
        for (column = 0; column < widthInPixels; column++)  {
            thisPixel = (&bitmapPixels[((widthInPixels * row) + column)]);

            //red is as good as any channel, for this test (assume threshold filter already applied)
            //this should "save" the column number of the first non-white pixel encountered
            if (leftEdge[row].pixelValue < thisPixel->redByte) {
                leftEdge[row].pixelValue = thisPixel->redByte;  
                leftEdge[row].pixelNumber = column;
            }
            // For debugging, display contents of each pixel
            //NSLog(@"Row %d column %d pixel redByte of pixel is %@",row,column,thisPixel->redByte);
            // For debugging, display contents of each pixel on one row
            //if (row == 1340) {
            //  NSLog(@"Row 1340 column %d pixel redByte of pixel is %@",column,thisPixel->redByte);
            //}

        }

    // For debugging, display the left edge that we discovered
    for (row = 0; row < heightInPixels; row++) {
        NSLog(@"Left edge on row %d was at pixel #%d", row, leftEdge[row].pixelNumber);
    }

    [outputBitmap release];  
}

Here is another filter. When I use it I do get data in the "output bitmap" (just like the rotation filter). So it is just the CILineOverlay filter that does not yield up its data for me in the resulting bitmap ...

- (CIImage*) applyCropToCI:(CIImage*)imageIn rectToCrop:(CIVector*)rectToCrop {
    // crop the rectangle specified from the input image
    CIFilter *crop = [CIFilter filterWithName:@"CICrop"];
    [crop setDefaults];
    [crop setValue:imageIn forKey:@"inputImage"];
//  [crop setValue:rectToCrop forKey:@"inputRectangle"];  //vector defaults to 0,0,300,300
    //CIImage *myResult = [transform valueForKey:@"outputImage"]; //this is the way it was "in-line", before putting this code into a method
    return [crop valueForKey:@"outputImage"];   //does this need to be retained?
}

Comments (2)

苍风燃霜 2024-09-12 14:29:20

You claim that the bitmap data contains “all zeroes”, but you're only looking at one byte per pixel. You're assuming that the first component is the red component, and you're assuming that the data is one byte per component; if the data is alpha-first or floating-point, one or both of these assumptions will be wrong.

Create a bitmap context in whatever format you want using a buffer you allocate, and render the image into that context. Your buffer will then contain the image in the format you expect.
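A minimal sketch of that approach, using the question's myResult2 (the format choices here -- 8 bits per component, RGBA, premultiplied alpha -- are one option among several):

```objectivec
// A sketch, not drop-in code: render the CIImage into a buffer we
// allocate ourselves, so the pixel format is known up front.
CGRect extent = [myResult2 extent];
size_t width  = (size_t)CGRectGetWidth(extent);
size_t height = (size_t)CGRectGetHeight(extent);
size_t bytesPerRow = width * 4;                    // 4 bytes per pixel: RGBA
unsigned char *buffer = calloc(height, bytesPerRow);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef cgCtx = CGBitmapContextCreate(buffer, width, height, 8,
                                           bytesPerRow, colorSpace,
                                           kCGImageAlphaPremultipliedLast);
CIContext *ciCtx = [CIContext contextWithCGContext:cgCtx options:nil];

// This draw is what actually executes the filter chain and fills the buffer.
[ciCtx drawImage:myResult2
          inRect:CGRectMake(0, 0, width, height)
        fromRect:extent];

// buffer[(row * bytesPerRow) + (col * 4)] now holds the red byte of a pixel

CGContextRelease(cgCtx);
CGColorSpaceRelease(colorSpace);
free(buffer);
```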

You might also want to switch from structure-based access to byte-based access—i.e., pixels[(row*bytesPerRow)+col], incrementing col by the number of components per pixel. Endianness can easily become a headache when you use structures to access the components.
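In terms of the question's findLeftEdge method, the byte-based version of the loop might look like this (outputBitmap and the dimension variables are the ones already declared there; treating pixel[0] as red is still an assumption that has to be checked against the rep's actual format):

```objectivec
// Byte-based access: no struct, so the component count and byte order
// are explicit rather than baked into a typedef.
unsigned char *pixels = [outputBitmap bitmapData];
NSInteger bytesPerRow = [outputBitmap bytesPerRow];
NSInteger samplesPerPixel = [outputBitmap samplesPerPixel];

for (row = 0; row < heightInPixels; row++) {
    for (column = 0; column < widthInPixels; column++) {
        unsigned char *pixel = pixels + (row * bytesPerRow)
                                      + (column * samplesPerPixel);
        // pixel[0] is the red byte only if the rep really is RGBA-ordered;
        // inspect [outputBitmap bitmapFormat] before relying on that
        unsigned char red = pixel[0];
        if (leftEdge[row].pixelValue < red) {
            leftEdge[row].pixelValue = red;
            leftEdge[row].pixelNumber = column;
        }
    }
}
```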

空气里的味道 2024-09-12 14:29:20

for (row = 0; row < heightInPixels; row++)
  for (column = 0; column < widthInPixels; column++) {
    if (row == 1340) {  //just check this one row, that I know what to expect
        NSLog(@"Row 1340 column %d pixel redByte of pixel is %d",column,thisPixel->redByte);
  }
}

Aside from the syntax error, this code doesn't work because you never assigned to thisPixel. You are looping through indexes for nothing, since you never actually look up a pixel value at those indexes and assign it to thisPixel in order to inspect it.

Add such an assignment before the NSLog statement.

Furthermore, if the only row you care about is 1340, there's no need to loop through rows. Check using an if statement whether 1340 is less than the height, and if it is, then do only the columns loop. (Also, don't embed magic number literals like this in your code. Give that constant a name that explains the significance of the number 1340—i.e., why it's the only row you care about.)
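Putting both corrections together, the debugging snippet might look like this (kRowOfInterest is a name invented here for the magic number; thisPixel, bitmapPixels, and the dimension variables are the ones from the question's findLeftEdge method):

```objectivec
static const int kRowOfInterest = 1340;  // the row whose contents are known in advance

if (kRowOfInterest < heightInPixels) {
    for (column = 0; column < widthInPixels; column++) {
        // the missing assignment: look up the pixel before inspecting it
        thisPixel = &bitmapPixels[(widthInPixels * kRowOfInterest) + column];
        NSLog(@"Row %d column %d redByte is %d",
              kRowOfInterest, column, thisPixel->redByte);
    }
}
```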
