How to display an image on an MKOverlayView?

Posted 2024-09-26 20:26:04

UPDATE:

Images that are projected onto the MKMapView using an MKOverlayView use the Mercator projection, while the image that I use as input data uses a WGS84 projection. Is there a way to convert the input image to the right projection, WGS84 -> Mercator, without tiling the image up, and can it be done on the fly?

Normally you could convert an image to the right projection using the program gdal2tiles.
The input data, however, changes every fifteen minutes, so the image would have to be converted every fifteen minutes; the conversion has to happen on the fly. I also want the tiling to be done by MapKit, and not by myself using gdal2tiles or the GDAL framework.

UPDATE END

I'm currently working on a project which displays a rainfall radar over some part of the world. The radar image is provided by EUMETSAT; they offer a KML file which can be loaded into Google Earth or Google Maps. If I load the KML file in Google Maps it displays perfectly, but if I draw the image using an MKOverlayView on an MKMapView, the image is slightly off.

For example, the left side shows Google Maps, and on the right side the same image is displayed on an MKMapView.

[Image: the overlay rendered in Google Maps]

[Image: the same overlay rendered on an MKMapView]

The surface that the image covers can be viewed on Google Maps; the satellite that is used for the image is the "Meteosat 0 Degree" satellite.

The surface that both images cover is of the same size. This is the LatLonBox from the KML file; it specifies where the top, bottom, right, and left sides of the bounding box for the ground overlay are aligned.

  <LatLonBox id="GE_MET0D_VP-MPE-latlonbox">
        <north>57.4922</north>
        <south>-57.4922</south>
        <east>57.4922</east>
        <west>-57.4922</west>
        <rotation>0</rotation>
  </LatLonBox>

I create a new custom MKOverlay object called RadarOverlay with these parameters,

[[RadarOverlay alloc] initWithImageData:[[self.currentRadarData objectAtIndex:0] valueForKey:@"Image"] withLowerLeftCoordinate:CLLocationCoordinate2DMake(-57.4922, -57.4922) withUpperRightCoordinate:CLLocationCoordinate2DMake(57.4922, 57.4922)];

The implementation of the custom MKOverlay object; RadarOverlay

- (id)initWithImageData:(NSData *)imageData withLowerLeftCoordinate:(CLLocationCoordinate2D)lowerLeftCoordinate withUpperRightCoordinate:(CLLocationCoordinate2D)upperRightCoordinate
{
    if ((self = [super init]))
    {
        self.radarData = imageData;

        // Convert the corner coordinates into Mercator map points
        MKMapPoint lowerLeft = MKMapPointForCoordinate(lowerLeftCoordinate);
        MKMapPoint upperRight = MKMapPointForCoordinate(upperRightCoordinate);

        // The map rect origin is the upper-left corner; y grows southwards
        mapRect = MKMapRectMake(lowerLeft.x, upperRight.y, upperRight.x - lowerLeft.x, lowerLeft.y - upperRight.y);
    }
    return self;
}

- (CLLocationCoordinate2D)coordinate
{
     return MKCoordinateForMapPoint(MKMapPointMake(MKMapRectGetMidX(mapRect), MKMapRectGetMidY(mapRect)));
}

- (MKMapRect)boundingMapRect
{
     return mapRect;
}

The implementation of the custom MKOverlayView, RadarOverlayView

- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    RadarOverlay* radarOverlay = (RadarOverlay*) self.overlay;

    UIImage *image          = [[UIImage alloc] initWithData:radarOverlay.radarData];

    CGImageRef imageReference = image.CGImage;

    MKMapRect theMapRect = [self.overlay boundingMapRect];
    CGRect theRect       = [self rectForMapRect:theMapRect];
    CGRect clipRect      = [self rectForMapRect:mapRect];

    NSUserDefaults *preferences = [NSUserDefaults standardUserDefaults];
    CGContextSetAlpha(context, [preferences floatForKey:@"RadarTransparency"]);

    CGContextAddRect(context, clipRect);
    CGContextClip(context);

    CGContextDrawImage(context, theRect, imageReference);

    [image release]; 
}

When I download the image, I flip it so that it can be drawn easily in the MKOverlayView:

size_t width    = (CGImageGetWidth(imageReference) / self.scaleFactor);
size_t height   = (CGImageGetHeight(imageReference) / self.scaleFactor);

// Get the color space of the source image
CGColorSpaceRef imageColorSpace = CGImageGetColorSpace(imageReference);

// Allocate and clear memory for the data of the image
unsigned char *imageData = (unsigned char*) malloc(height * width * 4);
memset(imageData, 0, height * width * 4);

// Define the rect for the image
CGRect imageRect;
if(image.imageOrientation==UIImageOrientationUp || image.imageOrientation==UIImageOrientationDown) 
    imageRect = CGRectMake(0, 0, width, height); 
else 
    imageRect = CGRectMake(0, 0, height, width); 

// Create the bitmap context, specifying the color space and the buffer that stores the pixel data
CGContextRef imageContext = CGBitmapContextCreate(imageData, width, height, 8, width * 4, imageColorSpace, kCGImageAlphaPremultipliedLast);

CGContextSaveGState(imageContext);

// Flip the context vertically so the image can be drawn more easily with CGContextDrawImage
CGContextTranslateCTM(imageContext, 0, height);
CGContextScaleCTM(imageContext, 1.0, -1.0);

if(image.imageOrientation==UIImageOrientationLeft) 
{
    CGContextRotateCTM(imageContext, M_PI / 2);
    CGContextTranslateCTM(imageContext, 0, -width);
}
else if(image.imageOrientation==UIImageOrientationRight) 
{
    CGContextRotateCTM(imageContext, - M_PI / 2);
    CGContextTranslateCTM(imageContext, -height, 0);
} 
else if(image.imageOrientation==UIImageOrientationDown) 
{
    CGContextTranslateCTM(imageContext, width, height);
    CGContextRotateCTM(imageContext, M_PI);
}

// Draw the image in the context
CGContextDrawImage(imageContext, imageRect, imageReference);
CGContextRestoreGState(imageContext);

After flipping the image, I manipulate it and then store it in memory as an NSData object.

It looks like the image got stretched, but it looks all right at the center of the image, which lies at the equator.

Comments (6)

北凤男飞 2024-10-03 20:26:04

Have you already seen "Session 127 - Customizing Maps with Overlays" from the WWDC 2010 videos? One of the examples takes earthquake data, which gives the earthquake risk for 0.5 by 0.5 degree areas and maps them. Your radar data looks similar, based on squares. The sample code has a full application called HazardMaps, which takes this data and creates an overlay using MKMapPoints. If you haven't already seen this video, I think it will give you plenty of useful information. He also talks about converting to the Mercator projection.

Another thing to check is which coordinate system (datum) the data from EUMETSAT is in. Google Maps uses a system called WGS-84, which is a general standard, but there are many other standards that give more accurate positions in different parts of the world. If you use latitude and longitude from a different standard in Google Maps, all your points will be off by a certain amount. The offset is not consistent; it changes as you move around the map. It's possible that Google Maps is being smart about the data and converting to WGS-84 on the fly.

You might find out more details by looking at the KML. I looked but couldn't find the final KML, with the rectangles. Perhaps it gives information about what coordinate system it's using in the metadata.

相守太难 2024-10-03 20:26:04

I'm not sure if this would affect the scaling issue, but in your OverlayView code you're drawing the entire image for every map tile.

Have you tried only drawing the portion of the image that is visible in mapRect?

When I've had problems with MKOverlayViews, it's been helpful for me to draw the rect of the overlay (self.overlay.boundingRect) and the mapRect (passed into drawMapRect). That said, I'm not sure whether drawing the mapRect would be helpful in your situation.

At any rate, here's the function I use to draw the rect, in case you want to try it out:

-(void)drawRect:(MKMapRect)rect inContext:(CGContextRef)context withLineWidth:(float)lineWidth andColor:(CGColorRef)color
{

    double maxx = MKMapRectGetMaxX(rect);
    double minx = MKMapRectGetMinX(rect);
    double maxy = MKMapRectGetMaxY(rect);
    double miny = MKMapRectGetMinY(rect);

    CGPoint tr = [self pointForMapPoint:(MKMapPoint) {maxx, maxy}];
    CGPoint br = [self pointForMapPoint:(MKMapPoint) {maxx, miny}];
    CGPoint bl = [self pointForMapPoint:(MKMapPoint) {minx, miny}];
    CGPoint tl = [self pointForMapPoint:(MKMapPoint) {minx, maxy}];

    CGMutablePathRef cgPath = CGPathCreateMutable();
    CGPathMoveToPoint(cgPath, NULL, tr.x, tr.y);
    CGPathAddLineToPoint(cgPath, NULL, br.x, br.y);
    CGPathAddLineToPoint(cgPath, NULL, bl.x, bl.y);
    CGPathAddLineToPoint(cgPath, NULL, tl.x, tl.y);
    CGPathAddLineToPoint(cgPath, NULL, tr.x, tr.y); 

    CGContextSaveGState(context);
    CGContextAddPath(context, cgPath);
    CGContextSetStrokeColorWithColor(context, color);
    CGContextSetLineJoin(context, kCGLineJoinRound);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, lineWidth);
    CGContextStrokePath(context);
    CGPathRelease(cgPath);  
    CGContextRestoreGState(context);
}

寄意 2024-10-03 20:26:04

My guess is that Google Maps is stretching the image non-linearly to compensate for the map projection and that your code/Apple's code isn't.

One possible solution would be to subdivide the overlay image into smaller rectangles and call MKMapPointForCoordinate() separately for each rectangle. Then the data will be much closer to being correct.

打小就很酷 2024-10-03 20:26:04

WGS-84 data uses the UTM projection, while MKMapView uses Mercator. You are using two different methods of mapping a 3D object onto a 2D surface, so they do not give the same results.

You'll need to drop down into GDAL and move the incoming image to a different projection. Let me know if you figure it out, because it ain't easy.

你不是我要的菜∠ 2024-10-03 20:26:04

Your image/overlay is most likely no longer proportional (i.e. it's been stretched). I've seen this kind of thing in my own app where the center of the view is correct, but as you go away from the equator toward either pole (top and bottom of screen) the map/overlay becomes increasingly distorted.

You are obviously scaling your image by scaleFactor. I'd take a look at that for starters.

size_t width    = (CGImageGetWidth(imageReference) / self.scaleFactor);
size_t height   = (CGImageGetHeight(imageReference) / self.scaleFactor);

Another good way to test your code to see if scaling is the culprit, is to scale your MKMapView down to the size of the overlay image. Leave the overlay image alone, and if it is no longer distorted then you know that is the problem.

雄赳赳气昂昂 2024-10-03 20:26:04

Apple has a sample app from WWDC that parses KML and displays it on a map. If you are a paid developer you can access it from the WWDC videos page in iTunes. I recommend using their parser.
