Store location info myself, or use a third-party source?
I'm working on a location-based web app (learning purposes), where users will rate local businesses. I then want users to be able to see local businesses, based on where they live and a given range (i.e., businesses within 10 miles of 123 Street. City St, 12345).
I'm wondering what I should use for location info: some 3rd-party source (like Google's Geocoding API), or hosting my own location database? I know of zip-code databases that come with the lat/lon of each zip code along with other data, but these databases are often incomplete, and definitely not global.
I know that most APIs set usage limits, which may be a deciding factor. I suppose what I could do is store all data retrieved from Google in a database of my own, so that I never make the same query twice.
What do you guys suggest? I've tried looking at existing answers on SO, but nothing helped.
EDIT: To be clear, I need a way to find all businesses that fall within a certain range of a given location. Is there a service that I could use to do this (i.e., return all cities, zips, etc. that fall within range of a given location)?
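For what it's worth, once each business has a stored lat/lon, the "within 10 miles" part doesn't need a service at all; you can compute great-circle distances yourself. Here's a minimal sketch using the haversine formula (the function names and the `(name, lat, lon)` tuple shape are my own illustration, not from any particular API):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def businesses_within(origin, businesses, max_miles=10):
    """Filter (name, lat, lon) tuples to those within max_miles of origin."""
    lat0, lon0 = origin
    return [b for b in businesses
            if haversine_miles(lat0, lon0, b[1], b[2]) <= max_miles]
```

For a real app you'd push this into the database (e.g. a bounding-box index plus a distance check) rather than scanning every row in application code, but the math is the same.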
Storing the data you retrieve in a local cache is always a good idea. It will reduce lag and keep from taxing whatever API you are using. It can also help keep you under usage limits as you stated. You can always place size limits on that cache and clear it out as it ages if the need arises.
Using an API means that you'll only be pulling data for the sites you need information on, versus buying a bunch of data and having to load/host it all yourself (these datasets tend to get huge). I suggest using an API + caching.
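The cache-in-front-of-the-API idea can be sketched very simply. This is a hypothetical example (the table name, `geocode` function, and `fetch_from_api` callback are my own placeholders, not part of any real geocoding library); the point is just that the API is only hit on a cache miss:

```python
import sqlite3

# Simple persistent cache: address -> lat/lon, consulted before any API call.
conn = sqlite3.connect(":memory:")  # use a file path in a real app
conn.execute(
    "CREATE TABLE geocode_cache (address TEXT PRIMARY KEY, lat REAL, lon REAL)"
)

def geocode(address, fetch_from_api):
    """Return (lat, lon) for an address, hitting the API only on a cache miss."""
    row = conn.execute(
        "SELECT lat, lon FROM geocode_cache WHERE address = ?", (address,)
    ).fetchone()
    if row:
        return row  # cache hit: no API call, no quota consumed
    lat, lon = fetch_from_api(address)  # e.g. a call to Google's Geocoding API
    conn.execute(
        "INSERT INTO geocode_cache (address, lat, lon) VALUES (?, ?, ?)",
        (address, lat, lon),
    )
    return (lat, lon)
```

You could add a timestamp column and evict old rows if you're worried about stale data or cache size, as mentioned above.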