What is the real-world accuracy of phone accelerometers when used for positioning?
I am working on an application where I would like to track the position of a mobile user inside a building where GPS is unavailable. The user starts at a well-known fixed location (accurate to within 5 centimeters), at which point the accelerometer in the phone is activated to track any further movement relative to that fixed location. My question is: with the accelerometers that current-generation smartphones (iPhones, Android phones, etc.) generally come equipped with, how accurately can one expect to track somebody's position?
Specific examples would be good, such as "If I move 50 meters along X from the starting point, 35 meters along Y and 5 meters along Z, I can expect my location to be approximated to within +/- 80 centimeters on most current smartphones", or whatever.
I have only a superficial understanding of techniques like Kalman filters for correcting drift, but if such techniques are relevant to my application, and someone wants to describe the quality of the correction I might get from them, that would be a plus.
4 Answers
If you integrate the accelerometer values twice you get position, but the error is horrible. It is useless in practice.
Here is an explanation of why (Google Tech Talk), at 23:20.
I answered a similar question.
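To make the drift concrete, here is a minimal sketch (not from the linked talk) of what double integration does to the readings of a phone that is not moving at all; the 0.02 m/s^2 bias and 100 Hz sample rate are assumed values, chosen only as a plausible order of magnitude for consumer MEMS parts.

```python
# Minimal sketch (assumed values): why double-integrating accelerometer
# readings drifts. The phone is stationary, but the sensor reports a small
# constant bias of 0.02 m/s^2.
dt = 0.01          # 100 Hz sample rate
bias = 0.02        # m/s^2, constant measurement error
velocity = 0.0
position = 0.0

for step in range(1, 60 * 100 + 1):    # integrate for 60 seconds
    accel = bias                        # true acceleration is zero
    velocity += accel * dt              # first integration: velocity
    position += velocity * dt           # second integration: position
    if step % (10 * 100) == 0:
        print(f"t = {step * dt:5.1f} s   position error = {position:7.2f} m")

# The error grows roughly as 0.5 * bias * t^2: after 60 s the estimated
# position is already ~36 m off even though the phone never moved.
```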
I don't know if this thread is still open, or whether you are still attempting this approach, but I can at least give some input, since I tried the same thing.
As Ali said... it's horrible! The smallest measurement error in the accelerometer turns out to be ridiculous after double integration. And because acceleration constantly increases and decreases while walking (with each footstep, in fact), this error accumulates quickly over time.
Sorry for the bad news. I didn't want to believe it either, until I tried it myself... filtering out unwanted measurements doesn't work either.
I have another approach that may be feasible, if you're interested in continuing your project (it's the approach I followed for my computer engineering degree thesis)... image processing!
You basically follow the theory behind optical mice: optical flow, or as some call it, ego-motion. The image processing algorithms are implemented in the Android NDK; I even used OpenCV through the NDK to simplify the algorithms. You convert the images to grayscale (to compensate for different light intensities), then apply thresholding and image enhancement (to compensate for images getting blurred while walking), then corner detection (to improve the accuracy of the overall estimate), then template matching, which does the actual comparison between image frames and estimates the displacement in pixels.
You then go through trial and error to work out how many pixels correspond to what distance, and multiply by that value to convert pixel displacement into actual displacement. This only works up to a certain movement speed, though; the real problem is that the camera images still get too blurred for accurate comparison while walking. This can be improved by adjusting the camera shutter speed or ISO (I'm still playing around with this).
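A rough sketch of that pipeline, using OpenCV's Python bindings rather than the Android NDK code the answer describes; the frame filenames and the PIXELS_PER_METRE calibration constant are placeholders standing in for the trial-and-error value mentioned above.

```python
# Rough sketch of the grayscale -> enhancement -> corner detection ->
# template matching pipeline described above, in OpenCV's Python bindings.
# Frame filenames and PIXELS_PER_METRE are assumptions for illustration.
import cv2
import numpy as np

PIXELS_PER_METRE = 400.0  # calibration constant found by trial and error

def estimate_displacement(prev_bgr, curr_bgr):
    """Estimate scene displacement (in pixels) between two frames."""
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)   # compensate lighting
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    prev = cv2.equalizeHist(prev)                       # simple enhancement
    curr = cv2.equalizeHist(curr)

    # Corner detection: pick one strong, textured patch to track.
    corners = cv2.goodFeaturesToTrack(prev, maxCorners=1,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return None
    x, y = corners[0].ravel().astype(int)

    # Cut a template around the corner and find it again in the next frame.
    half = 16
    tx, ty = max(x - half, 0), max(y - half, 0)
    template = prev[ty:y + half, tx:x + half]
    result = cv2.matchTemplate(curr, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (best_x, best_y) = cv2.minMaxLoc(result)
    return best_x - tx, best_y - ty

prev_frame = cv2.imread("frame_000.png")   # placeholder filenames
curr_frame = cv2.imread("frame_001.png")
shift = estimate_displacement(prev_frame, curr_frame)
if shift is not None:
    dx, dy = shift
    print(f"pixel shift ({dx}, {dy}) -> "
          f"{np.hypot(dx, dy) / PIXELS_PER_METRE:.3f} m")
```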
So I hope this helps... otherwise google "ego-motion" for real-time applications. Eventually you'll find the right material and figure out the gibberish I just explained to you.
Enjoy :)
The optical approach is good. OpenCV provides a few feature transforms, and you can then do feature matching (OpenCV provides this too).
Without a second point of reference (two cameras) you can't directly reconstruct where you are, because of depth. At best you can estimate a depth per point, assume a motion, score that assumption over a few frames, and re-guess each depth and the motion until it makes sense. That isn't hard to code, but it isn't stable: small motions of objects in the scene throw it off. I tried :)
With a second camera, though, it's not hard at all. But cell phones don't have one.
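For the feature-transform and feature-matching step mentioned above, here is a minimal sketch using OpenCV's ORB detector and a brute-force matcher; ORB is one possible choice of feature, and the image filenames are placeholders.

```python
# Minimal sketch of feature matching between two frames with OpenCV's ORB
# detector and a brute-force matcher. Image filenames are placeholders.
import cv2

img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)           # the "feature transform"
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps only
# mutually-best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

good = matches[:50]
if good:
    # Average pixel shift of the best matches: a crude single-camera motion
    # cue. This is where the depth ambiguity above bites -- without a second
    # camera you don't know how far away the matched points actually are.
    dx = sum(kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0] for m in good) / len(good)
    dy = sum(kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1] for m in good) / len(good)
    print(f"mean pixel shift between frames: ({dx:.1f}, {dy:.1f})")
```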
Typical phone accelerometer chips resolve +/- 2 g at 12 bits, i.e. 4096 counts over the full range, or roughly 0.031 ft/s^2 (about 1 mg) per LSB. The sampling rate depends on clock speeds and the overall configuration; typical configurations allow between 1 and 400 samples per second, with faster rates offering lower accuracy. Unless you mount the phone on a snail, displacement measurement will likely not work for you. You might consider using optical distance measurement instead of the phone's accelerometer. Check out the Panasonic device EKMB1191111.
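As a quick back-of-the-envelope check of that resolution figure (assuming the +/- 2 g range and 12-bit converter quoted above):

```python
# Back-of-the-envelope resolution for a +/-2 g, 12-bit accelerometer
# (range and bit depth taken from the answer above; g in ft/s^2).
G_FT_S2 = 32.174
full_range_g = 4.0            # +/-2 g span
counts = 2 ** 12              # 12-bit converter -> 4096 counts
lsb_ft_s2 = full_range_g * G_FT_S2 / counts
print(f"1 LSB ~= {lsb_ft_s2:.4f} ft/s^2 "
      f"({full_range_g / counts * 1000:.2f} mg)")
# ~0.0314 ft/s^2 (~0.98 mg) per LSB -- and after double integration even
# this small quantization/bias error accumulates into metres within seconds.
```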