How to calibrate the Android accelerometer, reduce noise, and remove gravity

Posted 2024-10-15 06:20:02


So, I've been struggling with this problem for some time, and haven't had any luck tapping the wisdom of the internets and related SO posts on the subject.

I am writing an Android app that uses the ubiquitous Accelerometer, but I seem to be getting an incredible amount of "noise" even while at rest, and can't seem to figure out how to deal with it as my readings need to be relatively accurate. I thought that maybe my phone (HTC Incredible) was dysfunctional, but the sensor seems to work well with other games and apps I've played.

I've tried to use various "filters" but I can't seem to wrap my mind around them. I understand that gravity must be dealt with in some way, and maybe that's where I am going wrong. Currently I have tried this, adapted from an SO answer, which refers to an example from the iPhone SDK:

    // Low-pass filter: accel[] tracks the slowly-varying (gravity) component.
    accel[0] = event.values[0] * kFilteringFactor + accel[0] * (1.0f - kFilteringFactor);
    accel[1] = event.values[1] * kFilteringFactor + accel[1] * (1.0f - kFilteringFactor);

    // High-pass result: subtracting the smoothed value leaves the rapid changes.
    double x = event.values[0] - accel[0];
    double y = event.values[1] - accel[1];

The poster says to "play with" the kFilteringFactor value (kFilteringFactor = 0.1f in the example) until satisfied. Unfortunately I still seem to get a lot of noise, and all this seems to do is make the readings come in as tiny decimals, which doesn't help me all that much, and it appears to just make the sensor less sensitive. The math centers of my brain are also atrophied from years of neglect, so I don't completely understand how this filter is working.
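To make the filter concrete, here is a minimal, self-contained Java sketch of the same single-pole low-pass filter outside of Android (the class name and sample values are illustrative, not from the original post). The running estimate converges toward the steady component of the signal, and the subtraction leaves the fast residual, which is why a resting phone produces tiny values:

```java
// Standalone sketch of the low-pass filter from the snippet above.
// kFilteringFactor (alpha) controls how much each new sample moves the estimate:
// a small alpha means heavy smoothing but slow response; a large alpha, the opposite.
public class LowPassDemo {
    static final float ALPHA = 0.1f; // plays the role of kFilteringFactor

    // Returns the updated running low-pass estimate (the slow "gravity" component).
    static float lowPass(float current, float previousEstimate) {
        return current * ALPHA + previousEstimate * (1.0f - ALPHA);
    }

    public static void main(String[] args) {
        float[] samples = {9.7f, 9.9f, 9.8f, 12.0f, 9.8f}; // noisy one-axis readings
        float estimate = samples[0]; // seed with the first sample
        for (float s : samples) {
            estimate = lowPass(s, estimate);
            float residual = s - estimate; // the fast, high-pass component
            System.out.printf("smoothed=%.3f residual=%.3f%n", estimate, residual);
        }
    }
}
```

With alpha at 0.1 the estimate barely moves per sample, so at rest the residual hovers near zero; that is the "tiny decimals" effect described above, not a loss of sensitivity so much as the removal of the steady component.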

Can someone explain to me in some detail how to go about getting a useful reading from the accelerometer? A succinct tutorial would be an incredible help, as I haven't found a really good one (at least aimed at my level of knowledge). I get frustrated because I feel like all of this should be more apparent to me. Any help or direction would be greatly appreciated, and of course I can provide more samples from my code if needed.

I hope I'm not asking to be spoon-fed too much; I wouldn't be asking unless I'd been trying to figure it out for a while. It also looks like there is some interest from other SO members.


Comments (3)

一刻暧昧 2024-10-22 06:20:02


To get a useful reading from the accelerometer you need the magnitude of the acceleration vector: magnitude = SQRT(x*x + y*y + z*z). Using this, when the phone is at rest the magnitude will be that of gravity, about 9.8 m/s². So if you subtract that (SensorManager.GRAVITY_EARTH), then when the phone is at rest you will have a reading of roughly 0 m/s². (Note this is an acceleration, not a speed.) As for noise, Blrfl might be right about cheap accelerometers; even when my phone is at rest, the reading continuously flickers by a few tenths of a m/s². You could just set a small threshold, e.g. 0.4 m/s², and if the magnitude doesn't go over that, treat the phone as at rest.
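A minimal sketch of that threshold check in plain Java (the class name and threshold are illustrative; on Android, `SensorManager.GRAVITY_EARTH` provides the gravity constant used here):

```java
// Sketch: acceleration-vector magnitude minus gravity, with a small rest threshold.
public class RestDetector {
    static final double GRAVITY_EARTH = 9.80665; // m/s^2, as in SensorManager.GRAVITY_EARTH
    static final double REST_THRESHOLD = 0.4;    // m/s^2, tune per device

    // How far the measured magnitude deviates from plain gravity.
    static double linearMagnitude(double x, double y, double z) {
        return Math.abs(Math.sqrt(x * x + y * y + z * z) - GRAVITY_EARTH);
    }

    static boolean isAtRest(double x, double y, double z) {
        return linearMagnitude(x, y, z) < REST_THRESHOLD;
    }

    public static void main(String[] args) {
        // Phone lying flat: gravity shows up almost entirely on the z axis.
        System.out.println(isAtRest(0.05, -0.02, 9.79)); // near-zero deviation: at rest
        System.out.println(isAtRest(3.0, 0.5, 9.8));     // deviation above threshold: moving
    }
}
```

Note the magnitude is orientation-independent, which is why subtracting GRAVITY_EARTH works without knowing which axis gravity is on; the trade-off is that you lose direction information.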

甜嗑 2024-10-22 06:20:02


Partial answer:

Accuracy. If you're looking for high accuracy, the inexpensive accelerometers you find in handsets won't cut the mustard. For comparison, a three-axis sensor suitable for industrial or scientific use runs north of $1,500 for just the sensor; adding the hardware to power it and turn its readings into something a computer can use doubles the price. The sensor in a handset runs well below $5 in quantity.

Noise. Cheap sensors are inaccurate, and inaccuracy translates to noise. An inaccurate sensor that isn't moving won't always show zeros; it will show values on either side of zero within some range. About the best you can do is characterize the sensor while motionless to get some idea of how noisy it is, and use that to round your measurements to a less-precise scale based on the expected error. (In other words, if it's within ±x m/s^2 of zero, it's safe to say the sensor's not moving, but you can't be precisely sure, because it could be moving very slowly.) You'll have to do this on every device, because they don't all use the same accelerometer and they all behave differently. I guess that's one advantage the iPhone has: the hardware's pretty much homogeneous.

Gravity. There's some discussion in the SensorEvent documentation about factoring gravity out of what the accelerometer says. You'll notice it bears a lot of similarity to the code you posted, except that it's clearer about what it's doing. :-)
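As a rough sketch of the pattern the SensorEvent documentation describes (the class name and alpha value here are illustrative, not the documentation's exact code): low-pass each axis to estimate gravity, then subtract it to get linear acceleration.

```java
// Hedged sketch of the gravity/linear-acceleration split described in the
// SensorEvent docs. In a real app, update() would be fed event.values from
// onSensorChanged(); here it takes a plain float[3] so it runs anywhere.
public class GravitySplit {
    static final float ALPHA = 0.8f; // smoothing factor; tune per device

    final float[] gravity = new float[3]; // slow component, per axis
    final float[] linear = new float[3];  // fast component, per axis

    void update(float[] values) {
        for (int i = 0; i < 3; i++) {
            // Isolate the slowly-changing gravity contribution on this axis...
            gravity[i] = ALPHA * gravity[i] + (1 - ALPHA) * values[i];
            // ...and treat whatever is left as actual motion.
            linear[i] = values[i] - gravity[i];
        }
    }
}
```

This is the same filter as in the question's snippet, just applied per axis with the roles of the two outputs named explicitly.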

HTH.

心清如水 2024-10-22 06:20:02


How do you deal with jitteriness? You smooth the data. Instead of taking the sequence of values from the sensor as your values, you average them on an ongoing basis, and the new sequence formed becomes the values you use. This moves each jittery value closer to the moving average. Averaging necessarily gets rid of quick variations in adjacent values, which is why people use the term low-(frequency-)pass filtering: data that originally may have varied a lot per sample (or unit time) now varies more slowly.

E.g., instead of using the values 10 6 7 11 7 10 directly, you can average them in many ways. For example, we can compute the next value from an equal weighting of the running average (i.e., your last processed data point) and the next raw data point. Using a 50-50 mix for the above numbers, we'd get 10, 8, 7.5, 9.25, 8.125, 9.0625. This new sequence, our processed data, would be used in lieu of the noisy data. And we could of course use a different mix than 50-50.
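That 50-50 running average can be sketched in a few lines of Java (the class name is made up for illustration); run against the example sequence it reproduces the numbers above:

```java
import java.util.Arrays;

// Equal-weight running average: each output is the mean of the previous
// output and the next raw sample.
public class RunningAverage {
    static double[] smooth(double[] raw) {
        double[] out = new double[raw.length];
        double avg = raw[0]; // seed with the first raw value
        out[0] = avg;
        for (int i = 1; i < raw.length; i++) {
            avg = 0.5 * avg + 0.5 * raw[i];
            out[i] = avg;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] s = smooth(new double[]{10, 6, 7, 11, 7, 10});
        System.out.println(Arrays.toString(s));
        // [10.0, 8.0, 7.5, 9.25, 8.125, 9.0625]
    }
}
```

Changing the 0.5/0.5 weights to, say, 0.9/0.1 gives the same structure as the question's kFilteringFactor filter: more weight on the previous average means heavier smoothing.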

As an analogy, imagine you are reporting where a certain person is located using only your eyesight. You have a good view of the wider landscape, but the person is engulfed in a fog. You will see pieces of the body that catch your attention (a moving left hand, a right foot, shine off eyeglasses, etc.) that are jittery, BUT each value is fairly close to the true center of mass. If we run some sort of running average, we get values that approach the center of mass of that target as it moves through the fog, and that are in effect more accurate than the values we (the sensor) reported, which were made noisy by the fog.

Now it seems like we are losing potentially interesting data to get a boring curve. It makes sense though. If we are trying to recreate an accurate picture of the person in the fog, the first task is to get a good smooth approximation of the center of mass. To this we can then add data from a complementary sensor/measuring process. For example, a different person might be up close to this target. That person might provide very accurate description of the body movements, but might be in the thick of the fog and not know overall where the target is ending up. This is the complementary position to what we first got -- the second data gives detail accurately without a sense of the approximate location. The two pieces of data would be stitched together. We'd low pass the first set (like your problem presented here) to get a general location void of noise. We'd high pass the second set of data to get the detail without unwanted misleading contributions to the general position. We use high quality global data and high quality local data, each set optimized in complementary ways and kept from corrupting the other set (through the 2 filterings).

Specifically, we'd mix in gyroscope data -- data that is accurate in the local detail of the "trees" but gets lost in the forest (drifts) -- into the data discussed here (from accelerometer) which sees the forest well but not the trees.

To summarize, we low-pass the data from sensors that are jittery but stay close to the "center of mass". We combine this base smooth value with data that is accurate in the detail but drifts, so this second set is high-pass filtered. We get the best of both worlds as we process each group of data to clean it of its incorrect aspects. For the accelerometer, we smooth/low-pass the data by running some variation of a running average on its measured values. If we were treating the gyroscope data, we'd do math that effectively keeps the detail (accepts deltas) while rejecting the accumulated error that would eventually grow and corrupt the accelerometer's smooth curve. How? Essentially, we use the actual gyro values (not averages), but only a small number of samples (of deltas) apiece when deriving our total final clean values. Using a small number of deltas keeps the overall curve mostly along the same averages tracked by the low-pass stage (the averaged accelerometer data), which forms the bulk of each final data point.
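The fusion idea described above is often written as a one-line complementary filter. Here is a minimal 1-D sketch in Java (the class name and the 0.98 constant are illustrative, not from the original post): the gyro term is effectively high-passed, the accelerometer term low-passed, and the weighted sum is the fused estimate.

```java
// Minimal 1-D complementary filter for a tilt angle.
public class ComplementaryFilter {
    static final double ALPHA = 0.98; // trust the gyro short-term, the accel long-term

    // angle:      previous fused estimate (degrees)
    // gyroRate:   angular velocity from the gyro (degrees/second) -- accurate deltas, drifts
    // accelAngle: angle derived from the accelerometer -- noisy, but drift-free
    // dt:         time step between samples (seconds)
    static double update(double angle, double gyroRate, double accelAngle, double dt) {
        return ALPHA * (angle + gyroRate * dt) + (1.0 - ALPHA) * accelAngle;
    }

    public static void main(String[] args) {
        double angle = 0.0;
        // Device held still at 10 degrees: gyro reads ~0, accel reads a noisy ~10.
        double[] accelAngles = {9.5, 10.4, 10.1, 9.8, 10.2};
        for (double a : accelAngles) {
            angle = update(angle, 0.0, a, 0.02);
            System.out.printf("fused angle: %.3f%n", angle);
        }
        // The estimate creeps toward 10 degrees without the accelerometer
        // jitter passing straight through.
    }
}
```

The small (1 - ALPHA) weight on the accelerometer is the "small number of deltas" intuition above: each step the gyro integration is gently pulled back toward the drift-free accelerometer reference, so neither the jitter nor the drift survives.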
