So I have an accelerometer which I want to use in an IMU.
When the device is tilted but stationary, I want the x and y values to be 0 - effectively negating the effect of gravity along the x and y axes of the accelerometer.
I have found lots of conflicting advice on the internet regarding rotation matrices and was wondering if anyone here can provide some clarity.
I understand accelerometers can only measure pitch and roll from their x, y, z readings, so how can I use these values to remove the gravity-vector component on the x and y axes?
Answer
This is a hard thing to do. If you know the magnitude of $g$ accurately, you can take the total acceleration observed by your accelerometer and subtract the "known" $g$; what you are left with is the difference vector. The problem is that the difference between two large vectors pointing in almost the same direction is a small vector with a large relative error.
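A quick numerical sketch of that error amplification (assuming a 2-D sensor, a perfectly known orientation, and a made-up 0.02 m/s² noise level - none of these numbers come from the question):

```python
import numpy as np

g = 9.81                               # assumed gravity magnitude, m/s^2
rng = np.random.default_rng(1)

a_true = np.array([0.05, 0.0])         # actual linear acceleration, m/s^2 (small)
gravity = np.array([0.0, -g])

# The accelerometer measures specific force (a - g), plus sensor noise
noise = rng.normal(0.0, 0.02, size=2)  # hypothetical 0.02 m/s^2 noise per axis
measured = (a_true - gravity) + noise

# Subtract the "known" g: the residual is tiny, but carries the full noise
a_est = measured + gravity
rel_err = np.linalg.norm(a_est - a_true) / np.linalg.norm(a_true)
print(f"estimate = {a_est}, relative error = {rel_err:.1%}")
```

The subtraction is exact, yet the relative error on the small residual is large because the noise is comparable to the signal that survives the subtraction.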
This is why it is preferable to have some independent information about the rotation of the sensor: if you have rotation sensing (not just linear acceleration measurement) you can integrate that to get the angular position; this helps improve the estimate of the orientation, and then it's easier to subtract the gravity vector.
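As an illustration of that idea, here is a minimal 2-D toy model (my own sketch, not a method from the answer): Euler-integrate a gyro rate to track the tilt angle, then subtract gravity expressed in the body frame. The sign conventions and the 100 Hz sample rate are assumptions.

```python
import numpy as np

g = 9.81
dt = 0.01                      # assumed 100 Hz sample rate

def remove_gravity(accel_body, omega, theta):
    """One IMU step: integrate the gyro rate to update the tilt angle,
    then subtract gravity expressed in the body frame. 2-D toy model."""
    theta = theta + omega * dt                  # Euler-integrate angular rate
    # Gravity components as seen in the tilted body frame
    g_body = np.array([g * np.sin(theta), g * np.cos(theta)])
    linear_accel = accel_body - g_body          # what's left is motion
    return linear_accel, theta

# Stationary sensor tilted 10 degrees: it reads exactly the body-frame gravity
theta = np.radians(10.0)
accel_body = np.array([g * np.sin(theta), g * np.cos(theta)])
lin, theta = remove_gravity(accel_body, omega=0.0, theta=theta)
print(lin)
```

With the orientation known from the gyro, the stationary case correctly yields near-zero linear acceleration; the hard part in practice is that the integrated angle drifts.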
For example, this answer assumes you know the rotation (orientation of the sensor), after which things are simple. But when you don't, the problem is very ill-posed unless the acceleration is large compared to $g$ (or at least "not small").
Details of the calculation can be found in this paper. If that doesn't answer your question, can you please be more specific in your question about the concept you are stuck on?
Update
Here is the reason that your problem is not easy to solve. For simplicity I will do this in 2D - it should be easy to see how to extend it to 3D (but that doesn't make it better-behaved).
I show two situations where the $a_x$ sensor records the same acceleration, but the $a_y$ sensor records a slightly different value. When the sensor is rotated and stationary, and the values are perfectly known, then
$$a_x^2+a_y^2=g^2\\ a_y = g\sqrt{1-\left(\frac{a_x}{g}\right)^2}$$
Now in the second case (not rotated, and accelerating), the reading from the $a_y$ sensor will be exactly equal to $g$. In the first case, we can only estimate the rotation from the fact that it is not quite $g$: from trigonometry, we see that
$$\cos\theta = \frac{a_y}{g}$$
so we expect the measured horizontal component of acceleration to be
$$a_x = g \sin\theta = g \sin\left(\cos^{-1}\frac{a_y}{g}\right)$$
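Numerically, the chain $a_y \to \theta \to a_x$ looks like this (Python sketch, assuming a 5° tilt and noiseless readings):

```python
import numpy as np

g = 9.81
theta_true = np.radians(5.0)             # assumed true tilt

a_y = g * np.cos(theta_true)             # what the stationary, tilted sensor reads
theta_est = np.arccos(a_y / g)           # invert cos(theta) = a_y / g
a_x_expected = g * np.sin(theta_est)     # a_x = g sin(arccos(a_y / g))

print(np.degrees(theta_est), a_x_expected)
```

With perfect readings the tilt and the horizontal gravity component are recovered exactly; the trouble below comes from how this inversion responds to small errors in $a_y$.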
If we want to do error propagation, we can start with the derivative of the $\cos^{-1}(x)$ function, which is $-\frac{1}{\sqrt{1-x^2}}$; by the chain rule, an error $\delta a_y$ in the measured $a_y$ produces an error of roughly $\frac{\delta a_y}{g\sqrt{1-(a_y/g)^2}}$ in the estimated angle.
For small angles of rotation, when $a_y\approx g$, the denominator $\sqrt{1-(a_y/g)^2}$ goes to zero and that error blows up - in other words, even with an accurate accelerometer it's hard to estimate $\theta$ when the angle is small. And that means you can't tell the difference between the two cases I drew very well - the uncertainty in $a_x$ will be very large. The problem becomes less important when the acceleration is very large, but for small (compared to $g$) accelerations this really won't work very well.
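A quick numerical check of that blow-up, propagating an assumed fixed error of 0.01 m/s² in $a_y$ through the arccos derivative at a few tilt angles:

```python
import numpy as np

g = 9.81
da_y = 0.01                               # assumed fixed error in a_y, m/s^2

theta_errors = {}
for tilt_deg in (30.0, 10.0, 2.0, 0.5):
    theta = np.radians(tilt_deg)
    x = np.cos(theta)                     # x = a_y / g at this tilt
    # |d theta / d a_y| = 1 / (g * sqrt(1 - x^2)), from the arccos derivative
    theta_errors[tilt_deg] = np.degrees(da_y / (g * np.sqrt(1.0 - x * x)))
    print(f"tilt {tilt_deg:5.1f} deg -> theta error ~ {theta_errors[tilt_deg]:.3f} deg")
```

The same sensor error that is negligible at a 30° tilt produces a tilt-angle error of several degrees near 0.5°, which is exactly the small-angle regime an IMU at rest sits in.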
This is why it is necessary to have some independent indication of orientation / rotation if you want to use an accelerometer for an IMU.