Monday, January 19, 2015

error analysis - Is propagation of uncertainties linear?


I'm in doubt about one thing: let's imagine that we have $n+1$ quantities, $n$ of them being directly measured and the other one being related to the first $n$ by a function $f:\mathbb{R}^n\to\mathbb{R}$. If $x_i$ is one of the quantities being directly measured and if $\Delta x_i$ is its uncertainty, then I've learned that the uncertainty of the quantity $y=f(x_1,\dots,x_n)$ is given by:


$$\Delta y = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\,\Delta x_i$$


My problem in understanding this is: well, if $f$ is such a function, its derivative is the linear transformation $df:\mathbb{R}^n\to\mathbb{R}$ given by:


$$df = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\,dx_i$$


where $dx_i:\mathbb{R}^n\to\mathbb{R}$ is given by $dx_i(a_1,\dots,a_n)=a_i$. Hence, what we are saying is that $\Delta y = df(\Delta x_1,\dots,\Delta x_n)$; in other words, the uncertainty of $y$ is given by a linear function of the vector of uncertainties of the quantities $x_i$. Why should that be true? I mean, what's the reasoning behind this? I really didn't get the point; my teacher just gave the equation without justifying it or anything like that.
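For concreteness, take a made-up example of my own, $f(x_1,x_2)=x_1x_2$: the rule above gives
$$\Delta y = \frac{\partial f}{\partial x_1}\,\Delta x_1 + \frac{\partial f}{\partial x_2}\,\Delta x_2 = x_2\,\Delta x_1 + x_1\,\Delta x_2,$$
which is exactly the linear map $df$ applied to the vector $(\Delta x_1,\Delta x_2)$.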


Any help and references are appreciated.



Answer




It's not, at least not in the statistical sense. What you are doing is finding the (linearly approximated) change in $y$ obtained by changing the inputs by their standard deviations. This is okay as a rough approximation, but you can do better with almost no extra work.


If you want the actual variance and standard deviation of $y$, the formula is different. Suppose
$$y = f(x_1,\dots,x_n) = f_0 + \sum_{i=1}^n a_i x_i + \sum_{i=1}^n \sum_{j=i}^n b_{ij}\, x_i x_j + O(x^3),$$
where $f_0$, $a_i$, and $b_{ij}$ are the Taylor coefficients of $f$.

First, we can compute the expectation of $y$, $E(y) \equiv \hat{y}$, very easily:
$$\hat{y} = f_0 + \sum_{i=1}^n a_i \hat{x}_i + \sum_{i=1}^n \sum_{j=i+1}^n b_{ij}\, \hat{x}_i \hat{x}_j + \sum_{i=1}^n b_{ii}\, E(x_i^2) + O(x^3).$$
This follows from the fact that expectations are linear. Note that here we assume $x_i$ is independent of $x_j$ unless, of course, $i=j$; this is how the expectation distributes over the product, $E(x_ix_j)=\hat{x}_i\hat{x}_j$. If your inputs are correlated, this simple analysis fails. Note also that this is very close to what you would guess the "mean" value of $y$ is, but not quite: to second order, you have to account for the expectations of the squares of the inputs, $E(x_i^2)$, not matching the squares of the expectations, $\hat{x}_i^2$. At least the maxim "plug in the best values of $x$ to get the best value of $y$" works perfectly to first order in $x$.
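To see this second-order effect in the simplest possible setting (a made-up one-variable illustration, not from the question): take $y = x^2$ for a single input $x$ with mean $\hat{x}$ and standard deviation $\Delta x$. Then
$$\hat{y} = E(x^2) = \hat{x}^2 + (\Delta x)^2,$$
so the best value of $y$ exceeds the naive guess $\hat{x}^2$ by exactly the variance of the input.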


We can square this result, yielding
$$\hat{y}^2 = f_0^2 + 2f_0\sum_{i=1}^n a_i \hat{x}_i + 2f_0\sum_{i=1}^n \sum_{j=i+1}^n b_{ij}\, \hat{x}_i \hat{x}_j + 2f_0\sum_{i=1}^n b_{ii}\, E(x_i^2) + \sum_{i=1}^n \sum_{j=1}^n a_i a_j\, \hat{x}_i \hat{x}_j + O(x^3).$$

Along similar lines, we have
$$E(y^2) = E\left(f_0^2 + 2f_0\sum_{i=1}^n a_i x_i + 2f_0\sum_{i=1}^n \sum_{j=i}^n b_{ij}\, x_i x_j + \sum_{i=1}^n \sum_{j=1}^n a_i a_j\, x_i x_j\right) + O(x^3)$$
$$= f_0^2 + 2f_0\sum_{i=1}^n a_i \hat{x}_i + 2f_0\sum_{i=1}^n \sum_{j=i+1}^n b_{ij}\, \hat{x}_i \hat{x}_j + 2f_0\sum_{i=1}^n b_{ii}\, E(x_i^2) + 2\sum_{i=1}^n \sum_{j=i+1}^n a_i a_j\, \hat{x}_i \hat{x}_j + \sum_{i=1}^n a_i^2\, E(x_i^2) + O(x^3).$$


Finally, we are in a position to compute the variance of $y$. This is simply
$$(\Delta y)^2 \equiv E(y^2) - \hat{y}^2 = \sum_{i=1}^n a_i^2\left(E(x_i^2) - \hat{x}_i^2\right) + O(x^3).$$
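Here $E(x_i^2) - \hat{x}_i^2$ is nothing other than the variance of $x_i$, since by definition
$$(\Delta x_i)^2 \equiv E\left((x_i - \hat{x}_i)^2\right) = E(x_i^2) - \hat{x}_i^2.$$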

In fact, this result can be written entirely in terms of the first derivatives, $a_i$, and the variances of the inputs, $(\Delta x_i)^2$:
$$(\Delta y)^2 \approx \sum_{i=1}^n \left(\frac{\partial f}{\partial x_i}\right)^2 (\Delta x_i)^2.$$
The standard deviation $\Delta y$, which is what you expect reported when you see $y = \hat{y} \pm \Delta y$, is simply the square root of this:
$$\Delta y \approx \left(\sum_{i=1}^n \left(\frac{\partial f}{\partial x_i}\right)^2 (\Delta x_i)^2\right)^{1/2}.$$
It is this formula that people have in mind when they say things like "we added the uncertainties in quadrature," since this result is very broadly applicable - it applies whenever we have a continuously differentiable function of independent inputs with known means and standard deviations.
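As a numerical sanity check, here is a minimal Monte Carlo sketch in Python (assuming NumPy; the function $f(x_1,x_2)=x_1x_2$ and the numbers are invented for illustration, not taken from the question) comparing the linear sum of uncertainties from the question, the quadrature formula above, and the spread of $y$ obtained by sampling independent Gaussian inputs:

import numpy as np

rng = np.random.default_rng(0)

# Invented example function and input statistics (not from the question)
x1_hat, dx1 = 3.0, 0.1   # mean and standard deviation of x1
x2_hat, dx2 = 5.0, 0.2   # mean and standard deviation of x2

def f(x1, x2):
    return x1 * x2

# Partial derivatives of f = x1*x2, evaluated at the means
df_dx1 = x2_hat
df_dx2 = x1_hat

# Linear sum of uncertainties (the rule quoted in the question)
dy_linear = df_dx1 * dx1 + df_dx2 * dx2

# Addition in quadrature (the result derived above)
dy_quad = np.sqrt((df_dx1 * dx1) ** 2 + (df_dx2 * dx2) ** 2)

# Monte Carlo estimate: sample independent Gaussian inputs and measure the spread of y
x1 = rng.normal(x1_hat, dx1, size=1_000_000)
x2 = rng.normal(x2_hat, dx2, size=1_000_000)
dy_mc = f(x1, x2).std()

print(f"linear sum : {dy_linear:.3f}")   # 1.100
print(f"quadrature : {dy_quad:.3f}")     # about 0.781
print(f"Monte Carlo: {dy_mc:.3f}")       # close to the quadrature value

With these invented numbers the quadrature formula gives about $0.78$ while the linear sum gives $1.1$; the sampled spread should land close to the former, which is the point of the derivation above.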

