
Formal notions of equivalence between floating point expressions

Problem Detail: 

In an informal context, most programmers might call these two expressions "equal" and no one would be misled:

(a + b) + c and a + (b + c)

For floating point numbers, we know that these two expressions are not strictly equivalent (in the sense of evaluating to equal values for every choice of a, b, and c), because floating point addition is not associative. Nonetheless, the expressions seem similar in interesting and useful ways: for many inputs they produce equal or nearly equal outputs, their real number counterparts are equivalent, and in many programming contexts they can be swapped without concern.
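For instance, a minimal Python sketch (the values are my own, chosen to trigger rounding) shows both the failure of strict equality and how small the discrepancy typically is:

```python
# IEEE 754 doubles: the two groupings of the same sum disagree,
# but only in the last bit or so.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6

print(left == right)      # False: not strictly equivalent
print(abs(left - right))  # ~1.1e-16: the difference is tiny
```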

Are there notions of equivalence between floating point expressions that describe this sort of similarity?

Stating simply that the real number counterparts of these expressions are equivalent is a bit too literal to be a useful answer.

Asked By : Praxeolitic
Answered By : D.W.

One plausible notion of "nearly-equivalent" for floating-point expressions is that the relative error between the two is guaranteed small.

For example, if $x\ne 0$ and $y\ne 0$, define

$$D(x,y) = \max(\left|{x \over y} - 1\right|, \left|{y \over x} - 1\right|).$$

This measures the relative error between two real numbers. Now, say that $x$ is nearly equivalent to $y$ if $D(x,y) \le \varepsilon$ (for some $\varepsilon$ fixed in advance; e.g., $\varepsilon = 10^{-6}$ or something).
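Translated directly into Python (a sketch; the function names are my own, not from the answer):

```python
def D(x, y):
    """Relative error between nonzero x and y, per the definition above."""
    return max(abs(x / y - 1), abs(y / x - 1))

def nearly_equivalent(x, y, eps=1e-6):
    """x is nearly equivalent to y if D(x, y) <= eps."""
    return D(x, y) <= eps

print(D(0.6000000000000001, 0.6))                  # ~1.9e-16
print(nearly_equivalent(0.6000000000000001, 0.6))  # True
```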

Now, we can define two floating-point expressions $e$, $e'$ to be nearly-equivalent if, for all possible assignments to the variables of these expressions, we have $D(e,e') \le \varepsilon$.

This implies that replacing $e$ with $e'$ will have only a small effect on the value of the expression (with "small" measured using relative error, rather than absolute error).
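The definition quantifies over all assignments, so it is not directly checkable by running code, but one can at least spot-check a pair of expressions by sampling. This randomized checker is my own sketch, not part of the answer:

```python
import random

def D(x, y):  # relative error, as defined above
    return max(abs(x / y - 1), abs(y / x - 1))

def check_nearly_equivalent(e1, e2, sample, eps=1e-6, trials=100_000):
    """Randomized spot-check that D(e1(args), e2(args)) <= eps over
    sampled assignments. Passing all trials is evidence, not a proof."""
    for _ in range(trials):
        args = sample()
        if D(e1(*args), e2(*args)) > eps:
            return False, args  # counterexample found
    return True, None

# Positive inputs bounded away from zero (the easy case discussed next).
sample = lambda: [random.uniform(1.0, 1e6) for _ in range(3)]
ok, _ = check_nearly_equivalent(lambda a, b, c: (a + b) + c,
                                lambda a, b, c: a + (b + c),
                                sample)
print(ok)  # True for IEEE doubles at this eps and input range
```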

With these definitions, if you know that a, b, and c are positive and bounded away from zero, you should be able to prove that (a+b)+c and a+(b+c) are nearly-equivalent, for very small values of $\varepsilon$ (dependent on machine epsilon and the specific floating-point format used). However, you can't prove them nearly-equivalent when nothing is known about a, b, c, because it is possible that (a+b)+c = 0 while a+(b+c) > 0, making the relative error infinite.
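Concretely (the numeric values are my own illustration):

```python
# 1e-30 is far below double-precision machine epsilon relative to 1,
# so it vanishes when added to 1 but survives when added to 0.
a, b, c = 1e-30, 1.0, -1.0

print((a + b) + c)  # 0.0    (a + b rounds to exactly 1.0)
print(a + (b + c))  # 1e-30  (b + c is exactly 0.0)
# D(0.0, 1e-30) divides by zero: the relative error is infinite.
```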

Perhaps a more nuanced definition could allow them to be treated as nearly-equivalent in the general case; I don't know. For instance, we could alternatively imagine defining $D(x,y)$ to be the smallest value $\delta>0$ such that $$\begin{align*} (1-\delta) x - \delta &\le y \le (1+\delta) x + \delta\\ (1-\delta) y - \delta &\le x \le (1+\delta) y + \delta \end{align*}$$ (This effectively captures a combination of relative error and absolute error.) I suspect it might be possible to prove that (a+b)+c and a+(b+c) are nearly-equivalent under this distance metric. Is this useful? I don't know.
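For nonnegative inputs (an assumption I am adding; it makes the feasibility predicate monotone in $\delta$, so bisection applies), this smallest $\delta$ can be computed numerically:

```python
def satisfies(x, y, delta):
    """Check both sandwich inequalities from the definition."""
    return ((1 - delta) * x - delta <= y <= (1 + delta) * x + delta and
            (1 - delta) * y - delta <= x <= (1 + delta) * y + delta)

def D_combined(x, y, hi=1e9, iters=200):
    """Smallest delta satisfying the inequalities, found by bisection.
    Assumes x, y >= 0; hi is an arbitrary cap chosen for this sketch."""
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if satisfies(x, y, mid):
            hi = mid
        else:
            lo = mid
    return hi

# Unlike the pure relative-error metric, this stays finite at zero:
print(D_combined(0.0, 1e-30))  # ~1e-30
```

In particular, $D(0, y)$ is finite under this metric, which is exactly what the (a+b)+c = 0 case above needed.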

Question Source : http://cs.stackexchange.com/questions/61177
