Quote:
For example:
http://developer.java.sun.com/developer/bugParade/bugs/4857011.html
" Example in there: The value of sine for the floating-point number Math.PI is around
1.2246467991473532E-16
while the computed value for the algorithm used in java 1.3.1 is
1.2246063538223773E-16
In other words, the returned result is only accurate to about 5 digits instead of the full 15-17 digit precision of double. Instead of a 1/2 ulp or 1 ulp error, the error is about 1.64e11 ulps, over *ten billion* ulps."
We have sin(pi) = 1.2246063538223773E-16 in JRE 1.3.1, and sin(pi) = 1.2246467991473532E-16 in JRE 1.4.
We know that the exact value of sin(pi) is zero.
So, is the value we had in JRE 1.3.1 for sin(pi) better than the one in JRE 1.4???
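To make the comparison concrete, here is a minimal sketch that prints Math.sin(Math.PI) and measures how far it is from the value quoted in the bug report, expressed in ulps as the report does. It assumes a current JRE (Math.ulp only exists since Java 5, so it would not compile on 1.3.1 or 1.4), and the reference constant 1.2246467991473532E-16 is simply the number taken from the quote above.

public class SinPiCheck {
    public static void main(String[] args) {
        // Math.PI is the double closest to pi, not pi itself, so the
        // mathematically correct value of sin(Math.PI) is a tiny nonzero number.
        double computed = Math.sin(Math.PI);

        // Reference value for sin(Math.PI) quoted in the bug report above.
        double reference = 1.2246467991473532E-16;

        System.out.println("Math.sin(Math.PI) = " + computed);
        System.out.println("reference value   = " + reference);

        // Express the difference in ulps of the reference value
        // (Math.ulp is available since Java 5).
        double errorInUlps = Math.abs(computed - reference) / Math.ulp(reference);
        System.out.println("error in ulps     = " + errorInUlps);
    }
}

On a recent JRE this prints an error of 0 ulps; on the 1.3.1 algorithm described in the bug report the corresponding error would be the roughly 1.64e11 ulps mentioned in the quote.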