Recent Progress of the Java Grande
Numerics Working Group

June 7, 1999


This report was prepared by Ronald F. Boisvert of NIST (boisvert@nist.gov) with input from many JGNWG members, including Tim Lindholm, Jose Moreira, Cleve Moler, Michael Philippsen, Marc Snir, and Roldan Pozo. Certain commercial products are mentioned in this report in order to document JGNWG projects. Identification of such products does not imply recommendation or endorsement by the National Institute of Standards and Technology.

1. Introduction

If Java™ is to become the environment of choice for high-performance scientific applications, then it must provide performance comparable to what is achieved in currently used programming languages (C or Fortran). In addition, it must have language features and core libraries that enable the convenient expression of mathematical algorithms. The goal of the Numerics Working Group (JGNWG) of the Java Grande Forum (JGF) is to assess the suitability of Java for numerical computation, and to work towards community consensus on actions which can be taken to overcome deficiencies of the language and its run-time environment.

For detailed information about the work of the Java Grande Forum, see its Web pages at http://www.javagrande.org/. The work of the JGNWG is described in the Java Numerics Web pages maintained at NIST, http://math.nist.gov/javanumerics/.

This report constitutes an update on JGNWG activities since the JGF's first detailed report, Making Java Work for High-End Computing, was issued in November 1998. That report is available in PDF form at http://www.javagrande.org/sc98/sc98grande.pdf; the sections relating to the JGNWG can also be viewed in HTML form at http://math.nist.gov/javanumerics/reports/jgfnwg-01.html. We assume that the reader is familiar with its contents.


2. Recent Accomplishments

2.1 Floating-point Semantics in Java 2

The report of the Java Grande Forum was influential in preventing the adoption of the Proposal for Extension of Java Floating-Point Semantics (Revision 1) released by Sun in May 1998. Instead, for Java 1.2 Sun adopted the key ideas of the Java Grande counterproposal. The principal intent of these proposals was to improve the performance of Java floating-point on processors like the Intel Pentium (i.e. those with the x86 architecture), whose registers operate using IEEE 754's 80-bit double-extended format. The original Java floating-point semantics led to a 2- to 10-fold performance slowdown when implemented on such processors.

The JGNWG counterproposal suggested the adoption of two floating-point modes, strictfp and default. (The widefp keyword of the original Sun proposal was eliminated.) strictfp mode, which applies to classes or methods carrying the strictfp keyword, corresponds to the original (Java 1) floating-point semantics. The goal in this case is to provide bit-for-bit reproducibility of Java class files across JVMs. Default mode relaxes these requirements slightly. In default mode, anonymous doubles may be represented using 15-bit exponents (rather than the standard 11-bit exponents). The x86 has a control bit which causes the fractional part of floating-point results to be truncated to 52 bits (double precision) after each operation without any performance degradation; the floating-point exponent, however, remains at 15 bits. Admitting 15-bit exponents avoids the artificial store/load to memory otherwise needed to narrow the exponent. This minor change also leads to floating-point results that differ only rarely among processors. Differences in observed results between the x86 and SPARC processors, for example, would occur only in cases where the SPARC would overflow and the x86 would not. A similar design is used for floats.
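As a concrete illustration, the sketch below shows how the two modes are selected in source code; the numerical behavior described above is supplied by the platform, not by anything in the sketch itself.

     // Methods (or classes) marked strictfp get the original, bit-for-bit
     // reproducible Java 1 semantics; unannotated code runs in default
     // mode, where anonymous values may carry 15-bit exponents on the x86.
     public class DotProduct {
         static strictfp double dotStrict(double[] a, double[] b) {
             double s = 0.0;
             for (int i = 0; i < a.length; i++) s += a[i] * b[i];
             return s;   // identical results on every JVM
         }
         static double dotDefault(double[] a, double[] b) {
             double s = 0.0;
             for (int i = 0; i < a.length; i++) s += a[i] * b[i];
             return s;   // intermediate values may avoid overflow on the x86
         }
     }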

The JGNWG requirement that extended exponents be used consistently (i.e., either for all anonymous variables or for none) was not adopted for Java 2. While Sun believes that the improved predictability of such a requirement would be beneficial, it would require adding new typed pop/swap/dup JVM instructions to manipulate such numbers on the stack. As such, it was deemed too significant a change to be considered at this time.

Similarly, JGNWG proposals to admit the use of floating-point multiply-accumulate (FMA) instructions and the associative law in optimizing expressions were not adopted. It was felt that more study was needed to understand their effects more clearly before they could be considered. The JGNWG has been considering how to frame a proposal for the use of FMAs (see Section 3.1).

2.2 Endorsement by IFIP Working Group 2.5

The work of the JGNWG has been endorsed by the International Federation for Information Processing Working Group 2.5 (WG2.5). WG2.5, whose charter is to improve the quality of numerical computation by promoting the development and availability of sound numerical software, reports to the IFIP Technical Committee on Programming Languages (TC2). At its May 1999 meeting at Purdue University, WG2.5 passed the following resolution:

WG2.5 supports the efforts of the Numerics Working Group (NWG) of the Java Grande Forum to represent the interests of the numerical software community in the evolution of the Java language and its environment. WG2.5 intends to trace NWG activities and continue to provide input to the recommendations of the NWG.

2.3 Working Group Meetings

The last open working meeting of the JGNWG was held in August 1998 in Palo Alto, CA. In March 1999 a group of JGNWG members visited Sun in Menlo Park, CA to present the report of the JGNWG to key Java developers. Representing the working group were Ron Boisvert (NIST), John Brophy (Visual Numerics), Shah Datardina (NAG), Jack Dongarra (University of Tennessee at Knoxville and Oak Ridge National Labs), Geoffrey Fox (Syracuse University), Siamak Hassanzadeh (Sun), Cleve Moler (The Mathworks), Jose Moreira (IBM), and Roldan Pozo (NIST). Representing Sun were James Gosling, Tim Lindholm, Joshua Bloch, Henry Sowizral, Thomas Arkwright, David Hough, and Greg Tarsey.

At the March meeting, the JGNWG expressed its appreciation to Sun for taking the time to exchange information on progress and plans for improving Java's usability for numerical computations common to most Grande applications. Tim Lindholm said that the JGNWG was providing valuable input to Sun and that its work had already had a significant effect on Java. Sun realizes that although scientific and engineering computation is not Java's primary market, it represents an important constituency that they would like to accommodate. Sun is happy to cooperate with the Java Grande Forum and will seek its advice on matters relating to Java Numerics. They appreciate the Forum's willingness to find compromise solutions that preserve Java's overall design and philosophy. (See the note from Tim Lindholm in Appendix 1.) At this meeting, Sun indicated that it would reassign a staff member to spend full time working on Java Numerics issues with the Java inner circle.

A half-day open meeting of the JGNWG is scheduled in conjunction with the ACM 1999 Java Grande Conference to be held in San Francisco on June 12-14.

2.4 Other Events

The work of the JGNWG was described at the following events.


3. Status of Proposals

3.1 Use of Floating Multiply-Accumulate (FMA)

Sun appears willing to consider allowing use of the hardware fused multiply-accumulate (FMA) instruction under carefully controlled circumstances. The FMA operation, y = y + a*x, often appears in the inner loops of dense matrix computations. As a result, significant speedups can occur when hardware FMAs are used. Processors with FMAs include the PowerPC and the Merced. IBM has analyzed the impact of using the FMA operation in several Java benchmarks. For matrix multiply (real numbers), performance can be improved from 100.9 Mflops to 209.8 Mflops through the use of FMAs. For BSOM (a neural-network data mining kernel) the improvement is from 94.5 Mflops to 120.5 Mflops. For Cholesky factorization, the improvement is from 83.4 to 129.9 Mflops. These FMAs were all obtained from explicit a*b+c operations. The Fortran numbers for MATMUL (matrix multiply), BSOM, and Cholesky are 244.6, 28.0, and 134.0, respectively. (All experiments were performed on an RS/6000 model 590.)

This topic was discussed extensively at the March meeting in Menlo Park. Marc Snir of IBM has developed a proposal to modify the semantics of Java floating-point in order to admit FMAs. It is included as Appendix 2.

Another way to provide access to FMAs is to provide explicit methods that represent them. Two separate FMA methods could be added to Java: java.lang.Math.fma() and java.lang.StrictMath.fma(). fma(y,a,x) would return a*x+y in each case. Its behavior would be different depending on the package.

StrictMath

Forced FMA. Requires use of an extended-precision FMA, even if it must be simulated in software. That is, y, a, and x are each converted to the IEEE extended format (double-extended or single-extended, as appropriate), the computation occurs in extended format, and the result is rounded to double or float as appropriate.

Math

Opportunistic FMA. The FMA should be used if it can be done fast, e.g., as a single hardware instruction; otherwise the expression a*x+y should be computed using the usual Java floating-point semantics.

These methods should not be incorporated in place of the proposed extensions above, however. Rather, they should be included to provide an additional means of fine control over FMA operations.
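A sketch of how the proposed methods might be used in an inner loop, assuming the fma(y,a,x) = a*x+y signature described above; neither method existed in the core libraries at the time, so this is purely illustrative.

     // Hypothetical use of the proposed methods in a dot product.
     double y = 0.0;
     for (int i = 0; i < n; i++) {
         // Forced FMA: extended-precision multiply-add, even if simulated.
         y = StrictMath.fma(y, a[i], x[i]);
         // Opportunistic FMA: fused only where the hardware makes it fast.
         // y = Math.fma(y, a[i], x[i]);
     }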

3.2 Math Library Specification

The original specification for java.lang.Math says, roughly, to use the algorithms of fdlibm (Sun's freely distributable libm, available on netlib) translated straightforwardly to Java. It is not clear who, if anyone, does this. In practice, Java programs that reference elementary functions do not produce the same results on all Java platforms. For the Java 2 platform, Sun is distributing java.lang.Math as Java wrappers around the C version of fdlibm. This should encourage more uniform results from the Java elementary functions.

The JGNWG has had extensive discussions regarding the proper specification of the elementary functions in Java. Cleve Moler has studied the various issues; his report is included as Appendix 3.

Two recent contributions point to new possibilities. John Brophy of Visual Numerics announced the release of Jmath, a complete translation of Sun's fdlibm into Java. It is posted at http://www.vni.com/corner/garage/grande/. The use of such a library would ensure reproducible results.

Abraham Ziv and colleagues at the IBM Haifa Labs recently released a suite of correctly rounded math functions for IEEE arithmetic in C [Ziv]. (Only binaries are released, for Solaris, NT, and AIX; see http://www.alphaWorks.ibm.com/tech/mathlibrary4java/.) They have recommended its use for Java. Providing correctly rounded results in all cases is, of course, the ideal specification for the elementary functions. Of high concern to Sun are the speed of the library and its footprint (the size of the distributed binary). Users will not tolerate a significant degradation in speed, and Sun does not want to significantly increase the size of the Java core. Ziv claims that the Haifa codes are usually significantly faster than fdlibm. The size of the binaries that would be included with Java is hard to judge. The compressed AIX binary is 185 KB. The codes include large tables, but their sizes can be adjusted. (The source form of the atan table is 250 KB, for example, although it is shared with atan2.) Further analysis is needed to assess the size in a Java distribution. In addition, versions for other platforms would have to be developed. An alternative would be to provide a version of the Haifa library coded in Java.

The JGNWG also sees merit in relaxing the strict specification for the elementary functions in default mode to admit use of hardware functions such as FSIN and FSQRT on the Pentium.

The following are possible specifications for the Java elementary functions.

strictfp mode: produce the correctly rounded result.

Pros

This is the ideal definition.

Cons

Pure Java code for this does not yet exist. Existing algorithms may be too large for inclusion in the Java core.

strictfp mode: mandate use of VNI's pure-Java translation of fdlibm (Jmath).

Pros

This code is available now. It is reasonably fast and has a small footprint. It produces results that are usually within one unit in the last place of the correctly rounded result.

Cons

An operational definition like this would mandate incorrect results. Incorporating future improvements in the algorithms yielding better results would change the behavior of existing Java class files.

default mode: specify the largest acceptable relative error in the result for each function.

Pros

This allows flexibility of implementation. It allows (guarded) use of specialized hardware instructions for square root, sine, cosine, etc., yielding greatly improved performance. It allows improvements in algorithms producing more accurate results to be trivially accommodated.

Cons

Verifying conformance to a tight bound (e.g., 1 ulp) is quite difficult.

3.3 Operator Overloading and Lightweight Objects

Sun clearly sees the need for these extensions for the numerics community. The performance and usability of core math capabilities such as complex arithmetic, interval arithmetic, multidimensional arrays, and others hinge on these capabilities. IBM has done experiments on supporting lightweight objects. Representing complex numbers as full-fledged objects results in a performance of 1.1 Mflops for matrix-multiply and 1.7 Mflops for a CFD kernel (two-dimensional convolution). Using lightweight objects and extensive compiler optimization raises those numbers to 89.5 Mflops (MATMUL) and 60.5 Mflops (CFD). Fortran numbers are 136.4 and 66.5 Mflops, respectively. (All experiments performed on an RS/6000 model 590.)
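For reference, the class-based style measured above looks roughly like the sketch below (the Complex class is illustrative, not a standard API); every arithmetic step allocates a new object, which is the overhead that lightweight objects are meant to remove.

     final class Complex {
         final double re, im;
         Complex(double re, double im) { this.re = re; this.im = im; }
         Complex plus(Complex b)  { return new Complex(re + b.re, im + b.im); }
         Complex times(Complex b) {
             return new Complex(re * b.re - im * b.im, re * b.im + im * b.re);
         }
     }

     // Inner loop of a complex matrix multiply: two allocations per step.
     Complex sum = new Complex(0.0, 0.0);
     for (int k = 0; k < n; k++) {
         sum = sum.plus(a[i][k].times(b[k][j]));
     }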

Sun has little experience in the community process for making changes to the Java language itself. As a result, Gosling suggested that Sun should take the lead in developing proposals for operator overloading and lightweight classes. Sun agreed to reassign a staff member to work with Gosling and colleagues to develop such a proposal. The JGF would help in the proposal process, providing justification of the need, providing comments, etc. It was suggested that finding allies in other user communities might help.

Some worries were expressed about the reaction of "purists" to a proposal for operator overloading. Ideally, one would like to see mechanisms that led to only reasonable usage. (Examples: use descriptive names for overloaded methods; require implementation of an arithmetic interface.) It does not seem possible to stop the truly perverse, however.

Lightweight objects are more problematical. There are many ways to provide such a facility, and these would have to be fleshed out. Lightweight objects can be first introduced in the Java language with little or no change to the VM. However, extensive changes to the VM may be necessary to actually deliver performance benefits. (A major problem is how to return lightweight objects from a method.)

The timing for such proposals was discussed. It is unlikely that major language changes will be on the table for the next few releases, so this is viewed as a longer-term project. It was also agreed that the proposals for operator overloading and lightweight classes should be unbundled in order to maximize the chances for success with "the process".


4. Related Activities

4.1 Relationship with vecmath

javax.vecmath is a package that provides a variety of low-level mathematical utilities of use in computer graphics. Examples include multiplication of 2x2, 3x3 and 4x4 matrices. Also included is a class GMatrix that implements operations on general nxn matrices. Methods for computing LU, Cholesky and singular value decompositions are included.

Some preliminary testing by JGNWG members indicates that vecmath may not be completely debugged. On computing the singular value decomposition (SVD) of a 3x3 matrix of zeros, vecmath returned left and right singular vectors consisting entirely of NaNs. The matrix of singular values was unchanged from its value on input. In another test, the SVD of the rank-2 matrix with rows (1 2 3), (4 5 6), (7 8 9) was computed. The computed singular vectors were correct, but the matrix of singular values was again unchanged from its input state, and the rank was reported incorrectly as 3.
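The rank-2 test can be reproduced with a few lines of code; the sketch below assumes the GMatrix constructors and the SVD(U, W, V) method (returning the rank) as documented in javax.vecmath, and should be checked against the actual API.

     import javax.vecmath.GMatrix;

     public class SvdCheck {
         public static void main(String[] args) {
             GMatrix m = new GMatrix(3, 3,
                     new double[] {1, 2, 3,  4, 5, 6,  7, 8, 9});
             GMatrix u = new GMatrix(3, 3);
             GMatrix w = new GMatrix(3, 3);
             GMatrix v = new GMatrix(3, 3);
             int rank = m.SVD(u, w, v);
             // Observed behavior reported above: the rank comes back as 3
             // (it should be 2) and w is returned unchanged rather than
             // holding the singular values.
             System.out.println("rank = " + rank + "\n" + w);
         }
     }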

In discussions with Henry Sowizral of Sun, it was agreed that the numerical analysis community should participate more actively in the development of APIs for linear algebra. One option would be for GMatrix to be deprecated in vecmath and for such matrix operations to be provided instead in a separate class for numerical linear algebra, which could be shepherded by the JGNWG. There will be a "call for experts" soon to consider extensions to vecmath. The JGNWG agreed to provide representatives to this team to help work out details for future development of this package.

4.2 SciMark 2 Benchmark

NIST recently released Version 2 of its SciMark benchmark for Java. SciMark is a composite Java benchmark measuring the performance of numerical kernels occurring in scientific and engineering applications. It consists of five kernels that typify computations commonly found in numeric codes: FFT, Gauss-Seidel relaxation, sparse matrix-multiply, Monte Carlo integration, and dense LU factorization.

These kernels are chosen to provide an indication of how well the underlying JVM/JITs perform on applications utilizing these types of algorithms. The problem sizes are purposely chosen to be small in order to isolate the effects of memory hierarchy and focus on internal JVM/JIT and CPU issues. Results in SciMark 2 are recorded in megaflops for ease of interpretation (although this makes them incompatible with SciMark 1). A larger version of the benchmark (SciMark 2.0 LARGE) addresses performance of the memory subsystem with out-of-cache problem sizes. The source of the SciMark benchmark is available, and implementations in C and Fortran will be made available to allow comparisons with those languages.
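To give the flavor of the kernels (this is not the SciMark source, just an illustrative fragment in the same spirit), a Monte Carlo integration estimates pi by sampling the unit square:

     java.util.Random rng = new java.util.Random(113);   // any fixed seed
     int samples = 1000000;
     int inside = 0;
     for (int i = 0; i < samples; i++) {
         double x = rng.nextDouble();
         double y = rng.nextDouble();
         if (x * x + y * y <= 1.0) inside++;     // point falls in the quarter circle
     }
     double piEstimate = 4.0 * (double) inside / samples;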

The benchmark may be downloaded as an applet and run in any Java environment. A table of submitted benchmark data is maintained on the SciMark Web page at http://math.nist.gov/scimark.

4.3 Java Preprocessor Admitting Complex Type

The JavaParty group at the University of Karlsruhe, Germany, has developed a preprocessor that adds a new basic type complex to Java and maps the resulting code back to pure Java.

Compared to code that uses complex objects and method invocations to express arithmetic operations, the new basic type increases readability and also executes faster. On average, the versions of the group's benchmark programs that use the basic type outperform the class-based versions by a factor of 2 to 21 (depending on the JVM used).
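The readability gain is easy to see on a single update; the class-based line below is ordinary Java, while the basic-type lines are schematic preprocessor input (the exact syntax is defined by the JavaParty tool):

     // class-based: every operation is a method call on an object
     c = c.plus(a.times(b));

     // with a basic complex type (preprocessor input, schematic):
     //     complex a, b, c;
     //     c = c + a * b;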

More information can be found at http://wwwipd.ira.uka.de/JavaParty/.


Appendix 1

Subject: Feedback for the Numerics Working Group

Date: Fri, 12 Mar 1999 15:59:40 -0800 (PST)
From: Tim Lindholm <Timothy.Lindholm@Eng.Sun.COM>
To: boisvert@cam.nist.gov

Hi Ron,

The following gives my perspective on the Java Grande Numerics Working Group and its relationship and relevance to Sun in Sun's role as the steward of Java development.

As you know, I'm a Distinguished Engineer at Sun, one of the members of the original Java project, the author of The Java Virtual Machine Specification, and currently one of the architects of the Java 2 platform. I work closely with the other architects of the Java technologies, such as James Gosling, and while I can't speak for them in detail I can say that the opinions I express below are not out of line with theirs.

During the development of the Java 2 platform version 1.2 (formerly known as JDK 1.2), I was handed responsibility for creating licensee and industry consensus around changes to Java's primitive floating-point arithmetic targeted at improving performance on Intel CPUs. This was an extremely difficult task because it required careful attention to the balance between many factors, and was being done in an extremely charged political environment. We who were responsible did not have a broad background in numerics, and had not been successful in finding help within Sun. We understood that there was high risk: Java had taken a rather different approach to floating point than many other languages, and the wrong decision in 1.2 could throw away many of the advantages of that approach.

Our best attempts led to a public proposal that we considered a bad compromise and were not happy with, but were resigned to. At this point the Numerics Working Group wrote a counterproposal to Sun's public proposal that gave new technical insight on how to balance the demands of performance on Intel without throwing away the important part of reproducibility. The counterproposal was both very sensitive to the spirit of Java and satisfactory as a solution for the performance problem. When we saw the new proposal we revived efforts to reach a better answer.

We were subsequently aided by email and phone calls with a number of members on the Numerics Working Group (you, Roldan, and Cleve). Joe Darcy, one of the authors of the counterproposal, helped us to understand the proposal generally, and specifically to evaluate the effects of modifications we required of the counterproposal. All of you helped us understand how the Numerics Working Group represented a number of rather different fields with different perspectives on numerics. This helped us gain confidence that the new solution reflected a consensus that would satisfy a broad range of Java users.

We are sure that we ended up with a better answer for 1.2, and arrived at it through more complete consideration of real issues, because of the efforts of the Numerics Working Group. We have internally resolved to consult with the Group on future numeric issues, and to do so earlier in the process. Our attendance at the 3/11 Numerics Working Group meeting at Sun and all the work we accomplished there is evidence of this resolution. We think that the Group continues to show great sensitivity to the needs of the Java platform and the difficulty of introducing change while preserving compatibility and robustness. We look forward to continuing this relationship into the future.

-- Tim


Appendix 2: Proposal for Use of FMAs in Java (Draft)

Marc Snir
IBM T.J. Watson Research Center
snir@us.ibm.com

The current JVM 1.2 spec supports two double formats: the IEEE format, with a 53-bit mantissa and an 11-bit exponent, and a default format that has exactly the same mantissa but may have a larger exponent. Internal values (not visible to the user) can be held in the default format and converted to the IEEE format when assigned to program variables. If a method is declared strictfp, then the use of large exponents is disallowed. A similar design is used for floats. This design enables Intel to take (partial) advantage of its 80-bit floating-point registers, using the mode where only 53-bit mantissas are used in floating-point operations. It does not allow the PowerPC to take advantage of its fused multiply-add, and it does not allow the use of full-sized mantissas in the 80-bit Intel floating-point registers.

The JGNWG has discussed several proposals to remediate this problem. To be adopted, a proposal should

  1. require limited implementation effort, and negligible implementation effort on platforms that are not affected by current Java floating-point restrictions,
  2. introduce only limited platform dependence (similar to the platform dependence due to the default/strictfp design),
  3. not worsen accuracy, and
  4. improve the performance of codes without requiring significant, platform-dependent code rewrites.

The proposal outlined below seems to achieve these four goals. An "extended" floating-point mode is added, in which larger mantissas are allowed in the same circumstances where larger exponents are allowed in JVM 1.2: internal values can be stored with more digits of precision than required by the IEEE format, but must be converted back to IEEE format when assigned to program variables. This solution would allow (i) the use of fused multiply-adds, where the intermediate result of the multiplication is not rounded, (ii) the promotion of floats to doubles on machines where double arithmetic is faster, and (iii) the use of full 80-bit precision on Intel machines, at least for internal values.

We can have two versions of this design. V1: the use of the "extended" format would be allowed only for methods (and classes) explicitly tagged with the keyword extendedfp in the source. The class file would contain an extendedfp flag. Otherwise, the rules of JVM 1.2 apply. We would then have two floating-point modes, strictfp and extendedfp, in addition to the default mode. V2: the "extended" format can be used in default mode, with no new keyword.

Note that the user can always prohibit the use of the extended format by explicitly naming each internal value (replace x=a*b+c with x1=a*b; x=x1+c), by using strictfp, or, if V1 is adopted, simply by avoiding the extendedfp attribute. Note, too, that the implementation effort is minimal for implementations that do not take advantage of the extended format: Java compilers need only generate a new flag, which Java interpreters and JITs may ignore.
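In source terms, the difference looks like this; the first form is eligible for fusing under the proposal, while the second forces the product to be rounded to an IEEE double before the addition:

     // May be compiled to a fused multiply-add in extended mode:
     x = a * b + c;

     // Never fused: the product is a named value, so it must be
     // rounded back to IEEE double before the addition.
     double t = a * b;
     x = t + c;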

A downside of this design is that Sun is unlikely to add new keywords and new floating-point modes. Thus, either (i) we agree upfront on what extendedfp entails (does it allow the use of associativity, as proposed with associativefp?), or (ii) we agree that extendedfp may add new relaxations over time, or (iii) we agree that none will be added. (i) is unlikely, given the short time available.


Appendix 3: The Java Elementary Functions

Cleve Moler
The MathWorks
moler@mathworks.com

An important design goal of Java has been machine independence. A single program should run on a wide range of computers, and get the same result everywhere. But, as Java matures and becomes widely used for a broad range of applications, the difficulty of achieving this goal in practice is becoming apparent.

The machine independence goal conflicts with execution speed. Keeping intermediate quantities in extended precision floating point registers increases speed, but may produce different results and lead to different exceptions on different machines.

Tradeoffs between machine independence and speed become particularly important in the design of a library for the elementary math functions, square root, exponential, logarithm, power, and a half dozen trig functions.

These fundamental tradeoff considerations are confounded in at least one currently available Java math library by serious design flaws. The exponential, sine and cosine functions in Microsoft's math library are so inaccurate for large arguments that the library is unsuitable for general-purpose use.

FDLIBM

The documentation of java.lang.Math in the Java platform 1.2 Beta 4 API specification says:

To help ensure portability of Java programs, the definitions of many of the numeric functions in this package require that they produce the same results as certain published algorithms. These algorithms are available from the well-known network library netlib as the package "Freely Distributable Math Library" (FDLIBM). These algorithms, which are written in the C programming language, are then to be understood as executed with all floating-point operations following the rules of Java floating-point arithmetic.

However, very few people are paying any attention to this specification. There are two widely used Java environments for Microsoft operating systems on Intel PCs, one from Microsoft and one from Sun. Neither one of them currently uses FDLIBM; both use the "libm" from Visual C++. As a consequence, Java numerical results on the PC may not be the same as those obtained on a Sun SPARC, and, worse yet, may be inaccurate or completely wrong.

FDLIBM deserves to be better known and more widely used. It is a very valuable asset. Authored by K. C. Ng, David Hough and possibly others at Sun, it is essentially a publicly available version of Sun's elementary function library. See http://www.netlib.org/fdlibm.

(At the MathWorks, we use FDLIBM as the underlying library for MATLAB on all platforms. We did this primarily because we got tired of working around the bugs, poor algorithms and inconsistent handling of exceptional cases that we found in the libm's provided by other hardware and operating systems vendors.)

FDLIBM has been translated to Java by John Brophy of Visual Numerics in Houston. This is useful in situations where the native code generated by the C version of FDLIBM is inappropriate. Brophy's source code is publicly available, http://www.vni.com/corner/garage/grande/index.html.

An Idealized Model

Let F(x) be any of the elementary math functions under consideration. How can we assess the accuracy of a routine that computes F(x)? One possibility is to regard any floating point input argument, x, as a precisely known real number, call upon some oracle to evaluate F(x) exactly, and then round the result to the nearest floating point number. There is only one rounding error in this entire theoretical computation. This is what we mean by "the correctly rounded result" or "getting the right answer".

This ultimate accuracy may not be justified by itself, but the implied consequences are very important. If a particular routine is known to fit this model, then it will always get the same results on different machines. If a particular function F(x) has monotonicity properties, such as x <= y implies exp(x) <= exp(y), or range properties, such as -1 <= cos(x) <= 1, then the computed function will inherit these properties.

Several years ago, a widely used Unix math library had cos(0) > 1. The value was an accurate approximation to the exact value; it differed from 1 by only one bit in the last place. But it violated a fundamental property of the cosine function. This kind of violation is not possible when computations produce correctly rounded results.

It is often argued that the input x is not known exactly, so there is no reason to evaluate F(x) exactly. This same argument could be (and actually has been) used to justify sloppy rounding in basic arithmetic operations like addition and multiplication. Again, the goal is not the accuracy itself, but the desirable mathematical consequences that correct rounding brings.

Whether or not this idealized model is justified in any particular context can be discussed in that context. Whether or not such a model can be achieved with reasonable cost in time and program size is a key question in the current discussion.

Ulps

Ulps stands for "units in the last place". This is a strong measure of error in which the error in y is expressed relative to the spacing of the floating-point numbers near y. For all double precision numbers between 1 and 2, an ulp is 2^-52. For numbers between 2^k and 2^(k+1), an ulp is 2^(k-52).

The distance from any real number to the nearest floating-point number is less than or equal to half the spacing between the nearby floating-point numbers. So the correctly rounded results in our idealized model have errors <= 0.5 ulps. Documentation for various elementary function libraries, including FDLIBM, often says that the error is less than one ulp. That's good, but not perfect. Half an ulp is the ultimate goal.
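A small sketch of how an error can be expressed in ulps. The spacing is derived from the bit pattern because the core libraries of this period had no ulp method; the helper below is valid for positive, finite arguments away from power-of-two boundaries.

     // Spacing of the doubles near y: the next representable double minus y.
     static double ulp(double y) {
         long bits = Double.doubleToLongBits(y);
         return Double.longBitsToDouble(bits + 1) - y;
     }

     // Error of a computed value, measured in ulps of the reference value.
     static double ulpError(double computed, double reference) {
         return Math.abs(computed - reference) / ulp(Math.abs(reference));
     }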

Square Root

The documentation for FDLIBM itself says:

     *   ------------------------------------------
     *   |  Use the hardware sqrt if you have one |
     *   ------------------------------------------

It may be true that the FDLIBM software square root and various hardware square roots all produce the same results. But if that's not the case, what should Java use? For square root the choice is clear: the hardware square root should produce the correctly rounded answer, so use it. But for other functions, the resolution is not so clear.

What is pi?

Let pi be the transcendental mathematical quantity usually denoted by a Greek letter and let PI be java.lang.Math.PI, the floating point number closest to pi. What is the correct value for java.lang.Math.sin(PI)? The answer is not zero, because PI is not equal to pi. Here are two answers, one obtained from the FSIN instruction on an Intel Pentium and the other obtained from FDLIBM:

     1.224606353822377e-16
     1.224646799147353e-16

These two values agree to only five significant figures. If the FDLIBM result is regarded as the correct answer, then the error in the FSIN result is 1.64e+11 ulps, or 164 gigaulps. It can be argued that both values are so small that either one is an acceptable result, but the mere fact that more than one answer is possible violates both the machine independence objective and the idealized model.

The first stage in the computation of sin(x) is argument reduction. If abs(x) is greater than pi/2, then an integer multiple of some approximation to pi is subtracted from x to bring it into the desired range. Different hardware and software manufacturers use different approximations to pi in this argument reduction. The FSIN instruction on Intel's Pentium has a 66-bit approximation to pi available in silicon for this purpose. Before the Pentium, earlier Intel 80x87 chips used less accurate approximations to pi. Documentation for newer Cyrix and AMD chips indicates they may use more than 66 bits.

FDLIBM has a 1144-bit approximation to pi. This allows argument reduction equivalent to an infinitely precise pi for any input value less than 2^1024, which covers every double precision floating point number. (See the papers by K. C. Ng available from http://www.validgh.com/.)

So, for sin(PI), the FDLIBM sin function gives the "correct" result required by our idealized model, while the hardware instructions on various chips give different, less correct, results. On the Pentium, the Java Development Kits from both Microsoft and Sun appear to use the less accurate FSIN instruction.

Using these different values of pi for the argument reduction in sin(), cos() and tan() is the most serious deterrent to machine-independent results. When measured in terms of relative error, or ulps, the discrepancies can be huge.

Moreover, it is going to be difficult to get everyone to agree on one particular value for pi. The infinitely precise scheme in FDLIBM has the best overall mathematical properties, but its advantages may not be strong enough to convince PC users to abandon their built-in functions.

Computing sin(x) for large x

Because the Pentium FSIN instruction only "knows" 66 bits of pi, it cannot do accurate reduction of large arguments. When invoked with an argument larger than 2^63, the instruction sets an error bit in the floating-point status word and returns the argument unchanged. Systems, like Java, that use the FSIN instruction without checking the error bit find that for large x, sin(x) is computed really fast but is equal to x. This is unacceptable for general purpose technical computing.

Serious flaw in exp(x)

Like sin(x), computation of exp(x) begins with an argument reduction. The role of pi is played by ln2, the natural logarithm of 2. For any x, let n = round(x/ln2). Then

exp(x) = 2^n * exp(x - n*ln2).

The reduced argument is in a range where exp() can be approximated quickly and accurately by a modest number of terms in a power series. The final scaling by 2^n is accomplished by simply adding n to the floating-point exponent.

The integer n has been chosen so that severe subtractive cancellation occurs in the computation of the reduced argument, x - n*ln2. If the reduced argument is to have full relative precision, then ln2 must be represented to, and the computation must be carried out in, higher precision. It appears that the exp function in Microsoft's libm, which is used by Visual C and by both Microsoft's and Sun's JDKs, fails to do this. For example,

     java.lang.Math.exp(100) = 2.688117141816161e+043

whereas the exact answer is

     exp(100) = 2.688117141816135448... * 10^43

The computed value is wrong in its last two digits; the error is more than 52 ulps.

More extensive experiments show that as x increases, the maximum error in exp(x) doubles whenever x/ln2 crosses a power of 2. This is strong evidence that the argument reduction is not being done carefully.
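The standard cure, used by FDLIBM, is to carry ln 2 as the sum of a high part with trailing zero bits (so that n times it introduces no rounding error for the n of interest) and a low correction part, and to subtract in two steps. The sketch below follows that technique; the constants are FDLIBM's published values, quoted here for illustration and worth verifying against the source.

     // ln 2 split into a high part (n*LN2_HI is effectively exact for the
     // n that occur) and a low correction part.
     static final double LN2_HI = 6.93147180369123816490e-01;
     static final double LN2_LO = 1.90821492927058770002e-10;

     static double reducedArgument(double x) {
         int n = (int) Math.round(x / 0.6931471805599453);  // n = round(x / ln 2)
         double hi = x - n * LN2_HI;   // subtraction with negligible rounding error
         double lo = n * LN2_LO;       // the part of n*ln2 that hi could not absorb
         return hi - lo;               // reduced argument with full relative precision
     }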

Table Maker's Dilemma

Is a perfectly rounded elementary math library possible and practical? The difficulty is that nobody knows how much computation is required to achieve perfect rounding in all cases. If F(x) is a transcendental function, then, except for special arguments like x = 0 and x = 1, the value of F(x) is irrational. So, it can't be a floating-point number, and it can't fall exactly halfway between two floating-point numbers. But it can fall arbitrarily close to halfway between two numbers. A potentially very large number of extra digits is required to decide which way to round. This is the Table Maker's Dilemma.

A good example involving the exponential function has been found by Vincent Lefevre and Jean-Michel Muller of the Ecole Normale Superieure de Lyon (http://www.ens-lyon.fr/~jmmuller/Intro-to-TMD.htm). Let x be the floating point number obtained by entering

x = 0.8375224553405740

The exact value of x is 7543731635572464 x 2^-53, or, in hexadecimal, x = 1.ACCFBE46B4EF0 x 2^-1. The decimal representation of exp(x) begins with

exp(x) = 2.3106351774748008...

The reason why we are interested in this number is revealed by the hexadecimal expansion of exp(x)

exp(x) = 1.27C2E4BC1EE70 7FFFFFFFFFFFF EB0C2DA33BA8F ... x 2^1

The first blank is at the point where we have to decide how to round. Should we round down to a floating-point number that ends in EE70, or up to EE71? The binary expansion after the rounding point has a zero, followed by 54 ones, then another zero. We need to go into the third double precision number before we can make the correct decision. If we decide to accept

exp(x) = 1.27C2E4BC1EE70 x 2^1

the error is 0.4999999999999999818 ulps. If, instead, we were to round up to EE71, the error would be 0.5000000000000000182 ulps. We have found our correctly rounded result, but it wasn't easy to get it.

Lefevre and Muller have done an exhaustive search to establish that this is the worst case for the exponential function with arguments between 1/2 and 1. But no one knows how much extra precision is needed to do perfect rounding for all elementary functions and all double precision floating point arguments.
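The example can be checked in plain Java by summing the exponential series in high-precision decimal arithmetic; the sketch below uses the BigDecimal/MathContext facilities of later Java releases with 60-digit precision, which is far too slow for a library but fine for verifying a single hard case.

     import java.math.BigDecimal;
     import java.math.MathContext;

     public class HardCase {
         public static void main(String[] args) {
             double x = 0.8375224553405740;
             MathContext mc = new MathContext(60);
             BigDecimal bx = new BigDecimal(x);     // exact value of the double
             BigDecimal term = BigDecimal.ONE;
             BigDecimal sum = BigDecimal.ONE;
             for (int n = 1; n <= 100; n++) {       // ample terms for |x| < 1
                 term = term.multiply(bx, mc).divide(BigDecimal.valueOf(n), mc);
                 sum = sum.add(term, mc);
             }
             System.out.println("60-digit exp(x) = " + sum);
             System.out.println("Math.exp(x)     = " + Math.exp(x));
         }
     }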

Library Development

Two research groups, one at Sun in Menlo Park and one at IBM in Haifa, are currently working on elementary function libraries that deliver correctly rounded results. As far as we know, their efforts have not yet been reviewed by others. We do not know how the execution time compares with machine instructions or FDLIBM. And, the libraries may use fairly large tables, so their memory requirements may be significant.

Proving that such a library is perfect by exhaustive search is out of the question. A rigorous proof will have to involve careful mathematical analysis of the underlying algorithms and code.

Conclusions

Here are my recommendations for the vendors of Java libraries and the maintainers of the Java specifications.

Immediately:

Fix the libraries on the PC so that exp(x) uses accurate argument reduction and sin(x), cos(x) and tan(x) can be computed for large x. Check to see if there are any other serious flaws.

The Java Spec:

Augment the reference to NETLIB with a link to Brophy's work. In strict mode, continue to require the FDLIBM results. In nonstrict mode, allow different values of pi and any results with less than one ulp error.

A Future Java Spec:

Monitor the development of the perfectly rounded libraries. If they become viable, independently reviewed, have publicly available source code and live up to their claims, then change the spec to require perfectly rounded results in strict mode.


Notes