Some Thoughts on Modern Computer Arithmetic
Wolfgang Walter
Dresden University of Technology, Department of Mathematics
Thursday, August 14, 2008, 15:30-16:30
With the long overdue revision of the IEEE 754 Standard for Floating-Point Arithmetic finally completed, one might ask what we have gained during the last 23 years at the arithmetic level. In view of the omnipresence of multicore superscalar processors, addressing multithreading and parallelism issues has become a virtual necessity and often a heavy burden as well. Considering the sheer speed of today's computers, automated error control seems essential to obtain accurate and reliable results from floating-point algorithms. Interval arithmetic provides the means to compute guaranteed enclosures of solutions and solution sets on a computer, but neither the original nor the revised IEEE 754 standard seems to bring users or programmers any closer to having efficient hardware support for intervals. This is particularly disappointing because highly efficient implementations have been known to be feasible with only a small additional hardware investment since the advent of processors with multiple FPUs.
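To illustrate the idea of guaranteed enclosures mentioned above: interval operations compute on endpoint pairs and round each result endpoint outward, so the true real-valued result always lies inside the computed interval. The sketch below is not the hardware support the talk argues for, just a minimal software illustration; it approximates directed rounding by widening each endpoint one ulp with `math.nextafter` (Python 3.9+), which is portable but slightly pessimistic. All function names are hypothetical.

```python
import math

def iadd(x, y):
    """Interval addition [x] + [y] with outward rounding.

    Widening each endpoint by one ulp via math.nextafter is a
    portable substitute for hardware directed-rounding modes.
    """
    lo = math.nextafter(x[0] + y[0], -math.inf)
    hi = math.nextafter(x[1] + y[1], math.inf)
    return (lo, hi)

def imul(x, y):
    """Interval multiplication: min/max over all four endpoint products."""
    products = [a * b for a in x for b in y]
    return (math.nextafter(min(products), -math.inf),
            math.nextafter(max(products), math.inf))
```

For example, `iadd((1.0, 2.0), (3.0, 4.0))` returns an interval slightly wider than `(4.0, 6.0)` that is guaranteed to contain the exact sum. Hardware support would make the directed roundings free instead of requiring an extra ulp-widening step per endpoint.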
On the other hand, interval arithmetic alone will often produce overly pessimistic error bounds (wide intervals) if used naively. The problem of computing numerical results of user-prescribed accuracy is inherently difficult and highly problem- and data-dependent. It is not addressed by IEEE 754R, although the advent of hardware-supported quadruple precision floating-point arithmetic is pushing the frontier of "doable" problems a bit further. Multi-precision and high-accuracy computations, in particular notoriously dangerous summation and accumulation operations epitomized by the ubiquitous dot product of vectors, continue to be haunted by roundoff errors and leading-digit cancellation. Although many programming languages have been providing matrix-vector operations for years and symbolic packages have included multi-precision arithmetic for decades, these are still implemented as composite operations at the hardware level, thus incurring unnecessary overhead and rounding errors.
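The summation hazard described above can be made concrete with a standard error-free transformation. Knuth's TwoSum computes both the rounded sum of two floats and the exact rounding error; carrying that error forward gives a compensated sum that recovers contributions a naive accumulation silently discards. This is a generic textbook technique, not the specific mechanism proposed in the talk; function names are illustrative.

```python
def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, e) such that
    s = fl(a + b) and s + e equals a + b exactly."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def compensated_sum(values):
    """Sum a sequence while carrying the exact rounding errors forward."""
    s = 0.0
    err = 0.0
    for x in values:
        s, e = two_sum(s, x)
        err += e          # accumulate the error terms separately
    return s + err
```

On the sequence `[2.0**53, 1.0, -2.0**53]`, a naive left-to-right sum returns `0.0` because adding `1.0` to `2.0**53` rounds away the `1.0` entirely, while the compensated sum returns the correct `1.0`.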
In this talk, the development of the so-called XSC languages (PASCAL-XSC, C-XSC, ACRITH-XSC, and FORTRAN-XSC), their compilers and runtime libraries since 1976 is outlined. New data formats, in particular a variable-precision, multi-purpose data structure allowing the efficient and reliable implementation of summation processes as well as multi-precision arithmetic based on standard floating-point operations is proposed. For efficiency reasons, redundancy plays a key role in the representation of numerical values in this data structure. It alleviates the carry propagation problem and enables parallelization and vectorization.
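The abstract does not specify the proposed data structure, but the idea of building exact, parallelizable accumulation out of standard floating-point operations can be sketched with a related, well-known technique: Priest/Shewchuk-style floating-point expansions, where a value is held redundantly as a list of non-overlapping partial sums so that no carry ever has to propagate eagerly. This is an illustration of the general principle only, not Walter's format; all names are hypothetical.

```python
def two_sum(a, b):
    # Error-free transformation: s + e equals a + b exactly.
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def accumulate(values):
    """Accumulate into a list of non-overlapping partial sums
    (a floating-point "expansion"). The representation is redundant:
    the true sum is the exact sum of the partials, and no precision
    is lost no matter how many terms are added."""
    partials = []
    for x in values:
        fresh = []
        for p in partials:
            s, e = two_sum(p, x)
            if e != 0.0:
                fresh.append(e)   # keep the error term as a new partial
            x = s
        fresh.append(x)
        partials = fresh
    return partials
```

Because each incoming term only interacts with a short list of partials, and the partials can be merged pairwise, such redundant representations are amenable to the vectorization and parallelization the abstract alludes to.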
Speaker Bio: Wolfgang Walter was born in Washington, D.C. in 1959 and went to school in Germany and, on occasion, in the USA. He graduated from high school in Germany in 1977 before taking up his studies of mathematics in the US. He received his BA in mathematics from UCLA in 1980 and his diplôme de mathématiques from the EPFL (École Polytechnique Fédérale de Lausanne, Switzerland) in 1983. As a research assistant at the EPFL, he wrote a FORTRAN 77 compiler for Niklaus Wirth's LILITH workstation before returning to Karlsruhe in 1984 to create and implement a Fortran extension compiler and runtime system to facilitate the use of IBM's ACRITH subroutine library for High-Accuracy Arithmetic, resulting in the IBM program product ACRITH-XSC. He completed his doctorate at the University of Karlsruhe in 1990 and his "Habilitation" in 1994. That same year he was offered a professorship at the Institute of Scientific Computing at the Technische Universität Dresden, where he has been teaching and doing research in computer arithmetic, interval mathematics and verified numerical computing since.
Contact: R. F. Boisvert
Note: Visitors from outside NIST must contact Robin Bickel; (301) 975-3668; at least 24 hours in advance.