Mathematical and Computational Sciences Division
Summary of Activities for Fiscal Year 2000
Information Technology Laboratory
National Institute of Standards and Technology
U. S. Department of Commerce
This document summarizes activities of the ITL Mathematical and Computational Sciences Division for FY 2000, including technical highlights, project descriptions, lists of publications, and examples of industrial interactions. Note: At the close of the fiscal year, the transfer of the Scientific Applications and Visualization Group from the ITL High Performance Systems and Services Division to MCSD was announced. We include the accomplishments of this group in this report even though it was not part of MCSD in FY 2000.
Questions regarding this document should be directed to Ronald F. Boisvert, Mail Stop 8910, NIST, 100 Bureau Drive, Gaithersburg, MD 20899-8910 (email@example.com).
Thanks to Robin Bickel for collecting and organizing the information for this document.
Table of Contents
Part I: Overview
The mission of the Mathematical and Computational Sciences Division (MCSD) is as follows.
Within the scope of our charter, we have set the following general goals.
With these goals in mind, we have developed a technical program in three general areas.
Work in the first area is accomplished primarily via collaborations with other technical units of NIST, supported by mathematical research in key areas. Projects in the second area are typically motivated by internal NIST needs, but produce products, such as software, that are widely distributed. The third area reflects work done primarily for the computational science community at large, although NIST staff benefit as well.
We have developed a variety of strategies to increase our effectiveness in dealing with such a wide customer base. We take advantage of leverage provided via close collaborations with other NIST units, other government agencies, and industrial organizations. We develop tools with the highest potential impact, and make online resources easily available. We provide routine consulting, as well as educational and training opportunities for NIST staff. We maintain a state-of-the-art visualization laboratory. Finally, we select areas for direct external participation that are fundamental and broadly based, especially those where measurement and standards can play an essential role in the development of new products.
Division staff maintain expertise in a wide variety of mathematical domains, including linear algebra, special functions, partial differential equations, computational geometry, Monte Carlo methods, optimization, inverse problems, and nonlinear dynamics. We also provide expertise in parallel computing, visualization, and a variety of software tools for scientific computing. Application areas in which we have been actively involved in this year include atomic physics, materials science, fluid mechanics, electromagnetics, manufacturing engineering, construction engineering, wireless communications, bioinformatics, image analysis and computer graphics.
In addition to our direct collaborations and consulting, the output of Division work includes publications in refereed journals and conference proceedings, technical reports, lectures, short courses, software packages, and Web services. MCSD staff members also participate in a variety of professional activities, such as refereeing manuscripts and proposals, serving on editorial boards and conference committees, and holding offices in professional societies. Staff members are also active in educational and outreach programs for mathematics and computer science students at all levels.
The Division is organized into four groups:
A listing of staff is provided in Part IV.
A list of recent activities in each of the division focus areas follows. Note that individual projects may have complementary activities in each of these areas. For example, the micromagnetic modeling work has led to the OOMMF software package, as well as to a collection of standard problems used as benchmarks by the micromagnetics community. Further details on many of these efforts can be found in Part II of this report.
Mathematical modeling in the physical sciences, engineering, and information technology.
To complement these activities, we engage in short-term consulting with NIST scientists and engineers, conduct a lecture series, and sponsor shortcourses and workshops in relevant areas. Information on the latter can be found in Part III of this report.
In this section, we highlight a few of the significant accomplishments of MCSD this past year. Further details on the technical accomplishments of the division can be found in Part II and Part III.
|Visualization of a three-dimensional dendrite resulting from a phase field model of solidification.|
Modeling and Simulation in Materials Science. A significant application of MCSD work during the past several years has been in the general area of modeling and simulation in materials science and engineering. The work has been varied in nature, ranging from the analysis of new theoretical and computational models, to the development of parallel computing and visualization methods, to the construction of modeling software. The impact of the program has been wide-ranging, yielding gains in capability and understanding both for our internal collaborators and for the materials science research community in industry and academia. Examples of accomplishments in this area during the last year include the following. Each was performed in collaboration with scientists from the NIST Materials Science and Engineering Laboratory.
Division staff participated in the organization of several significant conferences and workshops related to materials modeling. Geoffrey McFadden co-chaired the SIAM Conference on Mathematical Aspects of Materials Science held in Philadelphia in May 2000, and Anthony Kearsley co-chaired the workshop Large-Scale Computations in the Simulation of Materials held at Carnegie Mellon University in March 2000.
|Visualization of Bose-Einstein condensate on the cover of December 1999 Physics Today.|
Visualization of Bose-Einstein Condensates. Bose-Einstein condensates (BECs) are a new state of matter that is formed when small numbers of atoms are trapped and cooled to temperatures a few millionths of a degree above absolute zero. In a BEC, the atoms meld into a "superatom" which behaves as a single entity. BECs were first demonstrated by Eric Cornell and Carl Wieman of JILA, a joint research institute of NIST and the University of Colorado, in 1995. Since then there has been an explosion of interest in the further understanding and applications of this phenomenon. In particular, scientists in the NIST Physics Laboratory have undertaken a variety of research studies to understand properties of BECs and the nanoKelvin physics needed to manipulate them. Such computational studies result in voluminous high-dimensional data that is difficult to assimilate. NIST physicists turned to computer scientists in the MCSD Scientific Applications and Visualization Group to develop techniques to view the data. Among the particular cases addressed were the visualization of quantized vortices and solitons. The resulting collaborations have proven enormously successful. Visualizations developed by the team appeared on the cover of the December 1999 Physics Today, the August 2000 Parity (Japanese), and the December 2000 Optics and Photonics News. In addition, an article in the December 2000 issue of Scientific American features a graphic developed by the team on its opening page. Not only are the visualizations strikingly beautiful, they have led directly to new science. When Eric Cornell saw these visualizations of solitons in a BEC he was prompted to attempt to experimentally verify their existence, which he succeeded in doing this year. For their efforts, the visualization team won a NIST Bronze Medal award for 2000 (see below).
Digital Library of Mathematical Functions (DLMF). The DLMF project is developing a Web-based resource providing NIST-certified reference data and associated information for the higher functions of applied mathematics. Such functions possess a wealth of highly technical properties that are used by engineers, scientists, statisticians, and others to aid in the construction and analysis of computational models in a variety of applications. The data will be delivered within a rich structure of semantic-based representation, metadata, interactive features, and internal/external links. It will support diverse user requirements such as simple lookup, complex search and retrieval, formula discovery, interactive visualization, custom data on demand, and pointers to software and evaluated numerical methodology.
|The DLMF Editorial Board. Seated (left to right): Ingram Olkin, Ronald Boisvert, Daniel Lozier, Frank Olver, Jet Wimp, Walter Gautschi. Standing (left to right): Richard Askey, Peter Paule, Charles Clark, Nico Temme, Leonard Maximon, Michael Berry, William Reinhardt, Morris Newman.|
The DLMF was conceived as the successor for the NBS Handbook of Mathematical Functions (AMS 55), edited by M. Abramowitz and I. Stegun and published by NBS in 1964. AMS 55 is the most widely distributed and cited NBS/NIST technical publication of all time. (The U.S. Govt. Printing Office has sold over 150,000 copies, and commercial publishers are estimated to have sold 4-6 times that number). The DLMF is expected to contain more than twice as much technical information as AMS 55, reflecting the continuing advances of the intervening 40 years.
The DLMF is the largest project ever undertaken by mathematical organizations at NIST. It is being developed by a team of researchers led by Daniel Lozier, Frank Olver, Charles Clark (PL) and Ronald Boisvert. A set of external Associate Editors has been assembled to provide technical assistance. This year authors for all 38 chapters of technical material were identified and are now working under contract to NIST. After delivery to NIST in 2001, the material will be independently validated, and then converted for use on the World Wide Web by the NIST team. A complete working version of the DLMF will be available sometime in 2003.
Funding for the project is being provided by the NIST Information Technology Laboratory, the National Science Foundation, the NIST MEL Systems Integration for Manufacturing Applications (SIMA) program, the NIST TS Standard Reference Data Program, and the ATP Adaptive Learning Systems program. NSF funding will be used to contract for the services of experts on mathematical functions to develop and validate the technical material.
Strategic Planning and New Startups. During FY 1999 MCSD developed a five-year strategic plan. The major trends identified in the plan are as follows. These issues, as well as proposed MCSD responses, are detailed in the MCSD Strategic Plan, available upon request.
The ordinary industrial user of complex modeling packages has few tools available to assess the robustness, reliability, and accuracy of models and simulations. Without these tools and methods to instill confidence in computer-generated predictions, the use of advanced computing and information technology by industry will lag behind technology development. NIST, as the nation's metrology lab, is increasingly being asked to focus on this problem.
Research studies undertaken by laboratories like NIST are often outside the domain of commercial modeling and simulation systems. Consequently, there is a great need for the rapid development of flexible and capable research-grade modeling and simulation systems. Components of such systems include high-level problem specification, graphical user interfaces, real-time monitoring and control of the solution process, visualization, and data management. Such needs are common to many application domains, and re-invention of solutions to these problems is quite wasteful.
The availability of low-cost networked workstations will promote growth in distributed, coarse grain computation. Such an environment is necessarily heterogeneous, exposing the need for virtual machines with portable object codes. Core mathematical software libraries must adapt to this new environment.
All resources in future computing environments will be distributed by nature. Components of applications will be accessed dynamically over the network on demand. There will be increasing need for online access to reference material describing mathematical definitions, properties, approximations, and algorithms. Semantically rich exchange formats for mathematical data must be developed and standardized. Trusted institutions, like NIST, must begin to populate the net with such dynamic resources, both to demonstrate feasibility and to generate demand, which can ultimately be satisfied in the marketplace.
The NIST Laboratories will remain a rich source of challenging mathematical problems. MCSD must continually retool itself to be able to address needs in new application areas and to provide leadership in state-of-the-art analysis and solution techniques in more traditional areas. Many emerging needs are related to applications of information technology. Examples include VLSI design, security modeling, analysis of real-time network protocols, image recognition, object recognition in three dimensions, bioinformatics, and geometric data processing. Applications throughout NIST will require increased expertise in discrete mathematics, combinatorial methods, data mining, large-scale and non-standard optimization, stochastic methods, fast semi-analytical methods, and multiple length-scale analysis.
In early FY 2000, MCSD management participated in a strategic planning exercise for ITL as a whole. The MCSD plan provided input to the ITL Strategic Plan. MCSD's plans are consistent with those of the ITL and of NIST.
During FY 2000, additional investigation of NIST needs, along with the startup of new efforts, was undertaken in three areas.
Bioinformatics is widely recognized as a field of tremendous emerging importance. During the past few years, MCSD has been working to develop competence in this area. Our initial efforts have been undertaken by several staff members, a guest researcher, and several student workers. Successes to date include the development of GenPatterns, a software package enabling unique visualization and analysis of genome sequences. This year we began an effort to develop numerical methods and software for the alignment of genome sequences, a problem that can be formulated as an optimization problem. MCSD work in this area is being done jointly with the NIST Biotechnology Division as part of a project funded by the NIST ATP program.
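The optimization formulation of sequence alignment is classically solved by dynamic programming. The sketch below shows the standard Needleman-Wunsch global-alignment recurrence as an illustration of this formulation; the scoring parameters are arbitrary choices for the example, and the report does not state which methods MCSD actually used.

```python
# Illustrative sketch: global sequence alignment as a dynamic-programming
# optimization (Needleman-Wunsch). Scoring values are arbitrary choices
# for illustration, not those of any MCSD software.

def align_score(a, b, match=1, mismatch=-1, gap=-2):
    """Return the optimal global alignment score of sequences a and b."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning the prefixes a[:i] and b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # align a[:i] against all gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap          # align b[:j] against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                    # match/mismatch
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[n][m]

print(align_score("GATTACA", "GATCA"))   # 5 matches, 2 gaps -> 1
```

A traceback through the score table would recover the alignment itself; real genome-scale alignment adds refinements (affine gap penalties, banding, local alignment) on top of this same recurrence.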
Combinatorial methods refer to massively parallel, rough experimental techniques that are used to identify "hits": materials with certain desired properties that are candidates for further experimentation and possible development. Use of these methods is now routine in drug design, and there is much interest in applying such techniques to materials discovery. However, most materials, and their processing, are much more complex than simple chemicals. In addition, such techniques generate massive amounts of data, which must be managed and analyzed.
Several MCSD staff members participated in the NIST Combinatorial Methods Working Group during FY 2000. The group identified both industrial and NIST needs for combinatorial methods. In both domains, "informatics" issues were identified as one of the most difficult barriers to the more widespread use of such techniques. MCSD has recently undertaken two joint ATP-funded projects with MSEL to develop data mining techniques applicable to problems in combinatorial materials discovery:
The NIST Physics Laboratory (PL) has undertaken a significant new program in quantum computing. Quantum computers store information in the quantum states of matter. Algorithms for quantum computers have been developed that, in theory, could solve problems much faster than thought possible on classical computers. One example is Shor's algorithm for factoring large integers, which has important implications in cryptography. No quantum computers yet exist, and we are many years away from the realization of a practical quantum computer. Nevertheless, there is now widespread interest in further research in this field. The NIST PL has some of the world's best technology applicable to the development of components for quantum computing (e.g., ion trapping, single photon sources). As a result, it has become a center of excellence in this emerging field. MCSD staff members participated this year in extensive discussion and planning with the PL regarding ITL involvement in NIST's quantum computing effort. PL is seeking to expand NIST expertise in areas such as information theory, error correcting codes, quantum communication protocols, and quantum cryptography. It is expected that extensive interactions between mathematicians, computer scientists, and physicists will be necessary to develop practical technology to fully realize the potential of quantum computing.
|Left: Andrew Roosen (MSEL), Stephen Langer, and Edwin Fuller (MSEL). Right: Fern Hunt|
Awards. Several staff members received significant awards this year. In December 1999 Industry Week magazine named OOF as a 1999 Technology of the Year. Developed by Stephen Langer of MCSD, along with Ed Fuller and Andy Roosen of MSEL and Craig Carter of MIT, OOF is a system that processes micrographs of real materials with complex microstructure, allowing materials scientists to perform virtual testing to predict macroscopic properties. "Laboratory work has become very expensive due to the rising costs of materials, equipment, and personnel. OOF points researchers toward the right experiment to do in the lab," said Katherine Faber, Professor of Materials Science and Engineering at Northwestern University in an Industry Week interview. OOF joined 24 other technologies, such as the Toyota Prius hybrid gas-electric vehicle, in the annual Industry Week review. Langer accepted the award for NIST at Industry Week's annual awards ceremony held in Tucson in February 2000. In December 2000 NIST presented the OOF team with the 2000 Jacob Rabinow Applied Research Award. The Rabinow Award is presented yearly in recognition of outstanding application of NIST research in industry.
Fern Hunt, a mathematician in the MCSD Mathematical Modeling Group, received the Arthur Flemming Award in May 2000. The Flemming Award is given annually to recognize outstanding Federal employees with less than 15 years of service. The Flemming Award Commission selects the honorees, and the award is sponsored by George Washington University and Government Executive magazine. This year 12 winners were selected, six in the administrative category and six in the science and engineering category. Hunt was cited for fundamental contributions to probability and stochastic modeling, mathematical biology, computational geometry, nonlinear dynamics, computer graphics, and parallel computing, as well as for extensive collaborations with scientists and engineers. She was presented with the award at the Flemming Awards Ceremony at the Cosmos Club in Washington, DC. Earlier in the spring, Hunt delivered the 2000 Marden Lecture in Mathematical Sciences at the University of Wisconsin-Milwaukee. Each spring the Mathematical Science Department there invites a distinguished scientist to deliver a lecture on mathematics and its applications to a general audience. The lecture is named after Morris Marden (1905-1991), former chairman of the department and founder of the graduate program. Previous speakers were James Yorke, Guido Weiss, Richard Askey and Walter Rudin.
|Left: Ronald Boisvert. Right: Steve Satterfield, Peter Ketcham, Terence Griffin, William George, and Judith Devaney.|
In May 2000, the Association for Computing Machinery (ACM) presented Ronald Boisvert its 1999 Outstanding Contribution to ACM Award. Boisvert was cited for his "leadership and innovation as Editor-in-Chief of the ACM Transactions on Mathematical Software and his exceptional contributions to the ACM Digital Library project". ACM has honored 29 professionals for service to the society since the award's inception in 1976. Boisvert received the award at ACM's annual awards ceremony at the Fairmont Hotel in San Francisco.
A team of MCSD staff from the Scientific Applications and Visualization Group was awarded a NIST Bronze Medal for its work in visualization of Bose-Einstein condensates. The honorees were Judith Devaney, William George, Terence Griffin, Peter Ketcham, and Steve Satterfield. They worked with colleagues in the NIST Physics Lab to develop unique 3D color representations of the output of computational models of Bose-Einstein condensates. The visualizations illustrated properties of the condensates which were previously unknown, and which have since been experimentally verified. The pictures were selected as cover illustrations by Physics Today (Dec. 1999), Parity magazine (Japanese, Aug. 2000), and Optics and Photonics News (Dec. 2000), and were featured in a title spread in Scientific American (Dec. 2000).
Technology Transfer. MCSD staff members continue to be active in publishing the results of their research. This year 41 publications authored by Division staff appeared, 25 of which were published in refereed journals. Fifteen additional papers have been accepted for publication in refereed journals. Another 12 manuscripts have been submitted for publication and 35 are being developed.
MCSD staff members were invited to give 53 lectures in a variety of venues and contributed another 21 talks at conferences and workshops. Thirteen shortcourses were provided by MCSD for NIST staff this year, including Non-numerical Methods for Scientific Computing (Isabel Beichl), Fortran 90 for Scientists and Engineers (William Mitchell), The Dense Eigenproblem (Pete Stewart), LabVIEW Programming (Jim Filla), Introduction to Java (John Koontz), and C++ (Adele Peskin). Each was very well attended. The Division lecture series remained active, with 13 talks presented (four by MCSD staff members); all were open to NIST staff.
MCSD staff members also organize workshops, minisymposia, and conferences to provide forums to interact with external customers. This year, staff members were involved in organizing twelve such events. Several of these were done in collaboration with other NIST Laboratories. Michael Donahue and Don Porter hosted a workshop (joint with MSEL) at NIST for users of their OOMMF micromagnetic modeling system in August 2000. Fern Hunt was co-organizer (with BFRL, PL, and MEL) of a NIST workshop on Metrology and Modeling of Color and Appearance held in March 2000. Tim Burns co-organized (with MEL and PL) a NIST workshop on Non-Contact Thermometry held in October 2000. Saul Gass is co-organizing (with MEL) a workshop on Supply Chain Management. Two staff members were co-chairs of major external events. Geoffrey McFadden co-chaired the SIAM Conference on Mathematical Aspects of Materials Science held in Philadelphia in May 2000, and Anthony Kearsley co-chaired the workshop Large-Scale Computations in the Simulation of Materials held at Carnegie Mellon University in March 2000.
Software continues to be a by-product of Division work, and the reuse of such software within NIST and externally provides a means to make staff expertise widely available. Several existing MCSD software packages saw new releases this year, including OOMMF (micromagnetic modeling), OOF (material microstructure modeling), TNT (Template Numerical Toolkit for numerical linear algebra in C++), and SciMark (benchmark for numerical computing in Java).
Tools developed by MCSD have led to a number of commercial products. Two recent examples are f90gl and IMPI. F90gl is a Fortran 90 interface to OpenGL graphics. Originally developed by William Mitchell of MCSD for use in NIST applications, f90gl was subsequently adopted by the industry-based OpenGL Architecture Review Board to define the standard Fortran API for OpenGL. NIST's reference implementation has since been included in commercial products of Lahey Computer Systems, Compaq, and Interactive Software Services. Several others are planned. Recently, staff of the Scientific Applications and Visualization Group facilitated the development of the specification for the Interoperable Message Passing Interface (IMPI). IMPI extends MPI to permit communication between heterogeneous processors. MCSD staff also developed a Web-based conformance testing facility for implementations. Several commercial implementations are now under development. Two of the companies developing implementations, Hewlett-Packard and MPI Software Technologies, participated in the first public demonstration of IMPI on the exhibit floor of the SC'00 conference in Dallas in November 2000.
Web resources developed by MCSD continue to be among the most popular at NIST. The MCSD Web server at math.nist.gov has serviced more than 29 million Web hits since its inception in 1994, 9 million of them in the past year alone. Altavista has identified more than 7,400 external links to the Division server. The top seven ITL Web sites are all services offered by MCSD:
The NIST Guide to Available Mathematical Software (GAMS), a cross-index and virtual repository of mathematical software, is used more than 10,000 times each month. During a recent 36-month period, 34 prominent research-oriented companies in the .com domain registered more than 100 visits apiece to GAMS. The Matrix Market, a visual repository of matrix data used in the comparative study of algorithms and software for numerical linear algebra, sees more than 100 users each day. It has distributed more than 20 Gbytes of matrix data, including more than 80,000 matrices, since its inception in 1996.
Professional Activities. Division staff members continue to make significant contributions to their disciplines through a variety of professional activities. This year Ronald Boisvert was elected Chair of the International Federation for Information Processing (IFIP) Working Group 2.5 (Numerical Software). He was also appointed Vice-Chair of the ACM Publications Board. Donald Porter was elected to the fourteen-member Tcl Core Team, which manages the development of the Tcl scripting language. Daniel Lozier continues to serve as chair of the SIAM Special Interest Group on Orthogonal Polynomials and Special Functions.
Division staff members continue to serve on journal editorial boards. Ronald Boisvert serves as Editor-in-Chief of the ACM Transactions on Mathematical Software. Daniel Lozier is an Associate Editor of Mathematics of Computation and the NIST Journal of Research. Geoffrey McFadden is an Associate Editor of Journal of Computational Physics, SIAM Journal of Applied Mathematics, Interfaces and Free Boundaries, and the Journal of Crystal Growth. Bradley Alpert is a newly appointed Associate Editor of the SIAM Journal of Scientific Computing, and Isabel Beichl was recently appointed to the Editorial Board of Computing in Science & Engineering.
Division staff members work with a variety of external working groups. Ronald Boisvert and Roldan Pozo chair the Numerics Working Group of the Java Grande Forum. Roldan Pozo chairs the Sparse Subcommittee of the BLAS Technical Forum. Michael Donahue and Donald Porter are members of the Steering Committee of muMag, the Micromagnetic Modeling Activity Group.
Mathematics in NIST History. As NIST approaches its centennial in 2001, some attention is being focused on the historic contributions of NBS/NIST to the field of mathematics. The first such accolade came in early 2000 when Computing in Science & Engineering magazine named an NBS-developed method one of the "Top 10 Algorithms of the Century". The method cited was the Krylov subspace iteration, which was pioneered at the NBS Institute for Numerical Analysis by Cornelius Lanczos, Magnus Hestenes, and Eduard Stiefel in the late 1940s and early 1950s.
Later this year NIST will be publishing a centennial volume entitled A Century of Excellence in Measurements, Standards, and Technology: A Chronicle of Selected Publications of NBS/NIST, 1901-2000. The publication will highlight approximately 100 highly significant NBS/NIST publications of the last century. Four of the highlighted publications are associated with the work of ancestor organizations to MCSD:
In addition, three publications related to statistics, and four publications related to computer science, were also selected.
Staff News. MCSD grew in size during FY 2000. Dr. Andrew Dienstfrey joined the MCSD Mathematical Modeling Group in Boulder in March 2000. Dienstfrey comes to NIST from a postdoctoral appointment at the Courant Institute. He will be working on problems in computational electromagnetics and optoelectronics in collaboration with NIST scientists. Dr. David Gilsinn, a mathematician formerly with the NIST Manufacturing Engineering Laboratory, joined the MCSD Optimization and Computational Geometry Group in September 2000. He will be working on smooth surface approximations with applications to terrain modeling. Dr. Abdou Youssef, Professor of Electrical Engineering and Computer Science at George Washington University, became an MCSD faculty appointee in September 2000. He will be working on search and retrieval aspects of the Digital Library of Mathematical Functions.
A new NIST/NRC Postdoctoral Associate will be joining MCSD in January 2001. Dr. Katherine Gurski, currently a postdoc at the NASA Goddard Space Flight Center, received a Ph.D. in mathematics from the University of Maryland. She will be working with Geoffrey McFadden on the use of boundary integral methods for the modeling of three-dimensional dendritic growth in metal alloys.
At the close of FY2000, a reorganization of the service functions within ITL brought the Scientific Applications and Visualization Group to MCSD. Led by Dr. Judith Devaney, the group specializes in collaborative research with NIST scientists in the areas of parallel computing and scientific visualization. The group also provides primary consulting services to users of the NIST central scientific computing facility, maintaining expertise in the use of a variety of scientific software packages. As part of its work, the group maintains a well-equipped visualization laboratory and a Unix training facility. The group consists of 13 full-time staff members and several guest researchers; two staff members are in Boulder. Seven hold Ph.D. degrees.
Two MCSD staff members were on extended developmental assignments during FY 2000. Anthony Kearsley spent half time at Carnegie Mellon University, working with researchers in the Computer Science Department there on applications of optimization methods to problems in networking and computer security. Bradley Alpert spent nine months at the Courant Institute of NYU working with Leslie Greengard and colleagues on fast semi-analytic methods with applications in electromagnetic modeling.
Geoffrey McFadden hosted a three-month visit by Professor Stefan van Vaerenbergh, from the Free University in Brussels, Belgium. In collaboration with Sam Coriell of MSEL, they studied the effect of a temperature-dependent solute diffusivity on the stability of a solid-liquid interface during the directional solidification of a binary alloy.
Student Employment Program. MCSD provided support for seven student staff members on summer appointments during FY 2000. Such appointments provide valuable experiences for students interested in careers in mathematics and the sciences. In the process, the students can make very valuable contributions to MCSD programs. This year's students were as follows.
Montgomery Blair High School (advisor: I. Beichl): analysis of images from polymer dewetting experiments.
Montgomery Blair High School (advisor: F. Hunt): application of the GenPatterns software package for genome data analysis.
Montgomery Blair High School (advisor: S. Langer): processing of material micrographs with the OOF software package.
Carnegie Mellon University: evaluation of Web-based display technologies for mathematics, and development of a Java-based function exploration applet.
University of Arkansas (advisor: J. Devaney): accessibility of Web pages.
Computational complexity and applications to gene structures.
George Washington University (advisor: M. Donahue): addition of Preisach modeling modules to the OOMMF software package.
Mathematical Modeling Group
Alfred S. Carasso
A new technique for blind deconvolution of images has recently been developed that is garnering favorable attention at NIST and in the larger research community. Blind deconvolution seeks to deblur an image without knowing the cause of the blur. This is of interest in numerous medical, industrial, scientific, and military applications. The mathematical problem is quite difficult and not fully understood. So far, most approaches to blind deconvolution have been iterative in nature. However, the iterative approach is generally ill-behaved, often developing stagnation points or diverging altogether. When the iterative process is stable, a large number of iterations, and several hours of computation, may be necessary to resolve fine detail.
In a research paper to appear shortly in the SIAM Journal on Applied Mathematics, Alfred Carasso has developed a novel approach to that problem that is non-iterative in nature. The method does not attempt to solve the blind deconvolution problem in full generality. Instead, attention is focused on a wide class of blurs that includes and generalizes Gaussian and Lorentzian distributions, and has significant applications. Likewise, a large class of sharp images is exhibited and characterized in terms of its behavior in the Fourier domain. Within that framework, it is shown how 1-D Fourier analysis of blurred image data can be used to detect system point spread functions (psf). A separate image deblurring technique uses this detected psf to deblur the image. Each of these steps uses direct (non-iterative) methods. Although the new technique does require interactive adjustment of parameters, it still enables blind deblurring of 512x512 images in only minutes of CPU time on current desktop workstations.
Two distinct methodologies have been developed for this class of problems, the BEAK method and the APEX method. The BEAK method requires prior information on the 'gross behavior' of the Fourier transform of the ideal image, along a line through the origin in the Fourier plane. This technique generally returns a psf that is approximately equivalent to the true psf. The APEX method does not require knowledge of the gross behavior, but assumes the image to be a recognizable object. Several fast interactive trials are necessary using that method. In general, the APEX method returns several distinct psfs that lead to useful yet visually distinct restorations.
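The BEAK and APEX algorithms themselves are considerably more elaborate, but the core idea of direct, non-iterative Fourier-domain deblurring can be sketched in one dimension. Everything below (the signal, the Gaussian psf, and the regularization parameter eps) is our own illustrative assumption, not taken from the paper:

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) forward discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse discrete Fourier transform."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def deblur(blurred, psf, eps=1e-4):
    """Direct (non-iterative) deconvolution: divide in the Fourier domain,
    with a Tikhonov-style term eps to limit noise amplification."""
    B, H = dft(blurred), dft(psf)
    F = [b * h.conjugate() / (abs(h) ** 2 + eps ** 2) for b, h in zip(B, H)]
    return [z.real for z in idft(F)]

# Demo: blur a box signal with a narrow Gaussian psf (centered at index 0,
# circularly), then recover the signal with a single direct step.
n = 64
signal = [1.0 if 24 <= k < 40 else 0.0 for k in range(n)]
psf = [math.exp(-0.5 * min(k, n - k) ** 2) for k in range(n)]
total = sum(psf)
psf = [v / total for v in psf]
blurred = [z.real for z in idft([a * b for a, b in zip(dft(signal), dft(psf))])]
recovered = deblur(blurred, psf)
```

Once a psf has been detected, this division-with-regularization step is what makes the overall method fast: there is no iteration, only transforms and a pointwise quotient.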
Invited talks on this work have been presented at UCLA, Stanford, Johns Hopkins, the Courant Institute, the University of Maryland, the Los Alamos National Laboratory, the Air Force Research Laboratory, and the Institute for Defense Analyses. Inquiries have also been received from researchers in industry, and from doctoral students contemplating dissertations on blind deconvolution. At NIST, the APEX method is being successfully applied in the Surface and Microanalysis Science Division, CSTL, to improve resolution of Scanning Electron Microscope images. In the Ionizing Radiation Division, PL, a research project on improving measurement infrastructure for Medical Imaging will include a study of this approach to improve resolution in PET and SPECT images.
Original (left) and deblurred (right) scanning electron microscope images of dust particles.
Recently, the range of applicability of the BEAK method was extended in two major ways. First, it was discovered that images of similar objects often have approximately equal gross behavior. Therefore, in using the BEAK method for psf detection, it is only necessary to know the gross behavior in a sharp image of a similar object rather than that in the typically unknown original image. By using such substitute images for a priori information, the BEAK method becomes applicable in a wide variety of situations. Second, using substitute images, a variant of the BEAK method was developed that can approximately identify defocus psfs provided the defocus is not too severe.
Matthew Davies (NIST MEL)
Chris Evans (NIST MEL)
Jon Pratt (NIST MEL)
Tony Schmitz (NIST MEL)
Brian Dutterer (NIST MEL)
Michael Kennedy (NIST MEL)
Carol Johnson (NIST PL)
Howard Yoon (NIST PL)
Machining operations comprise a substantial portion of the world's manufacturing infrastructure. In the 1st CIRP International Workshop on Machining Operations, Atlanta, GA, 19 May 1998, one of the founding fathers of the scientific study of machining, Dr. Eugene Merchant, estimated that 15% of the value of all mechanical components manufactured worldwide is derived from machining operations. However, despite its obvious economic and technical importance, machining remains one of the least well-understood manufacturing operations. Our work is a continuing collaboration on the modeling and measurement of machining processes with researchers in the Manufacturing Process Metrology Group in MEL's Manufacturing Metrology Division (MMD). The mission of MMD is to fulfill the measurement and standards needs of U.S. discrete-parts manufacturers in mechanical metrology and advanced manufacturing technology.
High-speed metal cutting.
This year, in an ongoing collaboration on the thermal metrology of high-speed metal-cutting processes with Carol Johnson and Howard Yoon of the Optical Technology Division in the NIST Physics Laboratory, in-situ measurements were made of the temperature of the tool-chip interface during orthogonal cutting of a mild steel commonly used in the auto industry; these measurements are an order of magnitude more accurate than existing ones. This work was partially funded by ATP, and it has been re-funded for a third (and final) year for FY00-01. The results of this experimental program will provide a set of material reference data that will contribute to a number of ongoing research programs of interest to U.S. industry, including material-response modeling, validation of finite-element simulations of high-speed metal-cutting operations, and studies of tool wear.
In a new effort with M. Davies and T. Schmitz, considerable progress has been made on the use of receptance coupling substructure analysis for tool tuning. The goal of this effort is to develop methods for the rapid selection of the length-to-diameter ratio of a milling tool that provides the best dynamic stability behavior for a given high-speed machining operation. By varying this ratio, one can "tune" the tool/holder/machine system to avoid tool chatter in regions of the parameter space of cutting conditions. The basic idea is to combine experimental measurements of the frequency response of the machine spindle/tool holder assembly with analytical predictions of the cutting tool frequency response in order to predict the frequency response of the entire cutting assembly.
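In full receptance coupling substructure analysis the components are described by matrices of translational and rotational receptances. The scalar sketch below, with entirely hypothetical parameter values, shows only the basic coupling step: frequency response functions of two substructures joined rigidly at a point combine to predict the response of the assembly.

```python
def sdof_receptance(omega, m, c, k):
    """Frequency response (displacement per unit force) of a
    single-degree-of-freedom mass-damper-spring oscillator."""
    return 1.0 / complex(k - m * omega ** 2, c * omega)

def couple(Ha, Hb):
    """Receptance of two substructures rigidly joined at one coordinate:
    G = Ha - Ha (Ha + Hb)^(-1) Ha, which reduces to Ha*Hb/(Ha+Hb)
    in the scalar case (stiffnesses add, like springs in parallel)."""
    return Ha * Hb / (Ha + Hb)

# Hypothetical spindle/holder (as if measured) and tool (as if modeled).
spindle = dict(m=5.0, c=200.0, k=2.0e7)
tool = dict(m=0.1, c=20.0, k=5.0e6)

# Predicted frequency response of the assembled spindle/holder/tool system.
omegas = [10.0 * i for i in range(1, 500)]
G = [couple(sdof_receptance(w, **spindle), sdof_receptance(w, **tool))
     for w in omegas]
```

Scanning such predicted assembly responses over candidate tool lengths is, in spirit, how the tool-tuning selection works: one looks for the geometry whose assembled response best avoids chatter at the intended cutting speeds.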
Geoffrey B. McFadden
William Boettinger (NIST MSEL)
John Cahn (NIST MSEL)
Carelyn Campbell (NIST MSEL)
Sam Coriell (NIST MSEL)
Bernard Billia (University of Marseille)
Alex Chernov (NASA Marshall Space Flight Center)
Robert Sekerka (Carnegie Mellon University)
Adam Wheeler (University of Southampton)
Materials processing applications often involve mathematical modeling as a predictive tool to understand the properties of processed materials as a function of the growth conditions of the materials. During the growth of alloys by solidification from the melt, the homogeneity of the solid phase is strongly affected by the prevailing conditions at the solid-liquid interface, both in terms of the geometry of the interface and the arrangements of the local temperature and solute fields near the interface. Instabilities that occur during crystal growth can cause spatial non-uniformities in the sample that significantly degrade the mechanical and electrical properties of the crystal. Considerable attention has been devoted to understanding and controlling these instabilities, which generally include interfacial and convective modes that are challenging to model by analytical or computational means.
The MCSD has a long history of collaboration with the NIST Metallurgy Division, which has included support from the NASA Microgravity Research Division, as well as extensive interactions with university and industrial researchers. In the past year, with W. Boettinger, S. Coriell, and C. Campbell in the Metallurgy Division, a theoretical study of the growth of intermetallic layers in ternary alloys was conducted in order to gain insight into such practical problems as diffusion bonding, soldering, brazing, oxidation, and corrosion. The study is based on finding similarity solutions that describe the dynamics and geometry of the growing phases, which result in nonlinear equations that describe the interface motion. This work derived selection rules that enable the computation of diffusion paths describing the variation of the alloy composition through the intermetallic layers during the growth process.
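Schematically, and in our own notation rather than the paper's, such diffusion-controlled layer growth admits similarity solutions of the following form:

```latex
% Concentrations in each phase depend only on the similarity variable:
c(x,t) \;=\; A + B\,\operatorname{erf}\!\left(\frac{x}{2\sqrt{Dt}}\right),
\qquad
X(t) \;=\; 2K\sqrt{Dt},
% with K fixed by solute conservation at the moving interface
% (a Stefan condition; superscripts denote the two sides of the interface):
\left(c^{-}-c^{+}\right)\frac{dX}{dt}
  \;=\;
  \left.\left(-D^{-}\,\partial_x c^{-}\right)
  -\left(-D^{+}\,\partial_x c^{+}\right)\right|_{x=X(t)}
```

Substituting the error-function profiles into the interface balances yields transcendental equations for the constants K, which is where selection rules for the diffusion paths arise.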
In another project, performed in collaboration with A. Chernov of the NASA Marshall Space Flight Center, G. McFadden, B. Murray, and S. Coriell studied the effect of time-dependent shear flows on the stability of step bunches during crystal growth from supersaturated solution. Numerical solutions of the linearized Navier-Stokes and diffusion equations were found using Floquet theory, and used to examine how the mixing introduced by the flow modifies the coupling between the interface perturbation and the induced concentration waves. Studies of this nature help define optimal processing conditions for the growth of the large opto-electronic crystals that are used in laser fusion applications.
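For reference, Floquet theory applies here because linearization about the time-periodic shear flow yields a linear system with periodic coefficients:

```latex
% A system with T-periodic coefficients,
\dot{\mathbf{x}} \;=\; A(t)\,\mathbf{x}, \qquad A(t+T) = A(t),
% admits solutions of Floquet form,
\mathbf{x}(t) \;=\; e^{\mu t}\,\mathbf{p}(t), \qquad \mathbf{p}(t+T) = \mathbf{p}(t),
% and the perturbation grows (the flow is unstable) precisely when some
% Floquet exponent satisfies Re(\mu) > 0.
```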
During the past year G. McFadden and S. Coriell also worked with B. Billia and colleagues at the University of Marseille, France, on modeling a diagnostic technique known as Peltier Interface Demarcation. This is an experimental method in which an electrical current is passed through a metallic sample that is undergoing directional solidification. The passage of a pulsed current through the solid-liquid interface generates or absorbs heat through the Peltier effect, which produces a corresponding jump in the solute concentration at the interface. The solute pattern in the crystal can be examined subsequently to determine the geometry of the interface during solidification. A nonlinear computational study and a theoretical analysis based on Laplace transform theory were used to relate the size of the current pulse to the observed variations in interface velocity and concentration. The work revealed an important coupling between the thermal and concentration fields that must be taken into account in order to predict the magnitude of the response to the current pulse.
Other projects that have been undertaken this year include work with S. Coriell and R. Sekerka (Carnegie Mellon University) on models of dendritic growth, work with J. Cahn (850), A. Wheeler (University of Southampton), and R. Braun on diffuse interface models of order-disorder transitions, and work with D. Anderson and A. Wheeler on phase field models of solidification with convection.
Robert McMichael (NIST MSEL)
The engineering of such IT storage technology as magnetic recording media, GMR sensors for read heads, and magnetic RAM (MRAM) elements requires an understanding of magnetization patterns in magnetic materials at a submicron scale. Mathematical models are required to interpret measurements at this scale. The Micromagnetic Modeling Activity Group (muMAG) was formed to address fundamental issues in micromagnetic modeling through two activities: the definition and dissemination of standard problems for testing modeling software, and the development of public domain reference software. MCSD staff is engaged in both of these activities. Their Object-Oriented MicroMagnetic Framework (OOMMF) software package is a reference implementation of micromagnetic modeling software. Their achievements in this area since October 1999 include the following.
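For context, the magnetization dynamics that micromagnetic solvers such as OOMMF compute are governed by the Landau-Lifshitz-Gilbert equation,

```latex
\frac{d\mathbf{M}}{dt}
  \;=\; -|\gamma|\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}
        \;-\; \frac{|\gamma|\,\alpha}{M_s}\,
              \mathbf{M}\times\left(\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}\right)
```

where the effective field H_eff collects the exchange, anisotropy, magnetostatic, and applied-field contributions, alpha is the damping parameter, and M_s is the saturation magnetization.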
Edwin Fuller (NIST MSEL)
Andrew Roosen (NIST MSEL)
Craig Carter (MIT)
Edwin Garcia (MIT)
Daniel Vlacich (Montgomery Blair High School)
The OOF Project, a collaboration between MCSD, MSEL's Ceramics Division, and MIT, is developing software tools for analyzing real material microstructure. The microstructure of a material is the (usually) complex ensemble of polycrystalline grains, second phases, cracks, pores, and other features occurring on length scales large compared to atomic sizes. The goal of OOF is to use data from a micrograph of a real material to compute the macroscopic behavior of the material via finite element analysis.
In the period from September 1, 1999 to August 30, 2000, the preprocessor (ppm2oof) was downloaded 400 times from the OOF website, and the solver (OOF) was downloaded 532 times. The web site recorded over 110,000 hits during that period. The OOF mailing list (as of 11/30/00) has 212 members from around the world.
Material micrograph (left) and result of OOF simulation (right).
Major developments during FY00 include the following.
Source code: The OOF source code was rearranged into a more convenient package and made available on the web for the first time. This has allowed it to be compiled and run on computer architectures not available at NIST.
Thermal calculations: The original version of OOF included only linear elastic effects, with thermal expansions computed from a constant applied temperature. OOF now also computes thermal diffusion, allowing users to specify the thermal conductivity tensor for materials, to compute temperature and heat flux profiles, and to observe the effects of thermal expansion in non-uniform temperature fields. The thermal version of OOF was released on the OOF website in October 2000. It is being used by researchers at General Electric who are working with NIST on measuring the thermal conductivity of the ceramic coatings that are used as thermal barriers on jet engine turbine blades. This work is being done under a grant from DOE.
Automation: A method of automatically dividing an image into grains and grain boundary phases was added to ppm2oof, and applied to the analysis of 249 micrographs of a steel sample. These images form a three-dimensional micrograph. Fully automating the image analysis and mesh generation procedure allowed us to get statistical information on the variation of the two-dimensional OOF calculations from slice to slice within the three-dimensional data set. The automation code was partially written by D. Vlacich as part of a high school summer research project.
Piezoelectric calculations: This version of OOF was created by E. Garcia as part of his PhD thesis work at MIT. It demonstrated that OOF could be extended to new fields (electric, in this case), and formed the basis for the thermal version mentioned above. It will not be released in its present form, though, because the same functionality will be in OOF2.
OOF 2: The original OOF has some design deficiencies that make it difficult to extend to new physical effects, such as thermal conductivity and piezoelectricity. So, although thermal and piezoelectric versions have been created, they are difficult to maintain and somewhat clumsy to use. Work is underway on OOF2, which will be based on a much more flexible object-oriented finite element framework. It will allow very easy addition of new fields and new couplings between fields, as well as new kinds of finite elements. It will be a platform that users can extend to address their own problems.
Cross Sections: A way of plotting stresses and strains along an arbitrary line drawn through the sample was implemented.
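The thermal capability described above assigns material properties element by element and solves for the resulting fields. The following one-dimensional toy problem, entirely our own sketch and unrelated to the actual OOF code, illustrates that workflow: per-element conductivities, matrix assembly, boundary conditions, and solution.

```python
def solve_heat_1d(conductivity, t_left, t_right):
    """Steady 1-D heat conduction on the unit interval with fixed end
    temperatures. conductivity[i] is the conductivity of element i;
    returns the nodal temperatures."""
    n = len(conductivity)            # n elements, n+1 nodes
    h = 1.0 / n
    size = n + 1
    K = [[0.0] * size for _ in range(size)]
    f = [0.0] * size
    # Assemble the element "stiffness" contributions (two-node elements).
    for i, k in enumerate(conductivity):
        ke = k / h
        K[i][i] += ke
        K[i][i + 1] -= ke
        K[i + 1][i] -= ke
        K[i + 1][i + 1] += ke
    # Dirichlet boundary conditions at the two ends.
    for node, value in ((0, t_left), (n, t_right)):
        K[node] = [0.0] * size
        K[node][node] = 1.0
        f[node] = value
    # Gaussian elimination (no pivoting; adequate for this small system).
    for col in range(size):
        piv = K[col][col]
        for row in range(col + 1, size):
            if K[row][col]:
                factor = K[row][col] / piv
                for c2 in range(col, size):
                    K[row][c2] -= factor * K[col][c2]
                f[row] -= factor * f[col]
    T = [0.0] * size
    for row in range(size - 1, -1, -1):
        s = f[row] - sum(K[row][c2] * T[c2] for c2 in range(row + 1, size))
        T[row] = s / K[row][row]
    return T

# Two materials: a conductive half and an insulating half, as if each
# element's conductivity had been read off a micrograph.
temps = solve_heat_1d([1.0] * 5 + [0.1] * 5, t_left=100.0, t_right=0.0)
```

In OOF the same ingredients appear in two dimensions with tensor conductivities per element; the point of the sketch is only the pipeline from per-element properties to a solved field.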
OOF received several honors during this past year. In December 1999, Industry Week magazine named OOF one of the top 25 Technologies of the Year for 1999, placing it in a prestigious group that included Toyota's hybrid gas-electric car, the Prius. In December 2000, S. Langer, E. Fuller, and A. Roosen received the NIST Jacob Rabinow Applied Research Award for the development of OOF. The award is given annually in recognition of "superior achievement in the practical application of the results of scientific or engineering research."
Together with other NIST research staff, we are collecting, classifying, and developing solution techniques for frequently encountered, nonstandard modeling and optimization problems. These problems arise, for example, in the measurement of material properties, in the simulation of difficult-to-predict material behavior, and in computationally expensive numerical data analysis. To date, there has been enthusiastic interest and a desire to participate from NIST scientists, academia, other national laboratories, and industry. One application has been the design of specialized signal sets for enhanced wireless transmissions. We have developed a computationally efficient way to choose signal sets that minimize the probability that noise will interfere with transmissions (i.e., that a digital signal will be incorrectly received). A unique and exciting aspect of this work is the extreme flexibility allowed in the definition of the statistical models employed to simulate different kinds of noise. Unlike other methods, our scheme has succeeded in identifying optimal signal sets even for demanding noise models.
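To illustrate the flavor of this optimization problem (this is not our actual algorithm), one can search for a set of unit-power signal points that maximizes the minimum pairwise distance, a standard proxy for minimizing error probability under symmetric noise. All parameters below are invented for the example.

```python
import random

def error_proxy(points):
    """Proxy for noise robustness: minimum pairwise distance of the
    signal set (larger is better for symmetric noise models)."""
    return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
               for i, p in enumerate(points) for q in points[i + 1:])

def normalize(p):
    """Project a point onto the unit-power circle (power constraint)."""
    r = (p[0] ** 2 + p[1] ** 2) ** 0.5
    return (p[0] / r, p[1] / r)

def search_signal_set(m=8, iters=4000, step=0.1, seed=1):
    """Random-perturbation hill climbing: accept a perturbed point
    whenever it improves the minimum pairwise distance."""
    rng = random.Random(seed)
    pts = [normalize((rng.uniform(-1, 1), rng.uniform(-1, 1)))
           for _ in range(m)]
    best = error_proxy(pts)
    for _ in range(iters):
        i = rng.randrange(m)
        cand = list(pts)
        cand[i] = normalize((pts[i][0] + rng.gauss(0, step),
                             pts[i][1] + rng.gauss(0, step)))
        val = error_proxy(cand)
        if val > best:
            pts, best = cand, val
    return pts, best

pts, best = search_signal_set()
```

The actual work replaces this geometric proxy with flexible statistical noise models and an efficient optimization scheme; the sketch only conveys what "choosing a signal set" means as a constrained optimization.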
Among this year's accomplishments are the following.
Leslie Greengard (New York University)
Thomas Hagstrom (University of New Mexico)
Radiation and scattering of acoustic and electromagnetic waves are increasingly modeled using time-domain computational methods, due to their flexibility in handling wide-band signals, material inhomogeneities, and nonlinearities. For many applications, particularly those arising at NIST, the accuracy of the computed models is centrally important. Nevertheless, existing methods typically allow for only limited control over accuracy and cannot achieve high accuracy for reasonable computational cost.
Applications that require modeling of electromagnetic (and acoustic) wave propagation are extremely broad, ranging over device design, for antennas and waveguides, microcircuits and transducers, and low-observable aircraft; nondestructive testing, for turbines, jet engines, and railroad wheels; and imaging, in geophysics, medicine, and target identification. At NIST, applications include the modeling of antennas (including those on integrated circuits), waveguides (microwave and photonic), transducers, and in nondestructive testing.
The objective of this project is to advance the state of the art in electromagnetic computations by eliminating three existing weaknesses of time-domain algorithms for computational electromagnetics, in order to achieve: (1) accurate nonreflecting boundary conditions (which reduce an infinite physical domain to a finite computational domain), (2) suitable geometric representation of scattering objects, and (3) high-order convergent, stable spatial and temporal discretizations for realistic scatterer geometries. The project is developing software to verify the accuracy of new algorithms and is reporting these developments in publications and at professional conferences.
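The difficulty with nonreflecting boundary conditions is easiest to see by contrast with the one-dimensional case, where an exact local condition exists:

```latex
% In one dimension the wave operator factors into one-way operators,
u_{tt} - c^2 u_{xx}
  \;=\; \left(\partial_t - c\,\partial_x\right)
        \left(\partial_t + c\,\partial_x\right) u \;=\; 0,
% so imposing  u_t + c\,u_x = 0  at the right-hand boundary lets
% right-going waves leave the computational domain without reflection.
```

In two and three dimensions no such local condition is exact; the exact condition is nonlocal in time (a convolution with a boundary kernel), which is what motivates the rapid kernel evaluation methods developed in this project.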
Bradley Alpert and Andrew Dienstfrey.
Papers describing two mathematical and algorithmic advances appeared in peer-reviewed journals this year: "Rapid Evaluation of Nonreflecting Boundary Kernels for Time-Domain Wave Propagation," Alpert, Greengard, and Hagstrom, SIAM J. Numer. Anal. 37, 1138-1164 (2000), and "An Integral Evolution Formula for the Wave Equation," Alpert, Greengard, and Hagstrom, J. Comput. Phys. 162, 536-543 (2000). Software implementations in one and two spatial dimensions were developed to test the integral evolution formula and to demonstrate its successful prevention of the small-cell problem. The "small-cell problem," which may arise when complicated scatterer geometry leads to small cells in a spatial discretization, refers to the development of numerical instabilities with explicit time-marching schemes. Currently, work is underway on improved discretization techniques, with an emphasis on rapidly convergent approximations, particularly for representing geometry and wave solutions in two and three spatial dimensions. Attention is being devoted to extending quadrature and interpolation techniques for bandlimited functions.
The work of the project, though still relatively new, has already been rather widely noted, particularly among Defense Advanced Research Projects Agency (DARPA) contractors doing modeling with computational electromagnetics (CEM). It has influenced work on these problems at Boeing and HRL (formerly Hughes Research Laboratories). It has also influenced researchers at Yale University and the University of Illinois. In each of these cases, new research in time-domain CEM is exploiting discoveries of the project. In particular, some efforts for the new DARPA program on Virtual Electromagnetic Testrange (VET) are incorporating these developments. We expect that design tools for the microelectronics industry, which increasingly require high-quality electromagnetics modeling, will also follow.
Maria Nadal (NIST PL)
Gary Meyer (University of Oregon)
Harold Westlund (University of Oregon)
Michael Metzler (ISCIENCES Corporation)
For some years, computer programs have produced images of scenes based on a simulation of scattering and reflection of light off one or more surfaces in the scene. In response to increasing demand for the use of rendering in design and manufacturing, the models used in these programs have undergone intense development. In particular, more physically realistic models are sought (i.e., models that more accurately depict the physics of light scattering). However, there has been a lack of the relevant measurements needed to complement the modeling. As part of a NIST project entitled "Measurement Science for Optical Reflectance and Scattering", F. Hunt is coordinating the development of a computer rendering system that utilizes high-quality optical and surface topographical measurements performed here at NIST. The system will be used to render physically realistic and potentially photorealistic images. Success in this and similar efforts can pave the way to computer-based prediction and standards for appearance that can assure the quality and accuracy of products as they are designed, manufactured, and displayed for electronic marketing.
This year, following up on the work of producing photo-realistic images of coated black glass from NIST measurements the group completed last year, G. Meyer and H. Westlund developed software that calculates ASTM standards for gloss, haze, and distinctness of image for arbitrary reflection models. The models are then used to produce rendered images of black glass with specified ASTM values. In this case, their work provides a direct link between existing appearance standards and their computer graphic representations. Meyer and Westlund, along with F. Hunt and M. Metzler of ISCIENCES Corporation, continued work on the macro-appearance of gray metallic paint. Metzler developed a protocol for the optical measurement of metallic paint that could be used to create a reflectance model suitable for the rendering interface that Meyer and Westlund built last year. M. Nadal of the NIST Optical Technology Division carried out these measurements, and Meyer and Westlund produced images of the gray panels. Unlike the previous rendering done by G. W. Larson, these images depict two types of paint differing in the size of the flakes used in the formulation. The goal of this exercise was to determine whether rendered images could accurately depict differences in the macro-appearance of the panels; in particular, could "flop" be detected? Evaluation of the model developed by Metzler and a comparison procedure for the resulting images are underway.
Photo-realistic rendered images of coated black glass.
This past March, the competency project group organized and ran a workshop on "Metrology and Modeling of Color and Appearance". F. Hunt organized a special session on computer rendering. Speakers included researchers from SGI, IBM, DuPont, and the University of Oregon. Speaker presentations, including the work of Meyer and Westlund on ASTM standards, can be found at the project website.
As the competency project enters its final phase, reporting and publicizing the results of these efforts has become a top priority. The optical measurements need to be made available to the rendering community, which can use them to further this already rapidly advancing technology.
Computational biology is currently experiencing explosive growth in its technology and industrial applications. Mathematical and statistical methods dominated the early development of the field, but as the emphasis on high-throughput experiments and analysis of genetic data continues, computational techniques have also become essential. We seek to develop generic tools that can be used to analyze and classify protein and base sequence patterns that signal potential biological functionality. Many of the fundamental techniques in biotechnology are based on the answer to a fundamental question: are a pair or group of protein sequences related because they have similar biological function, because they are descended from a common ancestor sequence, or are they unrelated?
Currently, the answer is obtained by lining the sequences up so that the underlying similarity between the constituent amino acid residues is maximized. Dynamic programming is used to create such an arrangement, known as an alignment. Very fast algorithms exist for aligning two or more sequences if the possibility of gaps is ignored. Gaps are hypothesized insertions or deletions of amino acids that express mutations that have occurred over the course of evolution. The alignment of sequences with such gaps remains an enormous computational challenge. We are currently experimenting with an alternative approach based on Markov decision processes. The optimization problem associated with alignment then becomes a linear programming problem, and it is amenable to powerful and efficient solution techniques. Taking a database of protein sequences (myoglobin) as a test case, we have developed a method of using sequence statistics to build a Markov decision model, and the model is currently being used to solve the linear program for a variety of cost functions.
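For comparison with the Markov-decision-process formulation, the standard dynamic programming approach to pairwise alignment with gaps (Needleman-Wunsch) can be sketched as follows; the scoring values are arbitrary illustrative choices.

```python
def align_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score by dynamic programming.
    score[i][j] is the best score for aligning a[:i] with b[:j];
    a gap models an insertion or deletion (a hypothesized mutation)."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):            # align a[:i] against nothing
        score[i][0] = i * gap
    for j in range(1, cols):            # align b[:j] against nothing
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1]
                                          else mismatch)
            score[i][j] = max(diag,                 # substitute/match
                              score[i - 1][j] + gap,  # gap in b
                              score[i][j - 1] + gap)  # gap in a
    return score[-1][-1]
```

The quadratic table fill is what makes gapped alignment expensive for many long sequences; the linear programming reformulation above is aimed at bringing different, well-developed optimization machinery to bear on the same problem.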
Work has also continued on another project involving the program GenPatterns. The software computes and visually displays DNA or RNA subsequence frequencies and their recurrence patterns. Bacterial genomes and chromosome data can be downloaded from GenBank, and computations can be performed and displayed using a variety of user options, including creating Markov models of the data. A demonstration can be found at the project website.
In 1993 S. Prabhu observed that the number of occurrences of a given DNA subsequence closely approximates the number of occurrences of its inverted complement on the same strand. He investigated a number of completely sequenced genomes that were available at the time. We extended that survey to more than 30 completely sequenced genomes, and used GenPatterns to verify that Prabhu's symmetry rule extends to longer subsequences. Surprisingly, this rule also holds for Markov models based on the data. We also investigated several yeast chromosomes and found that parts of the DNA that directly code for mRNA (as opposed to coding on the complementary strand) deviate from this approximate symmetry. The figures show two examples of a visualization known as a DNA walk. The first visualizes the first 10,000 bases of a viral DNA according to the scheme in the box accompanying the figure. The second figure shows the last 10,000 bases. The similarity between the two figures is due to the fact that the end of the DNA sequence is the inverted complement of the beginning. This causes the DNA to assume a hairpin shape in nature, a common feature of this class of viruses.
GenPatterns and the software developed from the alignment project will be part of the NIST Bioinformatics/Computational Biology software website currently being constructed under the direction of T.N. Bhat of the Chemical Science and Technology Laboratory (CSTL).
GenPatterns: a DNA walk, with the beginning of the walk positioned at the origin. Each of the four letters represents a unit step in one of the four horizontal or vertical directions: T denotes "up", A "down", G "right", and C "left". The green point is the origin of the walk and the red point marks the end. The data comes from the first 10,000 bases of the DNA sequence for the vaccinia virus.
GenPatterns: a DNA walk as explained in the previous figure. The data comes from the last 10,000 bases of the DNA sequence for the vaccinia virus. The last 4900 bases of DNA are the inverted complement of the first 4900. Inverted complements and their frequencies were the subject of a study using GenPatterns.
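The DNA walk and the inverted-complement behavior described above are easy to reproduce. The sketch below follows the figures' convention (T up, A down, G right, C left) and shows why a sequence followed by its inverted complement returns the walk to its starting point, producing the hairpin geometry.

```python
# Unit steps for each base, matching the figure: T up, A down, G right, C left.
STEP = {"T": (0, 1), "A": (0, -1), "G": (1, 0), "C": (-1, 0)}

def dna_walk(seq):
    """Trace the walk; returns the visited (x, y) points, origin first."""
    x = y = 0
    path = [(0, 0)]
    for base in seq:
        dx, dy = STEP[base]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

def inverted_complement(seq):
    """Reverse the sequence and complement each base (A<->T, G<->C)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

# Because the step for each base is the negative of the step for its
# complement, a sequence followed by its inverted complement ends back
# at the origin, just as the viral DNA's hairpin ends do.
stem = "GATTACA"
walk = dna_walk(stem + inverted_complement(stem))
```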
Mathematical Software Group
F. W. J. Olver
Qiming Wang (NIST ITL)
Charles Clark (NIST PL)
NIST is well known for its collection and dissemination of standard reference data in physical sciences and engineering. From the 1930s through the 1960s, NBS also disseminated standard reference mathematics, typically tables of mathematical functions. The prime example is the NBS Handbook of Mathematical Functions, prepared under the editorship of Milton Abramowitz and Irene Stegun and published in 1964 by the U.S. Government Printing Office. The NBS Handbook is a technical best seller, and likely is the most frequently cited of all technical references. Total sales to date of the government edition exceed 150,000; further sales by commercial publishers are several times higher. Its daily sales rank on amazon.com consistently surpasses that of other well-known reference books in mathematics, such as Gradshteyn and Ryzhik's Table of Integrals. The number of citations reported by Science Citation Index continues to rise each year, not only in absolute terms but also in proportion to the total number of citations. Some of the citations are in pure and applied mathematics, but even more are in physics, engineering, chemistry, statistics, and other disciplines. The main users are practitioners, professors, researchers, and graduate students.
Except for correction of typographical and other errors, no changes have ever been made in the Handbook. This leaves much of the content unsuitable for modern usage, particularly the large tables of function values (over 50% of the pages), the low-precision rational approximations, and the numerical examples that were geared for hand computation. Also, numerous advances in the mathematics, computation, and application of special functions have been made or are in progress. We are engaged in a substantial project to transform this old classic radically. The Digital Library of Mathematical Functions is a complete rewriting and substantial update of the Handbook that will be published on the Internet for free public access. The Web site will include capabilities for searching, downloading, and visualization, as well as pointers to software and related resources. The contents of the Web site will also be made available on CD-ROM. A substantial subset will also be issued in printed form. A sample chapter, including examples of dynamic visualizations, may be viewed on the project Web site.
|Members of the DLMF team. Seated: Bonita Saunders, Frank Olver, and Daniel Lozier. Standing: Marjorie McClain, Bruce Fabijonas, Charles Clark, Ronald Boisvert, and Bruce Miller. Not pictured: Joyce Conlon, Qiming Wang, Abdou Youssef.|
Funded by the National Science Foundation and NIST, the DLMF Project is contracting with the best available world experts to rewrite all existing chapters, and to provide additional chapters to cover new functions (such as the Painlevé transcendents) and new methodologies (such as computer algebra). Four NIST editors (Lozier, Olver, Clark, and Boisvert) and an international board of ten associate editors are directing the project. The associate editors are
Major achievements since the beginning of FY 2000 are as follows:
MCSD continues to provide Web-based information resources to the computational science research community. The first of these is the Guide to Available Mathematical Software (GAMS). GAMS is a cross-index and virtual repository of some 9,000 mathematical and statistical software components of use in science and engineering research. It catalogs software, both public domain and commercial, that is supported for use on NIST central computers by ITL, as well as software assets distributed by netlib. While the principal purpose of GAMS is to provide NIST scientists with information on software available to them, the information and software it provides are of great interest to the public at large. GAMS users locate software via several search mechanisms. The most popular of these is the GAMS Problem Classification System. This system provides a tree-structured taxonomy of standard mathematical problems that can be solved by extant software. It has also been adopted for use by major math software library vendors.
A second resource provided by MCSD is the Matrix Market, a visual repository of matrix data used in the comparative study of algorithms and software for numerical linear algebra. The Matrix Market database contains more than 400 sparse matrices from a variety of applications, along with software to compute test matrices in various forms. A convenient system for searching for matrices with particular attributes is provided. The web page for each matrix provides background information, visualizations, and statistics on matrix properties. During FY2000 several sets of matrices were added to the Matrix Market. Waterloo Maple announced that support of input/output of matrices in Matrix Market format had been included in its Maple 6.0 product.
Web resources developed by MCSD continue to be among the most popular at NIST. The MCSD Web server at math.nist.gov has serviced more than 29 million Web hits since its inception in 1994 (9 million of which have occurred in the past year!). AltaVista has identified more than 7,400 external links to the Division server. The top seven ITL Web sites are all services offered by MCSD:
The NIST Guide to Available Mathematical Software (GAMS), a cross-index and virtual repository of mathematical software, is used more than 10,000 times each month. During a recent 36-month period, 34 prominent research-oriented companies in the .com domain registered more than 100 visits apiece to GAMS. The Matrix Market, a visual repository of matrix data used in the comparative study of algorithms and software for numerical linear algebra, sees more than 100 users each day. It has distributed more than 20 Gbytes of matrix data, including more than 80,000 matrices, since its inception.
Java, a network-aware programming language and environment developed by Sun Microsystems, has already made a huge impact on the computing industry. Recently there has been increased interest in the application of Java to high performance scientific computing. MCSD is participating in the Java Grande Forum (JGF), a consortium of companies, universities, and government labs that is working to assess the capabilities of Java in this domain, and to provide community feedback to Sun on steps that should be taken to make Java more suitable for large-scale computing. The JGF is made up of two working groups: the Numerics Working Group and the Concurrency and Applications Working Group. The former is co-chaired by R. Boisvert and R. Pozo of MCSD. Among the institutions participating in the Numerics Working Group are: IBM, Intel, Least Squares Software, NAG, Sun, Visual Numerics, Waterloo Maple, Florida State University, the University of Karlsruhe, the University of North Carolina, the University of Tennessee at Knoxville, and the University of Westminster.
Earlier recommendations of the Numerics Working Group were instrumental in the adoption of a fundamental change in the way floating-point numbers are processed in Java. This change will lead to significant speedups for Java code running on Intel microprocessors like the Pentium. All Java programs run in a portable environment called the Java Virtual Machine (JVM). The behavior of the JVM is carefully specified to ensure that Java codes produce the same results on all computing platforms. Unfortunately, emulating JVM floating-point operations on the Pentium leads to a four- to ten-fold performance penalty. The Working Group studied an earlier Sun proposal, producing a counter-proposal that was much simpler, more predictable, and that would eliminate the performance penalty on the Pentium. Sun decided to implement the key provision of the Numerics Working Group proposal in Java 1.2, which was released in the spring of 1999.
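The resulting semantics are visible in the language itself: since Java 1.2, default floating-point evaluation may use extra exponent range on hardware such as the x87, while the strictfp modifier requests strict, bit-for-bit reproducible IEEE 754 evaluation. The following sketch (class and method names are illustrative, not from any NIST code) shows the two modes side by side:

```java
// Illustrates the Java 1.2 floating-point relaxation discussed above.
// By default, intermediate results may use extended exponent range on
// some hardware; marking a method "strictfp" forces every intermediate
// to round as a true IEEE 754 double, guaranteeing identical results
// on every JVM.
public class FpModes {
    // Default mode: on extended-precision hardware the intermediate
    // x * big may be held with extra exponent range, avoiding overflow.
    public static double scaled(double x, double big) {
        return (x * big) / big;
    }

    // Strict mode: the intermediate x * big must round as an IEEE double,
    // so it can overflow to infinity even when the final result is in range.
    public static strictfp double scaledStrict(double x, double big) {
        return (x * big) / big;
    }

    public static void main(String[] args) {
        double x = 2.0, big = Double.MAX_VALUE;
        // On most modern hardware both modes overflow here; the point of
        // strictfp is that its result is *guaranteed* identical everywhere.
        System.out.println(scaled(x, big) + " vs " + scaledStrict(x, big));
    }
}
```

(On later JVMs, Java 17 and beyond, all floating-point evaluation is strict again and the keyword is redundant, but it still compiles.)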
The working group also advised Sun on the specification of elementary functions in Java, which led to improvements in Java 1.3, released in late 1999. The specification of the elementary functions was relaxed to tolerate errors of up to one unit in the last place, permitting more efficient implementations to be used. A parallel library, java.lang.StrictMath, was introduced to provide strictly reproducible results.
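The two libraries can be compared directly. In the sketch below, Math.sin may use a fast platform-specific implementation accurate to within one unit in the last place, while StrictMath.sin is specified to return the same bits on every JVM:

```java
// Compares the default Math class (permitted up to 1 ulp of error, so
// faster platform-specific implementations may be used) with
// java.lang.StrictMath, which is specified to reproduce a reference
// implementation bit-for-bit on every JVM.
public class ElemFun {
    public static void main(String[] args) {
        for (double x = 0.0; x <= 4.0; x += 0.5) {
            double fast = Math.sin(x);         // may differ by <= 1 ulp across JVMs
            double strict = StrictMath.sin(x); // identical on every JVM
            System.out.printf("sin(%.1f): Math=%.17g StrictMath=%.17g%n",
                              x, fast, strict);
        }
    }
}
```

On a given platform the two results agree to within a few units in the last place; the difference is that only the StrictMath value is reproducible everywhere.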
The Numerics Working Group has now begun work on a series of formal Java Specification Requests for language extensions, including a fast floating-point mode and a standardized class and syntax for complex arithmetic and multidimensional arrays.
This year, MCSD staff presented the findings of the Working Group in a variety of forums, including
The first, third, and fourth of these events were organized by Boisvert and Pozo. The Java Grande Conference in San Francisco occurred immediately before the huge JavaOne Conference. John Gage, Chief Scientist at Sun Microsystems, video-taped interviews with a number of Java visionaries for his Digital Journey series of Webcasts during each of these events. Ron Boisvert and Roldan Pozo each sat for extensive interviews about the future of Java for high performance scientific and engineering computing. The interviews are online at http://javaone.liveonline.net/. Other interviewees included Bill Joy and James Gosling of Sun, two of the original creators of Java.
Also this year, Roldan Pozo and Bruce Miller extended and revised an interactive numerical benchmark to measure the performance of scientific codes on Java Virtual Machines running on various platforms. The SciMark 2.0 benchmark includes computational kernels for FFTs, SOR, Monte Carlo integration, sparse matrix multiply, and dense LU factorization, comprising a representative set of computational styles commonly found in numeric applications. Several of SciMark's kernels have been adopted by the benchmarking effort of the Java Grande Forum. SciMark can be run interactively from Web browsers, or can be downloaded and compiled for stand-alone Java platforms. Full source code is provided. The SciMark 2.0 result is recorded as megaflop rates for the numerical kernels, as well as an aggregate score for the complete benchmark. The current database lists results for 1100 computational platforms, from laptops to high-end servers. As of January 2001, the record for SciMark 2.0 is 164 Mflops.
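To give a flavor of the kernels involved, the following is a minimal Monte Carlo integration kernel in the style of such a benchmark. This is an illustrative sketch, not the actual SciMark source (SciMark uses its own random number generator and timing harness):

```java
import java.util.Random;

// Illustrative Monte Carlo integration kernel in the style of a numerical
// benchmark: estimate pi by sampling points uniformly in the unit square
// and counting the fraction that fall inside the quarter circle.
public class MonteCarloKernel {
    public static double integrate(int numSamples, long seed) {
        Random r = new Random(seed);
        int underCurve = 0;
        for (int i = 0; i < numSamples; i++) {
            double x = r.nextDouble();
            double y = r.nextDouble();
            if (x * x + y * y <= 1.0) underCurve++;
        }
        return 4.0 * underCurve / numSamples;   // estimate of pi
    }

    public static void main(String[] args) {
        int n = 1000000;
        long t0 = System.nanoTime();
        double pi = integrate(n, 42L);
        double seconds = (System.nanoTime() - t0) / 1e9;
        // A benchmark would convert a count of floating-point operations
        // per iteration into a Mflops rate; here we just report throughput.
        System.out.printf("pi ~ %f (%.0f samples/s)%n", pi, n / seconds);
    }
}
```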
NIST continues to distribute the JAMA linear algebra class for Java that it developed in collaboration with the MathWorks several years ago. More than 8,000 copies of this software have been downloaded from the NIST web site.
Several years ago, NIST was among the first institutions looking into low-cost parallel computing using commodity parts and operating systems. We built several PC clusters using the Linux operating system and fast-ethernet and Myrinet networking technologies. We put real applications on these machines and studied performance/cost trade-offs. The goal was to demonstrate to industry that such configurations were practical computing solutions, not just of interest to the computer science research community. Today, Linux clusters are not only commonplace in industry, but have actually become the dominant product being offered by several hardware vendors.
MCSD currently has three 8-node rack-mounted Pentium-based clusters running Linux. These systems are typically used as "personal" supercomputers by individual staff members.
In conjunction with this effort, W. Mitchell has been investigating the best software environments for utilizing these architectures. He has evaluated nearly every Fortran 90/95 compiler available for Linux, conducting extensive benchmarking along with analyses of reliability, completeness, and cost.
More information on the JazzNet Linux cluster, applications, and available software is available at the project website.
Finite element methods using adaptive refinement and multigrid techniques have been shown to be very efficient for solving partial differential equations on sequential computers. Adaptive refinement reduces the number of grid points by concentrating the grid in the areas where the solution varies most rapidly, and multigrid methods solve the resulting linear systems in an optimal number of operations. W. Mitchell has been developing a code, PHAML, to apply these methods on parallel computers. The expertise and software developed in this project are useful for many NIST laboratory programs, including material design, semiconductor device simulation, and the quantum physics of matter.
This year saw three major activities on this project. The first is a collaboration with Sandia National Laboratories to develop Zoltan, a dynamic load balancing library. NIST's contributions to Zoltan are the implementation of a Fortran 90 interface to the library, and the implementation of the K-Way Refinement Tree (RTK) partitioning method that was developed as part of PHAML. Second is the beginning of a study to determine if the grid partitioning techniques used to improve performance on distributed memory parallel computers can be applied on a smaller scale to improve performance in cache-based memory hierarchies. Third is the application of PHAML to solve Schrödinger's equation in collaboration with the Quantum Processes group of NIST's Atomic Physics Division. Particular milestones for this year include the following.
|William Mitchell (left) and Anthony Kearsley (right) discuss grid partitioning.|
NIST is playing a leading role in the new standardization effort for the Basic Linear Algebra Subprograms (BLAS) kernels for computational linear algebra. The BLAS Technical Forum (BLAST) is coordinating this work. BLAST is an international consortium of industry, academia, and government institutions, including Intel, IBM, Sun, HP, Compaq/Digital, SGI/Cray, Lucent, Visual Numerics, and NAG.
One of the most anticipated components of the new BLAS standard is support for sparse matrix computations. R. Pozo chairs the Sparse BLAS subcommittee. NIST was the first to develop and release public-domain reference implementations for early versions of the standard, which helped shape the proposal that was released for public review last year.
We are now finalizing the process and preparing the document for publication. NIST is also designing and developing the C reference implementation and will make this code publicly available through its Web site in 2001.
Division staff members continue to contribute to the high quality scientific computing environment for NIST scientists and engineers via short-term consulting related to mathematics, algorithms and software, and by the support of software libraries on central systems. Division staff maintains a variety of research-grade public-domain math software libraries on the NIST central computers (SGI Origins, IBM SP2), as well as for NFS mounting by individual workstations. Among the libraries supported are the NIST Core Math Library (CMLIB), the SLATEC library, the NIST Standards Time Series and Regression Package (STARPAC), and LAPACK. These libraries, as well as a variety of commercial libraries implemented on NIST central systems, are cross-indexed by MCSD in the Guide to Available Mathematical Software, along with many other such resources, for ease of discovery by NIST staff.
This year, we continued our efforts to implement these libraries on the new NIST SGI Origin 2000 systems (in most cases many versions of the libraries are maintained to support a variety of compilation modes). We also migrated our implementations of CMLIB and LAPACK to ITL's centralized software checkout service to ease the process of mounting libraries on workstations distributed throughout the Institute.
NIST has a history of developing some of the most visible object-oriented linear algebra libraries, including Lapack++, Iterative Methods Library (IML++), Sparse Matrix Library (SparseLib++), Matrix/Vector Library (MV++), and most recently the Template Numerical Toolkit (TNT).
TNT incorporates many of the ideas we have explored with previous designs, and includes new techniques that were difficult to support before the ANSI C++ standardization. The library includes support for both C and Fortran array layouts, array sections, basic linear algebra algorithms (LU, Cholesky, QR, and eigenvalues) as well as primitive support for sparse matrices.
TNT has enjoyed several thousand downloads and is currently in use in several industrial applications. This year saw three major software updates to the TNT software package.
Optimization and Computational Geometry Group
Francis Sullivan (IDA/CCS)
A central issue in statistical physics is that of evaluating the partition function, which describes the probability distribution of states of a system of interest. In a number of important settings, including the Ising model, the q-state Potts model, and the monomer-dimer model, no closed-form expressions are known for three-dimensional cases and, in addition, obtaining exact solutions of the problems is known to be computationally intractable. At NIST, related problems occur in materials science theory, in combinatorial chemistry research, and in physics, where models for the Ising problem and the Bose-Einstein condensate can be formulated as dimer problems.
Many of these computations can be stated as combinatorial counting problems for which the Markov chain Monte Carlo (MCMC) method can be formulated to give approximate solutions. In some cases, the Markov chain is rapidly mixing so that, in principle at least, a pre-specified arbitrary accuracy can be obtained in only a polynomial number of steps. In practice, however, the wall-clock computing time is thought to be prohibitively long, and hardware bounds on precision can limit accuracy for native-mode arithmetic.
A class of probabilistic importance sampling methods has been developed for these problems that appears to be much more effective than the standard MCMC technique for attacking these problems. The key to the approach is the use of non-linear iterative methods, such as Sinkhorn balancing, to construct an extremely accurate importance function. Because importance sampling gives unbiased estimates, the importance function can be adjusted so that the simulation spends more time in the more complex regions of state space.
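The unbiasedness property underlying this approach can be illustrated on a toy counting problem. In the sketch below (an illustration, not the project's code), states are drawn from a deliberately non-uniform importance distribution q, and the average of the weights 1/q(x) still estimates the number of states correctly; a better importance function only reduces the variance:

```java
import java.util.Random;

// Minimal sketch of unbiased importance sampling for counting: if states
// x are drawn from an importance distribution q(x) > 0, the average of
// 1/q(x) is an unbiased estimate of the number of states. A well-chosen
// importance function (e.g. one built via Sinkhorn balancing) reduces
// the variance; even a rough one is correct in expectation.
public class CountByImportance {
    // "Count" the integers 0..n-1 using a non-uniform sampler with
    // q(x) proportional to x + 1.
    public static double estimateCount(int n, int samples, long seed) {
        Random r = new Random(seed);
        double norm = n * (n + 1) / 2.0;   // sum of (x+1) for x = 0..n-1
        double total = 0.0;
        for (int i = 0; i < samples; i++) {
            // Inverse-CDF sample: pick x with probability (x+1)/norm.
            double u = r.nextDouble() * norm;
            int x = (int) Math.floor((-1.0 + Math.sqrt(1.0 + 8.0 * u)) / 2.0);
            if (x >= n) x = n - 1;         // guard against rounding at the edge
            double q = (x + 1) / norm;
            total += 1.0 / q;              // importance weight
        }
        return total / samples;            // expectation is exactly n
    }

    public static void main(String[] args) {
        System.out.println(estimateCount(100, 200000, 7L)); // close to 100
    }
}
```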
We have used these techniques to obtain accurate solutions for both the 3-d dimer covering problem and the more general monomer-dimer problem. In addition, an importance sampling formulation for the 3-d Ising model has been constructed.
The MCMC method is itself an importance sampling technique. An examination of the relation between it and the new approach leads to an interesting way for collapsing the MCMC random walk to a much smaller number of states having a limit probability distribution that can be readily estimated to high accuracy.
This year, we conducted a thorough test of our version of the MCMC method and compared it with results claimed in the literature and our own importance sampling technique. As a first step, we formulated a rigorous version of our notion of "collapsing the random walk" in MCMC calculations. The resulting method, properly called aggregation, resulted in a dramatic speedup of the standard method. Three results were found:
Based on these experiments, we are now focusing our attention on importance sampling for the 3-d Ising model. This model is equivalent to an independent vertices problem and is therefore known to be NP-complete.
The construction of the importance function is crucial to our approach. We are using Sinkhorn balancing for this and the main computational expense is in computing the balanced matrix. This requires finding the "support" of the matrix and then iterating to get the balanced matrix. We have found that Sinkhorn balancing can be expressed as a non-linear optimization problem whose objective function is related to that used in linear programming via interior-point methods. The trick is to think of balancing as a max-flow problem.
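The balancing iteration itself is simple to state: alternately normalize the rows and the columns of a nonnegative matrix until all row and column sums equal one. The sketch below (illustrative code, not the project's implementation, which works via the optimization formulation described above) shows the basic alternating iteration:

```java
// Sketch of Sinkhorn balancing: alternately scale the rows and columns
// of a nonnegative matrix until all row and column sums are 1 (a doubly
// stochastic matrix). For a matrix with total support the iteration
// converges; the balanced matrix is the basis for the importance function.
public class Sinkhorn {
    public static double[][] balance(double[][] a, int iters) {
        int n = a.length;
        double[][] b = new double[n][];
        for (int i = 0; i < n; i++) b[i] = a[i].clone();
        for (int t = 0; t < iters; t++) {
            for (int i = 0; i < n; i++) {            // row normalization
                double s = 0;
                for (int j = 0; j < n; j++) s += b[i][j];
                for (int j = 0; j < n; j++) b[i][j] /= s;
            }
            for (int j = 0; j < n; j++) {            // column normalization
                double s = 0;
                for (int i = 0; i < n; i++) s += b[i][j];
                for (int i = 0; i < n; i++) b[i][j] /= s;
            }
        }
        return b;
    }

    public static void main(String[] args) {
        double[][] b = balance(new double[][] {{1, 2}, {3, 4}}, 100);
        // After convergence, every row and column sums to (nearly) 1.
        System.out.println((b[0][0] + b[0][1]) + " " + (b[0][0] + b[1][0]));
    }
}
```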
During the past decade, laser-scanning technology has developed into a major vehicle for widespread applications such as cartography, bathymetry, urban planning, object detection, and dredge volume determination, to name a few.
BFRL is actively investigating the use of that technology for monitoring construction sites. Here laser scans taken from several vantage points are used to construct a surface representing a particular scene. Since Spring 1999, MCSD has been collaborating in this effort by providing prototype algorithms and software based on the general concept of a Triangulated Irregular Network (TIN). The main purpose of that methodology is to construct surfaces from "data clouds". First-year efforts, directed mainly towards demonstrations, were documented in NISTIR 6457, "NIST Construction Automation Program Report No. 4: Non-Intrusive Scanning Technology for Construction Status Determination", Jan. 2000, with coauthors C. Witzgall and J. Bernal.
MCSD has several years' experience with developing TIN surfaces for both military and civilian applications, mostly in collaboration with the U.S. Army Topographic Engineering Center (TEC). This collaboration continued in FY00 with research on data editing leading up to presentations for the "Digital Terrain Thinning Workshop" at the CADD-GIS 2000 Conference in St. Louis, May 24, 2000.
|Christoph Witzgall, Gerry Cheok (BFRL), William Stone (BFRL), and Javier Bernal prepare for lidar measurement of gravel pile to be used as input to MCSD surface model.|
The BFRL effort continued with the collection of surface scans at an actual NIST construction site, and the resulting TIN surfaces were visualized and compared. "Field Demonstration of Laser Scanning for Excavation Measurements", coauthored with BFRL, appeared in Proceedings of ISARC 2000. "A Lidar-based Status Evaluation of Construction Sites' Earth Moving Operations", was accepted by Automation in Construction.
In a separate thrust, BFRL initiated an investigation into the general problem of assessing the accuracy of the instrumentation as well as the ensuing surface construction. SED collaboration was secured to analyze instrument performance. An approach has been identified for propagating instrument noise to derived quantities such as volume. One goal is to eventually assess the utility of Bayesian techniques. MCSD has been asked to implement that approach by extending its TIN software accordingly. BFRL has also scanned a rectangular 3 x 4 x 5 foot box to provide a means of comparison with known dimensions. Scan-derived volumes have been found to agree to within 1%.
A common challenge in these endeavors has been the need to "register" scans of the same scene, that is, to align them in a common coordinate system. The main tool for such registration has been direct measurement of the locations of vantage points, followed by manual alignment of graphic displays of data clouds. MCSD is experimenting with numerical alignment. The major difficulties were found to be caused by extreme differences in data density, noise, and measurement artifacts arising from the instrumentation. MCSD has therefore upgraded the "data cleaning" software developed in the course of its collaboration with TEC.
Emphasis in FY01 will be on improving mesh generation and registration methods as well as quantifying uncertainties as part of surface generation software.
Computational geometry and image analysis techniques have been applied to photographic images of polymer dewetting under various conditions in order to model the evolution of these materials. This work is in collaboration with the MSEL Polymers Division which has massive amounts of data as a result of combinatorial experimentation and which is in great need of automatic techniques for analysis. Methods and software have been devised to automatically evaluate areas of wetness and dryness for their geometric properties such as deviation of holes from perfect circularity and distribution of hole centers. We have computed Voronoi diagrams of the initial hole centers and we investigate their use as a predictor of later de-wetting behavior.
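One standard way to score the circularity of a hole, offered here as an illustration (the project's actual metric is not specified in this report), is the isoperimetric quotient 4*pi*A / P^2 of its boundary, which equals 1 for a perfect circle and is smaller for every other shape:

```java
// Sketch of a circularity metric for a polygonal hole boundary: the
// isoperimetric quotient 4*pi*A / P^2, where A is the enclosed area and
// P the perimeter. It is 1 for a circle and decreases as the shape
// deviates from circularity. (Illustrative; not the project's code.)
public class Circularity {
    // Shoelace formula for the area of a simple polygon.
    public static double area(double[] xs, double[] ys) {
        double a = 0;
        int n = xs.length;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;
            a += xs[i] * ys[j] - xs[j] * ys[i];
        }
        return Math.abs(a) / 2.0;
    }

    public static double perimeter(double[] xs, double[] ys) {
        double p = 0;
        int n = xs.length;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;
            p += Math.hypot(xs[j] - xs[i], ys[j] - ys[i]);
        }
        return p;
    }

    public static double isoperimetric(double[] xs, double[] ys) {
        double p = perimeter(xs, ys);
        return 4.0 * Math.PI * area(xs, ys) / (p * p);
    }

    public static void main(String[] args) {
        // A regular 64-gon is nearly circular: quotient just under 1.
        int n = 64;
        double[] xs = new double[n], ys = new double[n];
        for (int i = 0; i < n; i++) {
            xs[i] = Math.cos(2 * Math.PI * i / n);
            ys[i] = Math.sin(2 * Math.PI * i / n);
        }
        System.out.println(isoperimetric(xs, ys));
    }
}
```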
|Enhanced image from early state of dewetting of polystyrene film on silicon substrate.|
In dewetting, the samples progress through various states and we need to determine automatically which state an image represents. To this end, we plan to apply statistical techniques developed recently by D. Naiman and C. Priebe at Johns Hopkins for analyzing medical images. They do this with Monte Carlo methods based on importance sampling for estimating the probabilities of being in various states using many normal images. Their method is a brilliant combination of importance sampling and the Bayesian approach.
John Horst (NIST MEL)
John Lavery (Army Research Office)
Methods for gathering terrain data have proliferated during the past decade in both the military and commercial sectors. The rapid development of laser scanning techniques and their application to cartography, bathymetry, urban planning, construction site monitoring, just to name a few, has resulted in a strong push for next generation computational tools for terrain representation.
The value of using smooth surfaces for the representation of terrain has long been recognized. However, previously available smooth-surface techniques such as polynomial and rational splines, radial basis functions, and wavelets require too much data, too much computing time, too much human interaction, and/or do not preserve shape well. Conventional smooth splines have been the main candidate for an alternative to triangulated irregular networks (TINs) because of their relative computational simplicity. However, conventional smooth splines are plagued by extraneous, nonphysical oscillation.
Recently (1996-2000), J. Lavery of the Army Research Office (ARO) has developed and tested a new class of L1 splines (published in the journal Computer Aided Geometric Design). L1 splines provide smooth, shape-preserving, piecewise-polynomial fitting of arbitrary data, including data with abrupt changes in magnitude and spacing and are calculated by efficient interior-point algorithms (extensions of Karmarkar's algorithm).
Under the supervision of J. Lavery of the ARO, NIST has been sponsored by the Army Model Improvement Program (AMIP) of the Army Modeling and Simulation Office to carry out the first steps in evaluating the accuracy and data-compression capabilities of L1 splines. The goal of the AMIP project is to demonstrate that, on simple grids with uniform spacing, L1 splines provide more accurate and compact representation of terrain than do conventional splines and piecewise planar surfaces. The results of this work are to be published in two conference proceedings (Lavery, J.E., Gilsinn, D.E., 'Multiresolution Representation of Terrain By Cubic L1 Splines', Trends in Approximation Theory, Vanderbilt University Press, and Lavery, J.E., Gilsinn, D.E., 'Multiresolution Representation of Urban Terrain by L1 Splines, L2 Splines and Piecewise Planar Surfaces', Proc. 22nd Army Science Conference, 11-13 December 2000, Baltimore, MD). These papers demonstrated the superiority of the interpolative ability of L1 splines over conventional L2 splines. The superiority of L1 splines over piecewise planar interpolation depended on the measure of closeness.
The current project to be carried out as a Defense Model and Simulation Office (DMSO) initiative will implement L1 splines on uniform and nonuniform quadtree and triangulated grids and assess the performance of L1 splines on these grids by comparison with conventional procedures. L1 splines on quadtree meshes will be based on rectangular C1 finite elements, such as Sibson elements. L1 splines on triangulated grids will be based on triangular C1 finite elements such as Clough-Tocher elements. Comparisons of the performance of L1 splines vs. that of piecewise planar surfaces and of conventional smooth splines will be carried out on sets of open terrain data, such as Ft. Hood DTED data, which include irregularly curved surfaces, steep hillsides and cliffs as well as flat areas (plateaus or bodies of water), and urban terrain data, such as data for downtown Baltimore. The metrics for the comparison will be 1) amount of storage required for meshes and spline parameters; 2) accuracy of the representation as measured by rms error and maximum error. L1 splines will be compared with conventional techniques not only for fitting terrain data that has been "rectified" to regular grids (a standard, but error-rich step in current modeling systems) but also for fitting irregularly spaced "raw" terrain data.
|Surface model of downtown Baltimore based on scanned elevation data and L1 splines.|
The codes that will be produced by this effort will be research codes that will be made publicly available and will also be used as the basis for future plug-ins to the Army's major geometric modeling system BRL-CAD, and Navy systems for representation of littoral areas and open ocean. The algorithms and codes developed under this project will be transitioned to the following organizations: 1) National Imagery and Mapping Agency (NIMA), Reston, VA and NIMA contractors. 2) Army Research Laboratory (ARL), Adelphi, MD and Georgia Institute of Technology, an ARL contractor. 3) Naval Research Laboratory (NRL), Washington, DC and Naval Oceanographic Office (NOO), Biloxi, Mississippi. NIMA and ARL are interested in L1 splines for data-compressed representation of a wide variety of natural and urban terrain. NRL and NOO are interested in L1 splines for representation of bathymetry, especially bathymetry in littoral areas.
Scientific Applications and Visualization Group
The Message Passing Interface (MPI) is the de facto standard for writing parallel scientific applications in the message passing programming paradigm. MPI suffers from two limitations: lack of interoperability among vendor MPI implementations and lack of fault tolerance. For long-term viability, MPI needs both. The Interoperable MPI protocol (IMPI) standard addresses the interoperability issue. It extends the power of MPI by allowing applications to run on heterogeneous clusters of machines with various architectures and operating systems, each of which in turn can be a parallel machine, while allowing the program to use a different implementation of MPI on each machine. This is accomplished without requiring any modifications to the existing MPI specification. That is, IMPI does not add, remove, or modify the semantics of any of the existing MPI routines. All current valid MPI programs can be run in this way without any changes to their source code.
NIST, at the request of computer vendors, facilitated the interoperable MPI standard, and built a conformance tester. The IMPI standard was adopted in March 2000; the conformance tester was completed at the same time. The conformance tester is a web-based system that sets up a parallel virtual machine between NIST and the testers. The conformance test suite contains over a hundred tests and exercises all parts of the IMPI protocol. Results are returned via a web page. The IMPI standard was published in the May-June 2000 issue of the NIST Journal of Research.
The first IMPI-enabled version of LAM, version 6.4-a3, was recently released by the Laboratory for Scientific Computing at the University of Notre Dame. They also released a portable version of the IMPI server (impiexec-server) for use by all vendors. A demo of IMPI took place at the Supercomputing 2000 Conference, Nov. 4-10, 2000, in Dallas. Notre Dame, Hewlett-Packard, and MPI Software Technology participated. Commercial releases are scheduled for March 2001 by Hewlett Packard and MPI Software Technology. A Phase I SBIR in the amount of $75,000 was awarded to MPI Software Technology to develop an algorithm tuner for IMPI software.
Nicos Martys (NIST BFRL)
The flow of fluids in complex geometries plays an important role in many environmental and technological processes such as oil recovery, the spread of hazardous wastes in soils, and the service life of building materials. In collaboration with N. Martys of BFRL, we developed a parallel lattice Boltzmann (LB) algorithm to simulate fluid flow through complex pore geometries. This model accommodates multiple fluid interactions and accounts for wetting effects at the fluid-solid interface.
Because of the high resolution and large size of the porous media being modeled, we developed a memory-efficient indirect addressing scheme for storing the required physical data only at the active lattice points. Because of the large computational requirements, we parallelized the code using the Message Passing Interface (MPI) within a simple single program multiple data (SPMD) model. The parallelization was accomplished with only minimal changes to the serial code. The parallel version of the code showed excellent speedups and was ported without change to each of NIST's parallel computing platforms.
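The indirect-addressing idea can be sketched as follows (class and field names are illustrative, not from the actual LB code): rather than allocating dense arrays over the full lattice, one keeps an index map from lattice coordinates to a compact list of active (pore) sites, so memory scales with the pore volume rather than the bounding box:

```java
// Sketch of indirect addressing for a lattice with many solid (inactive)
// sites: a full-lattice index map points into packed per-site data that
// is allocated only for active sites. (Illustrative; a real lattice
// Boltzmann code stores several velocity distributions per site.)
public class ActiveLattice {
    public final int nx, ny, nz;
    public final int[] siteIndex;   // -1 for solid, else index into packed data
    public final double[] density;  // one value per active site
    public int numActive = 0;

    public ActiveLattice(boolean[] solid, int nx, int ny, int nz) {
        this.nx = nx; this.ny = ny; this.nz = nz;
        siteIndex = new int[nx * ny * nz];
        for (int k = 0; k < siteIndex.length; k++)
            siteIndex[k] = solid[k] ? -1 : numActive++;
        density = new double[numActive];  // memory scales with pore volume only
    }

    // Map lattice coordinates to the packed index (-1 if the site is solid).
    public int index(int x, int y, int z) {
        return siteIndex[(z * ny + y) * nx + x];
    }
}
```

For a sandstone sample whose pore space is a small fraction of the volume, this packing reduces storage proportionally, at the cost of one extra array lookup per site access.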
We modeled the permeabilities of several microtomography-based images of Fontainebleau sandstone media. Agreement with experimental results verified the correctness and utility of our parallel implementation of the LB method. Because of the large size of the sandstone images, these simulations would not have been possible without parallelizing the algorithm. We ran over 70 production runs on a variety of samples, including a time-series of X-ray microtomographs of fractured concrete. Tests were performed to investigate the effect of resolution on calculated permeability - a secondary use of the code.
Nicos Martys (NIST BFRL)
Understanding the flow properties of complex fluids like suspensions (e.g., colloids, ceramic slurries and concrete) is of technological importance and presents a significant theoretical challenge. The computational modeling of such systems is also a challenge because it is difficult to track boundaries between different fluid/fluid and fluid/solid phases. We use a new computational method called dissipative particle dynamics (DPD) which has several advantages over traditional computational dynamics methods while naturally accommodating boundary conditions. In DPD, the interparticle interactions are chosen to allow for much larger time steps, so that physical behavior on time scales many orders of magnitude greater than that possible with molecular dynamics may be studied. Our algorithm (QDPD) is a modification of DPD which uses a velocity Verlet algorithm to update the positions of both the free particles and the solid inclusion. In addition, the rigid body motion is determined from a quaternion-based scheme (hence the Q in QDPD).
QDPD has been parallelized with both a replicated-data algorithm and a spatial decomposition algorithm. The replicated-data approach has worked well for small to medium sized problems (tens of thousands of particles) on shared-memory SGIs. We have found speedups of as much as 17.5 times on 24 processors of an SGI Origin 2000. Utilizing three such systems, we were able to get a year's worth of conventional computing done in a week. Among the computed results is the calculation and subsequent visualization of a sheared suspension of solid inclusions.
|Visualization of particles in model of cement.|
The spatial decomposition version of QDPD was developed for distributed-memory systems like the IBM SP2/SP3. We decompose the total volume into P volumes, where P is the number of processors. We assign particles to processors, and then augment each processor with neighboring particles (an extended volume) so that each processor has the particles it needs for force calculations. We loop over the particles in the original volume, calculating forces on them and on their pair particles (for conservation of momentum). Care must be taken to add these forces on particles in the extended volume to the forces on the processor "owning" them. Finally, we calculate the new positions of all particles and move the particles that have left their processor to their new home processors. A novel feature of this work is that we explicitly do not keep all particles belonging to the same ellipsoid on the same processor. Each processor computes rigid-body properties for each particle it "owns", and these properties are globally summed over all processors so that all processors have all solid-inclusion properties. Since there are only a small number of solid inclusions (relative to the number of particles), the amount of communication necessary for the global sums is small, and the amount of extra memory is also relatively small. Hence it is an effective technique.
Current results show a speedup of 22.5 on 27 Power3 processors (200 MHz) of an IBM SP3 distributed-memory system. The same technique is also very effective in a shared-memory environment, where the speedups are a factor of 29 on 32 processors of an SGI Origin 3000 system and a factor of 50 on 64 processors.
While various quantitative tests are used to help validate our algorithms, visualization plays an important role in the testing and validation of codes. One visual check of the DPD code was a time series of the motion of a single ellipsoidal inclusion subject to shear. The rotation of the single ellipsoid was clearly visible, a well-known phenomenon seen in experiments known as Jeffery orbits. In contrast, we found that when several ellipsoidal inclusions were added to the system, the Jeffery orbits were suppressed and the ellipsoids had a tendency to align, as their relative motion was partly stabilized by mutual hydrodynamic interactions.
We created stereo versions of the output of the concrete simulations, viewed both end-on and from the point of view of a single particle.
Virtual Reality Modeling Language (VRML) has been used to distribute animations of the results from this computation. An example of using VRML to animate the results from a computational model of the flow of suspensions can be found on the web.
Charles Bouldin (NIST MSEL)
X-ray absorption spectroscopy (XAS) uses energy-dependent modulations of photoelectron scattering to determine local atomic structure. X-ray absorption calculations in the near-edge structure (XANES) in the 0-70 eV range are time-consuming but amenable to parallel processing. To implement parallel processing of XANES, we started from the single-processor version of the computer code Feff, developed in portable Fortran 77 by C. Bouldin of MSEL. Feff (for effective potential F_eff) does real-space calculations of x-ray absorption. X-ray absorption at a given x-ray energy is independent of the absorption at other energies. We use this physical parallelism to make simultaneous calculations of the XANES at different energies on multiple processors of a cluster, and then assemble the results from the individual processors to produce the full XANES spectrum. The parallelization is done using the Message Passing Interface (MPI) library for maximum portability. We have run the parallel Feff code (FeffMPI) on Linux, Windows NT, IBM AIX, and SGI systems with no changes to the code. FeffMPI can run on any parallel processing cluster that supports MPI, and these systems can use distributed or shared memory, or even a mixture of the two.
To evaluate the parallel algorithm, we conducted tests on six systems. As representative single-processor systems, we ran benchmarks on a 450 MHz AMD K6-3 running SuSE Linux 6.1 and an Apple PowerMac G4 running at 450 MHz. We then ran FeffMPI on four MPI clusters: (1) a cluster of 16 Pentium II 333 MHz systems running Red Hat Linux, connected via 100 megabit Ethernet, (2) a similar cluster of Pentium III 400 MHz machines running Windows NT, connected by 100 megabit Ethernet, (3) a cluster of SGI machines, and (4) an IBM SP2/3 using up to 32 processors. The fastest times were turned in by 32 IBM SP3 processors: 25 times faster than the PowerMac G4 and 40 times faster than the single-processor Linux system. We found that processing speed could be predicted, as a function of cluster size, by the simple scaling law T = a (0.03 + 0.97/N), where T is the runtime in seconds, a is a scaling factor that accounts for the performance of a given processor/compiler combination, and N is the number of processors in the cluster. As cluster size increases, the part of the code that runs in parallel shrinks from the dominant part of the runtime to an irrelevant fraction of the total. In the limit of large cluster sizes, runtime is dominated by the 3% of the original code that still executes sequentially. For such large clusters we expect no further increase in speed, and communication overhead can even increase the runtime. However, on the largest clusters we had available, we did not observe any saturation of the scaling due to communication overhead.
James Warren (NIST MSEL)
Snowflake-like structures known as dendrites develop within metal alloys during casting. A better understanding of dendritic growth during solidification will help guide the design of new alloys and of the casting processes used to produce them. MCSD mathematicians (e.g., G. McFadden, B. Murray, D. Anderson, R. Braun) have worked with MSEL scientists (e.g., W. Boettinger, R. Sekerka) for some time to develop phase-field models of dendritic growth. Such diffuse-interface approaches are much more computationally attractive than traditional sharp-interface models. Computations in two dimensions are now routinely accomplished. Extending this to three dimensions presents scaling problems for both the computations and the subsequent rendering of the results for visualization. This is due to the O(n⁴) execution time of the algorithm as well as the O(n³) space requirements for the field parameters. Rendering the output of the three-dimensional simulation also stresses the available software and hardware when the simulations extend over finite-difference grids of size 1000×1000×1000.
We have developed a parallel 3D dendritic growth simulator that runs efficiently on both distributed-memory and shared-memory machines. This simulator can also run efficiently on heterogeneous clusters of machines due to the dynamic load-balancing support provided by our MPI-based C-DParLib library. This library simplifies the coding of data-parallel style algorithms in C by managing the distribution of arrays and providing many common operations on arrays, such as shifting, elemental operations, reductions, and the exchange of array slices between neighboring processing nodes. Test runs on our current systems indicate that we will soon be able to complete a 1000³ simulation in three to four days.
The output from the simulator consists of 40 snapshots, each a pair of files containing the phase field and the relative solute concentration at each grid point at a specific time step. At smaller grid sizes, below 300³, we use commonly available visualization software to process these snapshot files into color images and animations with appropriate lighting and shading added. For larger grid sizes, we have developed a visualization procedure that converts the 3D grid data into a polygonal data set that can take advantage of hardware acceleration. Using standard SGI software, OpenGL Performer, we can easily display this polygonal representation. The semi-transparent colors allow a certain amount of internal structure to be revealed, and their additive effects produce an isosurface approximation. A series of polygonal representations from the simulator snapshots is cycled, producing a 3D animation of dendrite growth that can be viewed interactively. Most currently available immersive virtual reality (IVR) systems are based on OpenGL Performer; thus, using this format immediately allows the dendrite growth animation to be placed in an IVR environment for enhanced insight. A 3D image of one of our dendrites appeared on the cover of the May-June 2000 issue of the NIST Journal of Research.
Data structures are an integral part of effective scientific computing. Yet message-passing systems for parallel and distributed computation, such as the Message Passing Interface (MPI), do not provide the capability to send and receive dynamic data structures as part of their standard. AutoMap and AutoLink were developed to simplify the creation and use of data types, including dynamic data types, with MPI. AutoMap is a source-to-source compiler that reads C code directly and creates MPI datatypes from user-marked data structures. These structures may contain pointers. AutoMap may be run directly from the AutoMap web page. AutoLink is an MPI library that enables sending and receiving of dynamic data structures, such as lists and graphs, via simple calls to the library. Both synchronous and asynchronous sends and receives are available. The latest versions of AutoMap and AutoLink were released on August 28, 2000. Both are available on the web for download.
Charles Clark (NIST PL)
David Feder (NIST PL)
A Bose-Einstein condensate (BEC) is a state of matter that exists at extremely low temperatures. Researchers at NIST are studying BECs of alkali atoms confined within magnetic traps. Under investigation is the evolution of the BEC wave function when the trapped BEC is subjected to rotation. Upon rotation, quantized vortices may form within the BEC. These vortices are of interest because of their theoretical implications for the characteristics of BECs, such as superfluidity.
Scientists in the NIST Physics Lab have performed numerical simulations of the BEC wave function to determine if quantized vortices exist. Scientific visualization is used to analyze the large amount of data produced by these simulations. In the case of BECs, the goal of visualization is to identify and isolate possible vortex structures within a three-dimensional volume. The result of the visualization process is a sequence of images that forms a 3D stereoscopic animation.
|Visualization of solitons in Bose-Einstein condensate on cover of December 2000 Optics and Photonics News.|
In this study, the BEC images did indeed show the presence of quantized vortices, as well as an unanticipated vortex array structure. The images were the first three-dimensional visualization of vortex structures in a rotating BEC. Images of the vortex structures we produced appear on the cover of the December 1999 issue of Physics Today, on the cover of the August 2000 issue of Parity (Japanese), and on the opening page of an article in the December 2000 issue of Scientific American.
Additionally, images of solitons within a BEC were produced. These soliton images revealed a decay process known as a snake instability. The discovery of this decay process prompted a great deal of further simulation. Experimentalists at JILA then generated these snake instabilities in the laboratory, confirming all of the predictions from the simulations. An image of a soliton within a BEC appeared on the cover of the December 2000 issue of Optics and Photonics News, the magazine of the Optical Society of America.
For their work in BEC visualization, P. Ketcham, S. Satterfield, T. Griffin, W. George, and J. Devaney received a NIST 2000 Bronze Medal award.
We are developing a system for the automatic design and implementation of algorithms using parallel genetic programming. The user specifies the problem to be solved and provides the building blocks; the system determines an algorithm that fits the building blocks together into a solution to the specified problem. Instead of the simple operator/operand trees that are conventional in genetic programming, we have chosen to represent our programs in a form that is closer to the programming model used in standard procedural programming languages like C or Fortran. The basic program component is a routine that has a formal argument list with input and output arguments; this routine calls other routines using its own formal arguments as actual arguments in these calls to subordinate routines. Within each routine, local variables may be introduced to hold temporary results. At the bottom of the calling structure are routines that implement the basic operations such as addition and subtraction, as well as other problem-specific functions. The variables (arguments and local variables) may be scalar or array. This is in contrast to the typical genetic programming approach in which all data items are scalars.
A number of standard problems have been used to test the system, including Boolean, symbolic regression, and finite-state machine problems. We have given, and have been invited to give, several presentations on this work, which we expect will lead to future collaborations.
MCSD staff members make contact with a wide variety of organizations in the course of their work. Examples of these follow.
Advanced Network Consultants
Alabama Cryogenic Engineering
Altra Energy Technologies
Crime Investigation Dept., Home Ofc. (UK)
Digital Equipment Corp.
Dow Research and Development Group
Hughes Aircraft Co.
IBM Thomas J. Watson Research Laboratory
MPI Software Technology, Inc.
N.A. Software, Ltd.
National Radio Astronomy Observatory
New Age Media Systems
Sailfish Systems, Ltd.
Sanders, A Lockheed-Martin Co.
Silicon Graphics, Inc.
Spectra-Tech Inc.
VX Optronics Corp.
|Government / Non-profit Organizations|
Air Force Research Laboratory (NM)
Argonne National Laboratory
Asia Technology Information Program (ATIP)
Association for Women in Mathematics (AWM)
CIRES/NOAA Aeronomy Lab (Boulder)
Defense Advanced Research Projects Agency
Department of Energy
Naval Research Laboratory (NRL)
Oak Ridge National Laboratory
Ohio Supercomputer Center
Sandia National Laboratories
U.S. Army Corps of Engineers
Carnegie Mellon University
Case Western Reserve University
Chalmers University of Technology (Sweden)
Florida State University
George Mason University
George Washington University
Institute for Mathematics and Its Applications (IMA)
Johns Hopkins Medical School
New Mexico State University
New Mexico Tech
New York University
Penn State University
Rensselaer Polytechnic Institute
Southampton University (UK)
University of Alabama
University of Brest (France)
University of Brussels
Université d'Aix-Marseille III
University of Chicago
University of Colorado
University of Delaware
University of Houston
University of Iowa
University of Karlsruhe
University of Kent (UK)
University of Loughborough
University of Maryland
University of New Mexico
University of North Carolina
University of Notre Dame
University of Oregon
University of Pennsylvania
University of Pittsburgh
University of Texas
University of Wisconsin
Virginia Polytechnic University
MCSD staff members engage in a variety of activities designed to promote careers in mathematics and computational science. Among these are the following.
MCSD receives a variety of funding to supplement the base STRS allocation obtained from the NIST Information Technology Laboratory. Funding for fiscal year 2000 included the following. (For joint funding, the amount shown is MCSD's portion.)
MCSD consists of full-time permanent staff located at NIST laboratories in both Gaithersburg, MD and Boulder, CO. This is supplemented with a variety of faculty appointments, guest researchers, postdoctoral appointments, and student appointments. The following list reflects the status at the end of FY2000.
Legend: F = Faculty Appointee, GR = Guest Researcher, PD = Postdoctoral Appointee, S = Student, PT = Part Time
|ACM||Association for Computing Machinery|
|AMS||Applied Mathematics Series|
|ATP||NIST Advanced Technology Program|
|BLAS||Basic Linear Algebra Subprograms|
|BFRL||NIST Building and Fire Research Laboratory|
|CSTL||NIST Chemical Sciences and Technology Laboratory|
|DARPA||Defense Advanced Research Projects Agency|
|DLMF||Digital Library of Mathematical Functions|
|EEEL||NIST Electronics and Electrical Engineering Laboratory|
|GAMS||Guide to Available Mathematical Software|
|ITL||NIST Information Technology Laboratory|
|MAA||Mathematical Association of America|
|MCSD||ITL Mathematical and Computational Sciences Division|
|MEL||NIST Manufacturing Engineering Laboratory|
|MSEL||NIST Materials Science and Engineering Laboratory|
|NRC||National Research Council|
|NSF||National Science Foundation|
|NIST||National Institute of Standards and Technology|
|OOMMF||Object-Oriented Micromagnetic Modeling Framework|
|OOF||Object-Oriented Finite Elements (for materials microstructure)|
|PHAML||Parallel Hierarchical Adaptive MultiLevel software|
|PL||NIST Physics Laboratory|
|SAVG||MCSD Scientific Applications and Visualization Group|
|SIAM||Society for Industrial and Applied Mathematics|
|SIMA||MEL Systems Integration for Manufacturing Applications|
|SRDP||TS Standard Reference Data Program|
|TIN||Triangulated Irregular Network|
|TNT||Template Numerical Toolkit|
|TS||NIST Technology Services|
|VRML||Virtual Reality Modeling Language|