Mathematical and Computational Sciences Division

Summary of Activities for Fiscal Year 2001

 

 

Collage of images related to MCSD work.

 

 

Information Technology Laboratory
National Institute of Standards and Technology
Technology Administration
U. S. Department of Commerce

January 2002

 

 

 



 

Abstract

This report summarizes the technical work of the Mathematical and Computational Sciences Division of NIST's Information Technology Laboratory. Included are details of technical projects, as well as information on publications, technical talks, and other professional activities in which the Division's staff has participated.

 

 

For further information, contact Ronald F. Boisvert, Mail Stop 8910, NIST, Gaithersburg, MD 20899-8910, phone 301-975-3812, email boisvert@nist.gov, or see the Division’s web site at http://math.nist.gov/mcsd/.

 

 

Thanks to Robin Bickel for collecting and organizing the information contained in this report.

 

 



 

Table of Contents

 

Part I - Overview
  1.1. Introduction
  1.2. Overview of Technical Areas
       Applied Mathematics
       Mathematical Software
       High Performance Computing and Visualization
       Digital Library of Mathematical Functions
       Quantum Information
  1.3. Technical Highlights
       Image Analysis
       Discrete Mathematics
       Virtual Cement and Concrete Testing Laboratory
       Parallelization of Feff
       Awards
       Technology Transfer
       Professional Activities
       Mathematics in NIST History
  1.4. Strategic Planning
  1.5. Administrative Highlights
       Staff News
       Student Employment Program

Part II - Projects
  2.1. Applied Mathematics
       The APEX Method in Image Sharpening
       Blind Deconvolution of Scanning Electron Microscope Imagery
       Image Analysis for Combinatorial Experimentation
       Mathematical Problems in Construction Metrology
       Representation of Terrain and Images by L1 Splines
       Computer Graphic Rendering of Material Surfaces
       Monte Carlo Methods for Combinatorial Counting Problems
       Time-Domain Algorithms for Computational Electromagnetics
       Micromagnetic Modeling
       OOF: Finite Element Analysis of Material Microstructures
       Mathematical Modeling of Solidification
       Numerical Simulation of Axisymmetric Dendritic Crystals
       Machining Process Metrology, Modeling and Simulation
       Modeling and Computational Techniques for Bioinformatics
  2.2. Mathematical Software
       Sparse BLAS Standardization
       TNT: Object Oriented Numerical Programming
       Parallel Adaptive Refinement and Multigrid Finite Element Methods
       Java Numerics
       Information Services for Computational Science
  2.3. High Performance Computing and Visualization
       Interoperable MPI Standard
       Parallel Computation of Ground State of Neutral Helium
       Parallelization of Feff X-ray Absorption Code
       Modeling and Visualization of Dendritic Growth in Metallic Alloys
       Parallel Genetic Programming
       Immersive Visualization
       Linewidth Standards for Nanometer-level Semiconductor Metrology
       Theory of Nano-structures and Nano-optics
       Cement and Concrete Projects
         Computational Modeling of the Flow of Cement
         Parallelization, Visualization of Fluid Flow in Complex Geometries
         Parallelization of a Model of the Elastic Properties of Cement
         The Visible Cement Dataset
  2.4. Special Projects
       Digital Library of Mathematical Functions
       Quantum Information

Part III: Activity Data
  3.1. Publications
       Appeared
       Technical Reports
       Accepted
       Submitted
       In Process
       Visualizations Published
  3.2. Presentations
       Invited Talks
       Conference Presentations
       Visualizations Produced
  3.3. Conferences, Minisymposia, Lecture Series, Short-courses
       MCSD Seminar Series
       DLMF Seminar Series
       Scientific Object Oriented Programming Users Group (SCOOP)
       Local Events Organized
       External Event Organization
       Other Participation
  3.4. Software Released
  3.5. External Contacts
  3.6. Other Professional Activities
       Internal
       External
       Outreach

Part IV - Staff



Part I - Overview

 


Charge density on a computed diffusion-limited cluster aggregate.

 

 

 

 


1.1.        Introduction

 

The mission of the Mathematical and Computational Sciences Division (MCSD) is stated as follows.

 

Provide technical leadership within NIST in modern analytical and computational methods for solving scientific problems of interest to U.S. industry. The division focuses on the development and analysis of theoretical descriptions of phenomena (mathematical modeling), the design of requisite computational methods and experiments, the transformation of methods into efficient numerical algorithms for high-performance computers, the implementation of these methods in high-quality mathematical software, and the distribution of software to NIST and industry partners.

 

Within the scope of our charter, we have set the following general goals.

 

o       Ensure that sound mathematical and computational methods are applied to NIST problems.

o       Improve the environment for computational science and engineering in the research community at large.

 

With these goals in mind, we have developed a technical program in five major areas.

  1. Applied Mathematics
  2. Mathematical Software
  3. High Performance Computing and Visualization
  4. Digital Library of Mathematical Functions
  5. Quantum Information

The first and third areas are accomplished primarily via collaborations with other technical units of NIST, supported by mathematical research in key areas.  Projects in the second area are typically motivated by internal NIST needs, but have products, such as software, which are widely distributed. This work is also often done in conjunction with external forums whose goals are to promulgate standards and best practices.  The fourth and fifth areas represent large special projects.  These are being done in collaboration with other ITL Divisions, as well as with the NIST Physics and Electronics and Electrical Engineering Laboratories.  Each of these areas is described in further detail below.

 

Our customers span all of the NIST Laboratories, as well as the computational science community at large. We have developed a variety of strategies to increase our effectiveness in dealing with such a wide customer base. We take advantage of leverage provided via close collaborations with other NIST units, other government agencies, and industrial organizations. We develop tools with the highest potential impact, and make online resources easily available. We provide routine consulting, as well as educational and training opportunities for NIST staff. We maintain a state-of-the-art visualization laboratory. Finally, we select areas for direct external participation that are fundamental and broadly based, especially those where measurement and standards can play an essential role in the development of new products.

 

Division staff maintain expertise in a wide variety of mathematical domains, including linear algebra, special functions, partial differential equations, computational geometry, Monte Carlo methods, optimization, inverse problems, and nonlinear dynamics. We also provide expertise in parallel computing, visualization, and a variety of software tools for scientific computing. Application areas in which we have been actively involved this year include atomic physics, materials science, fluid mechanics, electromagnetics, manufacturing engineering, construction engineering, wireless communications, bioinformatics, image analysis, and computer graphics.

 

In addition to our direct collaborations and consulting, the output of Division work includes publications in refereed journals and conference proceedings, technical reports, lectures, short courses, software packages, and Web services. MCSD staff members also participate in a variety of professional activities, such as refereeing manuscripts and proposals, serving on editorial boards and conference committees, and holding offices in professional societies. Staff members are also active in educational and outreach programs for mathematics and computer science students at all levels.

 

 

1.2.        Overview of Technical Areas

 

In this section we provide additional background on each of the technical thrust areas, including their impetus, general goals, and expected long-term outcomes.  The identification of these areas was part of a NIST-wide effort to identify and document its programs of work.  Details on the technical work that has been undertaken in each of these areas can be found in Part II.

 

Applied Mathematics

 

Impetus.  As computing resources become more plentiful, there is increased emphasis on answering scientific questions by "putting problems on the computer".  Formulating the right questions, translating them into tractable computations, and analyzing the resulting output are all mathematics-intensive operations.  It is rare for a bench scientist to be expert both in the primary subject area and in the often deep and subtle mathematical questions that such problems engender.  Thus, NIST needs a sustained cadre of professional mathematicians who can bring their expertise to bear on the wide variety of mathematical problems found at NIST.  Often, the mathematics resulting from NIST problems is widely applicable outside NIST as well, and hence there is added benefit.

 

Activities.  MCSD mathematicians engage in consulting and long-term collaboration with NIST scientists and their external customers.  They also work to develop requisite mathematical technologies, including mathematical models, methods and software.  The following are examples of such activities.

 

o       Mathematical modeling of solidification processes

o       Monte Carlo methods for combinatorial counting problems

o       Terrain modeling

o       Micromagnetic modeling

o       Modeling of complex material microstructures

o       Modeling of high-speed machining processes

o       Development and analysis of image sharpening methods

o       Computer graphic rendering of material surfaces

o       Computational techniques in bioinformatics

o       Mathematical problems in construction metrology

 

Expected Outcomes.  Improved mathematical techniques and computational procedures will lead to more effective use of mathematical and computational modeling at NIST.  Areas such as materials science, high-speed machining, and construction technology will see immediate improvements in methodology.  Distribution of related methodology and tools (including computer software) will allow these benefits to accrue to the scientific community at large.  Examples of the latter include (1) more widespread study of material science problems and the development of new technologies characterized by complex material microstructure, and (2) improvement in the accuracy and reliability of micromagnetic modeling software.

 

Mathematical Software

 

Impetus.  Mathematical modeling in the sciences, engineering, and finance inevitably leads to computation.  The core of such computations is typically a series of well-defined, recurring mathematical problems, such as the solution of a differential equation, the solution of a linear system, or the computation of a transform.  Much mathematical research has focused on how to solve such problems efficiently.  The most effective means of passing on this expertise to potential customers is by encapsulating it in reusable software components.  Since much work at NIST relies on such computations, NIST has a natural interest in seeing that such components are developed, tested, and made available.  The computational science community outside of NIST has similar needs.  Programming methodologies and tools for developing efficient and reliable mathematical modeling codes in general, and for developing and testing reusable mathematical software components in particular, are also of interest.

 

Activities. MCSD staff members develop mathematical algorithms and software in response to current and anticipated NIST needs. They are also involved in the development of standards for mathematical software tools, and in the widespread dissemination of research software, tools, testing artifacts, and related information to the computational science community at large. The following are examples of such activities.

 

o       Numerical computing in Java

o       The Sparse BLAS

o       Parallel adaptive multigrid methods

o       Template Numerical Toolkit

o       Guide to Available Mathematical Software

o       The Matrix Market

 

Expected Outcomes.  Improved access to general-purpose mathematical software will facilitate the rapid development of science and engineering applications. In addition, the availability of community standards and testing tools will lead to improved portability, performance, and reliability of science and engineering applications.

 

 

Steven Satterfield visualizes cement data using the Rave immersive visualization system. (Reprinted with the permission of Government Computer News, Copyright © Post Newsweek Tech Media Group. All rights reserved.)

 

High Performance Computing and Visualization

 

Impetus.  The most demanding mathematical modeling and data analysis applications at NIST require resources that far exceed those routinely found on the scientist's desktop.  In order to effect such computations in a reasonable amount of time, one must often resort to parallel computers.  The effective use of parallel computers requires that computational algorithms be redesigned, often in a very fundamental way.  Effecting these changes, and debugging the resulting code, requires expertise and a facility with specialized software tools that most working scientists do not possess. Hence, it is necessary to support the use of such facilities with specialized expertise in these areas.  Similarly, the use of sophisticated visualization equipment and techniques is necessary to adequately digest the massive amount of data that these high performance computer simulations can produce.  It is not easy to become facile with the use of such tools, and hence specialized expertise in their use must also be provided.

 

Activities.  MCSD staff members collaborate with NIST scientists on the application of parallel computing to mathematical models of physical systems.  In addition, they collaborate with NIST scientists on the application of advanced scientific visualization and data mining techniques.  They develop and maintain supporting hardware and software tools, including a fully functional visualization laboratory.  MCSD staff members also provide consulting in the use of applications software provided by the NIST central computing facility.  The following are examples of activities in this area.

 

o       Parallelization of Feff x-ray absorption code

o       Parallel computation of the ground state of neutral helium

o       Parallel genetic programming

o       Parallel computing and visualization of the flow of suspensions

o       Modeling and visualization of dendritic growth

o       Visible cement database

o       Immersive visualization

 

Expected Outcomes.  Working closely with NIST scientists to improve the computational performance of their models will lead to higher fidelity simulations, and more efficient use of NIST central computing resources.  New scientific discovery will be enabled through the insight provided by visualization and data mining. Finally, widespread dissemination of supporting techniques and tools will improve the environment for high performance computing and visualization at large.

 

Digital Library of Mathematical Functions

 

Impetus.  The special functions of applied mathematics are extremely useful tools in mathematical and computational modeling in a very wide variety of fields.  The effective use of these tools requires access to a convenient source of information on the mathematical properties of these functions such as series expansions, asymptotics, integral representations, relations to other functions, methods of computation, etc. For more than 35 years the NBS Handbook of Mathematical Functions (AMS 55) has served this purpose. However, this book is now woefully out of date. Many new properties of these functions are known, many new scientific applications of them have come into use, and current computational methods are completely different than those of the 1950s. Finally, today there are new and more effective means of presenting the information: online, Web-based, highly interactive, and visual.

 

Activities.  The purpose of this project is to develop a freely available, online, interactive resource for information on the special functions of applied mathematics.  With the help of some 40 outside technical experts, we are surveying the technical literature, extracting the essential properties of interest in applications, and packaging this information in the form of a reference compendium.  To support the presentation of such data on the Web, we are developing mathematics-aware search tools, indices, thesauri, and interactive Web-based visualizations.

 

Expected Outcomes.  Widespread access to state-of-the-art data on the special functions will improve mathematical modeling in many areas of science, statistics, engineering, and finance. The DLMF will encourage standardization of notations and normalizations for the special functions. Users of the special functions will have an authoritative reference to cite the functions they are using, providing traceability to NIST for standardized mathematical objects.

 

 

Students Brianna Blaser (Carnegie Mellon) and Elaine Kim (Stanford) work with Bonita Saunders on graphics for the Digital Library of Mathematical Functions.

 

 

Quantum Information

 

Impetus.  Quantum information networks have the potential of providing the only known provably secure physical channel for the transfer of information.  The technology has only been demonstrated in laboratory settings, and a solid measurement and standards infrastructure is needed to move this into the technology development arena. Quantum computers have potential for speeding up previously intractable computations.  ITL has been asked to support the work in the NIST Physics and Electronics and Electrical Engineering Laboratories to develop quantum processors and memory, concentrating on the critical areas of error correction, secure protocols, algorithm and tool development, programming, and information theory.

 

Activities.  This project is an ITL-wide effort with participants in six Divisions.  We are working to develop a quantum communications test bed facility for the DARPA QuIST program as part of a larger effort to develop a measurement and standards infrastructure to support quantum communications. We are further supporting the NIST Quantum Information program through collaborative research with the NIST Physics Laboratory related to quantum information theory.  Within MCSD we are working on issues related to the use of quantum entanglement for long-distance communication, the modeling of neutral atom traps as quantum processors, and the development and analysis of quantum algorithms.

 

Expected Outcomes.  We expect that the development of an open, measurement-focused test bed facility will allow a better understanding of the practical commercial potential for secure quantum communication, and serve the development of standardized network protocols for this new communications technology.  By working closely with staff members of the NIST Physics Laboratory, who are working to develop quantum processors, we expect that early processor designs will be more capable and useable.

 

1.3.        Technical Highlights

 

In this section we will highlight some of the technical accomplishments of the Division for FY2001.  Further details can be found in Part II.

 

Image Analysis

 

Scientific and engineering data is increasingly being generated in the form of images.  Images produced at NIST come from a wide variety of sources, from scanning electron microscopes to laser radar.  Applications range from combinatorial chemistry to building construction.  Image analysis has blossomed into a significant area of applied mathematics research in recent years, and new fundamental mathematical technologies for it continue to be developed.  MCSD staff members are working on a variety of projects in collaboration with the NIST Laboratories in which image analysis plays a vital role.  Examples of these follow.

 

Blind Direct Deconvolution.  Scanning electron microscopes (SEMs) are basic research tools in many of NIST's programs in nanotechnology. A major concern in scanning electron microscopy is the loss of resolution due to image blurring caused by electron beam point spread. The shape of that beam can change over time, and is usually not known to the microscopist. Real-time blind deconvolution of SEM imagery, if achievable, would significantly extend the capability of electron microprobe instrumentation. Blind deconvolution is a very difficult problem in which ill conditioning is compounded with non-uniqueness. Most known approaches to that problem are iterative in nature.  Such processes are typically quite slow, can develop stagnation points, or diverge altogether. Alfred Carasso of MCSD has developed reliable direct (non-iterative) methods, in which the fast Fourier transform is used to solve appropriately regularized versions of the underlying ill-posed parabolic differential equation problem associated with the blur. When the point-spread function (psf) is known, Carasso's SECB method can deblur 512x512 images in about 1 second of CPU time on current desktop platforms. Carasso has recently developed two new direct blind deconvolution techniques based upon SECB. These methods detect the signature of the psf from appropriate 1-D Fourier analysis of the blurred image. The detected psf is then input into the SECB method to obtain the deblurred image. When applicable, these blind methods can deblur 512x512 images in less than a minute of CPU time, which makes them highly attractive in real-time applications.  Carasso has been applying this method with great success to images obtained from NIST SEMs.  The methods are applicable in a wide variety of imaging modalities in addition to SEM imaging.
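
To make the non-blind step concrete, the sketch below shows generic FFT-based deconvolution with a known point spread function, using simple Tikhonov-style regularization to control the ill conditioning. It is only an illustration of the general idea, not Carasso's SECB method; the Gaussian psf, the regularization parameter alpha, and the synthetic test image are assumptions introduced for the example.

import numpy as np

def deblur_known_psf(blurred, psf, alpha=1e-3):
    """Deblur an image with a known, centered, same-size PSF via a
    regularized inverse filter in the Fourier domain (illustrative sketch)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF spectrum (PSF centered in array)
    G = np.fft.fft2(blurred)
    # Tikhonov-regularized inversion: F = conj(H) G / (|H|^2 + alpha).
    F = np.conj(H) * G / (np.abs(H) ** 2 + alpha)
    return np.real(np.fft.ifft2(F))

# Hypothetical usage with a synthetic 512x512 image and a Gaussian PSF.
if __name__ == "__main__":
    n = 512
    rng = np.random.default_rng(0)
    image = rng.random((n, n))
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    psf = np.exp(-(x ** 2 + y ** 2) / (2 * 3.0 ** 2))
    psf /= psf.sum()
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    restored = deblur_known_psf(blurred, psf, alpha=1e-3)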

 

 

Alfred Carasso has developed a unique, highly efficient method for blind deconvolution of images. This is currently being used in several applications of electron microscopy at NIST. The method is more widely applicable, as indicated by the enhancement of the Whirlpool Galaxy (M51) image shown in the photo.

 

 

Feature Extraction, Classification.  In applications like combinatorial chemistry, large sets of such images are generated which must be processed automatically to identify information of interest.  Isabel Beichl has been working with the NIST Polymers Division to automatically detect areas of wetness and dryness in images of polymer dewetting processes, and to generate summary statistics related to the geometry of each image.  Another need is to automatically classify the state of the dewetting process that each image represents.  An algorithm of Naiman and Priebe based upon importance sampling and Bayesian statistics is being adapted for this purpose.  In a separate effort, Barbara am Ende is working with the Semiconductor Electronics Division to develop techniques for automatically detecting and counting lattice planes between sidewalls in High Resolution Transmission Electron Microscopy (HRTEM) images. This capability is a key step in the development of precision linewidth standards for nanometer-level semiconductor metrology.

 

Micrographs to Computational Models.  Image analysis is the first step in the processing done by the popular OOF software for analyzing materials with complex microstructure.  Developed by MCSD's Stephen Langer in association with staff of the NIST Materials Science and Engineering Laboratory, OOF begins with a micrograph of a real material with multiple phases, grain boundaries, holes, and cracks, identifies all the parts, and then generates a finite element mesh consistent with the complicated geometry.  Materials scientists can then use the result to perform virtual tests on the material, such as raising its temperature and pulling on it.  The resulting stresses and strains can then be displayed.  OOF has become a popular tool in the materials science community, and has won internal and external awards.  This year Langer worked with Robert Jin, a talented intern from Montgomery Blair High School, to develop a technique for automatically detecting grain boundaries in micrographs. The algorithm is based upon a modified Gabor wavelet filter and edge linking.  This will be incorporated into OOF2, now under development.  OOF2 will include a variety of new capabilities and will be easier to extend.
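
To give a flavor of the oriented filtering involved, the sketch below builds a small bank of Gabor filters at several orientations and records the strongest response at each pixel. It is a textbook Gabor filter bank, not the modified Gabor wavelet filter or the edge-linking stage developed for OOF2, and the kernel size, wavelength, and other parameters are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def oriented_response(image, n_orientations=8, wavelength=8.0, sigma=4.0):
    """Maximum Gabor response over orientations; large values suggest
    strongly oriented texture such as grain contrast or lattice fringes."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kern = gabor_kernel(31, wavelength, theta, sigma)
        responses.append(np.abs(fftconvolve(image, kern, mode="same")))
    return np.max(responses, axis=0)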

 

LADAR and 3D Imaging.  Laser radar (LADAR) systems provide a relatively inexpensive method for terrain mapping.  Such systems can optically scan a given scene, providing distance and intensity readings as a function of scanning angle.  In principle, such data can be used to construct a geometrical model of the scanned scene.  In practice this remains a very difficult process.  The data is voluminous, noisy, and full of unnatural artifacts.  It is also one-sided, providing only the view as seen from a particular vantage point.  Hence, to develop a true three-dimensional model, scans from multiple sources must be registered and the data fused.  Christoph Witzgall has been working with staff of the NIST Building and Fire Research Laboratory to develop three-dimensional models of construction sites.  With such a model, as-built conditions could be automatically assessed, current construction processes could be viewed, planned sequences of processes could be tested, and object information could be retrieved on demand.  Witzgall has developed techniques for cleaning and registering LADAR data, and for extracting a triangulated irregular network model from it.  These techniques have been tested on applications such as determining volumes of excavated earth.  In a related effort, David Gilsinn is studying the use of LADAR to read object-identifying bar codes on remote objects.  The reflectance data is noisy and defocused, and Gilsinn is developing deconvolution techniques to reconstruct bar codes from the LADAR data.  This is challenging since the LADAR signal is not a single beam, but rather a collection of multiple sub-beams.  Some progress has been made using averaging filters.  A more accurate model for the convolution kernel is being developed.
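
As a rough illustration of the triangulated irregular network (TIN) idea, the sketch below triangulates scattered (x, y) LADAR returns and sums triangular prisms above a reference plane to estimate a volume, much as one might for excavated earth. It is a schematic of the concept only, not Witzgall's algorithms; the point cloud and the reference elevation are assumed inputs.

import numpy as np
from scipy.spatial import Delaunay

def tin_volume(points_xyz, z_ref=0.0):
    """Estimate the volume between a TIN surface and the horizontal
    reference plane z = z_ref (schematic sketch only)."""
    xy = points_xyz[:, :2]
    z = points_xyz[:, 2]
    tri = Delaunay(xy)                  # triangulated irregular network
    volume = 0.0
    for simplex in tri.simplices:
        a, b, c = xy[simplex]
        # Triangle area from the 2D cross product of two edge vectors.
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) -
                         (b[1] - a[1]) * (c[0] - a[0]))
        mean_height = np.mean(z[simplex]) - z_ref
        volume += area * mean_height    # prism with average height
    return volume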

 

Discrete Mathematics

 

Mathematical problems with discrete components are increasing in frequency at NIST, turning up in applications from nanotechnology to network analysis.  MCSD staff members have become involved in a variety of these efforts, and are developing some of the basic technologies to tackle such problems efficiently.  Some examples follow.

 

Combinatorial counting problems.  Combinatorial problems arise in a wide variety of applications, from nanotechnology to computer network analysis.  Fundamental models in these fields are often based on quantities that are extremely difficult (i.e., exponentially hard) to compute.  We have devised methods to compute such quantities approximately (with known error bars) using Monte Carlo methods.  Traditional Monte Carlo methods can be slow to converge, but we have made progress in significantly speeding up these computations using importance sampling.  In the past few years Isabel Beichl and colleagues have made progress in evaluating the partition function for describing the probability distribution of states of a system. In a number of settings, including the Ising model, the q-state Potts model, and the monomer-dimer model, no closed-form expressions are known for three-dimensional cases and obtaining exact solutions of the problems is known to be computationally intractable.  We have developed a class of probabilistic importance sampling methods for these problems that appears to be much more effective than the standard Markov Chain Monte Carlo technique.  We have used these techniques to obtain accurate solutions for both the 3D dimer covering problem and the more general monomer-dimer problem. An importance sampling formulation for the 3D Ising model has also been constructed.  This year, new Monte Carlo/importance sampling techniques and software have been developed to estimate the number of independent sets in a graph. A graph is a set of vertices with a set of connections between some of the vertices. An independent set is a subset of the vertices, no two of which are connected. The problem of counting independent sets arises in data communications, in thermodynamics, and in graph theory itself.  For example, it is closely related to issues of reliability of computer networks.  Physicists have used estimates of the number of independent sets to estimate the hard sphere entropy constant.  This constant is known analytically in 2D, but no analytical result is known in 3D.  Beichl, along with Dianne O'Leary and Francis Sullivan, has been able to use this approach to estimate the constant for a 3D cubic lattice. They are now working on the case of an FCC lattice.
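
A minimal sketch of the sequential importance sampling idea appears below: vertices are visited in order, a vertex with an already-chosen neighbor is forced to stay out, every free vertex contributes a factor of two, and the average of these products over many random trials is an unbiased estimate of the number of independent sets. This Knuth-style estimator is a simplified stand-in for the methods described above, and the adjacency-list representation and trial count are assumptions of the example.

import random

def estimate_independent_sets(adj, trials=10000):
    """Unbiased importance-sampling estimate of the number of independent
    sets of a graph given as an adjacency list {v: set(neighbors)}."""
    vertices = list(adj)
    total = 0.0
    for _ in range(trials):
        chosen = set()
        weight = 1.0
        for v in vertices:
            if adj[v] & chosen:
                continue               # forced: v cannot be included
            weight *= 2.0              # two admissible choices for v
            if random.random() < 0.5:
                chosen.add(v)
        total += weight                # product of branching factors
    return total / trials

# Hypothetical usage: a 4-cycle has exactly 7 independent sets.
if __name__ == "__main__":
    cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(estimate_independent_sets(cycle4))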

 

Bioinformatics.  Computational biology is currently experiencing explosive growth in its technology and industrial applications. Mathematical and statistical methods dominated the development of the field, but as the emphasis on high-throughput experiments and analysis of genetic data continues, computational techniques have also become essential.  We are working to understand the mathematical issues in dealing with large biological datasets with the aim of developing expertise that can be applied to future NIST problems.  In the process, we are developing techniques and tools of widespread interest.  One of these is GenPatterns.  Fern Hunt, along with former guest researcher Antti Pessonen and student Daniel Cardy, developed this program to compute and graphically display DNA or RNA subsequence frequencies and their recurrence patterns, as well as to create Markov models of the data. GenPatterns is now a part of the NIST Bioinformatics/Computational Biology software website currently being constructed by the NIST Chemical Science and Technology Laboratory.  More recently we have turned our attention to the problem of aligning protein sequences with gaps.  Database searches of protein sequences are based on algorithms that find the best matches to a query sequence, returning both the matches and the query in a linear arrangement that maximizes underlying similarity between the constituent amino acid residues. Very fast algorithms based on dynamic programming exist for aligning two or more sequences if the possibility of gaps is ignored. Gaps are hypothesized insertions or deletions of amino acids that express mutations that have occurred over the course of evolution. The alignment of sequences with such gaps remains an enormous computational challenge. Fern Hunt and Anthony Kearsley are currently working with Honghui Wan of NIH to develop an alternative approach based on Markov decision processes. The optimization problem then becomes a linear programming problem, which is amenable to powerful and efficient solution techniques. We are creating software for multiple sequence alignment based on these ideas.
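
The two basic computations mentioned above, subsequence frequencies and Markov models of a sequence, can be sketched in a few lines. The code below is a generic illustration that assumes the sequence is a simple string over the alphabet ACGT; it is not the GenPatterns implementation.

from collections import Counter

BASES = "ACGT"

def kmer_frequencies(seq, k):
    """Count the occurrences of each length-k subsequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def markov_transition_matrix(seq):
    """First-order Markov model: P(next base | current base)."""
    pair_counts = kmer_frequencies(seq, 2)
    matrix = {}
    for a in BASES:
        row_total = sum(pair_counts[a + b] for b in BASES)
        matrix[a] = {b: pair_counts[a + b] / row_total if row_total else 0.0
                     for b in BASES}
    return matrix

# Hypothetical usage with a short made-up sequence.
if __name__ == "__main__":
    seq = "ACGTACGTAGGCTAACG"
    print(kmer_frequencies(seq, 3).most_common(3))
    print(markov_transition_matrix(seq))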

 

Quantum algorithms.  We have recently begun a project in the area of quantum information science.  We are collaborating with other ITL Divisions and the NIST Physics Laboratory in the development and analysis of quantum-based systems for communication and computation.  One component of this is the study of algorithms for quantum computers.  The principal advances in this field thus far have been Shor's algorithm for factoring and Grover's algorithm for searching an unordered set, each of which exhibits significant speedups that are thought not to be possible on classical computers.  A new postdoctoral appointee, David Song, is working with Isabel Beichl and Francis Sullivan on quantum algorithms for determining whether a finite function over the integers is one-to-one.  They are constructing a quantum algorithm for this problem which they hope to show has a complexity of O(SQRT(n)) steps.  Classical algorithms require n steps to do this computation.  The proposed algorithm uses phase symmetry, Grover's search algorithm, and results about the p-th complex roots of unity for a prime p.
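
For readers unfamiliar with Grover's algorithm, the statevector simulation below shows where the square-root behavior comes from: roughly (pi/4)*sqrt(N) alternations of an oracle and a diffusion operator concentrate nearly all of the amplitude on the marked item. This is a standard textbook simulation, not the algorithm being developed by Song, Beichl, and Sullivan; the qubit count and marked index are arbitrary.

import numpy as np

def grover_success_probability(n_qubits, marked):
    """Simulate Grover search on N = 2**n_qubits items with one marked index."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1.0                     # oracle: flip phase of marked item
        mean = state.mean()
        state = 2 * mean - state                  # diffusion: inversion about the mean
    return state[marked] ** 2                     # probability of measuring the marked item

# Hypothetical usage: 10 qubits (N = 1024) needs only about 25 iterations.
if __name__ == "__main__":
    print(grover_success_probability(10, marked=123))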

 

Virtual Cement and Concrete Testing Laboratory

 

Concrete is an essential ingredient of the national civil engineering infrastructure.  Some 6,100 companies support this infrastructure, with a gross annual product of $35 billion when delivered to a work site, and over $100 billion when in place in a building. In recent years there has been a growing recognition of the great potential for improving the performance of cement and concrete products with the development of new understanding of the materials and processes. The NIST Building and Fire Research Laboratory (BFRL) has over two decades' worth of experience in experimental, theoretical, and computational work on cement and concrete and is a world leader in this field. MCSD staff members in the Scientific Applications and Visualization Group have contributed to this effort by working closely with BFRL scientists in developing parallel implementations of their computational models, and in providing effective visualizations of their results.  Among these are models of the flow of suspensions, flow in porous media, and the elastic properties of concrete.  MCSD contributions have significantly extended the class of problems that can be addressed by BFRL researchers.  Striking visualizations of the results of these simulations, including immersive visualizations, have also been developed by MCSD staff.  (Examples are included elsewhere in this report.)

 

In January 2001 the Virtual Cement and Concrete Testing Laboratory (VCCTL) consortium was formed under the leadership of BFRL.  The overall goals of the consortium are to develop a virtual testing system to reduce the amount of physical testing of concrete, expedite the research and development process, and facilitate innovation. The consortium has seven industrial members. MCSD is a partner in the effort, and is taking the lead in visualization and parallelization efforts. 

 

This image shows a volume rendering of a cement paste sample. The actual sample is less than one millimeter wide.

 

 

Parallelization of Feff

 

A popular computer code for X-ray absorption spectroscopy (XAS) now runs 20-30 times faster, thanks to a cooperative effort of MCSD and the NIST Materials Science and Engineering Laboratory (MSEL).  XAS is widely used to study the atomic-scale structure of materials, and is currently employed by hundreds of research groups in a variety of fields, including ceramics, superconductors, semiconductors, catalysis, metallurgy, geophysics, and structural biology. Analysis of XAS relies heavily on ab initio computer calculations to model x-ray absorption in new materials. These calculations are computationally intensive, taking days or weeks to complete in many cases. As XAS becomes more widely used in the study of new materials, particularly in combinatorial materials processing, it is crucial to speed up these calculations.  One of the most commonly used codes for such analyses is FEFF. Developed at the University of Washington, FEFF is an automated program for ab initio multiple scattering calculations of X-ray Absorption Fine Structure (XAFS) and X-ray Absorption Near-Edge Structure (XANES) spectra for clusters of atoms. The code yields scattering amplitudes and phases used in many modern XAFS analysis codes. Feff has a user base of over 400 research groups, including a number of industrial users, such as Dow, DuPont, Boeing, Chevron, Kodak, and General Electric.

 

To achieve faster speeds in FEFF, James Sims of MCSD worked with Charles Bouldin of the MSEL Ceramics Division to develop a parallel version, FeffMPI.  In modifying the code to run on the NIST parallel processing clusters using a message-passing approach, they gained a 20-30-fold improvement in speed over the single-processor code. Combining parallelization with improved matrix algorithms may allow the software to run 100 times or more faster than current single-processor codes. The latter work is in progress.  The parallel version of the XAS code is portable, and is now also operating on parallel processing clusters at the University of Washington and at DoE's National Energy Research Scientific Computing Center (NERSC). One NERSC researcher has reported doing a calculation in 18 minutes using FeffMPI on the NERSC IBM SP2 cluster that would have taken 10 hours before. In 10 hours this researcher can now do a run that would have taken months before, and hence would not even have been attempted.
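
The message-passing pattern behind such parallelizations, in which independent pieces of the calculation are divided among processors, computed locally, and gathered back together, can be sketched in a few lines of mpi4py. This is a generic illustration of the approach, not FeffMPI itself; the work function and the decomposition over a list of sample points are placeholders.

from mpi4py import MPI
import numpy as np

def expensive_calculation(item):
    """Placeholder for one independent piece of work (e.g., one energy point)."""
    return np.sin(item) ** 2

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank takes a strided slice of the work items.
all_items = np.linspace(0.0, 10.0, 1000)
my_items = all_items[rank::size]
my_results = [expensive_calculation(x) for x in my_items]

# Gather partial results on rank 0 and reassemble them in order.
gathered = comm.gather(my_results, root=0)
if rank == 0:
    results = np.empty_like(all_items)
    for r, chunk in enumerate(gathered):
        results[r::size] = chunk
    print("collected", results.size, "results on", size, "processes")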

 

Awards

 

A large number of MCSD staff members received significant awards this year. Some of these are highly distinguished awards from external groups, while others are prized internal awards.

 

External Awards.  Anthony Kearsley, an MCSD mathematician, received the Arthur Flemming Award in June 2001. The Flemming Award is given annually to recognize outstanding Federal employees with less than 15 years of service. The Flemming Award Commission selects the honorees, and the award is sponsored by George Washington University and Government Executive magazine. This year 12 winners were selected from throughout the federal government, six in the administrative category and six in the science and engineering category. Kearsley was cited for a sustained record of contributions to the development and use of large-scale optimization techniques for the solution of partial differential equations arising in science and engineering. Noted were his contributions to the solution of problems in such diverse areas as oil recovery, antenna design, wireless communications, climate modeling, optimal shape design, and high-temperature superconductors. His tireless work as a mentor and leading proponent of careers in mathematics for students at the high school, undergraduate, and graduate levels was also cited.  This was the second year in a row that an MCSD staff member received the Flemming Award.  Last year Fern Hunt was among the 12 winners.

 

 

Anthony Kearsley, winner of the 2001 Arthur Flemming Award, and Bonita Saunders, 2001 Claytor Lecturer.

 

 

Bonita V. Saunders presented the 2001 Claytor Lecture on January 13, 2001. The National Association of Mathematicians (NAM) inaugurated the Claytor Lecture in 1980 in honor of W. W. Schieffelin Claytor, the third African American to earn a Ph.D. in Mathematics, and the first to publish mathematics outside of his thesis. Founded in 1969, NAM is a non-profit professional organization whose mission is "to promote excellence in the mathematical sciences and promote the mathematical development of underrepresented American minorities." Saunders is the twentieth mathematician to be selected as Claytor lecturer. Previous honorees include Fern Hunt, also of ITL, David H. Blackwell, the first African American elected to the National Academy of Sciences, and J. Ernest Wilkins, who at 19 became the youngest African American to receive a doctorate in the mathematical sciences. Saunders' lecture, entitled "Numerical Grid Generation and 3D Visualization of Special Functions," was delivered at a special session of the Joint Mathematics Meetings in New Orleans.

 

Geoffrey McFadden, Leader of the MCSD Mathematical Modeling Group, was elected a Fellow of the American Physical Society (APS).  McFadden was recognized "for fundamental insights into the effect of fluid flow on crystal growth and for an innovative approach to phase field methods in fluid mechanics." McFadden's interest in the study of crystal growth began when he joined NIST in 1981. Since then he has published more than 100 papers with colleagues in MSEL, as well as with researchers at external institutions such as Carnegie Mellon University, Northwestern University, Rensselaer Polytechnic, and the University of Southampton. The APS's Division of Fluid Dynamics recommended the nomination. Fellowship in the APS is limited to no more than one-half of one percent of APS membership. Presentation of the award took place at the Annual Meeting of the Division of Fluid Dynamics held in San Diego, November 18-20, 2001. 

 

Raghu Kacker was elected Fellow of the American Society for Quality and recognized at the 55th Annual Quality Congress held in Charlotte, NC on May 6-9, 2001. He was cited for pioneering work in the advancement of the application of the statistical sciences, especially Taguchi methods, to quality, measurement science, calibration and inter-laboratory comparisons.

 

 

Raghu Kacker (left) was elected a Fellow of the American Society for Quality, and Geoffrey McFadden (right) was elected a Fellow of the American Physical Society.

 

 

 

NIST Awards.  In December 2000, Stephen Langer of MCSD, along with Ed Fuller and Andy Roosen of MSEL, received the NIST Jacob Rabinow Applied Research Award. The Rabinow Award is presented yearly in recognition of outstanding application of NIST research in industry.  Langer, Fuller, and Roosen were honored for the development of OOF, a system for the modeling of materials with complex microstructures.  Also in December 2000, a team of MCSD staff from the Scientific Applications and Visualization Group was awarded a NIST Bronze Medal for their work in visualization of Bose-Einstein condensates. The honorees were Judith Devaney, William George, Terence Griffin, Peter Ketcham, and Steve Satterfield. They were cited for their work with colleagues in the NIST Physics Lab to develop unique 3D color representations of the output of computational models of Bose-Einstein condensates. The visualizations illustrated properties of the condensates which were previously unknown, and which have since been experimentally verified. The pictures were selected as cover illustrations by Physics Today (Dec. 1999), Parity magazine (Japanese, Aug. 2000), and Optics and Photonics News (Dec. 2000), and were featured in a title spread for an article in Scientific American (Dec. 2000).

 

 

Winners of the 2000 NIST Jacob Rabinow Applied Research Award (left to right): Andrew Roosen (MSEL), Stephen Langer, and Edwin Fuller (MSEL).

 

 

Winners of the 2000 NIST Bronze Medal: (front, left to right) Steven Satterfield, Peter Ketcham, Terrence Griffin; (back, left to right) William George, Judith Devaney.

 

 

Winners of the 2001 NIST Bronze Medal: Roldan Pozo (left) and Ronald Boisvert (right).

 

 

In December 2001, Ronald Boisvert and Roldan Pozo received a NIST Bronze Medal. They were cited "for leadership in technology transfer introducing significant improvements to the Java programming language and environment for scientific computing applications."

 

ITL Awards.  Isabel Beichl received the first annual ITL Outstanding Publication Award in May 2001 in recognition of a series of 11 tutorial articles on non-numeric techniques for scientific computing published in Computing in Science and Engineering from 1997-2000.  Beichl was the first recipient of this newly established ITL award.

 

Five MCSD staff members were among a group of 17 ITL staff named as joint recipients of the Outstanding Contribution to ITL Award in May 2001.  The award recognized members of the ITL Diversity Committee.  The MCSD awardees were Judith Devaney (Chair), Isabel Beichl, Ronald Boisvert, Raghu Kacker, and Bonita Saunders.

 

 

   

 

Isabel Beichl won the first ITL Outstanding Publication Award for a series of 11 tutorial articles published in Computing in Science and Engineering.

 

 

Technology Transfer

 

MCSD staff members continue to be active in publishing the results of their research. This year 49 publications authored by Division staff appeared, 28 of which were published in refereed journals. Twenty-one additional papers have been accepted and are awaiting publication. Another 22 are under review. MCSD staff members were invited to give 40 lectures in a variety of venues and contributed another 30 talks to conferences and workshops.

 

Four short courses on Java and LabVIEW were provided by MCSD for NIST staff this year.  The Division lecture series remained active, with 27 talks presented (five by MCSD staff members); all were open to NIST staff.  In addition, a Scientific Object Oriented Programming User's Group, chaired by Stephen Langer, was established.  Six meetings of the group have been held.

 

MCSD staff members also organize workshops, minisymposia, and conferences to provide forums to interact with external customers. This year, staff members were involved in organizing twelve external events and three internal ones.  For example, a very successful workshop was held in late June to discuss the current state of the OOF finite element program and to plan future developments.  Approximately 65 OOF users and developers from 5 countries, 9 companies, 18 universities, and 4 national labs attended the two-day workshop.  The workshop was co-sponsored by MCSD and the MSEL Center for Theoretical and Computational Materials Science (CTCMS).

 

Software continues to be a by-product of Division work, and the reuse of such software within NIST and externally provides a means to make staff expertise widely available. Several existing MCSD software packages saw new releases this year, including Zoltan (grid partitioning, joint with Sandia National Laboratories), OOMMF (micromagnetic modeling), OOF (material microstructure modeling), and TNT (Template Numerical Toolkit for numerical linear algebra in C++).

 

Tools developed by MCSD have led to a number of commercial products. Examples from two past Division projects are f90gl and IMPI.  F90gl is a Fortran 90 interface to OpenGL graphics. Originally developed by William Mitchell of MCSD for use in NIST applications, f90gl was subsequently adopted by the industry-based OpenGL Architecture Review Board to define the standard Fortran API for OpenGL. NIST's reference implementation has since been included in commercial products of Lahey Computer Systems, Compaq, NASoftware, and Interactive Software Services. Several others are planned.  MCSD staff facilitated the development of the specification for the Interoperable Message Passing Interface (IMPI) several years ago.  IMPI extends MPI to permit communication between heterogeneous processors. We developed a Web-based conformance testing facility for implementations. Several commercial implementations are now under development. Several companies, including Hewlett-Packard and MPI Software Technologies, demonstrated IMPI on the exhibit floor of the SC'01 conference in Denver in November 2001.

 

Web resources developed by MCSD continue to be among the most popular at NIST. The MCSD Web server at math.nist.gov has serviced more than 38 million Web hits since its inception in 1994 (9 million of which have occurred in the past year). The Division server regularly handles more than 11,000 requests for pages each day, serving more than 40,000 distinct hosts on a monthly basis.  Altavista has identified approximately 10,000 external links to the Division server. The seven most accessed ITL Web sites are all services offered by MCSD:

  1. NIST Math Portal
  2. Matrix Market
  3. Guide to Available Mathematical Software
  4. Division home page
  5. ACM Transactions on Mathematical Software
  6. Digital Library of Mathematical Functions
  7. Template Numerical Toolkit

 

Professional Activities

 

Division staff members continue to make significant contributions to their disciplines through a variety of professional activities. Ronald Boisvert serves as Chair of the International Federation for Information Processing (IFIP) Working Group 2.5 (Numerical Software). He also serves as Vice-Chair of the ACM Publications Board. Donald Porter serves on the Tcl Core Team, which manages the development of the Tcl scripting language. Daniel Lozier serves as chair of the SIAM Special Interest Group on Orthogonal Polynomials and Special Functions.

 

Division staff members serve on the editorial boards of eleven journals: ACM Transactions on Mathematical Software (R. Boisvert and R. Pozo), Computing in Science & Engineering (I. Beichl), Interfaces and Free Boundaries (G. McFadden), Journal of Computational Methods in Science and Engineering (M. Donahue), Journal of Computational Physics (G. McFadden), Journal of Crystal Growth (G. McFadden), Journal of Numerical Analysis and Computational Mathematics (I. Beichl and W. Mitchell), Journal of Research of NIST (D. Lozier), Mathematics of Computation (D. Lozier), SIAM Journal on Applied Mathematics (G. McFadden), and SIAM Journal on Scientific Computing (B. Alpert).

 

Division staff members also work with a variety of external working groups. Ronald Boisvert and Roldan Pozo chair the Numerics Working Group of the Java Grande Forum. Roldan Pozo chairs the Sparse Subcommittee of the BLAS Technical Forum. Michael Donahue and Donald Porter are members of the Steering Committee of muMag, the Micromagnetic Modeling Activity Group.

 

Mathematics in NIST History

 

In 2001 NIST celebrated its centennial.  As part of the celebration, NIST published a centennial volume entitled A Century of Excellence in Measurements, Standards, and Technology: A Chronicle of Selected Publications of NBS/NIST, 1901-2000. The publication highlights approximately 100 highly significant NBS/NIST publications of the last century. CRC Press published this book in the fall of 2001. Four of the highlighted publications are associated with the work of ancestor organizations to MCSD:

  1. C. Lanczos, An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators, Journal of Research of the National Bureau of Standards 45 (1950), pp. 255-282.
  2. M. R. Hestenes and E. Stiefel, Methods of Conjugate Gradients for Solving Linear Systems, Journal of Research of the National Bureau of Standards 49 (1952), pp. 409-436.
  3. M. Abramowitz and I. Stegun (eds.), Handbook of Mathematical Functions, NBS Applied Mathematics Series 55, U.S. Government Printing Office, 1964.
  4. J. R. Edmonds, Paths, Trees and Flowers, Canadian Journal of Mathematics 17 (1965), pp. 449-467.

 

R. Boisvert, D. Lozier, D. O'Leary, and C. Witzgall developed the vignettes describing these publications in the published volume.

 

The year 2002 also marks the 50th anniversary of the original Hestenes-Stiefel paper on the conjugate gradient method cited above.  This anniversary will be commemorated at a conference on Iterative Methods for Large Linear Systems to be held at the ETH in Zurich in February 2002.  MCSD is a joint sponsor of this conference.

 

1.4.        Strategic Planning

 

MCSD attempts to maximize the impact of its work.  In order to do this, it must continually assess the future needs of its customers, as well as the mathematical and computational technologies that can help meet those needs.  This is the role of strategic planning.  Information gathered in this way is used to set priorities for selecting projects, developing new areas of expertise, and hiring new staff.

 

MCSD assesses the needs of its customers in a variety of ways.

  1. One-on-one interactions at the staff level.
  2. Attendance at seminars, workshops, and conferences.
  3. Interactions with other organizations at the management level.
  4. Monitoring of planning reports of customer organizations, government organizations, and private think tanks.
  5. Participation in the development of research proposals with customer organizations.
  6. Participation in formal strategic planning efforts.

 

Advances in mathematical and computational technologies are tracked in the course of a variety of professional activities such as participation in workshops and conferences, monitoring of technical magazines and journals, and consultation with external technical experts.

 

Many of these planning activities occur on a continuing basis during the year.  A formal Division strategic plan was developed in 1999 and will be revisited in 2002.  The major themes identified in that plan were the following.

 

o       Measurement and Calibration for the Virtual Sciences

 

The ordinary industrial user of complex modeling packages has few tools available to assess the robustness, reliability, and accuracy of models and simulations.  Without these tools and methods to instill confidence in computer-generated predictions, the use of advanced computing and information technology by industry will lag behind technology development.  NIST, as the nation’s metrology lab, is increasingly being asked to focus on this problem.

 

o       Evolving Architecture of Tools, Libraries, and Information Systems for Science and Engineering

 

Research studies undertaken by laboratories like NIST are often outside the domain of commercial modeling and simulation systems.  Consequently, there is a great need for the rapid development of flexible and capable research-grade modeling and simulation systems.  Components of such systems include high-level problem specification, graphical user interfaces, real-time monitoring and control of the solution process, visualization, and data management.  Such needs are common to many application domains, and re-invention of solutions to these problems is quite wasteful.

 

The availability of low-cost networked workstations will promote growth in distributed, coarse grain computation.  Such an environment is necessarily heterogeneous, exposing the need for virtual machines with portable object codes.  Core mathematical software libraries must adapt to this new environment.

 

All resources in future computing environments will be distributed by nature.  Components of applications will be accessed dynamically over the network on demand.  There will be increasing need for online access to reference material describing mathematical definitions, properties, approximations, and algorithms.  Semantically rich exchange formats for mathematical data must be developed and standardized.  Trusted institutions, like NIST, must begin to populate the net with such dynamic resources, both to demonstrate feasibility and to generate demand, which can ultimately be satisfied in the marketplace.

 

o       Emerging Needs for Applied Mathematics

 

The NIST Laboratories will remain a rich source of challenging mathematical problems.  MCSD must continually retool itself to be able to address needs in new application areas and to provide leadership in state-of-the-art analysis and solution techniques in more traditional areas.  Many emerging needs are related to applications of information technology.  Examples include VLSI design, security modeling, analysis of real-time network protocols, image recognition, object recognition in three dimensions, bioinformatics, and geometric data processing.  Applications throughout NIST will require increased expertise in discrete mathematics, combinatorial methods, data mining, large-scale and non-standard optimization, stochastic methods, fast semi-analytical methods, and multiple length-scale analysis.

 

This year NIST embarked on an Institute-wide strategic planning process called NIST 2010.  Four technical areas were identified for emphasis.

  1. Nanotechnology
  2. Health care
  3. Knowledge management
  4. Homeland security

 

In addition, three internal infrastructure areas were identified.

  1. People
  2. Customer focus
  3. Information technology support

 

MCSD staff members are currently working with NIST-wide committees to understand current NIST capabilities in these areas and develop specific plans.  We will work to align our programs to be able to support these efforts.

 

In addition to these planning efforts, we have had extensive discussions with management and staff of the NIST Physics Lab related to quantum information, and with the NIST Building and Fire Research Lab related to computer-aided construction.  Finally, we have exchanged ideas with members of the government-wide Interagency Committee on Extramural Mathematics Programs (ICEMAP), in whose meetings we have participated this past year.

1.5.        Administrative Highlights

 

Staff News

 

Two new postdoctoral appointments were made during the past year. Katharine Gurski joined MCSD in January 2001 as a National Research Council postdoctoral fellow working with Geoffrey McFadden.  She has a Ph.D. in applied mathematics from the University of Maryland, and had a previous postdoctoral appointment at the NASA Goddard Space Flight Center.  She has been developing numerical methods for the solution of axisymmetric boundary integral equations for applications in materials science, including dendritic growth.  In October 2001 David Daegene Song also began a two-year postdoctoral appointment with MCSD.  A recent graduate of Oxford University, where he received a Ph.D. in physics, Song was associated with the Clarendon Laboratory's Center for Quantum Computation.  He has been working on issues related to entanglement swapping and the analysis of quantum algorithms.

 

Raghu Kacker began a one-year detail from the ITL Statistical Engineering Division to MCSD to begin investigation of the mathematical and statistical questions associated with virtual measurement systems. He is also assisting with the DLMF project.

 

Annette Shives, Secretary for MCSD’s Scientific Applications and Visualizations Group, retired on September 28, 2001 after 24 years of government service. Yolanda Parker, formerly of the NIST Manufacturing Engineering Laboratory, was hired to take over the administrative operations of the group, as well as to perform new duties related to the operations of the MCSD Visualization Lab.

 

Three new foreign guest researchers began their terms in MCSD this year: Julien Franiette, Aboubekre Zahid, and F. Pokam.  Each is working in the Scientific Applications and Visualization Group.  A. Samson and F. Pokam also completed their terms during the year.

 

Student Employment Program

 

MCSD provided support for nine student staff members on summer appointments during FY 2001. Such appointments provide valuable experiences for students interested in careers in mathematics and the sciences. In the process, the students can make very valuable contributions to MCSD programs. This year's students were as follows.

 

 

E. Baer, Montgomery Blair High School (supervisor: A. Kearsley).  Studied numerical and theoretical properties of algorithms for the solution of linear systems; in particular, implemented an application of Morris Newman's work on p-adic arithmetic.

B. Blaser, Carnegie Mellon Univ. (supervisor: B. Saunders).  Developed graphics for the DLMF project.

D. Cardy, Montgomery Blair High School (supervisor: F. Hunt).  Explored methods for distinguishing coding and non-coding regions of DNA sequences based on the mutual entropy function, using GenPatterns, a tool for analyzing statistical patterns in DNA and RNA.

J. Carlson, Dartmouth College (supervisor: I. Beichl).  Developed a probabilistic algorithm to estimate the number of independent sets in a graph, wrote a Matlab program implementing it, and applied the results to computing the 2D and 3D hard sphere entropy constants for cubic lattices.

D. Caton, Univ. of Maryland (supervisor: J. Devaney).  Developed an algorithm to recognize images in a large database of images with similar texture characteristics.

S. Copley, Univ. of Colorado (supervisor: J. Filla).  Provided Vislab and scientific visualization support, specializing in nonlinear video editing and 3D stereo data presentation.

R. Jin, Montgomery Blair High School (supervisor: S. Langer).  Applied image analysis techniques to micrographs of materials microstructure, with the goal of developing software for automatic grain boundary detection; the software will be included in the OOF project.

E. Kim, Stanford Univ. (supervisor: B. Saunders).  Developed graphics for the DLMF project.

K. McQuighan, Montgomery Blair High School (supervisor: T. Kearsley).  Studied theoretical properties of algorithms for quantum computers; in particular, considered the application of Grover's method to difficult search problems.

 


Part II - Projects

Charge density on a computed diffusion-limited cluster aggregate.

2.1.        Applied Mathematics

 

 

The APEX Method in Image Sharpening

 

Alfred S. Carasso

 

In work yet to be published, Alfred Carasso's direct blind deconvolution techniques have been shown capable of producing useful results, in real time, on a wide variety of real blurred images, including astronomical, Landsat and aerial images, MRI and PET brain scans, and electron microscope imagery. A key role is played by a class of functions introduced in the 1930's by Paul Lévy in connection with his fundamental work on the Central Limit Theorem. The potential usefulness in image processing of these so-called Lévy "stable" laws had not previously been suspected.

 

In the last several years, digital imagery has become pervasive in numerous areas of applied science and technology, and digital image processing has matured into a major discipline within Information Technology. Image processing is now a vast research activity that lies at the intersection of Optics, Electronics, Computer Science and Applied Mathematics. SPIE, IEEE, and SIAM are three major scientific societies that support significant research in this area.

 

In most cases, image acquisition involves a loss of resolution. This may come about from imperfect optics, from the scattering of photons before they reach their intended target, from turbulent fluctuations in the refractive index while imaging through the atmosphere, from image motion or defocusing, or from a combination of these and a myriad other small aberrations. The resulting acquired image is typically blurred, and this blur, when known, can be described by a point spread function (psf) that mathematically characterizes the cumulative effect of all these distortions. In an idealized imaging system, the psf is the Dirac delta function and has zero spread. In a real system, there is always some point spread, and this delta function typically becomes spread out onto some type of bell-shaped curve. There is considerable interest in improving image resolution by removing some of this blur through computer processing of the given blurred image.

 

Image deblurring is one of several distinct topics within image processing (image compression is another), and it is one with considerable mathematical content.  Deblurring involves deconvolution in an integral equation. This is a notoriously difficult ill-conditioned problem in which data noise can become amplified and overwhelm the desired true solution. Depending on the type of point spread function, this deconvolution problem is mathematically equivalent to an ill-posed initial value problem for a partial differential equation in two space variables. For example, Gaussian psfs, which are ubiquitous in applications, lead to solving the time-reversed heat equation. Other types of parabolic partial differential equations, associated with nonlinear anisotropic diffusion, have recently been advocated as generic image enhancement tools in image processing. That approach, originating in France in the early 1990's, is computationally highly intensive, and has yet to be fully evaluated. In another direction, probabilistic methods based on Bayesian analysis together with Maximum Likelihood or Maximum Entropy criteria have long been used in Astronomy and Medical Imaging. These are again nonlinear methods that must be implemented iteratively. A characteristic feature of such probabilistic approaches is that large-scale features in the image can typically be reconstructed after one or two dozen iterations, while several thousand further iterations, and several hours of CPU time, are usually necessary to reconstruct fine detail.
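To fix ideas, here is a minimal sketch of the Gaussian case (a standard argument, included for orientation rather than taken from the papers cited here).  If the blurred image g is obtained from the true image f by convolution with a Gaussian of standard deviation \sigma,

\[ g = G_\sigma * f, \qquad G_\sigma(x) = \frac{1}{2\pi\sigma^2}\,\exp\!\left(-\frac{|x|^2}{2\sigma^2}\right), \quad x \in \mathbb{R}^2, \]

then the solution u(x,t) of the heat equation u_t = \Delta u with initial data u(x,0) = f(x) satisfies u(x, \sigma^2/2) = g(x).  Recovering f from g is therefore equivalent to solving the heat equation backward in time from t = \sigma^2/2 to t = 0.  In the Fourier domain each mode \xi of the data is amplified by the factor \exp(|\xi|^2 \sigma^2/2), which is why high-frequency noise can overwhelm the true solution unless the computation is regularized.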

 

In many cases, the psf describing the blur is unknown or incompletely known. So-called blind deconvolution seeks to deblur the image without knowing the psf. This is a much more difficult problem in which ill conditioning is compounded with non-uniqueness. Most known approaches to that problem are iterative in nature and seek to simultaneously reconstruct both the psf and the deblurred image. As might be expected, that iterative process can become ill behaved and develop stagnation points or diverge altogether. As a rule, iterative blind deconvolution procedures are not well suited for real-time processing of large size images of complex objects.

 

Carasso's work in image deblurring has focused on developing reliable direct non-iterative methods, in which Fast Fourier Transform algorithms are used to solve appropriately regularized versions of the underlying ill-posed parabolic equation problem associated with the blur. When the psf is known, Carasso's SECB method can deblur 512 by 512 images in about 1 second of CPU time on current desktop platforms. Moreover, in a recent SIAM Journal on Applied Mathematics paper, Carasso has developed two new direct blind deconvolution techniques, the BEAK method and the APEX method. These methods are based on detecting the signature of the psf from appropriate 1-D Fourier analysis of the blurred image. This detected psf is then input into the SECB method to obtain the deblurred image. When applicable, either of these two distinct blind methods can deblur 512x512 images in less than a minute of CPU time, which makes them highly attractive in real-time applications.
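The following sketch indicates why such direct Fourier-domain deblurring is fast: two forward FFTs, a pointwise division, and an inverse FFT.  It uses a generic Tikhonov-regularized inverse filter with a known psf; it is not an implementation of the SECB, BEAK, or APEX methods, whose regularization and psf detection are constructed quite differently, and the parameter alpha is a hypothetical stand-in for a properly chosen regularization.

    import numpy as np

    def fourier_deblur(blurred, psf, alpha=1e-3):
        # blurred: 2-D image array; psf: blur kernel sampled on the same
        # grid and centered in the array; alpha: Tikhonov parameter that
        # limits amplification of noise at high frequencies.
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(blurred)
        F = np.conj(H) * G / (np.abs(H) ** 2 + alpha)
        return np.real(np.fft.ifft2(F))

For a 512 by 512 image this costs only a few FFTs, which is the source of the roughly one-second deblurring times quoted above.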

 

The APEX method is predicated on a class of shift invariant blurs, the class G, which can be expressed as a finite convolution product of radially symmetric two-dimensional Lévy stable density functions. This class includes Gaussians, Lorentzians, and their convolutions, as well as many other kinds of bell-shaped curves with heavy tails. The motivation for using the class G as the framework for the APEX method lies in previously unrecognized basic work by C. B. Johnson, an electronics engineer who, in the 1970's, discovered non-Gaussian heavy-tailed psfs in a wide variety of electron optical imaging devices. In fact, Carasso has been energetic in making Johnson's work more widely known within the imaging research community, has corresponded with Johnson, and has succeeded in drawing the attention of Mandelbrot, Woyczynski, and Nolan, three eminent specialists on Lévy processes, to Johnson's seminal work. Very recently, Woyczynski has interviewed Johnson in connection with Woyczynski's forthcoming book on Lévy processes in the physical sciences.
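In Fourier-transform terms, optical transfer functions in such a class are commonly written (this is a schematic statement of the form, not a quotation from the paper) as

\[ \hat{h}(\xi) \;=\; \exp\!\Big(-\sum_{j=1}^{J} t_j\,|\xi|^{2\beta_j}\Big), \qquad t_j > 0, \quad 0 < \beta_j \le 1, \quad \xi \in \mathbb{R}^2, \]

so that each factor is the transform of a radially symmetric Lévy stable density.  A single factor with \beta = 1 gives the Gaussian, \beta = 1/2 gives the Lorentzian, and products of factors correspond to convolutions of the associated psfs; this is consistent with the exponent convention used in the discussion of detected \beta values below.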

 

 

 


 

APEX method in image sharpening

(A) Original transverse PET brain image. (B) Enhanced PET image. Bright spots indicating areas of the brain responding to applied external stimuli were barely visible in original image. Here, beta=0.284. (C) Original scanning electron micrograph of mosquito's head showing compound eye. (D) Enhanced image shows increased contrast and brings eye into sharper focus. Here, beta=0.157. (E) Original F-15 plane image. (F) Enhanced image brings out terrain features and condensation trails behind aircraft. Here, beta=0.107.

 

Lévy densities are characterized by an exponent beta that expresses the degree of departure from the Gaussian density, for which beta=1.0. In physical applications where Lévy densities appear, values of beta less than 0.5 are generally rare. While not all images can be significantly improved with the APEX method, there is a wide class of images for which APEX processing is beneficial. These images have the property that their 1-D Fourier transform traces are globally logarithmically convex. When the APEX method is applied to such an image, a specific value of beta is detected. Typical APEX-detected values of beta are on the order of 0.25. The physical origin of such beta values, if any, is uncertain. However, it is remarkable that useful sharpening of imagery from a wide variety of scientific and technological applications can be accomplished with such heavy-tailed psfs. The appearance of low-exponent stable laws in the present context is of great interest to specialists on Lévy processes.  The APEX method is based on ill-posed continuation in diffusion equations involving fractional powers of the Laplacian. Mathematically, such an approach differs fundamentally from currently more popular techniques based on solving well-posed nonlinear anisotropic diffusion equations. Interestingly, the APEX method generally produces sharper imagery, at much lower computing times.

 

Future work will explore more fully applications of this technique to NIST imaging problems, as well as to selected problems in other areas.

 

 

Blind Deconvolution of Scanning Electron Microscope Imagery

 

Alfred S. Carasso

David S. Bright (NIST CSTL)

András E. Vladár (NIST MEL)

 

Scanning electron microscopes (SEM) are basic research tools in many of NIST's programs in nanotechnology. Moreover, considerable expertise resides at NIST on the theory behind these instruments, as well as on the analysis and interpretation of SEM imagery. David Bright has created the LISPIX image analysis package and has used it to automate electron microscopes. András Vladár is the SEM Project Leader in the Nanoscale Metrology Group, and he has helped define and implement the basic standards for the measurement and monitoring of electron microscope imaging performance.  That expertise was vital to the success of this project, which extended over a two-year period and involved well over 1 gigabyte of processed imagery.

 

A major concern in scanning electron microscopy is the loss of resolution due to image blurring caused by electron beam point spread. The shape of that beam can change over time, and is usually not known to the microscopist. Hence, the point spread function (psf) describing the blur is generally unknown. Nevertheless, there is great interest in improving resolution by reducing this blur. The images we are concerned with come from scanning electron beam instruments such as the field emission gun scanning electron microscope (FEGSEM), a high-resolution instrument, and the environmental scanning electron microscope (ESEM), a lower resolution instrument with more flexible sample handling capability.  SEM micrographs are typically large size images of complex objects.

 

Real-time blind deconvolution of SEM imagery, if achievable, would significantly extend the capability of electron microprobe instrumentation. Previously gained experience with the APEX method on images from very diverse imaging modalities, naturally suggests use of this technique. However, SEM imaging differs from other electron-optic imaging, in that the instrument transform I that converts a sample s(x,y) into an image i(x,y) has a nonlinear component, M, which describes the details of the nonlinear interaction between the electrons and the material. M is usually studied by Monte Carlo simulations applied to electron trajectories, but is not readily invertible. The second component of I, call it q, describes blurring due to the electron beam point spread, along with some of the instrument's electronics. That component is often represented as a convolution, so that the SEM micrograph i(x,y) is the convolution of q with M(s(x,y)). The APEX method is a linear deconvolution technique predicated on a restricted class of blurs, the class G, consisting of finite convolution products of radially symmetric Lévy probability density functions.  It is by no means obvious that the APEX method is applicable to SEM imagery.
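Schematically, and restating the description above in symbols, the recorded micrograph is

\[ i(x,y) \;=\; \big(q * M(s)\big)(x,y), \]

where * denotes two-dimensional convolution.  A linear deconvolution technique such as APEX can at best undo the blurring factor q; the nonlinear electron-material interaction M is left untouched.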

 

Nevertheless, when the APEX method was applied to a large variety of original SEM micrographs, the method was found to be quite useful in detecting and enhancing fine detail not otherwise discernible. Several examples are shown in the accompanying Figure. In addition, quantitative sharpness analysis of "ideal sample" micrographs, using a methodology originally developed by the NIST Nanoscale Metrology Group to monitor SEM imaging performance, shows that APEX processing can actually produce sharper imagery than is achievable with optimal microscope settings. On such ideal sample micrographs, sharpness increases on the order of 15% were obtained as a result of APEX processing. A crucial element in this work was the marching backwards in time feature of the APEX method, which allows for deconvolution in slow motion. The APEX method sharpens the image, while simultaneously increasing contrast and brightness, by restoring some of the high frequency content that had been attenuated in the course of imaging the sample. Slow motion deconvolution allows the user to terminate the APEX process before brightness, contrast, or noise becomes excessive.

 

As in all inverse problems, successful use of the APEX method requires a priori knowledge about the solution. Here, such prior knowledge takes the form of training and experience on the part of the microscopist, whose judgment is called upon to distinguish genuine features in the presence of noise and to visually select the best reconstruction. Several experienced NIST microscopists were involved in evaluating the merits of APEX processed imagery.

 

 

 


 

Real time APEX processing of Scanning Electron Microscope Imagery

Left column: original SEM micrographs; right column: after APEX processing. (A) Fly ash particle from a Nuclepore filter. (C) Particle from a crystalline mercury compound. (E) Dirt particle from an air filter. APEX processing increases contrast and brightness as it sharpens the image, and brings out fine scale detail not otherwise discernible.

 

 

In the adjoining Figure, the left column contains examples of original SEM micrographs that were input into the APEX method, while the right column contains the corresponding APEX images. All original micrographs were input as 8-bit 512 by 512 images, although smaller sub images are displayed in some cases. These images are part of a wide class of SEM images with globally logarithmically convex 1-D Fourier transform traces. Image (A) is a micrograph of a 2-micron diameter fly ash particle on a Nuclepore filter. That image was scanned from an old Polaroid print taken by John Small (NIST), in the 1970's, on a Cambridge SEM at the University of Maryland. Imperfections on the Polaroid print are detected in the APEX image (B), and the texture of the sample is enhanced. Some of that texture may be due to the print rather than to the sample itself. Moreover, the scratch near the upper right corner in image (B) is not discernible in image (A). This example is a useful indicator of the value of APEX processing. Presumably, actual imperfections or small defects in some other sample would have been detected equally well. Also, the APEX image (B) has more depth than the original image, in that the structure in the lower left quadrant now appears closer to the viewer than does the rest of the image.

 

Image (C) is a 20-micron field of view micrograph of a particle from a complex multi-form crystalline compound of mercury. This particular sample has very complex and varied morphology, in addition to surface dusting or decoration of fine particles almost everywhere. This becomes clearly evident only in the APEX image (D), which contains substantially more information than does image (C). Also, the three-dimensional structure of the particle is particularly well rendered in image (D). Image (E) is a small portion of a 250-micron field of view micrograph of a dust particle from an air vent, consisting of a complex agglomeration of biological and mineral particles. Very striking APEX enhancement is apparent in image (F).

 

As in the previously mentioned APEX applications, low values of the Lévy exponent beta, typically on the order of 0.25, were detected in these SEM micrographs. Future work will examine possible links between these values of beta and the physics of electron microscopy. Plans are also underway to incorporate APEX processing into the LISPIX package, a NIST-developed image analysis tool that is widely used within the NIST Laboratories. In another direction, the possible use of APEX methodology to produce a new quantitative measure of SEM imaging performance is being explored.

 

 

Image Analysis for Combinatorial Experimentation

 

Isabel Beichl

James Lawrence

 

Computational geometry and image analysis techniques have been applied to photographic images of polymer dewetting under various conditions in order to model the evolution of these materials. This work is in collaboration with MSEL, which has massive amounts of data as a result of combinatorial experimentation and is in great need of automatic techniques for analysis. Methods and software have been devised to evaluate areas of wetness and dryness for their geometric properties, such as deviation of holes from perfect circularity and the distribution of hole centers. We computed Voronoi diagrams of the initial hole centers and are investigating their use as a predictor of later dewetting behavior.
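As a minimal illustration of the Voronoi computation (the hole centers below are random stand-ins for centers extracted from an actual dewetting image, and this is not the project's analysis software):

    import numpy as np
    from scipy.spatial import Voronoi

    # Stand-in (x, y) hole centers; in practice these come from image analysis.
    centers = np.random.default_rng(0).random((200, 2))

    vor = Voronoi(centers)
    # Each Voronoi cell is the part of the plane closer to one hole center
    # than to any other; simple cell statistics, such as the number of
    # vertices bounding each cell, summarize the spatial hole distribution.
    vertices_per_cell = [len(vor.regions[r]) for r in vor.point_region]

Geometric properties of these cells, such as areas and neighbor counts, then serve as candidate predictors of subsequent dewetting behavior.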

 

In dewetting, the samples progress through various states, and we need to determine automatically which state a given image represents. To this end, we have built on statistical techniques developed by D. Naiman and C. Priebe at Johns Hopkins for analyzing medical images. They do this with Monte Carlo methods based on importance sampling for estimating the probabilities of being in various states using many normal images. Their method is a brilliant combination of importance sampling and the Bayesian approach. We have devised methods for determining the probability of states that are a combination of other states, and we have tested our approach on some simple geometric examples. A paper describing our work is being written and will soon be submitted for publication. The true test on real-world data awaits preparation and delivery of data from MSEL, which is in process.

 

Recently we have begun to extend these techniques to infrared spectral data given to us by PL.

 

 

Mathematical Problems in Construction Metrology

 

Christoph Witzgall

Javier Bernal

David Gilsinn

 

During the past decade, laser-scanning technology has developed into a major vehicle for widespread applications such as cartography, bathymetry, urban planning, object detection, and dredge volume determination, to name a few. BFRL is actively investigating the use of that technology for monitoring construction sites. Here laser scans taken from several vantage points are used to construct a surface representing a particular scene. In conjunction with the construction site terrain modeling work currently under way, another aspect of the overall project envisions that CAD-generated geometry sets will be transformed into a library of 3D construction site objects. These objects are then loaded into an augmented simulation system that tracks both equipment and resources based on real-time data from the construction site. With some future enhancements, the end result will be a world model of the site, in which as-built conditions can be assessed, current construction processes can be viewed as they occur, planned sequences of processes can be tested, and object information can be retrieved on demand. A project can be viewed and managed remotely using this tool.

 

LIDAR technology is currently being tested for locating equipment on construction sites. Three specific areas are the major concern of this project: a) a literature search for LIDAR-based object recognition technology, which has been completed and a report submitted to BFRL; b) parts tracking support and a demonstration project; and c) LIDAR bar code recognition for object identification.

 

 

LIDAR-acquired image of a pattern of 25.4 mm (1 in) reflector bar codes. Note the lower three blurred bars.

 

Blurred LIDAR image of 25.4 mm (1 in) reflector bar codes deconvolved with an averaging filter. Note ringing due to sharp data edges.


As a first step, a program has been written to display distance and intensity responses of LIDAR scans of bar code reflectance tape; it optionally writes the images out in various selectable formats: PS, EPS, JPEG, TIFF. A deconvolution program, using averaging filters to model the convolution integral, has also been written to reconstruct defocused LIDAR-scanned bar code images. A simulation has shown that exact knowledge of the beam model allows reconstruction of the bar codes. To determine the nature of a real LIDAR beam, experiments using an infrared viewing scope were conducted to observe the size and configuration of an actual beam at distances from 5 m to 40 m. The beam was found not to remain a single solid beam but to split into multiple sub-beams. These observations are currently informing the design of filters to deconvolve a set of LIDAR images, acquired by BFRL, of reflective tape simulations of bar codes. Averaging filters have nevertheless been used with some success to deconvolve some of the images.
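A schematic of this kind of averaging-filter deconvolution in one dimension is sketched below; it is not the code described above, and the boxcar width and regularization parameter are hypothetical.  The ringing noted in the figure caption is exactly the artifact such an inverse filter produces at sharp bar code edges.

    import numpy as np

    def boxcar_deconvolve(trace, width, eps=1e-2):
        # trace: 1-D LIDAR intensity profile across the bar code pattern.
        # The defocus blur is modeled as an averaging (boxcar) kernel of the
        # given width and inverted in the Fourier domain with a small
        # regularization term eps to avoid division by near-zero values.
        n = trace.size
        kernel = np.zeros(n)
        kernel[:width] = 1.0 / width
        kernel = np.roll(kernel, -(width // 2))   # center the kernel at index 0
        K = np.fft.fft(kernel)
        T = np.fft.fft(trace)
        F = np.conj(K) * T / (np.abs(K) ** 2 + eps)
        return np.real(np.fft.ifft(F))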

 

 

Representation of Terrain and Images by L1 Splines

 

David Gilsinn
Christoph Witzgall
John Lavery (Army Research Office)

 

Methods for gathering terrain data have proliferated during the past decade in both the military and commercial sectors. The rapid development of laser scanning techniques and their application to cartography, bathymetry, urban planning, and construction site monitoring, just to name a few, has resulted in a strong push for next generation computational tools for terrain representation. The value of using smooth surfaces to represent terrain has long been recognized. However, previously available smooth-surface techniques such as polynomial and rational splines, radial basis functions, and wavelets require too much data, too much computing time, or too much human interaction, and/or do not preserve shape well. Conventional smooth splines have been the main candidate for an alternative to triangular irregular networks (TINs) because of their relative computational simplicity. However, conventional smooth splines are plagued by extraneous, nonphysical oscillation.

 

Recently (1996-2000), J. Lavery of the Army Research Office (ARO) has developed and tested a new class of L1 splines (published in the journal Computer Aided Geometric Design). L1 splines provide smooth, shape-preserving, piecewise polynomial fitting of arbitrary data, including data with abrupt changes in magnitude and spacing, and are calculated by efficient interior-point algorithms (extensions of Karmarkar's algorithm). The L1 spline algorithm used in the terrain approximation code employs a special finite element with a bivariate cubic spline structure function. It is called a Sibson element and is not well documented in the literature. A NISTIR documenting the construction of a Sibson element was completed. In collaboration with J. Lavery, NIST has carried out the first steps in evaluating the accuracy and data-compression capabilities of L1 splines. The goal was to demonstrate that, on simple grids with uniform spacing, L1 splines provide more accurate and compact representation of terrain than do conventional splines and piecewise planar surfaces. The results of this work are to be published in three conference proceedings (Lavery, J.E., Gilsinn, D.E., "Multiresolution Representation of Terrain by Cubic L1 Splines", Trends in Approximation Theory, Vanderbilt University Press; Lavery, J.E., Gilsinn, D.E., "Multiresolution Representation of Urban Terrain by L1 Splines, L2 Splines and Piecewise Planar Surfaces", Proc. 22nd Army Science Conference, 11-13 December 2000, Baltimore, MD; Gilsinn, D.E., Lavery, J.E., "Shape-Preserving, Multiscale Fitting of Bivariate Data by L1 Smoothing Splines", Proc. Conf. Approximation Theory X, St. Louis, MO). The results demonstrated the superiority of the interpolative ability of L1 splines over conventional L2 splines. The superiority of L1 splines over piecewise planar interpolation depended on the measure of closeness.
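Roughly speaking (this is a schematic statement of the idea rather than the exact functional in the cited papers), an L1 spline z(x,y) minimizes the L1 norm of second derivatives,

\[ E_1(z) \;=\; \int_\Omega \big(|z_{xx}| + 2|z_{xy}| + |z_{yy}|\big)\,dx\,dy, \]

subject to the interpolation or smoothing conditions, whereas a conventional smooth (L2) spline minimizes the corresponding quadratic functional

\[ E_2(z) \;=\; \int_\Omega \big(z_{xx}^2 + 2z_{xy}^2 + z_{yy}^2\big)\,dx\,dy. \]

Because the L1 functional penalizes large curvature only linearly, its minimizers can follow abrupt changes in the data without the overshoot that quadratic penalties induce; the price is a nonsmooth optimization problem, which is why the interior-point, linear-programming-style algorithms mentioned above are needed.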

 

Comparisons of the performance of L1 splines vs. that of piecewise planar surfaces and of conventional smooth splines have been carried out on sets of open terrain data, such as Ft. Hood DTED data, which include irregularly curved surfaces, steep hillsides and cliffs as well as flat areas (plateaus or bodies of water), and urban terrain data, such as data for downtown Baltimore. The metrics for the comparison are 1) the amount of storage required for meshes and spline parameters, and 2) the accuracy of the representation as measured by RMS error and maximum error. L1 splines will be compared with conventional techniques not only for fitting terrain data that has been "rectified" to regular grids (a standard, but error-rich, step in current modeling systems) but also for fitting irregularly spaced "raw" terrain data. Numerical experiments have also been undertaken with the application of smoothing L1 splines to decomposed portions of a larger image, with the intent of stitching the individual splines together in order to recompose the larger image. The resulting spline coefficients at overlapping cells of the subimages were remarkably similar. This initially indicated the potential success of recomposing large images from subimages for which L1 smoothing splines can be computed rapidly through parallel processing. Due to uncertainties about the methods used to prepare certain urban data sets obtained from imaging sources, simulated urban terrain data was created without noise or image uncertainties. L1 smoothing spline approximation then demonstrated the clear difference between conventional spline and L1 approximations in that the Gibbs phenomenon at sharp discontinuities was clearly visible for conventional splines. The L1 smoothing spline code was also tested on several simulated urban data sets with buildings that included curved sides, quadratic function roofs, as well as slanted roofs.

 

 


L1 spline approximation of a simulated urban building complex. Note the sharp edge approximation.

 

L2 spline approximation of a simulated urban building complex. Note the Gibbs phenomena at the edges of the buildings.

 

 

 

 

Computer Graphic Rendering of Material Surfaces

 

Fern Hunt

Maria Nadal (NIST PL)

Gary Meyer (University of Oregon)

Harold Westlund (University of Oregon)

Michael Metzler (ISCIENCES Corporation)

                                                

   http://math.nist.gov/~FHunt/webpar4.html

 

For some years, computer programs have produced images of scenes based on a simulation of scattering and reflection of light off one or more surfaces in the scene. In response to increasing demand for the use of rendering in design and manufacturing, the models used in these programs have undergone intense development. In particular, more physically realistic models are sought (i.e., models that more accurately depict the physics of light scattering). However there has been a lack of relevant measurements needed to complement the modeling. As part of a NIST competency project entitled "Measurement Science for Optical Reflectance and Scattering", F. Hunt is coordinating the development of a computer rendering system that utilizes high quality optical and surface topographical measurements performed here at NIST. The system will be used to render physically realistic and potentially photorealistic images. Success in this and similar efforts can pave the way to computer based prediction and standards for appearance that can assure the quality and accuracy of products as they are designed, manufactured and displayed for electronic marketing.

 

The work of the past year has focused on the application of the enhanced rendering program iBRDF, which broadens the range of models and optical measurements that can be used to produce computer graphic images of surfaces. This program was developed by Gary Meyer and his student Harold Westlund of the University of Oregon as part of the competency project. F. Hunt worked with Meyer and Westlund on a quantitative evaluation of a selected set of rendered images. These images were compared with the optical measurements performed by Maria Nadal of the Physics Laboratory. Nadal used a measurement protocol worked out with Michael Metzler and Hunt. The protocol is set up so that measurements can be used to parameterize the Beard-Maxwell model for optical scattering, and is based on the protocol used in a government database. The objects measured were two metal panels painted with gray metallic paint, one paint consisting of large metallic flakes while the other contained small flakes. The goal of this exercise was to establish a metrological basis for a difference in appearance. The figure below shows a digital photograph of the two panels painted with the metallic paints positioned inside a lightbox. The panels are illuminated by lights in the ceiling of the box. The figure also shows a rendering of the panels and the box based on optical measurements of the panel and the walls of the box. The calculations assumed that the lights in the ceiling provided a diffuse and uniform illumination of the samples. Numerical comparison showed good agreement between the model and the measurements that were used to define the parameters of the model and to validate it for out-of-plane measurements. Radiance measurements of the panels were also compared with radiance values calculated from the rendering model. Here there was less agreement because the actual light source was in fact quite non-uniform. The simulation did not capture the sudden decrease in sample radiance as the sample is rotated from 45 to 60 degrees with respect to the normal of the floor (the so-called "flop"). When a single light source was assumed in the calculation (reproducing the source used in the laboratory), flop was observed in the calculated radiance values.

 

The project officially ended in fiscal year 2001. Westlund, Hunt and Meyer are working on a web site that gives an account of the rendering work done during the project. We will also make NIST scattering measurements available to the rendering community.

 

F. Hunt gave an invited presentation, entitled "Digital Rendering of Surfaces", at the ACREO AB Microelectronics and Optics Conference in Kista, Sweden on October 29. Harold Westlund and Gary Meyer presented their work at SIGGRAPH 2001 in Los Angeles, CA, and at a EuroGraphics workshop.

 

 

Digital photo of a lightbox (left) and a rendered image (right).

 

 

Monte Carlo Methods for Combinatorial Counting Problems

 

Isabel Beichl

Dianne O'Leary

Francis Sullivan (IDA/CCS)

 

This year, new techniques and software have been developed to estimate the number of independent sets in a graph. A graph is a set of vertices with a set of connections between some of the vertices. An independent set is a subset of the vertices, no two of which are connected.
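A minimal sketch conveys the flavor of Monte Carlo counting.  Choosing any vertex v, the count N(G) satisfies N(G) = N(G - v) + N(G - v - N(v)), the two terms counting the independent sets that exclude and include v.  The estimator below follows one random branch of this recursion and multiplies by 2 at each step, giving an unbiased (though high-variance) estimate; it is a textbook-style illustration, not the importance-sampling method developed in this project, and the graph representation is hypothetical.

    import random

    def one_sample(adj):
        # adj: dict mapping each vertex to the set of its neighbors.
        remaining = set(adj)
        estimate = 1
        while remaining:
            v = next(iter(remaining))
            if random.random() < 0.5:
                remaining.discard(v)              # branch: v excluded
            else:
                remaining -= adj[v] | {v}         # branch: v included, neighbors removed
            estimate *= 2
        return estimate

    # Averaging many such samples estimates the number of independent sets;
    # importance sampling and stratification reduce the variance enough to
    # make estimates of this kind practical on large graphs.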

 

The problem of counting independent sets arises in data communications, in thermodynamics and in graph theory itself. In data communications it is closely related to issues of reliability of networks. In brief, if failure probabilities are assigned to links, the new methods can be used to estimate the failure probability of the entire network. I. Beichl has consulted with Leonard Miller in the ITL Advanced Networking Technologies Division about applications to network reliability. They believe that the combinatorial counting techniques can be extended to estimate the probability of network failure for very large graphs.

 

Physicists have used estimates of the number of independent sets to estimate the hard sphere entropy constant, which can be formulated as an independent set problem. This constant is now known analytically in 2D, but no analytical result is known in 3D. Beichl, O'Leary and Sullivan have been able to use their approach to estimate the constant for a 3D cubic lattice. They are now working on the case of an FCC lattice.

 

I. Beichl, in collaboration with guest researcher F. Sullivan, also discovered that stratified sampling can be used to enhance this program. Stratified sampling is a Monte Carlo technique that divides choices into strata and requires that one sample from each stratum be chosen, if possible. They found that with this technique the independent set program could be improved so that many fewer samples are needed. Isabel Beichl, Dianne O'Leary and Francis Sullivan are investigating the connection between this method and standard Markov chain methods for estimating the number of independent sets in a graph.

 

I. Beichl gave eight invited talks on these Monte Carlo methods in the last year.  The team was also invited to make a presentation on this subject at the annual American Mathematical Society meeting.

 

 

Time-Domain Algorithms for Computational Electromagnetics

 

Bradley Alpert

Andrew Dienstfrey

Leslie Greengard (New York University)

Thomas Hagstrom (University of New Mexico)

 

Acoustic and electromagnetic waves, including radiation and scattering phenomena, are increasingly modeled using time-domain computational methods, due to their flexibility in handling wide-band signals, material inhomogeneities, and nonlinearities. For many applications, particularly those arising at NIST, the accuracy of the computed models is essential. Existing methods, however, typically permit only limited control over accuracy; high accuracy generally cannot be achieved for reasonable computational cost.

 

Applications that require modeling of electromagnetic (and acoustic) wave propagation are extremely broad, ranging over device design, for antennas and waveguides, microcircuits and transducers, and low-observable aircraft; nondestructive testing, for turbines, jet engines, and railroad wheels; and imaging, in geophysics, medicine, and target identification. At NIST, applications include the modeling of antennas (including those on integrated circuits), waveguides (microwave and photonic), transducers, and in nondestructive testing.

 

The objective of this project is to advance the state of the art in electromagnetic computations by eliminating three existing weaknesses with time-domain algorithms for computational electromagnetics to yield: (1) accurate nonreflecting boundary conditions (that reduce an infinite physical domain to a finite computational domain), (2) suitable geometric representation of scattering objects, and (3) high-order convergent, stable spatial and temporal discretizations for realistic scatterer geometries. The project is developing software to verify the accuracy of new algorithms and reporting these developments in publications and at professional conferences.
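To fix ideas about item (1): for the scalar wave equation

\[ u_{tt} \;=\; c^2\,\Delta u \]

posed on an unbounded region, the computational domain must be truncated by an artificial boundary.  The simplest absorbing condition, shown here only for orientation (it is the classical first-order condition, not one of the high-order exact conditions developed in this project), is

\[ \frac{\partial u}{\partial t} \;+\; c\,\frac{\partial u}{\partial n} \;=\; 0 \qquad \text{on the artificial boundary}, \]

which is exact only for plane waves hitting the boundary at normal incidence.  Waves arriving obliquely are partially reflected back into the computational domain, and it is precisely this spurious reflection, and its effect on achievable accuracy, that exact nonreflecting boundary conditions are designed to eliminate.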

 

This year the paper "Lattice Sums and the Two-Dimensional, Periodic Green's Function for the Helmholtz Equation" by Dienstfrey, Hang, and Huang (Proc. Roy. Soc. Lond. A 457, 67-85, 2001), which treats the solution of problems in periodic media, appeared.  The paper "Nonreflecting Boundary Conditions for the Time-Dependent Wave Equation" by Alpert, Greengard, and Hagstrom, submitted for publication, demonstrates the efficacy of the recently developed nonreflecting boundary conditions through their implementation in wave-propagation software, and compares them to the perfectly matched layer (PML) technique due to Berenger.  In addition, this year the project continued to investigate discretization issues that arise in complicated geometry, leading to new quadrature and interpolation techniques still under development.

 

The work of the project is supported in part by the Defense Advanced Research Projects Agency (DARPA).  The work has been recognized by researchers developing methods for computational electromagnetics (CEM) and has influenced work on these problems at Boeing and HRL (formerly Hughes Research Laboratories). It has also influenced researchers at Yale University and University of Illinois. In each of these cases, new research in time-domain CEM is exploiting discoveries of the project. In particular, some efforts for the new DARPA program on Virtual Electromagnetic Testrange (VET) are incorporating these developments. We expect that design tools for the microelectronics industry and photonics industry, which increasingly require accurate electromagnetics modeling, will also follow.

 

 

Micromagnetic Modeling

 

Michael Donahue

Donald Porter

Robert McMichael (NIST MSEL)

Jason Eicke (George Washington University)

 

http://math.nist.gov/oommf/

http://www.ctcms.nist.gov/~rdm/mumag.html

 

The engineering of such IT storage technology as patterned magnetic recording media, GMR sensors for read heads, and magnetic RAM (MRAM) elements requires an understanding of magnetization patterns in magnetic materials at the nanoscale.  Mathematical models are required to interpret measurements at this scale.  The Micromagnetic Modeling Activity Group (muMAG) was formed to address fundamental issues in micromagnetic modeling through two activities: the definition and dissemination of standard problems for testing modeling software, and the development of public domain reference software.  MCSD staff is engaged in both of these activities.  The Object-Oriented MicroMagnetic Framework (OOMMF) software package is a reference implementation of micromagnetic modeling software.   Achievements in this area since October 2000 include the following.
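For background, the model equation underlying all of this work is the Landau-Lifshitz-Gilbert (LLG) equation for the magnetization M(x,t),

\[ \frac{\partial \mathbf{M}}{\partial t} \;=\; -\gamma\, \mathbf{M}\times\mathbf{H}_{\mathrm{eff}} \;+\; \frac{\alpha}{M_s}\, \mathbf{M}\times\frac{\partial \mathbf{M}}{\partial t}, \]

where \gamma is the gyromagnetic ratio, \alpha the dimensionless damping parameter, M_s the saturation magnetization, and H_eff the effective field collecting exchange, anisotropy, magnetostatic, and applied-field contributions.  Micromagnetic codes such as OOMMF integrate this equation over a discretized sample.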

 

Software Releases

 

 

Standard Problems

 

 

Supporting the micromagnetics community

 

    

Scientific contributions

 

 

 

OOF: Finite Element Analysis of Material Microstructures

 

Stephen Langer

Andrew Reid (MIT/NIST)

Andrew Roosen (NIST MSEL)

Edwin Fuller (NIST MSEL)

Craig Carter (MIT)

Edwin Garcia (MIT)

Robert Kang-Xing Jin (Montgomery Blair High School)

 

http://www.ctcms.nist.gov/oof/

 

The OOF project, a collaborative venture of MCSD, MSEL's Ceramics Division, the Center for Theoretical and Computational Materials Science, and MIT, is developing software tools for analyzing real material microstructure.  The microstructure of a material is the (usually) complex ensemble of polycrystalline grains, second phases, cracks, pores, and other features occurring on length scales large compared to atomic sizes.  The goal of OOF is to use data from a micrograph of a real material to compute the macroscopic behavior of the material via finite element analysis.

 

OOF is composed of two programs, oof and ppm2oof, which are available as binary files and as source code on the OOF website.  From December 2000 through November 2001, ppm2oof was downloaded 1369 times and oof 1182 times.  The source code was downloaded 610 times, and a conversion program, oof2abaqus, was downloaded 141 times.  The OOF mailing list (as of 12/4/01) has 268 members.

 

In June 2001, an OOF workshop was held at NIST.  Approximately 70 researchers from nine corporations, four government laboratories, 18 universities, and five countries attended.  The OOF developers presented the current state of the software and plans for the future, while the users spoke about the numerous ways in which the software is being used.  Topics ranged from ceramic coatings on turbine blades to marble degradation and paint blistering.  A final discussion session provided useful feedback for further code development.

 

Technical achievements during FY01 included:

 

 

 

Mathematical Modeling of Solidification

 

Geoffrey B. McFadden

William Boettinger (NIST MSEL)

John Cahn (NIST MSEL)

Sam Coriell (NIST MSEL)

Jonathan Guyer (NIST MSEL)

James Warren (NIST MSEL)

Daniel Anderson (George Mason University)

B. Andrews (University of Alabama)

Richard Braun (University of Delaware)

Bruce Murray (SUNY Binghamton)

Robert Sekerka (Carnegie Mellon University)

G. Tonaglu (Izmir Institute of Technology, Turkey)

Adam Wheeler (University of Southampton, UK)

 

Mathematical modeling provides a valuable means of understanding and predicting the properties of materials as a function of the processing conditions by which they are formed. During the growth of alloy crystals from the melt, the homogeneity of the solid phase is strongly influenced by conditions near the solid-liquid interface, both in terms of the geometry of the interface and the arrangements of the local temperature and solute fields near the interface. Instabilities that occur during crystal growth can cause spatial inhomogeneities in the sample that can significantly degrade the mechanical and electrical properties of the crystal. Considerable attention has been devoted to understanding and controlling these instabilities, which generally include interfacial and convective modes that are challenging to model by analytical or computational techniques.

 

A well-established collaborative effort between the Mathematical and Computational Sciences Division and the Metallurgy Division of the Materials Science and Engineering Laboratory has included support from the NASA Microgravity Research Program as well as extensive interaction with university and industrial researchers. In the past year a number of projects have been undertaken that address outstanding issues in materials processing through mathematical modeling.

 

G. McFadden collaborated with W. Boettinger, J. Warren, and J. Guyer (MSEL) on an extension of recently developed diffuse-interface models of solidification to include electrical effects during deposition processes. The resulting model of electrodeposition is intended to treat the free boundary between the electrolyte and metal electrode, and includes equations for charged species and electrical potential.

 

G. McFadden collaborated with John Cahn (MSEL), R. Braun (U. Delaware), and G. Tonaglu (Izmir Institute of Technology, Turkey) on a model of order-disorder transitions in a face-center-cubic binary alloy. This work includes improved models for the free energy of the system, which lead to more accurate representations of the solid-state phase transitions that occur in such materials. The work has been submitted for publication in Acta Materialia.

 

G. McFadden is also a participant in a new project on the evolution and self-assembly of nanoscale quantum dots, in collaboration with researchers at Northwestern University.  Self-assembly of quantum dots, which offer interesting electronic properties through quantum confinement of electrons, can be achieved spontaneously during heteroepitaxy (controlled deposition of one material upon another). In this project, the effects of anisotropic surface energy and substrate elasticity will be studied in order to understand the underlying physics and nonlinear nature of the dynamics of the self-organization process.

 

Other related work in this period included collaboration with Professor R. Sekerka, Carnegie Mellon University, on a model of dendritic growth for two-component metallic alloys, which has been written up as a short note. A short review on applications of stability theory for a solid-liquid interface in collaboration with S. Coriell (MSEL) is also in press. McFadden also collaborated with Coriell and B. Murray (SUNY Binghamton) on developing a model for interfacial instabilities during the cooperative growth of monotectic materials. This work is in support of research by B. Andrews, University of Alabama in Birmingham, who is planning an experiment in monotectic growth on board the US Space Station. McFadden also hosted an extended visit by Professor A. Wheeler, University of Southampton, UK. McFadden and Wheeler completed a study of the Gibbs adsorption equation in the context of diffuse interface theory. The Gibbs adsorption equation provides a description of the dependence of the interfacial surface energy on other thermodynamical parameters in the system. A manuscript describing this work has been accepted for publication in the Proceedings of the Royal Society of London.

 

G. McFadden was elected a Fellow of the American Physical Society in the fall of 2001 by its Fluid Dynamics Branch in recognition of his contributions to the modeling of crystal growth.  In addition, two long-standing MCSD collaborators received prestigious external recognition for their research in materials science this year. W. Boettinger received the 2001 Bruce Chalmers Award of the Materials Processing and Manufacturing Division of The Minerals, Metals and Materials Society (TMS) for "showing how fundamental thermodynamic and kinetic models, with modern computational power, lead directly to quantitative predictions of the microstructures generated by solidification." S. Coriell received the 2001 F.C. Frank Award of the International Organization for Crystal Growth, which was presented in Kyoto, Japan in July 2001.

 

Numerical Simulation of Axisymmetric Dendritic Crystals

 

Katharine Gurski

Geoffrey McFadden

 

Dendritic growth is commonly observed during many materials processing techniques, including the casting of alloys for industrial applications.

 

The prediction of the associated length scales and degrees of segregation for dendritic growth is essential to the design and control of materials processing technology.  We are developing numerical methods for the solution of axisymmetric boundary integral equations for applications of potential theory in materials science, including dendritic growth. The goal is to create a stable, computationally feasible numerical simulation of axisymmetric dendrite growth for a pure material.  Our efforts are directed toward the removal of computational difficulties that have plagued previous attempts to create a model of more than two dimensions by using a sharp interface model with an axisymmetric boundary integral method that incorporates fast algorithms and iterative solvers.

 

This project is in the early developmental stage. We are still investigating effects of different polynomial approximations and integration methods on the stability of the numerical method.

 

 

Machining Process Metrology, Modeling and Simulation

 

Timothy Burns
Debasis Basak (NIST MSEL)
Matthew Davies (NIST MEL)
Brian Dutterer (NIST MEL)
Richard Fields (NIST MSEL)
Michael Kennedy (NIST MEL)
Lyle Levine (NIST MSEL)
Robert Polvani (NIST MEL)
Richard Rhorer (NIST MEL)
Tony Schmitz (NIST MEL)
Howard Yoon (NIST PL)

 

This is an ongoing collaboration on the modeling and measurement of machining processes with researchers in the Manufacturing Process Metrology Group in the Manufacturing Metrology Division (MMD) in MEL. The mission of MMD is to fulfill the measurements and standards needs of the U.S. discrete-parts manufacturers in mechanical metrology and advanced manufacturing technology.

 

Most manufacturing operations involve the plastic working of material to produce a finished component. One way to classify these plastic deformation processes is by the order of magnitude of the rate of deformation, or strain rate. Forming, rolling, and drawing involve relatively low strain rates (< 10^3 s^-1), while high-speed stamping, punching and machining can involve strain rates as high as 10^6 s^-1 or more. Annual U.S. expenditures on machining operations alone total more than $200B, or about 2% of the Gross Domestic Product (GDP). Currently, process parameters are chosen by costly trial-and-error prototyping, and the resulting choices are often sub-optimal. A recent survey by the Kennametal Corporation has found that industry chooses the correct tool less than 50% of the time.

 

Pressure from international competition is driving industry to seek more sophisticated and cost-effective means of choosing process parameters through modeling and simulation. While there has been significant progress in the predictive simulation of low-strain-rate manufacturing processes, there is presently a need for better predictive capabilities for high-rate processes. The main limitations are current measurement capabilities and lack of good material response data. Thus, while commercial finite-element software provides impressive qualitative results, data to validate these results are nearly nonexistent. Without serious advances in metrology, it is likely that industry will lose faith in this approach to modeling.

 

 

 

The main goal of our current efforts, which are in the second year of a three-year program supported in large part by intramural ATP funding, is to develop the capability to obtain and validate the material response data that are critical for accurate simulation of high-strain-rate manufacturing processes. Although the focus of this project is machining, the material response data will be broadly applicable. Success in this project will advance the state-of-the-art in two areas: (1) fundamental advanced machining metrology and simulation; and (2) measurement of fundamental data on the behavior of materials at high strain rates (material-response-data) needed for input into machining (and more broadly mechanical manufacturing) simulations. A longer-term, higher-risk objective of this effort is the development of new test methods that use idealized machining configurations to measure high-strain-rate material response.

 

Related work this year has involved research with M.A. Davies and T.L. Schmitz in MEL on the analysis of the stability of high-speed machining operations in which the tool contacts the workpiece only intermittently.

 

 

Modeling and Computational Techniques for Bioinformatics

 

Fern Y. Hunt

Anthony J. Kearsley

Agnes O’Gallagher

Honghui Wan (National Center for Biotechnology Information, NIH)

Antti Pesonen (VTT, Helsinki Finland)

Daniel J. Cardy (Montgomery Blair High School)

 

http://math.nist.gov/~FHunt/GenPatterns/

 

Computational biology is currently experiencing explosive growth in its technology and industrial applications. Mathematical and statistical methods dominated the development of the field, but as the emphasis on high throughput experiments and analysis of genetic data continues, computational techniques have also become essential. We seek to develop generic tools that can be used to analyze and classify protein and base sequence patterns that signal potential biological functionality.

 

Database searches of protein sequences are based on algorithms that find the best matches to a query sequence, returning both the matches and the query in a linear arrangement that maximizes the underlying similarity between the constituent amino acid residues. Dynamic programming is used to create such an arrangement, known as an alignment. Very fast algorithms exist for aligning two or more sequences if the possibility of gaps is ignored. Gaps are hypothesized insertions or deletions of amino acids that express mutations that have occurred over the course of evolution. The alignment of sequences with such gaps remains an enormous computational challenge. We are currently experimenting with an alternative approach based on Markov decision processes. The optimization problem associated with alignment then becomes a linear programming problem, which is amenable to powerful and efficient solution techniques. Taking a database of protein sequences (cytochrome p450) as a test case, we have developed a method of using sequence statistics to build a Markov decision model, and currently the model is being used to solve the linear program for a variety of cost functions. We are creating software for multiple sequence alignment based on these ideas.
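For orientation, the classical dynamic-programming computation of a global alignment score for two sequences with a linear gap penalty is sketched below (the scoring values are hypothetical, and this is the textbook Needleman-Wunsch scheme rather than the Markov-decision-process formulation described above).

    def alignment_score(a, b, match=1, mismatch=-1, gap=-2):
        # Global alignment score of sequences a and b by dynamic programming.
        n, m = len(a), len(b)
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap                 # align a[:i] against gaps
        for j in range(1, m + 1):
            score[0][j] = j * gap                 # align b[:j] against gaps
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = match if a[i - 1] == b[j - 1] else mismatch
                score[i][j] = max(score[i - 1][j - 1] + sub,   # substitution
                                  score[i - 1][j] + gap,       # gap in b
                                  score[i][j - 1] + gap)       # gap in a
        return score[n][m]

The cost of filling the table grows with the product of the sequence lengths, which indicates why generalizing the computation to many sequences with gaps is so expensive and why alternative formulations, such as the linear programming approach above, are attractive.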

 

Work has also continued on another project involving the program GenPatterns. The software computes and visually displays DNA or RNA subsequence frequencies and their recurrence patterns. Bacterial genomes and chromosome data can be downloaded from GENBANK and computations can be performed and displayed using a variety of user options including creating Markov models of the data. A demonstration can be found at the project website.

 

GenPatterns and the software developed from the alignment project are now part of the NIST Bioinformatics/Computational Biology software website currently being constructed under the direction of T.N. Bhat of the Chemical Science and Technology Laboratory (CSTL).


 

2.2.        Mathematical Software

 

 

 

Sparse BLAS Standardization

 

Roldan Pozo

BLAS Technical Forum

 

    http://math.nist.gov/spblas/

http://www.netlib.org/blas/blast-forum/

 

NIST is playing a leading role in the new standardization effort for the Basic Linear Algebra Subprograms (BLAS), kernels for computational linear algebra. The BLAS Technical Forum (BLAST) is coordinating this work. BLAST is an international consortium of industry, academia, and government institutions, including Intel, IBM, Sun, HP, Compaq/Digital, SGI/Cray, Lucent, Visual Numerics, and NAG.

 

One of the most anticipated components of the new BLAS standard is support for sparse matrix computations. R. Pozo chairs the Sparse BLAS subcommittee. NIST was the first to develop and release public-domain reference implementations for early versions of the standard; this experience helped shape the final standard, which was released this year.

 

The new BLAS standard, which includes the Sparse BLAS component, has been finalized and was submitted to the International Journal of High Performance Computing Applications. Several companion papers on the implementation and design of the new BLAS were submitted to ACM Transactions on Mathematical Software. Implementations of the Sparse BLAS in Fortran 95 are available on the Web, and a C implementation is currently being developed.
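
The central operation specified by the Sparse BLAS is the product of a sparse matrix with a dense vector. The following Python sketch of that kernel, for a matrix stored in compressed sparse row (CSR) form, is for illustration only; it is not the NIST reference implementation, which follows the interfaces defined in the standard.

    def csr_matvec(values, col_index, row_ptr, x):
        """Multiply a CSR-format sparse matrix by the dense vector x."""
        n_rows = len(row_ptr) - 1
        y = [0.0] * n_rows
        for i in range(n_rows):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += values[k] * x[col_index[k]]
        return y

    # The 3x3 matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]] in CSR form:
    values    = [2.0, 1.0, 3.0, 4.0, 5.0]
    col_index = [0, 2, 1, 0, 2]
    row_ptr   = [0, 2, 3, 5]
    print(csr_matvec(values, col_index, row_ptr, [1.0, 1.0, 1.0]))   # [3.0, 3.0, 9.0]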

 

 

TNT: Object Oriented Numerical Programming

 

Roldan Pozo

 

http://math.nist.gov/tnt/

 

NIST has a history of developing some of the most visible object-oriented linear algebra libraries, including Lapack++, Iterative Methods Library (IML++), Sparse Matrix Library (SparseLib++), Matrix/Vector Library (MV++), and most recently the Template Numerical Toolkit (TNT).

 

TNT incorporates many of the ideas we have explored with previous designs, and includes new techniques that were difficult to support before the ANSI C++ standardization. The library includes support for both C and Fortran array layouts, array sections, basic linear algebra algorithms (LU, Cholesky, QR, and eigenvalues) as well as primitive support for sparse matrices.

 

TNT has enjoyed several thousand downloads and is currently in use in several industrial applications. This year there were two software updates to the TNT package, as well as ongoing development of a new array interface for multidimensional arrays compatible with C and Fortran storage layouts.

 

 

Parallel Adaptive Refinement and Multigrid Finite Element Methods

 

William F. Mitchell

 

Finite element methods using adaptive refinement and multigrid techniques have been shown to be very efficient for solving partial differential equations on sequential computers. Adaptive refinement reduces the number of grid points by concentrating the grid where the solution requires the most resolution, and multigrid methods solve the resulting linear systems in an optimal number of operations. W. Mitchell has been developing a code, PHAML, to apply these methods on parallel computers. The expertise and software developed in this project are useful for many NIST laboratory programs, including material design, semiconductor device simulation, and the quantum physics of matter.
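
The following toy Python sketch illustrates the adaptive refinement idea (it is not PHAML): one-dimensional elements whose error indicator exceeds a fraction of the largest indicator are bisected, so that grid points accumulate where the sample function varies most rapidly. The indicator, the sample function, and the refinement threshold are illustrative assumptions.

    import math

    def error_indicator(a, b):
        # Illustrative indicator: variation of a steep sample function over [a, b].
        f = lambda x: math.tanh(50.0 * (x - 0.5))
        return abs(f(b) - f(a))

    def refine(elements, threshold=0.5):
        """Bisect every element whose indicator exceeds threshold * (max indicator)."""
        indicators = [error_indicator(a, b) for a, b in elements]
        cutoff = threshold * max(indicators)
        refined = []
        for (a, b), eta in zip(elements, indicators):
            if eta > cutoff:
                mid = 0.5 * (a + b)
                refined.extend([(a, mid), (mid, b)])   # bisect
            else:
                refined.append((a, b))                 # keep as is
        return refined

    mesh = [(i / 10.0, (i + 1) / 10.0) for i in range(10)]
    for _ in range(4):
        mesh = refine(mesh)
    print(len(mesh), "elements; smallest element width:", min(b - a for a, b in mesh))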

 

This year saw three major activities on this project. The first is a collaboration with Sandia National Laboratories to develop Zoltan, a dynamic load balancing library. NIST's contributions to Zoltan are the implementation of a Fortran 90 interface to the library and the implementation of the K-Way Refinement Tree (RTK) partitioning method, which was developed as part of PHAML. Second is the completion of an initial version of the PHAML software to be released to the public. Third is the application of PHAML to solve Schrödinger's equation in collaboration with the Quantum Processes group of NIST's Atomic Physics Division.

 

 

Java Numerics

 

Ronald Boisvert

Roldan Pozo

Bruce Miller

 

http://math.nist.gov/javanumerics/

http://math.nist.gov/scimark/

 

Java, a network-aware programming language and environment developed by Sun Microsystems, has already made a huge impact on the computing industry. Recently there has been increased interest in the application of Java to high performance scientific computing. MCSD is participating in the Java Grande Forum (JGF), a consortium of companies, universities, and government labs who are working to assess the capabilities of Java in this domain, and to provide community feedback to Sun on steps that should be taken to make Java more suitable for large-scale computing. The JGF is made up of two working groups: the Numerics Working Group and the Concurrency and Applications Working Group. The former is co-chaired by R. Boisvert and R. Pozo of MCSD. Among the institutions participating in the Numerics Working Group are: IBM, Intel, Least Squares Software, NAG, Sun, Visual Numerics, Waterloo Maple, Florida State University, the University of Karlsruhe, the University of Tennessee at Knoxville, and the University of Westminster.

 

Earlier recommendations of the Numerics Working Group were instrumental in the adoption of a fundamental change in the way floating-point numbers are processed in Java. This change will lead to significant speedups for Java code running on Intel microprocessors such as the Pentium. The working group also advised Sun on the specification of the elementary functions in Java, which led to improvements in Java 1.3. The specification of the elementary functions was relaxed to tolerate errors of up to one unit in the last place, permitting more efficient implementations to be used. A companion class, java.lang.StrictMath, was introduced to provide strictly reproducible results.

 

The Numerics Working Group has now begun work on a series of formal Java Specification Requests for language extensions, including a fast floating-point mode and a standardized class and syntax for multidimensional arrays.

 

This year, MCSD staff presented the findings of the Working Group in a variety of forums.

 

MCSD staff also worked on the organization of a number of events related to Java.

 

Boisvert and Pozo were co-authors with José Moreira (IBM) and Michael Philippsen (University of Karlsruhe) of an invited survey article on numerical computing in Java, which appeared in the March/April 2001 issue of Computing in Science and Engineering.

 

The NIST SciMark benchmark continues to be widely used. SciMark includes computational kernels for FFTs, SOR, Monte Carlo integration, sparse matrix multiply, and dense LU factorization, comprising a representative set of computational styles commonly found in numeric applications. SciMark can be run interactively from Web browsers, or can be downloaded and compiled for stand-alone Java platforms. Full source code is provided. The SciMark result is reported as megaflop rates for the numerical kernels, as well as an aggregate score for the complete benchmark. The current database lists results for more than 1300 computational platforms, from laptops to high-end servers. As of December 2001, the record for SciMark is 275 Mflops, a 68% improvement over the best result reported one year ago (164 Mflops).
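
The following rough Python sketch illustrates how a kernel score can be expressed as a megaflop rate in the SciMark manner; the kernel, the assumed operation count per sample, and the timing approach are illustrative assumptions, not the SciMark code (which is written in Java).

    import random, time

    def monte_carlo_pi(samples):
        hits = 0
        for _ in range(samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:        # assume ~4 floating-point operations per sample
                hits += 1
        return 4.0 * hits / samples

    n = 1_000_000
    start = time.perf_counter()
    pi_est = monte_carlo_pi(n)
    elapsed = time.perf_counter() - start
    mflops = 4.0 * n / elapsed / 1.0e6      # (assumed flop count) / elapsed time
    print(f"pi ~ {pi_est:.5f}; roughly {mflops:.1f} Mflops for this kernel")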

 

NIST continues to distribute the JAMA linear algebra class for Java that it developed in collaboration with the MathWorks several years ago. More than 8,000 copies of this software have been downloaded from the NIST web site.

 

Boisvert and Pozo received a Department of Commerce Bronze medal in December 2001 in recognition of their leadership in this area.

 

 

Information Services for Computational Science

 

Ronald Boisvert

Joyce Conlon

Marjorie McClain

Bruce Miller

Roldan Pozo

http://math.nist.gov/

http://gams.nist.gov/

 http://math.nist.gov/MatrixMarket/

 

MCSD continues to provide Web-based information resources to the computational science research community. The first of these is the Guide to Available Mathematical Software (GAMS). GAMS is a cross-index and virtual repository of some 9,000 mathematical and statistical software components of use in science and engineering research. It catalogs software, both public domain and commercial, that is supported for use on NIST central computers by ITL, as well as software assets distributed by netlib. While the principal purpose of GAMS is to provide NIST scientists with information on software available to them, the information and software it provides are of great interest to the public at large. GAMS users locate software via several search mechanisms. The most popular of these is the GAMS Problem Classification System, a tree-structured taxonomy of standard mathematical problems that can be solved by extant software. The classification system has also been adopted for use by major math software library vendors.

 

A second resource provided by MCSD is the Matrix Market, a visual repository of matrix data used in the comparative study of algorithms and software for numerical linear algebra. The Matrix Market database contains more than 400 sparse matrices from a variety of applications, along with software to compute test matrices in various forms. A convenient system for searching for matrices with particular attributes is provided. The web page for each matrix provides background information, visualizations, and statistics on matrix properties.
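
Matrices from the Matrix Market are distributed in a simple text format that many tools can read. As an illustration, the following Python sketch uses SciPy's Matrix Market reader; the file name is a placeholder for any matrix downloaded from the collection.

    from scipy.io import mmread

    # "example.mtx" is a placeholder; any matrix file downloaded from the
    # Matrix Market can be used here.
    A = mmread("example.mtx").tocsr()
    rows, cols = A.shape
    print("size:", rows, "x", cols, " stored nonzeros:", A.nnz)
    print("density: %.4f%%" % (100.0 * A.nnz / (rows * cols)))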

 

Web resources developed by MCSD continue to be among the most popular at NIST. The MCSD Web server at math.nist.gov has serviced more than 38 million Web hits since its inception in 1994 (9 million of which occurred in the past year). The Division server regularly handles more than 11,000 requests for pages each day, serving more than 40,000 distinct hosts on a monthly basis. AltaVista has identified approximately 10,000 external links to the Division server. The top seven ITL Web sites are all services offered by MCSD:

 

  1. NIST Math Portal
  2. Matrix Market
  3. Guide to Available Mathematical Software
  4. Division home page
  5. ACM Transactions on Mathematical Software
  6. Digital Library of Mathematical Functions
  7. Template Numerical Toolkit

 

The GAMS home page is downloaded more than 25,000 times per month by some 15,000 distinct hostnames. During a recent 36-month period, 34 prominent research-oriented companies in the .com domain registered more than 100 visits apiece to GAMS. The Matrix Market sees more than 100 users each day. It has distributed more than 35 Gbytes of matrix data, including nearly 100,000 matrices, since its inception.  The Matrix Market is mirrored in Japan and Korea. GAMS has a Korean mirror.

 


 

2.3.        High Performance Computing and Visualization

 

 

 

Interoperable MPI Standard

 

William George

John Hagedorn

Judith Devaney

 

    http://impi.nist.gov/

 

The Message Passing Interface (MPI) is the de facto standard for writing parallel scientific applications in the message-passing programming paradigm. MPI suffers from two limitations: lack of interoperability among vendor MPI implementations and lack of fault tolerance. For long-term viability, MPI needs both. The Interoperable MPI protocol (IMPI) standard addresses the interoperability issue. It extends the power of MPI by allowing applications to run on heterogeneous clusters of machines with various architectures and operating systems, each of which in turn can be a parallel machine, while allowing the program to use a different implementation of MPI on each machine. This is accomplished without requiring any modifications to the existing MPI specification. That is, IMPI does not add, remove, or modify the semantics of any of the existing MPI routines. All current valid MPI programs can be run in this way without any changes to their source code.
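
As an illustration of this point, the short program below (written with the mpi4py Python bindings as a stand-in for a compiled MPI application) is an ordinary MPI program; nothing in it refers to IMPI, yet the same source could be run across multiple MPI implementations bridged by the IMPI protocol.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each process contributes its rank; the reduction behaves the same whether all
    # processes use one MPI implementation or are bridged across several by IMPI.
    total = comm.allreduce(rank, op=MPI.SUM)
    print(f"process {rank} of {size}: sum of ranks = {total}")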

 

NIST, at the request of computer vendors, facilitated the specification of the IMPI standard, and built a conformance tester. The IMPI standard was adopted in March 2000; the conformance tester was completed at the same time. The conformance tester is a web-based system that sets up a parallel virtual machine between NIST and the testers, that is, the vendor implementers of MPI. The conformance test suite contains over a hundred tests and exercises all parts of the IMPI protocol. Results are returned via a web page. The IMPI standard was published in the May-June 2000 issue of the NIST Journal of Research.

 

In 2001 we provided assistance to active vendor implementers of IMPI by initiating and coordinating on-line discussions among MPI vendors on several aspects of the IMPI protocols. This was needed to clarify the intent of the specification in several areas. A minor error in one of the collective communications algorithms was discovered by one of the vendors. This was fixed and documented with an entry in the IMPI errata, as well as by the addition of extra conformance tests to confirm the correct operation of the algorithm.

 

 

IMPI application (computing the Mandelbrot set) that was demonstrated by vendors at SC2001.

 

 

During 2001, the IMPI protocols were fully implemented in the MPI libraries of Hewlett-Packard and Fujitsu. Most of IMPI is supported in the latest library from LAM/MPI (Indiana University). MPI Software Technology will have full IMPI support in their commercial MPI/Pro library for MS Windows and Linux early in 2002. A Phase II SBIR in the amount of $289,568 was awarded to MPI Software Technology to continue the development of a dynamic communications algorithm tuner specifically for IMPI software. IMPI software was on display on the vendor exhibition floor at the SC2001 conference held in Denver in November 2001, and IMPI was mentioned in several product pamphlets. Several vendors are discussing the possibility of a demonstration of IMPI for the SC2002 conference exhibition (November 2002) in Baltimore. This demonstration would include machines from each of the implementers of IMPI and would demonstrate IMPI applied to a production parallel code. Extensions to IMPI to accommodate MPI-2 may be proposed by MPI Software Technology as they gain more experience with IMPI.

 

We have submitted an article on IMPI to Dr. Dobb's Journal, at their invitation, and it has been accepted for publication.

 

 

Parallel Computation of Ground State of Neutral Helium

 

James Sims

Stanley Hagstrom (Indiana University)

 

Exact analytical solutions to the Schrödinger equation, which determines quantities such as energies, are known only for atomic hydrogen and other equivalent two-body systems. Thus, for any atomic system other than hydrogen, approximate solutions must be determined numerically. This year James Sims and Stanley Hagstrom used the Hy-CI method, which they developed, to compute the nonrelativistic energy for the ground singlet S state of neutral helium (a two-electron system) to higher accuracy than had ever before been achieved.

 

In a series of papers between 1971 and 1976, Sims and Hagstrom used the method to compute not only energy levels, but also other atomic properties such as ionization potentials, electron affinities, electric polarizibilities, and transition probabilities of two, three, and four electron atoms and other members of their isoelectronic sequences. The technique is still being used today. In 1996, in a review article in Computational Chemistry, it was declared that this method is nearly impossible to use for more than three or four electrons. Sims and Hagstrom believe that while that may have been true in 1996, it is no longer true today due to the availability of cheap CPUs which can be connected in parallel to enhance both the CPU power and the memory that can be brought to bear on the computational task. To demonstrate the capability of the Hy-CI technique in a modern computing environment with parallel processing and multiprecision arithmetic, Sims and Hagstrom undertook to calculate the nonrelativistic energy for the ground singlet S state of neutral helium (a two electron problem).

 

They have computed the energy to be -2.9037 2437 7034 1195 9829 99 a.u. This represents the highest accuracy computation of this quantity to date. Comparisons with other calculations and an energy extrapolation yield an estimated accuracy of 20 decimal digits. To obtain a result with this high a precision, a very large basis set had to be used. In this case, variational expansions of the wave function with 4,648 terms were employed, leading to the need for very large computational resources. Such large expansions also lead to problems of linear dependence, which can only be remedied by using higher precision arithmetic than is provided by standard computer hardware. For this computation, 192-bit precision (roughly 48 decimal places) was necessary, and special coding was required to simulate hardware with this precision. Parallel processing was also employed to speed the computation, as well as to provide access to enough memory to accommodate larger expansions. NIST's Scientific Computer Facility cluster of 16 PCs running Windows NT was utilized for the parallel computation. Typical run times for a calculation of this size are about 8 hours on a single CPU, but only 30-40 minutes on the parallel processing cluster.
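
The role of extended precision can be illustrated with the mpmath Python library: with ordinary double precision, small contributions are lost entirely to cancellation, while at 192-bit precision they survive. (The production calculation used its own simulated 192-bit arithmetic, not mpmath; this is only a sketch of the idea.)

    from mpmath import mp, mpf

    mp.prec = 53                                # ordinary double precision
    print((mpf(1) + mpf(10) ** -20) - mpf(1))   # prints 0.0: the 1e-20 is lost

    mp.prec = 192                               # extended precision, as described above
    print((mpf(1) + mpf(10) ** -20) - mpf(1))   # the 1e-20 contribution survives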

 

The results of this work have been submitted to the peer-reviewed International Journal of Quantum Chemistry. This work employs a novel wave function, namely, one consisting of at most a single r12 raised to the first power combined with a conventional non-orthogonal configuration interaction (CI) basis. The researchers believe that this technique can be extended to multielectron systems (more than three or four electrons). The computational simplicity of this form of the wave function, compared to other wave functions of comparable accuracy, together with the use of parallel processing and extended precision arithmetic, should make it possible, they believe, to achieve levels of accuracy comparable to what has been achieved for helium for atoms with more than two electrons. Work is in progress, for example, to see what precision can be obtained for atomic lithium, which is estimated to require a 6,000-fold increase in CPU requirements to reach the same level of precision, making the use of parallel programming techniques even more critical. After lithium comes beryllium, which Sims and Hagstrom hope to compute with higher accuracy than has been achieved to date. Beryllium is the key to multielectron systems (more than four electrons), since the integrals that arise for more than four electrons are of the same type as the ones that arise in four electron systems.

 

 

Parallelization of Feff X-ray Absorption Code

 

James Sims

Howard Hung

Charles Bouldin (NIST MSEL)

John Rehr (University of Washington)

 

X-ray absorption spectroscopy (XAS) is used to study the atomic-scale structure of materials, and is employed by hundreds of research groups in a variety of fields, including ceramics, superconductors, semiconductors, catalysis, metallurgy and structural biology. Analysis of XAS relies heavily on ab-initio computer calculations to model x-ray absorption. These calculations are computationally intensive, taking days or weeks to complete in many cases. As XAS is more widely used in the design of new materials, particularly in combinatorial materials processing, it is crucial to speed up these calculations.  One of the most commonly used codes for such analyses is FEFF. Developed at the University of Washington, FEFF is an automated program for ab initio multiple scattering calculations of X-ray Absorption Fine Structure (XAFS) and X-ray Absorption Near-Edge Structure (XANES) spectra for clusters of atoms. The code yields scattering amplitudes and phases used in many modern XAFS analysis codes. Feff has a user base of over 400 research groups, including a number of industrial users, such as Dow, DuPont, Boeing, Chevron, Kodak, and General Electric.

 

James Sims, Howard Hung, and Charles Bouldin have parallelized the FEFF code using MPI. It now runs 20-30 times faster than its single-processor counterpart. The parallel version of the XAS code is portable, and has been incorporated in the latest release of Feff (FeffMPI). It is now in operation on the parallel processing clusters at the University of Washington and at DoE's National Energy Research Scientific Computing Center (NERSC). With the speedup of 30 provided by this version, researchers can now do calculations they only dreamed about before. One NERSC researcher has reported doing a calculation in 18 minutes using FeffMPI on the NERSC IBM SP2 cluster that would previously have taken 10 hours. In 10 hours this researcher can (and does) now do runs that would have taken months before, and hence would not have been even attempted.

 

The peer-reviewed paper "Rapid Calculation of X-ray absorption near edge structure using parallel computing" has been published in X-ray Spectroscopy. The paper "Parallel Calculation of Electron Multiple Scattering using Lanczos Algorithms" has been accepted for publication by Physical Review B. The presentation "Rapid Computation of X-ray Absorption Near Edge Structure Using Parallel Computation" was given at the American Physical Society Meeting, March 12-16, 2001, Seattle, Washington.

 

The bottleneck in the code is now a memory bottleneck for large systems, brought about by the way the tables are built and stored in the sequential version of the code. The Feff development team is working on eliminating this bottleneck. Once that is accomplished, the NIST researchers will begin another round of benchmarking and parallelizing, which is expected to allow the software to run 100 times or more faster than current single-processor codes.

 

 

Modeling and Visualization of Dendritic Growth in Metallic Alloys

 

William George

Steve Satterfield

James Warren (NIST MSEL)

 

Snowflake-like structures known as dendrites develop within metal alloys during casting. A better understanding of the process of dendritic growth during solidification will help guide the design of new alloys and the casting processes used to produce them. MCSD mathematicians (e.g., G. McFadden, B. Murray, D. Anderson, R. Braun) have worked with MSEL scientists (e.g., W. Boettinger, R. Sekerka) for some time to develop phase field models of dendritic growth. Such diffuse-interface approaches are much more computationally attractive than traditional sharp-interface models. Computations in two dimensions are now routinely accomplished. Extending this to three dimensions presents scaling problems for both the computations and the subsequent rendering of the results for visualization. This is due to the O(n⁴) execution time of the algorithm as well as the O(n³) space requirements for the field parameters. Additionally, rendering the output of the three-dimensional simulation also stresses the available software and hardware when the simulations extend over finite-difference grids of size 1000 x 1000 x 1000.

 

We have developed a parallel 3D dendritic growth simulator that runs efficiently on both distributed-memory and shared-memory machines. This simulator can also run efficiently on heterogeneous clusters of machines due to the dynamic load-balancing support provided by our MPI-based C-DParLib library. This library simplifies the coding of data-parallel style algorithms in C by managing the distribution of arrays and providing for many common operations on arrays such as shifting, elemental operations, reductions, and the exchanging of array slices between neighboring processing nodes as is needed in parallel finite-difference algorithms. With the expansion of Hudson, NIST's central Linux cluster, to 128 CPUs with 1 GB of memory per node, we will now be able to complete simulations on 1000³ grids, sufficient for direct comparison with earlier two-dimensional simulations.
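
The following sketch, written with mpi4py and NumPy purely for illustration, shows the kind of array-slice (ghost-cell) exchange that C-DParLib manages for such finite-difference codes; the slab decomposition, array sizes, and periodic neighbors are illustrative assumptions, not the simulator's actual layout.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    up, down = (rank + 1) % size, (rank - 1) % size   # periodic neighbors

    # Each process owns a slab of 10 rows plus two ghost rows (first and last).
    local = np.full((12, 16), float(rank))

    recv_lo = np.empty(16)
    recv_hi = np.empty(16)
    # Send the top interior row up while receiving the lower ghost row from below,
    # then the reverse; afterwards a finite-difference stencil can be applied.
    comm.Sendrecv(local[-2].copy(), dest=up, recvbuf=recv_lo, source=down)
    comm.Sendrecv(local[1].copy(), dest=down, recvbuf=recv_hi, source=up)
    local[0], local[-1] = recv_lo, recv_hi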

 

 

A two-dimensional slice through a simulated three-dimensional dendrite crystal of a bi-metal alloy. This image, colored to indicate the relative concentration of the two metals within the dendrite, is one of many snapshots taken during the simulation to observe the process of dendritic growth. The bright outline in this image is at the dendrite surface, showing the abrupt change in relative concentration that takes place as the alloy changes phase from liquid to solid.

 

 

The output from the simulator consists of 40 snapshots, each a pair of files containing the phase field and the relative concentration of the solutes at each grid point at a specific time step. At smaller grid sizes, below 300³, we use commonly available visualization software to process these snapshot files into color images and animations with appropriate lighting and shading added. For larger grid sizes we have developed a visualization procedure that converts the 3D grid data into a polygonal data set that can take advantage of hardware acceleration. Using standard SGI software, OpenGL Performer, this polygonal representation is easily displayed. The semi-transparent colors allow a certain amount of internal structure to be revealed, and the additive effects of the semi-transparent colors produce an isosurface approximation. A series of polygonal representations from the simulator snapshots are cycled, producing a 3D animation of dendrite growth that can be interactively viewed. Most of the currently available immersive virtual reality (IVR) systems are based on OpenGL Performer. Thus, utilizing this format immediately allows the dendrite growth animation to be placed in an IVR environment for enhanced insight.
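
The general grid-to-polygons step can be sketched in a few lines of Python using scikit-image's marching cubes routine; the synthetic phase field below and the choice of isosurface level are illustrative assumptions, and the actual NIST procedure and file formats are separate from this sketch.

    import numpy as np
    from skimage import measure

    # Synthetic stand-in for a phase-field snapshot: a "solid" sphere inside "liquid".
    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    phase = 1.0 - np.sqrt(x**2 + y**2 + z**2)      # positive inside, negative outside

    verts, faces, normals, values = measure.marching_cubes(phase, level=0.0)
    print(len(verts), "vertices,", len(faces), "triangles")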

 

An article on this implementation of 3-D dendritic growth simulation using the phase-field method, with an emphasis on the parallel implementation, has been submitted to the Journal of Computational Physics. 

 

Improvements to this simulator that we intend to pursue include adding computational steering capabilities, improving the immersive visualization of the results, and decreasing the memory requirements of the simulator.

 

 

Parallel Genetic Programming

 

Judith E. Devaney

John G. Hagedorn

 

Because the design and implementation of algorithms is highly labor-intensive, the number of such projects that can be undertaken is limited by the availability of people with appropriate expertise.  The goal of this project is to create a system that will leverage human expertise and effort through parallel genetic programming.  The human specifies the problem to be solved, provides the building blocks and a fitness function that measures success, and the system determines an algorithm that fits the building blocks together into a solution to the specified problem. We are implementing a generic Genetic Programming (GP) system with features of existing systems as well as some features unique to our approach.  These unique features are intended to improve the operation of the system particularly for the types of real-world scientific problems to which we are applying the system at NIST. Genetic programming is also a meta-technique. That is, it can be used to solve any problem whose solution can be framed in terms of a set of operators and a fitness function. Thus it has applications in parameter search. NIST scientists have many special purpose codes that can be used as operators in this sense.
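
The following toy Python sketch conveys the basic GP loop described above, applied to a tiny symbolic regression problem (a use of the system mentioned later in this project): random expression trees are scored by a fitness function, and the better ones are kept and mutated. The operators, tree representation, and parameters are illustrative assumptions; the NIST system's procedural program representation and genetic operators are far richer.

    import operator, random

    OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
    TERMINALS = ['x', 1.0, 2.0]

    def random_tree(depth=3):
        if depth == 0 or (depth < 3 and random.random() < 0.3):
            return random.choice(TERMINALS)
        return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == 'x':
            return x
        if isinstance(tree, float):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def fitness(tree, samples):
        # Sum of squared errors against the data; lower is better.
        return sum((evaluate(tree, x) - y) ** 2 for x, y in samples)

    def mutate(tree, depth=2):
        # Replace a random subtree with a freshly generated one.
        if not isinstance(tree, tuple) or random.random() < 0.3:
            return random_tree(depth)
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left, depth), right)
        return (op, left, mutate(right, depth))

    def evolve(samples, pop_size=200, generations=40):
        population = [random_tree() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=lambda t: fitness(t, samples))
            survivors = ranked[:pop_size // 4]           # truncation selection
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(pop_size - len(survivors))]
        return min(population, key=lambda t: fitness(t, samples))

    target = lambda x: x * x + 2.0 * x + 1.0             # "unknown" functional form
    data = [(x / 4.0, target(x / 4.0)) for x in range(-8, 9)]
    best = evolve(data)
    print("best tree:", best, " error:", fitness(best, data))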

 

We have instrumented our system to collect a variety of information about programs, populations of programs, and runs.  We have also implemented a visual representation of populations and individual programs.  The accompanying figure shows a visualization of a population of 128 individuals.  Each program is represented by one vertical column.  As indicated in the figure, three aspects of each program are represented.  The upper part is a visual representation of the content of the program.  Each block of color in this section corresponds to a procedure in the program.  In the middle section, the sequence of genetic operations that brought each individual into existence is presented. Finally, the lower portion of the image presents a normalized view of the fitness of each individual.  In the figure, the individuals have been sorted by fitness with the more fit individuals on the left.

 

 

 

Visualization of a population.

 

 

The instrumentation described above has provided insight into many aspects of the operation of our GP system. As a result we have created two new operators: repair and prune. They have yielded substantial improvement in the system's ability to find solutions. All operating parameters of the system are controlled by keyword parameter files that are read in during program initialization, and the system is configured to dynamically link to user-supplied code that provides a problem-specific fitness function as well as problem-specific operations encapsulated as C functions. The GP system has been parallelized using the island model. This parallelization was easily accomplished with the use of our MPI AutoMap and AutoLink software libraries, which facilitate the transfer of complex data structures between independent programs.

 

Papers describing our work appeared in two peer-reviewed conference proceedings this year: "A Genetic Programming System with a Procedural Program Representation", Proceedings of the Genetic and Evolutionary Computation Conference (Late Breaking Papers), July 2001, and "A Genetic Programming Ecosystem", Proceedings of the 15th International Parallel and Distributed Processing Symposium, April 2001. One of us was invited to participate in the panel "Biologically Inspired Computing: Where to in the next 10 years?" at the Workshop on Biologically Inspired Solutions to Parallel Processing Problems, April 23, 2001, San Francisco. One poster, "Genetic Programming and Discovery", was presented at the Advanced Technology Program National Meeting, June 2001. Two invited talks were presented: "Genetic Programming for Data Visualization and Mining", Workshop on Combinatorial Methods for Materials R&D: Systems Integration in High Throughput Experimentation, American Institute of Chemical Engineering National Meeting, November 15, 2000, and "Genetic Programming", Electron and Optical Physics Seminar, January 11, 2001.

 

Currently, we are using symbolic regression to automate the identification of functional forms of measurement errors; we are studying metrics for monitoring population diversity; and we will use our system to mine the output of combinatorial experiments. We have interest from NIST scientists who would like to collaborate with us when we have completed our system in about a year.

 

 

Immersive Visualization

 

Steven Satterfield

 

Immersive Visualization, also described as Immersive Virtual Reality (IVR), is an emerging technique with the potential to handle the growing amount of data from large parallel computations or advanced data acquisitions. To be fully immersive, a computer graphics system should include one or more large rear-projection screens to encompass peripheral vision, stereoscopic display for increased depth perception, and head tracking for realistic perspective based on the direction the user is viewing. Unlike graphics on a computer monitor, immersive visualization allows the scientist to explore inside the data. Visualization of scientific data can provide an intuitive understanding of the phenomenon or data being studied. It can contribute to theory validation through demonstration of qualitative effects seen in experiments. Effective visualization can also uncover structure where no structure was previously known. With parallel computing, the datasets are typically three-dimensional. Immersive visualization sets the viewer in a 3D setting and takes advantage of human skills at pattern recognition by providing a more natural environment in which peripheral vision, increased depth perception, and realistic perspective provide more context for human intuition.

 

A scientist who specializes in a field such as chemistry or physics is often not simultaneously an expert in visualization techniques. MCSD provides a framework of hardware, software, and complementary expertise, which NIST application scientists can utilize to facilitate meaningful discoveries. The immersive system in the Immersive Visualization Laboratory (Gaithersburg Building 225/A140) is a RAVE (Reconfigurable Automatic Virtual Environment) from Fakespace Systems. During 2001, this system was upgraded with the addition of a second module. The two-wall RAVE is now configured as an immersive corner, with two 8' x 8' (2.44 m x 2.44 m) screens flush to the floor and oriented at 90 degrees to each other. As defined above, the RAVE is fully immersive. The large corner configuration provides a very wide field of peripheral vision, with stereoscopic display and head tracking. The host computer system is a high performance graphics system from SGI that was upgraded during 2001 to the current Origin 3000 family, consisting of 12 500 MHz MIPS R14000 CPUs, 12 GB of memory, and 3 Infinite Reality graphics pipes. The additional floor space needed for the second module required expanding the Immersive Visualization Laboratory into an adjacent room by removing the joining wall unit and repairing the raised floor. Use of immersive visualization to model the expansion prior to implementation allowed the unit to be efficiently placed in the new space.

 

Collaboration with Virginia Tech's Visualization and Animation Group on the use and implementation of DIVERSE (Device Independent Virtual Environments-Reconfigurable, Scalable, Extensible) open source software was continued. DIVERSE is the primary software environment in use on the RAVE. It handles the details necessary to implement the immersive environment. A flashlight feature was added to the system this year. Like a real flashlight, an object within the immersive environment can be identified by shining the virtual light on it. 

 

Researchers in the Building and Fire Research Laboratory (BFRL) at NIST are studying high performance concrete. BFRL is leading the Virtual Cement and Concrete Testing Laboratory (VCCTL) consortium, consisting of the major cement producers. The accompanying image is from a virtual concrete flow visualization. The numerical algorithm simulates the flow of ellipsoidal objects (concrete particles) in suspension. The visualization plays an important role in validating the algorithms and confirming the correctness of complex simulations such as this flow of fluid concrete. A digital movie of this visualization is available for viewing at http://math.nist.gov/mcsd/savg/vis/concrete/.

 

The virtual reality simulation of concrete flow was implemented with Diversifly, a visualization utility included with DIVERSE, so no application-specific programming was required. Two general-purpose and very simple ASCII file formats were defined, and two file loaders were implemented to provide an interface between the numerical simulation and the immersive environment. Using shell scripts and common filters and tools, the simulation data is transformed into the appropriate formats to be loaded, viewed, and navigated with Diversifly. The file formats are suitable for a wide range of application areas. This philosophy of converting data to predefined file formats that can be immediately displayed in the immersive environment has created a simple and very usable system. The NIST scientists themselves use the RAVE and demonstrate their own visualizations.

 

A description of the BFRL collaboration is included in the peer-reviewed paper "DIVERSE: A Framework for Building Extensible and Reconfigurable Device Independent Virtual Environments", to be presented at the IEEE Virtual Reality Conference 2002. A demonstration to a reporter from Government Computer News resulted in an article in the July 27, 2001 issue, which is online at http://www.gcn.com/20_25/news/16941-1.html. Other demonstrations to external organizations include: the Director of the High Performance Computer Center at Texas Tech University (April 2001), the Digital Library of Mathematical Functions Editorial Board (April 2001), a Virtual Cement and Concrete Testing Laboratory (VCCTL) Consortium meeting (April 2001), an Aggregates Foundation for Technology, Research and Education (AFTRE) meeting (May 2001), the Virginia Department of Transportation and the University of Virginia (July 2001), LaFarge (a cement producer in France) (July 2001), Montgomery College students (August 2001), the Fire Testing Laboratory Workshop (June 2001), the Washington Internships for Students of Engineering (July 2001), and the German Cement Association (VDZ, Verein Deutscher Zementwerke e.V.) (November 2001).

 

 

Single image from an interactive visualization of flowing concrete. Ellipsoids represent concrete particle motion. Lines represent their full path over the simulation time period.

 

 

The most interactive visualizations in an immersive environment are those that can be rendered using polygon-based graphics techniques. However, a large amount of scientific data is represented as a volume, with a data value at each x, y, z point within a defined region. For example, experimental cement data has been captured with X-ray techniques at 1000 x 1000 x 1000 resolution.

 

Future work will include continuing collaborations with Virginia Tech on incorporating volume-rendering techniques for this type of data into the immersive environment. The device independence of DIVERSE allows the same applications to be run on a variety of hardware, from non-immersive desktop machines to fully immersive environments. This capability will be exploited to bring a broad base of research activities into the immersive environment by providing an entry point at the scientist's desktop and then drawing researchers into the Immersive Visualization Lab.

 

 

 

Linewidth Standards for Nanometer-level Semiconductor Metrology

 

Barbara am Ende

Michael Cresswell (NIST EEEL)

Richard Allan (NIST EEEL)

Loren Linholm (NIST EEEL)

Christine Murabito (NIST EEEL)

Will Guthrie (ITL Statistical Engineering Division)

Hal Bogardus (SEMATECH)

 

The Semiconductor Industry Association's (SIA) International Technology Roadmap for Semiconductors (ITRS) projects that gate linewidths used in state-of-the-art IC manufacturing will decrease from present levels of up to 250 nm to below 70 nm within several years. Scanning electron microscopes (SEMs) and other systems traditionally used for linewidth metrology exhibit measurement uncertainties exceeding ITRS-specified tolerances for these applications. It is widely believed that these uncertainties can be partly managed through the use of CD (Critical Dimension) reference materials with linewidth values that are traceable with single-nanometer-level uncertainties. Until now, such reference materials have been unavailable because neither the technology needed for their fabrication nor a means of assuring their traceability has existed.

 

A technical strategy that has been developed at NIST for fabricating CD reference materials with appropriate properties is based on the Single-Crystal CD Reference-Materials (SCCDRM) implementation. Essential elements of the implementation are the starting silicon wafers having a (110) orientation, the reference features being aligned to specific lattice vectors, and their lithographic patterning with lattice-plane selective etches of the kind used in silicon micro-machining. This approach provides straight reference features with vertical, atomically planar sidewalls. The path for linewidth traceability is provided by High Resolution Transmission Electron Microscopy (HRTEM) imaging. The technique enables counting the lattice planes between a feature's two sidewalls and thus measuring the linewidth with single-nanometer-level accuracy. However, sample preparation is destructive and very costly to implement. The traceability strategy for the SCCDRM implementation therefore utilizes the sub-nanometer repeatability of electrical linewidth metrology as a secondary reference means. Low-cost precise measurements of the electrical linewidths of features on all die sites of each starting wafer are made first. In order to enable electrical linewidth metrology, the reference features are patterned in the device layers of silicon-on-insulator material. Then, the absolute linewidths of a subset of these features are determined from lattice-plane counts extracted from HRTEM images. The absolute linewidths are then reconciled with the features' previously measured electrical linewidths. In this way, the linewidths of all reference features on the wafer that are not used for HRTEM imaging become calibrated with specified uncertainties and with traceability to silicon's (111) lattice-plane spacing.

MCSD is working to automate the detection and counting of lattice planes between a feature's two sidewalls in the HRTEM images. am Ende has developed a series of algorithms to automatically detect and count peaks (which represent lattice planes) in the image. Peaks are calculated for all zone intervals across the entire vertical direction of the image. The best zones are determined based on the lowest standard deviation of the distances between peaks. The algorithm that selects peaks automatically currently needs some human input in areas where the images are not clear, where peaks are poorly developed, and along the margins of the crystalline portion of the wafer. The human input required for judging the quality of the automatically determined peaks is significantly less than that required for manual counting of fringes, and the repeatability is greatly increased.
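
The general flavor of automated fringe counting on a one-dimensional intensity profile can be sketched with SciPy's peak finder, as below; the synthetic profile and the peak-finding parameters are illustrative assumptions, not the algorithms developed for the HRTEM images.

    import numpy as np
    from scipy.signal import find_peaks

    # Synthetic intensity profile standing in for a row-averaged HRTEM image:
    # periodic lattice fringes plus a little noise.
    x = np.arange(600)
    profile = np.cos(2 * np.pi * x / 12.0) \
              + 0.1 * np.random.default_rng(0).normal(size=x.size)

    peaks, _ = find_peaks(profile, distance=8, prominence=0.5)
    print("fringes counted:", len(peaks))
    # With a known lattice-plane spacing d, a linewidth estimate is roughly
    # (len(peaks) - 1) * d.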

 

Results of the project's work were presented on October 10, 2001, in the talk "Single-Crystal CD Reference Materials" (Semiconductor Electronics and Statistical Engineering Divisions, National Institute of Standards and Technology) at an AMAG Meeting of International SEMATECH. Two abstracts based on this work were accepted to conferences.

 

 

 

Am Ende will continue to work toward fully automating the counting algorithms, removing human input from the loop entirely. Criteria will be developed for marking the boundaries of the lattice planes and for quantifying the "raggedness" of the boundary between the crystalline and amorphous silicon.

 

 

Theory of Nano-structures and Nano-optics

 

Julien Franiatte

Judith Devaney

Steve Satterfield

Garnett Bryant (NIST PL)

 

Accurate atomic-scale quantum theory of nanostructures, and of nanosystems fabricated from nanostructures, enables precision metrology of these nanosystems and provides the predictive modeling tools needed to engineer such systems for applications including advanced semiconductor lasers and detectors, single photon sources and detectors, biosensors, and nanoarchitectures for quantum coherent technologies such as quantum computing. Theory and modeling of nanoscale and near-field optics is essential for the realization and exploitation of nanoscale resolution in near-field optical microscopy and for the development of nanotechnologies that utilize optics on the size scale of the system. Applications include quantum dot arrays and quantum computers. Atomic-scale theory and modeling of quantum nanostructures, including quantum dots, quantum wires, quantum-dot arrays, biomolecules, and molecular electronics, is being used to understand the electronic and optical properties of quantum nanostructures and of nanosystems fabricated from component nanostructures. Theory and numerical modeling are also being used to understand optics at the nanoscale and in the near field, with applications including near-field microscopy, single-molecule spectroscopy, optics and quantum optics of nanosystems, and atom optics in optical nanostructures.

 

 

Laboratory nanostructure (from Phys. Rev. B, 53, R13242, 1996)

 

 

A computed nanostructure

 

MCSD is participating in the parallelization of computational models for studying nanostructures. Parallel processing has enabled near-linear speedup over the sequential code: runs that took nine hours can now be completed in one hour on ten processors. As the computational model is extended to handle more complex and larger systems, including not only the nanocrystals but also the substrate and environment around them, parallel processing becomes a necessity. This year the code will be extended to study self-assembled quantum dots.

 

 

Cement and Concrete Projects

 

The NIST Building and Fire Research Laboratory (BFRL) does experimental and computational research in cement and concrete. Recently MCSD has been working with BFRL to parallelize their codes and create visualizations of their data. In January 2001 the Virtual Cement and Concrete Testing Laboratory (VCCTL) consortium was formed. MCSD assisted in this effort through presentations of our work with BFRL and demonstrations of visualizations in our immersive environment. The consortium originally consisted of NIST and six industrial members: Cemex, Dyckerhoff Zement GmbH, Holcim Inc., Master Builders Technologies, the Portland Cement Association, and W.R. Grace & Co. A seventh industrial member, the German Cement Association (VDZ), has recently joined. The overall goal of the consortium is to develop a virtual testing system that reduces the amount of physical concrete testing and expedites the research and development process. This will result in substantial time and cost savings to the concrete construction industry as a whole. MCSD continues to contribute to the VCCTL through collaborative projects involving parallelizing and running codes and creating visualizations, as well as through presentations to current and prospective VCCTL members. The following four projects are included in this effort.

 

 

Computational Modeling of the Flow of Cement

 

James Sims

Terence Griffin

Steve Satterfield

Nicos Martys (NIST BFRL)

 

 http://math.nist.gov/mcsd/savg/parallel/dpd/

 http://math.nist.gov/mcsd/savg/vis/concrete/

 

Understanding the flow properties of complex fluids like suspensions  (e.g., colloids, ceramic slurries and concrete) is of technological importance and presents a significant theoretical challenge. The computational modeling of such systems is also a great challenge because it is difficult to track boundaries between different fluid/fluid and fluid/solid phases. We use a new computational method called dissipative particle dynamics (DPD), which has several advantages over traditional computational dynamics methods while naturally accommodating such boundary conditions. In DPD, the interparticle interactions are chosen to allow for much larger time steps so that physical behavior, on time scales many orders of magnitude greater than that possible with molecular dynamics, may be studied.

 

Our algorithm (QDPD) is a modification of DPD, which uses a velocity Verlet algorithm to update the positions of both the free particles and the solid inclusion. In addition, the rigid body motion is determined from the quaternion-based scheme of Omelayan (hence the Q in QDPD). Parallelization of the algorithm is important in order to adequately model size distributions, and to have enough resolution to avoid finite size effects.
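
The basic velocity Verlet update at the core of such an integrator can be sketched as follows (in Python, for a single particle in a harmonic well); the real QDPD code also evaluates DPD pair forces and advances rigid-body orientations via quaternions, none of which is shown here.

    import numpy as np

    def velocity_verlet_step(x, v, a, force, mass, dt):
        """Advance positions x and velocities v by one time step dt."""
        x_new = x + v * dt + 0.5 * a * dt**2
        a_new = force(x_new) / mass
        v_new = v + 0.5 * (a + a_new) * dt
        return x_new, v_new, a_new

    # One particle in a harmonic well, F = -k x; the exact solution is x(t) = cos(t)
    # in the first component, so after t = 10 the result should be close to cos(10).
    k, mass, dt = 1.0, 1.0, 0.01
    force = lambda x: -k * x
    x, v = np.array([1.0, 0.0, 0.0]), np.zeros(3)
    a = force(x) / mass
    for _ in range(1000):
        x, v, a = velocity_verlet_step(x, v, a, force, mass, dt)
    print(x)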

 

 

Flow around steel reinforcing bars (left) and a model rheometer (right).

 

 

This year Jim Sims completed both shared and distributed memory versions of the algorithm using MPI. The distributed memory version runs so well on the PC cluster (with fast Ethernet) that its limits are not visible on the current cluster. We are currently able to model coarse aggregates. This code has been used to study the flow around steel reinforcing bars and to model a rheometer. Experiments at the Center for Advanced Cement Based Materials, a consortium of universities and industry that includes NIST, will use this code to validate the viscosity measurements of a rheometer with idealized aggregates consisting of marbles. Talks on this work, including results of the computations, have been presented at a variety of venues.

 

 

Terence Griffin has worked extensively with N. Martys to develop visualizations for this project (see accompanying examples). Martys presented some of these at the American Concrete International Meeting on March 26, 2001 in Philadelphia. Griffin also made videos of simulation results that were shown at the Symposium of Aggregate Research (Austin, Texas, April 23, 2001), the VCCTL Consortium meeting (NIST, April 19, 2001), and the Interfacial Consortium (May 2).

 

In the coming year, additional computations will be performed in support of the VCCTL, and papers will be submitted to refereed journals. This code is also flexible enough to be used to model other systems, such as multicomponent fluids.

 

 

Parallelization, Visualization of Fluid Flow in Complex Geometries

 

John Hagedorn

Judith Devaney

Nicos Martys (NIST BFRL)

 

http://math.nist.gov/mcsd/savg/parallel/lb/

http://math.nist.gov/mcsd/savg/vis/fluid/

 

The flow of fluids in complex geometries plays an important role in many environmental and technological processes. Examples include oil recovery, the spread of hazardous wastes in soils, and the service life of building materials. Further, such processes depend on the degree of saturation of the porous medium. The detailed simulation of such transport phenomena, subject to varying environmental conditions or saturation, is a great challenge because of the difficulty of modeling fluid flow in random pore geometries and the proper accounting of the interfacial boundary conditions.

 

In order to model realistic systems, we developed a parallel lattice Boltzmann (LB) algorithm and implemented it with MPI to study large systems.  We verified the correctness of the model with several numerical tests and comparisons with experiments. The modeled permeabilities of X-ray microtomography images of sandstone media and their agreement with experimental results verified the correctness and utility of the parallel implementation of the LB methods.
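
For readers unfamiliar with the method, the following minimal Python sketch shows a single-relaxation-time (BGK) D2Q9 lattice Boltzmann update on a small, fully periodic grid. The grid size, relaxation time, and absence of obstacles are illustrative assumptions; the parallel NIST code handles far larger systems, complex pore geometries, and multiple fluids.

    import numpy as np

    # D2Q9 velocity set and weights
    e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
    tau = 0.8                                     # relaxation time (sets the viscosity)
    nx, ny = 64, 64
    f = np.ones((9, nx, ny)) * w[:, None, None]   # fluid at rest with density 1

    def equilibrium(rho, ux, uy):
        feq = np.empty((9, nx, ny))
        usq = ux**2 + uy**2
        for i in range(9):
            eu = e[i, 0] * ux + e[i, 1] * uy
            feq[i] = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
        return feq

    def step(f):
        rho = f.sum(axis=0)                       # macroscopic density
        ux = (e[:, 0, None, None] * f).sum(axis=0) / rho
        uy = (e[:, 1, None, None] * f).sum(axis=0) / rho
        f = f - (f - equilibrium(rho, ux, uy)) / tau          # BGK collision
        for i in range(9):                                    # periodic streaming
            f[i] = np.roll(f[i], shift=(e[i, 0], e[i, 1]), axis=(0, 1))
        return f

    for _ in range(100):
        f = step(f)
    print("mass conserved:", np.isclose(f.sum(), nx * ny))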

 

These simulations would not have been possible without parallelizing the algorithm. The results were published by Martys, Hagedorn, and Devaney as an invited chapter, "Pore Scale Modeling of Fluid Transport using Discrete Boltzmann Methods", in the book Ion and Mass Transport in Cement-Based Material. The model was run many times to generate data used in the paper "The effects of statistical fluctuations, finite size error, and digital resolution on the phase percolation and transport properties of the NIST cement hydration model" by E. J. Garboczi and D. P. Bentz of BFRL, submitted to the peer-reviewed Cement & Concrete Research. Visualizations of fluid properties calculated with this model were published in Physical Review E, Vol. 63, 031205, "Critical Properties and Phase Separation in Lattice Boltzmann Fluid Mixtures". Martys and Hagedorn presented "Modeling Complex Fluids with the Lattice Boltzmann Method" at the Society of Rheology Meeting, October 2001, Bethesda.

 

 


Laboratory experiment (left) and computational experiment (right).

 

 

J. Hagedorn has performed a series of runs simulating the flow of multiple fluids through a tube. Parameters have been varied to investigate the effects of tube radius, tube length, wetting parameters, and other parameters on the stability of the fluid structure. The results are very similar to the experimental results generated by Dr. Kalman Migler of the Polymers Division of MSEL, as shown in the accompanying figures. Papers on this work will be submitted in the coming year. Martys and Hagedorn will present "Modeling fluid flow in Cement Based Materials using the Lattice Boltzmann Method" at the Gordon Conference on Cement-Based Materials in Ventura, California (April 2002).

 

 

Parallelization of a Model of the Elastic Properties of Cement

 

Robert Bohn

Edward Garboczi (NIST BFRL)

 

Almost all real materials are multi-phase, whether deliberately, as when formulating a composite; inadvertently, by introducing impurities into a nominally mono-phase material; or by the very nature of the material components, as in the case of cement-based materials. Predicting the elastic properties of such a material depends on two pieces of information for each phase: how the phase is arranged in the microstructure, and its elastic moduli. Cement paste is extraordinarily complex elastically, with many chemically and elastically distinct phases (more than 20) and a complex microstructure. This complexity increases further in concrete, as aggregates are added.

 

A finite element package for computing the elastic moduli of composite materials has been written by staff of the NIST Building and Fire Research Laboratory and has been available for several years. The program takes a 3-D digital image of a microstructure, assigns an elastic moduli tensor to each pixel according to the material phase present, and then computes the effective composite linear elastic moduli of the material. This program has been used successfully on many different material microstructures, including ceramics, metal alloys, closed and open cell foams, gypsum plaster, oil-bearing rocks, and 2-D images of damaged concrete. The program is a single-processor code. Reasonable run times mean that we are limited, at present, to systems of about 1-2 million pixels, which require 200-500 Mbytes of memory.

 

We are updating and parallelizing this code with MPI. In particular, this code will be run on multiprocessor SGI hardware and also on a Linux-based PC cluster. The main beneficiaries of this work will be the cement and concrete industries that are members of the Virtual Cement and Concrete Testing Laboratory consortium. This code, in its scalar form at present, will go into version 2.0 of the software that is distributed to the companies in January 2002. The parallel form will be used for further research on the elastic properties of cement paste. For example, work on the elastic properties of random-shape aggregates in concrete will directly benefit all the aggregate companies involved in the International Center for Aggregate Research (U Texas-Austin), who will be sponsoring this research in 2002.

 

This year the code was rewritten to distribute the data and other necessary information to each of the computing nodes. The code was previously written in a linear vector form in order to run faster on a vector-based machine; this bookkeeping system has been eliminated. The input now describes the actual data in a 3-D way, and it is more natural to compute and transmit the data in chunks of the original 3-D data array. Currently we are parallelizing the three main subroutines, FEMAT, DEMBX, and ENERGY. The other subroutines are virtually identical in structure and will follow this parallelization.
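
The arithmetic of distributing a 3-D voxel array in chunks can be sketched as follows (Python, with slabs along one axis); the slab-along-z choice, the array size, and the variable names are illustrative assumptions and do not reflect the actual decomposition used in FEMAT, DEMBX, and ENERGY.

    import numpy as np

    def slab_bounds(nz, nprocs, rank):
        """Return the half-open range [z0, z1) of z-planes owned by a given rank."""
        base, extra = divmod(nz, nprocs)
        z0 = rank * base + min(rank, extra)
        z1 = z0 + base + (1 if rank < extra else 0)
        return z0, z1

    microstructure = np.random.randint(0, 20, size=(100, 100, 100))   # phase labels
    nprocs = 8
    for rank in range(nprocs):
        z0, z1 = slab_bounds(microstructure.shape[2], nprocs, rank)
        chunk = microstructure[:, :, z0:z1]
        print(f"rank {rank}: z-planes {z0}..{z1 - 1}, {chunk.size} voxels")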

 

Making this code parallel will allow much larger systems to be studied, letting us probe the crucial parameter of digital resolution by providing faster turnaround times and the ability to quickly run larger, higher-resolution microstructures. Early-age computations of cement paste elastic moduli are more difficult to carry out accurately, because of the low connectivity between phases at early ages. The higher resolution available on the parallel machines should help resolve this problem. Large systems are also required to study the elastic properties of random shapes, like the aggregates found in concrete. Finally, direct simulations of AFM probes of composite surfaces are being carried out. These are CPU-time intensive, since every pixel on a surface needs to be displaced, one at a time, and the composite elastic response computed. The improved run times from parallel codes will help this project immensely.

 

 

The Visible Cement Dataset

 

Steve Satterfield

Peter Ketcham

William George

Judith Devaney

James Graham

James Porterfield

Dale P. Bentz (NIST BFRL)

Symoane Mizell (NIST BFRL)

Daniel A. Quenard (Centre Scientifique et Technique du Batiment)

Hebert Sallee (Centre Scientifique et Technique du Batiment)

Franck Vallee (Centre Scientifique et Technique du Batiment)

Jose Baruchel (European Synchrotron Radiation Facility)

Elodie Boller (European Synchrotron Radiation Facility)

Abdelmajid Elmoutaouakkil (European Synchrotron Radiation Facility)

Stefania Nuzzo (European Synchrotron Radiation Facility)

 

http://visiblecement.nist.gov/

 

To produce materials with acceptable or improved properties, adequate characterization of their microstructure is critical.  While the microstructure can be viewed in two dimensions at a variety of resolutions (e.g., optical microscopy, scanning electron microscopy, and transmission electron microscopy), it is often the three-dimensional aspects of the microstructure that have the largest influence on material performance.  Direct viewing of the three-dimensional microstructure is a difficult task for most materials. With advances in X-ray microtomography, it is now possible to obtain three-dimensional representations of a material's microstructure with a spatial resolution of better than one micrometer per voxel. 

 

The Visible Cement Data Set is a collection of 3-D data sets obtained at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, in September 2000 as part of an international collaboration between NIST, ESRF, and the Centre Scientifique et Technique du Batiment (CSTB, Grenoble, France).  Most of the images are of hydrating Portland cement pastes, with a few data sets representing hydrating plaster of Paris and a common building brick. The goal of this project is to create a web site at NIST where all researchers can access these unique data sets.  The web site includes a text-based description of each data set and computer programs to assist in processing and analyzing the data sets.  In addition to the raw data files, the site contains both 2-D and 3-D images and visualizations of the microstructures. 

 

Several of these data sets have been animated using the MCSD immersive visualization environment.  The accompanying figure is an image from one of the plaster of Paris data sets that have been displayed this way.  A variety of computer programs for processing the data sets have been developed and made available on the Visible Cement Data Set web site.  These include programs for extracting a subvolume from the complete data set, determining the gray-level histogram of a subvolume, segmenting a subvolume into individual phases (for example, cement particles, hydration products, and pores), filtering the raw and segmented subvolumes, and assessing the percolation (connectivity) properties of a phase in a segmented subvolume.  The segmentation of a data set into individual phases is the critical step in attaching physical significance to the data. Suitable algorithms for converting these segmented subvolumes into collections of polygons suitable for viewing in the MCSD immersive environment have been explored and demonstrated. The article The Visible Cement Data Set, by D.P. Bentz, S. Mizell, S. Satterfield, J. Devaney, W. George, P. Ketcham, J. Graham, J. Porterfield, D. Quenard, F. Vallee, H. Sallee, E. Boller, and J. Baruchel, describes the details of the data set; it is in preparation for the NIST Journal of Research.
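
As an illustration of the simplest of these operations, the following Python sketch extracts a cubic subvolume from a raw 8-bit volume and computes its gray-level histogram.  The file name, volume dimensions, and data layout are placeholders; they do not describe the actual Visible Cement Data Set files or the programs distributed on the web site.

import numpy as np

def read_raw_volume(path, shape):
    """Read an unsigned 8-bit raw volume with the given (nx, ny, nz) shape."""
    return np.fromfile(path, dtype=np.uint8).reshape(shape)

def subvolume(volume, origin, size):
    """Extract a size^3 cube starting at the (x, y, z) origin."""
    x, y, z = origin
    return volume[x:x + size, y:y + size, z:z + size]

def gray_level_histogram(vol):
    """Counts of the 256 possible gray levels in an 8-bit (sub)volume."""
    return np.bincount(vol.ravel(), minlength=256)

# volume = read_raw_volume("plaster_4h.raw", (512, 512, 512))   # placeholder name and size
volume = np.random.default_rng(1).integers(0, 256, (128, 128, 128), dtype=np.uint8)
sub = subvolume(volume, (32, 32, 32), 64)
hist = gray_level_histogram(sub)
print(sub.shape, int(hist.sum()))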

 

The Visible Cement Data Set web site will continue to be maintained by NIST and will serve as a valuable resource to both the construction materials and visualization research communities.

 

64 x 64 x 64 subvolume of Plaster of Paris hydrated for 4 hours, rendered as an isosurface of the segmented (particle/porosity) data.

 

 

 

 


2.4.        Special Projects

 

 

Digital Library of Mathematical Functions

 

Daniel Lozier
Ronald Boisvert
Joyce Conlon
Marjorie McClain
Bruce Fabijonas
Raghu Kacker
Bruce Miller
F. W. J. Olver
Bonita Saunders
Abdou Youssef
Qiming Wang (NIST ITL/IAD)
Charles Clark (NIST PL)
Brianna Blaser
Elaine Kim

 

http://dlmf.nist.gov/

 

NIST is well known for its collection and dissemination of standard reference data in the physical sciences and engineering. From the 1930s through the 1960s, NBS also disseminated standard reference mathematics, typically in the form of tables of mathematical functions. The prime example is the NBS Handbook of Mathematical Functions, prepared under the editorship of Milton Abramowitz and Irene Stegun and published in 1964 by the U.S. Government Printing Office. The NBS Handbook is a technical best seller and is likely the most frequently cited of all technical references. Total sales to date of the government edition exceed 150,000; further sales by commercial publishers are several times higher. Its daily sales rank on amazon.com consistently surpasses that of other well-known reference books in mathematics, such as Gradshteyn and Ryzhik's Table of Integrals. The number of citations reported by Science Citation Index continues to rise each year, not only in absolute terms but also as a proportion of the total number of citations. Some of the citations are in pure and applied mathematics, but even more are in physics, engineering, chemistry, statistics, and other disciplines. The main users are practitioners, professors, researchers, and graduate students.

 

Except for the correction of typographical and other errors, no changes have ever been made to the Handbook. This leaves much of its content unsuitable for modern usage, particularly the large tables of function values (over 50% of the pages), the low-precision rational approximations, and the numerical examples, which were geared toward hand computation. Also, numerous advances in the mathematics, computation, and application of special functions have been made or are in progress. We are engaged in a substantial project to radically transform this old classic. The Digital Library of Mathematical Functions is a complete rewriting and substantial update of the Handbook that will be published in a low-cost hardcover edition and on the Internet for free public access. The Web site will include capabilities for searching, downloading, and visualization, as well as pointers to software and related resources. The contents of the Web site will also be made available on CD-ROM, to be included with the hardcover edition. A sample chapter, including examples of dynamic visualizations, may be viewed on the project Web site.

 

View of principal branch of the Hankel function

Dynamic Visualization.  View of the principal branch of the Hankel function |H5(1)(x+iy)| (the modulus of the Hankel function of the first kind of order five) showing the pole at the origin, the branch cut, the location of zeros near the cut, and the exponential growth and decay in different parts of the complex plane.  Five zeros around the pole are not fully visible in this view.  In the DLMF, this view may be rotated and seen from any angle.  The same technology can be used to generate views of additional branches.  © National Institute of Standards and Technology.
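
The surface in this figure can be evaluated numerically.  The following sketch, using SciPy's hankel1 with an illustrative grid and clipping, computes |H5(1)(x+iy)| on a rectangle of the complex plane; the DLMF visualizations themselves are produced with separate, interactive 3-D tools, so this only illustrates the underlying function evaluation.

import numpy as np
from scipy.special import hankel1

# Grid that avoids the origin exactly (the function has a pole there).
x = np.linspace(-4.0, 4.0, 200)
y = np.linspace(-4.0, 4.0, 200)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y

H = np.abs(hankel1(5, Z))
H = np.clip(H, 0.0, 10.0)   # clip near the pole so a surface plot stays readable

# A surface plot of H over (X, Y) would show the pole at the origin, the branch
# cut along the negative real axis, the zeros near the cut, and the exponential
# growth and decay in the two half-planes.
print(H.shape, float(H.max()))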

 

 

Funded by the National Science Foundation and NIST, the DLMF Project is contracting with the best available experts worldwide to rewrite all existing chapters and to provide additional chapters covering new functions (such as the Painlevé transcendents and q-hypergeometric functions) and new methodologies (such as computer algebra). Four NIST editors (Lozier, Olver, Clark, and Boisvert) and an international board of nine associate editors are directing the project. The associate editors are

 

Richard Askey (University of Wisconsin),
Michael Berry (University of Bristol),
Walter Gautschi (Purdue University),
Leonard Maximon (George Washington University),
Morris Newman (University of California at Santa Barbara),
Peter Paule (Technical University of Linz),
William Reinhardt (University of Washington),
Ingram Olkin (Stanford), and
Nico Temme (CWI Amsterdam).

 

Major accomplishments were the result of team efforts.  In FY 2001 these include the following.

 

o       Chapter Status

o       Editorial Issues

o       Production Issues

o       External Recognition

 

 

Quantum Information

 

Ronald Boisvert

Isabel Beichl

Anthony Kearsley

William Mitchell

David Song

Francis Sullivan

Carl Williams (NIST PL)

Eite Tiesinga (NIST PL)

Mike Robinson (IDA Center for Computing Sciences)

http://math.nist.gov/quantum/

 

This year, ITL began a new program of work in Quantum Information Systems in collaboration with the NIST Physics Laboratory and the NIST Electronics and Electrical Engineering Laboratory. R. Boisvert is coordinating the ITL effort, which involves participants from six ITL divisions.  This work is partially supported by a grant from the DARPA Quantum Information Science and Technology (QuIST) program, which began this year.  The main thrusts of ITL’s DARPA QuIST effort are as follows.

 

o       Quantum Communications Testbed Facility

We are working with the NIST PL to develop a working testbed to demonstrate concepts and to measure the performance of systems, components, and protocols for highly secure communications based on the principles of quantum physics.  The initial testbed, now under construction, will feature an open-air optical link between the NIST Administration Building and the NIST North Building; the link will be used to demonstrate the BB84 protocol for quantum key exchange. (Such keys could be used as one-time pads for encrypting messages, or for the separate generation of common one-time pads.)  The link will include a quantum channel as well as several classical channels.  An attenuated laser will generate single polarized photons for transmission over the quantum channel; commercially available avalanche photodiodes will be used to detect the photons.  An effective key generation rate of 1 Mbps is the goal; to achieve this, the channels will need to operate at 1 Gbps.  The testbed will be used to study the performance and security of quantum-based network protocols, and to quantify the improvements obtained through the use of alternative physical components.  For example, improved single-photon sources and detectors are under development in the NIST PL and EEEL, respectively. (A toy illustration of the BB84 key-sifting step appears after this list of thrusts.)  Participants: ITL Advanced Networking Technologies Division, ITL Convergent Information Systems Division, NIST Physics Lab, NIST Electronics and Electrical Engineering Laboratory.

 

o       Hybrid Quantum Authentication Protocols

Authentication is another important aspect of secure communications, i.e., being able to verify the identity of someone with whom you have initiated an electronic communication.  Quantum communication networks may provide new means for authentication.  ITL staff members have begun research in the development and analysis of hybrid quantum/classical authentication schemes based upon the availability of entangled photons.  Participants: ITL Computer Security Division.

 

o       Information Theory

Quantum systems provide enormous potential for new ways of doing computation in which currently intractable problems could become routine to solve. Experiments currently underway at NIST and elsewhere to develop processors for quantum computation are still very far from practical use, however.  Many obstacles remain in the areas of computer engineering and computer science. Practical error-correction schemes need to be devised and implemented, languages for expressing quantum algorithms need to be designed, and compilers capable of translating high-level descriptions into sequences of gate operations (and, in turn, into sequences of instructions to lasers and other hardware components) need to be developed. In addition, we need to understand which problems are amenable to solution by quantum computers, and how to implement them. In this work we are studying error propagation and correction in particular quantum gates being developed in the NIST PL. We are also studying the scalability of computer architectures based upon the neutral-atom or ion arrays being developed in the NIST PL. Finally, we have begun the study of new quantum algorithms that would show significant speedups on quantum computers. Participants: ITL Mathematical and Computational Sciences Division, ITL Software Diagnostics and Conformance Testing Division.
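
As a concrete illustration of the quantum key exchange mentioned under the testbed thrust above, the following toy Python sketch simulates only the classical sifting logic of BB84: random bit and basis choices, public basis comparison, and extraction of the sifted key.  It assumes an ideal, noiseless channel with no eavesdropper and has no connection to the actual optical testbed implementation.

import secrets

def bb84_sift(n_photons):
    """Toy BB84 sifting: ideal channel, no eavesdropper, no error estimation."""
    rng = secrets.SystemRandom()
    alice_bits  = [rng.randrange(2) for _ in range(n_photons)]
    alice_bases = [rng.randrange(2) for _ in range(n_photons)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randrange(2) for _ in range(n_photons)]

    # Bob's measurement: correct bit when bases match, random bit otherwise.
    bob_bits = [b if ab == bb else rng.randrange(2)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Sifting: publicly compare bases, keep only positions where they agree.
    key_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_bob   = [b for b, ab, bb in zip(bob_bits,   alice_bases, bob_bases) if ab == bb]
    return key_alice, key_bob

ka, kb = bb84_sift(1000)
print(len(ka), ka == kb)   # roughly half the photons survive sifting; keys agree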

 

Within MCSD, we are working at two ends of the spectrum of quantum computation: in the modeling of the physical processes that will be used to implement a quantum gate, and in the development and analysis of algorithms for quantum computers.

 

William Mitchell has been working with Eite Tiesinga of the NIST PL to solve for eigenvalues and eigenstates of the Schrödinger equation in configurations relevant to the optical traps for neutral atoms.  Arrays of such atoms will correspond to arrays of qubits, and interactions of adjacent atoms will be used to implement elementary quantum gates.  The computations are quite challenging.  Multiple eigenvalues in the middle of the spectrum are desired, and the corresponding eigenstates have sharp gradients.  Mitchell is adapting his parallel adaptive multigrid solver PHAML for this task.  Some very encouraging early results have already been generated.
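
The sketch below is not PHAML, which is a parallel adaptive multigrid finite element code; it is a minimal one-dimensional finite-difference illustration, using an assumed harmonic-oscillator potential, of the "interior eigenvalue" aspect of the problem: a few eigenvalues near a target energy in the middle of the spectrum are obtained via shift-invert.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, L = 2000, 20.0
h = L / (n + 1)
x = np.linspace(-L / 2 + h, L / 2 - h, n)
V = 0.5 * x**2                      # illustrative harmonic-oscillator potential

# -1/2 d^2/dx^2 + V(x) with a second-order central difference.
main = 1.0 / h**2 + V
off = -0.5 / h**2 * np.ones(n - 1)
H = sp.diags([off, main, off], [-1, 0, 1], format="csc")

# Six eigenvalues closest to the target energy sigma = 10.5 (mid-spectrum),
# computed in shift-invert mode.
vals, vecs = spla.eigsh(H, k=6, sigma=10.5)
print(np.sort(vals))                # approximates the oscillator levels k + 1/2 near 10.5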

 

David Daegene Song, a recent Ph.D. from the Quantum Computation program at Oxford University, joined MCSD this fall. Song has done work in approximate quantum cloning, entanglement swapping, and nonlinear qubit transformations.  Entanglement swapping provides a means for transporting quantum states over long distances using chains of entangled qubits.  He has begun extending his work on this subject here at NIST.

 

Several MCSD staff members have begun investigating the potential speedups offered by quantum-based algorithms for a variety of applications.  David Song, Isabel Beichl, and Francis Sullivan are studying the problem of determining whether a finite function over the integers is one-to-one.  In particular, they are developing a quantum algorithm for determining whether a mapping from a finite set to itself is one-to-one.  They hope to achieve a complexity of O(sqrt(n)) steps, whereas classical algorithms require n steps for this computation.  The proposed quantum algorithm uses phase symmetry, Grover's search algorithm, and results about the pth complex roots of unity for a prime p. The proof, developed in collaboration with Mike Robinson at the Center for Computing Sciences, relies on results about the density of prime numbers in the integers.
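
For contrast with the hoped-for O(sqrt(n)) quantum complexity, the following short Python sketch shows the classical baseline: deciding injectivity of a map on a set of size n requires querying it at every point in the worst case.  The function examples are purely illustrative.

def is_one_to_one(f, n):
    """Check injectivity of f: {0,...,n-1} -> {0,...,n-1} in at most n queries."""
    seen = set()
    for i in range(n):
        y = f(i)
        if y in seen:          # a repeated value certifies "not one-to-one"
            return False
        seen.add(y)
    return True

n = 101
print(is_one_to_one(lambda i: (3 * i + 7) % n, n))   # a permutation of Z_n -> True
print(is_one_to_one(lambda i: (i * i) % n, n))       # squaring mod a prime -> False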

 

Song has also begun work with Anthony Kearsley to study potential speedups when solving integer-valued matrix equations on quantum computers. 



Part III: Activity Data


 

 



 

Charge density on a computed diffusion-limited cluster aggregate.

 

 

 

 

 

 

 

 

 

3.1.        Publications

 

Appeared

  1. B. am Ende, 3D Mapping of Underwater Caves, Computer Graphics and Applications 21, no. 2, (2001), pp. 14-20.
  2. B. am Ende, Wakulla Spring, in Michael Ray Taylor, Caves: Exploring Hidden Realms, National Geographic Society, Washington, DC (2001), pp. 110-111.
  3. D. Anderson, G. McFadden, and A. Wheeler, A Phase-Field Model with Convection: Sharp-Interface Asymptotics, Physica D 151 (2001), pp. 305-331.
  4. A.L. Ankudinov, J.J. Rehr, J.E. Sims, C.E. Bouldin, and H.K. Hung, Rapid Calculation of X-ray Absorption Near Edge Structure Using Parallel Computing, Journal of X-ray Spectroscopy 30 (2001), pp. 431-434.
  5. I. Beichl and F. Sullivan, In Order to Form a More Perfect UNION, IEEE Computing in Science and Engineering 3, no. 2 (Mar.-Apr. 2001), pp. 60-64.
  6. I. Beichl, D. O'Leary, and F. Sullivan, Approximating the Number of Monomer-Dimer Coverings in Periodic Lattices, Physical Review E (June 2001), pp. 016701-1--016701-6.
  7. J. Bernal, REGTET: A Program for Computing Regular Tetrahedralizations, in Proceedings of the 2001 International Conference on Computational Science (Springer-Verlag Lecture Notes in Computer Science 2073), May 28-30, 2001, San Francisco, CA, pp. 629-632.
  8. R.F. Boisvert, M. Donahue, D. Lozier, R. McMichael, and B. Rust, Mathematics and Measurement, NIST Journal of Research 106, no. 1 (January-February 2001), pp. 293-313.
  9. R.F. Boisvert, J. Moreira, M. Philippsen, and R. Pozo, Numerical Computing in Java, Computing in Science and Engineering 3, no. 1 (March/April 2001), pp. 18-24.
  10. R.F. Boisvert and D.W. Lozier, Handbook of Mathematical Functions, in A Century of Excellence in Measurements, Standards, and Technology, (D. Lide, Ed.), CRC Press, 2001, pp.135-139. Also: NIST Special Publication 958.
  11. R.F. Boisvert and P.T.P. Tang (eds.), The Architecture of Scientific Software, Kluwer Academic Publishers, Boston, 2001.
  12. A.S. Carasso, Direct Blind Deconvolution, SIAM J. Appl. Math. 61, (2001), pp. 1980-2007.
  13. A.M. Casas, A.L. Cortes, A. Maestro, M.A. Soriano, A. Riaguas, and J. Bernal, LINDENS: A Program for Lineament Length and Density Analysis, Computers & Geosciences 26 (2000), pp. 1011-1022.
  14. S. Coriell, G. McFadden, W. Mitchell, B. Murray, J. Andrews, and Y. Arikawa, Effect of Flow Due to Density Change on Eutectic Growth, Journal of Crystal Growth 224 (2001), pp. 145-154.
  15. M.A. Davies and T.J. Burns, Thermomechanical Oscillations in Material Flow during High-Speed Machining, Philosophical Transactions of the Royal Society A 359 (2001), pp. 821-846.
  16. J. Devaney, J. Hagedorn, O. Nicolas, G. Garg, A. Samson, and M. Michel, A Genetic Programming Ecosystem, in Proceedings of the 15th Annual International Parallel and Distributed Processing Symposium, IPDPS 2001, Workshop on Biologically Inspired Solutions to Parallel Processing Problems, April 23, 2001, San Francisco, CA.
  17. J.E. Devaney, The Role of Choice in Discovery, Lecture Notes in Artificial Intelligence, No. 1967, Setsuo Arikawa and Shinichi Morishita, Editors, Proceedings of The Third International Conference on Discovery Science, DS 2000, December 4-6, Kyoto, Japan, pp. 247-251.
  18. A. Dienstfrey, J. Huang, and F. Hang, Lattice Sums and the Two-dimensional, Periodic Green's Function for the Helmholtz Equation, Proceedings of the Royal Society of London 457, No. 2005 (2001), pp. 67-86.
  19. A. Dienstfrey and L. Greengard, Analytic Continuation, Singular Value Expansions, and Kramers-Kronig Analysis, Inverse Problems 17, No. 5 (2001), pp. 1307-1320.
  20. R. Dixson, N.G. Orji, J. Fu, V. Tsai, E.D. Williams, R. Kacker, T. Vorburger, H. Edwards, D. Cook, P. West, and R. Nyffenegger, Silicon Single Atom Steps as AFM (Atomic Force Microscopy) Standards, Proceedings of SPIE, Santa Clara, CA. March 2001.
  21. J. Eggleston, G. McFadden, and P. Voorhees, A Phase-Field Model for Highly Anisotropic Interfacial Energy, Physica D 150 (2001), pp. 91-103.
  22. O. Gérardin, H. Le Gall, M.J. Donahue, and N. Vukadinovic, Micromagnetic Calculation of the High Frequency Dynamics of Nano-Size Rectangular Ferromagnetic Stripes, Journal of Applied Physics 89 (2001), pp. 7012-7014.
  23. K. Gurski, Hints for Finding Non-Academic Research Positions (Postdoctoral and Permanent), Association for Women in Mathematics Newsletter, September-October 2001, pp. 17-19.
  24. J. Hagedorn, J. Devaney, A Genetic Programming System with a Procedural Program Representation, in Proceedings of the Late Breaking Papers in Genetic and Evolutionary Computation Conference 2001 (GECCO), July 7-11, 2001, San Francisco, CA, pp. 152-159.
  25. F. Hunt, Finite Precision Representation of the Conley Decomposition, Journal of Dynamics and Differential Equations 13, No. 1 (January 2001), pp. 87-105.
  26. F. Hunt, A. Kearsley, and H. Wan, An Optimization Approach to Multiple Sequence Alignment: A Preliminary Report, in Proceedings of the Atlantic Symposium on Computational Biology and Genome Information Systems and Technology (2001), pp. 164-170, Durham, NC, Mar. 15-18, 2001.
  27. A. Kearsley, Global and Local Optimization Algorithms for Optimal Signal Set Design, NIST Journal of Research 106, No. 2. (2001), pp. 441-454.
  28. A. Kearsley, L.C. Cowsar, R. Glowinski, M.F. Wheeler and I. Yotov, New Optimization Approach to Multi-Phase Flow, Journal of Optimization Theory and Applications 111 (3) (December 2001), pp.  473-488.
  29. S.A. Langer, E.R. Fuller, Jr., and W.C. Carter, OOF: An Image-Based Finite-Element Analysis of Material Microstructures, Computing in Science and Engineering 3, no. 3 (May/June 2001), p. 15.
  30. J.E. Lavery, D.E. Gilsinn, Multiresolution Representation of Urban Terrain By L1 Splines, L2 Splines and Piecewise Planar Surfaces, in Proceedings 22nd Army Science Conference, Baltimore, MD, December 11-13, 2000, pp. 767-773.
  31. D.W. Lozier, The DLMF Project: A New Initiative in Classical Special Functions, in Special Functions: Proceedings of the International Workshop, C. Dunkl, M. Ismail, R. Wong, editors, World Scientific (Singapore), pp. 207-220.
  32. N. Martys, J. Hagedorn, J. Devaney, Pore Scale Modeling of Fluid Transport using Discrete Boltzmann Methods, in Ion and Mass Transport in Cement-Based Material, edited by R. D. Hooton, M.D.A. Thomas, J. Marchand, J.J. Beaudoin; publisher: The American Ceramic Society, Westerville, Ohio, 2001.
  33. G. McCormick and C. Witzgall, Logarithmic SUMT Limits in Convex Programming, Mathematical Programming, Series A 90 (2001), pp. 113-145.
  34. R.D. McMichael, M.J. Donahue, D.G. Porter and J. Eicke, Switching Dynamics and Critical Behavior of Standard Problem No. 4, Journal of Applied Physics 89 (2001), pp. 7603-7605.
  35. W.F. Mitchell, Adaptive Grid Refinement and Multigrid on Cluster Computers, in Proceedings of the 15th International Parallel and Distributed Processing Symposium, IEEE Computer Society Press, 2001.
  36. W.F. Mitchell, A Refinement-Tree Based Partitioning Method for Adaptively Refined Grids, in Proceedings of the Tenth SIAM Conference on Parallel Processing for Scientific Computing, 2001.
  37. C.S. O'Hern, S.A. Langer, A.J. Liu, and S.R. Nagel, Force Distributions near Jamming and Glass Transitions, Physical Review Letters 86 (Jan 1, 2001), p. 111.
  38. D. O’Leary, An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators, in A Century of Excellence in Measurements, Standards, and Technology, (D. Lide, Ed.), CRC Press, 2001, pp. 77-80. Also: NIST Special Publication 958.
  39. D. O’Leary, Methods of Conjugate Gradients for Solving Linear Systems, in A Century of Excellence in Measurements, Standards, and Technology, (D. Lide, Ed.), CRC Press, 2001, pp. 81-85. Also: NIST Special Publication 958.
  40. D.G. Porter and M.J. Donahue,  Generalization of a Two-Dimensional Micromagnetic Model to Non-Uniform Thickness, Journal of Applied Physics 89 (2001), pp. 7257-7259.
  41. B. Rust, Parameter Selection for Constrained Solutions to Ill-Posed Problems, Computing Science and Statistics 32 (2000), pp. 333-347.
  42. B. Rust, Fitting Nature's Basic Functions Part I: Polynomial and Linear Least Squares, Computing in Science & Engineering 3, No. 5 (Sept/Oct 2001), pp. 84-89.
  43. B. Saunders, The Application of Numerical Grid Generation to Problems in Computational Fluid Dynamics, in Council for African American Researchers in the Mathematical Sciences: Volume III, Contemporary Mathematics Series 275, American Mathematical Society, 2001.
  44. R. Sekerka, S. Coriell, and G. McFadden, Separation of Scales for Growth of an Alloy Needle Crystal, Metallurgical and Materials Transactions 32A (2001) 2669-2670.
  45. J.S. Sims, J.G. Hagedorn, P.M. Ketcham, S.G. Satterfield, T.J. Griffin, W.L. George, H.A. Fowler, B.A. am Ende, H.K. Hung, R.B. Bohn, J.E. Koontz, N.S. Martys, C.E. Bouldin, J.A. Warren, D.L. Feder, C.W. Clark, B.J. Filla, J.E. Devaney, Accelerating Scientific Discovery Through Computation and Visualization, NIST Journal of Research 105 (6) (Nov.-Dec. 2000), pp. 875-894.
  46. D. Sterling, Chaotic Synchronization of Coupled Ergodic Maps, Chaos 11 (1) (March 2001), pp. 29-46.
  47. S. Van Vaerenbergh, S. Coriell, and G. McFadden, Morphological Stability of a Binary Alloy: Temperature-Dependent Diffusivity, Journal of Crystal Growth 223 (2001), pp. 565-573.
  48. D. Williams and B. Alpert, Causality and Waveguide Circuit Theory, IEEE Transactions on Microwave Theory and Techniques 49, no. 4 (2001), pp. 613-623.
  49. C. Witzgall, Paths, Trees, and Flowers, in A Century of Excellence in Measurements, Standards, and Technology, (D. Lide, Ed.), CRC Press, 2001, pp. 140-144.  Also: NIST Special Publication 958.

 

Technical Reports

  1. J. Bernal, REGTET: A Program for Computing Regular Tetrahedralizations, NIST IR 6786.
  2. A.S. Carasso, The APEX Method in Image Sharpening and the Use of Low Exponent Levy Stable Laws, NISTIR 6749 (2001).
  3. I. Duff, M. Heroux, and R. Pozo, The Sparse BLAS, Technical Report, Rutherford Appleton Laboratory RAL-TR-2001-032, August 2001.
  4. D.E. Gilsinn, Constructing Sibson Elements for a Rectangular Mesh, NISTIR 6718, February 28, 2001.
  5. D.E. Gilsinn, H.T. Bandy, and A.V. Ling, Updating a Turning Center Error Model by Singular Value Decomposition, NISTIR 6722, March 2001.
  6. P. Ketcham, D.L. Feder, C.W. Clark, S.G. Satterfield, T.J. Griffin, W.L. George, W.P. Reinhardt, Volume Visualization of Bose-Einstein Condensates, NISTIR 6739, April 30, 2001.
  7. J. Sims, J.G. Hagedorn, P.M. Ketcham, S.G. Satterfield, T.J. Griffin, W.L. George, H.A. Fowler, B.A. am Ende, H.K. Hung, R.B. Bohn, J.E. Koontz, N.S. Martys, C.E.  Bouldin, J.A. Warren, D.L. Feder, C.W. Clark, B.J. Filla, J.E. Devaney, Accelerating Scientific Discovery Through Computation and Visualization, NISTIR 5709, July 26, 2001.
  8. C. Witzgall and G. Cheok, Registering 3D Point Clouds: An Experimental Evaluation, NISTIR 6743.

 

Accepted

  1. B.A. am Ende, M.W. Cresswell, R.A. Allen, T.J. Headley, W.F. Guthrie, L.W. Linholm, E.H. Bogardus, and C.E. Murabito, Measurement of the Linewidth of Electrical Test-Structure Reference Features by Automated Phase-Contrast Image Analysis, IEEE International Conference on Microelectronic Test Structures, April 8-11, 2002.
  2. D. Anderson, G. McFadden, A. Wheeler, A Phase-field Model with Convection: Numerical Simulations, Proceedings on Interfaces for the Twenty-First Century, (Imperial College Press, London, 2002).
  3. D. Anderson, G. McFadden, A. Wheeler, A Phase-field Model of Convection, Proceedings of the 40th AIAA Aerospace Sciences Meeting, January, 2002, Reno, Nevada.
  4. R.J. Braun, J. Zhang, J. Cahn, G. McFadden, and A. Wheeler, Model Phase Diagrams for an FCC Alloy, Proceedings on Interfaces for the Twenty-First Century, (Imperial College Press, London, 2002).
  5. R.F. Boisvert and E. Houstis (eds.), Computational Science, Mathematics, and Software, Purdue University Press, 2002.
  6. R.F. Boisvert, Mathematical Software: Past, Present, and Future, in Computational Science, Mathematics, and Software, (R.F. Boisvert and E.N. Houstis, Eds.), Purdue University Press, 2002.
  7. S. Coriell and G. McFadden, Applications of Morphological Stability Theory, Journal of Crystal Growth.
  8. M.W. Cresswell, E.H. Bogardus, M.H. Bennett, R.A. Allen, W.F. Guthrie, C.E. Murabito, B.A. am Ende, and L.W. Linholm, CD Reference Materials for Sub-Tenth Micrometer Applications, International Society for Optical Engineering (SPIE) Microlithography Symposium, March 3-8, 2002.
  9. M.A. Davies, J.R. Pratt, B. Dutterer, and T.J. Burns, Stability Prediction for Low Radial Immersion Milling, Journal of Manufacturing Science and Engineering.
  10. M.A. Davies, J.R. Pratt, B. Dutterer, and T.J. Burns, Regenerative Stability Analysis of Highly Interrupted Machining, in Metal Cutting and High Speed Machining, D. Dudzinski, et al., eds., Kluwer Academic/Plenum Publishers.
  11. M.A. Davies, H. Yoon, T.L. Schmitz, T.J. Burns, and M.D. Kennedy, Calibrated Thermal Microscopy of the Tool-chip Interface in Machining, Machining Science & Technology.
  12. H. Fowler, J. Devaney, and J. Hagedorn, Growth Model for Filamentary Streamers in an Ambient Field, IEEE Transactions on Dielectrics and Electrical Insulation.
  13. W.L. George, J. Hagedorn, J. Devaney, Parallel Programming with IMPI, Dr. Dobb's Journal.
  14. D.E. Gilsinn, H.T. Bandy, A.V. Ling, A Spline Algorithm for Modeling Cutting Errors on Turning Centers, Journal of Intelligent Manufacturing.
  15. D.E. Gilsinn, M.A. Davies, B. Balachandran, Stability of Precision Diamond Turning Processes That Use Round Nosed Tools, ASME Journal of Manufacturing Science and Engineering.
  16. K.F. Gurski and R.L.Pego, Normal Modes for a Stratified Viscous Fluid Layer, Royal Society of Edinburgh Proceedings A.
  17. J. Kelso, L.E. Arsenault, S.G. Satterfield, and R.D. Kriz, DIVERSE: A Framework for Building Extensible and Reconfigurable Device Independent Virtual Environments, Virtual Reality 2002 Conference, Orlando, FL, March 24-27, 2002.
  18. J.E. Lavery, D. E. Gilsinn, Representation of Natural Terrain by Cubic L1 and L2 Splines, Trends in Approximation Theory, Vanderbilt University Press.
  19. N. Martys and J. Sims, Computational Study of Colloidal Suspensions using Dissipative Particle Dynamics, in Proceedings of the 73rd Annual Meeting of the Society of Rheology, October 21 - 25, 2001, Bethesda, MD.
  20. G. McFadden and A. Wheeler, On the Gibbs Adsorption Equation for Diffuse Interface Models, Proceedings of the Royal Society (London), Series A.
  21. R. Sekerka, S. Coriell, and G. McFadden, Separation of Scales for Growth of an Alloy Needle Crystal, Metallurgical and Materials Transactions.

 

Submitted

  1. B. Alpert, L. Greengard, and T. Hagstrom, Nonreflecting Boundary Conditions for the Time-Dependent Wave Equation.
  2. H.T. Bandy, M.A. Donmez, D.E. Gilsinn, C. Han, M. Kennedy, A. Ling, N. Wilkin, and K. Ye, A Methodology for Compensating Errors Detected by Process-Intermittent Inspection, NISTIR.
  3. I. Beichl, J. Bernstein, and A. Karim, Large Automated Image Processing Tools for High Throughput Measurements of Polymer Coatings: Initial Report on Software to Quantify Features for High Throughput Measurements of Polymer Coatings, NIST IR.
  4. C.E. Bouldin, J.S. Sims, H.K. Hung, J.J. Rehr, and A. Ankudinov, Parallel Calculation of Electron Multiple Scattering using Lanczos Algorithms, Physical Review B.
  5. A.S. Carasso, The APEX Method in Image Sharpening and the Use of Low Exponent Levy Stable Laws, SIAM Journal on Applied Mathematics.
  6. A. Dienstfrey and J. Huang, Integral Representations for Elliptic Functions, Transactions of the American Mathematical Society.
  7. I. Duff, M. Heroux, and R. Pozo, The Sparse BLAS, ACM Transactions on Mathematical Software.
  8. W.L. George and J.A. Warren, A Parallel 3D Dendritic Growth Simulator using the Phase-Field Method, Journal of Computational Physics.
  9. D.E. Gilsinn, Machine Tool Chatter: A Genuine Hopf Bifurcation, Nonlinear Dynamics.
  10. D.E. Gilsinn, J.E. Lavery, Shape-Preserving, Multiscale Fitting of Bivariate Data by L1 Smoothing Splines, Proc. Conf. Approximation Theory X, St. Louis, MO.
  11. M. Hamstad, J. Gary, and A. O'Gallagher, Effects of Lateral Plate Dimensions on Acoustic Emission Signals from Dipole Sources, Journal of Acoustic Emission.
  12. F. Hunt, H.B. Westlund, G.M. Meyer, The Role of Rendering in Measurement Science for Optical Reflectance and Scattering, NIST Journal of Research.
  13. R. Kacker and N. Fan Zhang, On-line Control Using Integrated Moving Average Model for Manufacturing Errors, Journal of Production Research.
  14. A. Kearsley and A. Reiff, Existence of Weak Solutions to a Class of Non-Strictly Hyperbolic Conservation Laws with Noninteracting Waves, Pacific Journal of Mathematics.
  15. A. Kearsley, F. Hunt, and H. Wan, An Optimization Approach to Multiple Sequence Alignment, Applied Mathematics Letters.
  16. A. Kearsley, P.T. Boggs, and J.W. Tolle, Hierarchical Control of a Linear Diffusion Equation, Proceedings of the 1st Sandia Workshop on PDE-Constrained Optimization.
  17. D.W. Lozier, The NIST Digital Library of Mathematical Functions Project, in electronic Proc. First International Workshop on Mathematical Knowledge Management, http://www.risc.uni-linz.ac.at/conferences/MKM2001/Proceedings/toc.html.
  18. B. Rust, Fitting Nature's Basic Functions Part II: Estimating Uncertainties and Testing Hypotheses, Computing in Science & Engineering 3, No. 6 (Nov/Dec 2001).
  19. J. Sims and S. Hagstrom, Nonrelativistic Energy of the Ground State of Neutral Helium, Journal of Quantum Chemistry.
  20. W.C. Stone, G.S. Cheok, K.M. Furlani, and D. Gilsinn, Object Identification Using Bar Codes Based on LADAR Intensity, International Symposium for Automation and Robotics in Construction (ISARC) 2001, Krakow, Poland, Sept. 10-11.
  21. G. Tanoglu, R. Braun, J. Cahn and G. McFadden, A1-L10 Phase Boundaries and Anisotropy via Multiple-Order-Parameter Theory for an FCC Alloy, Acta Materialia.
  22. B. Williams, H. Grabinski, U. Arz, D. Walker, and B. Alpert, Causal Characteristic Impedance of Planar Transmission Lines.

 

In Process

  1. I. Beichl and F. Sullivan, Computing the Partition Function of the Monomer-Dimer System via Importance Sampling.
  2. I. Beichl, J. Carlson, and F. Sullivan, Approximating Independent Sets with Application to the Hard-Square Entropy Constant.
  3. D.P. Bentz, S. Mizell, S. Satterfield, J. Devaney, W. George, P. Ketcham, J. Graham, J. Porterfield, D. Quenard, F. Vallee, H. Sallee, E. Boller, and J. Baruchel, The Visible Cement Data Set.
  4. A.S. Carasso, D.S. Bright, and A.E. Vladar, The APEX Method and Real-Time Blind Deconvolution of Electron Microscope Images.
  5. B.R. Fabijonas, D.W. Lozier and F.W.J. Olver, Algorithm XXX: Airy Functions.
  6. W.L. George, C-DParLib Reference Manual.
  7. W.L. George, C-DParLib User's Guide.
  8. D.E. Gilsinn, B.R. Borchardt, R.A. Clary, M.A. Donmez, A.V. Ling, R. Rhorer, and J.A. Soons, Comparative Statistical Analysis of Test Parts Manufactured in Production Environments.
  9. D.E. Gilsinn, Estimating Critical Hopf Bifurcation Parameters for Delay Differential Equations with Application to Machine Tool Chatter.
  10. D. Gilsinn, Estimating Critical Hopf Bifurcation Parameters for a Second Order Delay Differential Equation with Application to Machine Tool Chatter.
  11. K.F. Gurski, R. Kollar, and R. L. Pego, Normal Modes for a Stratified Viscous Three-Dimensional Fluid in a Closed Container.
  12. R. Kacker, R. Datla, and A. Parr, Combined Result and Uncertainty from Interlaboratory Evaluations Based on the ISO Guide.
  13. A. Kearsley and C. Lawrence, A New Matrix-Free Interior Point Algorithm for Large-Scale Spherically Constrained Quadratic Programs.
  14. A. Kearsley and G. Cornuejols, An Infeasible Point Method for Solving 0/1 Integer Programming Problems.
  15. P.M. Ketcham, Visualization of Bose-Einstein Condensates.
  16. E. Kim and G. McFadden, The Effect of Anisotropic Surface Diffusion on the Pinch-off of an Axisymmetric Rod.
  17. I.K. Ono, C.S. O'Hern, S.A. Langer, and A.J. Liu, Effective Temperatures of a Driven System Near Jamming.
  18. C.S. O'Hern, S.A. Langer, A.J. Liu, and S.R. Nagel, Random Packings of Frictionless Particles.
  19. B. Rust, A Threshold Singular Component Method for Ill-Posed Problems.
  20. B. Rust, The Variable Projection Method for Nonlinear Fitting.
  21. B. Rust and D. O’Leary, Residual Periodograms for Choosing Regularization Parameters for Ill-Posed Problems.
  22. B. Rust and D. O'Leary, FORTRAN Subroutines for Computing Nonnegatively Constrained Confidence Intervals for Ill-Posed Problems.
  23. B. Rust, Fitting Nature's Basic Functions Part III: Exponentials, Sinusoids and Nonlinear Least Squares.
  24. R. Saha, D.T. Vonk, J.J. Williams, N. Chawla, and S. Langer, Microstructure-Based Object Oriented Finite Element Analysis of the Thermomechanical Behavior of Metal Matrix Composites.
  25. D. Sterling, Self-Synchronizing Ergodic Maps and Chaotic Modulation.
  26. G. Tanoglu, R. Braun, J. Cahn, and G. McFadden, A1-L10 Phase Boundaries and Anisotropy via Multiple-Order-Parameter Theory for an FCC Alloy.
  27. J. Willis, F. Sabina, G. McFadden, and E. Drescher-Krasicka, Acoustic Microscopy of Stress. I. Longitudinal Mode.

 

Visualizations Published

  1. J. Hagedorn and T. Griffin, flow visualizations in N. Martys and J. Douglas, Critical Properties and Phase Separation in Lattice Boltzmann Fluid Mixtures, Physical Review E 63, 031205 (2001).
  2. P. Ketcham, Image of BEC in Parity Magazine, p. 5, November 2000.
  3. D. Feder and P. Ketcham, Image of Bose-Einstein condensate simulation, in The Coolest Gas in the Universe, Scientific American, p. 92, December 2000.
  4. D. Feder and P. Ketcham, Image of Solitons in Bose-Einstein condensate Simulation, cover of Optics & Photonics News 11, No. 12, December 2000.

 

3.2.        Presentations

 

Invited Talks

  1. B. Alpert, Near-Field to Far-Field Transformation with Nonideal Measurements: Spherical Scanning, Boulder, August 30, 2001. (Presentation for Doug Hurlburt and Larry Corey of DARPA.)
  2. I. Beichl, Approximating the Permanent with Importance Sampling, NIST Physics Lab, February 1, 2001.
  3. I. Beichl, Automated Image-Processing Tools for High Throughput Measurements of Polymer Coatings, ATP Intramural Workshop on Combinatorial Methods for Materials R&D, March 23, 2001.
  4. I. Beichl, Estimating the Permanent with Importance Sampling, Mathematics Dept., University of Pennsylvania, Philadelphia, PA, April 10, 2001.
  5. I. Beichl, Estimating the Partition Function for the Monomer-Dimer System, Mathematics Dept., George Mason University, Fairfax, VA, April 20, 2001.
  6. R.F. Boisvert, The Java Grande Forum: Making Java Better for Numerical Computing, Shortcourse on Java for High Performance Computing, Tenth SIAM Conference on Parallel Processing for Scientific Computing, Portsmouth, VA, March 11, 2001.
  7. R.F. Boisvert, Java for High Performance Numerical Computing, Hewlett-Packard High Performance Computer Users Group Meeting, Keynote Lecture, San Mateo, CA, March 19, 2001.
  8. T. Burns, Modeling and Stability of Interrupted Machining, Minisymposium on the Nonlinear Dynamics of Machining Processes, Sixth SIAM Conference on Applications of Dynamical Systems, Snowbird Ski and Summer Resort, Snowbird, UT, May 20-24, 2001.
  9. T. Burns, Regenerative Stability Analysis of Highly Interrupted Machining, Third International Conference on Metal Cutting and High Speed Machining, Metz, France, June 27-29, 2001.
  10. A. Carasso, Direct Blind Deconvolution, American Mathematical Society Annual Meeting, Special Session on Interaction of Inverse Problems and Image Analysis, New Orleans, LA, January 12, 2001.
  11. A. Carasso, Direct Blind Deconvolution and Levy densities, Center for Nonlinear Analysis, Mathematical Sciences Department, Carnegie-Mellon University, Pittsburgh, PA, February 27, 2001.
  12. J. Devaney, Biologically Inspired Computing: Where to in the next 10 years? Workshop on Biologically Inspired Solutions to Parallel Processing Problems, San Francisco, CA, April 23, 2001, panel discussion.
  13. M. J. Donahue, Micromagnetic Dynamics in Thin Films, Advanced Materials Research Institute seminar, University of New Orleans, October 30, 2000.
  14. K. Gurski, Hints for Finding Non-Academic Research Positions (Postdoctoral and Permanent), Association for Women in Mathematics Workshop at the Society for Industrial and Applied Mathematics Annual Meeting, San Diego, CA, July 10, 2001.
  15. K. Gurski, An HLLC-type Approximate Riemann Solver for Ideal Magnetohydrodynamics, Department of Mathematical Sciences Colloquium, George Mason University, November 9, 2001.
  16. J. Hagedorn and J.E. Devaney, Genetic Programming, Electron and Optical Physics Seminar, January 11, 2001.
  17. J. Hagedorn and J.E. Devaney, Genetic Programming for Automatic Algorithm Design, ATP Intramural Workshop on Combinatorial Methods for Materials R&D, NIST, March 23, 2001.
  18. J. Hagedorn and J. Devaney, Genetic Programming and Discovery, ATP National Meeting, Technologies at the Crossroads: Frontiers of the Future, Baltimore, MD, June 4, 2001, poster.
  19. A. Kearsley, Hierarchical Control Problems, University of North Carolina, Chapel Hill, NC, Feb. 27, 2001.
  20. A. Kearsley, An Infeasible Point Method for Integer Programming Problems, Carnegie Mellon University, Pittsburgh, PA, Mar. 28, 2001.
  21. A. Kearsley, Solving 0/1 Programs Using an Infeasible Point Method, Carnegie Mellon University, May 2001.
  22. A. Kearsley, An Infeasible Point Method for Solving a Class of 0/1 Programs, Sandia National Laboratories, June 2001.
  23. D. Lozier, NIST Digital Library of Mathematical Functions, SIMA Technical Seminar Series, NIST Manufacturing Engineering Laboratory, June 26, 2001.
  24. D.W. Lozier, The NIST Digital Library of Mathematical Functions Project, First International Workshop on Mathematical Knowledge Management, Schloss Hagenberg, Austria, September 26, 2001.
  25. G. McFadden, Modeling the Solidification of Non-Axisymmetric Dendrites, Applied Mathematics Seminar, University of Maryland, College Park, February 15, 2001.
  26. G. McFadden, Phase-Field Models of Solidification, 2001 John H. Barrett Memorial Lectures, University of Tennessee, Knoxville, TN, May 10, 2001.
  27. G. McFadden, Taylor-Couette Instabilities with a Crystal-Melt Interface, G.I. Taylor Medalist Symposium in Honor of Stephen H. Davis, 2001 Mechanics and Materials Summer Conference, San Diego, CA, June 28, 2001.
  28. G. McFadden, Phase-Field Models, Gordon Research Conference on Gravitational Effects in Physico-Chemical Systems: Interfacial Effects, New London, NH, July 9, 2001.
  29. D. Porter, panelist, Meet the Tcl Core Team, 8th Tcl/Tk Conference, part of the O'Reilly Open Source Convention, San Diego, CA, July 26, 2001.
  30. R. Pozo, Java Performance Analysis for Scientific Computing, Shortcourse on Java for High Performance Computing, Tenth SIAM Conference on Parallel Processing for Scientific Computing, Portsmouth, VA, March 11, 2001.
  31. R. Pozo, Java Performance for Scientific Applications, NERSC, Lawrence Berkeley Labs, Berkeley, CA, June 2001.
  32. R. Pozo, Java Grande Forum and Java Benchmarks, JavaOne Conference, San Francisco, CA, June 2001.
  33. R. Pozo, Java Performance Analysis for Scientific Computing, Seminar on Java for High End Computing, Edinburgh Parallel Computing Center, Edinburgh, Scotland, November 2000.
  34. B. Saunders, Numerical Grid Generation and 3D Visualization of Special Functions, 2001 Claytor Lecture, National Association of Mathematicians Annual Meeting, Joint Mathematics Meetings, New Orleans, Louisiana, January 13, 2001.
  35. B. Saunders, Effective 3D Visualizations of High Level Mathematical Functions, Mathematical Association of America New Jersey Section Meeting, Rowan University, Glassboro, NJ, April 21, 2001.
  36. B. Saunders, Using Numerical Grid Generation to Develop Interactive 3D Visualizations for the NIST Digital Library of Mathematical Functions, Second Howard-Maryland Mathematics Symposium, Howard University, Washington, D.C., April 27, 2001.
  37. B. Saunders, Using Numerical Grid Generation to Develop Effective 3D Visualizations for a Digital Library, IMA Career Workshop in Computational Science and Engineering: Minorities and Applied Mathematics – Connections to Industry and Government Laboratories, IMA, University of Minnesota, Minneapolis, MN, May 4-6, 2001.
  38. B. Saunders, Effective 3D Visualizations for the NIST Digital Library of Mathematical Functions, Mathematical Association of America Maryland, D.C., Virginia  Section Meeting, Virginia Tech, Blacksburg, VA, October 19, 2001.
  39. A.J. Slifka and B.J. Filla, Steady-State Measurement of Thermal Conductivity of Ceramics and Ceramic Coatings, National Physical Laboratory, Teddington, Middlesex, UK, May 29, 2001.
  40. D. Sterling, Filter Coupled Ergodic Maps for Robust Synchronization, Dept. of Applied Math Dynamical Systems Colloquium, University of Colorado, Boulder, CO, February 22, 2001.

 

Conference Presentations

  1. B. am Ende, Current Status of Meshing of the Wakulla Springs Point Cloud, National Speleological Society, Mount Vernon, KY, July 27, 2001.
  2. I. Beichl, Applications of Sinkhorn Balancing: Low Cost Approximations for Hard Problems, Eastern Section Meeting of the American Mathematical Society, Hoboken, NJ, April 28, 2001.
  3. J. Bernal, REGTET: A Program for Computing Regular Tetrahedralizations, 2001 International Conference on Computational Science, San Francisco, CA, May 28-30, 2001.
  4. R.F. Boisvert, Java Numerical Performance, IFIP Working Group 2.5 Meeting, Amsterdam, The Netherlands, May 26, 2001.
  5. R.F. Boisvert, The NIST Digital Library of Mathematical Functions, Workshop on Scientific Computing and Computational Science, Centrum voor Wiskunde en Informatik (CWI), Amsterdam, The Netherlands, May 28, 2001.
  6. C. Bouldin, J. Sims, H. Hung, J.J. Rehr, and A. Ankudinov, Rapid Computation of X-ray Absorption Near Edge Structure Using Parallel Computation, American Physical Society Meeting, Seattle, WA, March 12-16, 2001, Abstract L24.013 in Bulletin of the American Physical Society, Vol. 46, No. 1, 2001.
  7. J. Devaney, J. Hagedorn, O. Nicolas, G. Garg, A. Samson, M. Michel, A Genetic Programming Ecosystem, Proceedings of the 15th Annual International Parallel & Distributed Processing Symposium, IPDPS 2001, Workshop on Biologically Inspired Solutions to Parallel Processing Problems, San Francisco, April 23, 2001.
  8. J. Devaney, Genetic Programming for Data Visualization and Mining, Workshop on Combinatorial Methods for Materials R&D: Systems Integration in High Throughput Experimentation, American Institute of Chemical Engineers National Meeting, November 15, 2000.
  9. J.E. Devaney, The Role of Choice in Discovery, The Third International Conference on Discovery Science, DS 2000, December 4-6, 2000, Kyoto, Japan.
  10. K. Devine, E. Boman, B. Hendrickson, W. Mitchell and C. Vaughan, Applications of Dynamic Load Balancing, Sixth U.S. National Congress on Computational Mechanics, Dearborn, MI, August 2001. (Presented by K. Devine.)
  11. M. Donahue, Micromagnetic Calculation of the High Frequency Dynamics of Nano-Size Rectangular Ferromagnetic Stripes, The 8th Joint MMM-Intermag Conference, San Antonio, TX, January 9, 2001.
  12. M. Donahue and D. Porter, Generalization of a Two-dimensional Micromagnetic Model to Non-Uniform Thickness, The 8th Joint MMM-Intermag Conference, San Antonio, TX, January 10, 2001 (poster).
  13. M.J. Donahue and D. Porter, OXS: An Extensible Public Domain Solver for Micromagnetics, poster presented at HMM 2001, Ashburn, VA, May 21, 2001.
  14. M. J. Donahue and D. G. Porter, High Resolution Study of Discretization Effects in muMAG Standard Problem No. 1, 46th Annual Conference on Magnetism and Magnetic Materials, Seattle, WA, November 13, 2001.
  15. J. Eicke, R.D. McMichael, M.J. Donahue and D.G. Porter, Micromagnetic Calculation of Ferromagnetic Resonance Linewidth, HMM 2001, Ashburn, VA, May 23, 2001.
  16. D.E. Gilsinn, J.E. Lavery, Representation of Urban Terrain by L1 Splines, Tenth International Conference on Approximation Theory, St. Louis, MO, March 26-29, 2001.
  17. K. Gurski, An HLLC-type Approximate Riemann Solver for Ideal Magnetohydrodynamics, American Physical Society Division of Computational Physics, Boston, MA, June 25, 2001.
  18. F. Hunt, An Optimization Approach to Multiple Sequence Alignment, Atlantic Symposium on Computational Biology and Genome Information Systems and Technology, Durham, NC, March 16, 2001.
  19. A. Kearsley, Optimization Approach to Multiple Sequence Alignment, Advanced Technology Program National Meeting, Baltimore (June 3-5, 2001), poster.
  20. D. Lozier, Progress Report on the NIST Digital Library of Mathematical Functions, Sixth International Symposium on Orthogonal Polynomials, Special Functions and Applications, Rome, Italy, June 20, 2001.
  21. D.W. Lozier, Progress Report: Digital Library of Mathematical Functions Project, SIAM Annual Meeting, San Diego, CA, July 13, 2001.
  22. N. Martys, J. Sims, Computational Study of Colloidal Suspensions using Dissipative Particle Dynamics, 73rd Annual Meeting of the Society of Rheology, October 21 - 25, 2001, Bethesda, Maryland in the Session: Two Phase Systems: Emulsions, Blends and Suspensions.
  23. N. Martys, J. Hagedorn, Modeling Complex Fluids with the Lattice Boltzmann Method, Society of Rheology Meeting, October 2001, Bethesda, MD.
  24. N. Martys, J. Sims, Application of Dissipative Particle Dynamics for Modeling Cement Based Materials, 2000 MRS Fall Meeting, Symposium on Materials Science of High Performance Concrete, Boston, Nov 28-30, 2000.
  25. N. Martys and J. Hagedorn, Modeling Fluid Flow in Cement Based Materials using the Lattice Boltzmann Method, to be presented at the Gordon Conference on Cement Based Materials, Ventura, CA, April 2002.
  26. R. D. McMichael, M. J. Donahue, D. G. Porter and J. Eicke, Review of Standard Problems in Micromagnetics, HMM 2001, Ashburn, VA, May 21, 2001, poster.
  27. W.F. Mitchell, A Refinement-Tree Based Partitioning Method for Adaptively Refined Grids, Tenth SIAM Conference on Parallel Processing for Scientific Computing, Portsmouth, VA, March 13, 2001.
  28. W.F. Mitchell and E. Tiesinga (presented by E. Tiesinga (842)), Multigrid Modeling of Two Confined and Interacting Atoms, Tenth Copper Mountain Conference on Multigrid Methods, Copper Mountain, CO, April 2, 2001.
  29. W.F. Mitchell, Load Balancing with a Refinement-Tree Based Partition, Sixth U.S. National Conference on Computational Mechanics, Dearborn, MI, August 3, 2001.
  30. D. Porter, Fulfilling the Promise of [Package Unknown], 8th Tcl/Tk Conference, O'Reilly Open Source Convention, San Diego, CA, July 27, 2001.

 

Visualizations Produced

  1. D. L. Feder, L.A. Collins, C.W. Clark, B.D. Anderson, P.C. Haljan, C.A. Regal, E.A. Cornell, P. Ketcham, T. Griffin, Dark Soliton in a Trapped Bose-Einstein Condensate Decaying into Quantum Vortex Rings, Mpeg movie presented at American Physical Society Meeting, March 12-16, 2001, Seattle, WA.
  2. T. Griffin, visualizations in Modeling Complex Fluids, presentation by N. Martys at Mechanical Engineering and Environmental Science joint seminar at Johns Hopkins University, March 6, 2001.
  3. T. Griffin, visualizations in Computer Simulations of Concrete Rheology, presentation by N. Martys at the American Concrete International Meeting, March 26, 2001, Philadelphia, PA.
  4. T. Griffin edited a video about the SURF program for a presentation by K. Gebbie, given at the American Physical Society (APS) April 28 - May 1, 2001 meeting in Washington, DC.
  5. T. Griffin made videos for N. Martys that were shown at three meetings: the Symposium on Aggregate Research sponsored by the International Center for Aggregate Research (ICAR), Austin, TX, April 23, 2001 (participants included major players in the aggregate industry, e.g., Vulcan and Lafarge); the VCCTL Consortium meeting at NIST, April 19, 2001 (participants included Master Builders, W.R. Grace, Holnam, and Cementex); and the Interfacial Consortium meeting, May 2 (participants included Dow Chemical and Pittsburgh Paints).
  6. T. Griffin digitized experimental footage and visualized simulation data of wave formations in a large pool provided by B. Reinhardt for a presentation by D. Lozier at the 6th International Symposium on Orthogonal Polynomials, Special Functions and Applications (OPSFA), held in Rome, Italy, June 18-22.

 

 

3.3.        Conferences, Minisymposia, Lecture Series, Short-courses

 

MCSD Seminar Series

  1. Isom H. Herron (Rensselaer Polytechnic Institute), Mathematical Issues Arising from the Onset of Fluid Instabilities, November 28, 2000.
  2. Elsa Newman Schaefer (Marymount University), Use of the Monge-Ampère Equations to Model Optical Systems and Jet Flow, December 5, 2000.
  3. Isabel Beichl (MCSD), Estimating the Permanent with Importance Sampling, December 19, 2000.
  4. Dianne P. O'Leary (MCSD and University of Maryland), A Dozen Open Problems at the Interface Between Mathematics and Computer Science, January 10, 2001.
  5. Carl H. Smith (University of Maryland at College Park), Three Decades of Team Learning, Jan. 16, 2001.
  6. Patrick Geoffray (Myricom, Inc.), Myrinet: A Smart Interconnect for Parallel Computing, February 5, 2001.
  7. Philip W. Smith (Texas Tech), High Performance Computing and Visualization at Texas Tech, April 6, 2001.
  8. Vassilios Tsiantos (Vienna University of Technology), Numerical Methods for ODEs in Micromagnetics --- Effect of Spatial Correlation Length in Langevin Micromagnetic Simulations, May 24, 2001.
  9. David Song (Oxford), Qubit Transformations, July 9, 2001.
  10. Paul Boggs (Sandia National Laboratories), A Software System for PDE-Based Optimization, June 27, 2001.
  11. Manil Suri (Univ. of Maryland Baltimore County), The p and hp Finite Element Modeling of Thin Structures, October 30, 2001.
  12. Barbara A. am Ende (MCSD), Visualizing a Sub-Aqueous, Subterranean Cave, November 6, 2001.
  13. Katharine F. Gurski (MCSD), An HLLC-type Approximate Riemann Solver for Ideal Magnetohydrodynamics, November 13, 2001.
  14. Peter A. Clarkson (University of Kent, UK), The Painlevé Equations - Nonlinear Special Functions, November 20, 2001.
  15. Raghu Kacker (MCSD), Analysis of Uncertainty in Interlaboratory Evaluations Based on the ISO Guide, November 27, 2001.
  16. James Lawrence (MCSD and George Mason Univ.), Some Problems of Combinatorial Geometry, December 4, 2001.
  17. Michael Mascagni (Florida State University), First and Last Passage Random Walk Algorithms, December 10, 2001.
  18. Jon Tolle (Univ. of N. Carolina), Hierarchical Control Problems, December 11, 2001.
  19. Fredrick R. Phelan Jr. (Multiphase Material Group, Polymer Division, NIST), Microstructure Modeling of Polymer Blends in Complex Flows, December 18, 2001.

 

DLMF Seminar Series

  1. Donald Richards (Univ. of Virginia), Special Functions of Matrix Argument, and their Applications in the Mathematical and Physical Sciences, January 4, 2001.
  2. George Casella (Univ. of Florida), An Introduction to Monte Carlo Statistical Methods, and Jun Liu (Harvard Univ.), Sequential Monte Carlo and Related Topics, at the Monte Carlo Methods and NIST Web Handbooks Symposium, in conjunction with Statistical Engineering Division, April 4, 2001.
  3. David Alan Grier (George Washington Univ.), Origins of the Handbook of Mathematical Functions, May 22, 2001.
  4. M. J. Seaton (Univ. College, London), Coulomb Functions for Attractive and Repulsive Potentials and for Positive and Negative Energies, July 16, 2001.
  5. Ulrich Jentschura (Technical Univ. of Dresden), Nonperturbative Physical Effects and Divergent Series, August 9, 2001.
  6. Peter A. Clarkson (University of Kent, UK), The Painlevé Equations - Nonlinear Special Functions, November 20, 2001.
  7. Annie Cuyt, Brigitte Verdonk, Johan Vervloet (Univ. of Antwerp, Belgium), Contribution to a Digital Library of Special Functions, November 21, 2001.
  8. Michael Kohlhase (Carnegie Mellon Univ.), Administration, Visualization, and Distribution of Mathematical Knowledge in the Internet Era, December 3, 2001.

 

Scientific Object Oriented Programming Users Group (SCOOP)

  1. Organizational Meeting, October 24, 2000 
  2. Ken Snyder (BFRL), Plans for an Upcoming Project, November 1, 2000.
  3. Steve Langer (ITL), The Design of OOF, December 6, 2000.
  4. Michael Donahue (ITL), An Introduction to the OOMMF Extensible Solver Class, February 6, 2001.
  5. Mike McLay (EEEL), Python, March 29, 2001.

 

Local Events Organized

 

Presentations

  1. R. Bohn, Introduction to Gaussian98, Gaithersburg, MD, August 7, 2001.
  2. J. Filla, Data Acquisition and Program Development Using LabVIEW 6i, with an introduction to visualization and consulting by T. Griffin and B. am Ende, Gaithersburg, MD, January 16, 2001.
  3. J. Filla hosted a National Instruments seminar on Technical Data Management with Otmar Foehner and Ed McConnell (National Instruments), using a new product called DIAdem, at the Boulder Labs on May 14, 2001.
  4. J. Filla hosted a LabVIEW users meeting, a LabVIEW 6 presentation, and a hands-on LabVIEW networking seminar taught by Ed McConnell of National Instruments at NIST-Boulder on November 2, 2001.
  5. J. Filla co-organized (with D. Smith, ITL) a demonstration, for NIST staff, of the Immersive Visualization Environment at the CU-Boulder BP Center for Visualization on November 16, 2001.
  6. S. Satterfield organized a presentation by Virginia Tech researchers Ron Kriz, John Kelso, and Lance Arsenault on their DIVERSE virtual reality software, January 17, 2001.
  7. S. Satterfield organized a presentation on DIVERSE by John Kelso of Virginia Tech on February 5, 2001.
  8. S. Satterfield coordinated with SGI for a presentation at NIST by SGI OpenGL Performer Product Manager Kimberly Neff on March 29, 2001.

 

Short Courses

  1. J. Filla, Hands-on Introduction to LabVIEW, NIST-Boulder, August 7-8, 2001 (12 hours).
  2. J. Koontz, Introduction to Java, NIST, Boulder, March 22, April 5 and April 19, 2001 (6 hours each).

 

Workshops

  1. S. Langer co-organized a workshop in late June to discuss the current state of the OOF finite element program and to plan future developments. Approximately 65 OOF users and developers from 5 countries, 9 companies, 18 universities, and 4 national labs attended the two-day workshop.
  2. D. Lozier organized a one-week visit in January by I. Olkin and D. Kemp, of Stanford University and the University of St. Andrews respectively, to advance the DLMF chapter on statistical functions. Olkin presented a NIST Colloquium, Statistical Meta-Analysis. A third visitor, D. Richards of the University of Virginia, was here for one day and presented a DLMF Seminar, Special Functions of Matrix Argument and Their Applications in the Mathematical and Physical Sciences.
  3. D. Lozier organized a second one-week visit in late March and early April by I. Olkin and D. Kemp to advance the DLMF chapter on statistical functions. He also organized a one-day meeting on Monte Carlo and NIST Web Handbooks, jointly sponsored by the DLMF Project and the Statistical Engineering Division. Lozier and Olkin opened the meeting with short background talks. Two additional visitors were here for the meeting: G. Casella of the University of Florida presented An Introduction to Monte Carlo Statistical Methods and J. Liu of Harvard University presented Sequential Monte Carlo and Related Topics. The meeting attracted many attendees from the Washington area.

 

External Event Organization

  1. R.F. Boisvert is Co-Editor of the Proceedings, IFIP Working Conference on the Architecture of Scientific Software, Ottawa, Canada, October 2000.
  2. R. Boisvert served on the Program Committee for the ACM Java Grande/ISCOPE Conference, held June 2-4, 2001 at Stanford University, Stanford, CA.
  3. R. Boisvert is serving on the Program Committee for the conference Iterative Solvers for Large Linear Systems: Celebrating 50 Years of the Conjugate Gradient Method, which will be held in Zurich in February 2002. D. O’Leary will be giving a plenary talk at the conference.
  4. M. Donahue served on the program committee for the Third International Symposium on Hysteresis and Micromagnetics Modeling (HMM '01), May 21-23, 2001, George Washington University Virginia Campus, Ashburn, Virginia.
  5. A. Kearsley was a co-organizer of, and participant in, the First Sandia Workshop on PDE-Constrained Optimization, Santa Fe, NM, April 4-6, 2001.
  6. D. Lozier conducted a meeting of the officers and membership of the SIAM Activity Group on Orthogonal Polynomials and Special Functions at the Sixth International Symposium on Orthogonal Polynomials, Special Functions and Applications, Rome, Italy, June 19, 2001.
  7. D. Lozier is on the organizing committee for the IMA Summer Program: Special Functions in the Digital Age, to be held at the Institute for Mathematics and Its Applications, University of Minnesota, July 22 - August 2, 2002.
  8. J. Meza of Sandia National Laboratories and F. Hunt co-organized the Graduate Student Focus on Diversity Day held during the SIAM Annual Meeting in July. F. Hunt wrote letters of invitation and part of a proposal to be submitted by SIAM for funding by the Department of Energy; the funds are to be used for the speakers' travel and other expenses of the day.
  9. W. Mitchell co-organized (with Prof. Joseph Flaherty, RPI) a minisymposium on Dynamic Load Balancing for Adaptive Computations at the Sixth U.S. National Congress on Computational Mechanics, Dearborn, MI on August 1-3, 2001.
  10. R. Pozo organized a short-course entitled Java for High Performance Computing for the Tenth SIAM Conference on Parallel Processing for Scientific Computing, Portsmouth, VA, March 11, 2001.
  11. R. Pozo served as Publicity Chair for the Joint ACM Java Grande ISCOPE 2001 Conference (June 2-4, 2001, Stanford University, California).
  12. R. Pozo served on the Program Committee for the Java for High Performance Computing Workshop at the European High Performance Computing and Networking (HPCN) 2001 conference (June 25-27, 2001; Amsterdam, the Netherlands).

 

Other Participation

  1. B. Alpert attended a DARPA workshop on Optimized Portable Algorithms and Application Libraries (OPAAL), May 18-19, 2001, in Seattle, as program advisor and reviewer. The program, in which teams from Caltech, University of Illinois, and University of Iowa participate, promotes interdisciplinary research in parallelism, meshless methods, and hierarchical methods for handling elaborate geometric issues arising in the solution of partial differential equations.
  2. J. Filla participated in the NIST Centennial celebration in Boulder on May 11 and 12. At the request of the organizers of the Boulder event (Fred McGehan, Mary Brunner, and Stephanie Outcault), Jim Siegwarth and Filla set up a poster session on the dinosaur Drinker nisti and their other extracurricular dinosaur-hunting activities.
  3. W. Mitchell chaired the session on Advances in Grid and Mesh Technology at the Tenth SIAM Conference on Parallel Processing for Scientific Computing in Portsmouth, VA, March 12-14, 2001.
  4. D. Porter served as a Session Chairman at the 8th Joint MMM-Intermag Conference, San Antonio, Texas, January 8, 2001.

 

 

3.4.        Software Released

  1. E. Boman, K. Devine, B. Hendrickson, W. Mitchell, M. St. John, C. Vaughan (all participants other than W. Mitchell are with Sandia National Laboratories), Zoltan Version 1.23, a suite of parallel algorithms for dynamically partitioning problems over sets of processors, February 2001. Available at http://www.cs.sandia.gov/Zoltan.
  2. M. Donahue and D. Porter released OOMMF 1.2 alpha 1, January 22, 2001 (available for download at http://math.nist.gov/oommf).
  3. M. Donahue and D. Porter released OOMMF 1.2 alpha 2, May 29, 2001 (available for download at http://math.nist.gov/oommf/).
  4. M. Donahue and D. Porter released OOMMF 1.1 beta 1, Oct 2, 2001 (available for download at http://math.nist.gov/oommf/).
  5. S. Langer released OOF 1.1.11 and PPM2OOF 1.1.19.
  6. R. Pozo released a version update to SparseLib++ (v. 1.5d), which removed dependencies on pre-ANSI C++ compilers, and posted it on the web site http://math.nist.gov/sparselib++.
  7. R. Pozo updated the computational C++ linear solvers based on QR and LU matrix decompositions in the Template Numerical Toolkit and posted them on the web site http://math.nist.gov/tnt.

 

 

3.5.        External Contacts

 

MCSD staff members make contact with a wide variety of organizations in the course of their work. Examples of these follow.

 

Industrial Labs

Advanced Biologic Corp.

Advanced Research Systems, Inc.

Alabama Cryogenic Engineering

Altair Engineering, Inc.

American Superconductor

Avaya

Cadence Design Systems

Chesapeake Cryogenics

Compaq Corp.

Dow Chemical

Endocardial Solutions

Frontier-Technologies, Inc.

General Electric

Hewlett Packard

Hughes Corp.

IBM

Intel

Irvine Sensors

Johnson Scientific Group

Lucent Technologies

Motorola

MPI-Software Technology

Myricom, Inc.

Northrop Grumman

Praxair, Inc.

SAIC

Schema Group

Sierra Lobo, Inc.

Sun Microsystems

Sunrise-Systems Limited

Texas Instruments

The MathWorks


 

Government/Non-profit Organizations

 

Air Force Office of Scientific Research (AFOSR)

American Institute of Physics (AIP)

American Mathematical Society (AMS)

American Museum of Natural History

Argonne National Labs

Army Research Office (ARO)

Association for Computing Machinery (ACM)

Centre National de la Recherche Scientifique (France)

Defense Advanced Research Projects Agency (DARPA)

Fermi National Labs

IDA Center for Computing Sciences

Idaho National Engineering and Environmental Laboratory

IEEE Computer Society

Institute for Computer Applications in Science and Engineering (ICASE)

Lawrence Livermore Labs

Mammoth Cave National Park

Mathematical Association of America (MAA)

NASA

National Science Foundation (NSF)

National Institutes of Health (NIH)

Sandia National Laboratories

Society for Industrial and Applied Mathematics (SIAM)

U.S. Department of Energy (DoE)

W.M. Keck Foundation


 

Universities

 

Arizona State University

Carnegie Mellon University

Case Western Reserve University

Clemson University

College of William and Mary

Columbia University

Cornell University

Courant Institute

Dartmouth College

Federal Institute of Technology Zurich (ETH)

Florida State University

George Mason University

George Washington University

Georgia Tech

Harvard University

Indiana University

Israel Institute of Technology

Johns Hopkins University

Louisiana State University School of Medicine

Marymount University

New Jersey Institute of Technology

New York University

Northwestern University

Oxford University (UK)

Purdue University

Rensselaer Polytechnic Institute

Rice University

Santa Monica College

Southern University (Baton Rouge)

Stanford University

SUNY Binghamton

Swarthmore College

Technical University of Denmark

Technical University of Dresden (Germany)

Technical University of Vienna (Austria)

Texas Tech

Towson University

UMIST (UK)

Uniformed Services University of the Health Sciences

Universitaet Wuerzburg (Germany)

Université Louis Pasteur (France)

UCLA

University of California at Irvine

University College (London)

University of Alabama

University of Antwerp (Belgium)

University of Bayreuth (Germany)

University of Chicago

University of Colorado

University of Delaware

University of Houston

University of Iowa

University of Jyvaskyla (Finland)

University of Manchester (UK)

University of Maryland Baltimore County

University of Maryland, College Park

University of Minnesota

University of New Mexico

University of North Carolina

University of Pennsylvania

University of Pittsburgh

University of Southampton (UK)

University of Virginia

University of Washington

University of Wisconsin

Vanderbilt University

Virginia Tech

Wake Forest University


 

 

3.6.        Other Professional Activities

 

Internal

 

  1. R. Boisvert serves on the Scientific Computing Working Group of the Information Technology Services Planning Team.
  2. A. Dienstfrey served on the Research Advisory Committee.
  3. D. Lozier represents ITL on the Board of Editors of the NIST Journal of Research.
  4. D. Porter began service on the Open Systems Subgroup of the Information Technology Services Planning Team.
  5. Staff members regularly review manuscripts for the Washington Editorial Review Board (WERB) and the Boulder Editorial Review Board (BERB), as well as proposals for the NIST ATP and SBIR programs.

 

External

 

  1. B. Alpert was appointed to the Editorial Board of the SIAM Journal on Scientific Computing.
  2. I. Beichl serves on the Editorial Board of Computing in Science and Engineering.
  3. I. Beichl and W. Mitchell have been invited to join the editorial board of the new Journal of Numerical Analysis and Computational Mathematics to be published by Cambridge International Science Publishing.
  4. R. Boisvert serves as Editor-in-Chief of the Association for Computing Machinery (ACM) Transactions on Mathematical Software. He oversees activities of 15 associate editors, and is responsible for all acceptance/rejection decisions for research papers. The journal publishes 500 pages per year.
  5. R. Boisvert serves as Vice-Chair of the ACM Publications Board. He is the senior member of the Board, specializing in ACM’s electronic publication program.
  6. R. Boisvert serves as Co-Chair of the Numerics Working Group of the Java Grande Forum.
  7. R. Boisvert served on the Program Committee for the joint 2001 ACM Java Grande/ISCOPE Conference.
  8. R. Boisvert is serving on the Program Committee for the Latsis Symposium 2002: Iterative Solvers for Large Linear Systems: Celebrating 50 Years of the Conjugate Gradient Method to be held in Zurich in February 2002.
  9. R. Boisvert continued his service as Chair of the International Federation for Information Processing's (IFIP) Working Group 2.5 on Numerical Software. A meeting of the working group was held at the University of Amsterdam on May 26-27. An associated open workshop on Scientific Computing and Computational Science was held at the Centrum voor Wiskunde en Informatica (CWI) in Amsterdam on May 28-29.
  10. M. Donahue serves on the Editorial Board for the Journal of Computational Methods in Sciences and Engineering.
  11. F. Hunt serves on the DOE Biological and Environmental Research Advisory Committee.
  12. R. Kacker is serving on an ASTM (American Society for Testing and Materials) committee formed to revise its standards for test method development to be consistent with the ISO Guide on the expression of uncertainty and the NIST policy on the expression of uncertainty.
  13. A. Kearsley served on a panel that developed the white paper Commentary from the Scientific Grassroots: A White Paper on the Issues and Need for Public Funding of Basic Science and Engineering Research. The paper was delivered at a small breakfast ceremony at the House Office Building in September 2001. The panel was sponsored by the NSF in conjunction with the Jemison Institute at Dartmouth College.
  14. D. Lozier serves as Associate Editor of Mathematics of Computation.
  15. D. Lozier serves as Chair of the SIAM Activity Group on Orthogonal Polynomials and Special Functions.
  16. G. McFadden was appointed Associate Editor of the Journal of Crystal Growth. He also serves on the Editorial Boards of the Journal of Computational Physics and the SIAM Journal on Applied Mathematics.
  17. G. McFadden was a co-editor of the proceedings of a conference, Interfaces for the 21st Century, to be published by Imperial College Press, London.
  18. D. Porter continued his service as a leading member of the Tcl Core Team, serving as maintainer of the package management and initialization functional areas of Tcl.
  19. D. Porter continued collaboration with Donal Fellows of the University of Manchester and the Tcl Core Team to produce a web-based drafting and archiving service for proposals of changes to the Tcl programming language. The service is now in operation at http://purl.org/tcl/tip/.
  20. R. Pozo was appointed Associate Editor of the ACM Transactions on Mathematical Software (TOMS).
  21. B. Saunders served on Mathematical Association of America (MAA) MD-DC-VA Section committee to select the Year 2001 winner of the John Smith Award for Distinguished College or University Teaching.
  22. Division staff referee manuscripts for a wide variety of journals, including Acta Metallurgica, Computers and Mathematics with Applications, Computing in Science and Engineering, Fluid Dynamics Research, IEEE Transactions on Antennas and Propagation, IEEE Transactions on Circuits and Systems (Part I), Institute of Statistical Mathematics, Journal of Applied Physics, Journal of Computational and Applied Mathematics, Journal of Computational and Graphical Statistics, Journal of Computational Physics, Journal of Crystal Growth, Journal of Fluid Mechanics, Journal of Manufacturing Science and Engineering, Materials Science and Engineering A, Numerical Algorithms, Physica D, Physical Review, Physical Review B, Physics of Fluids, SIAM Journal on Numerical Analysis, and SIAM Journal on Scientific Computing, as well as for the publisher Springer-Verlag.
  23. Division staff referee manuscripts for a wide variety of conferences, including the ASME Design Engineering Technical Conference, the Java Grande/ISCOPE Conference, Scientific Computing 2001, the 3rd International Conference on Large-Scale Scientific Computations, and the Fifteenth TOYOTA Conference on Scientific and Engineering Computations for the 21st Century.
  24. Staff members review proposals for the following research programs: AFOSR, ARO, DARPA, DOE, NASA, NSF, and the W.M. Keck Foundation.

 

Outreach

  1. B.A. am Ende, T. Griffin, P. Ketcham, J. Devaney, and S. Satterfield gave demos of visualization research to Roosevelt High School students on June 27 and July 31, 2001.
  2. I. Beichl mentored J. Bernstein, a Montgomery Blair High School student, who worked on image processing. She attended the Annual Research Convention at Montgomery Blair, where all the projects were featured.
  3. I. Beichl organized the ITL part of the NIST Summer Undergraduate Research Fellowship (SURF) program. She mentored a SURF student, Jennifer Carlson, who worked on Monte Carlo methods for combinatorial graph problems. She also participated in the effort to make a new SURF video for recruiting purposes.
  4. I. Beichl served on the NIST Diversity Board, chaired the Evaluation Subcommittee and wrote a paper, A Review of the 1993 Report of the Affirmative Employment Committee for Female Scientists and Engineers. 
  5. MCSD hosted a half-day visit to NIST by Caroline Nguyen of Brooklyn, NY, a finalist in the 2001 Intel Science Talent Search. Her project involved the analysis of a mathematical game. She met with Ron Boisvert, Fern Hunt, and Isabel Beichl, had lunch with five MCSD staff members, and toured the MCSD RAVE visualization facility.
  6. J. Filla and J. Siegwarth (NIST-CSTL) presented a poster session on the dinosaur Drinker nisti at the NIST Centennial celebration in Boulder on May 11 and 12, 2001.
  7. D.E. Gilsinn acted as judge for the Science Expo at North Bethesda Middle School, January 18, 2001.
  8. K. Gurski mentored Paula Thonney, a master's student at Southern Illinois University, and Koung Hee Leem, a Ph.D. candidate at the University of Iowa, through the Association for Women in Mathematics mentor program.
  9. K. Gurski mentored Genetha Gray, a Ph.D. candidate at Rice University.
  10. F. Hunt spoke to an employee group at the Environmental Protection Agency about her life and career. The talk, given on March 19, 2001, was part of Women's History Month.
  11. F. Hunt attended the 2001 reunion of the EDGE (Enhancing Diversity in Graduate Education) program, where she gave the keynote address.
  12. F. Hunt and Juan Meza of Sandia National Laboratories were co-chairs of the Graduate Student Focus on Diversity held during the SIAM Annual Meeting in San Diego, California.
  13. A. Kearsley is acting as an under-represented minority mentor for two female African-American graduate students (University of Maryland and University of Pittsburgh).
  14. A. Kearsley is serving as a project director for two Blair High School science student projects.
  15. B. Saunders attended the reunion of the George Washington University Summer Program for Women at the Joint Mathematics Meetings, New Orleans, Louisiana, January 12, 2001.
  16. B. Saunders responded to several requests for biographical information from students in Georgia and Texas who were assigned research projects on African American mathematicians.
  17. B. Saunders received a certificate from Brown Station Elementary School, Gaithersburg, in recognition of her work with 5th grade students in mathematics.
  18. B. Saunders gave a presentation on her work at NIST to mathematics, computer science and engineering students at Montgomery College, Rockville, Maryland, October 25, 2001.
  19. MCSD staff participated in a visit to ITL on June 28 by a group of 15 students from minority institutions serving internships at the National Science Foundation. R. Boisvert presented an overview of NIST and ITL, followed by a technical presentation on the Digital Library of Mathematical Functions project. This was followed by a panel discussion on careers in mathematics by I. Beichl, J. Bernal, F. Hunt, and A. Kearsley.

 


 

   Part IV - Staff

 

 

 


Charge density on a computed diffusion-limited cluster aggregate.

 


MCSD consists of full-time permanent staff located at NIST laboratories in Gaithersburg, MD and Boulder, CO. This staff is supplemented by a variety of faculty appointees, guest researchers, postdoctoral appointees, and students. The following list reflects the status at the end of FY2001.

 

Legend: F = Faculty Appointee, GR = Guest Researcher, PD = Postdoctoral Appointee, S = Student, PT = Part Time

 

Division Staff

 

     Ronald Boisvert, Chief

     Robin Bickel, Secretary

     Peggy Liller, Clerk

     Joyce Conlon

 

     Brianna Blaser, S

     André Deprit, GR

     Jeffrey Fong, GR

     Karin Remington, PT

     David Song, PD

 

Mathematical Modeling Group

 

     Geoffrey McFadden, Leader

     Bradley Alpert (Boulder)

     Timothy Burns

     Alfred Carasso

     Andrew Dienstfrey (Boulder)

     Michael Donahue

     Fern Hunt

     Anthony Kearsley

     Stephen Langer

     Agnes O'Gallagher (Boulder)

     Donald Porter

 

     Daniel Anderson, GR

     Eric Baer, S

     James Blue, GR

     Richard Braun, F

     Eleazer Bromberg, GR

     Daniel Cardy, S

     John Gary, GR

     Katharine Gurski, PD

     Kelly McQuighan, S

     Bruce Murray, GR

     Dianne O'Leary, F

 

Mathematical Software Group

 

     Roldan Pozo, Leader

     Daniel Lozier

     Marjorie McClain

     Bruce Miller

     William Mitchell

     Bert Rust

     Bonita Saunders

 

     Bruce Fabijonas, F

     Elaine Kim, S

     Leonard Maximon, GR

     Frank Olver, GR

     G.W. Stewart, F

     Abdou Youssef, F

 

Optimization and Computational Geometry Group

 

     Ronald Boisvert, Acting Leader

     Isabel Beichl

     Javier Bernal

     David Gilsinn

     Christoph Witzgall

 

     Theodore Einstein, GR

     Saul Gass, F

     Alan Goldman, GR

     James Lawrence, F

     David Song, PD

     Francis Sullivan, GR

 

Scientific Applications and Visualizations Group

 

     Judith Devaney, Leader

     Yolanda Parker, Secretary

     Barbara am Ende

     Robert Bohn

     James Filla (Boulder)

     William George

     Terence Griffin

     John Hagedorn

     Howard Hung

     Peter Ketcham

     John Koontz (Boulder)

     Steven Satterfield

     James Sims

 

     Deborah Caton, S

     Stefanie Copley (Boulder), S

     Howland Fowler, GR

     Julien Franiette, GR

     Olivier Nicolas, GR

     John-Lloyd Littlefield, S

     Vital Pourprix, GR