Modeling and Simulation:
A NIST Multi-Laboratory
Strategic Planning Workshop


Gaithersburg, MD
September 21, 1995


Executive Summary

Modeling and simulation (M & S) play important roles in the commercial and industrial world. M & S groups in industry are able to survive because they can demonstrate to their management that an investment in M & S saves far more money than it costs.

NIST staff have a role in performing M & S, both within NIST as part of NIST's laboratory programs, and as part of NIST's support for industry.

NIST also has a role to play in developing tools and environments for M & S and in disseminating software for M & S.

Finally, M & S are worthwhile only if excellent staff are hired for those tasks, staff capable of interdisciplinary interactions.

Workshop Overview

The workshop consisted of an introduction; five talks, each followed by a discussion period; and an open discussion session. Capsule versions follow immediately; more substantial summaries follow later.

Jim Blue opened the workshop with brief introductory remarks. He emphasized that the purpose of doing modeling and simulation is to gain understanding and insight. The three benefits are that modeling and simulation can be cheaper, quicker, and better than experimentation alone. It is common now to consider computation as a third branch of science, besides theory and experiment.

Dr. Michael Teter, of Corning, Inc., and Cornell University, spoke on Numerical Modeling at Corning, from Pots to Optical Waveguides. Corning is heavily dependent on modeling and simulation in the development of new product lines. Work that Teter and co-workers have done has saved millions of dollars in annual processing costs in each of six or eight different product lines. Number-crunching power is no longer a significant impediment to the advancement of simulation. The quality of research personnel is a very important concern, and scientific insight is more valuable than proficiency with a computer. ``The real purpose of simulation is that you get enough insight that you never have to run the simulation again.''

There are problems of immense technical challenge and importance that need long-term research, longer than industry can support. NIST can serve as a repository for the expertise that is required for long-term advances across industries.

Dr. William Coughran, of AT&T Bell Laboratories, spoke on Semiconductor Device Simulation as a Scientific Computing Problem. Semiconductor simulation requires an interdisciplinary team effort of physicists, chemists, materials scientists, computer scientists, and mathematicians. It is important to solve the right problem, not just interesting problems.

The benefits of semiconductor simulation to AT&T have been (1) the ability to explore alternate and novel designs in a timely way, (2) the ability to predict difficult-to-measure quantities, (3) the ability to characterize devices without building them, and (4) the ability to optimize designs. The direct benefit to AT&T is estimated to be $10-15 million per year. For example, through the use of modeling, a particular design was made available 1.5 years earlier than it would have been with experimental development alone.

A possible role for NIST is to establish model problems. Benchmarks would be useful; companies do not want to reveal specific parameters or equations that they use in their models. NIST can be involved in the basic materials science, but not in specific manufacturing processes. The role of the National Laboratories should be to create modeling tools and to do algorithmic development.

Dr. Clarence Law, DuPont Central Research and Development, spoke on Modeling and Simulation at DuPont. R&D must be more focused to create value in the marketplace through new technologies and more efficient manufacturing. Companies that create value in the marketplace prosper and grow. They must always be creating competitive advantages.

DuPont saved almost $1 billion between 1993 and 1995 from modeling and simulation. The savings came from increased yield, reduced downtime, and lower maintenance costs; these savings are directly attributed to increased process understanding, control, and new process technologies.

Dr. Robert E. Bixby, CPLEX Optimization, Inc., and Rice University, spoke on Applications and Algorithms for Linear and Integer Programming. Bixby's company produces software that is used in other companies' software, so that the end user is entirely unaware of CPLEX's contribution.

Improved algorithms and faster computers have made linear and integer programming software far faster and far more robust than it was just a decade ago. Using this software saves users millions of dollars per year through economies such as better use of equipment and staff.

Dr. Roldan Pozo, Mathematical Software Group, NIST, spoke on Trends and Issues in High-Performance Computing. Large-scale simulation and modeling require large-scale computational resources. Parallelism is unavoidable. The cost of high-performance computing is mainly the cost of software development, not the cost of the machine. NIST is developing and distributing software libraries, as well as working on aids to software development for parallel computers.

Workshop Summary

(Note: Unless quotation marks are used, all attributions are paraphrases. Editorial notes are in italics.)

Introduction

Jim Blue, chair of the organizing committee, opened the workshop with brief introductory remarks. M & S are not simply a matter of ``Just put it on the computer,'' but cover a whole range of disciplines. Given the basic science, a mathematical model is developed and subjected to mathematical analysis. Numerical algorithms are proposed and analyzed, and a computer program is developed, partly by custom programming and partly by drawing on library programs. The resulting program is run to give a simulation, and needs to be compared with experiments to validate the whole process.

He then described some areas of mathematics and computer science used in modeling and simulation. He proposed the question, ``Why do modeling and simulation?'' The answer is: to get understanding and insight. The three benefits are that modeling and simulation can be cheaper, quicker, and better than experimentation alone. It is common now to consider computation as a third branch of science, besides theory and experiment.

``What should NIST do in Modeling and Simulation?'' was posed as the question to keep in mind throughout the workshop.

Simulation at Corning, From Pots to Optical Waveguides

Dr. Michael Teter
Corning, Inc. and Cornell University

Dr. Teter heads the modeling program at Corning, Inc., a research-based firm that built the second industrial research laboratory in the United States. Research provides critical underpinning for the development of new product lines at Corning. One important project in the last century was the development of glass that could withstand high thermal stress for use in railroad signal lights. The processing technology (Pyrex) that was developed for this purpose provided the basis for Corning's current laboratory glassware and home and business product lines. Another glass process, originally developed to make rocket nosecones, evolved into dinnerware.

Corning derives competitive advantage from a continuous stream of development of new materials and process technology. Examples of materials are listed on Dr. Teter's first foil. One material, Vycor, was developed by discovering effects of certain metal dopants. Vycor made possible the low cost production of television tubes; TV tubes were available in the early 1940s, but required optical polishing costing approximately $10,000 per tube. Glass ceramics were first developed as nonporous, high-strength, low-thermal-expansion materials suitable for missile nose cones; these later formed the basis for the firm's Corelle line of dinnerware. Some of the new materials, such as silicones and fiberglass, have had such wide-ranging uses that Corning has developed spin-off companies to exploit them, typically in the form of a joint venture with another firm.

Process technology is also of key importance. Centuries ago, glass was produced by techniques of blowing and hand craftsmanship that began to change only in the industrial revolution. Corning's development of the ribbon machine made incandescent lightbulbs objects of mass production, which was pivotal to the development of the lighting industry. Similarly, the development of precision tubing draws made possible the mass production of fluorescent lighting. Among other more recent examples, the outside flame deposition process affected the development of telescopes, and enabled optical fibers to be produced in volume. The global optical fiber network has become critical to the world's telecommunication industry; Corning is the leading producer of optical fiber, with AT&T a major competitor. Editor's Note: There are now approximately 70 million kilometers of optical fiber installed world-wide for telecommunications applications. First used for large trunk lines, fiber is migrating to the home. At present one often finds that a fiber connection ends at a hub that serves as few as 100 homes (Source: Donald Keck, Corning, Inc., talk at Optical Society of America Corporate Associates Meeting, Washington, DC, 9/28/95).

According to Teter, material and process innovations on this scale typically take 7 to 10 years to develop. Fewer than 5% of the projects that are pursued over this time scale are successful in providing a foundation for a new product line. The average Ph.D. scientist at Corning (Corning employs over 400 at a total cost of $100 million per year) will not be associated with a commercially successful project, according to the above definition, during a 35-year career.

However, a few Corning scientists have had more than ten major successes. Teter noted that a key reason these scientists are successful is that they typically have a detailed mental picture of the material with which they work. Because glass is an extraordinarily difficult material to work with, it is difficult to obtain such a high level of detail from experiments alone. For example, molten glass is highly corrosive and will eat through any known substance; indeed, the lifetimes of glass furnaces are on the order of three years. The engineering of a new production process in this environment can be extremely costly, typically hundreds of millions of dollars; Pilkington Brothers, for example, was nearly bankrupted by the costs of developing the float glass process. Diagnosis of basic physical characteristics during processing is extremely difficult because in situ instrumentation is impractical, and remote probes (e.g., spectroscopy) give only limited information on amorphous materials like glass. In short, the key reason for the multiyear development time for new materials and processes --- and for the high failure probability --- is a lack of detailed understanding of glass properties, from the macroscopic level down to the atomic level. Given the difficulty of determining these properties directly by experimental measurement, M & S become essential.

Corning's earliest approaches to modeling glass behavior, dating from the beginning of the century, had a physical basis: the use of viscous oils to simulate flow. Standard mathematical and physical techniques were applied to the analysis of oil experiments, and information derived from this analysis was used to guide process designs.

The use of digital computers for simulation began with the establishment in the early 1960s of Corning's Technical Computer Center, which Teter headed at one time. Most problems treated involved one-dimensional simulations of heat or mass flow, stress analysis, etc., and were used to solve ``custom'' problems. Codes for these simulations were seldom used for more than one application, and could take a month to complete, with most of the time spent in debugging. Teter observed that, even now, roughly 90% of the technical computing effort at the bench level goes into debugging code, rather than into writing it or running it in production mode. Many managers still do not have an adequate appreciation of how much time debugging takes.

In the 1970s simulation capability was substantially enhanced, partly because of the introduction of three-dimensional finite-element stress analysis. Before this, designing a new shape for a manufacturable TV tube would cost on the order of $250,000, and modeling capabilities played a critical role in reducing this cost. The development of realistic simulations for lightwave propagation in optical fibers made possible the design of the dispersion-free fibers required for long-range telecommunications.

A major advance in the early 1980s was the development of a three-dimensional, time-dependent, finite-element code for fluid flow that incorporated heat transfer processes. This code was written in three years by an internal company team consisting of Teter and two co-workers, and incorporated features of specific relevance to glass processing that could not be found in available commercial software packages. This code is capable of reproducing actual measurements done on a range of factory processes to an accuracy of within 5%, and is believed to have saved up to $10 million in annual processing costs in each of six or eight different product lines. One example was presented: a teapot made by the press-and-blow process. The existing process resulted in an extraneous bulge of glass in the bottom interior of the teapot. Use of the finite-element program helped redesign the process so that less material was used and processing time was cut in half.
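
Editor's Note: The equations solved by Corning's code were not given in the talk; a coupled flow/heat-transfer model of this general kind is typically built around the incompressible Navier-Stokes equations with a strongly temperature-dependent viscosity $\mu(T)$ and an energy equation, roughly

\begin{align}
  \rho\left(\partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
    &= -\nabla p + \nabla\cdot\bigl[\mu(T)\,(\nabla\mathbf{u}+\nabla\mathbf{u}^{T})\bigr] + \rho\,\mathbf{g},
    \qquad \nabla\cdot\mathbf{u} = 0, \\
  \rho c_p\left(\partial_t T + \mathbf{u}\cdot\nabla T\right)
    &= \nabla\cdot(k\,\nabla T) + Q,
\end{align}

where Q lumps radiative and other heat sources. For molten glass the enormous variation of the viscosity with temperature is what couples the flow and thermal problems so tightly.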

In the 1990s, Teter began an effort to develop methods for computing the molecular structure of glass from first-principles quantum mechanics. This was motivated by the fact that no other way is known by which detailed information on this structure can be obtained; in addition, substantial technical developments in density functional theory occurred during the 1980s, which suggested that it might soon be possible to do accurate first-principles calculations. Teter found, however, that existing theoretical methods could not adequately reproduce the known crystalline forms of glass constituents (e.g., quartz), and so began a project in collaboration with Professor Joannopoulos at MIT to develop a large-scale electronic structure code that implements density functional theory in a plane-wave basis. This project has been very successful, and has led to the commercialization of the code. In fact, as a result of this effort, Corning acquired BIOSYM Technologies, Inc., which is one of the major commercial producers of molecular modeling software. Editor's Note: On August 15, 1995, it was announced that BIOSYM Technologies, Inc., would merge with Molecular Simulations, Inc., to form a new independent company in which Corning, Inc. will hold a 55% equity stake. This development is consistent with the pattern of Corning's other spin-off enterprises. These two firms are the leading producers of molecular simulation software, with combined 1994 revenues of $45 million; there are no competitors known to us that are of comparable size. (Source: BIOSYM Technologies, Inc., press release, 8/15/95)
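
Editor's Note: As background (not stated explicitly in the talk), density functional theory in a plane-wave basis amounts to solving the Kohn-Sham equations self-consistently, with each orbital expanded in plane waves up to a kinetic-energy cutoff:

\begin{align}
  \Bigl[-\tfrac{\hbar^{2}}{2m}\nabla^{2} + v_{\mathrm{ext}}(\mathbf{r})
        + v_{H}[n](\mathbf{r}) + v_{xc}[n](\mathbf{r})\Bigr]\psi_{i}(\mathbf{r})
    &= \varepsilon_{i}\,\psi_{i}(\mathbf{r}),
    \qquad n(\mathbf{r}) = \sum_{i}^{\mathrm{occ}} |\psi_{i}(\mathbf{r})|^{2}, \\
  \psi_{i,\mathbf{k}}(\mathbf{r})
    &= \sum_{\mathbf{G}} c_{i,\mathbf{k}+\mathbf{G}}\,
       e^{\,i(\mathbf{k}+\mathbf{G})\cdot\mathbf{r}},
    \qquad \tfrac{\hbar^{2}}{2m}|\mathbf{k}+\mathbf{G}|^{2} < E_{\mathrm{cut}},
\end{align}

the heavy numerical work being fast Fourier transforms and iterative eigensolvers over the plane-wave coefficients.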

In conclusion, Teter stated that numerical simulation has saved hundreds of millions of dollars for Corning. Modeling and simulation are an indispensable part of Corning's product and process development efforts to stay competitive, especially in today's world where companies have half as much time to develop products and processes as they used to. Simulation is now applied to problems from the macroscopic scale down to the atomic scale, as needed to understand the fabrication of glass.

He noted that ``number-crunching power is no longer a significant impediment'' to the advancement of simulation, and that available RISC workstations have attained a real-world level of performance that outstrips that of the Cray and of massively parallel processors that he has used. The quality of research personnel is a very important concern, and scientific insight is more valuable than proficiency with a computer. ``The real purpose of simulation is that you get enough insight that you never have to run the simulation again.''

Great care must be taken to hire capable people who understand how to do modeling and simulation. He noted that NIST has recently made very good hires that reflected the importance of exceptional quality researchers.

Questions, Answers, and Discussion

Lyle Schwartz: I am interested in your working environment. How are you constrained by the proprietary attitude of the company? Does this cause you difficulty in sharing information?

Teter: I have an adjunct affiliation with Cornell University, and there I can interact freely with colleagues. But I am still enjoined from discussing some work done at Corning even as long as twenty years ago. NIST can help industry researchers in this matter by demonstrating forefront technical leadership and sharing results. However, duplication of effort (due to proprietary constraints) is not all bad. Government should lead the way technically and provide forums for results.

Katharine Gebbie: You seem to be doing an excellent job, and have told us that we have very good people working on the same problems. Why shouldn't they be working for you rather than us? Is there really a role for government in this area?

Teter: There are problems of immense technical challenge that need longer-term research that industry can't support. For example, our molecular modeling program was designed for the specific needs of the glass industry. It doesn't handle metals very well, but could be extended to do so, and in fact NIST is taking this project up. NIST can serve as a repository of the expertise that is required for long-term advances across industries.

Judson French: You have told us that Corning has been expanding its R&D efforts. Did they also change their focus?

Teter: The main changes were organizational. The distinction between short-term and long-term research was clarified. Long-term efforts were given bigger targets to shoot for.

The Impact of Algorithms on Semiconductor Device Simulation; or Semiconductor Device Simulation as a Scientific Computing Problem

Dr. William M. Coughran, Jr.
AT&T Bell Laboratories

The areas for which Dr. Coughran is responsible at Bell Labs include semiconductors, lasers, fiber optic processes, services, manufacturing, and optimization (applied to such areas as routing and pricing). The emphasis of his talk was on semiconductors, to demonstrate modeling and simulation at AT&T. He said very little about the means by which projects were chosen. His emphasis was on the basis for detailed modeling of the process in semiconductors, and the value thereof.

The particular issues for AT&T that he covered were: (1) process simulation of semiconductor fabrication; (2) device simulation of semiconductor operation (switching time, for example); and (3) circuit simulation of collections of devices, i.e., integrated circuits (ICs).

To begin the talk, Coughran showed a video on IC development that emphasized ``Algorithms for Computational Electronics'' and discussed the simulation process and the algorithms used in computational electronics. Semiconductor simulation requires an interdisciplinary team effort of physicists, chemists, materials scientists, computer scientists, and mathematicians. The emphasis was on the ability to compute the distribution of impurities in a semiconductor device.

The primary use of simulation in this context was to understand what happens in semiconductors in order to improve or to tailor the semiconductor behavior. It is particularly important to understand the failure modes in order to improve chip yield and to aid in designing future generations of ICs with smaller transistors.

The specific topic covered in detail was the application of simulation to device modeling. The models have to be based on true physical characteristics. The models represent complex mathematical equations in three dimensions. The modeling is made much more difficult because the equations are extremely non-linear. The equations include the Boltzmann transport equation, drift and diffusion of electrons and holes, hydrodynamic models, and energy balance.
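
Editor's Note: For reference, the drift-diffusion model mentioned above is commonly written (in standard textbook form; the specific equations and parameters used at AT&T were not given in the talk) as Poisson's equation coupled to the electron and hole continuity equations:

\begin{align}
  \nabla\cdot(\epsilon\,\nabla\psi) &= -q\,(p - n + C), \\
  \partial n/\partial t &= \tfrac{1}{q}\,\nabla\cdot\mathbf{J}_{n} - R,
     \qquad \mathbf{J}_{n} = -q\,\mu_{n}\,n\,\nabla\psi + q\,D_{n}\,\nabla n, \\
  \partial p/\partial t &= -\tfrac{1}{q}\,\nabla\cdot\mathbf{J}_{p} - R,
     \qquad \mathbf{J}_{p} = -q\,\mu_{p}\,p\,\nabla\psi - q\,D_{p}\,\nabla p,
\end{align}

where $\psi$ is the electrostatic potential, C the net doping, and R the net recombination rate. The exponential dependence of the carrier densities on $\psi$ is one source of the severe nonlinearity noted above.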

He showed the results of the simulation of a 3D model of a wafer. It appears to be a nanotechnology application in metal deposition; one of the conclusions was that the Boltzmann equation model was not sufficient and that the modelers must continually push the envelope of understanding in order to simulate these devices better. Quantum electronics effects are becoming important and this is driving their modeling enhancements. The most important results seem to be the ability to predict what semiconductors will do so as to improve them.

He then showed a brief overview of how the computing was done, going from 2D to 3D. He placed emphasis on the visualization of the results, which is useful to promote understanding by upper management of what was being done, as well as for the modelers themselves.

He emphasized that the purpose of simulation is to understand what happens in semiconductors. It is particularly important to understand failure modes. He noted that in doing device modeling -- process simulation, device simulation, and circuit simulation -- many computational scientists have wasted too much time on unimportant problems. It is important to solve the right problem, not just interesting problems. Because of the interdisciplinary nature of semiconductor simulation, the timely response to questions posed by the computational experts in the simulation group is paramount.

Flexible packages or toolboxes to solve partial differential equations (PDEs) are needed.

At this time, the best work is being done on expensive supercomputers; demonstration of parallel computers on real problems would be welcomed.

The benefits of semiconductor simulation to AT&T have been (1) the ability to explore alternate and novel designs in a timely way, (2) the ability to predict difficult-to-measure quantities, (3) the ability to characterize devices without building them, and (4) the ability to optimize designs. The direct benefit to AT&T is estimated to be $10-15 million per year.

Questions, Answers, and Discussion

French: What might NIST contribute to this process?

Coughran: NIST should establish model problems, much as is done in video compression, encryption research, and so on. Benchmarks would be useful; companies do not want to reveal specific parameters or equations that they use in their models.

Hratch Semerjian: How is it possible to know how accurate the models are? Can they predict sensible results (a curve of current vs. voltage, for example)?

Coughran: We do a lot of measurements. It is harder to measure in the process modeling area, but we try to characterize the parameters as best we can.

Gebbie: It appears that industry is doing an excellent job in handling the modeling needs, so what really is the role for government?

Coughran: A possible role for government is developing basic materials properties and providing benchmarking of codes. There needs to be more coupling between government and industry. (Editor's Note: industrial fellowships come to mind.) A consortium working on precompetitive issues could be facilitated by government participation.

Teter: Methods in PDEs still need some new work.

Coughran: People working on methods must work with people who are actually working on solving the problem.

Gebbie: Why are NIST scientists any better suited than industry scientists to work on the PDE methods?

Teter: We now have a good CRADA with NIST, producing results.

(Someone): What percentage of this modeling is done on supercomputers?

Coughran: We are now using supercomputers exclusively. We want to go to clusters of workstations. We estimate that it would be 15 times more cost-effective.

Teter: Computing is heading towards workstations and clusters of workstations. The supercomputers and mainframes are dinosaurs.

Schwartz: What is the right amount of effort to put into modeling? How much computing is good?

Coughran: The quality of results is important, not the amount of computing. We maintain metrics of our work with the business units. For example, we showed that through the use of modeling, a particular design was made available 1.5 years earlier than it would have been with just experimentation. The use of anecdotal justification is becoming more acceptable.

Teter: You have to pay close attention to quality --- get the best modelers, not the most.

(Someone): Is the specific computer language important?

Coughran: Not really.

Semerjian: What about NIST working in the area of chemical processes --- are they so specialized that NIST can't contribute?

Coughran: NIST can be involved in the basic materials science, but not in specific manufacturing processes. The role of the National Laboratories should be to create modeling tools and do algorithmic development.

Editor's Note: a quote which came up in the general discussion, probably from Teter: ``Putting good people on good projects makes sense; putting bad people on good projects might kill the golden goose; putting good people on bad projects is awful since they will make it almost work; putting bad people on bad projects is good in that the bad projects die quickly.''

Modeling and Simulation at DuPont

Dr. Clarence G. Law
DuPont Central Research and Development

Dr. Law spoke on the role that modeling and simulation play in the chemical industry. He described DuPont as a global, technology-driven manufacturer of chemicals, materials, and energy, with approximately 20 business units. He pointed out that DuPont's growth had stagnated and that R & D expenditures as a percentage of sales have been declining over the last 10 years. The implication is that R & D must be more focused to create value in the marketplace through new technologies and more efficient manufacturing. Companies that create value in the marketplace prosper and grow; they must always be creating competitive advantages. A key core competency at DuPont is Process Science & Engineering.

Law laid out the strategy that DuPont is implementing. Value is created by breakthrough technology, which is developed further through a better understanding of the underlying fundamentals. This knowledge base is then captured within a manufacturing process model, which aids in the development of pilot facilities. The pilot facilities provide data to make the model more robust and to provide validation to the process model. It is only after successful implementation in a pilot facility that the organization can begin to capture the value that has been created. To this end, the strategy is to design and build cost-effective manufacturing facilities, implement sustainable real-time process-control strategies, and ensure efficient and cost-effective business strategies by optimization of supply chains.

Breakthrough technology can come from experiment or from new molecular modeling methods that have the potential to provide rapid screening of potential processes. More likely, however, simulation will play its greatest role in the second stage, providing a fundamental understanding of the breakthrough technology. Capturing this fundamental understanding in models provides the connection between an actual operating device and fundamental mechanistic information, and allows a more realistic assessment of process viability. Such models can be designed to simulate not only steady-state but also dynamic behavior. Models provide the basis for extending the operating parameter space from pilot scale to plant scale, and have a significant impact both on the decision to build the full-scale plant and on the nature of the ultimate design. Finally, robust models are used for process control, providing cost savings in plant operation and in operations optimization. The scope of models is quite broad; models bear on chemical processes, reaction engineering, and physical processes (e.g., non-Newtonian rheology), all the way to the properties and behavior of the final product (strength, corrosion resistance, etc.). Applications also include modeling the environmental impact of processes and optimizing operations.

Law pointed out that DuPont has quantitative data on the impact this strategy has already yielded in terms of cost savings. The data presented indicate that, between 1993 and 1995, cost savings of almost $1 billion were achieved through increased yield, reduced downtime, and lower maintenance cost; the savings are directly attributed to increased process understanding, control, and new process technologies. The impact of M & S will be greatly enhanced with the development of more robust and trustworthy simulation packages. Many of the simulation models used at DuPont are produced by software vendors. These models often have limited capabilities because of the lack of fundamental data incorporated with the models. In many cases, issues related to model validation need to be more carefully addressed.

Questions, Answers, and Discussion

(Someone): (Question about the cost savings)

Law: We have to demonstrate real value added in order to justify the modeling and simulation work.

Schwartz: How is the modeling distributed throughout the organization?

Law: Of the $1 billion in research, central modeling accounts for approximately 25%, with the rest in the business units. The business units tend to have more short-term projects, whereas the central R & D group tends to look at long-term issues. Very often the development of new tools is required.

Semerjian: Where do the cost numbers for savings come from?

Law: The business units make the estimates.

Semerjian: Is the focus at DuPont moving from product to process research?

Law: Yes. [In response to a question:] M & S experts work as partners with physical scientists on project teams.

Teter: Cutbacks in research support are largely a domestic phenomenon. Our competition is international, and we don't see foreign companies making reductions. Are your international competitors scaling back on research?

Law: Yes, they are, to some extent. Our chief competitors, the German chemical companies, have also restructured and cut back on R & D.

Howard Bloom: Are the different modeling areas integrated, and if so, how easy is it to exchange the data? Could NIST help in this area?

Law: This is not really a problem area. However, visualization is an area in which NIST could usefully contribute.

(Someone): Are there any areas where models needed to be integrated?

Law: We have a consortium set up with a few universities that maintain a platform using C++ where this integration and appropriate tool development could be accomplished. We are even looking to have our foreign competitors involved with the platform.

Applications and Algorithms for Linear and Integer Programming

Robert E. Bixby
CPLEX Optimization, Inc., and Rice University

Dr. Bixby started his business because the funding agencies thought that linear programming (LP) was a dead field; it has proved otherwise. His company produces software for linear and integer programming; they do not do any modeling themselves. Typically, their software modules are used in other companies' software, so that the end user is entirely unaware of CPLEX's contribution.

Bixby discussed recent advances in algorithms for LP, including cases in which solutions are restricted to be integers. LP has long been recognized as important because it provides solutions to a variety of optimization problems of economic significance, including routing and scheduling tasks common to many industries.

He started out with the definition of LP, in which a linear function is to be maximized subject to equality and inequality constraints. In LP, large problems (thousands of equations and variables) are common. The matrices associated with the constraints are very sparse, allowing for flexibility in optimization. Integer Programming (IP) is LP in which some or all of the variables can take on only integer values. Most (70%) of the industrial problems that he sees are actually IP rather than LP; for example, an integer (0/1) variable might represent building or not building a new plant.
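
Editor's Note: In standard textbook notation (not taken from Bixby's slides), the problems have the form

\begin{align}
  \text{(LP)}\quad & \max_{x}\; c^{T}x \quad\text{subject to}\quad Ax \le b,\;\; x \ge 0, \\
  \text{(IP)}\quad & \text{the same, with } x_{j} \in \mathbb{Z}
      \text{ (often } x_{j} \in \{0,1\}\text{) for some or all } j,
\end{align}

where A is the large, very sparse constraint matrix referred to above.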

He defined the steps in applying LP/IP technology: formulate a model, make it accessible to an optimization program, solve it, and implement the resulting solution. The talk stressed the formulation and solving steps. In the past, most of the effort had gone into the model, and little was accomplished in solving the problems.

Bixby then gave a machine comparison on solving a sample problem. The (now-ancient) SUN 3/150 took 44,064 seconds, while the (new) SGI R8000 Power Challenge took 41 seconds. This speed-up by a factor of roughly 1000 comes partly from faster computers and partly from improved algorithms.

He then described some of the algorithms being used. In the Simplex algorithms, introduced by Dantzig in 1947, successive trial solutions move among vertices of the feasible polyhedron. The first reasonable codes were available in 1965. The first good codes were created in 1975; they were written in machine code and ran only on expensive mainframe computers. From 1975 to 1985 little happened, but there have been many advances since 1985. The advances came from better understanding of data structures and from the new workstations.

He described eight sample problems, with 8,300 to 43,000 equations in 15,000 to 107,000 variables, which he ran on a SPARCstation 10/41 for timing comparisons. With 1988 code it took a total of 5.2 days, and two problems could not be solved. With today's Simplex code it took a total of 5.2 hours, and all problems were solved.

He then described barrier (interior) algorithms, in which successive trial solutions lie in the interior of the feasible region rather than at its vertices. He attributed early work to S. Cook and J. Edmonds in 1970, with some of it done at NBS. In 1984 a projective interior algorithm was developed by Karmarkar (Bell Labs) and has proved to be very efficient. Running an interior-method code on the eight sample problems on the four-processor SGI R8000 took only 14.5 minutes.
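
Editor's Note: A common textbook statement of the barrier idea (a sketch, not the specific CPLEX implementation) replaces the nonnegativity constraints by a logarithmic penalty whose weight $\mu$ is driven to zero:

\begin{align}
  \max_{x}\;\; c^{T}x + \mu \sum_{j} \ln x_{j}
  \quad\text{subject to}\quad Ax = b, \qquad \mu \downarrow 0,
\end{align}

so that the iterates stay strictly inside the feasible region and approach an optimal vertex only in the limit.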

In general, though, some problems are easier with the Simplex method, and some with Interior methods.

Bixby then surveyed several applications, such as plastic limit analysis, airline crew scheduling (Delta found that they could eliminate 107 pilot positions), fleet assignment (Continental found that they had three extra airplanes), manufacturing at Harris Semiconductor (on-time performance went from 74% to 95%), and mail handling at the US Post Office (an analysis of late mail revealed roughly 300,000 more employees than needed). In many of these applications, large LP/IP problems need to be solved on a weekly basis. Such examples provide convincing evidence of the benefits of modeling and simulation. In these cases the number of possibilities is so vast that finding a near-optimum solution is beyond unaided human ability.

In conclusion, you don't need to write the optimization code yourself, but you do need to develop the model. IP solvers can still be vastly improved. LP solvers can exploit network structure. There is a need to integrate modeling languages, databases, and solvers.

Questions, Answers, and Discussion

Shukri Wakid: How are problems partitioned for the 16-processor computer?

Bixby: The parallelism is general purpose.

(Someone): What about airline utilization of modeling?

Bixby: The airlines are handling scheduling now using models on computers whereas before they were doing it by hand. They are also working on more flexible models, such as allowing one route to be served by different airplanes on different days.

Richard Wright: How about the difficulties associated with linearizing nonlinear problems?

Bixby: The answer is that in many cases you need to use nonlinear techniques or at least use sequential LP.

Trends and Issues in High-Performance Computing

Roldan Pozo
Mathematical Software Group, NIST

Dr. Pozo stated that large-scale simulation and modeling require large-scale computational resources. There are three ways to improve performance: faster processors, better algorithms, and exploiting parallelism. He emphasized the use of parallelism, not because parallelism is inherently good, but because it is unavoidable.

The spectrum of systems includes RISC workstations (IBM, DEC, Sun, Intel, HP), vector processors (Cray, Fujitsu, NEC), shared memory (SGI, Convex, Alliant), virtual shared memory (KSR, Myrias), distributed memory (IBM SP2, Cray T3D, Intel Paragon), and cluster computing (any Unix LAN). In order to implement clustering, one needs to use technology like asynchronous transfer mode (ATM) rather than ethernet.

The truth about parallel computing is that the cost of high performance computing is mainly the cost of software development rather than the cost of the machine. Developing parallel applications is hard because of: (1) integration of libraries and applications, (2) data management, and (3) strategy for data layout and/or transformations.

The current situation has applications typically developed (1) from scratch, (2) using a simple SPMD (single program, multiple data, i.e., data decomposition) model, (3) in Fortran or C, and (4) with explicit message passing primitives. Similar components are coded over and over.
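
Editor's Note: As an illustration of what ``explicit message passing'' in an SPMD code looks like, the following minimal C sketch uses the MPI library (which was just emerging at the time of the workshop); it is illustrative only and is not code discussed at the workshop. Every process runs the same program on its own slice of the data, and the partial results are combined by an explicit communication call.

/* Minimal SPMD sketch with explicit message passing (MPI).
 * Illustrative only; not code presented at the workshop.
 * Each process sums its own slice of a distributed array, then the
 * partial sums are combined with a collective reduction. */
#include <stdio.h>
#include <mpi.h>

#define N_LOCAL 1000   /* size of each process's slice (assumed) */

int main(int argc, char *argv[])
{
    int rank, size, i;
    double local[N_LOCAL], local_sum = 0.0, global_sum = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes? */

    /* Every process executes the same program (SPMD) but fills and
     * sums only its own portion of the data. */
    for (i = 0; i < N_LOCAL; i++) {
        local[i] = (double)(rank * N_LOCAL + i);
        local_sum += local[i];
    }

    /* Explicit message passing: combine the partial sums on rank 0. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
               MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum over %d processes = %g\n", size, global_sum);

    MPI_Finalize();
    return 0;
}

The point of the software hierarchy described next is to hide this kind of low-level detail inside reusable libraries instead of recoding it for every application.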

How is NIST looking at the problem? A software hierarchy is necessary: (1) user application, (2) application libraries, (3) distributed objects, (4) computation primitives, and (5) optimized kernels or message passing primitives.

A workshop was recently held to explore whether NIST was on the right track in parallel software development. Pozo learned about the dichotomy in parallel software development between industry and the academic research communities. Industry looks at tough problems, and academia at hard solutions. Industry has limited resources for software development, and academia has an army of students, post-docs, and researchers to throw at a problem. Industry needs practical solutions to real problems, and academia wants exotic solutions to model problems. Industry looks for portability and stability; academia has a short life-cycle, and focuses on tweakability and experimentation.

NIST has many opportunities because of (1) its expertise in mathematical modeling and numerical software, (2) its expertise in parallel library design, (3) its experience in software and performance analysis, (4) its unique relationship with industry and high-technology companies, and (5) the availability of computational platforms.

In conclusion, parallelism is not the goal. Developing parallel applications is hard. The main problem is software, not hardware. Exotic languages, environments, and tools have shown little success. Finally we must focus on long-term software solutions for real problems.

Questions, Answers, and Discussion

(There were comments about the need and availability of compilers that can handle parallelism.)

Teter: Given the choice between performance and programming ease, Corning has sacrificed some performance so that there is an effective interface to the programmer.

Bixby: We have great experience in working with the SGI shared memory computer where we feel that we are getting both performance and programming ease.

(Someone): Is fault tolerance important for parallel computers? That is, what happens when one of the machines fails?

Pozo: The answer is that this is a difficult issue. There are some massively parallel machines that have some fault tolerant capabilities.

Open Discussion

Jim Blue presided and asked to hear about issues that went across the speakers' topics.

(Someone): Is Fortran still useful?

Teter: There will always be a scientific language; it will always be called Fortran, and it will have the latest characteristics of other languages.

(Someone): Is the Fortran 90 standard generally accepted?

Teter: Fortran 77 is still dominant because of the tremendous backlog of codes already in place.

Bixby: The new people coming out of universities are thinking about ``objects'' and feel comfortable in the C language.

Wakid: Is there a need to standardize the interface between compilers for parallel computing to allow for porting between computers?

Bixby: We do not provide our software for a variety of computers. Instead we implement the Simplex code on a distributed memory system. It should be possible to port it at a later time.

Teter: In one example at Corning, it took 1.5 years to port code from one computer to another and only two weeks to run it. For this reason, I am not excited about parallelism. I believe that symmetric multiprocessing is the better approach.

Schwartz: What does an organization lose in a transformation away from a central computing facility?

Teter: We have gone through this exercise. All the scientists at Corning have their own workstations. The only issue is to have sufficient standardization so that help can be obtained from experts from other organizations. The biggest problems are maintenance and system administration.

Charles Clark: There is a large human effort to maintain a Unix workstation for the typical scientist.

(Someone): What about backup?

Law: At DuPont the workstations on the network are centrally backed up.

Schwartz: Is there some organizational environment where these issues are being tackled? Is there some way that NIST could get involved in solving this?

Teter: Ultimately the corporation wants to view maintenance as a service that can be contracted out.

Semerjian: Does one lose capability in losing the central site?

Teter: The units believe that they can do it on their own. It works fine until something breaks and then they need an expert.

Semerjian: Are there any problems we can't solve by going to workstations?

Teter: I believe that there are no such problems.

Walter Jones: How does one sell the idea of a new process for a product?

Law: You might want to model what is happening in the lab. You would go to your manager and get the time to work with the researcher to model the process. However, the reality is that no one has time to respond to most of the requests to start modeling projects.

Clark: To rephrase Jones's question, is there a different approach in selling modeling research from other types of research?

Law: We do not have a problem because, at DuPont, the value of modeling is well known. In addition, in general the modeling effort is one piece of a larger effort so that the value added is understood.

Teter: What does it take to justify a major new modeling effort in a new process area such as biological processing?

Law: To reiterate, the key thing is ``added value.''

(Someone, who does molecular modeling): What can NIST do to help validate modeling methods by maintaining databases?

Teter: There is a danger in becoming a consumer product service. There could be a role in performing tests and publishing the results.

Wakid: How does DuPont get people to work together on modeling (the modeler and the scientist customer)?

Law: There is no clear cut answer. You have to keep them communicating with each other. There is a different vocabulary for each of the different disciplines. The modeler must learn the terminology of the customer.

Wakid: Can the World Wide Web be used as a tool for someone who needs to learn about modeling tools?

Paul Boggs: The person could get a tool, but it most likely won't be the right one based on the specific problem.

Blue: What might be a new initiative for NIST in the area of M & S?

Law: Things that are not in the secrecy or competitiveness area would be useful, such as visualization or user interfaces.

Teter: NIST could take responsibility for numerical code recipes. The SLATEC library, developed by the national laboratories and NIST, exists and is free.

Boggs: A book of algorithms would not be too useful because of the tremendous variation in the problems, and the harder the problem, the more chance that there is not a standard algorithm solution.

Teter: I would like to see a national collection of utilities, each with a short explanation, available free. The whole field of numerical technology needs a few good textbooks that would indicate the way to solve sets of problems and provide standard solutions.

Boggs: The field is progressing in this area. There are collections of standard algorithms for solving simple problems that are available now.

Teter: There is a need for standards and knowledge in this area.



James L Blue
Wed Jan 24 10:56:21 EST 1996