
Dissipative Particle Dynamics



Understanding the flow properties of complex fluids like suspensions (e.g., colloids, ceramic slurries, and concrete) is of practical importance and presents a significant theoretical challenge. The computational modeling of such systems is also challenging because of the difficulty of tracking boundaries between different fluid/fluid and fluid/solid phases. Recently, a new computational method called Dissipative Particle Dynamics (DPD) has been introduced which has several advantages over traditional computational dynamics methods while naturally accommodating such boundary conditions. DPD resembles Molecular Dynamics (MD) in that the particles move according to Newton's laws, but in DPD the interparticle interactions are chosen to allow much larger time steps. This allows the study of physical behavior on time scales many orders of magnitude greater than is possible with MD. The original DPD algorithm used an Euler algorithm for updating the positions of the free particles (which represent "lumps" of fluid) and a leapfrog algorithm for updating the positions of the solid inclusions. The NIST algorithm QDPD, for quaternion-based dissipative particle dynamics, is a modification of the DPD algorithm that uses a velocity Verlet algorithm to update the positions of both the free particles and the solid inclusions. In addition, the solid inclusion motion is determined from the quaternion-based scheme of Omelyan (hence the Q in QDPD).
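As a rough illustration of the integrator, the following is a minimal velocity Verlet sketch for a DPD-style particle update. The data layout, the names (Particle, compute_forces, dt), and the stubbed force routine are illustrative assumptions, not the QDPD source.

/* Minimal velocity Verlet sketch for a DPD-style particle update.
   Illustrative only: names and data layout are assumptions, not the QDPD source. */
#include <stdio.h>
#include <stddef.h>

typedef struct { double x[3], v[3], f[3], mass; } Particle;

/* Stub: a real DPD code would evaluate the conservative, dissipative,
   and random pair forces here. */
static void compute_forces(Particle *p, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (int d = 0; d < 3; d++)
            p[i].f[d] = 0.0;
}

static void velocity_verlet_step(Particle *p, size_t n, double dt)
{
    for (size_t i = 0; i < n; i++)
        for (int d = 0; d < 3; d++) {
            p[i].v[d] += 0.5 * dt * p[i].f[d] / p[i].mass;  /* first half kick */
            p[i].x[d] += dt * p[i].v[d];                    /* drift */
        }

    compute_forces(p, n);                                   /* forces at the new positions */

    for (size_t i = 0; i < n; i++)
        for (int d = 0; d < 3; d++)
            p[i].v[d] += 0.5 * dt * p[i].f[d] / p[i].mass;  /* second half kick */
}

int main(void)
{
    Particle p = { {0, 0, 0}, {1, 0, 0}, {0, 0, 0}, 1.0 };
    velocity_verlet_step(&p, 1, 0.01);
    printf("x = %g\n", p.x[0]);   /* 0.01 after one step with zero force */
    return 0;
}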


The computational modeling of the flow properties of complex fluids like suspensions has been a great challenge because of the difficulty of tracking boundaries between different fluid/fluid and fluid/solid phases. QDPD naturally accommodates these boundary conditions. In its present form, QDPD is being used to study the steady-shear viscosity of a suspension of solid inclusions, such as ellipsoids, in a Newtonian fluid (one whose viscous stress is proportional to the local rate of strain).


The quaternion-based dissipative particle dynamics (QDPD) method has been parallelized.

* Why Parallelize Dissipative Particle Dynamics?

The emergence and widespread adoption of the single program, multiple data (SPMD) programming model, together with the standardization of parallel communications libraries in the 1990s, has increased the use of parallel computers and held out the promise of the very highest performance for scientific computing. A particularly significant advance has been the Message Passing Interface (MPI) standard: a program can now be both parallel and sufficiently independent of architectural details to be portable to a wide range of parallel environments, including shared-memory and distributed-memory multiprocessors, networks of workstations, and distributed cluster computers. In the case of the computational modeling of the flow properties of complex fluids, realistic simulations require many particles and hence large memory and long computation times. Parallel computing has allowed us to systematically explore regions of parameter space (e.g., different solid fractions and broader particle size and shape distributions) that would be prohibitive on single-processor computers.

* How is the Parallelization Realized?

Two parallel versions have been created using MPI: a shared-memory version and a distributed-memory version. The shared-memory version uses a replicated-data approach: every processor has a complete copy of all the arrays containing dynamical variables for every particle, and the computation of forces is distributed over processors on the basis of cell indices. This is a very efficient way of implementing parallelism, since the forces must be summed over processors only once per timestep, minimizing interprocessor communication costs. On shared-memory machines like the SGI Origin 2000, this approach is very attractive, since all processors can share the arrays containing the dynamical variables.
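In MPI terms, that once-per-timestep force summation amounts to a global reduction over the partial force arrays. The following is a minimal sketch under assumed names (N, fx, fy, fz); it is not the QDPD source.

/* Replicated-data force sum: every rank holds arrays for all N particles but
   computes forces only for its assigned cells; one collective sum per timestep
   leaves the complete force arrays on every rank.
   N, fx, fy, fz are illustrative names, not taken from QDPD. */
#include <mpi.h>

#define N 10000                      /* example problem size */

static double fx[N], fy[N], fz[N];   /* partial forces computed by this rank */

static void sum_forces_over_ranks(void)
{
    MPI_Allreduce(MPI_IN_PLACE, fx, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(MPI_IN_PLACE, fy, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(MPI_IN_PLACE, fz, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* ... compute this rank's partial forces for its assigned cells ... */
    sum_forces_over_ranks();
    MPI_Finalize();
    return 0;
}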

On the other hand, the replicated-data approach has turned out to be almost unusable on distributed-memory systems, including those with high-speed interconnects like the IBM SP2/SP3 systems. Since the QDPD sequential code uses a link-cell algorithm which breaks the simulation space into domains, it is natural to map this geometrical, or domain, decomposition onto separate processors; doing so is the essence of the parallel link-cell algorithm we have implemented. Our implementation has some fairly novel features arising from the DPD formalism (which forces some tricky bookkeeping to satisfy Newton's third law), the use of ellipsoids spread out across processors, and the requirement of a sheared boundary condition (which at times causes particles to move across more than one processor).

Our spatial decomposition program follows this standard picture, with two additions. First, following Plimpton, we distinguish between "owned" particles and "other" particles, the latter being particles on neighboring processors that are part of the extended volume of a given processor. For "other" particles, only the information needed to calculate forces is communicated to neighboring processors. Second, since the QDPD technique is being applied to suspensions, there are two types of particles: "free" particles and particles belonging to ellipsoids. A novel feature of this work is that we explicitly do not keep all particles belonging to the same ellipsoid on the same processor. Since the largest ellipsoid that might be built can contain as much as 50 percent of all particles, doing so would be difficult if not impossible without serious load-balancing implications. Instead, each particle is assigned a unique particle number when it is read in, and every processor holds the list of ellipsoid definitions, each a list of these unique particle numbers. Each processor computes solid inclusion properties from the particles it "owns", and these properties are globally summed over all processors so that every processor has the same solid inclusion properties. Since there are only a small number of ellipsoids (relative to the number of particles), the amount of communication needed for the global sums is small, and the extra memory required is also relatively small; hence it is an effective technique.
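The per-ellipsoid global sums described above can be sketched as follows. The structures, field names, and the single Allreduce of force/torque totals are an illustrative reading of the description, not the QDPD source.

/* Sketch of the per-ellipsoid global sums in the spatial decomposition:
   each rank accumulates force and torque contributions only from particles
   it owns, then one small Allreduce gives every rank identical totals.
   All names, sizes, and fields here are illustrative assumptions. */
#include <mpi.h>
#include <string.h>

#define N_ELLIPSOIDS 4
#define N_OWNED      1000   /* particles owned by this rank (example size) */

typedef struct { double force[3], torque[3]; } RigidSum;   /* 6 doubles, no padding */

static RigidSum sums[N_ELLIPSOIDS];
static int      owner[N_OWNED];      /* ellipsoid index, or -1 for a free particle */
static double   f[N_OWNED][3];       /* force on each owned particle */
static double   r_rel[N_OWNED][3];   /* position relative to its ellipsoid's center */

static void sum_ellipsoid_properties(void)
{
    memset(sums, 0, sizeof sums);

    /* Accumulate over owned particles only; "other" (ghost) particles are skipped. */
    for (int i = 0; i < N_OWNED; i++) {
        int e = owner[i];
        if (e < 0) continue;                        /* free fluid particle */
        for (int d = 0; d < 3; d++)
            sums[e].force[d] += f[i][d];
        sums[e].torque[0] += r_rel[i][1]*f[i][2] - r_rel[i][2]*f[i][1];
        sums[e].torque[1] += r_rel[i][2]*f[i][0] - r_rel[i][0]*f[i][2];
        sums[e].torque[2] += r_rel[i][0]*f[i][1] - r_rel[i][1]*f[i][0];
    }

    /* The number of ellipsoids is small, so this global sum is cheap. */
    MPI_Allreduce(MPI_IN_PLACE, sums, 6 * N_ELLIPSOIDS,
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    for (int i = 0; i < N_OWNED; i++) owner[i] = -1;   /* all free in this toy setup */
    sum_ellipsoid_properties();
    MPI_Finalize();
    return 0;
}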

* What is the Performance of the Parallel Code?

The replicated data approach has worked well for small to medium sized problems (tens of thousands of particles) on shared-memory SGIs. We have found speedups of as much as 17.5 on 24 processors of a 32-processor SGI Origin 2000. Using three such systems, we were able to get a year's worth of conventional computing done in a week.

For distributed memory systems, our spatial (domain) decomposition technique has proven to be effective. A parallel speedup of 24.19 was obtained for a benchmark calculation on 27 processors of an IBM SP3 cluster. Current results show a speedup of a factor of 22.5 on 27 200 MHz Power3 processors of an IBM SP2/SP3 distributed-memory system. The same technique is also very effective in a shared-memory environment, where the speedups are a factor of 29 on 32 processors of an SGI Origin 3000 system and a factor of 50 on 64 processors.
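Read as parallel efficiency (speedup divided by processor count), these figures correspond to roughly 24.19/27 ≈ 0.90 for the SP3 benchmark, 22.5/27 ≈ 0.83 for the current Power3 runs, and 29/32 ≈ 0.91 and 50/64 ≈ 0.78 on the Origin 3000.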



* Modeling the Flow of Suspensions in High Performance Concrete



* Papers/Presentations
* Edward Garboczi, Jeffrey Bullard, Nicos Martys, and Judith Terrill, The Virtual Cement and Concrete Testing Laboratory: Performance Prediction, Sustainability, and the CSHub, in NRMCA Concrete Sustainability Conference, Tempe, AZ, April 13-15, 2010.
* Judith Terrill, W. L. George, Terrence J. Griffin, John Hagedorn, John T. Kelso, Marc Olano, Adele Peskin, S. Satterfield, James S. Sims, J. W. Bullard, Joy Dunkers, N. S. Martys, Agnes O'Gallagher, and Gillian Haemer, Extending Measurement Science to Interactive Visualization Environments, in Trends in Interactive Visualization: A State-of-the-Art Survey, Elena Zudilova-Seinstra, Tony Adriaansen, and Robert Van Liere (Eds.), Springer, U.K., 2009, pp. 207-302.
* Nicos Martys, Didier Lootens, W. L. George, and Pascal Hebraud, Contact and stress anisotropies in start-up flow of colloidal suspensions, Physical Review E, 80, 031401 (2009).
* James S. Sims and Nicos S. Martys, Simulation of Sheared Suspensions with a Parallel Implementation of QDPD, Journal of Research of the National Institute of Standards and Technology, 109 (2), pp. 267-277, 2004.
* N. S. Martys, D. Lootens, W. L. George, S. Satterfield, and P. Hebraud, Spatial-Temporal Correlations in Concentrated Suspensions, in 15th International Congress on Rheology, Monterey, CA, August 3-8, 2008.
* N. S. Martys, D. Lootens, W. L. George, S. Satterfield, and P. Hebraud, Stress Chains Formation Under Shear of Concentrated Suspension, in 15th International Congress on Rheology, Monterey, CA, August 3-8, 2008.
* N. S. Martys, C. F. Ferraris, V. Gupta, J. H. Cheung, J. G. Hagedorn, A. P. Peskin, and E. J. Garboczi, Computational Model Predictions of Suspension Rheology: Comparison to Experiment, in 12th International Conference on the Chemistry of Cement, Montreal, Canada, July 8-13, 2007.
* Nicos S. Martys and James S. Sims, Modeling the Rheological Properties of Concrete, delivered at the Virtual Cement and Concrete Testing Laboratory Meeting, Gaithersburg, MD, June 8, 2000.
* James S. Sims, William L. George, Steven G. Satterfield, Howard K. Hung, John G. Hagedorn, Peter M. Ketcham, Terence J. Griffin, Stanley A. Hagstrom, Julien C. Franiatte, Garnett W. Bryant, W. Jaskolski, Nicos S. Martys, Charles E. Bouldin, Vernon Simmons, Olivier P. Nicolas, James A. Warren, Barbara A. am Ende, John E. Koontz, B. James Filla, Vital G. Pourprix, Stefanie R. Copley, Robert B. Bohn, Adele P. Peskin, Yolanda M. Parker, and Judith E. Devaney, Accelerating Scientific Discovery Through Computation and Visualization II, Journal of Research of the National Institute of Standards and Technology, 107 (3), May-June 2002, pp. 223-245.
* James S. Sims, John G. Hagedorn, Peter M. Ketcham, Steven G. Satterfield, Terence J. Griffin, William L. George, Howland A. Fowler, Barbara A. am Ende, Howard K. Hung, Robert B. Bohn, John E. Koontz, Nicos S. Martys, Charles E. Bouldin, James A. Warren, David L. Feder, Charles W. Clark, B. James Filla, and Judith E. Devaney, Accelerating Scientific Discovery Through Computation and Visualization, Journal of Research of the National Institute of Standards and Technology, 105 (6), November-December 2000, pp. 875-894.


* Parallel Algorithms and Implementation: William L. George, Julien Lancien, James S. Sims
* Collaborating Scientist: N. (Nick) Martys
* Visualization: Marc Olano, Steven G. Satterfield
* Group Leader: Judith E. Terrill

