
Lattice Boltzmann Methods



The lattice Boltzmann method is a powerful technique for the computational modeling of a wide variety of complex fluid flow problems, including single and multiphase flow in complex geometries. It is a discrete computational method based on the Boltzmann equation. It considers a typical volume element of fluid to be composed of a collection of particles that are represented by a particle velocity distribution function for each fluid component at each grid point. Time proceeds in discrete steps, and the fluid particles can collide with each other as they move, possibly under applied forces. The rules governing the collisions are designed so that the time-averaged motion of the particles is consistent with the Navier-Stokes equation.


This method naturally accommodates a variety of boundary conditions such as the pressure drop across the interface between two fluids and wetting effects at a fluid-solid interface. It is an approach that bridges microscopic phenomena with the continuum macroscopic equations. Further, it can model the time evolution of systems.
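
To make the collide-and-stream update concrete, the following is a minimal sketch of a single-component, two-dimensional (D2Q9) lattice Boltzmann iteration with a BGK single-relaxation-time collision and periodic boundaries. The grid size, relaxation time, and data layout are illustrative assumptions; this is not the NIST code, which is three-dimensional and handles multiple fluid components in complex geometries.

/*
 * Minimal single-component, two-dimensional (D2Q9) lattice Boltzmann
 * sketch with a BGK single-relaxation-time collision and periodic
 * boundaries.  Grid size, relaxation time, and layout are illustrative
 * assumptions only.
 */
#include <stdio.h>

#define NX  32
#define NY  32
#define Q    9
#define TAU  1.0                 /* BGK relaxation time (assumed) */

static const int    ex[Q] = { 0, 1, 0,-1, 0, 1,-1,-1, 1 };
static const int    ey[Q] = { 0, 0, 1, 0,-1, 1, 1,-1,-1 };
static const double w[Q]  = { 4.0/9.0,
                              1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0,
                              1.0/36.0,1.0/36.0,1.0/36.0,1.0/36.0 };

static double f [NX][NY][Q];     /* particle velocity distribution functions */
static double ft[NX][NY][Q];     /* post-streaming temporary copy            */

/* One iteration: relax each site toward local equilibrium, then stream. */
static void lb_step(void)
{
    for (int x = 0; x < NX; x++)
        for (int y = 0; y < NY; y++) {
            /* macroscopic density and velocity at this site */
            double rho = 0.0, ux = 0.0, uy = 0.0;
            for (int i = 0; i < Q; i++) {
                rho += f[x][y][i];
                ux  += f[x][y][i] * ex[i];
                uy  += f[x][y][i] * ey[i];
            }
            ux /= rho;
            uy /= rho;

            for (int i = 0; i < Q; i++) {
                /* second-order equilibrium distribution */
                double eu  = ex[i]*ux + ey[i]*uy;
                double feq = w[i]*rho*(1.0 + 3.0*eu + 4.5*eu*eu
                                           - 1.5*(ux*ux + uy*uy));
                /* BGK collision, then stream to the neighboring site */
                double fpost = f[x][y][i] - (f[x][y][i] - feq)/TAU;
                int xn = (x + ex[i] + NX) % NX;      /* periodic wrap */
                int yn = (y + ey[i] + NY) % NY;
                ft[xn][yn][i] = fpost;
            }
        }

    /* adopt the streamed populations for the next iteration */
    for (int x = 0; x < NX; x++)
        for (int y = 0; y < NY; y++)
            for (int i = 0; i < Q; i++)
                f[x][y][i] = ft[x][y][i];
}

int main(void)
{
    /* start from a uniform fluid at rest (density 1, zero velocity) */
    for (int x = 0; x < NX; x++)
        for (int y = 0; y < NY; y++)
            for (int i = 0; i < Q; i++)
                f[x][y][i] = w[i];

    for (int t = 0; t < 100; t++)
        lb_step();

    printf("completed 100 iterations\n");
    return 0;
}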


The Lattice Boltzmann Method has been parallelized.

Why Parallelize the Lattice Boltzmann Method?

The Lattice Boltzmann Method is resource intensive. In general, running simulations on large systems (greater than 100x100x100 grid points) is impractical because of the memory required and the long processing times. Because of these extreme demands on memory and computation, and because the LB method generally needs only nearest-neighbor information, the algorithm is an ideal candidate for parallel computing.

How is the Parallelization Realized?

The code was implemented in C with MPI for portability. Several features of the implementation enable large problems to be run quickly.

Single-Program Multiple-Data (SPMD) Model:

The data volume is divided into spatially contiguous blocks along one axis; multiple copies of the same program run simultaneously, each operating on its own block of data. Each copy of the program runs as an independent process and typically each process runs on its own processor. At the end of each iteration, data for the planes that lie on the boundaries between blocks are passed between the appropriate processes and the iteration is completed.
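
The sketch below illustrates the kind of boundary-plane exchange performed at the end of each iteration, assuming a one-dimensional slab decomposition with one ghost plane on each side of a process's block. The buffer layout, plane size, and function name are assumptions for illustration, not the actual interface of the NIST code.

/*
 * Sketch of the end-of-iteration boundary-plane exchange for a 1-D slab
 * decomposition.  The layout (one ghost plane on each side of the local
 * block), the plane size, and the function name are assumptions.
 */
#include <mpi.h>

#define NX 64
#define NY 64
#define Q  19                       /* distributions per site (e.g. D3Q19) */
#define PLANE (NX * NY * Q)         /* number of doubles in one z-plane    */

/* The local field holds planes 0..nz_local+1: planes 1..nz_local are owned
 * by this process; planes 0 and nz_local+1 are ghost copies of the
 * neighbors' boundary planes. */
void exchange_boundary_planes(double *f, int nz_local, MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    int below = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    int above = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

    /* send my top interior plane up; receive my lower ghost plane from below */
    MPI_Sendrecv(&f[nz_local * PLANE],       PLANE, MPI_DOUBLE, above, 0,
                 &f[0],                      PLANE, MPI_DOUBLE, below, 0,
                 comm, MPI_STATUS_IGNORE);

    /* send my bottom interior plane down; receive my upper ghost plane from above */
    MPI_Sendrecv(&f[1 * PLANE],              PLANE, MPI_DOUBLE, below, 1,
                 &f[(nz_local + 1) * PLANE], PLANE, MPI_DOUBLE, above, 1,
                 comm, MPI_STATUS_IGNORE);
}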

Memory Management:

To run large problems, we use several techniques to keep per-process memory requirements as small as possible.

* Since only nearest-neighbor information is needed, all computation within a process is performed with only three temporary planes; thus the temporary memory requirement grows at a much smaller rate than the problem size.

* For fluid flow in complex geometries, we have both active sites (that hold fluid) and inactive sites (that consist of material such as sandstone). For efficient use of memory we use an indirect addressing approach where the active sites point to fluid data and the inactive sites point to NULL. Hence only minimal memory needs to be devoted to inactive sites. At each active site we point to the necessary velocity and mass data for each fluid component. Over the course of an iteration we visit each active cell in the data volume and calculate the distribution of each fluid component to be streamed to neighboring cells. New mass and velocity values are accumulated at each active cell as its neighbors make their contributions.
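
A minimal sketch of the indirect-addressing idea follows, assuming a simple per-site pointer array: active (pore) sites get a small allocated record of per-component mass and velocity, while inactive (solid) sites cost only a NULL pointer. The structure layout, names, and sizes are illustrative assumptions, not those of the actual code.

/*
 * Sketch of the indirect-addressing scheme: one pointer per grid point,
 * where active (pore) sites point to a small record of per-component mass
 * and velocity and inactive (solid) sites point to NULL.  Names and sizes
 * are illustrative assumptions.
 */
#include <stdlib.h>

#define NSITES  (100 * 100 * 100)   /* total grid points (example size)     */
#define NFLUIDS 2                   /* number of fluid components (example) */

typedef struct {
    double mass[NFLUIDS];           /* mass of each fluid component here    */
    double vel[3];                  /* fluid velocity at this site          */
} SiteData;

/* Inactive sites cost only the NULL pointer; fluid data is allocated only
 * for active sites, so memory grows with the pore space, not the volume.  */
static SiteData *site[NSITES];

void allocate_sites(const unsigned char *is_solid)
{
    for (long i = 0; i < NSITES; i++)
        site[i] = is_solid[i] ? NULL : calloc(1, sizeof(SiteData));
}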

What is the Performance of the Parallel Code?

For Modeling Fluid Flow in Complex Geometries:

We ran a series of timing tests on multiple machines. We found in all cases that the non-parallelizable part of the computation accounts for between 0.7% and 3% of the total computational load. In one of the test cases, the performance data from the SGI Origin 2000 closely matches this formula (T is the total time in seconds for an iteration; N is the number of processors): T = 0.090 + 11.98/N. The non-parallelizable part of the computation takes 0.090 seconds, while the parallelizable portion takes 11.98 seconds on a single processor. So, for example, a single iteration took 12.08 seconds on one processor but only 1.11 seconds on 12 processors.
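
For illustration, the measured model can be evaluated directly; the sketch below computes the predicted per-iteration time and speedup from the constants quoted above. Reading them as a simple Amdahl-style model is an interpretation for illustration, not additional measured data.

/*
 * Evaluates the measured timing model T(N) = 0.090 + 11.98/N seconds per
 * iteration and the resulting speedup, using the constants quoted above
 * for the SGI Origin 2000.
 */
#include <stdio.h>

int main(void)
{
    const double serial   = 0.090;  /* non-parallelizable time per iteration (s) */
    const double parallel = 11.98;  /* parallelizable time on one processor (s)  */

    for (int n = 1; n <= 16; n *= 2) {
        double t = serial + parallel / n;
        printf("N = %2d   T = %6.2f s   speedup = %5.2f\n",
               n, t, (serial + parallel) / t);
    }
    return 0;
}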

Other timing tests indicate that the time for the parallelizable portion of the code is roughly proportional to the number of active sites over the entire volume, while interprocess communication time is roughly proportional to the size of a cross-section of the volume. So as we process larger systems, the time for the parallelizable portion of the code should increase proportionally with the cube of the linear size of the system, while the non-parallelizable portion should increase with the square of the linear size of the system. This means that for larger systems, a larger proportion of the time is in the parallelizable computation and greater benefits can be derived from running on multiple processors.



Applications:

* Modeling Fluid Flow in Complex Geometries
* Modeling Multicomponent Fluids
* Studying Finite Size Effects
* Modeling the Taylor-Tomotika Instability



Papers/Presentations
* Nicos S. Martys and John G. Hagedorn, Multiscale modeling of fluid transport in heterogeneous materials using discrete Boltzmann methods, Materials and Structures, 35, December 2002, pp. 650-659.
* Eric Landis, Shan Lu, Nicos Martys and John Hagedorn, Experiments and Simulations of Concrete Microstructure Permeability, delivered at the Symposium on Materials Science of High Performance Concrete, November 28-30, 2000.
* Nicos Martys, John Hagedorn and Judith Devaney, Lattice Boltzmann Simulations of Single and Multi-Component Flow in Porous Media, in Mesoscopic Modeling: Techniques and Applications, Nicolaides and Bick (Eds.), Marcel Dekker, Inc. (to be published).
* James S. Sims, John G. Hagedorn, Peter M. Ketcham, Steven G. Satterfield, Terence J. Griffin, William L. George, Howland A. Fowler, Barbara A. am Ende, Howard K. Hung, Robert B. Bohn, John E. Koontz, Nicos S. Martys, Charles E. Bouldin, James A. Warren, David L. Feder, Charles W. Clark, B. James Filla and Judith E. Devaney, Accelerating Scientific Discovery Through Computation and Visualization, NIST Journal of Research, 105 (6), November-December 2000, pp. 875-894.
* N. Martys, J. Hagedorn, D. Goujon and J. Devaney, Large Scale Simulations of Single and Multi-Component Flow in Porous Media, in Proceedings of SPIE: The International Symposium on Optical Science, Engineering, and Instrumentation, Denver, Colorado, July 19-23, 1999, 3772.
* John Hagedorn, Nicos Martys, Delphine Goujon and Judith Devaney, A Parallel Lattice Boltzmann Algorithm for Fluid Flow in Complex Geometries, delivered at the Symposium on Computational Advances in Modeling Heterogeneous Materials, Fifth National Congress on Computational Mechanics, August 4-6, 1999.
* N. Martys and J. Hagedorn, Numerical Simulation of Fluid Transport in Complex Geometries, delivered at the International Conference on Computational Physics, American Physical Society, 1997.


* Parallel Algorithms and Implementation: John G. Hagedorn
* Collaborating Scientist: N. (Nick) Martys
* Visualization: John G. Hagedorn, N. (Nick) Martys
* Group Leader: Judith E. Terrill

