MultiBlock for IPARS
=====================

M. Peszynska, 7/8/98, 12/10/00, 11/4/03
Qin Lu, 7/22/98
Ivan Yotov, 8/25/98
(last update: 11/04/03)

CONTENTS:

I.    Overview, people and credits
II.   The general mechanism
III.  Input file and initialization
IV.   Interface Newton/GMRES
V.    MultiBlock implementation for a given physical model on the
      example of the hydrology model from /hydro
VI.   How to build an executable
VII.  More on input
VIII. References

====================================================================
I. Overview.

The MultiBlock implementation for IPARS is based on the mortar
formulation of the domain decomposition algorithm, see [YOTOV, ACWY].
Results obtained with this implementation are described in several
papers listed in the References section. The multimodel implementation
is described in mmodel.txt.

The code described here is based on the MultiBlock implementation for
the fully implicit two-phase code Piers, known in CSM as MB. The MB
code has logically three parts: the original Piers code, the parallel
data structure library DAGH / MACE, and the interface between Piers
and DAGH. Note that in 1999 MACE (Multiblock Adaptive Computational
Engine) replaced the use of DAGH, see
http://www.caip.rutgers.edu/TASSL/Projects/ACE/

IPARS stands for (fully) Implicit Parallel Accurate Reservoir
Simulator, see
http://www.ticam.utexas.edu/Groups/SubSurfMod/ACTI/ipars.html
It has been designed by John Wheeler and is a joint effort of many
developers at CSM, including

    John Wheeler
    Steve Bryant
    Srinivas Chippada
    Joe Eaton
    Carter Edwards
    Qin Lu
    Manish Parashar
    Victor Parr
    Malgorzata Peszynska
    Mary Wheeler
    Ivan Yotov

The people involved in the implementation of MultiBlock IPARS are

    Qin Lu
    Manish Parashar
    Malgorzata Peszynska
    Ivan Yotov

======================================================================
II. The general mechanism.

IPARS in the current version IPARSv2 assumes that the reservoir is, or
can be, split into several faultblocks. The number of faultblocks is
unrelated to the number of processors executing the code in parallel.
The boundary conditions assumed on every faultblock boundary are
no-flow, which means that there is no mass transfer across the
faultblock interfaces.

MultiBlock IPARS provides the means for this transfer. It is organized
through a mortar layer, which is some extra "space" between two
adjacent faultblocks. Since the faultblocks are likely to have
non-matching grids, the mortar layer provides a unified way of
communicating the values of the primary variables and their fluxes
between adjacent blocks. The communication is done with projection
routines.

MultiBlock is activated through data specified in the input file. Even
if multiple faultblocks are present, if the multiblock data is absent
from the input file, the code will perform computations on the
faultblocks treated as isolated reservoir regions, with no mass
transfer. See the details on input below.

If MultiBlock is active, the code solves the following problem: for
each time step, find the interface values \lambda of the primary
variables on the faultblock boundaries for which the jump in the
corresponding fluxes, B(\lambda), is zero. The problem B(\lambda) = 0
is nonlinear and is solved using an inexact Newton-GMRES iteration.
See the details on the interface Newton below.
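Schematically, and in illustrative notation (see [YOTOV, ACWY] for the
precise mortar formulation), the interface problem and its Newton
linearization read: find \lambda on the union of block interfaces
\Gamma such that

    B(\lambda) := [[ flux(\lambda) \cdot n ]]_\Gamma = 0,

where flux(\lambda) is obtained by solving each faultblock problem
with Dirichlet data \lambda on its interface boundary and [[.]]
denotes the jump across \Gamma; each Newton step then solves

    B'(\lambda) S = - B(\lambda),      \lambda <- \lambda + S,

with B'(\lambda) S replaced by a finite difference quotient inside
GMRES, as described in section IV.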
The structure of MultiBlock IPARS resembles that of Piers: it involves
IPARS as a subdomain solver, MACE as an extra parallel library used to
handle the MultiBlock data structures, and the interface layer between
the two. The major difference is that the MACE-IPARS interface code is
designed to handle multiple numerical models as well as multiple
physical models using mostly common routines. The number of
physical-model specific routines for MultiBlock is kept as small as
possible without compromising the readability of the code.

The variables and routines in the interface code are modular. For
example, the number of variables and the mappings of IPARS variables
to MultiBlock variables are established by setup routines which are
model specific. The flux and property routines in the boundary layer,
and the evaluation of the matrix entries and residuals related to the
Dirichlet boundary conditions, are model specific. See below for
technical details on setting up a physical model for the MultiBlock
implementation.

The model independent part of the MultiBlock code resides in the /mblk
directory. The model specific code is in the model directories (/hydro
etc.). The calls to the multiblock code are commented out with the
keyword $MORTAR.

In the future we intend to target the intra-model and intra-numerical
scheme MultiBlock formulation. This will involve some modifications of
the current memory management and message passing routines and of the
makefile/sizefile structure, and may be organized as an IPARSv3 model.

===========================
III. Input file and initialization.

The mortar data reading and the initialization of the data structures
and communication are done in /input/idata.df by calling MACEMB_INIT(),
which is in /mblk/mb_main.dC.

Sample input data files are available as /data/hmb2.dat and
/data/hmb2b.dat. See the data file /data/hmb2b.dat: the meaning of all
multiblock related data structures in this input file is explained by
comments there.

The convention for face numbers is as follows:

    1 - low x      2 - high x
    3 - low y      4 - high y
    5 - low z      6 - high z
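The same numbering is used later as the first index of the
boundary-element-arrays (see sections III and V). Purely as an
illustration (the following fragment is not part of the IPARS sources
and the names are invented for this note), the convention can be
written as:

    // Illustrative only: the face numbering used by the multiblock input
    // and as the first index of the boundary-element-arrays (1-based).
    enum Face {
        FACE_XLOW  = 1,   // 1 - low  x  (X-)
        FACE_XHIGH = 2,   // 2 - high x  (X+)
        FACE_YLOW  = 3,   // 3 - low  y  (Y-)
        FACE_YHIGH = 4,   // 4 - high y  (Y+)
        FACE_ZLOW  = 5,   // 5 - low  z  (Z-)
        FACE_ZHIGH = 6    // 6 - high z  (Z+)
    };

    // Axis (0 = x, 1 = y, 2 = z) and side (0 = low, 1 = high) of a face.
    inline int face_axis(int face) { return (face - 1) / 2; }
    inline int face_side(int face) { return (face - 1) % 2; }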
Currently no mapping is implemented in IPARS, which means that the
domain is a union of bricks that all live in a global coordinate
system. The exact position of each block is fixed by setting the
XYZ111 variable. The mortar coordinates are also given in this global
coordinate system. Therefore the mortar coordinates should be the same
from the perspective of both neighboring blocks, e.g.,

    MB_MYMORTAR_LB1(1,1) = 0.    MB_MYMORTAR_LB2(1,1) = 24.   $ Block 1 perspective
    MB_MYMORTAR_LB1(2,1) = 0.    MB_MYMORTAR_LB2(2,1) = 24.   $ Block 2 perspective

The multiblock data structure allows for more general domains than
those currently implemented. The code should eventually be extended to
allow for irregular logically rectangular blocks (as implemented in
the prototype simulator MB). Each block is then locally mapped to a
reference computational brick, so the computational domain is again a
union of bricks (just like in the case currently implemented). The
difference is that each computational brick now has its own local
coordinate system. THE COORDINATES FOR THE MORTAR ASSOCIATED WITH EACH
BLOCK SHOULD THEN BE GIVEN FROM THIS BLOCK'S PERSPECTIVE, IN THIS
BLOCK'S LOCAL COMPUTATIONAL COORDINATE SYSTEM!!!

Mortar types:

    0 - discontinuous piecewise constants
        (should be used only with matching grids)
    1 - continuous piecewise linears
    2 - discontinuous piecewise linears
    3 - continuous piecewise quadratics
    4 - discontinuous piecewise quadratics

Hints on choosing the size of the mortar grids. A rigorous
mathematical condition on the mortar grid size that guarantees
stability and accuracy is given in [YOTOV], [ACWY]. This condition
controls the dimension of the mortar finite element space by the
dimensions of the adjacent subdomain spaces. For each interface a good
practical choice is to take the trace of either one of the subdomain
grids (preferably the finer one), coarsen it by two (by three, in the
case of quadratics) in each direction, and use the resulting grid as
the mortar grid; for example, if the finer subdomain trace on an
interface has 16 x 12 cells, this gives an 8 x 6 mortar grid. This
choice works for any mortar type. Type 0 (piecewise constants) is the
usual mixed method Lagrange multiplier choice and is recommended on
matching interfaces with the mortar grid coinciding with the traces of
the subdomain grids. On non-matching interfaces, however, this choice
does not work well, see [YOTOV, ACWY].

A memory allocation routine ALCBEA(), analogous to ALCGEA() which
allocates a grid-element-array, has been added in /memman/memman1.dc
to allocate a "boundary-element-array", which contains the cells on
all six faces of a faultblock. The first index of the array is defined
for each of the six faces as

    1: X-, 2: X+, 3: Y-, 4: Y+, 5: Z-, 6: Z+.

The properties on the Dirichlet boundary are stored in the
"communication layer" of the corresponding grid-element-array. Hence
one must set LAYERX, LAYERY and LAYERZ >= 1 if MultiBlock is active.
These values in the communication layer can be passed back and forth
to the mortar space data structures through the
boundary-element-arrays. The transmissibilities and the boundary cell
depths on the Dirichlet boundary are computed in /mblk/mb_init.df,
which is model independent.

===========================
IV. Interface Newton/GMRES

The currently implemented Newton/GMRES algorithm on the interface
works as follows (for the ideas see the ACTI report '96, Carter's
TICAM report 98-02 ... and many others ...).

The driver routine IPARS (drive/ipars.df), after initialization,
enters a time step loop in which it calls a "step" routine for every
physical model, for example HSTEP() for the hydrology model. If
MultiBlock is active, this is replaced by a call to MORTAR_STEP()
(mblk/mb_mort.C), which governs the solution of the interface problem
B(\lambda) = 0 (see section II).

This nonlinear problem is solved with the Newton method: given a guess
for \lambda, find the correction S to \lambda by solving

    B'(\lambda) S = - B(\lambda).

The Jacobian-vector product B'(\lambda) S is approximated (inexact
Newton) by the finite difference quotient

    B'(\lambda) S \approx \frac{ B(\lambda + d S) - B(\lambda) }{ d },

where d is a small parameter (MB_DELTA, see section VII). The
linearized interface problem that is actually solved is therefore

    \frac{ B(\lambda + d S) - B(\lambda) }{ d } = - B(\lambda),

and it requires multiple evaluations of B(guess). This linear problem
is solved using the GMRES method. The currently implemented GMRES
technique does not yet have the restart capability: in Piers the
restart capability was said not to have improved the convergence of
the Newton algorithm.
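The only MultiBlock-specific operation inside GMRES is this finite
difference matrix-vector product (the role played by CompDhF(), which
calls EvalF(); see the outline below). A minimal sketch in C++,
assuming a placeholder vector type and an evalB() callback that stands
for one interface residual evaluation (none of these names are the
actual IPARS/MACE interfaces):

    #include <cstddef>
    #include <functional>
    #include <vector>

    using Vec = std::vector<double>;

    // Finite difference approximation of the Jacobian-vector product
    //     B'(lambda) s  ~=  ( B(lambda + d*s) - B(lambda) ) / d .
    // evalB(lambda) stands for one evaluation of the interface residual:
    // project lambda to the block boundary conditions, take one "fake"
    // time step in every faultblock, project the fluxes back to the
    // mortars and form their jump (the job of EvalF() below).
    Vec matvec_fd(const std::function<Vec(const Vec&)>& evalB,
                  const Vec& lambda,          // current Newton iterate
                  const Vec& B_lambda,        // evalB(lambda), already known
                  const Vec& s,               // GMRES direction vector
                  double d)                   // small step (cf. MB_DELTA)
    {
        Vec perturbed(lambda);
        for (std::size_t i = 0; i < perturbed.size(); ++i)
            perturbed[i] += d * s[i];

        Vec result = evalB(perturbed);        // one extra subdomain solve
        for (std::size_t i = 0; i < result.size(); ++i)
            result[i] = (result[i] - B_lambda[i]) / d;
        return result;
    }

Each such product costs one additional subdomain solve, which is why
B(\lambda) computed at the current Newton iterate is stored and reused
rather than recomputed.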
In the code these steps are organized as follows.

MORTAR_STEP()
-------------

For a given guess of \lambda (usually taken from the previous time
step, or at initialization the arithmetic average of the values on the
interface):

    EvalF()                 // evaluate B(\lambda):
        Mortars_To_Ipars:   // project \lambda from the Mortars to the IPARS
                            // boundary conditions BC_POIL etc.
        IPARS_STEP() = HSTEP()
                            // solve the problem in each faultblock using
                            // the current boundary conditions on the
                            // interface = take one "fake" time step
        BC_FLUX             // compute the new fluxes in VEL and copy them
                            // to BC_FLOIL etc.
        Ipars_to_Mortars()  // project the fluxes from BC_FLOIL ... to the
                            // Mortars

    If the result (= residual) is small enough, quit. Otherwise enter
    the Newton loop.

    while (no convergence yet = residual is not small enough)
    ---------------------------------------------------------
        Newton_Step()       // find the correction s to \lambda using GMRES.
                            // Within the GMRES routine the IPARS-MB-specific
                            // call is the MATVEC product CompDhF(), which
                            // calls EvalF(FROM_NEWTON_STEP) to force the
                            // evaluation of B(\lambda + d s) instead of
                            // B(\lambda)
        ResetStep()         // reset the primary unknowns of the domain
                            // to the beginning of the current time step

        // find the norm of the residual; if small enough, quit,
        // otherwise keep iterating

    // if we are here, then the Newton iteration converged and we have
    // new "good" interface values \lambda
    // (or it completely diverged: in this case we bail out)

    EvalF()                 // recompute the pressures for the new \lambda
    ResetStep()             // reset the primary unknowns of the domain to
                            // the beginning of the current time step
    ComputeNorm()           // compute the norm of the flux; if small
                            // enough, quit, or if there were too many
                            // iterations, bail out

    // when we are here, the Newton iteration has converged for one
    // time step

END OF MORTAR_STEP()
--------------------

Note: the interface solution algorithm may be changed; for example,
one may try nonlinear CG. The current implementation is close to the
template but may be improved.

========================================================================
V. MultiBlock implementation for a given physical model on the example
   of the hydrology model from /hydro.

Data structures
===============

Boundary-element-arrays are allocated as the interface between the
IPARS and the mortar space data structures. These variables are

                  declared in         initialized in      notes
    --------------------------------------------------------------
    BC_POIL       /hydro/harydat.h    /hydro/harray.df
    BC_COIL          ....
    BC_FLOIL
    BC_FLWAT

IPARS Mblk code:
----------------

The MACE interface provides a number of classes with operations which
take care of the parallel communication / memory copies on the Mortar
variables. The Mortar variables are organized as arrays of size equal
to the number of primary variables. It is currently assumed that the
number of primary variables is equal to the number of flux variables.
In the case of the /hydro model there are two of them. These variables
are

                  declared in         defined in          set by
    ------------------------------------------------------------------
    PrimaryM      /mblk/mb_mort.h     /mblk/mb_mort.C     mortar_setup()
    FluxM            ...

The call from mortar_setup() queries IPARS for the number of
variables.

There exists an auxiliary structure IparsFaces which holds pointers to
the above mentioned IPARS variables BC_POIL ... etc., accessible from
the C/C++ code. The IparsFaces are

                  declared in         defined in          set by
    ------------------------------------------------------------------
    IparsFaces    /mblk/faces.h       /mblk/mb_mort.C     mortar_setup()

The assignments are like

    ... IparsFaces.primary[0][0] = BC_POILX
    ... IparsFaces.primary[1][2] = BC_COILZ
    ... IparsFaces.flux[0][1]    = BC_FLOILY
    ... etc.
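The intent can be pictured with a small illustrative C++ structure
(the names, types and dimensions below are invented for this note; the
real class is declared in /mblk/faces.h and filled in by
mortar_setup()):

    // Illustrative sketch only -- see /mblk/faces.h for the real code.
    // For the /hydro model there are 2 primary variables (POIL, COIL)
    // and 2 flux variables, each with an entry per direction X, Y, Z.
    struct IparsFacesSketch {
        static const int NPRIM = 2;      // number of primary variables
        static const int NFLUX = 2;      // number of flux variables
        static const int NDIR  = 3;      // 0: X, 1: Y, 2: Z

        // "Handle" stands for whatever the real code stores for a
        // boundary-element-array (a pointer or an IPARS array handle).
        typedef void* Handle;

        // primary[v][dir]: Dirichlet values of primary variable v in
        // direction dir, e.g. primary[0][0] -> BC_POILX,
        //                     primary[1][2] -> BC_COILZ.
        Handle primary[NPRIM][NDIR];

        // flux[v][dir]: the corresponding flux arrays, e.g.
        //               flux[0][1] -> BC_FLOILY.
        Handle flux[NFLUX][NDIR];
    };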
File structure and routines
===========================

/hydro/
-------

For the /hydro model there are only two files which contain model
specific MultiBlock routines. These are

    /hydro/
        hfault.df   // routines hbc_prop, hbcini, hbc_flux, hinflux,
                    // hfluxcopy
        hm_util.f   // routines called at setup only:
                    // hgetbc(), hset_faces(), hrestestep

These routines "know", set up and use the number and names of the
primary variables and fluxes as IPARS grid arrays and IPARS boundary
arrays (= Faces).

1. routine: HSET_FACES()
   *********************
   Sets the number (= 2) and the names of the primary variables in the
   model (POIL, COIL).

2. routines: HGETBC(), HBCINI()
   ****************************
   Provide the interface for copying the values between IparsFaces
   (boundary-element-arrays) and IPARS grid arrays for the primary
   variables:
       BC_POIL <-> POIL   ... etc.

3. routine: HFLUXCOPY()
   ********************
   Provides the interface for copying the values between IparsFaces
   (boundary-element-arrays) and IPARS grid arrays for the flux
   variables:
       BC_FLOIL <- VEL(1,*)
       BC_FLWAT <- VEL(2,*)

4. routine: HBC_PROP()
   *******************
   Computes the properties of the fluid on the Dirichlet boundary, so
   that the initial Jacobian matrix calculation file tran3.df may get
   the related coefficients accordingly.

5. routine: HBC_FLUX()
   *******************
   Computes the velocities.

6. routine: HINFLUX()
   ******************
   Computes the fluxes on the Dirichlet boundary so that they may be
   included in the local/global mass balance calculations.

/mblk/
------

The mblk/ directory currently contains the following files:

    mb_main.dC   // contains just the routine MACEMB_INIT()
    mb_main.dh   // prototypes of the IPARS - Fortran macemb interface
                 // routines
    mb_intf.h    // constants for Projection types etc.
    mb_ipars.f   // Fortran code to read the MBlock data from the IPARS
                 // input file and initialize the MBlock structures;
                 // contains also the Projection routines
    mb_util.C    // C++ code working along with the code from mb_ipars.f
    mbutil.C     // C++ code interfacing the Mortar BLAS routines
    faces.h      // IparsFaces class declaration
    m2ipars.C    // interface functions copying from the Mortar
                 // structures to the IPARS structures and vice versa
    mb_f.dh      // declaration of the Newton info made available to
                 // the IPARS Fortran code
    mb_mort.C    // (prev. mortar.C) main driver Mortar_Step(),
                 // initialization routine Mortar_Setup(),
                 // and dependencies: EvalF(), NewtonStep(), CompDHF()
    mb_mort.h    // prototypes of the routines from mb_mort.C, m2ipars.C
    mb_setup.df  // Fortran wrappers for model (in)dependent calls
                 // of IPARS routines from the MultiBlock code
    mb_init.df   // computes the transmissibilities and the boundary
                 // cell depths on the Dirichlet boundary

/drive/, /input/
----------------

Note that the calls to the MultiBlock code from

    ipars.df  idata.df  hstep.df  harray.df  hvisual.df

have been commented out with the $MORTAR keyword so that the /hydro
model can be compiled and linked to run with or without MultiBlock. Of
course, the code will not run with MultiBlock data if it was not
precompiled to do so. The declaration statements of the MultiBlock
variables in the include files have not been commented out; these
variables will simply not be allocated for non-MultiBlock code.

===================================================================
VI. Building an executable with MACE

The MACE subdirectory is included in the general distribution of
IPARSv2.

1. Go to the MACE directory and type

       make SYSTEM=linux all

   If your platform is not single-processor linux, replace linux by
   platform, where platform is one of the extensions in the
   subdirectory arch_makes:

       arch_makes/make.defn.linux
       arch_makes/make.defn.intel
       arch_makes/make.defn.sgi32
       arch_makes/make.defn.beowulf

   The first three are for single-processor machines, the last one is
   for clusters. There are more files there but they may be obsolete
   or may not have been tested recently.

2. In your ipars.siz and unix.mak (see make/modular/README) activate
   (un-comment) the lines for mortar.siz and mace.mak, respectively.
   Make sure that SYSTEM in the machine file you chose (for example,
   lnx.mak) agrees with the SYSTEM you used when creating the MACE
   libraries.

3. Type make and follow the instructions from the README file.

===================================================================
VII. More on input

There are a lot of parameters that you can use in order to control the
iteration on the interface. I am listing here the most important ones.
Most are self-explanatory. Examples are provided in the input files.
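For orientation, a fragment of this part of an input file might look
as follows (the keywords are the ones described below; the numerical
values are only illustrative and should be tuned for the problem at
hand):

    $ interface Newton/GMRES control (illustrative values only)
    MB_NEWT_TOL = 1.0E-6
    MB_NEWT_MAXITER = 20
    MB_DELTA = 1.0E-4
    MB_GMRES_TOL = 1.0E-4
    MB_GMRES_MAXITER = 50
    MB_OUTLEVEL = 1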
VII A. Parameters that control the Newton iteration on the interface

    MB_NEWT_TOL        absolute tolerance
    MB_NEWT_MAXITER
    MB_DELTA           defines the size of the step in the finite
                       difference approximation to the interface Jacobian
    MB_GMRES_TOL
    MB_GMRES_MAXITER
    MB_GMRES_RESTART
    MB_GMRES_PRECOND   yes or no, default no; mb_prec must be implemented
                       for the given model; this is a diagonal
                       preconditioner
    MB_OUTLEVEL        controls how much output goes to stdout and to
                       MB.0; by default set to OUTLEVEL (everything). The
                       most concise info is given if MB_OUTLEVEL = 1 and
                       OUTLEVEL = 1

VII B. Parameters related to multimodel, used to control the type and
number of interface variables:

    MB_PRIM_CHOICE     depends on the model, see mmodel.txt
    MB_FLUX_CHOICE     depends on the model, see mmodel.txt
    MB_FORCE_GUESS     instead of the previous time step value, force a
                       projection from the current "faces" to initialize
                       the first guess; this is necessary when doing
                       restart, adaptivity, or when changing the
                       interface unknowns. In some cases it may slow down
                       the computations, in some cases it may increase
                       robustness
    MB_SOLVE           global number of interface vars/fluxes, see
                       mmodel.txt
    MB_FACE_NVARS      local (to block/face) number of interface
                       vars/fluxes, see mmodel.txt

VII C. Parameters used for the multiblock Multigrid preconditioner
(contact Ivan Yotov for details if needed)

    MB_MGLEVELS
    MB_MGFACTOR
    MB_MG_MAXITER
    MB_MG_TOL
    MB_MG_DOWN
    MB_MG_UP

===================================================================
VIII. References

[YOTOV] I. Yotov, Mixed finite element methods for flow in porous
media, Ph.D. Thesis, Rice University, Houston, Texas, May 1996.
TR96-09, Dept. Comp. Appl. Math., Rice University, and TICAM report
96-23, University of Texas at Austin.

[ACWY] T. Arbogast, L. C. Cowsar, M. F. Wheeler, and I. Yotov, Mixed
finite element methods on non-matching multiblock grids, TICAM report
96-50, University of Texas at Austin (1996), submitted to SIAM J. Num.
Anal.

Qin Lu, Malgorzata Peszynska, and Mary F. Wheeler, A Parallel
Multi-Block Black-Oil Model in Multi-Model Implementation, SPE 66359,
submitted.

M. Peszynska, Advanced Techniques and Algorithms for Reservoir
Simulation, III: Multiphysics coupling for two phase flow in
degenerate conditions, IMA Volume on Resource Recovery, submitted.

Mary F. Wheeler, John A. Wheeler, and Malgorzata Peszynska, A
Distributed Computing Portal for Coupling Multi-Physics and Multiple
Domains in Porous Media, XIII International Conference on
Computational Methods in Water Resources, Calgary, Alberta, Canada,
Bentley, L.R., Sykes, J.F., Brebbia, C.A., Gray, W.G., and Pinder,
G.F., eds., p. 167-174, Balkema, 2000.

Malgorzata Peszynska, Qin Lu, and Mary Wheeler, Multiphysics Coupling
of Codes, XIII International Conference on Computational Methods in
Water Resources, Calgary, Alberta, Canada, Bentley, L.R., Sykes, J.F.,
Brebbia, C.A., Gray, W.G., and Pinder, G.F., eds., p. 175-182,
Balkema, 2000.

Mary F. Wheeler, Malgorzata Peszynska, Xiuli Gai, and Omar El-Domeiri,
Modeling Subsurface Flow on PC Clusters, High Performance Computing
2000, A. Tentner, ed., SCS, 2000.

Malgorzata Peszynska, Qin Lu, and Mary F. Wheeler, Coupling different
numerical algorithms for two phase fluid flow, Mathematics of Finite
Elements and Applications X, MAFELAP 1999, J. R. Whiteman, ed.,
Chapter 12, p. 205-214, Elsevier, 2000.