Rocket Science using Charm++ at CSAR (Orion Sky Lawlor, 2003/10/21)


Slide 1: Rocket Science using Charm++ at CSAR. Orion Sky Lawlor, 2003/10/21.

Slide 2: Roadmap
- CSAR
- FEM Framework
- Collision Detection
- Remeshing

Slide 3: CSAR: Rocket Simulation
- Dynamic, coupled physics simulation in 3D
- Finite-element solids on an unstructured tet mesh or hex mesh
- Finite-volume fluids on an unstructured mixed or structured hex mesh
- Coupling every timestep via a least-squares data transfer
- Challenges: multiple developers and modules; the surface of the propellant is burning away, so the mesh must adapt
  (Robert Fielder, Center for Simulation of Advanced Rockets)

Slide 4: CSAR: Organizational
- CSAR: Center for Simulation of Advanced Rockets, in the CSE department of UIUC
- Multidisciplinary groups:
  - CS: solution transfer, meshing
  - Structures: mechanics, cracks
  - Fluids: turbulence, gas, radiation
  - Combustion: burn rate
- 100+ people (including me!)

Slide 5: CSAR: Multiple Modules
- Use of two or more Charm++ frameworks in the same program:
  - FEM: multiple unstructured mesh chunks
  - MBLOCK: multiple structured mesh blocks
  - AMPI: Adaptive MPI on Charm++
- All based on the Threaded Charm++ framework (TCHARM)
- For example, Rocflu's communication uses the FEM framework, but it is coupled with an AMPI main program

Slide 6: Adaptive MPI (AMPI)
- Runs each MPI process as a user-level thread
- Multiple MPI processes per physical processor: cache usage, migration, load balancing, ...
- Virtualized MPI implementation on Charm++: debugged a 480-processor mesh-motion bug using only 16 processors

Slide 7: Charm++ FEM Framework
- Handles parallel details in the runtime; leaves physics and numerics to the user
- Presents a clean, almost serial interface: one call to update cross-processor boundaries
- Not just for finite-element computations! Now supports ghost cells
- Builds on top of AMPI or native MPI; no longer depends on Charm directly
- Allows use of advanced Charm++ features: adaptive parallel computation; dynamic, automatic load balancing
- Other libraries: collision, adaptation, visualization, ...

Slide 8: FEM Mesh: Serial to Parallel

Slide 9: FEM Mesh: Communication
- Summing forces from other processors takes only one call: FEM_Update_field
- Can also access values from ghost elements

Slide 10: Charm++ Collision Detection
- Detect collisions (intersections) between objects scattered across processors
- Built on Charm++ arrays
- Overlay a regular 3D sparse grid of voxels (boxes)
- Send objects to all voxels they touch
- Collect collisions from each voxel
- Collision response is left to the caller

Slide 11: Collision Detection Algorithm
- Sparse 3D voxel grid (implemented as a Charm++ array)
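The voxel-overlay scheme on slides 10 and 11 can be sketched in serial C++. This is only an illustration of the idea, not the Charm++ Collision library's interface: Box3, VoxelKey, findCollisions, and voxelSize are invented names, and the real library stores the sparse voxel grid as a Charm++ array spread across processors, with each object sent to the voxels its bounding box touches.

```cpp
// Serial sketch of the voxel-overlay idea: hash each object's bounding box
// into every voxel it touches, then test pairs only within each voxel.
// Box3, VoxelKey, findCollisions, and voxelSize are illustrative names, not
// the Charm++ Collision library API.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <map>
#include <tuple>
#include <utility>
#include <vector>

struct Box3 {                        // axis-aligned bounding box of one object
    double lo[3], hi[3];
    bool overlaps(const Box3 &o) const {
        for (int a = 0; a < 3; ++a)
            if (hi[a] < o.lo[a] || o.hi[a] < lo[a]) return false;
        return true;
    }
};

using VoxelKey = std::tuple<int, int, int>;   // integer voxel coordinates

// All voxels of size h that box b touches.
static std::vector<VoxelKey> touchedVoxels(const Box3 &b, double h) {
    std::vector<VoxelKey> keys;
    int lo[3], hi[3];
    for (int a = 0; a < 3; ++a) {
        lo[a] = (int)std::floor(b.lo[a] / h);
        hi[a] = (int)std::floor(b.hi[a] / h);
    }
    for (int i = lo[0]; i <= hi[0]; ++i)
        for (int j = lo[1]; j <= hi[1]; ++j)
            for (int k = lo[2]; k <= hi[2]; ++k)
                keys.emplace_back(i, j, k);
    return keys;
}

// Return every colliding (intersecting) pair of object indices.
std::vector<std::pair<int, int>> findCollisions(const std::vector<Box3> &objs,
                                                double voxelSize) {
    // "Send" each object to every voxel it touches; only touched voxels
    // exist in the map, so the grid stays sparse.
    std::map<VoxelKey, std::vector<int>> grid;
    for (int i = 0; i < (int)objs.size(); ++i)
        for (const VoxelKey &v : touchedVoxels(objs[i], voxelSize))
            grid[v].push_back(i);

    // Each voxel reports collisions among the objects it received. A pair
    // is reported only by the voxel containing the low corner of its
    // overlap region, so shared pairs are not duplicated across voxels.
    std::vector<std::pair<int, int>> hits;
    for (const auto &cell : grid) {
        const std::vector<int> &ids = cell.second;
        for (size_t a = 0; a < ids.size(); ++a)
            for (size_t b = a + 1; b < ids.size(); ++b) {
                const Box3 &A = objs[ids[a]], &B = objs[ids[b]];
                if (!A.overlaps(B)) continue;
                VoxelKey owner(
                    (int)std::floor(std::max(A.lo[0], B.lo[0]) / voxelSize),
                    (int)std::floor(std::max(A.lo[1], B.lo[1]) / voxelSize),
                    (int)std::floor(std::max(A.lo[2], B.lo[2]) / voxelSize));
                if (owner == cell.first)
                    hits.push_back({ids[a], ids[b]});
            }
    }
    return hits;
}

int main() {
    std::vector<Box3> objs = {{{0, 0, 0}, {1, 1, 1}},
                              {{0.5, 0.5, 0.5}, {2, 2, 2}},
                              {{5, 5, 5}, {6, 6, 6}}};
    for (const auto &p : findCollisions(objs, 1.5))
        std::printf("objects %d and %d collide\n", p.first, p.second);
    return 0;
}
```

The low-corner ownership test is what lets overlapping voxels report each colliding pair exactly once, which matters once the voxels live on different processors and cannot cheaply deduplicate their results.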
Slide 12: Serial Scaling

Slide 13: Parallel Scaled Problem

Slide 14: Remeshing
- As the solids burn away, the domain changes dramatically: the fluids mesh expands and the solids mesh contracts
- This distorts the elements of the mesh
- We need to be able to fix the deformed mesh

Slide 15: The initial mesh consists of small, good-quality elements.

Slides 16-23: As the solids burn away, the fluids domain becomes more and more stretched (sequence of frames).

Slide 24: Compared to the original, the elements are much worse!

Slide 25: Remeshing: Solution Transfer
- Can use existing (off-the-shelf) tools to remesh our domain
- Must also handle solution data: density, velocity, and displacement fields; gas pressure and temperature; boundary conditions!
- Accurate transfer of solution data is a difficult mathematical problem
- Solution data (and the mesh) are scattered across processors

Slide 26: Remeshing and Solution Transfer
- FEM framework: reassemble a serial boundary mesh
- Call serial remeshing tools: YAMS, TetMesh
- FEM framework: partition the new serial mesh
- Collision library: match up the old and new volume meshes
- Transfer library: conservative, accurate volume-to-volume data transfer using the common-refinement method

Slide 27: Remeshing: Before. Deformation has distorted the elements.

Slide 28: Remeshing: Before (Closeup). Note the stretched elements on the boundary.

Slide 29: Remeshing: After (Closeup). After remeshing, element size and shape are much better.

Slide 30: Remeshing: After. Remeshing restores element size and quality.

Slide 31: Gas Velocity: Deformed Mesh

Slide 32: Gas Velocity: New Mesh

Slide 33: Temperature: Deformed Mesh

Slide 34: Temperature: New Mesh

Slide 35: Remeshing: Continue the Simulation
- In theory: just restart the simulation using the new mesh, solution, and boundaries!
- In practice: not so easy with the real code (genx)
  - Lots of little pieces: fluids, solids, combustion, interface, ...
  - Each has its own set of needs and input file formats!
- Prototype: treat remeshing like a restart
  - The remeshing system writes the mesh, solution, and boundaries to ordinary restart files
  - The integrated code thinks this is an ordinary restart
  - Few changes are needed inside genx

Slide 36: Remeshed, after solution transfer.

Slides 37-39: Can now continue the simulation using the new mesh (prototype).

Slide 40: The remeshed simulation resolves the boundary better than the old one!

Slide 41: Remeshing: Future Work
- Automatically decide when to remesh (currently manual)
- Remesh the solids domain (currently only Rocflu is supported)
- Remesh during a real parallel run (currently only a serial data format is supported)
- Remesh without using restart files
- Remesh only part of the domain, e.g. a burning crack (currently the entire domain is remeshed at once)
- Remesh without using serial tools (currently YAMS and TetMesh are completely serial)

Slide 42: Remeshing and Solution Transfer
- FEM framework: reassemble a serial boundary mesh
- Call serial remeshing tools: YAMS, TetMesh
- FEM framework: partition the new serial mesh
- Collision library: match up the old and new volume meshes
- Transfer library: conservative, accurate volume-to-volume data transfer using the common-refinement method

Slide 43: Parallel Mesh Refinement
- To refine, split the longest edge; but if the split neighbor has a longer edge, split that edge first
- Refinement propagates across the mesh, but preserves mesh quality
- Initial 2D parallel implementation built on Charm++; a 3D version, with Delaunay flipping, is in progress
- Interfaces with the FEM Framework

Slide 44: Conclusion
- Charm's advanced runtime is coming into wider use in CSAR
- Various features are applicable to a variety of domains: AMPI, the FEM Framework, Collision Detection
- Charm brings these projects faster development and better performance
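The longest-edge rule on slide 43 can be sketched in serial 2D C++. This is only an illustration, not the talk's parallel implementation (which is built on Charm++ and tied to the FEM Framework); Mesh, refine, and the other names are invented here. A triangle is bisected across its longest edge, but if the neighbor across that edge has a still longer edge, the neighbor is refined first, so refinement propagates while element shape is preserved.

```cpp
// Serial 2D sketch of longest-edge bisection with propagation (slide 43).
// All names are invented for illustration.
#include <array>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

struct Mesh {
    std::vector<Pt> nodes;
    std::vector<std::array<int, 3>> tris;   // node indices, CCW order
};

static double len2(const Pt &a, const Pt &b) {
    return (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y);
}

// Local index (0..2) of triangle t's longest edge; edge e joins the nodes
// at positions (e+1)%3 and (e+2)%3, i.e. the edge opposite node e.
static int longestEdge(const Mesh &m, int t) {
    int best = 0;
    double bestLen = -1;
    for (int e = 0; e < 3; ++e) {
        double l = len2(m.nodes[m.tris[t][(e + 1) % 3]],
                        m.nodes[m.tris[t][(e + 2) % 3]]);
        if (l > bestLen) { bestLen = l; best = e; }
    }
    return best;
}

// Triangle (other than t) sharing nodes a and b, or -1 on the boundary.
static int neighborAcross(const Mesh &m, int t, int a, int b) {
    for (int u = 0; u < (int)m.tris.size(); ++u) {
        if (u == t) continue;
        int shared = 0;
        for (int k = 0; k < 3; ++k)
            if (m.tris[u][k] == a || m.tris[u][k] == b) ++shared;
        if (shared == 2) return u;
    }
    return -1;
}

// Bisect triangle t across its longest edge (a,b). If the neighbor across
// (a,b) has a longer edge of its own, refine the neighbor first; this is
// how refinement propagates across the mesh while preserving quality.
void refine(Mesh &m, int t) {
    int e = longestEdge(m, t);
    int a = m.tris[t][(e + 1) % 3], b = m.tris[t][(e + 2) % 3];
    int opp = m.tris[t][e];

    int nbr = neighborAcross(m, t, a, b);
    while (nbr >= 0) {
        int ne = longestEdge(m, nbr);
        int na = m.tris[nbr][(ne + 1) % 3], nb = m.tris[nbr][(ne + 2) % 3];
        if ((na == a && nb == b) || (na == b && nb == a)) break; // compatible
        refine(m, nbr);                     // neighbor's edge is longer: split it first
        nbr = neighborAcross(m, t, a, b);   // neighbor changed; re-find it
    }

    // Insert the midpoint of (a,b) and split t into two triangles.
    int mid = (int)m.nodes.size();
    m.nodes.push_back({0.5 * (m.nodes[a].x + m.nodes[b].x),
                       0.5 * (m.nodes[a].y + m.nodes[b].y)});
    m.tris[t] = {opp, a, mid};
    m.tris.push_back({opp, mid, b});

    // Split the neighbor across the same edge too, keeping the mesh conforming.
    if (nbr >= 0) {
        int nopp = 0;
        for (int k = 0; k < 3; ++k)
            if (m.tris[nbr][k] != a && m.tris[nbr][k] != b) nopp = m.tris[nbr][k];
        m.tris[nbr] = {nopp, b, mid};       // neighbor sees the shared edge as (b,a)
        m.tris.push_back({nopp, mid, a});
    }
}

int main() {
    Mesh m;
    m.nodes = {{0, 0}, {2, 0}, {0, 1}, {2, 1}};        // unit-height strip
    m.tris  = {{0, 1, 2}, {1, 3, 2}};                  // two CCW triangles
    refine(m, 0);                                      // bisect the first triangle
    std::printf("%zu triangles after refinement\n", m.tris.size());
    return 0;
}
```

Splitting the neighbor across the same edge is what keeps the refined mesh conforming; in the parallel setting described in the talk, that propagation step has to cross mesh-chunk boundaries.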