Users:General FEM Analysis/Parallel Computing Reference
Requirements for parallel computing
Although Carat++ is designed so that parallelization is completely integrated into the code, there are some additional software requirements for compiling a massively parallel version of Carat++. Additionally required are (a quick toolchain check is sketched after the list):
- an installation of OpenMPI (version 1.3.3 is recommended) [1]
- a parallel build of Trilinos 10.0 (or higher) [2]
- the parallel equation solver SuperLU_distributed (version 2.3 is recommended) [3]
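To verify that the MPI toolchain is available before compiling, one can query the compiler wrapper and the runtime (standard OpenMPI commands; the printed versions depend on your installation):

mpic++ --version
mpirun --version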
How to compile a parallel version of Carat++
If all requirements mentioned above are fulfilled, the compilation of a parallel version of Carat++ is quite simple (a sketch of the corresponding makefile.in lines follows the list). All you have to do is to
- open your makefile.in
- replace the serial compiler by the parallel one (e.g. mpic++ instead of c++)
- link against the parallel Trilinos libraries
- link against the SuperLU_distributed library
- run "make"
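As a sketch only: the variable names, paths, and library names below are assumptions, not the actual contents of the Carat++ makefile.in, which depend on your installation.

# makefile.in (excerpt) -- a sketch; names and paths are examples only
# parallel compiler instead of the serial c++
CC   = mpic++
# parallel Trilinos and SuperLU_distributed libraries
LIBS = -L/path/to/parallel-trilinos/lib -lepetra -laztecoo \
       -L/path/to/SuperLU_distributed/lib -lsuperlu_dist

After these changes, running "make" builds the parallel executable.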
How to start a parallel computation
working on one machine
Starting a parallel session of Carat++ is very similar to starting a serial one. If all processes are meant to run on the machine where the computation is started (a multicore machine), just type
mpirun -np <number_of_processes> carat20.exe Input.dat
instead of carat20.exe Input.dat in the serial case.
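For example, a computation on four cores of the local machine would be started by

mpirun -np 4 carat20.exe Input.dat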
using a host file
If the computation is meant to use different machines connected via a network, the IPs of all machines have to be defined within the program call. This is done via a hostfile, which contains the IP addresses of all machines involved in the computation and assigns a number of processes (slots) to each machine. The following example shows a host file addressing four machines and assigning between one and four slots to each machine.
MyHostfile:
128.187.141.106 slots=1
128.187.141.107 slots=2
128.187.141.108 slots=3
128.187.141.109 slots=4
Now the computation can be started from any of these four machines by typing
mpirun --hostfile MyHostfile carat20.exe Input.dat
assuming that the executable and the input file are available on all machines via the same paths.
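As a usage note (OpenMPI behaviour, not specific to Carat++): the hostfile above provides ten slots in total (1+2+3+4), and mpirun fills them in the listed order. To request a specific number of processes explicitly, the -np option can be combined with the hostfile:

mpirun -np 10 --hostfile MyHostfile carat20.exe Input.dat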
What is tested in parallel
A list of program features that were already successfully used in parallel:
Elements
- SHELL8, quadrilaterals and triangles
- Solid elements, linear and quadratic ansatz space
Analysis
- Statics, linear and non-linear
- Eigenfrequency analysis
Loads
- point loads
- area loads (pressure, snow)
- volume loads (dead weight)
Optimization
Algorithms
- Steepest descent
- Conjugate gradients
- ALM
Filtering
Response functions
- strain energy, linear and non-linear
- eigenvalues, single and Kreisselmeier-Steinhauser formulation
- mass
- displacement
What does not work in parallel (so far)
Although most algorithms work in parallel, there are still some places in the code where parallelization is not possible so far. Known issues are
- Dirichlet condition MPC-COUPLING
- Interpolation of temperature loads
Special hints for input files
Despite all efforts to provide the same input format for serial and parallel computing, there are some issues where the parallel code demands a bit more discipline and accuracy from the user. Here are some hints that might be helpful and keep you from spending your time hunting strange errors:
- The include mechanism can be used in parallel, but all boundary conditions and load cases have to be declared in the same file. This is due to the fact that boundary conditions and load cases are process-individual data, and all of this information is needed in one block to do the separation properly.
- Inside the load case definition, Dirichlet conditions should be declared first. A different order has sometimes led to errors. Don't ask me why ...
References