Users:General FEM Analysis/Parallel Computing Reference

Requirements for parallel computing

Although Carat++ is designed so that the parallelization is completely integrated into the code, there are some additional software requirements for compiling a massively parallel version of Carat++. Additionally required are:

  • an MPI implementation, e.g. Open MPI [1]
  • a parallel build of the Trilinos libraries [2]
  • the distributed-memory version of SuperLU (SuperLU_DIST) [3]
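
Whether a working MPI installation is available on the machine can be checked quickly, for example by querying the Open MPI wrapper compiler and launcher (assuming they are in the PATH):

 mpic++ --version
 mpirun --version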

How to compile a parallel version of Carat++

If all requirements mentioned above are fulfilled, the compilation of a parallel version of Carat++ is quite simple (see the sketch after the following list). All you have to do is to

  • open your makefile.in
  • replace the serial compiler by the parallel one (e.g. mpic++ instead of c++)
  • link against the parallel trilinos libraries
  • link against the SuperLU_distributed library
  • run "make"
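
As an illustration, the relevant part of makefile.in could look like the following sketch. The paths and the concrete library names (TRILINOS_DIR, SUPERLU_DIST_DIR, -lepetra, -laztecoo, -lsuperlu_dist) are only placeholders and have to be adapted to your local installation and to the actual Carat++ build configuration:

 # use the parallel compiler wrapper instead of the serial compiler
 CXX = mpic++
 
 # paths to the parallel Trilinos build and to SuperLU_DIST (placeholders)
 TRILINOS_DIR     = /opt/trilinos_parallel
 SUPERLU_DIST_DIR = /opt/superlu_dist
 
 INCLUDES = -I$(TRILINOS_DIR)/include -I$(SUPERLU_DIST_DIR)/include
 
 # link against the parallel Trilinos libraries and SuperLU_DIST
 LIBS = -L$(TRILINOS_DIR)/lib -lepetra -laztecoo \
        -L$(SUPERLU_DIST_DIR)/lib -lsuperlu_dist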

How to start a parallel computation

working on one machine

Starting a parallel session of Carat++ is very similar to starting a serial one. If all processes are meant to run on the machine on which the computation is started (a multicore machine), just type

 mpirun -np <number_of_processes>  carat20.exe Input.dat

instead of carat20.exe Input.dat in the serial case.
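
For example, a run with four processes on the local multicore machine would be started with:

 mpirun -np 4 carat20.exe Input.dat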

using a host file

If the computation is meant to use different machines connected via a network, the IPs of all machines have to be defined within the program call. This is done via a hostfile which contains the IP addresses of all machines involved in the computation and assigns a number of processes to each machine. The following example shows a hostfile addressing four machines, assigning between one and four processes to the individual machines.

 MyHostfile:
  
 128.187.141.106  slots=1
 128.187.141.107  slots=2
 128.187.141.108  slots=3
 128.187.141.109  slots=4

Now the computation can be started from any of these four machines by typing

 mpirun --hostfile MyHostfile carat20.exe Input.dat

assuming that on all machines the executable and the input file are available under the same paths.
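
With Open MPI the number of processes can additionally be limited via -np, so that only a part of the slots defined in the hostfile is used. The following call, for example, starts six processes distributed over the four machines:

 mpirun -np 6 --hostfile MyHostfile carat20.exe Input.dat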

What is tested in parallel

A list of program features that have already been used successfully in parallel:

Elements

  • SHELL8, quadrilaterals and triangles
  • Solid elements, linear and quadratic ansatz space

Analysis

  • Statics, linear and non-linear
  • Eigenfrequency analysis

Loads

  • point loads
  • area loads (pressure, snow)
  • volume loads (dead weight)

Optimization

Algorithms

  • Steepest descent
  • Conjugate gradients
  • ALM

Filtering
Response functions

  • strain energy, linear and non-linear
  • eigenvalues, single and Kreisselmeier-Steinhauser formulation
  • mass
  • displacement

What does not work in parallel (so far)

Although most algorithms work in parallel, there are still some points in the code where parallelization is not yet possible. Known issues are:

  • Dirichlet condition MPC-COUPLING
  • Interpolation of temperature loads

Special hints for input files

Despite all efforts to provide the same input format for serial and parallel computing, there are some issues where the parallel code demands a bit more discipline and accuracy from the user. Here are some hints that might be helpful and keep you from spending your time chasing strange errors:

  • The include mechanism can be used in parallel, but all boundary conditions and load cases have to be declared in the same file. This is due to the fact that boundary conditions and load cases are process-individual data, and all of this information is needed in one block to do the separation properly.
  • Inside the load case definition, Dirichlet conditions should be declared first. A different order has sometimes led to errors. Don't ask me why ...

References

  1. http://www.open-mpi.org
  2. http://trilinos.sandia.gov
  3. http://crd.lbl.gov/~xiaoye/SuperLU



