Basic notions. What parallel computing is and why it matters. Main application domains. Parallel computing paradigms: shared memory and distributed memory. Main types of supercomputer architectures. Hardware and software of a supercomputer. Current trends in hardware and software.
Parallelism at the software level: threads, MPI, OpenMP. Measuring algorithm efficiency: speedup and parallel efficiency (Amdahl's law).
Examples of programming in OpenMP. Parallel loops, collective operations, barriers. Private and shared variables.
MPI. Techniques for parallelizing algorithms. Data decomposition and domain decomposition. Master-slave model for data distribution. MPI communication methods for sharing data. Communicators and communication topologies.
Applications: i) Linear Algebra, ii) solution of differential equations (Poisson equation), iii) Fourier transforms.
Reference bibliography
B. Chapman, G. Jost, R. van der Pas, Using OpenMP: Portable Shared Memory Parallel Programming, MIT Press, 2007.
W. Gropp, E. Lusk, A. Skjellum, Using MPI: Portable Parallel Programming with the Message-Passing Interface, second edition, MIT Press, 1999.
P. Pacheco, Parallel Programming with MPI, Morgan Kaufmann Publishers, 1997.