Victor Eijkhout


Texas Advanced Computing Center



The dominant parallel programming systems, MPI and OpenMP, are now 20 years old. Computer architectures have grown considerably more complicated in that time, and these systems have been refined accordingly, making them ever harder to use. Perhaps it is time to take a step back and reconsider the nature of parallel programming: is all this complexity necessary at the user level?
Our parallel programming systems have designs inspired by underlying hardware mechanisms, which introduces considerations into the parallel program that are extraneous to the algorithm being implemented. This raises the question of what the minimal specification of an algorithm is that still allows for efficient parallel execution. Past experience has shown that a parallelizing compiler is not the right approach. A more interesting approach, writing a sequential program in terms of distributed objects, was tried in High Performance Fortran, and failed there.
We argue that this 'sequential semantics' approach can work, provided the programmer expresses the algorithm in terms of the right abstractions. We motivate and define these abstractions and show how the IMP (Integrative Model for Parallelism) system implements them, giving essentially the performance behaviour of a hand-written code. To the programmer, a Finite Element program in IMP has the complexity of a sequential code, with no parallel communication explicitly specified. We present results obtained so far and future directions of research.


Victor Eijkhout has a degree in numerical mathematics from the Radboud University of Nijmegen in the Netherlands. His initial interest in iterative solution methods for linear systems gradually led him to parallel programming and his contributions to the PETSc library. Additionally, he has done research in recommender systems for iterative solver algorithms. In recent times, his interest has shifted to the axiomatic derivation of linear solver algorithms and a theoretical approach to parallel programming. He is currently a research scientist at the Texas Advanced Computing Center. Recently he has published textbooks on High Performance Scientific Computing and on Parallel Programming in MPI and OpenMP.