Opinion - The rise of parallelism (and other computing challenges)
In the past, parallelism was just one of many solutions available to manufacturers seeking to offer computer architectures with attractive peak performance.
Today, parallelism is no longer an “option”: manufacturers must make extensive use of it in order to offer attractive solutions.
Parallelism is no longer confined to the field of high-performance or high-speed computing. As a consequence, it is almost everywhere: parallelism is used in PCs, cellular phones and much more. The extensive use of parallelism has transformed “More than Moore” into reality, contributing to the sustained amazement of modern users of computing devices.
The double edge of the parallel sword
Fields such as computer science and numerical computing have traditionally faced a number of important challenges; however, the advent of grid computing and the massive use of parallelism have now raised many more important questions.
Will the convergence of parallel and distributed computing change the very nature of computer science and numerical computing? Will communication interfaces such as MPI, and libraries implementing them such as Open MPI, continue to allow programmers to achieve high performance? Do the numerical methods presently in use suit massive parallelism and the presence of faults in the systems? These are just a few of the important questions that have arisen with the advent of parallel and distributed computing.
To ensure efficient use of new parallel and distributed architectures, new concepts related to communication, synchronization, fault tolerance and self-organization must emerge and be widely adopted.
Innovation through evolution
Manufacturers agree that the architecture of future supercomputers will be massively parallel, and as a consequence, these machines will need to be fault tolerant and able to cope with dynamic behavior. A degree of self-organization will also be needed, since it will not always be possible to control such very large systems efficiently from the outside.
Parallel and distributed algorithms will also have to cope increasingly with the asynchronous nature of communication networks and with the presence of faults in the system.
Further, concepts such as asynchronous algorithms, in which each process can run at its own pace according to its load and performance, present many similarities with the concept of wait-free processes in distributed computing, yet they have not attained the popularity they deserve.
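The idea behind asynchronous iterations can be sketched in a few lines. The example below is a minimal illustration, not any specific published method: it solves the contracting fixed point x_i = 0.5 · mean(x) + 1 (whose solution is x_i = 2 for every component), and at each step a single, randomly chosen component updates using whatever values are currently stored. There is no barrier forcing all components to advance together; the contraction property alone guarantees convergence as long as every component keeps being updated. The function name, problem and parameters are all hypothetical choices made for this sketch.

```python
import random

def async_fixed_point(n=4, steps=800, seed=0):
    """Asynchronous iteration for the contraction x_i = 0.5*mean(x) + b_i.

    At each step one randomly chosen component updates from the *current*,
    possibly uneven state -- mimicking processes that run at their own pace
    with no global synchronization point.
    """
    rng = random.Random(seed)
    x = [0.0] * n
    b = [1.0] * n                          # fixed point: x_i = 2.0 for all i
    for _ in range(steps):
        i = rng.randrange(n)               # this "process" happens to fire next
        x[i] = 0.5 * (sum(x) / n) + b[i]   # reads whatever values are there now
    return x

result = async_fixed_point()
```

Despite the deliberately irregular update order, every component of `result` ends up close to the fixed point 2.0, which is the essential appeal of such schemes: correctness does not hinge on processes advancing in lockstep.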
Ideas such as these are gaining more and more attention in many fields, particularly among computer scientists working on communication libraries such as Open MPI. Thus many more questions are raised: where will parallelism lead us and along which roads will we travel to get there? All of these questions must be answered and new solutions found if we are to continue to drive the evolution of computing.
These questions and concepts will be discussed at the 16th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP 2008), which will be held from 13 to 15 February 2008 in Toulouse, France. Eighty-three papers from 22 countries in Asia, Europe, North America and South America have been selected by the Program Committee.
In addition to the conference main track, Special Sessions will address hot topics such as grids, parallel and distributed bioinformatics, virtualization in distributed systems, security in networked and distributed systems, modeling, simulation and optimization of peer-to-peer environments, and next-generation web computing. Computer manufacturers will also present their architectures, processors and strategies.
- Didier El Baz, Head of the Distributed Computing and Asynchronism team, LAAS-CNRS