Feature - HPC adds a spark to EDF’s computing capacities
Jean-Yves Berthou is responsible for IT in the research and development division of Électricité de France (EDF), one of Europe's major energy companies. EDF's 2,000 researchers use computing across a range of projects, including minimizing CO2 emissions, developing alternatives to fossil fuels, and ensuring the security of electricity grids. Here, he describes the use and impact of high performance computing (HPC) at the company.
Why is numerical simulation so important for EDF?
Berthou: In many cases physical experiments and testing are not possible, for example in the simulation of fuel assemblies and crack propagation in nuclear reactors, or in optimizing electricity production and trading. Even when experimentation is possible, numerical simulation can go beyond what is physically feasible. However, experimentation remains an indispensable tool.
In what application areas do you use HPC?
Berthou: It has been used for a long time in operational matters, such as optimizing day-to-day production, or choosing the safest and most effective configurations for nuclear refuelling. However, most of the advances towards higher levels of performance have been driven by the need to explain complex physical phenomena behind maintenance issues better; to assess the impact of new vendor technology; and to anticipate changes in operating or regulatory conditions.
For your business does cloud computing offer a viable alternative to owning and managing your own computing systems?
Berthou: EDF will not use third-party resources for its production requirements in the short term. It is collaborating with CEA (the French Atomic Energy Commission) on distributed computing and, in particular, on the pooling of resources across an organization. EDF plans to combine its own resources to create an in-house "cloud" of virtual, pooled resources. It does not plan to use current external cloud offerings, because these are not yet mature enough for its requirements, which are determined by performance, portability and virtualization considerations.
What are the challenges you see in the development of your HPC capability (e.g. scalability of applications, power consumption, cost of systems)?
Berthou: The major challenges relate to portability of codes across different systems and scalability to 10,000 cores and beyond. The requirement is to port a complete simulation environment, comprising coupled multiphysics codes, to systems with large numbers of cores. Power consumption for such large systems is a major issue because it has a significant bearing on the total cost of ownership.
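The link Berthou draws between power consumption and total cost of ownership can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only: the power draw, electricity price and PUE figures are assumptions, not EDF data.

```python
# Rough estimate of the electricity bill for a large HPC system,
# one of the recurring components of total cost of ownership (TCO).
# All figures are illustrative assumptions, not EDF data.

def annual_power_cost(power_mw, price_eur_per_mwh=60.0, pue=1.5):
    """Yearly electricity cost for a system drawing `power_mw` megawatts.

    pue: power usage effectiveness, i.e. total facility power divided
         by IT power (accounts for cooling and distribution overhead).
    """
    hours_per_year = 24 * 365
    return power_mw * pue * hours_per_year * price_eur_per_mwh

# A hypothetical petascale system drawing 2 MW of IT power:
cost = annual_power_cost(2.0)  # about 1.58 million euros per year
```

Over a typical five-year system lifetime, a recurring cost of this size can rival the purchase price of the hardware itself, which is why power efficiency weighs so heavily in procurement decisions.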
Are new languages and programming paradigms needed particularly as we move toward exascale systems?
Berthou: This is an important issue. New languages and libraries are needed for large systems. For example, EDF’s structural mechanics codes represent an investment of over €100 million. This investment needs to be preserved as the codes are moved to new machines, which places constraints on development methodology and on the adoption of new languages. The expertise of staff also represents a major investment, because it may take several years for a programmer to become proficient and productive with the various tools needed. Together, these factors create inertia and constitute a real barrier to change. Over the last 20 years, EDF has increasingly developed object-oriented codes, and this approach cannot be changed abruptly.
What HPC systems would you like to have available in a few years' time?
Berthou: The market continues to demand an exponential increase in computing power. This will take us towards systems with performances of tens and hundreds of petaflops. These systems will need to run existing codes and be programmable using existing tools and languages.
—Emilie Tanke, for iSGTW. An earlier version of this article appeared in Planet HPC.