
Feature - SuperComputing 2010 comes to a close

Last week 10,000 people from around the world converged on the city of New Orleans to attend SuperComputing 2010.

Hot topics included (but were hardly limited to) climate change modeling, graphics processing units, and the rise of data-intensive science.

Climate change modeling

Keynotes, panels, and technical papers all sought to address the challenges facing climate modeling in the coming years. Some speakers suggested that exascale supercomputers enabled by graphics processing units will be necessary to run future climate models. But greater computational power on its own is not enough. A model that accurately described the Earth’s climate would provide increasingly accurate results when run at increasingly high resolution, and would draw on increasingly large quantities of computational power in the process, since computational cost rises steeply with resolution. But as the panelists in the “Pushing the Frontiers of Climate and Weather Models” session pointed out, each existing model is tuned for a specific resolution and can actually become less accurate if it is simply run at a higher one. Before the community can take advantage of the higher resolutions that greater computational resources make possible, climate modelers will have to develop models that remain accurate at those resolutions.
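
To give a rough sense of why cost climbs so quickly with resolution, consider a back-of-the-envelope scaling argument (an illustration on our part, not drawn from the panel, and assuming an explicit time-stepping scheme in which the stability condition ties the time step to the horizontal grid spacing):

\[
\text{cost} \;\propto\; \frac{N_x \, N_y}{\Delta t}, \qquad \Delta t \propto \Delta x
\quad\Longrightarrow\quad \text{halving } \Delta x \text{ multiplies the cost by roughly } 2 \times 2 \times 2 = 8
\]

Here N_x and N_y are the number of grid points in the two horizontal directions, each of which doubles when the grid spacing is halved, while the shrinking time step contributes the third factor of two.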

Graphics Processing Units

There remains a great deal of hype around the little chips known as graphics processing units, or GPUs for short. Proponents argue that GPUs are the only way to reach the exascale at a reasonable cost in both money and energy. But during the panel “Toward Exascale Computing with Heterogeneous Architectures,” NERSC director Kathy Yelick (standing in for John Shalf) pointed out several factors that are often overlooked. First, in some cases the benefits of GPUs are overstated because they are measured against unoptimized CPU code. After extensive benchmarking, Yelick and her colleagues found that realistic speed-ups range from a factor of 2.2 for memory-intensive code to 6.7 for compute-intensive code. Second, teaching developers and researchers a new programming paradigm such as CUDA, and translating existing applications into it, is no small endeavor; a simple example of what that shift looks like appears below. Anyone who believes that a better programming model will come along soon may conclude that they are better off waiting to make the transition then, skipping CUDA entirely.
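
For readers unfamiliar with what the shift entails, the sketch below is a generic CUDA vector addition (our own minimal illustration, not code from any application discussed at the panel). Even for a trivial loop, the programmer takes on explicit device memory allocation, host-to-device copies, and thread indexing, which hints at why porting a large scientific code is a substantial undertaking.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Minimal CUDA kernel: each thread handles one element of the arrays.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host allocations and initialization.
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Device allocations and explicit host-to-device copies --
        // bookkeeping the CUDA model adds on top of the original loop.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

        // Copy the result back and spot-check one value.
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", h_c[0]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

On a CPU the same operation is a three-line loop; the rest of the code above is the overhead of managing a separate device, which is exactly the kind of retraining and rewriting effort the panel weighed against the promised speed-ups.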
