Around 20% of the Earth’s freshwater is found in Brazil’s Amazon rainforest. But it is not a static system, and understanding its dynamics – especially how much rainfall occurs over the basin – is a difficult problem. It is even tougher to predict how climate change will affect it.
Current climate models have uncertainties and errors, and Diego Carvalho, a User Community Support Manager for the GISELA grid in Rio de Janeiro, Brazil, thinks this has led people to underestimate the amount of rainfall across the Amazon. This directly impacts conservation efforts, which are crucial to preserving freshwater for healthy ecosystems as well as for agriculture and human consumption.
Carvalho strives to enable climate scientists to use the grid to build more accurate models and improve their conservation efforts. He is working with climate scientists on a modified Weather Research and Forecasting (WRF) model that can analyze atmospheric patterns at resolutions ranging from a few meters up to thousands of kilometers across, as he explained at a conference on the Role of e-Infrastructures for Climate Change Research in Trieste, Italy, last week. Results from this combination of the WRF model and the grid are compared against real-world charts for accuracy, helping scientists evaluate, and feel confident about, the status of freshwater in the Amazon.
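That comparison step can be sketched in a few lines. The Python below is not GISELA’s actual pipeline – the rainfall numbers and the function are invented for illustration – but it shows the kind of bias and error statistics commonly used to check model output against observations:

```python
import numpy as np

def verify_rainfall(modeled, observed):
    """Compare modeled rainfall against observations.

    Returns the mean bias and root-mean-square error, two
    standard skill measures for precipitation forecasts.
    """
    modeled = np.asarray(modeled, dtype=float)
    observed = np.asarray(observed, dtype=float)
    bias = np.mean(modeled - observed)                   # systematic over/underestimate
    rmse = np.sqrt(np.mean((modeled - observed) ** 2))   # overall error magnitude
    return bias, rmse

# Hypothetical monthly rainfall totals (mm) for one Amazon basin grid cell
modeled  = [310, 280, 260, 240, 180, 120, 90, 110, 150, 210, 260, 300]
observed = [330, 300, 270, 250, 200, 140, 100, 120, 160, 230, 280, 320]

bias, rmse = verify_rainfall(modeled, observed)
print(f"bias = {bias:+.1f} mm/month, rmse = {rmse:.1f} mm/month")
```

A persistently negative bias in a comparison like this would be exactly the kind of underestimate of Amazon rainfall that Carvalho describes.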
Another recent, successful example of porting a climate-related model to the grid is the Cuban AERMOD project. Researchers modified a US air-quality model and optimized it to run on the GISELA grid. Cuban scientists now use it to predict the spread of pollutants from industrial factories.
Climate science requires vast computational resources because only infrastructures made up of thousands of computer cores can efficiently process the complex interactions between the atmosphere, the ocean, and other weather systems. The data itself is in the petabyte (10¹⁵ bytes) and even exabyte (10¹⁸ bytes) range. In comparison, the total amount of data stored by humans today is estimated at 295 exabytes. The LHC at CERN is the only project that comes close to this level of computational processing, using a computing grid – the Worldwide LHC Computing Grid (WLCG) – made up of 344,401 computer cores.
First step
However, using the grid can be challenging, especially for those without prior knowledge of grid interfaces and processes. But if users overcome this hurdle, they can take advantage of the most powerful computing resources in the world, said Carvalho.
This is because, unlike desktops, computing clusters, or even supercomputers, grids are composed of dozens of computer clusters and perform heavy computations in parallel. Carvalho finds that in many cases complex algorithms not only run faster but also produce more accurate results on grids. “However, the problem is that many scientists [especially non-physicists] who need these resources are grid-illiterate,” said Carvalho.
That is where Carvalho comes in: he is the grid middle-man, coordinating and supporting research communities so that users get a ‘helping hand’ when using the grid for the first time.
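Much of what runs well on a grid is ‘embarrassingly parallel’: the same model is executed over many independent inputs, one job per input. Here is a minimal Python sketch of that pattern, using a local process pool as a stand-in for grid worker nodes (the function, regions, and resolutions are invented for illustration):

```python
from concurrent.futures import ProcessPoolExecutor

def run_simulation(params):
    """Stand-in for one independent model run on a grid worker node.

    A real grid job would launch a full simulation; here we just
    compute the number of grid cells implied by the resolution.
    """
    region, resolution_km = params
    cells = int(1000 / resolution_km) ** 2  # finer resolution -> more cells
    return region, resolution_km, cells

# Hypothetical parameter sweep: one job per (region, resolution) pair
jobs = [(region, res) for region in ("amazon-west", "amazon-east")
                      for res in (100, 50, 25)]

if __name__ == "__main__":
    # Every job is independent, so all of them can run at once
    with ProcessPoolExecutor() as pool:
        for region, res, cells in pool.map(run_simulation, jobs):
            print(f"{region} at {res} km: {cells} cells")
```

On a real grid, middleware would dispatch each entry in the sweep to a different cluster; the structure of the problem is the same, only the scale changes.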
Carvalho states that the bigger the computational problem, the better. “I ask researchers, what is the size of their problem? The more CPU hours they need, the better; that is what the grid is for.” (100 CPU hours, for example, could mean one CPU working for 100 hours or 100 CPUs working for one hour.) He hopes that more researchers, especially climate scientists who require vast computational resources, will continue to use grids such as GISELA. And as climate models improve in resolution, the computing power they need multiplies. “Every time you double resolution in climate models, computing power increases by a factor of 10,” said Filippo Giorgi, head of Earth System Physics at the International Centre for Theoretical Physics (ICTP).
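Giorgi’s rule of thumb follows from a simple scaling argument: doubling horizontal resolution doubles the number of grid points along each of two axes, and numerical stability typically requires halving the time step too, so cost grows roughly with the cube of the resolution factor – about 8×, close to 10 once extra vertical levels and output are counted. A back-of-the-envelope sketch (the cubic rule is a standard approximation, not a GISELA figure):

```python
def relative_cost(resolution_factor):
    """Estimate how compute cost grows when horizontal resolution
    improves by `resolution_factor` (e.g. 2 means twice as fine).

    Points double along each of two horizontal axes, and the time
    step shrinks proportionally (a stability requirement), so cost
    scales roughly with the cube of the resolution factor.
    """
    return resolution_factor ** 3

for factor in (2, 4, 8):
    print(f"{factor}x finer resolution -> ~{relative_cost(factor)}x the compute")
```

At 8× finer resolution, the estimated cost is already around 500 times higher – which is why each generation of climate models pushes toward grid-scale infrastructures.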
However, many researchers believe climate science is better suited to supercomputers, where the latency of communication between many data points is shorter. “It’s difficult to distribute a climate model over several computers. You need to have efficient exchange between atmospheric and oceanic data. It doesn’t do well on a distributed infrastructure,” said Sylvie Joussaume, director of the National Institute of Sciences of the Universe, France.
Carvalho hopes this will build a case for long-term investment by Latin American governments in e-infrastructures. It is a virtuous circle: if politicians see that society benefits from scientific research on grids, they are more likely to give their support. And surely there is no better benefit than preserving our freshwater.