
New model revolutionizes tornado prediction

Tornadoes, among the most violent of storms, can take lives and destroy neighborhoods in seconds. Every state in the US is at some risk. With southern states in the midst of peak tornado season (March through May) and northern states about to experience their peak (late spring through early summer), the tornado modeling and simulation research of Amy McGovern of the University of Oklahoma (OU), US, is especially timely.

A simulated radar image of a storm produced by CM1. The hook-like appendage in the southwest corner is an indication of a developing vortex. Image courtesy McGovern and Dahl.

McGovern’s goal is to revolutionize tornado prediction by explaining why some storms generate tornadoes and others do not. With that in mind, McGovern is developing advanced data analysis techniques to discover how twisters move in both space and time. “We hope that with more accurate predictions and improved lead time, more people will heed warnings and loss of life and property will be reduced,” she says.

To provide greater insights into the types of atmospheric data most important to storm prediction, McGovern has formed collaborative relationships with researchers from OU’s School of Meteorology, the National Severe Storms Laboratory, and the National Center for Atmospheric Research.

Getting useful, reliable data from simulations requires a stable model. McGovern says that finding and implementing the CM1 model as a replacement for the one they were using (ARPS, which was designed for coarser domains) has been the team’s biggest success. She and OU meteorology graduate student Brittany Dahl say that CM1 is reliable at the high resolutions required for tornadic simulations, and that switching to it has allowed them to focus more intensely on the science rather than on the workflow.

The storms that McGovern and her team create are based on conditions typically seen in an environment favorable for tornado development. “We start by using a bubble of warm air to perturb the atmosphere in the simulation, and then set the storm-building process in motion. From there, the equations and parameters we’ve set in the model guide the storm’s development,” explains Dahl. The team is also trying to make the storms more realistic by adding surface friction from grass and other ground cover as a variable; however, friction becomes difficult to model once a storm begins to rotate strongly.
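For readers who want a concrete picture of the warm-bubble initialization Dahl describes, the short Python sketch below adds a Gaussian temperature perturbation to a uniform background field. The grid spacing, bubble size, and amplitude are illustrative assumptions, not the settings used in the team’s CM1 runs.

```python
import numpy as np

# Illustrative grid: 20 km x 20 km x 10 km domain at 500 m spacing (assumed values)
dx = 500.0
x = np.arange(0.0, 20000.0, dx)
y = np.arange(0.0, 20000.0, dx)
z = np.arange(0.0, 10000.0, dx)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

# Uniform background potential temperature (K), kept constant for simplicity
theta = np.full(X.shape, 300.0)

# Warm bubble: Gaussian perturbation centered low in the domain
x0, y0, z0 = 10000.0, 10000.0, 1500.0   # bubble center (m), illustrative
radius = 3000.0                          # bubble radius scale (m), illustrative
amplitude = 2.0                          # peak warming (K), illustrative

r2 = ((X - x0) ** 2 + (Y - y0) ** 2 + (Z - z0) ** 2) / radius ** 2
theta_perturbed = theta + amplitude * np.exp(-r2)

print("max perturbation (K):", (theta_perturbed - theta).max())
```

In a real cloud model this perturbed field would then be handed to the dynamical equations, which take over the storm-building process Dahl describes.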

Work on Dahl’s master’s thesis will be the focus of the research project between now and December, and significant findings in storm prediction are expected. Dahl explains that the goal of the US National Weather Service – to implement an early-warning approach called Warn-on-Forecast – involves using computer models to generate forecasts. Running these models can provide enough confidence about whether a storm will produce a tornado for people to be warned 30 minutes to an hour ahead of time.

“Warn-on-Forecast can be implemented sooner if the forecast models are able to run at a lower resolution. The current goal is half a kilometer horizontal resolution,” Dahl says. Her thesis will explore whether patterns that indicate a tornado is forming or will form can be found at the coarser (and thus less computationally intensive) resolution. The investigation will also require creating high-resolution images for comparative analysis.

Time series of simulated radar images from the storm during a full-length CM1 model run. Video courtesy McGovern and Dahl.

“The idea is to look at 50-meter, high-resolution versions of storms to observe the strength and longevity of the tornadoes they produce, then compare that with our run of the same storms at 500-meter resolution and determine if any patterns emerge at the coarser resolution that are connected to what we see at the fine resolution,” she explains.
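A minimal way to illustrate the comparison Dahl describes is to block-average a high-resolution field onto a coarser grid and see how a rotation signal survives the averaging. The Python sketch below does that with a synthetic field; the array sizes and values are invented for illustration and do not come from the team’s simulations.

```python
import numpy as np

def coarsen(field, factor):
    """Block-average a 2D field by an integer factor (e.g., 50 m -> 500 m is factor 10)."""
    ny, nx = field.shape
    ny_c, nx_c = ny // factor, nx // factor
    trimmed = field[:ny_c * factor, :nx_c * factor]
    return trimmed.reshape(ny_c, factor, nx_c, factor).mean(axis=(1, 3))

# Synthetic "high-resolution" rotation field on a 50 m grid (values are made up)
rng = np.random.default_rng(0)
fine = rng.normal(0.0, 0.01, size=(400, 400))   # 20 km x 20 km at 50 m spacing
fine[190:210, 190:210] += 0.3                    # embedded strong-rotation patch

coarse = coarsen(fine, 10)                       # same field on a 500 m grid

print("peak rotation, 50 m grid :", fine.max())
print("peak rotation, 500 m grid:", coarse.max())
```

The peak is noticeably weakened at 500 m, which is the kind of resolution trade-off the thesis work sets out to quantify with real storm fields rather than synthetic ones.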

“The only way you can confirm is to make the high-resolution simulations,” McGovern explains. “Those are not feasible to do across the US right now on a Warn-on-Forecast basis. We are running a 112 by 112 kilometer domain. Now scale that up to the entire US and ask it to run in real time; we’re not quite there yet.”

Dahl is using both the Kraken and Nautilus supercomputers at the National Institute for Computational Sciences (NICS), a major partner in the US National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE). She runs multiple simulations per week using 6,000 processor cores and 10 compute service units (hours) on Kraken. “What’s nice about having Kraken and Nautilus connected is that it makes it a lot easier to transfer the data over to where you can use Nautilus to analyze it,” says Dahl.

“The biggest thing Nautilus does for us is process the data so that we can mine it; we’re trying to cut these terabytes of data down to something that’s usable metadata,” adds McGovern. “We’re able to reduce one week of computations down to 30 minutes on Nautilus, and post-processing time is reduced from several weeks to several hours.”
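As a rough illustration of the kind of reduction McGovern describes, the sketch below collapses full 3D model fields at one timestep into a few scalar descriptors that a later data-mining step could use. The variable names, array sizes, and the rotation threshold are placeholder assumptions, not part of the team’s actual post-processing pipeline.

```python
import numpy as np

def summarize_timestep(w, zeta, t):
    """Reduce full 3D fields (vertical velocity w, vertical vorticity zeta)
    at one model time to a handful of scalar descriptors."""
    return {
        "time_s": t,
        "max_updraft_ms": float(w.max()),
        "max_vorticity_s1": float(zeta.max()),
        # fraction of grid points with strong rotation; 0.03 1/s is an illustrative threshold
        "strong_rotation_fraction": float((zeta > 0.03).mean()),
    }

# Fake fields standing in for one timestep of model output
rng = np.random.default_rng(1)
w = rng.normal(0.0, 5.0, size=(50, 224, 224))      # vertical velocity (m/s)
zeta = rng.normal(0.0, 0.01, size=(50, 224, 224))  # vertical vorticity (1/s)

print(summarize_timestep(w, zeta, t=600.0))
```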

Data mining will involve a decision-tree technique the team developed to pose a series of spatiotemporal questions about the relationship of one three-dimensional object to another, and about how that relationship changes over time. “We can ask questions about fields within the object – so if you have an updraft region, which is a three-dimensional object, you can ask how the updraft is changing vertically or horizontally and look at the gradient to see how strongly it’s increasing,” McGovern says.
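The team’s relational decision-tree technique is not reproduced here, but the sketch below gives the general flavor: a few features describing an updraft object and how it changes over time are fed to a standard scikit-learn decision tree. The feature names, synthetic data, and labels are all hypothetical, invented only to make the example runnable.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-storm features describing an updraft object and its evolution:
# max updraft, vertical gradient of updraft, 10-minute change in low-level rotation
rng = np.random.default_rng(2)
n = 200
X = np.column_stack([
    rng.normal(30.0, 10.0, n),     # max updraft (m/s)
    rng.normal(0.01, 0.005, n),    # vertical gradient of updraft (1/s)
    rng.normal(0.0, 0.02, n),      # change in low-level rotation over 10 min (1/s)
])
# Fabricated labels purely for the example: call a storm "tornadic" if rotation spun up
y = (X[:, 2] > 0.01).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("training accuracy:", tree.score(X, y))
```

The appeal of a tree-based approach is that the learned splits read as the same kind of questions McGovern describes asking by hand, for example whether an updraft’s gradient exceeds some threshold during a given window.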

To read Scott Gibson’s complete article, visit the NICS website.
