
iSGTW Feature - BNL takes a cue from nuclear physics


Deep in the bowels of Brookhaven's RACF.

Image courtesy of BNL

Even though real data from the Large Hadron Collider (LHC) has yet to touch the Grid, scientists at Brookhaven National Laboratory’s ATLAS Tier-1 center have already gotten their hands dirty. Working on a daily basis with the Relativistic Heavy Ion Collider (RHIC) – a massive particle accelerator that smashes together beams of gold atoms to explore the complex world of nuclear physics – the almost 40 staff members at Brookhaven’s RHIC and ATLAS Computing Facility (RACF) are no strangers to storing and distributing large amounts of data.

“The benefit of an integrated facility like this is the ability to move around highly skilled and experienced IT experts from one project to the other, depending on what’s needed at the moment,” said RACF Director Michael Ernst. “There are many different flavors of physics computing, but the requirements for these two facilities are very similar to some extent. In both cases, the most important aspect is reliable and efficient storage.”

RACF Director Michael Ernst.

Image courtesy of BNL  

Unique in the USA

As the sole Tier 1 computing facility for ATLAS in the United States, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing, processing and distributing ATLAS experimental data among scientists across the country.

This mission is possible because of the ability to build upon and receive support from the Open Science Grid project, Ernst said. Yet, even after ramping up to 8 petabytes of accessible online storage – a capacity ten times greater than existed when ATLAS joined the RACF eight years ago – the computing center’s scientists still have plenty of testing and problem-solving to conduct before the LHC begins operations this fall.

“You can’t just put up a number of storage and computing boxes, turn them on, and have a stable operation at this scale,” Ernst said. “Ramping up so quickly presents a number of issues because what worked yesterday isn’t guaranteed to work today.”

To test the facility's limits and prepare for real data, the computing staff participates in numerous simulation exercises, ranging from data extraction to the actual analyses physicists might perform on their desktops. In a recent throughput test with all of the ATLAS Tier 1 centers, Brookhaven was able to receive data from CERN at a rate of 400 megabytes per second. At that speed, it would take just 20 seconds to upload a full set of music onto an 8-gigabyte iPod.
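The iPod comparison is simple arithmetic, and it can be sketched as a short back-of-the-envelope calculation. The figures (400 MB/s sustained rate, an 8 GB device) come from the article; the function name and the use of decimal units (1 GB = 1000 MB) are assumptions for illustration:

```python
def transfer_time_seconds(size_gb: float, rate_mb_per_s: float) -> float:
    """Time to move size_gb gigabytes at rate_mb_per_s megabytes per second.

    Uses decimal storage units (1 GB = 1000 MB), the convention device
    makers use when quoting capacities like "8-gigabyte".
    """
    return (size_gb * 1000) / rate_mb_per_s


# Figures from the article: 8 GB of music at CERN-to-Brookhaven throughput.
print(transfer_time_seconds(8, 400))  # 20.0 seconds
```

Note that with binary units (1 GiB = 1024 MiB) the answer would be slightly larger, so the article's round figure implicitly assumes decimal units.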

In the future – as ATLAS and RHIC undergo upgrades to increase luminosity, events become more complex, and data is archived – Brookhaven plans to build a new facility to house, power, and cool the currently used 2,500 machines, which must be replaced with newer models about every three years. This constant maintenance cycle, combined with unexpected challenges from data-taking and data analyses, is sure to keep Brookhaven’s Tier 1 center busy for years to come, Ernst said.

“This is all new ground,” he said. “You work with people from around the world to find a path that carries you for two years or so. Then, the software and hardware change and you have to throw everything away and start again. This is extremely difficult, but it’s also one of the parts I enjoy most.”

—Kendra Snyder, BNL
