iSGTW - International Science Grid This Week

5 May 2010

Feature - Frontier guides computing through the collision landscape


Just like you might have trouble navigating using this antique map, detector experiments can't make sense of their data using an out-of-date map of their detector. Image courtesy Boston Public Library's Norman B. Leventhal Map Center under Creative Commons license

The colossal particle detectors that monitor collisions at the Tevatron in Illinois and the Large Hadron Collider in Switzerland are unique beasts.

Scientists design most of the parts inside them to meet each experiment's own specifications. But every once in a while, they find something the detectors can share.

Scientists at the CMS and ATLAS experiments at CERN are using a software system that Fermilab's Computing Division originally designed for the CDF experiment at the Tevatron. The system, called Frontier, helps scientists distribute the information needed to interpret collision data at lightning speed. It is built upon the widely used Squid web-caching technology.

"Since data is often shared between sites or pulled from a remote site, the speed of data return is critical," said John DeStefano, an engineer at the RHIC and ATLAS Computing Facility at Brookhaven National Laboratory. "Not even the fastest database servers can bridge the physical gap between geographically disparate sites. People noticed how efficiently Frontier worked for CMS, and so far there has been a notable benefit for ATLAS as well."

Frontier caught on thanks to the interconnectedness of the particle physics community, said Fermilab engineer Liz Sexton-Kennedy. Many scientists now working on experiments at the LHC also worked on experiments at the Tevatron.

Fermilab computer scientists Jim Kowalkowski and Marc Paterno came up with the original idea for Frontier. A group of computer scientists at Fermilab who had previously gained experience with a similar system designed for the DZero experiment worked to implement the ideas at CDF. Another group from Johns Hopkins University contributed by testing the system.

A diagram of the Frontier architecture within CMS. Image courtesy Dave Dykstra, Fermilab

Adjusting for a changing frontier

Particle detectors like CDF, CMS and ATLAS are large, complex machines whose many parts shift by amounts imperceptible to the eye, yet large enough to matter when scientists make precise measurements of particle tracks.

This makes reading data from inside a particle detector a bit like driving in a dream landscape whose features frequently shift. To navigate such an unpredictable setting, drivers continually need to swap out their maps for new, updated ones. In order to properly read data that detectors collect about an event, physicists need to know the lay of the land inside the detector at the time of collision.

What's more, hundreds of thousands of computers around the world all need to pair that updated information with collision data as they analyze it, said Dave Dykstra, a Fermilab engineer who now heads the Frontier project.

"All of them need to load the data all at once," he said. "It's a big challenge."

Scientists do not monitor the conditions of the detectors during each individual collision. In the CDF detector, beams of protons and antiprotons cross paths about 1.7 million times each second, each pass representing an opportunity for collisions. Once the LHC reaches full power, beams carrying even more protons will cross about 31 million times per second in the CMS and ATLAS detectors.

Rather than try to keep up, scientists take new readings at frequent, regular intervals. A Frontier server takes information about the changing landscape of the detector from a database and sends it to other servers around the world, which then cache the information and share it with other, nearby computers. Only the Frontier server needs to request updated maps from the database.
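The pattern is simple enough to sketch in code. Below is a minimal, hypothetical Python illustration of the tiered lookup described above: jobs ask a nearby cache, the cache asks the central Frontier server only on a miss, and only that server ever queries the conditions database. All class names, keys and values here are invented for illustration; this is not Frontier's actual code.

```python
class ConditionsDB:
    """Stands in for the central detector-conditions database."""
    def __init__(self):
        self._rows = {}

    def store(self, key, payload):
        self._rows[key] = payload

    def query(self, key):
        print(f"database queried for {key!r}")   # expensive; should be rare
        return self._rows.get(key)

class FrontierServer:
    """The only component that talks to the database directly."""
    def __init__(self, db):
        self._db = db

    def fetch(self, key):
        return self._db.query(key)

class SiteCache:
    """A Squid-like cache near a computing site; serves repeat requests locally."""
    def __init__(self, upstream):
        self._upstream = upstream
        self._cache = {}

    def get(self, key):
        if key not in self._cache:                  # miss: go upstream once
            self._cache[key] = self._upstream.fetch(key)
        return self._cache[key]                     # hit: no round trip upstream

db = ConditionsDB()
db.store("alignment/run-1234", {"shift_mm": 0.02})  # hypothetical detector map
cache = SiteCache(FrontierServer(db))
for _ in range(3):                                  # many jobs, one database query
    print(cache.get("alignment/run-1234"))
```

The design choice is the point: the database sees one query per map update, no matter how many thousands of analysis jobs ask for the same map.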

The Frontier system uses HTTP, the same protocol web sites use to communicate with web browsers, to send database requests out to servers. HTTP is nimble enough to deliver information over long distances in multiple short bursts, and it is designed to handle huge numbers of users. Without Frontier, experiments would have to communicate through database queries better suited to a small number of local users.
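As a rough illustration of that idea, here is a short Python sketch of a database request expressed as an ordinary HTTP GET routed through a local web proxy, which is what lets stock caches such as Squid store the reply. The endpoint, proxy address and query encoding are all hypothetical, not Frontier's real wire format.

```python
import urllib.parse
import urllib.request

FRONTIER_URL = "http://frontier.example.org/conditions"          # hypothetical endpoint
SQUID_PROXY = {"http": "http://squid.local-site.example:3128"}   # nearby cache

def fetch_conditions(query: str) -> bytes:
    # Encode the database request into the URL so any HTTP cache can key on it.
    url = FRONTIER_URL + "?" + urllib.parse.urlencode({"q": query})
    # Route the request through the site's Squid proxy instead of going direct.
    opener = urllib.request.build_opener(urllib.request.ProxyHandler(SQUID_PROXY))
    with opener.open(url) as resp:
        return resp.read()

# Example use; the query text is illustrative only.
payload = fetch_conditions("alignment WHERE run = 1234")
```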

Thanks to a recent upgrade by Dykstra, the system now saves even more time and computing power by skipping the reload entirely when the detector maps have not changed. Frontier has earned its popularity, but like the computers it keeps supplied with fresh data, it must continue adapting to the changing landscape.
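HTTP has a standard mechanism for exactly this kind of skip: the conditional GET, where the server sends back a 304 Not Modified status instead of the body when the cached copy is still current. The Python sketch below shows that general HTTP technique; it is not a reproduction of Frontier's specific implementation.

```python
import urllib.request
from urllib.error import HTTPError

def revalidate(url: str, last_modified: str) -> bytes | None:
    # Ask the server to send the body only if it changed since our copy.
    req = urllib.request.Request(url, headers={"If-Modified-Since": last_modified})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read()          # 200: map changed, reload it
    except HTTPError as err:
        if err.code == 304:
            return None                 # 304: cached detector map is still valid
        raise

# Example use, with a hypothetical URL and timestamp:
# body = revalidate("http://frontier.example.org/conditions?q=alignment",
#                   "Wed, 05 May 2010 00:00:00 GMT")
```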

- Kathryn Grim, Fermilab
