
Distributing the Square Kilometre Array

What is the Square Kilometre Array?

As the name implies, the SKA is an aggregate radio telescope that will study the sky by combining observations from thousands of separate receptors distributed over thousands of kilometers, with a total collecting area of approximately one square kilometer. The result will be the world’s largest and most sensitive radio telescope, with 50 times the sensitivity and 10,000 times the survey speed of the best current-day telescopes. SKA data will be invaluable in answering a variety of fundamental questions, such as how dark energy accelerates the Universe’s expansion, how the first stars and galaxies formed following the Big Bang, and much more.

Like many other large-scale projects, the SKA is scheduled for multiple stages of planning, construction, and upgrades over the course of several decades; the collaboration is currently in the first planning stage, with Phase I construction slated to begin in 2016.

The SKA Organisation, which is headquartered in Manchester, UK, is a collaboration between institutions in Australia, Canada, China, Italy, New Zealand, the Republic of South Africa, the Netherlands, and the United Kingdom (India is an associate member of the collaboration).

From the beginning, it’s been clear that the planned Square Kilometre Array radio telescope installation would need to leverage distributed computing technology to succeed. With an anticipated data rate of petabits per second, more than 100 times the current global internet traffic according to the SKA website, it could hardly do otherwise.

Now, it seems that the installation itself will be distributed. As expected, the SKA collaboration announced its choice of site on 25 May 2012, but rather than choosing between the two prospective sites, it elected to split the installation between South Africa, which will be the primary location for the experiment’s first phase, and Western Australia.

Although splitting the array between the two sites will be more expensive, it also comes with a number of advantages. The previously competing bids had already built pilot installations at their respective locations. Splitting the array will not only make it possible to incorporate both of those existing installations into the larger experiment, but will also allow the collaboration to do more science than otherwise would have been possible.

With the question of the experiment’s site settled, the collaboration now has until 2016, when construction is scheduled to start, to make more detailed plans for the system design. Nearly a dozen ‘work packages’ will start shortly, including the Science Data Processor (SDP) work package, which will be responsible for the experiment’s computing.

Above: Artist's impression of mid frequency aperture arrays. Credit: SKA Organisation/TDP/DRAO/Swinburne Astronomy Productions.
Below: Artist's impression of the SKA dishes. Credit: SKA Organisation/TDP/DRAO/Swinburne Astronomy Productions.

So far, no decisions have been made regarding the overall software architecture, according to Tim Cornwell, computing project lead for the Australasian bid. “However, based on analysis up to now, the computational model will be tightly integrated HPC.”

The experiment will need to move massive amounts of data on a regular basis and run a large volume of computationally intensive analysis jobs. In fact, one of the installations, dubbed SKA1_SURVEY and located in Australia, will be so compute intensive that it may need an exascale high-performance computing system to manage its data. Furthermore, transporting data costs both money and power, which makes it necessary to minimize data movement.

Distributed computing models aren’t completely out of the picture, however.

“The derived science data products, such as images, spectra, and source catalogs, will be made available via a Tier 0/Tier 1 scheme with Tier 1 data centers elsewhere,” said Cornwell, who is based at the Commonwealth Scientific and Industrial Research Organisation’s Australia Telescope National Facility in Epping, Australia. “Cloud-based processing may be possible for these smaller size data products.”
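To illustrate what such a tiered scheme can look like, here is a minimal Python sketch in which derived data products are registered at a central Tier 0 site and then mirrored to regional Tier 1 centers. The site names, product fields, and replicate-everywhere policy are purely illustrative assumptions, not part of any SKA design.

from dataclasses import dataclass, field
from typing import List

# Hypothetical site names, used only for illustration.
TIER0 = "SKA-Tier0"
TIER1_SITES = ["Tier1-Europe", "Tier1-Australia", "Tier1-SouthAfrica"]

@dataclass
class DataProduct:
    """A derived science data product (image, spectrum, or source catalog)."""
    name: str
    kind: str            # e.g. "image", "spectrum", "catalog"
    size_gb: float
    replicas: List[str] = field(default_factory=list)

class TierCatalog:
    """Toy catalog: products land at Tier 0 and are mirrored to every Tier 1 site."""
    def __init__(self):
        self.products = {}

    def ingest(self, product: DataProduct) -> None:
        # Register the product at Tier 0 first ...
        product.replicas.append(TIER0)
        # ... then record a replica at each Tier 1 data center.
        product.replicas.extend(TIER1_SITES)
        self.products[product.name] = product

    def locate(self, name: str) -> List[str]:
        """Return the list of sites holding a copy of the named product."""
        return self.products[name].replicas

if __name__ == "__main__":
    catalog = TierCatalog()
    catalog.ingest(DataProduct("continuum_image_0001", "image", size_gb=120.0))
    print(catalog.locate("continuum_image_0001"))
    # ['SKA-Tier0', 'Tier1-Europe', 'Tier1-Australia', 'Tier1-SouthAfrica']

In a real scheme the replication policy would be selective rather than copy-everything; the sketch only shows the basic Tier 0 to Tier 1 flow that Cornwell describes.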

Even a model that minimizes data movement faces some challenges. The networks near the two SKA sites will need to be upgraded to accommodate the petabytes of data the experiment will produce.

Now that the experiment is moving forward as a dual-site collaboration, the SDP work package consortium hopes to submit a proposal to the SKA Organisation. If the proposal is accepted, work could begin as soon as the end of the year.
