

November 4, 2009

Feature - Clouds in Geneva: the telecom view

Image courtesy Catherine Gater

Recently, Geneva played host to the impressive ITU Telecom World 2009, an event held only every few years and of significant importance to the telecommunications and ICT sectors. Attracting its fair share of world leaders, it created quite a media splash – this is the first event I’ve attended that featured a special lounge marked ‘Heads of State’ (and no, they wouldn’t let me in).

The CERN booth was a lot busier than some of the larger, slightly shinier contributions, whose extensive marbled floors had a tendency to look rather empty. The CERN booth, on the other hand, was packed with visitors admiring the Google Earth dashboard and finding out more about the LHC and the Worldwide LHC Computing Grid.

Given the buzz around cloud computing at EGEE’09 a few weeks ago, I was interested to hear the telecom community’s take on the subject. A distinguished panel of experts…

November 4, 2009

Feature - FOOTWAYS takes its first steps

Footways started quite literally in the back of someone’s garage. Image courtesy Igor Dubus

Google, Hewlett Packard and Apple all had humble beginnings in the back of someone’s garage. Could the same be true in the back of a garage in Orleans, France?
“I had never seen a router/switch in my life, I had to get into VPN, network, perl scripting. I also had problems with electricity supply and consumption — this was my home, not a dedicated IT room,” says Igor Dubus.
This was only the beginning. Having built a 96-node cluster in his garage to run computer models as the coordinator of the FOOTPRINT project (iSGTW ran an article about FOOTPRINT earlier this year), Dubus launched his own start-up company, called FOOTWAYS. His goal is to develop this “garage-cluster” into a 12,000-node high-performance computing center dedicated to pesticide modeling.
FOOTPRINT, an EU project, seeks to minimize water contamination…

November 4, 2009

Feature - Getting GPUs on the grid

Russ Miller, principal investigator at CI Lab, stands in front of the server rack that holds Magic, a synchronous supercomputer that can achieve up to 50 Teraflops. Image courtesy of CI Lab.

Enhancing the performance of computer clusters and supercomputers using graphical processing units is all the rage. But what happens when you put these chips on a full-fledged grid?
Meet “Magic,” a supercomputing cluster based at the University at Buffalo’s CyberInfrastructure Laboratory (CI Lab). On the surface, Magic is like any other cluster of Dell nodes. “But then attached to each Dell node is an nVidia node, and each of these nVidia nodes has roughly 1,000 graphical processing units,” said Russ Miller, the principal investigator for CI Lab. “Those GPUs are the same as the graphical processing units in many laptops and desktops.”
That’s the charm of these chips: because they are mass-manufactured for use in your…
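To picture what those commodity chips buy you: a GPU runs the same small kernel across thousands of array elements at once. Below is a minimal, hypothetical sketch of that data-parallel model, assuming an NVIDIA GPU and the numba Python package; it is illustrative only, not CI Lab’s code.

```python
# Minimal sketch of the GPU data-parallel model (illustrative, not CI Lab's code).
# Assumes a CUDA-capable NVIDIA GPU and the numba package (pip install numba).
import numpy as np
from numba import cuda

@cuda.jit
def scale_and_add(a, b, out):
    i = cuda.grid(1)          # this thread's global index
    if i < out.size:          # guard against threads past the array end
        out[i] = 2.0 * a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
scale_and_add[blocks, threads_per_block](a, b, out)  # one thread per element
```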

November 4, 2009

Feature - Vietnam welcomes three new grid sites; hospitals get new ‘HOPE’

Participants from ACGRID school at Institute for Francophone Informatics (IFI) in Hanoi, October 2009. Image courtesy ACGRID. Image of Vietnamese flag on front page courtesy Wikipedia/Creative Commons

Vietnam hosts some of the world’s most biodiverse areas — with six biosphere reserves — along with one of the lowest unemployment rates in the third world and, as of last month, three of the newest grid sites to join the world’s largest grid computing infrastructure.

EUAsiaGrid was launched in April 2008 to foster grid computing technology within Asia, and to create ties between the e-Infrastructure communities of Asia and Europe. Last month, EUAsiaGrid partners and the French research institute Centre National de la Recherche Scientifique (CNRS) co-organized the second ACGRID (Advanced Computing and Grid Technologies for Research) school, to train current and up-and-coming…

October 28, 2009

Feature - Dash heralds new form of supercomputing

Dash, pictured here, is an element of the Triton Resource, an integrated data-intensive resource primarily designed to support UC San Diego and UC researchers. Image courtesy of San Diego Supercomputer Center, UC San Diego.

The first of a new breed of supercomputers was born this fall when computer experts combined flash memory with supercomputer architecture to create Dash.
Normally supercomputers are measured by how many floating point operations, also known as “flops,” they can complete per second. And at a peak speed of 5.2 teraflops, Dash wouldn’t even make the top 500 list, where the slowest speed is about 17 teraflops.
“But if you look at other metrics, such as the ability to do input/output operations, it would potentially be one of the fastest machines,” said Allan Snavely, the project leader for Dash. “Dash is going after what we call data-intensive computing, which is quite different from…

October 28, 2009

Feature - In case of emergency, call SPRUCE

A part of the complete synthetic social contact network of Chicago, obtained by integrating diverse data sources and methods based on social theories. This sort of simulation can bring insight into how a virus will transmit through a population. Check out this SciDAC Review article for more information about this research. Image courtesy of Madhav Marathe and SDSC.

When disaster strikes, simulations could give authorities the information they need to save lives. But simulations are computationally intensive, and during a crisis, there’s no time to wait in line for access to computer resources. That’s where urgent computing comes in.
“What you really want is to be able to hook together or have access to all the supercomputers that you need, wherever they are,” said Pete Beckman, project lead for TeraGrid’s Special PRiority and Urgent Computing Environment, or SPRUCE. “The purpose of this sort of urgent computing…
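The core idea, that urgent jobs should not wait in line, can be sketched as a priority queue. This is a generic illustration of urgent computing with invented job names, not SPRUCE’s actual mechanism.

```python
import heapq

# Generic sketch of an urgent-computing queue: urgent jobs outrank batch
# jobs rather than waiting in line. Not SPRUCE's real implementation.
URGENT, NORMAL = 0, 1  # smaller number = scheduled first

class JobQueue:
    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker: preserves submission order

    def submit(self, name, priority=NORMAL):
        heapq.heappush(self._heap, (priority, self._count, name))
        self._count += 1

    def next_job(self):
        return heapq.heappop(self._heap)[2]

q = JobQueue()
q.submit("climate-batch-42")
q.submit("protein-fold-7")
q.submit("hurricane-landfall-model", priority=URGENT)
print(q.next_job())  # hurricane-landfall-model runs first, despite arriving last
```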

October 21, 2009

Feature - A grid-enabled workflow management system for e-Science

Image courtesy Jay Lopez, stock.xchng 

The recent trend towards Service Oriented Architecture — collections of services which operate according to a request/reply model — has stimulated the development of Workflow Management Systems (WMSs) such as Taverna and Triana, which target the composition of services.
A new, grid-enabled scientific workflow management system, WS-VLAM, developed in the context of the Virtual Laboratory for e-Science, provides a basic set of tools for building workflows by connecting components to each other based on data dependencies.
The WS-VLAM workflow management system is designed to provide and support the coordinated execution of geographically distributed grid-enabled software components, which can be combined into a workflow. The system takes advantage of the underlying grid infrastructure and unites it with a flexible, high-level, rapid prototyping environment…
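Executing components based on data dependencies amounts to running a directed acyclic graph in topological order. Below is a minimal sketch of that idea with invented component names; it is not WS-VLAM’s API.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Toy workflow: each component runs once the component it depends on has.
# Component names are invented; this is not WS-VLAM's API.
def fetch():      return "raw data"
def calibrate(x): return f"calibrated({x})"
def analyze(x):   return f"analysis({x})"

deps  = {"calibrate": {"fetch"}, "analyze": {"calibrate"}}  # node -> inputs
funcs = {"fetch": fetch, "calibrate": calibrate, "analyze": analyze}

results = {}
for step in TopologicalSorter(deps).static_order():  # inputs always run first
    inputs = deps.get(step, set())
    if inputs:
        results[step] = funcs[step](results[next(iter(inputs))])
    else:
        results[step] = funcs[step]()
print(results["analyze"])  # analysis(calibrated(raw data))
```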

October 21, 2009

Feature - Clearing the air: solving an atmospheric controversy with DEISA

The PINNACLE project tests climate models. Image courtesy UCAR

Scientists seeking to develop models for predicting weather, climate and air quality have long been confronted with the fundamental problem of how to accurately forecast the height of the atmospheric boundary layer (ABL) as it develops during daytime heating.
In an attempt to solve this controversy, a team of scientists from the Delft University of Technology in the Netherlands, together with Imperial College London and the National Center for Atmospheric Research in Colorado, initiated the PINNACLE project, using the resources of the DEISA grid of supercomputers.
The ABL is the lower layer of the atmosphere, the part in which we live. Its height grows throughout the day, from a few hundred meters in the morning to one kilometer or more in the afternoon. The ABL has a large Reynolds number (a measure of the turbulence of the system), which means…
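For reference, the Reynolds number has a standard textbook definition; the rough boundary-layer values in the comments are illustrative assumptions, not figures from the PINNACLE team.

```latex
% Standard definition of the Reynolds number (textbook form, not from the article).
% U: characteristic velocity, L: characteristic length (e.g. the ABL height),
% \nu: kinematic viscosity of air.
\mathrm{Re} = \frac{U L}{\nu}
% Rough daytime-ABL values, U \sim 1\,\mathrm{m/s}, L \sim 10^{3}\,\mathrm{m},
% \nu \sim 1.5\times10^{-5}\,\mathrm{m^2/s}, give \mathrm{Re} \sim 10^{8}:
% far too turbulent to simulate without supercomputer-scale resources.
```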

October 21, 2009

Feature - Here to help: embedded cyberinfrastructure experts

It isn’t easy designing software that can run on a cluster like Fermilab's Grid Computing Center. That’s why advanced technical support is so essential. Photo by Reidar Hahn, Fermilab Visual Media Services.

Although much of today’s scientific research relies on advanced computing, for many researchers, learning how to adapt and optimize applications to run on supercomputers, grids, clouds, or clusters can be daunting.
To help newcomers, many cyberinfrastructure providers offer in-depth support tailored to fit each user’s needs. This is much more than the typical technical support that helps users write scripts to enable their jobs to run. Instead, cyberinfrastructure experts are embedded directly into a user team to provide longer-term assistance.
One example is TeraGrid User Support and Services, led by director Sergiu Sanielevici.
“The designation of a supercomputer is that it’s basically…

October 14, 2009

Feature - A new test bed for future cyberinfrastructure

Image courtesy of jaylopez at stock.xchng.

Grid, cluster, and cloud developers will have somewhere new to test their software before letting it loose on the world, thanks to a new initiative called FutureGrid.
“I think people found that it was pretty hard to test early grid software on the machines that were available, because the machines that were available didn’t like being experimented on,” said Geoffrey Fox, principal investigator for FutureGrid. “FutureGrid is trying to support the development of new applications and new system software, which are both rapidly changing.”
The FutureGrid collaboration, which will be headquartered at Indiana University, had its first all-hands meeting 2-3 October.
“We will have early users throughout the first year,” said Fox. A small number of users are already signed up, but there remains room for more on the FutureGrid roster.
“We would like…

October 14, 2009

Feature - Putting Linux on the grid

Popular middleware flavours are now included as part of the standard selection box for Debian and Fedora users. Image courtesy Karen Andrews, stock.xchng

In the field of grid computing, Globus has long been a major brand. One of the earliest grid middleware solutions, the Globus Toolkit is not only a popular middleware flavor, but also offers important building blocks for many other grid solutions, including the ARC middleware produced by the KnowARC project.
Now, KnowARC has brought Globus and VOMS (the Virtual Organization Membership Service) to the Debian and Fedora Linux distributions. These packages are also available in Ubuntu, which automatically imports packages from Debian. They are also in EPEL (Extra Packages for Enterprise Linux), an add-on repository, maintained by Fedora, for Red Hat Enterprise Linux and derivatives such as CentOS and Scientific Linux.
The ARC middleware relies on a number of Globus libraries…

October 14, 2009

Feature - Supercomputing code helps develop new solar cells

Image courtesy of Patrick Moore.

If scientists could use simulations to zoom in on the atomic level of solar cells, the insight they gain could launch solar power into the next energy orbital.
Unfortunately, those simulations would require an exorbitant amount of computational power.
“Typically we need to simulate tens of thousands of atoms,” said Lin-Wang Wang, a scientist at Lawrence Berkeley National Laboratory. “For the conventional code, if the number of atoms increases by a factor of ten, the computational load increases by a factor of a thousand.”
In fact, the same problem arises with nano-scale simulations of a wide variety of materials. That’s why Wang and his research team came up with the LS3DF code.
“We were thinking about how to improve the algorithm and have linear scaling,” said Wang. When an algorithm scales linearly, the computational cost increases at the same rate as the problem size…
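Wang’s factor of a thousand is simply cubic growth: if a conventional code costs on the order of N³ for N atoms, multiplying N by ten multiplies the cost by 10³, whereas a linearly scaling code pays only ten times more. A toy comparison (the absolute costs are made up; only the growth rates matter):

```python
# Toy comparison of cubic vs. linear scaling in the number of atoms N.
# Absolute costs are invented; only the growth rates matter.
def cost_conventional(n):  # O(N^3), like the conventional code Wang describes
    return n ** 3

def cost_linear(n):        # O(N), the goal of a linearly scaling code like LS3DF
    return n

for n in (1_000, 10_000):
    print(f"{n:>6} atoms: cubic {cost_conventional(n):.1e}, linear {cost_linear(n):.1e}")
# Ten times the atoms -> 1,000x the cubic cost, but only 10x the linear cost.
```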

October 7, 2009

Feature - A new age for the oldest science

An illustration of a huge star cluster in our own Milky Way galaxy. The red ones are supergiant stars and the blue ones are young stars. There are an estimated 20,000 stars in the cluster. Image: NASA/courtesy nasaimages.org

For millennia, astronomy meant looking at the night sky and sketching what you saw: making star maps by estimating the relative brightnesses of stars by eye, and tracing the routes of wandering planets against the celestial sphere.
Even the advent of the telescope didn’t change this much. Sure, astronomers could see fainter objects like the Galilean moons of Jupiter, and resolve point-like planets into disks with structures of color and shade. But the human visual system remained an integral component, limiting data gathering to what could be seen and sketched by human observers in real time.
The advent of the photographic plate caused…

October 7, 2009

Feature - An unexpected bounty of Near Earth Objects

Image of a near-earth object detected by the Sloan Digital Sky Survey. The blue, red and green streaks show the object as it moves through three of the five SDSS filters over a period of five minutes. The two white objects are distant stars. Image courtesy Stephen Kent.

While scanning through images from the Sloan Digital Sky Survey, Fermi National Accelerator Laboratory researcher Stephen Kent noticed something unusual — a few extended streaks scattered among the millions of point-like stars and galaxies.
Kent realized the streaks were produced by Near Earth Objects (NEOs), asteroids or extinct comets whose orbits bring them close to Earth — close enough that they could collide. They appear as streaks because the closer an object is to Earth, the more quickly it moves across our sky. That’s why the patterns of distant stars appear unchanged over the course of our lifetimes, whereas our closest neighboring…
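The geometry behind the streaks is the standard small-angle relation between distance and apparent motion (a textbook relation, not a formula from the article):

```latex
% Apparent angular speed of an object with transverse velocity v_\perp
% at distance d (small-angle approximation; textbook relation).
\omega = \frac{v_\perp}{d}
% A nearby asteroid (small d) sweeps a large angle during a five-minute
% exposure and leaves a streak; a distant star (enormous d) stays point-like.
```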

October 7, 2009

Feature - Grid in a cloud: Processing the astronomically large

This is Westerlund 2, a young star cluster in the Milky Way which contains some of the hottest, brightest and biggest stars known. Image courtesy NASA/CXC/Univ. de Liège/Y. Naze et al

(Editor’s note: Alfonso Olias is part of a European Space Agency team working on a project that involves processing data from one billion stars — with some individual stars surveyed multiple times. Their solution? To run a grid inside a cloud. Here, he gives a first-hand account of their effort.)
 
We recently experimented with running a grid inside a cloud in order to process massive datasets, using test data drawn from something astronomically large: data from the Gaia project.
Gaia is a European Space Agency mission that will conduct a survey of one billion stars — approximately 1% of the stars in the Milky Way. Over a five-year period, it will monitor each of…

September 30, 2009

Feature: MANGO-NET - Helping to bring African ICT up to speed

Mangoes are cultivated in African countries such as Nigeria; MANGO-NET aims to cultivate an ICT infrastructure. Image courtesy Amr Safey, stock.xchng

While computing technology is ubiquitous and increasingly powerful, its availability in developing nations remains limited. African universities struggle to participate in cutting-edge research because they do not have access to a widespread computer infrastructure, so their ability to conduct experiments and share results is compromised. As more science becomes “e-science,” the problem gets worse.
To help solve this, MANGO-NET (Made in Africa NGO NETwork) was launched. The project seeks to boost information and communication technology (ICT) throughout Africa by establishing a network of schools and production labs that train ICT students to build their own computers. Because components are bought in bulk, the scheme should reduce hardware costs, decreasing African dependence…

September 30, 2009

Feature - Sharing a drink from the data firehose

(Clockwise from top): Nural Akchurin, Sung-Won Lee, Alan Sill and Vanalet Rusuriye examine data transfer and local cluster performance for the Tier-3 center at Texas Tech University while remotely monitoring parameters of the CMS experiment. The mini-Remote Operations Center at TTU keeps the group in close contact with the CMS operations at CERN. Image courtesy Alan Sill, TTU

The Large Hadron Collider will generate a torrential flood of nearly half a gigabyte of data each second.
It’s too much data to simply record for later contemplation. It would fill your 160 GB iPod in about five minutes, and your 500 GB laptop in about 15 minutes. Instead, physicists will have to filter it, monitor it and analyze it, day in and day out.
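Those comparisons are simple rate arithmetic; the sketch below merely restates the figures given in the text:

```python
# Rate arithmetic behind the iPod/laptop comparison above.
rate_gb_per_s = 0.5          # "nearly half a gigabyte of data each second"

for device_gb in (160, 500): # the iPod and laptop capacities from the text
    minutes = device_gb / rate_gb_per_s / 60
    print(f"{device_gb} GB fills in about {minutes:.0f} minutes")
# -> roughly 5 minutes for 160 GB and 17 minutes for 500 GB
```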
That will take the efforts of more than 7500 scientists scattered around the world. Researchers found a way for the physicists who are not at CERN to assist in filtering, monitoring and analyzing the data remotely…

September 23, 2009

Feature – New organization shakes up earthquake consortium

Earthquake engineers at University of Nevada, Reno test a 110-ft bridge model to failure. Image courtesy of Joan Dixon/University of Nevada, Reno.

We cannot stop earthquakes and tsunamis from happening. But with well-engineered buildings, we can prevent some of the death and damage these natural disasters leave in their wake.
First, however, engineers must understand how buildings react when shaken by earthquakes or pummeled by tsunami waves. To accomplish that goal, researchers use a combination of specialized equipment: giant tables that shake, wave tables filled with water, and high-end computing resources that can simulate just about anything.
To find out how sound a building will be during an earthquake, researchers can build a model on top of a large shake table. But most of the shake tables in the United States are not large enough to accommodate an entire building. Instead, they accommodate individual building components…

September 23, 2009

Project develops new standards for sharing between grids

Authorization Interoperability Project members, left to right: Oscar Koeroo (NIKHEF), Gabriele Garzoglio (Fermi National Accelerator Lab), and Frank Siebenlist (Argonne National Laboratory). Photo courtesy Open Science Grid.

Although the Grid is all about resource sharing, the software that governs individual grids has not always been capable of interacting well. The Grid Authorization Interoperability Project has created a new standard that could change that.
Grids make their computational and storage resources available online for use by others through software known as gateway middleware. To access a grid, a user presents her credentials—certification that she has rights to access that grid’s resources—to a resource gateway. The gateway in turn talks to an authorization system, local to the grid the user is accessing, in order to assign the appropriate privileges to the user.
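In outline, that handshake looks something like the following sketch. The class names and the toy policy are invented for illustration; this is not the project’s actual protocol or code.

```python
from dataclasses import dataclass

# Hypothetical sketch of the flow described above: credentials go to a
# resource gateway, which asks a grid-local authorization system for the
# user's privileges. Names and interfaces are invented for illustration.
@dataclass
class Credentials:
    subject: str   # e.g. the distinguished name from an X.509 certificate
    vo: str        # the virtual organization the user belongs to

class AuthorizationService:
    """Grid-local policy: maps credentials to site-specific privileges."""
    def authorize(self, creds):
        if creds.vo == "cms":
            return {"allowed": True, "unix_account": "cms001"}
        return {"allowed": False}

class ResourceGateway:
    """Entry point to the grid: defers the decision to the local service."""
    def __init__(self, authz):
        self.authz = authz

    def access(self, creds):
        decision = self.authz.authorize(creds)
        if not decision["allowed"]:
            return "access denied"
        return f"job runs as {decision['unix_account']}"

gateway = ResourceGateway(AuthorizationService())
print(gateway.access(Credentials("CN=Ada Example", "cms")))  # job runs as cms001
```

The point of the standard, as the article describes it, is to make that gateway-to-authorization interface common, so components developed for one grid can interoperate with another’s.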
Most grids have independently developed…

September 23, 2009

Feature - The digital story behind the mask

Mak Yong dancer in a 3D body scanner. Courtesy Info Com Development Centre (IDeC) of Universiti Putra Malaysia.

Capturing culture in digital form can lead to impressive demands for storage and processing. And grid technology has a role to play in providing those resources. For instance, a 10-minute recording of the movements of a Malay dancer performing the classical Mak Yong dance, using motion-capture equipment attached to the dancer’s body, can take over a week to render into a virtual 3D image of the dancer using a single desktop computer. Once this is done, though, every detail of the dance movement is permanently digitized, and hence preserved for posterity.
The problem, though, is that a complete Mak Yong dance carried out for ceremonial purposes could last a whole night, not just ten minutes. Rendering and storing all the data necessary for this calls for grid computing.
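The arithmetic behind that claim is straightforward. In the rough estimate below, only the week-per-ten-minutes figure comes from the text; the eight-hour performance length and the node count are assumptions:

```python
# Back-of-envelope rendering estimate. The 10-minute capture taking ~1 week
# on one desktop is from the text; the 8-hour performance is an assumption.
render_days_per_minute = 7 / 10        # one desktop: a week per 10 minutes
performance_minutes = 8 * 60           # an all-night ceremonial performance

desktop_days = performance_minutes * render_days_per_minute
print(f"one desktop: ~{desktop_days:.0f} days")           # ~336 days
print(f"100 grid nodes: ~{desktop_days / 100:.1f} days")  # ~3.4 days, if the
                                                          # rendering parallelizes
```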
Faridah Noor, an associate professor at the University of…

September 16, 2009

Feature - EGI, from the interim director’s view

Image courtesy EGI

The European Grid Initiative, or EGI, will be one of the focal points at the EGEE’09 conference in Barcelona. In the time since EGI was announced and its plans were detailed in last September’s GridBriefing, it has appointed an interim director, Steven Newhouse, in July, who has found the time to speak to iSGTW.

What is EGI?

Newhouse: EGI stands for the European Grid Initiative. It’s a project that will be submitted to the European Commission (EC) for funding in November. It builds on the work of the EGI Design Study project, which looked to the grid community to identify the best models for providing a sustainable, long-term grid infrastructure to support different scientific communities within Europe.

What is the main aim of EGI?

Newhouse: The aim is to coordinate a production-quality grid infrastructure for European researchers. When grids first started up, people were…

September 16, 2009

Feature - Visualizations go big in planetarium show

An image from Toomre and Brown's visualization of the magnetic field of the solar convection zone. Here, the sunspot is magnified for better visibility, and is not to scale relative to the sun. © 2009, American Museum of Natural History

"Journey to the Stars" is currently showing at the American Museum of Natural History’s Hayden Planetarium in New York City, US
The stars are writ large in all their majesty in “Journey to the Stars,” a planetarium show that uses grid-generated simulations to take audiences deep under the surface of the sun.
With Whoopi Goldberg as a guide, viewers embark on a journey through the lifespan of stars and the origin of life. Visualizations of the universe, projected onto the 87-foot seven-million-pixel dome of the Hayden Planetarium in New York City, explain how stars first formed and then exploded to produce the chemical elements that make life possible.
The 25-minute journey culminates…

September 16, 2009

Newsflash - Fall conference line-up

At the EGEE conference in Barcelona, attendees can take sessions on ‘Grids, new media and video’ and ‘From abstract to international news story.’ Image courtesy stock.xchng

The fall conference season will take iSGTW readers to the shores of the Mediterranean in Barcelona, the heights of the Canadian Rocky Mountains in Banff, and the banks of the Columbia River in Portland. Read on to find out more about some of the booths and workshops you’ll find at each conference!
Enabling Grids for E-sciencE ’09, 21-25 September, Barcelona, Spain
Next week (21-25 September), the bulk of the European grid community will gather in Barcelona for the final conference of Europe’s flagship computing grid project, EGEE.
“With the transition from EGEE to the new European Grid Initiative at the forefront of everyone's minds, this final EGEE conference will be the perfect time for members of the grid community to promote their work…

September 9, 2009

Feature - A SLiM chance for viruses

Viruses hijack the replication machinery of cells. Image courtesy Simon Hettrick

Viruses have evolved a clever way of reproducing. They hijack the replication machinery of their host cell, which is controlled and regulated by a variety of signaling pathways, and fool it into producing copies of the virus.
Richard Edwards, head of the Bioinformatics and Molecular Evolution group at the University of Southampton, UK, is trying to better understand signaling pathways in order to develop treatments for viruses — and for diseases that operate similarly. This is an enormous task, because understanding signaling pathways in the human body requires studying the interactions between the 20,000 or so proteins contained within the cells.
To do so, Edwards is focusing on short, linear motifs known as SLiMs. “A protein can be thought of as a sequence of amino acids, like beads on a string,” explains Edwards. “[SLiMs] consist of…
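In the beads-on-a-string picture, a protein is literally a string of one-letter amino-acid codes, and hunting for a short linear motif looks like pattern matching. A toy sketch with an invented motif and sequence, not one of Edwards’ actual SLiMs:

```python
import re

# Toy motif search: the sequence and the motif pattern are invented,
# not real SLiM results from Edwards' group.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"

motif = re.compile(r"R.L")  # arginine, any one residue, then leucine
for match in motif.finditer(sequence):
    print(match.start(), match.group())  # position and matched residues
```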

September 9, 2009

Feature - Calming the wakefield

A snapshot of a simulation of the wakefield generated by a particle bunch moving through a series of ILC cavities, from three different perspectives. The colors represent the magnitude of the fields, with warmer colors representing the strongest fields.

For the International Linear Collider to run at maximum performance, each of its 27,000 cavities must be designed as precisely as possible.
It is very time consuming and costly, however, to produce physical prototypes, so researchers at SLAC National Accelerator Laboratory decided to use a supercomputer to create and test virtual prototypes of the cavities.
The ILC, which is in its design phase, will use superconducting cavities to accelerate electrons and their antimatter partners, positrons, to nearly the speed of light before colliding them. By studying these collisions, researchers will be able to probe more deeply into the subatomic world.
As particle bunches travel through the accelerator cavities…