

March 31, 2010

Feature - Scientific software goes parallel

Most scientists do their data analysis using commercial software over which they have little control. Yet, to take advantage of the multi-core processors new computers ship with, algorithms must be designed to run in parallel.
Luckily, many of the more popular scientific software packages have gone parallel. Some even offer versions or toolboxes that manage clusters or grids. iSGTW scoured the field for the latest information on parallelization in scientific software.
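The shift described here can be illustrated with a toy sketch. The snippet below is plain Python rather than any of the packages surveyed, and the analyze function and its workload are made-up placeholders; it simply spreads an independent per-sample computation across however many cores the machine offers.

from multiprocessing import Pool, cpu_count

def analyze(sample):
    # Stand-in for an expensive, independent per-sample computation.
    return sum(x * x for x in sample)

if __name__ == "__main__":
    # Hypothetical workload: 32 independent samples.
    samples = [list(range(100_000)) for _ in range(32)]
    # One worker process per core; map() farms the samples out in parallel.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(analyze, samples)
    print(len(results), "samples analyzed")

The same pattern scales from a multi-core desktop to a cluster by swapping the local pool for a batch scheduler, which is roughly what the cluster and grid toolboxes surveyed here aim to provide.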

Researchers at Cornell University used Windows HPC Server 2008 in conjunction with MATLAB to investigate turbulent combustion systems. Image courtesy Microsoft.

Excel
“More than 50 percent of customers cite lack of skills and complexity of integrated cluster solutions as inhibiting to adoption,” said Kyril Faenov, general manager of Microsoft’s technical computing group. “On the other hand, we know that more than 50 percent …

March 24, 2010

Feature - Ecological forecasting in NEON

TOP: NEON's proto-tower just north of Boulder, Colorado, where the project is testing equipment. The site is already producing a real-data stream.
BOTTOM: Hongyan Luo conducts tests at the base of NEON's test proto-tower.
Images courtesy of NEON, Inc.

Massive independent networks of environmental and ecological data stations distributed across the globe could launch environmental science into the petascale era, transforming the way scientists look at our planet.
In the United States, the National Ecological Observatory Network is poised to begin construction later this year.
“What NEON is about is measuring the effects of climate change, land use change, and invasive species on continental scale ecology. And we’re doing that in order to enable ecological forecasting,” said Michael Keller, the chief of science at NEON.
Ecological forecasting, like weather forecasting, uses extensive data sets over large areas …

March 24, 2010

Feature - Grant ensures sustainable future for software

HECToR, seen here, is the UK’s national supercomputer service and is run by an organization called the EPCC. Image courtesy HECToR.

A Software Sustainability Institute (SSI) has just been established, with the aid of a grant of £4.2 million (roughly 6.4 million US dollars, or 4.7 million euros, as of press time) from the UK’s Engineering and Physical Sciences Research Council, or EPSRC.
Software was highlighted in a recent study as a key facility needed for high-quality research. A team of academics and software engineers, based at the University of Southampton’s School of Electronics and Computer Science and the Department of Computer Science at the University of Manchester, and led by the EPCC at the University of Edinburgh, will work in partnership with the research community to manage software beyond the lifetime of its original funding, so that it is strengthened, adapted and customised …

March 24, 2010

Feature - Q&A: Grid Colombia warms up

A group photo at an Open Science Grid-sponsored Grid Colombia workshop, which took place in October 2009. Image courtesy of Open Science Grid.

With a little help from colleagues at Open Science Grid and EGEE (via EELA-2), Colombia is on the cusp of launching its first national grid infrastructure. iSGTW caught up with Jose Caballero to learn more about the present and future of this promising project. Caballero currently does software development for ATLAS, and serves as the OSG liaison to South America. Previously, he spent five years working with the gLite grid software for the CMS experiment.
iSGTW: How did Grid Colombia get started?
Caballero: EELA-2 (E-science grid facility for Europe and Latin America) chose Colombia to host one of its main conferences in 2008, and that brought the worldwide grid movement to the attention of both academia and government in Colombia. After that, universities started to study the creation of a national …

March 17, 2010

Feature - Case Study: Einstein@OSG

A screenshot of the Einstein@Home screensaver. Image courtesy of Einstein@Home.

For over five years, volunteers have been lending their computers’ spare cycles to the Laser Interferometer Gravitational Wave Observatory (LIGO) and GEO-600 projects via the BOINC application Einstein@Home. Now a new application wrapper, dubbed “Einstein@OSG,” brings the application to the Open Science Grid.
Today, although Einstein@OSG has been running for only six months, it is already the top contributor to Einstein@Home, processing about 10 percent of jobs.
“The Grid was perfectly suitable to run an application of this type,” said Robert Engel, lead developer and production coordinator for the Einstein@OSG project. “BOINC would benefit from every single CPU that we would provide for it. Increasing the number of CPUs by 1000 really results in 1000 times more science getting done.”
Getting Einstein@Home to run on a grid was …

March 17, 2010

Feature - OSG All Hands Meeting

Attendees visited vendor tables to network and watch demonstrations at the first ever vendor and e-demonstration session to run at an OSG All Hands Meeting.
Image by Miriam Boon.

Last week, 183 researchers and vendors gathered at Fermilab in Batavia, Illinois for the Open Science Grid’s annual All Hands Meeting.
In addition to hosting workshops for the CMS and ATLAS computing meetings, the event featured sessions on a variety of topics, including security, virtualization, cloud computing, biology applications, reports from European colleagues, and the future of US cyberinfrastructure. This year also marked the first vendor and e-demonstration session.
Several attendees expressed their pleasure at the dynamic discussions that took place during the panel-style sessions, said Paul Avery, a researcher at the University of Florida and co-chair of the OSG Consortium Council. Kent Blackburn, Avery’s fellow co-chair and a researcher with LIGO Caltech, suggested that the …

March 17, 2010

Feature - Sixty seconds to save a city

Circles indicate warning times for the earthquake that hit Taiwan on 4 March 2010, using a new approach to detection that gives up to 40 seconds more early warning. Taipei is the northernmost city indicated on the map, on the 50-second circle. Image courtesy Nai-Chi Hsiao, Central Weather Bureau, Taiwan.

At the International Symposium on Grid Computing (ISGC 2010) in Taipei last week, a special two-day EUAsiaGrid Disaster Mitigation Workshop devoted a day to the latest technological progress in monitoring and simulating earthquakes and tsunamis. In a situation where every second counts, grid computing could one day help authorities assess the potential impact of an earthquake quickly enough to avoid the worst consequences.
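As a rough, back-of-the-envelope illustration of where such warning times come from (the wave speed and alerting delay below are generic assumptions, not figures from the workshop): destructive S waves travel at only a few kilometres per second, so a city some distance from the epicenter has a short window between the alert and the shaking.

# Crude warning-time estimate; v_s and the detection/alert delay are assumed values.
def warning_time(distance_km, v_s=3.5, detect_and_alert_s=15.0):
    """Seconds of warning a city distance_km from the epicenter receives,
    if the alert goes out detect_and_alert_s seconds after the rupture and
    destructive S waves travel at v_s km/s."""
    return max(0.0, distance_km / v_s - detect_and_alert_s)

print(round(warning_time(175)))  # a city ~175 km away gets on the order of 35 s

Shaving seconds off the detection and alert step, which is where fast computing enters, translates directly into extra warning for distant cities.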
 
The day before ISGC 2010 began, Taiwan was hit by a magnitude 6.4 earthquake in the south part of the island, making headlines worldwide.
But earthquakes are a daily reality for Taiwan’s inhabitants, and indeed …

March 10, 2010

Feature - A neutrino's journey: From accelerator to analysis

The first T2K event seen in Super-Kamiokande. Each dot is a photomultiplier tube that has detected light. The two circles of hits indicate that a neutrino has probably produced a particle called a π0, perfectly in time with the arrival of a pulse of neutrinos from J-PARC. Another faint circle surrounds the viewpoint of this image, showing that a third particle was created by the neutrino. Image courtesy of T2K.

Neutrinos are the introverts of the particle physics world. They travel through the universe largely unnoticed, except for the very rare interaction. Every day, neutrinos pass through you, and you don’t even notice. Don’t panic – they can’t hurt you, because they hardly ever interact with your body’s matter.
Neutrinos are neutral – free of charge. That means that electricity and magnetism can’t draw them out and force them to interact. Likewise, they have so little mass that …

March 10, 2010

Feature - Dealing with dengue

Close-up of the Aedes aegypti mosquito that carries dengue. Image courtesy Centers for Disease Control and Prevention (CDC).

First, you get a bad headache. Then your joints feel like they are being crushed. This is followed by fever and a bright red rash on your legs and chest. You may also start vomiting or have diarrhea. This is dengue fever, and it threatens two-fifths of the planet’s population. Thanks to the EUAsiaGrid project, grid technology is doing its part to help reduce the burden of this devastating disease.
For most, dengue fever passes after a very unpleasant week, but for some it leads to dengue haemorrhagic fever, which is often fatal. Like malaria, dengue is borne by mosquitoes. Unlike malaria, though, it affects people in cities as much as in the countryside. As a result, it has a particularly high incidence in heavily populated parts of South-East Asia, where it is a significant source of infant mortality in several countries …

March 10, 2010

Opinion - EUAsia Grid makes a virtue of diversity

Seventeenth-century Dutch map of Asia. Image courtesy Flickr

EUAsiaGrid, a two-year project to promote grid awareness in South-East Asia, is entering its final phase. Time to take stock of some of the unique aspects of running such a geographically and culturally diverse grid project, and the opportunities it has created for closer scientific collaboration.
More than half the world lives in Asia.
Even putting aside the two titans of India and China, there are some 600 million inhabitants – 100 million more than the entire EU – in the region commonly referred to as South-East Asia, which stretches nearly twice the width of the continental United States, from Burma in the west to Indonesia’s Papua province in the east.
Most of the Asian partners in EUAsiaGrid are from areas prone to natural disasters such as earthquakes, volcanoes, typhoons and tsunamis.
Despite the challenging circumstances, EUAsiaGrid has managed …

March 3, 2010

Feature - Black holes and their jets

This simulation depicts a black hole with a dipole magnetic field. This system is sufficiently orderly to generate gamma-ray bursts that travel at relativistic speeds of over 99.9% of the speed of light.
The black hole pulls in nearby matter (yellow) and sprays energy back out into the universe in a jet (blue and red) that is held together by the magnetic field (green lines).
The simulation was performed on Texas Advanced Computing Center resources via TeraGrid, consuming approximately 400,000 service units.
Video courtesy of Jonathan McKinney and Roger Blandford.

Jets of particles streaming from black holes in far-away galaxies operate differently than previously thought, according to a study published recently in Nature.
High above the flat Milky Way galaxy, bright galaxies called blazars dominate the gamma-ray sky, discrete spots on the dark …

March 3, 2010

Nice to meet you, authentically

Famed Australian cricket player Neil Harvey (right) shaking hands at the start of the 1950/51 Test series between England and Australia. Image courtesy Wikimedia Commons

We all know that authentication is a must when using grid resources, but how much do we really know about it? Jens Jensen from the UK's Science and Technology Facilities Council Rutherford Appleton Laboratory (STFC RAL) explains, using as a case history his own experience working as part of that country’s Certificate Authority — one of the largest in the world.
 
If you have ever used the grid, then you know that you “shake hands” with a resource using a certificate — a “digital passport” which identifies you to the resource. In turn, the resource also sends a certificate of its own to you (which you will most likely see only if something goes wrong).
Why authenticate?
When you access any valuable resource, whether it’s …
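At the transport level, the handshake Jensen describes is ordinary mutual authentication: you present your certificate, the resource presents its own, and each side checks the other against a certificate authority it trusts. The Python sketch below shows the idea with standard TLS; the host name and the certificate and key file names are placeholders, not real grid credentials, and real grid middleware layers its own proxy-certificate machinery on top.

import socket, ssl

# CAs we are willing to trust when checking the server's certificate.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca-bundle.pem")
# Our own "digital passport": certificate plus private key (placeholder file names).
context.load_cert_chain(certfile="usercert.pem", keyfile="userkey.pem")

with socket.create_connection(("gridservice.example.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="gridservice.example.org") as tls:
        # If we get here, both certificates were presented and verified.
        print("Server identity:", tls.getpeercert()["subject"])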

March 3, 2010

Opinion - Volunteering for a better world: harnessing technology and willing citizens

Firefly in the daytime. Image courtesy Museum of Science, Boston.

By using the strengths of distributed computing technologies, both specialized researchers and citizens have the opportunity to participate in a new way of doing science.
We live in a time when nearly all information is available to nearly all people everywhere.
We are entering an age where all types of people can also contribute to many types of information. A school bus driver in rural Romania may be part of a biomedical research project. Or a banker in Los Angeles might moonlight as a collaborator in an astronomy project – classifying galaxies in her spare time.
This new movement in science, called “citizen science,” allows non-specialist volunteers to participate in global research. The projects are as diverse as backyard insect counts (the Firefly citizen science project), studies of how malaria develops and …

February 24, 2010

Feature: Virtualization - Key for LHC physics volunteer computing

BOINC is versatile enough that even mobile phones can do volunteer computing. Image courtesy BOINC

In 2006, the team that built LHC@home was given a challenge by Wolfgang von Rueden, then IT Division Leader at CERN: look at the use of the volunteer computing platform BOINC (Berkeley Open Infrastructure for Network Computing) for simulating events of interest in the Large Hadron Collider. It presented a demanding problem.
The software environments used by the experiments such as ATLAS, ALICE, CMS and LHCb are very large.
Furthermore, all LHC physics software development is done under Scientific Linux, whereas most volunteer PCs run under Windows. Porting the large and rapidly changing software is not practical, so another approach was needed.
The solution?
Marrying volunteer computing with the CernVM virtual image management system, then under development. It would enable the practical use of thousands of volunteer PCs …
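Schematically, the wrapper's job is to boot a fixed Scientific Linux guest image on the volunteer's (typically Windows) host, let the physics software run unchanged inside it, and shut the guest down afterwards. The sketch below is only an illustration of that idea, assuming a VirtualBox-managed image with a hypothetical name; it is not the actual LHC@home or CernVM wrapper.

import subprocess, time

VM_NAME = "cernvm-worker"   # hypothetical name of the imported guest image

# Boot the guest without a visible window on the volunteer's machine.
subprocess.run(["VBoxManage", "startvm", VM_NAME, "--type", "headless"], check=True)
try:
    # Stand-in for "wait while the physics job runs inside the guest".
    time.sleep(3600)
finally:
    # Always return the volunteer's machine to its original state.
    subprocess.run(["VBoxManage", "controlvm", VM_NAME, "poweroff"], check=True)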

February 24, 2010

Virtualization - putting all the spinning plates on one stick

Image courtesy Don Fulano, Flickr

To give a better idea of what virtualization is and how it can be used, we look at recent developments at CERN, using it as a sort of case study.
In the words of EGEE director Bob Jones, a computer center operates “a lot like the guy spinning plates at the circus. He’s got more and more plates spinning on more and more sticks, and he’s running around trying to keep them all going.”
At the typical computer center, one machine is doing graphics, one is doing electronic document handling, and one is doing something else.
“But what virtualization does is put all those plates on one stick,” said Jones. “You don’t need several different physical machines, but can have everything running on one machine doing everything.”
As a result, it’s faster and more efficient.
Virtualization technology is not a new idea. Virtual memory was developed …

February 24, 2010

Q & A: Larry Rudolph talks about pervasive computing, virtualization, and science

Image courtesy of Larry Rudolph.

We’ve all heard about how pervasive computing will change the way we connect and compute in our everyday lives. But what about the way we do science? How is that going to change?
Larry Rudolph joined VMware in 2008 to help start a project on mobile phone virtualization, after five years as part of Project Oxygen: Pervasive Human-Centric Computing at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Read on to find out what he had to say about pervasive computing, virtualization, and science.
iSGTW: How would you define virtualization or virtual machines?
Rudolph: A virtual machine is a computer made out of software. It is just like a regular computer. It can run programs, and it has a file system, mouse, keyboard, and display. Virtual machines run on physical computers, but they can be easily moved from one physical machine to another …

February 17, 2010

Feature - Doing science on the hub

Michael McLennan demonstrates a visualization tool hosted on nanoHUB.org in his office at Purdue University. Image courtesy Miriam Boon.

The HUBzero platform will be released as open source for the first time at the HUBbub 2010 workshop, 13-14 April. The release of this powerful platform could change the way you research, collaborate, and teach.
HUBzero has been described as a cloud, a content management system, and “Facebook for scientists.” In a way, these are all true. Yet none of them adequately convey the capabilities of this platform.
It all began with a web infrastructure called PUNCH, which was developed in 1995 at Purdue University in order to deploy simple science gateways. Scientists could use PUNCH to create a web form that, when filled out and submitted, would run batch jobs.
At the time, this was pretty revolutionary. But by 2002, it was time for an update. So they began work on the now well-known nanotechnology resource …
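The PUNCH pattern itself, a web form whose submission launches a batch job, is easy to sketch. The snippet below uses Flask and an sbatch-style submit command purely as modern stand-ins; PUNCH predates both, and the form field and job script names are made up.

from flask import Flask, request
import subprocess

app = Flask(__name__)

FORM = """
<form method="post">
  Input file: <input name="infile">
  <button type="submit">Run simulation</button>
</form>
"""

@app.route("/", methods=["GET", "POST"])
def gateway():
    if request.method == "POST":
        # A real gateway would validate this value before using it.
        infile = request.form["infile"]
        # Hand the work to a batch system; 'run_sim.sh' is a hypothetical job script.
        job = subprocess.run(["sbatch", "run_sim.sh", infile],
                             capture_output=True, text=True)
        return "<pre>Submitted: " + job.stdout + "</pre>"
    return FORM

if __name__ == "__main__":
    app.run(port=8080)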

February 17, 2010

OSCAR understands the language of chemistry, naturally

When it comes to chemistry terminology, one person’s sodium chloride is another’s salt. Image courtesy Snack/stock.xchng

Like any other language, the language of chemistry lacks uniformity.
New words are invented, old words fade out of use, styles of writing change and some writers suffer from less-than-perfect grammar.
What’s more, there is no single way of referring to a chemical: one person’s salt is another’s sodium chloride (and yet another’s NaCl). To search for a specific word in a chemistry text, a researcher must take into account every permutation of that word and every possible mistake in representing it.
This is highly inefficient, and with more sources of chemistry information becoming available every day, it’s not getting any easier to find relevant information.
But now, there’s OSCAR to help. Also known as “Open Source Chemistry Analysis Routines,” it …
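The normalization problem the article describes can be shown with a toy example: map the many surface forms of a chemical onto one canonical name before searching. The dictionary and terms below are invented for illustration, and this is not how OSCAR itself works; OSCAR applies natural-language processing and chemistry-aware dictionaries to free text.

# Toy normalization: collapse different surface forms to one canonical name.
SYNONYMS = {
    "salt": "sodium chloride",
    "table salt": "sodium chloride",
    "nacl": "sodium chloride",
}

def canonical(term):
    t = term.strip().lower()
    return SYNONYMS.get(t, t)

for mention in ["NaCl", "salt", "Sodium Chloride"]:
    print(mention, "->", canonical(mention))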

February 10, 2010

Bringing LHC data to US Tier-3s

Computer racks at the Fermilab Grid Computer Center. Image courtesy of Fermilab.

It’s a challenge for smaller research groups to get set up on a grid, but that’s exactly what physicists at over 40 sites across the United States need to do to get access to data from the Large Hadron Collider.
The new US Tier-3 centers – evenly split between the ATLAS and the Compact Muon Solenoid experiments – have each received about $30,000 in funding as part of the American Recovery and Reinvestment Act. Physicists scattered around the country will be able to use them to do their own analysis of data generated by two of the LHC experiments.
To get these sites online, a great deal of expertise will be needed. And that’s where the US LHC Tier-3 support group comes into the picture.
“What we are trying to do is to help them get their systems set up and connected to the grid, to make it easier for them to get access to data and …

February 10, 2010

Playstation goes from games to grid

PlayStation: more than just fighting aliens on distant planets. Scene from Halo 3 courtesy of wallpaperez.net

Now, a Sony PlayStation 3 doesn’t just let you pretend to be the Beatles in ‘Rock Band’ or fight in alien ring-worlds in ‘Halo.’
The PS3 is the latest piece of hardware to get on the grid. A mini-cluster of PS3s in Ireland is running software which can screen for new and potentially life-saving drugs.  
Eamonn Kenny, portability coordinator for the Enabling Grids for E-sciencE (EGEE) project, and Peter Lavin, a grad student at Trinity College Dublin, have ported the EGEE-supported gLite middleware, specifically the worker node software (which performs the majority of the computational work on the grid), to eight connected PlayStation 3s.
gLite, the middleware which connects 13,000 researchers worldwide to the computing resources of the EGEE grid, mostly runs on multi-core processors …

February 3, 2010

Back to Basics - What makes parallel programming hard?

BY DARIN OHASHI
Darin Ohashi is a senior kernel developer at Maplesoft. His background is in mathematics and computer science, with a focus on algorithm and data structure design and analysis. For the last few years he has been focused on developing tools to enable parallel programming in Maple, a well-known scientific software package.

Dual and even quad-processor computers may be increasingly common, but software that is designed to run in parallel is another story entirely. In this column, Darin Ohashi takes us back to basics to explain why designing a program to run in parallel is a whole different ball game.
In my previous column, I tried to convince you that going parallel is necessary for high performance applications. In this week’s column, I will show you what makes parallel programming different and why that makes it harder.

Glossary of terms:

process: a running instance of a program. A process’s memory is usually …
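One concrete reason parallel code is harder, sketched below in generic Python (not an example from the column): two threads updating shared state can interleave their read-modify-write steps and silently lose updates, so the programmer has to add explicit coordination such as a lock.

import threading

counter = 0
lock = threading.Lock()

def add_unsafe(n):
    global counter
    for _ in range(n):
        counter += 1          # not atomic: load, add and store can interleave

def add_safe(n):
    global counter
    for _ in range(n):
        with lock:            # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=add_safe, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; the add_unsafe version can print less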

January 27, 2010

Answering a truly big question: how did dinosaurs move?

Dinosaurs such as this theropod (“beast-foot”) are believed to be the ancestors of modern birds. Image courtesy LDAustinArt.com

In a memorable scene from Steven Spielberg’s Jurassic Park, a Tyrannosaurus rex gallops behind a jeep, close to overtaking it, lunging to take a bite out of Jeff Goldblum — to the horrified delight of millions of thrill-seeking movie-goers.
Assuming dinosaurs could be resurrected, how realistic would this situation be?
Not very, according to Karl Bates, a researcher in dinosaur locomotion. In fact, our scrawny-armed, prehistoric friend would probably have trouble outrunning a bicyclist. If you were on foot, you might or might not be in trouble, depending on how fast you can run.
How does Bates know this?
Because he is a member of the Animal Simulation Laboratory at the University of Manchester, UK, which for over five years has made computer models of prehistoric animals …

January 27, 2010

Feature: The grid that sifts for dark matter

Cryogenic Dark Matter Search detectors. The CDMS experiment uses five towers of six detectors each. Photo credit: Reidar Hahn.

Think of grid computing as a sieve that physicists use to sift out those rare events that might just be signs of dark matter — the mysterious substance that appears to exert gravitational pull on visible matter, accelerating the rotation of galaxies.
FermiGrid, the campus grid of Fermilab and the interface to the Open Science Grid, recently helped researchers from the Cryogenic Dark Matter Search experiment do just that: identify two possible hints of dark matter.
Dark matter has never been detected. And although the CDMS team cannot yet claim to have detected it, their findings have generated considerable excitement in the scientific community.
“This is a very intriguing result,” said Lauren Hsu, a CDMS researcher at Fermilab who announced the experiment’s results at a talk last December …

January 27, 2010

Feature: Grids and clouds - reaching for the next phase

A composite image of the Cat’s Eye Nebula with data from NASA’s Chandra X-ray Observatory (blue) and Hubble Space Telescope (red and purple). This cloud of dust and gas is about 3,000 light-years from Earth. The hybridization of grids and clouds seems considerably closer than that. Image courtesy Smithsonian Astrophysical Observatory

“This is not a replacement technology,” says Ignacio Llorente. “This is the next phase of evolution for grids.”
Llorente coordinates ‘virtual machine’ management for RESERVOIR (Resources and Services Virtualization without Barriers), an EC-supported project that works to enable deployment and management of complex IT services across different administrative domains. This project began collaborating with EGEE in June 2009 to marry the advantages and practicalities of cloud computing with grid technology.
After starting in February 2

January 20, 2010

Back to Basics - Why go parallel?

BY DARIN OHASHI
Darin Ohashi is a senior kernel developer at Maplesoft. His background is in mathematics and computer science, with a focus on algorithm and data structure design and analysis. For the last few years he has been focused on developing tools to enable parallel programming in Maple, a well-known scientific software package.

Parallel programs are no longer limited to the realm of high-end computing. In this column, Darin Ohashi takes us back to basics to explain why we all need to go parallel.
Computers with multiple processors have been around for a long time, and people have been studying parallel programming techniques for just as long. However, only in the last few years have multi-core processors and parallel programming become truly mainstream. What changed?
For years, processor designers were able to increase the performance of processors by increasing their clock speeds. But a few years ago, they ran into a few serious problems. …
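A standard piece of background for this series, not from Ohashi's column itself: even once a program goes parallel, Amdahl's law says its speedup on n cores is bounded by 1 / ((1 - p) + p/n), where p is the fraction of the work that can actually run in parallel, so the serial remainder quickly becomes the bottleneck.

# Amdahl's law: the best-case speedup of a program whose fraction p is parallel.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 4, 8, 64):
    print(cores, "cores:", round(amdahl_speedup(0.9, cores), 2))
# With p = 0.9: 1.82, 3.08, 4.71 and 8.77, far short of 64x on 64 cores.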