Feature - Common, cheap chips get jobs done faster
Although much of today’s most interesting research requires immense amounts of computational power, many scientists don’t have the funds to access expensive supercomputers or clusters. Now, a commonplace computer chip may provide a thrifty way to get those jobs done. Graphics processing units (GPUs) are found in most computers and are designed to render 3-D computer graphics. They contain hundreds of processing cores on a single chip, and several chips can fit into a single desktop. Computational geneticists Marc Suchard of the University of California, Los Angeles and Andrew Rambaut of the University of Edinburgh were able to use GPUs to speed up their computations by a factor of 100.
Suchard and Rambaut study the evolutionary relatedness of rapidly evolving viruses such as influenza. Because the genes of these viruses change so quickly, new strains can resist both our natural immunity and antiviral drugs. Understanding how such changes occur helps scientists develop drugs to fight future strains of the virus.

Since scientists cannot observe what past genetic sequences looked like, they must use complex probability models to reconstruct the most likely evolutionary path between the present strain and past strains. These calculations are so complex that they would take over a year to compute on a regular desktop computer. “Scientists don’t want to try computations that are going to take a year to run,” explained Suchard. “But if they can do the computations in less than a couple of days, scientists can now start to look at those computationally intensive research problems.”

To speed up their calculations by a factor of 100, researchers like Suchard and Rambaut could buy 25 computer nodes for about $50,000 – a steep price tag for many research labs. Three GPUs, however, can achieve the same speedup for a fraction of that price – about $1,200.

GPUs are not without their limitations. They are not yet in wide use because researchers have to write custom code for them, which is time-consuming, explained Don Holmgren of the computing division at Fermi National Accelerator Laboratory.
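To give a sense of what that custom code involves, the sketch below shows a minimal CUDA program in the data-parallel style GPUs require. The kernel, its name, and the toy per-site calculation are illustrative only, not taken from Suchard and Rambaut's software.

// Minimal sketch of GPU "custom code": thousands of threads each apply the
// same arithmetic to a different element of an array. The per-site operation
// here (scaling a value by a constant) is a made-up stand-in for the far more
// elaborate probability calculations used in real phylogenetic models.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void scaleSiteValues(const float* in, float* out, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        out[i] = in[i] * factor;                    // same operation on every element
}

int main()
{
    const int n = 1 << 20;                          // one million elements (arbitrary)
    size_t bytes = n * sizeof(float);

    // Host-side data
    float* hIn  = (float*)malloc(bytes);
    float* hOut = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) hIn[i] = 1.0f;

    // Device-side data
    float *dIn, *dOut;
    cudaMalloc((void**)&dIn, bytes);
    cudaMalloc((void**)&dOut, bytes);
    cudaMemcpy(dIn, hIn, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scaleSiteValues<<<blocks, threads>>>(dIn, dOut, 0.5f, n);

    // Copy the result back and print one value as a sanity check
    cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", hOut[0]);

    cudaFree(dIn); cudaFree(dOut);
    free(hIn); free(hOut);
    return 0;
}

Every thread executes the same line of arithmetic on a different piece of data, which is exactly the kind of uniform workload GPUs are built for, as the next paragraph explains.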
Some of the largest GPUs contain 240 cores arranged in sets of eight; each set can run a different computation, but the cores within a set must all perform the same operation. In a cluster, by contrast, each node can run a different program on a different dataset. GPUs are therefore suited only to computations that can be expressed in this uniform, data-parallel way, whereas clusters are more flexible.

Despite these limitations, Holmgren thinks GPUs could eventually be used within grids to further accelerate computations. “I think it’s going to take a little time before GPUs are used in grids because of the lack of current applications using them,” he said. “But once the code and better libraries are available, I think there will be a lot of interest in using GPUs with the grid.”

—Amelia Williamson, for iSGTW