Opinion - GPU-based cheap supercomputing coming to an end


Intel’s Sandy Bridge architecture places the processor and GPU on the same chip.

Image courtesy Greg Pfister.

Nvidia’s CUDA has been hailed as “Supercomputing for the Masses,” and with good reason: amazing speedups, ranging from 10x to several hundred, have been reported on scientific/technical code. CUDA has become a darling of academic computing and a major player in DARPA’s Exascale program, but performance alone does not account for that popularity; price clinches the deal. For all that computing power, CUDA-capable GPUs are incredibly cheap. As Sharon Glotzer of UMich noted, “Today you can get two teraflops for $500. That is ridiculous.” It is indeed. And it’s only possible because CUDA is subsidized by sinking the fixed costs of its development into the high volumes of Nvidia’s mass-market, low-end GPUs.
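To make the appeal concrete, here is a minimal sketch (mine, not from any particular application) of the kind of data-parallel kernel CUDA makes easy to write: SAXPY, computing y = a*x + y with one GPU thread per vector element. Running thousands of such threads concurrently is where the reported speedups come from.

    // SAXPY: y = a*x + y, one thread per vector element. Illustrative sketch only.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)                                      // guard against running past the arrays
            y[i] = a * x[i] + y[i];
    }

    // Host-side launch: enough 256-thread blocks to cover all n elements,
    // assuming d_x and d_y have already been allocated and copied to the GPU:
    //     saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);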

Unfortunately, that subsidy won’t last forever; its end is now visible. Intel has started pounding the marketing drums on something long predicted: integrating its graphics onto the same die as its next-generation “Sandy Bridge” processor chip, due out in mid-2011.

Probably not coincidentally, mid-2011 is when AMD’s Llano processor will see daylight. It incorporates enough graphics-related processing to be an apparently decent DX11 GPU, although to my knowledge the architecture hasn’t been disclosed in detail.

Just prior to this fall’s IDF (Intel Developer Forum), AnandTech received an early demo part of Sandy Bridge and checked out the graphics, among other things. Their net is that for this early chip, with early device drivers, at a low but usable resolution (1024x768), there’s adequate performance on games like “Batman: Arkham Asylum,” “Call of Duty: Modern Warfare 2,” and a bunch of others, significantly including “World of Warcraft.” And it’ll play Blu-ray 3D, too.

AnandTech’s conclusion is, “If this is the low end of what to expect, I’m not sure we’ll need more than integrated graphics for non-gaming specific notebooks.” I agree. I’d add desktops, too. Nvidia isn’t standing still, of course; on the low end, it says it will do 3D, too, and save power. But integrated graphics are, effectively, free. Cheaper than free, actually, since there’s one less chip to put on the motherboard, saving socket, board space, and wiring costs. The power supply will probably shrink slightly, too.

AMD has, of course, also been demonstrating its CPU/GPU Fusion architecture.

This means the end of the low-end graphics subsidy of high-performance GPGPUs like Nvidia’s CUDA-based products. That subsidy is very significant, because the fixed costs of developing any chip family are very large; spreading them over a high-volume low end makes a major difference, even if the high end brings in substantial revenue. So prices will rise, and GPGPUs will no longer have a huge price advantage over purpose-built HPC gear. How much will they rise? It’s very hard to say, but I have one somewhat wobbly data point suggesting the difference will be substantial.
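To see why that amortization matters, consider a stylized example (all numbers purely illustrative, not Nvidia’s actual costs or volumes). The same fixed development cost adds almost nothing per unit at consumer volumes, but thousands of dollars per unit at HPC-only volumes:

    Fixed cost of developing a chip family:  $500M (assumed)
    Spread over 50M consumer GPUs:           $500M / 50M  =    $10 added per unit
    Spread over 100K HPC-only boards:        $500M / 100K = $5,000 added per unit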

In a recent talk at CSU, Dr. Richard Linderman, chief scientist of the Information Directorate at the Air Force Research Lab in Rome, NY, spoke of building systems using IBM Cell chips. He could build them from Sony PS3s, or from an IBM offering of two Cells plus a host CPU in a 1U rack-mount unit (actually, I think that product was a blade, but it doesn’t matter here).

The PS3s, subsidized through huge volumes and game sales, were $380 each. The IBM offering, designed and priced for HPC, was about $6,000 at its cheapest. So they went with as many PS3s as they could buy, mounted them in grocery-store bread racks, and merrily started computing.
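The rough per-Cell arithmetic, reconstructed from the figures above (assuming one Cell per PS3 and two per IBM unit):

    PS3:       1 Cell  / $380    ≈   $380 per Cell
    IBM unit:  2 Cells / $6,000  ≈ $3,000 per Cell
    Ratio:     $3,000 / $380     ≈ 8x per Cell, nearly 16x per box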

This is about a 10X difference, and that with the chip design itself already amortized; it comes just from repackaging into something not designed for high-volume manufacturing. The IBM product did add some HPC-targeted features, like InfiniBand links in and out rather than 1Gb Ethernet, but even so, a factor of ten not counting chip development is significant.

That is just one data point, and it is wobbly – IBM’s DNA doesn’t include producing high-volume anything – but it is consistent with a thorough sinking of the cheap part of the GPGPU phenomenon.

Now, the market for HPC gear is certainly expanding. In a long talk at ISC 2010 in Berlin, Intel’s Kirk Skaugen (VP of the Intel Architecture Group and GM of the Data Center Group) stated that HPC was now 25% of Intel’s revenue, double the HPC share I last heard a few years ago. That justifies Intel’s investment in MIC, its HPC-oriented Many Integrated Core architecture, which is Larrabee with a different name and no graphics. But revenue doesn’t directly translate into the volumes of low-end graphics.

So enjoy your “supercomputing for the masses,” while it’s around; its days are numbered.

A version of this article originally appeared in Greg Pfister’s blog, Perils of Parallel.

—Greg Pfister
