
Merging black holes

This movie is divided into two parts, each showing a different numerical simulation, with brief captions describing what is shown. Part 1: binary black holes orbit, lose energy to gravitational radiation, and finally collide, forming a single black hole; the gravitational waveform, spacetime curvature, and orbital trajectories are shown. Part 2: the event horizon and apparent horizons for the head-on collision of two black holes. CC BY-NC 2.5, Simulating eXtreme Spacetimes, a Caltech-Cornell project.

The Simulating eXtreme Spacetimes (SXS) project generates fantastic simulations like those shown above using a code called the Spectral Einstein Code, or SpEC for short.
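The "spectral" in the name refers to spectral methods: fields are represented by expansions in smooth basis functions, so a modest number of grid points yields very high accuracy. As a rough illustration of why that pays off, here is a minimal Chebyshev collocation differentiation sketch in Python; it is a generic textbook construction, not code from SpEC itself.

```python
import numpy as np

def cheb_diff_matrix(n):
    """Chebyshev collocation differentiation matrix on [-1, 1].

    Standard construction: for a function sampled at the n+1
    Chebyshev points x, the derivative values are D @ f.
    """
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Chebyshev points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # diagonal via negative row sums
    return D, x

# Differentiate a smooth test function: the error decays exponentially
# with n ("spectral accuracy"), so few grid points suffice.
D, x = cheb_diff_matrix(16)
f = np.exp(x) * np.sin(5 * x)
df_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print(np.max(np.abs(D @ f - df_exact)))  # tiny already at n = 16
```

That exponential convergence is what lets a spectral code reach the waveform accuracies discussed below with comparatively few points per dimension.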

We tracked down a member of the collaboration to ask a few questions.

iSGTW: The first paper using SpEC was published in 2000. Has the code continued to undergo development since then? Is development ongoing?

Harald Pfeiffer, Canadian Institute for Theoretical Astrophysics and University of Toronto: The code has been under continual development since then. In fact, previous versions of the code date back several years earlier.

iSGTW: How resource-intensive is this code - can it run these simulations overnight on a workstation? Or does it need many hundreds or thousands of CPU-hours?

Pfeiffer: Binary compact object simulations (where each object can be either a black hole or a neutron star) require tens to hundreds of thousands of CPU-hours per run. For binary black holes, the high cost is mostly driven by the high accuracy required for gravitational wave detectors (these detectors use our simulations as filters to enhance their sensitivity). For neutron star-black hole and neutron star-neutron star binaries, the high cost is mostly driven by the large number of physical effects that need to be simulated: hydrodynamics, magnetic fields, nuclear physics, neutrinos...
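For a rough sense of scale, here is a back-of-the-envelope conversion of those CPU-hour budgets into wall-clock time; the core counts are illustrative assumptions for the example, not figures from the interview.

```python
# Rough wall-clock estimate for runs costing a given number of CPU-hours.
# Core counts are illustrative assumptions, not figures from the interview.
for cpu_hours in (10_000, 100_000):
    for cores in (128, 1024):
        days = cpu_hours / cores / 24
        print(f"{cpu_hours:>7} CPU-hours on {cores:>4} cores ~ {days:5.1f} days")
```

Even at the low end, such runs are far beyond an overnight workstation job, which is why parallel clusters are essential.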

iSGTW: How much of the SpEC code is parallelized, and what kind of parallelism are we talking about -- are the parallel calculations independent of each other, or are they dependent, requiring a low-latency connection between nodes?

Pfeiffer: Given our CPU requirements, we have to run in parallel. We use MPI and need a moderately fast interconnect. InfiniBand is best; Gigabit Ethernet loses about 20% efficiency. That loss is not terrible, and we do run on Gigabit Ethernet clusters, as it is often easier to get compute time there.
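The latency sensitivity comes from the communication pattern: in a domain-decomposed evolution, neighboring subdomains must exchange boundary data every time step, so each rank blocks on small messages thousands of times per run. The mpi4py sketch below shows that generic halo-exchange pattern; it illustrates the idea and is not SpEC's actual communication layer.

```python
# Minimal 1-D domain-decomposition halo exchange with mpi4py.
# Generic sketch of the tightly coupled pattern (neighbors swap boundary
# data every step) that makes interconnect latency matter.
# Run with, e.g.: mpirun -np 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

n = 100                      # interior points per rank
u = np.zeros(n + 2)          # two ghost cells hold neighbor data
u[1:-1] = rank               # dummy initial data

for step in range(1000):
    # Exchange boundary values with both neighbors; every rank blocks
    # here each step, so message latency directly limits scaling.
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Cheap local update (placeholder for the real physics): smoothing.
    u[1:-1] = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])
```

Because each step's update depends on the freshly exchanged ghost values, the ranks cannot run independently, which is exactly why a low-latency interconnect such as InfiniBand helps.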

iSGTW: What kind of architectures does SpEC run on -- has it run on clusters? Grids? Clouds? Supercomputers?

Pfeiffer: Beowulf clusters and supercomputers. We run on in-house clusters at Caltech and CITA, and on various supercomputers (Kraken, Ranger, and Lonestar, funded through the NSF TeraGrid, and SciNet at the University of Toronto, funded by Compute Canada).

For more simulations, or to learn more about extreme spacetime physics, visit the SXS collaboration's homepage, or skip straight to their movies page.
