Beyond the Standard Model

A new frontier for physics could be just around the corner – or the accelerator ring, in this case. This picture of the LHC tunnel was taken while the accelerator was still under construction. Courtesy of CERN.

Lattice gauge theory researchers are using computing to ask an intriguing question: What if the Large Hadron Collider’s results aren’t what we expect? What if the much-anticipated Higgs boson turns out to be heavier than predicted, or isn’t found at all?

“In the Standard Model there is a definitive prediction of where they should find the Higgs particle. They know its mass, or its narrow mass range, and the LHC is perfectly capable of finding such a particle,” explained Julius Kuti, a lattice gauge theory researcher at the University of California at San Diego. “It is not unlikely that the elementary Higgs particle, which is a key part of the Standard Model, will be replaced by a new scenario Beyond the Standard Model – or BSM, as it is widely known.”

A variety of BSM theories might be able to explain new experimental results at the LHC. The composite Higgs scenario Kuti is working on, for example, re-imagines the Higgs as a composite of new sub-quark-like particles.

Kuti’s team is using lattice gauge theory techniques to work out the theoretical predictions of the composite Higgs scenario. In lattice gauge theory, components such as the gauge field and the fermions are distributed over a regular four-dimensional grid – three space dimensions and one time dimension.
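
To give a feel for what that means in practice, here is a minimal sketch – illustrative C++ rather than the group’s actual code, with a made-up toy lattice size – of the basic bookkeeping: each site of the four-dimensional grid is labeled by coordinates (x, y, z, t) and flattened into a single array index, with field values stored per site and gauge links stored per site and per direction.

    #include <cstdio>

    // Hypothetical toy lattice: 8^3 spatial sites x 16 time slices.
    const int NX = 8, NY = 8, NZ = 8, NT = 16;

    // Flatten a 4-D site coordinate (x, y, z, t) into one array index,
    // wrapping around at the edges (periodic boundary conditions).
    int site_index(int x, int y, int z, int t) {
        x = (x % NX + NX) % NX;
        y = (y % NY + NY) % NY;
        z = (z % NZ + NZ) % NZ;
        t = (t % NT + NT) % NT;
        return ((t * NZ + z) * NY + y) * NX + x;
    }

    int main() {
        const int volume = NX * NY * NZ * NT;
        // A fermion field carries one value per site; a gauge field carries
        // one link variable per site per direction (4 directions in 4-D).
        printf("sites: %d, gauge links: %d\n", volume, 4 * volume);
        // Neighboring sites are one step away in a single direction,
        // e.g. the site one step forward in time from the origin:
        printf("+t neighbor of the origin has index %d\n", site_index(0, 0, 0, 1));
        return 0;
    }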

The bad news? Solving a composite Higgs model requires a tremendous amount of computational power. The good news is that, “after tricks and hard work,” the calculations are suited to using GPUs, according to Kuti.

By the numbers...

The UCSD GPU cluster is moderately sized, but it does the job.

  • 12 nodes
  • 4 Fermi Tesla C2050 GPUs per node
  • 48 GPUs total
  • 2 CPUs per node
  • 4 cores per CPU
  • $150,000 total cost
  • approximately 48 teraflops peak performance
  • 4 teraflops sustained in composite Higgs calculations
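
Those last two figures are roughly consistent with each other. Assuming the approximately one-teraflop single-precision peak of a Tesla C2050 (a specification not quoted in the sidebar):

    48 GPUs × ~1 teraflop per GPU ≈ 48 teraflops peak
    4 teraflops sustained ÷ 48 teraflops peak ≈ 8% of peak

Sustained rates well below peak are common in lattice calculations, which tend to be limited by memory bandwidth rather than raw arithmetic throughput.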

To meet their computing needs on a tight budget, the UCSD lattice gauge group built a cluster of GPUs in their LHC Tier-2 center. The GPU cluster was assembled in November, tested, and then debugged; the first composite Higgs calculations began running in February.

But to get their models onto the GPUs, they had to tailor their code.

Kuti was able to adapt existing code developed by a colleague and collaborator who had already run some quantum chromodynamics calculations – the research field that originally drove the development of lattice gauge theory – on GPUs. The most important parts of the code were written in CUDA, NVIDIA’s programming platform for GPUs. Writing code for GPUs is not as difficult as it once was, but Kuti estimates that it is still five to ten times more time-consuming than writing the same code in a high-level standard language with MPI.
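
For a flavor of what such GPU code looks like – a generic illustrative CUDA sketch, not the collaborator’s QCD code – the kernel below applies a simple site-by-site update, y ← a·x + y, across a hypothetical toy lattice, one thread per site. Operations of this kind sit inside the iterative solvers that consume most of the time in lattice calculations.

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // One thread per lattice site: every site's update is independent,
    // which is why the work maps naturally onto thousands of GPU threads.
    __global__ void axpy_kernel(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            y[i] = a * x[i] + y[i];
        }
    }

    int main() {
        // Hypothetical toy lattice: 8^3 x 16 sites, one real number per site.
        const int n = 8 * 8 * 8 * 16;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);

        float *d_x = nullptr, *d_y = nullptr;
        cudaMalloc((void **)&d_x, n * sizeof(float));
        cudaMalloc((void **)&d_y, n * sizeof(float));
        cudaMemcpy(d_x, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_y, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover every site.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        axpy_kernel<<<blocks, threads>>>(n, 0.5f, d_x, d_y);
        cudaDeviceSynchronize();

        cudaMemcpy(y.data(), d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("y[0] = %.2f (expected 2.50)\n", y[0]);

        cudaFree(d_x);
        cudaFree(d_y);
        return 0;
    }

In a production lattice code, the per-site data would be 3×3 complex matrices and multi-component spinors rather than single numbers, but the thread-per-site pattern is the same.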

Even with the additional computing power of the GPU cluster, a researcher will need to use it for a year or so to study a given BSM model. Over the course of that year, the researcher will run the model for a few months, look at the results, and then either submit another set of calculations for the same model or tweak the parameters of the previous calculation to extract predictions relevant to new LHC physics.

Although the new GPU cluster at UCSD is integrated into the local LHC Tier-2 center, it is not hooked up to the Open Science Grid; the group’s own calculations already leave it with no spare cycles. In fact, Kuti’s group also draws on significant computing resources from the United States QCD (USQCD) collaboration’s cyberinfrastructure, including resources at Fermilab, Jefferson Lab, and Brookhaven National Laboratory.
