Fermilab Today Friday, Jan. 20, 2012
Feature

New GPUs power through complex QCD calculations

Amitoj Singh and Don Holmgren display one of the new GPUs installed to calculate lattice QCD. Credit: Brad Hooker

As Fermilab employees bundle up and head out into the cold at the close of each day, the Grid Computing Center (GCC) stays hot - literally. The center hums with the sound of cooling fans, as thousands of processors drill away at a single problem in a series of enormously complex calculations known as lattice quantum chromodynamics.

QCD, the theory of quarks and gluons, is a way of relating the unusual interactions of these fundamental particles to what can be observed in experiments. The method for determining the properties of particles that contain these quarks and gluons is called lattice QCD. Created in the early 1970s, this intensive calculation system has gained strength in recent years with the advent of high-powered computer processors.
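
A standard example, not spelled out in the article, is Wilson's lattice formulation: spacetime is replaced by a four-dimensional grid of sites, the gluon field lives on the links between neighboring sites, and the simplest gauge action adds up a contribution from every elementary square, or plaquette, of the grid:

    S = \beta \sum_{x,\ \mu<\nu} \Big[ 1 - \tfrac{1}{3}\,\mathrm{Re}\,\mathrm{Tr}\,U_{\mu\nu}(x) \Big],
    \qquad U_{\mu\nu}(x) = U_\mu(x)\,U_\nu(x+\hat\mu)\,U_\mu^\dagger(x+\hat\nu)\,U_\nu^\dagger(x)

Each plaquette variable U_{\mu\nu}(x) is the product of the four link matrices around one square of the grid. Estimating quantities built from sums like this over millions of lattice sites, typically by Monte Carlo sampling, is what creates the appetite for processing power described below.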

The newest addition to the GCC is a cluster of graphics processing units, or GPUs. These GPUs power through data faster than any other nodes in the building, at more than five times the rate of the previous generation's CPUs.

The cluster is part of a national project called USQCD. This collaboration, incorporating nearly every lattice theorist in the country, develops the computing software and hardware needed to meet the high demands of lattice QCD.

"Lattice calculations are exciting because, as they improve more and more, they give us a cleaner and cleaner search for effects beyond the Standard Model that haven't been discovered yet in the dynamics of the particles containing quarks," said Paul Mackenzie, spokesman for the national collaboration of QCD and a theoretical physicist at Fermilab.

The calculations, requiring tens of thousands of processors, have propelled a constant evolution in processing cores at the laboratory. With the number of transistors on a chip doubling approximately every two years, the nodes at Fermilab are superseded by a new generation of processors every two to four years.

Last week, a team led by Don Holmgren and Amitoj Singh from the Computing Division installed Fermilab's largest cluster yet of GPUs in the GCC farm. The cluster adds an extra 45 teraflops of processing power to tackle the lattice QCD algorithms.

"We have a lot of rich experience within our department," Singh said. "So we'll be confident handling this machine. This is not the first time we've had leading-edge technology at this laboratory."

The next step for USQCD will be collaborating with GPU-manufacturing giant NVIDIA to port the lattice calculation codes to thousands of GPUs.
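
To give a flavor of what porting such code involves, here is a minimal, hypothetical CUDA sketch (not the USQCD or NVIDIA code): each GPU thread updates one site of a four-dimensional lattice from its nearest neighbors. Real lattice QCD kernels act on SU(3) matrices and spinors rather than a single number per site, but they share this data-parallel, nearest-neighbor access pattern, which is exactly what GPUs handle well.

    // Illustrative sketch only: relax a scalar field on a periodic 4-D lattice
    // toward the average of its eight nearest neighbors, one GPU thread per site.
    #include <cstdio>
    #include <cuda_runtime.h>

    #define L 16                        // sites per dimension (hypothetical size)
    #define VOL (L * L * L * L)         // total number of lattice sites

    __device__ int wrap(int c) { return (c + L) % L; }    // periodic boundary

    __device__ int idx(int x, int y, int z, int t) {
        return ((t * L + z) * L + y) * L + x;              // linear site index
    }

    __global__ void relax(const float* in, float* out) {
        int s = blockIdx.x * blockDim.x + threadIdx.x;     // one thread per site
        if (s >= VOL) return;
        int x = s % L, y = (s / L) % L, z = (s / (L * L)) % L, t = s / (L * L * L);
        // Sum the two nearest neighbors in each of the four directions.
        float sum =
            in[idx(wrap(x + 1), y, z, t)] + in[idx(wrap(x - 1), y, z, t)] +
            in[idx(x, wrap(y + 1), z, t)] + in[idx(x, wrap(y - 1), z, t)] +
            in[idx(x, y, wrap(z + 1), t)] + in[idx(x, y, wrap(z - 1), t)] +
            in[idx(x, y, z, wrap(t + 1))] + in[idx(x, y, z, wrap(t - 1))];
        out[s] = 0.125f * sum;
    }

    int main() {
        float *d_in, *d_out;
        cudaMalloc(&d_in, VOL * sizeof(float));
        cudaMalloc(&d_out, VOL * sizeof(float));
        cudaMemset(d_in, 0, VOL * sizeof(float));
        int threads = 256, blocks = (VOL + threads - 1) / threads;
        relax<<<blocks, threads>>>(d_in, d_out);           // every site updated in parallel
        cudaDeviceSynchronize();
        printf("updated %d lattice sites\n", VOL);
        cudaFree(d_in);
        cudaFree(d_out);
        return 0;
    }

Because every site can be updated independently, tens of thousands of GPU threads can work at once; scaling the same idea across thousands of GPUs is the harder problem the collaboration is tackling.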

While private industry would like to see increasingly powerful processors applied to cellphones, laptops and other consumer electronics, the project hopes first to answer the question of how important GPUs will be to scientific computing.

"We don't know if GPU-like chips are the way of the future or just a flash in the pan now," Mackenzie said. "The scientific computing world is changing and we don't know exactly how it's going to change over the years. Computers 10 years from now will look very different from how they've looked the last 10 years."

Brad Hooker
