Unclassified computing scales to new heights
Al Chu (left) and Ryan Braby check Sierra, which is housed in the Bldg. 451 computer room.
Unclassified high performance computing at the Laboratory will scale new heights with the recent installation of Sierra, a Dell supercomputing system.
Clocking in at a peak speed of 261 teraFLOP/s (trillion floating-point operations per second), Sierra will become the most powerful high performance computing (HPC) resource available for unclassified research at LLNL. The new system was brought in for the Lab’s Multi-programmatic and Institutional Computing (M&IC) program to support Laboratory Directed Research and Development (LDRD), Grand Challenge and other unclassified research efforts.
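For readers curious where a "peak teraFLOP/s" figure comes from, it is the theoretical maximum: total cores times clock speed times floating-point operations per cycle. The sketch below shows the arithmetic; the node count, core count, clock rate, and FLOPs-per-cycle values are illustrative placeholders chosen to land near 261 teraFLOP/s, not Sierra's published specifications.

```python
# Rough sketch of how a theoretical peak FLOP/s rating is computed.
# All numbers below are hypothetical placeholders, not Sierra's
# actual hardware configuration.

def peak_teraflops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    """Theoretical peak = total cores x clock (GHz) x FLOPs per cycle."""
    total_cores = nodes * cores_per_node
    peak_gflops = total_cores * clock_ghz * flops_per_cycle  # GFLOP/s
    return peak_gflops / 1000.0                              # TFLOP/s

# Example with made-up numbers that land near a 261 TFLOP/s peak:
print(peak_teraflops(nodes=1944, cores_per_node=12,
                     clock_ghz=2.8, flops_per_cycle=4))
```

Note that real applications sustain only a fraction of this theoretical peak; the Top500 list ranks machines by measured LINPACK performance, which is lower.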
Research areas requiring Sierra’s number-crunching power include climate modeling and atmospheric simulations, supernova and planetary science, materials and strength modeling, laser and plasma simulations, and biology and life sciences.
“Every year the demand for (unclassified) high performance computing increases faster than we can fulfill it,” said Brian Carnes, program director for M&IC. “Sierra will help. Providing young talent with the computing cycles they need is critical to the Laboratory’s maintaining pre-eminence in simulation science.”
“In recent years, the request for HPC cycles for worthy Grand Challenge research projects has far outstripped the available cycles. Sierra will provide additional cycles to the benefit of projects ranging from molecular imaging, climate change, astrophysics, inertial confinement fusion, and catalytic chemistry to seismic and nuclear explosion monitoring,” said Fred Streitz, director of the Institute for Scientific Computing Research (ISCR), who oversees the Grand Challenge Program.
Grand Challenge projects are collaborative research efforts with academic institutions and other labs in fields of interest to the Lab — projects that push the envelope of “capability” computing. In capability computing, a powerful HPC system is largely dedicated to taking on a single, highly complex and difficult problem. In “capacity” computing, an HPC system is subdivided to run multiple jobs simultaneously.
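The capability-versus-capacity distinction above can be illustrated with a toy scheduler: capability mode hands the machine to the single most demanding job, while capacity mode packs many smaller jobs side by side. The job names, node counts, and scheduling policy here are hypothetical simplifications, not how Livermore Computing's actual batch system works.

```python
# Toy illustration of capability vs. capacity scheduling.
# Job names, node counts, and policies are hypothetical.

TOTAL_NODES = 1000  # assumed machine size for this sketch

def capability_schedule(jobs):
    """Capability mode: dedicate the machine to the single most
    demanding job."""
    return [max(jobs, key=lambda j: j["nodes"])]

def capacity_schedule(jobs):
    """Capacity mode: greedily pack smaller jobs to run side by side."""
    running, free = [], TOTAL_NODES
    for job in sorted(jobs, key=lambda j: j["nodes"]):
        if job["nodes"] <= free:
            running.append(job)
            free -= job["nodes"]
    return running

jobs = [{"name": "climate",   "nodes": 900},
        {"name": "materials", "nodes": 300},
        {"name": "plasma",    "nodes": 200},
        {"name": "biology",   "nodes": 100}]

print([j["name"] for j in capability_schedule(jobs)])
# → ['climate']  (one big job owns the machine)
print([j["name"] for j in capacity_schedule(jobs)])
# → ['biology', 'plasma', 'materials']  (several jobs share it)
```

The trade-off is throughput versus problem size: capacity mode finishes more jobs per day, but only capability mode can tackle a problem that needs nearly the whole machine at once.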
HPC resources are also critical to Laboratory Directed Research and Development (LDRD) projects, higher-risk efforts with a big potential payoff. “LDRD is vital to the Laboratory’s R&D culture and the institution’s reputation for innovation,” said Judy Kammeraad of the LDRD program. “HPC is playing an increasingly important role in the scientific research and technology development at the heart of LDRD.”
With Sierra becoming available in the new fiscal year, Atlas’ 44.2 teraFLOP/s of computing muscle will increasingly be used to carry a capacity computing load. Sierra would rank just inside the top 20 of the current Top500 list of the world’s most powerful supercomputers, and it boosts Livermore Computing’s unclassified computing resources from about 440 teraFLOP/s to more than 700.