FROM TRINITY TO TRINITY
A New Frontier in Supercomputing
CHARLES McMILLAN, Laboratory Director
WELCOME to this issue of National Security Science.
This year, the Laboratory is celebrating its 70th anniversary, and this issue’s principal topic—advanced supercomputing—is
one that is dear to my heart: not only can today’s supercomputers trace their origins back to computers used in the Manhattan
Project, but I was privileged to work with colleagues in the early 1990s on the formation of the Accelerated Strategic
Computing Initiative (ASCI) program. ASCI launched the modern generation of supercomputers; it was a key component
of America’s plan to move from reliance on full-scale underground nuclear tests to much more sophisticated computer-based
simulations of full-scale weapon detonations.
When the United States stopped underground nuclear testing in 1992, the mission of the nuclear security labs fundamentally
changed: we went from designing, building, and testing nuclear weapons to using our science and engineering capabilities to
ensure that the stockpile of current weapons remained safe, secure, and effective into the future.
So far, we have succeeded in this mission. Last September, along with the directors of the other national security laboratories,
I sent my Annual Assessment Letter to the secretaries of Energy and Defense and to the Nuclear Weapons Council. My
letter states that after a thorough technical evaluation by Laboratory staff, I can give my assurance that the weapons in the
stockpile that Los Alamos stewards have no reliability or safety concerns that must be resolved with underground testing. My
predecessors have sent similar letters of assurance every year for the past 17 years.
Without supercomputing, we could not do the technical work that underwrites the Annual Assessment Letter. The weapons in
the stockpile are built of thousands of components that are now beyond their expected lifespan. These aging components must
be continuously studied and evaluated, and any problems that are found must be addressed.
In the absence of continued real-world underground nuclear testing, the national security laboratories have increased their
reliance on simulations, which have demonstrated increasing fidelity from year to year. This increased fidelity has placed
growing demands on our supercomputers. These supercomputers are unique, designed and built at the laboratories in
partnership with commercial vendors, and these new designs have found their way into commercial and academic applications.
It is not an exaggeration to say that the needs of Stockpile Stewardship have driven the evolution of the world's fastest supercomputers.
Because the weapons detonate in 3D, we need high-resolution 3D simulations to observe their performance. This is a much
greater challenge than the one faced by the Manhattan Project: it is one challenge to build a nuclear weapon that works, but it is
much harder to understand how and why it works—and we need that level of knowledge to simulate it in high-resolution 3D.
When we at ASCI first estimated what we would need by now in high-performance computing, we underestimated. In
my view, we must continue to advance the power and resolution of our computers to do our mission; the ongoing weapon
life-extension programs and our annual assessment of the deterrent depend on it.
This means a new frontier in supercomputing, one we are calling Trinity. With Trinity, we come full circle: the Trinity Test
of 1945 was the first full-scale, real-world test of a nuclear weapon; with the new Trinity supercomputer, our goal will be to
provide the computing power to explore one of the most challenging puzzles remaining from nuclear testing: a puzzle that has
eluded solution for almost 70 years.
I hope this issue of National Security Science leaves you with a better understanding of why and how supercomputing is
key to the Annual Assessment Letter and why our supercomputing capabilities must continue to grow.