Software Technology

Enabling a new era of computational and scientific capabilities by advancing high-performance computing to the exascale.

Contacts  

  • Galen Shipman, Applied Computer Science, (505) 665-4021
  • Michael Lang, Computer, Computational, and Statistical Sciences, (505) 500-2993
  • James Ahrens, Applied Computer Science, (505) 667-5797

Video (3:21): ExaSky: Next-generation dark matter cosmology simulations (demonstration)

The Department of Energy’s Exascale Computing Project (ECP) has selected 35 software development proposals representing 25 research and academic organizations.

The awards for the first year of funding total $34 million and cover many components of the software stack for exascale systems, including programming models and runtime libraries, mathematical libraries and frameworks, tools, lower-level system software, data management and I/O, and in situ visualization and data analysis.

Los Alamos-led

Enhancing and Hardening the Legion Programming System for the Exascale Computing Project

Led by Galen Shipman of the Computer, Computational, and Statistical Sciences Division and Professor Alex Aiken of Stanford University

Usable exascale systems will require significant advances in the programming environment to effectively manage billion-way concurrency and optimize data movement within a complex memory and storage hierarchy, while providing performance portability across different system architectures. The Legion programming system is well positioned to address these challenges. In this project, we will build upon the open-source Legion programming system, delivering new features required by exascale applications, integrating with other elements of the exascale software stack, and co-designing the system with hardware vendors.
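
For a sense of the model, the sketch below follows the hello-world pattern from Legion's public tutorials (the task IDs and names are illustrative, not this project's deliverables): the application registers tasks, and the runtime decides where they execute and when data moves.

    #include <cstdio>
    #include "legion.h"
    using namespace Legion;

    enum TaskIDs { TOP_LEVEL_TASK_ID, HELLO_TASK_ID };

    // A leaf task; the runtime decides where and when it runs.
    void hello_task(const Task *task,
                    const std::vector<PhysicalRegion> &regions,
                    Context ctx, Runtime *runtime) {
      std::printf("hello from a Legion task\n");
    }

    // The top-level task launches child tasks asynchronously.
    void top_level_task(const Task *task,
                        const std::vector<PhysicalRegion> &regions,
                        Context ctx, Runtime *runtime) {
      TaskLauncher launcher(HELLO_TASK_ID, TaskArgument(nullptr, 0));
      runtime->execute_task(ctx, launcher);
    }

    int main(int argc, char **argv) {
      Runtime::set_top_level_task_id(TOP_LEVEL_TASK_ID);
      {
        TaskVariantRegistrar r(TOP_LEVEL_TASK_ID, "top_level");
        r.add_constraint(ProcessorConstraint(Processor::LOC_PROC));
        Runtime::preregister_task_variant<top_level_task>(r, "top_level");
      }
      {
        TaskVariantRegistrar r(HELLO_TASK_ID, "hello");
        r.add_constraint(ProcessorConstraint(Processor::LOC_PROC));
        Runtime::preregister_task_variant<hello_task>(r, "hello");
      }
      return Runtime::start(argc, argv);
    }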

Algorithms and Infrastructure for In Situ Visualization and Analysis

Led by James Ahrens of the Computer, Computational, and Statistical Sciences Division

This project will deliver algorithms and infrastructure suitable for the visualization and analysis needs of exascale applications. Many high-performance simulation codes currently use post hoc processing: they write data to storage and analyze it afterwards. Given exascale storage constraints, in situ processing will be necessary. In situ data visualization and analysis selects, reduces, and generates extracts from scientific results during simulation runs to overcome bandwidth and storage bottlenecks. Our capability will leverage our existing, successful open-source visualization software packages, ParaView and VisIt. Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, and Kitware Incorporated will participate in the project.
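
To illustrate the in situ pattern, here is a minimal adaptor sketch against ParaView's legacy Catalyst co-processing API; the uniform grid and the "extracts.py" pipeline script are stand-ins for a real simulation mesh and a ParaView-exported analysis script.

    #include <vtkCPDataDescription.h>
    #include <vtkCPInputDataDescription.h>
    #include <vtkCPProcessor.h>
    #include <vtkCPPythonScriptPipeline.h>
    #include <vtkImageData.h>
    #include <vtkNew.h>

    int main() {
      vtkNew<vtkCPProcessor> processor;
      processor->Initialize();

      vtkNew<vtkCPPythonScriptPipeline> pipeline;
      pipeline->Initialize("extracts.py");  // ParaView-exported analysis script
      processor->AddPipeline(pipeline.GetPointer());

      for (int step = 0; step < 100; ++step) {
        vtkNew<vtkCPDataDescription> desc;
        desc->AddInput("input");            // name of the simulation grid
        desc->SetTimeData(step * 0.1, step);

        // Only build and hand off data if the pipeline wants this time step.
        if (processor->RequestDataDescription(desc.GetPointer()) != 0) {
          vtkNew<vtkImageData> grid;        // stand-in for the real mesh
          grid->SetDimensions(64, 64, 64);
          desc->GetInputDescriptionByName("input")->SetGrid(grid.GetPointer());
          processor->CoProcess(desc.GetPointer());  // write extracts in situ
        }
      }
      processor->Finalize();
      return 0;
    }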

Simplified Interface to Complex Memory

Led by Michael Lang of the Computer, Computational, and Statistical Sciences Division

As high-performance computing (HPC) hardware developers adopt new technology to address the challenges of exascale, much of the focus, and as a result the innovation, has landed on memory subsystems. Many new devices are available or emerging in the near term, creating a diverse and complicated programming environment for developers who wish to use this technology. The goal of this ECP project is to provide a common, simplified runtime and application interface to these many devices. This work will have broad applicability not only to exascale and HPC applications but more generally to Linux-based software development. This project is a collaboration between Los Alamos National Laboratory, Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, Sandia National Laboratories, and Georgia Tech.
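
As a rough analogy for what such an interface looks like in practice, the sketch below uses the existing memkind library, which already hides device-specific allocation behind a single malloc-style call; this project's actual interface may differ.

    #include <memkind.h>
    #include <cstdio>

    int main() {
      const size_t n = 1 << 20;

      // Ask for high-bandwidth memory (e.g., MCDRAM), falling back to
      // ordinary DRAM if none is available on this node.
      double *hot = static_cast<double *>(
          memkind_malloc(MEMKIND_HBW_PREFERRED, n * sizeof(double)));
      // Plain DRAM for colder data.
      double *cold = static_cast<double *>(
          memkind_malloc(MEMKIND_DEFAULT, n * sizeof(double)));

      if (!hot || !cold) {
        std::fprintf(stderr, "allocation failed\n");
        return 1;
      }

      for (size_t i = 0; i < n; ++i) hot[i] = cold[i] = 0.0;

      memkind_free(MEMKIND_HBW_PREFERRED, hot);
      memkind_free(MEMKIND_DEFAULT, cold);
      return 0;
    }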

Partner-led

Open MPI for Exascale (OMPI-X)

Led by David Bernholdt, Oak Ridge National Laboratory and Howard Pritchard, High Performance Computing Division

This project will focus on enhancing the message passing interface (MPI) standard, as well as the Open MPI implementation of the standard, to better support exascale-class applications. Work will include improving the scalability, resilience, and runtime interoperability of Open MPI with OpenMP and other programming models.
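
A minimal sketch of the hybrid MPI + OpenMP pattern this interoperability work targets: the application asks the MPI library for full threading support, parallelizes within each rank using OpenMP, and reduces across ranks with MPI.

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv) {
      int provided = 0;
      // Request full multi-threading support from the MPI library.
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

      int rank = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      double local = 0.0;
      // Node-level parallelism via OpenMP inside each MPI rank.
      #pragma omp parallel for reduction(+ : local)
      for (int i = 0; i < 1000000; ++i)
        local += 1.0 / (1.0 + i + rank);

      double global = 0.0;
      MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

      if (rank == 0)
        std::printf("threading level %d, sum = %f\n", provided, global);

      MPI_Finalize();
      return 0;
    }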

ECP VTK-m: Updating HPC Visualization Software for Exascale-Era Processors

Led by Ken Moreland, Sandia National Laboratories and Chris Sewell, Computer, Computational, and Statistical Sciences Division

This project will develop scientific visualization and analysis algorithms that take advantage of the shared-memory parallelism available on many-core CPUs and accelerators, such as Intel Xeon Phis and Nvidia GPUs. The VTK-m library may be used on its own or as a part of the popular scientific visualization ecosystem that includes the Visualization Toolkit (VTK), ParaView, Catalyst, and Cinema.
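
For flavor, a small sketch of VTK-m's device-portable style follows (assuming a recent VTK-m release; the API has shifted across versions): the same reduction runs on whichever serial, TBB, OpenMP, or CUDA backend VTK-m selects at runtime.

    #include <vtkm/cont/Algorithm.h>
    #include <vtkm/cont/ArrayHandle.h>
    #include <vtkm/cont/Initialize.h>
    #include <iostream>
    #include <vector>

    int main(int argc, char **argv) {
      vtkm::cont::Initialize(argc, argv);

      std::vector<vtkm::Float32> field(1024, 1.0f);
      auto handle = vtkm::cont::make_ArrayHandle(field, vtkm::CopyFlag::On);

      // Device-portable reduction over the field values.
      vtkm::Float32 sum = vtkm::cont::Algorithm::Reduce(handle, 0.0f);
      std::cout << "sum = " << sum << std::endl;
      return 0;
    }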

Data Libraries and Services Enabling Exascale Science

Led by Rob Ross, Argonne National Laboratory and Galen Shipman, Computer, Computational, and Statistical Sciences Division

High-performance computing applications are often composed of reusable software components for specialized services such as data management and I/O. This project will build upon the highly successful ROMIO, Parallel netCDF, Darshan, and Mercury open-source software projects to evolve and deliver new data management and I/O capabilities on exascale systems.
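
As a concrete example of the kind of service these components provide, the sketch below uses collective MPI-IO, the interface layer that ROMIO implements: each rank writes its slice of a distributed array into one shared file, and the library is free to aggregate and reorder the requests.

    #include <mpi.h>
    #include <vector>

    int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      const int n = 1024;  // elements per rank
      std::vector<double> slice(n, static_cast<double>(rank));

      MPI_File fh;
      MPI_File_open(MPI_COMM_WORLD, "checkpoint.dat",
                    MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

      // Collective write: ROMIO can aggregate and optimize these requests.
      MPI_Offset offset = static_cast<MPI_Offset>(rank) * n * sizeof(double);
      MPI_File_write_at_all(fh, offset, slice.data(), n, MPI_DOUBLE,
                            MPI_STATUS_IGNORE);

      MPI_File_close(&fh);
      MPI_Finalize();
      return 0;
    }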

ECP Applications: Effective Use of Kokkos to Achieve Performance Portability Across Exascale Architectures

Led by Carter Edwards, Sandia National Laboratories and Galen Shipman, Computer, Computational, and Statistical Sciences Division

This project will enable applications to productively develop node-level parallelism for exascale systems. It builds upon prior work on the Kokkos programming model and will deliver a performance-portable, node-level programming model across exascale compute-node architectures such as multicore CPUs, manycore CPUs, and GPUs. The primary focus of this project will be the delivery of these technologies to ECP applications.
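
A minimal sketch of Kokkos-style performance portability: the same parallel_for and parallel_reduce compile down to OpenMP threads on CPUs or CUDA kernels on GPUs, depending on the backend chosen at build time.

    #include <Kokkos_Core.hpp>
    #include <cstdio>

    int main(int argc, char *argv[]) {
      Kokkos::initialize(argc, argv);
      {
        const int n = 1 << 20;
        // Views allocate in the memory space of the default execution space.
        Kokkos::View<double *> x("x", n), y("y", n);

        Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
          x(i) = 1.0;
          y(i) = 2.0;
        });

        double dot = 0.0;
        Kokkos::parallel_reduce(
            "dot", n,
            KOKKOS_LAMBDA(const int i, double &sum) { sum += x(i) * y(i); },
            dot);
        std::printf("dot = %f\n", dot);
      }
      Kokkos::finalize();
      return 0;
    }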

Programming Toolchain for Emerging Architectures and Systems (PROTEAS)

Led by Jeff Vetter, Oak Ridge National Laboratory and Kei Davis, Computer, Computational, and Statistical Sciences Division

While machine architectures have become increasingly parallel and computing resources increasingly heterogeneous, some fundamental properties of mainstream compilation toolchains have remained unchanged since their inception. This project will develop a toolchain that incorporates, in a fully integrated fashion, concepts such as parallelism and facilities such as performance profiling. Such capabilities address two of the most important challenges facing applications targeting future exascale computing platforms: programmer productivity and performance portability.
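
As one illustration (ours, not necessarily this project's), directive-based parallelism such as OpenACC is the kind of construct an integrated toolchain would treat as a first-class citizen: the compiler itself, rather than a bolted-on library, maps the loop onto heterogeneous hardware.

    #include <cstdio>

    int main() {
      const int n = 1 << 20;
      static float x[1 << 20], y[1 << 20];

      for (int i = 0; i < n; ++i) {
        x[i] = 1.0f;
        y[i] = 2.0f;
      }

      // The compiler decides how to map this loop onto the target
      // (GPU gangs/vectors or CPU threads) from the directive alone.
      #pragma acc parallel loop copyin(x) copy(y)
      for (int i = 0; i < n; ++i)
        y[i] += 2.0f * x[i];

      std::printf("y[0] = %f\n", y[0]);
      return 0;
    }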