Los Alamos National Laboratory IS&T Co-Design Summer School
Train future scientists to work across disciplines to solve today's challenges

Runtime Systems

The research areas span a wide array of topics in computer science and applied mathematics with applications to scientific problems.

Key Personnel  

  • School & Application Science Lead: Ben Bergen, (505) 667-1699
  • Computer Science Lead: Allen McPherson, (505) 665-6548
  • Christoph Junghans, (505) 665-2278
  • Sponsor (IS&T Center): Frank Alexander, (505) 665-4518

The vast majority of today’s scientific computing conforms to the MPI+X model, where X is generally a standard programming language, such as C, C++, or FORTRAN, possibly combined with a node-level acceleration scheme such as CUDA or OpenMP. Load balancing and fault tolerance, if desired, are left to the developer.
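For reference, the sketch below shows the MPI+X pattern in its most common form, with OpenMP as the X: each MPI rank owns a fixed slice of the domain and communicates explicitly, while OpenMP threads use the cores within the rank. The decomposition and array sizes are illustrative only.

    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);                 // distributed level: one MPI rank per process
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        // Each rank owns a fixed slice of the domain; redistributing work between
        // ranks (load balancing) would have to be coded explicitly by the developer.
        const int n_local = 1000000;            // illustrative size
        std::vector<double> u(n_local, 1.0 + rank);

        double local_sum = 0.0;
        // Node-level "X": OpenMP threads across the cores owned by this rank.
        #pragma omp parallel for reduction(+ : local_sum)
        for (int i = 0; i < n_local; ++i)
            local_sum += u[i];

        double global_sum = 0.0;
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0) std::printf("global sum = %f\n", global_sum);
        MPI_Finalize();
        return 0;
    }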

This is particularly problematic for applications that take advantage of techniques such as adaptive mesh refinement. As regions of the domain are refined and coarsened, the workload can change drastically. This is exacerbated when multi-time stepping is involved, since task lengths vary. Similarly, as the refinement level increases, the amount of data, and the potential losses due to a fault, increase accordingly.

A growing trend in recent years is the development and use of system-level runtimes intended both to provide a more intuitive means of exploiting the available parallelism and to supply mechanisms that simplify load balancing.

To this end, we have surveyed contemporary runtime systems with respect to the perceived challenges of this application:

  • The ability to work with tree-based data structures that, while highly conducive to adaptive mesh refinement, do not exhibit easily exploitable spatial locality
  • Tools to load balance workloads that can change drastically during execution
  • A focus on execution on distributed systems

As a result of this survey, we have chosen to study four runtimes: the University of Illinois’s Charm++, Louisiana State University’s HPX, the Open Community Runtime (OCR) project, and Concurrent Collections (CnC).

Charm++

The Charm++ programming language provides a high-level abstraction for parallel programming that aims to offer portable performance while also enhancing programmer productivity. Unlike many approaches to parallel programming, Charm++ does not use traditional message passing to dictate program flow; instead, it uses asynchronous, message-driven workloads to expose additional parallelism. Additional features offered by Charm++ include automatic checkpointing, fault tolerance, dynamic load balancing, and processor-count-agnostic restart, all of which are highly desirable when running at large scale.
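The sketch below illustrates the message-driven style with a minimal chare-array example modeled on the standard Charm++ tutorials; the hello module name, the array size of four, and the file layout are illustrative, and the .ci interface file shown in the leading comment must be compiled with charmc to generate the included headers.

    // Interface file hello.ci (compiled with charmc to produce hello.decl.h / hello.def.h):
    //
    //   mainmodule hello {
    //     readonly CProxy_Main mainProxy;
    //     mainchare Main {
    //       entry Main(CkArgMsg* m);
    //       entry void done();
    //     };
    //     array [1D] Hello {
    //       entry Hello();
    //       entry void greet();
    //     };
    //   };

    #include "hello.decl.h"

    /* readonly */ CProxy_Main mainProxy;

    class Main : public CBase_Main {
      int remaining;
    public:
      Main(CkArgMsg* m) : remaining(4) {
        delete m;
        mainProxy = thisProxy;
        // Create a chare array; the runtime decides where elements live and may
        // migrate them later for load balancing.
        CProxy_Hello workers = CProxy_Hello::ckNew(remaining);
        workers.greet();          // asynchronous broadcast: returns immediately
      }
      void done() {               // entry method driven by messages from the workers
        if (--remaining == 0) CkExit();
      }
    };

    class Hello : public CBase_Hello {
    public:
      Hello() {}
      Hello(CkMigrateMessage*) {} // needed so elements can be migrated
      void greet() {
        CkPrintf("Hello from element %d\n", thisIndex);
        mainProxy.done();         // send a message back; no blocking receives anywhere
      }
    };

    #include "hello.def.h"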

HPX

HPX is built around a unified programming model designed to transparently utilize all available resources. It leverages the C++11 standard and the Boost C++ libraries to provide a familiar, message-driven interface with a focus on futures.
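For illustration, here is a minimal futures-based sketch using hpx::async; the triangular function and the problem sizes are our own, and header names differ somewhat between HPX releases.

    #include <hpx/hpx_main.hpp>   // runs plain main() inside the HPX runtime
    #include <hpx/future.hpp>     // hpx::async and hpx::future
    #include <cstdint>
    #include <iostream>

    // An ordinary function that HPX schedules as a lightweight task.
    std::uint64_t triangular(std::uint64_t n) {
        std::uint64_t sum = 0;
        for (std::uint64_t i = 1; i <= n; ++i) sum += i;
        return sum;
    }

    int main() {
        // Each call returns immediately with a future; the work runs concurrently.
        hpx::future<std::uint64_t> a = hpx::async(triangular, 1000000);
        hpx::future<std::uint64_t> b = hpx::async(triangular, 2000000);

        // Futures compose: attach a continuation instead of blocking.
        hpx::future<std::uint64_t> doubled =
            b.then([](hpx::future<std::uint64_t> f) { return 2 * f.get(); });

        // Results are only waited on at the point they are actually needed.
        std::cout << "a + 2*b = " << a.get() + doubled.get() << "\n";
        return 0;
    }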

Open Community Runtime (OCR) Project

The OCR project is developing an application-building framework aimed at exploiting systems with high core counts, with an initial focus on HPC applications. To that end, OCR is designed as a tool to help application developers improve power efficiency, programmability, and reliability.

Concurrent Collections (CnC)

Concurrent Collections (CnC) is a programming model designed to let domain experts express the parallelism and dependencies of an algorithm at a high level, while allowing tuning experts to rapidly implement and optimize applications. CnC acts as an interface that avoids the need to think about lower-level parallelization techniques, such as thread primitives and message passing, while also allowing for a high degree of portability. Specifically, we are using the Intel Concurrent Collections implementation, but we also hope to port the code to the OCR implementation of CnC.
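To give a flavor of the model, the sketch below follows the Intel CnC for C++ API as we understand it: a tag collection prescribes step instances, steps put their results into an item collection, and the context ties the graph together. The collection names and the squaring example are our own; consult the Intel CnC documentation for the authoritative interface.

    #include <cnc/cnc.h>
    #include <iostream>

    struct squares_context;   // forward declaration of the CnC graph

    // A "step": pure computation prescribed by a tag, writing into an item collection.
    struct square_step {
        int execute(const int& tag, squares_context& ctx) const;
    };

    // The context ties together tag, step, and item collections: the dependence graph.
    struct squares_context : public CnC::context<squares_context> {
        CnC::tag_collection<int>          tags;     // which step instances to run
        CnC::step_collection<square_step> steps;    // the computation
        CnC::item_collection<int, int>    results;  // produced data, keyed by tag

        squares_context()
            : tags(*this), steps(*this), results(*this)
        {
            tags.prescribes(steps, *this);   // each tag put spawns one step instance
        }
    };

    int square_step::execute(const int& tag, squares_context& ctx) const
    {
        ctx.results.put(tag, tag * tag);     // express data dependencies, not messages
        return CnC::CNC_Success;
    }

    int main()
    {
        squares_context ctx;
        for (int i = 0; i < 8; ++i) ctx.tags.put(i);  // prescribe eight step instances
        ctx.wait();                                   // wait for the graph to drain

        for (int i = 0; i < 8; ++i) {
            int value = 0;
            ctx.results.get(i, value);
            std::cout << i << "^2 = " << value << "\n";
        }
        return 0;
    }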
