# Los Alamos National Laboratory

# Computational Physics Student Summer Workshop

### Sponsored by the Los Alamos National Laboratory Advanced Scientific Computing (ASC) Program

Los Alamos National Laboratory's X Computational Physics Division, in cooperation with other related divisions including Theoretical Design and Computer, Computational, and Statistical Sciences, is pleased to sponsor the **annual Computational Physics Student Summer Workshop**.

The workshop seeks to bring to the Laboratory a diverse group of exceptional undergraduate and graduate students for informative, enriching lectures and to work with its staff for 10 weeks on interesting, relevant projects that may culminate in articles or conference presentations. Students are organized into groups of two or three working under the guidance of one or more mentors. Each participant is awarded a fellowship that typically ranges from $7,500 to $13,000, based on academic rank (junior, senior, first-year graduate student, etc.).

Each workshop covers a range of research projects; the 2018 projects are described below.

Developing predictive models for brittle damage evolution is a challenging problem. To be applicable at larger length scales (hundreds of microns up to meters, depending on the application), continuum-scale material models must somehow scale up important mechanisms, particularly crack-to-crack interactions but also other phenomena seen at the micro- and meso-scales (crack coalescence, crack branching and reorientation, plastic zones at crack tips, etc.).

This project is focused on capturing these sub-scale mechanisms by incorporating statistical information about the crack network into constitutive models used in hydrocodes. Goals of the project include:

- Using machine learning algorithms to generate statistical information that can be used for a wide range of loading conditions, with particular focus on high-rate loading (i.e., shock waves and impact loading), and
- Investigating how material strength and hardening (i.e., plasticity) interact with damage evolution in quasi-brittle metals.

Supernovae are the sites of production and dissemination of most of the heavy elements. The distribution of these elements, especially the ratios of different isotopes, can be used to test our understanding of the engine behind these supernovae. Supernova dust grains are one of the leading probes used to study supernova isotopics.

This project will work with multi-dimensional supernova explosion calculations, using statistical methods to compare with dust-grain data to constrain the properties of the supernova engine. Depending on their interests, the students will focus on detailed nucleosynthetic yield calculations, dust formation, or statistical methods for comparing the simulation results to the existing data.
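As a toy illustration of the statistical-comparison step, the sketch below scores hypothetical engine models against made-up grain isotope ratios with a simple chi-square statistic. All names and numbers are invented for demonstration; real yields would come from the explosion calculations.

```python
import math

def chi_square(model, data, sigma):
    """Chi-square misfit between predicted and measured isotope ratios."""
    return sum((m - d) ** 2 / s ** 2 for m, d, s in zip(model, data, sigma))

# Hypothetical measured grain ratios with 1-sigma uncertainties
grain_data = [1.8e-3, 5.4e-3]
grain_sigma = [2.0e-4, 6.0e-4]

# Hypothetical candidate engine models, each predicting the same ratios
models = {
    "low_energy": [1.2e-3, 4.0e-3],
    "high_energy": [1.9e-3, 5.1e-3],
}

# Pick the engine model most consistent with the grain measurements
best = min(models, key=lambda name: chi_square(models[name], grain_data, grain_sigma))
print(best)
```

A real analysis would marginalize over many grains and model parameters, but the ranking step has this basic shape.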

Bridging the current gap between the solid-liquid interfaces used in simulations and the interfaces existing under experimental conditions is one of the great challenges for theory. Many important processes, such as chemical and electrochemical reactions, occur at the solid-liquid interface, and any rational design of optimal systems demands atomic-level understanding of materials under more realistic in situ conditions. The oxygen reduction reaction (ORR) on N-C-type materials will serve as our model system for the development of the analytical tools necessary to study (electro-)chemical processes at the interface of M-N-C and N-C materials with the solvent.

The state-of-the-art Quantum Molecular Dynamics (QMD) simulation suite developed at the Laboratory can provide an accurate description of electron-transfer reactions at the solid-liquid interface using realistic models. In our case, the model will contain a material surface, represented as an extended N-doped graphene sheet at an interface with water molecules. QMD simulations of this system on realistic time scales (hundreds of picoseconds) will be possible only by applying the extended Lagrangian QMD technique, which allows us to perform self-consistent-field-free QMD simulations, saving almost an order of magnitude in the calculation of the forces. In order to analyze and visualize the key electron-density rearrangements occurring at the solid-liquid interface in the QMD simulations, a molecular orbital visualization tool needs to be implemented in the current suite of programs.

We propose to add the aforementioned functionality to the Laboratory's PROGRESS library by implementing the projection of the system's wave function onto the atomic Slater orbitals as a function of time. Making additions to this library is fairly straightforward and will benefit quantum chemistry applications beyond the one proposed here. The students will work on a cutting-edge problem of modern materials science and will gain experience in software development in the multi-disciplinary environment typical of a long-lived development effort.
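As a rough sketch of what such a projection involves (not the PROGRESS API), Mulliken-style populations assign a molecular orbital's weight to the underlying atomic orbitals through the overlap matrix, q_i = c_i (S c)_i. The two-orbital numbers below are invented for illustration.

```python
def mulliken_populations(c, S):
    """Per-atomic-orbital populations for one MO with coefficients c
    and atomic-orbital overlap matrix S: q_i = c_i * (S c)_i."""
    Sc = [sum(S[i][j] * c[j] for j in range(len(c))) for i in range(len(c))]
    return [c[i] * Sc[i] for i in range(len(c))]

# Two overlapping atomic orbitals; c is normalized so that c^T S c = 1
S = [[1.0, 0.5], [0.5, 1.0]]
a = (1.0 / 3.0) ** 0.5
c = [a, a]

q = mulliken_populations(c, S)
print(q)  # the populations of a normalized orbital sum to 1
```

Tracking such populations frame by frame along a QMD trajectory is one simple way to visualize electron-density rearrangement at the interface.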

Understanding how matter behaves in extreme conditions is of importance to many areas of study, such as earth science and astrophysics. The behavior of matter is particularly intriguing when molecules are involved.

We intend to use molecular quantum mechanics and statistical physics to study the behavior of molecular matter. We will do this by optimizing and developing models to reproduce observed behavior of molecular matter. For example, we can extend models to include restricted motion of molecules while in a dense environment and then study the systems’ modified spectroscopic and thermodynamic properties.
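A minimal sketch of the statistical-physics side: given a set of energy levels, the canonical partition function yields thermodynamic properties, and restricting molecular motion (modeled here crudely as a wider level spacing) shifts them. The level ladders and reduced units are illustrative only.

```python
import math

K_B = 1.0  # reduced units: energies measured relative to kT at T = 1

def partition_function(levels, T):
    """Canonical partition function Z = sum_i exp(-E_i / kT)."""
    return sum(math.exp(-E / (K_B * T)) for E in levels)

def internal_energy(levels, T):
    """Thermal average energy <E> = sum_i E_i exp(-E_i/kT) / Z."""
    Z = partition_function(levels, T)
    return sum(E * math.exp(-E / (K_B * T)) for E in levels) / Z

# A free-rotor-like ladder vs. a "hindered" ladder with doubled spacing,
# mimicking restricted motion in a dense environment
free_levels = [0.0, 1.0, 2.0, 3.0, 4.0]
hindered_levels = [0.0, 2.0, 4.0, 6.0, 8.0]

T = 1.0
print(internal_energy(free_levels, T), internal_energy(hindered_levels, T))
```

Wider spacing leaves fewer excited states populated at the same temperature, lowering the internal energy; the same machinery gives heat capacities and spectroscopic line strengths.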

This project will investigate the ability of Eulerian hydrodynamic schemes to correctly model shock reflections and refractions. There is a large body of experimental data on such problems. We will focus on cases that are also amenable to analytic interpretation, including small-incident-angle shock reflections and refractions where the wave patterns near the interaction can be described by shock polars. We will focus on three hydrocodes: the Laboratory codes xRage and Pagosa, and the University of Chicago code Flash. The study of such interactions has a long history, and the project will include an investigation of the relevant existing literature.
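For reference, a shock polar can be traced numerically from the standard oblique-shock relations for a perfect gas: sweep the wave angle from the Mach angle to 90 degrees and record the flow deflection and pressure ratio. The Mach number below is illustrative.

```python
import math

def shock_polar(M1, gamma=1.4, n=200):
    """Shock polar (deflection angle in degrees, p2/p1) for upstream Mach M1,
    from the theta-beta-M relation and normal-shock pressure jump."""
    mu = math.asin(1.0 / M1)  # Mach angle: the weakest possible wave
    polar = []
    for i in range(n + 1):
        beta = mu + (math.pi / 2 - mu) * i / n
        m = M1 * math.sin(beta)  # normal component of the upstream Mach number
        theta = math.atan(
            2.0 / math.tan(beta) * (m * m - 1.0)
            / (M1 * M1 * (gamma + math.cos(2.0 * beta)) + 2.0)
        )
        p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (m * m - 1.0)
        polar.append((math.degrees(theta), p_ratio))
    return polar

polar = shock_polar(2.0)
max_theta = max(t for t, _ in polar)
print(max_theta)  # maximum deflection for an attached shock at M1 = 2
```

Intersections of such polars for the incident and reflected shocks characterize regular reflection; their failure to intersect signals the transition to Mach reflection.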

The goal of this project is to use the Eulerian hydrocode xRage to investigate the effects of material interface initial conditions on the turbulence that develops in shock tube experiments.

When a shock travels across a material interface, the misalignment between the pressure gradient at the shock and the density gradient caused by irregularities at the interface leads to the Richtmyer-Meshkov instability. The turbulence that ensues is sensitive to the initial conditions of the interface perturbations and is difficult to model with existing turbulence closures. Evidence from previous work at the Laboratory indicates that two regimes are possible, depending on the RMS slope of the initial interface. These regimes exhibit different turbulence statistics, such as mixing-layer growth.
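As an illustration of the controlling parameter, the sketch below computes the RMS slope of a multimode interface perturbation by centered finite differences on a periodic grid; the mode amplitudes and wavenumbers are invented for demonstration.

```python
import math
import random

def rms_slope(h, xs):
    """RMS of dh/dx, estimated by centered differences on periodic grid xs."""
    dx = xs[1] - xs[0]
    slopes = [(h[(i + 1) % len(h)] - h[i - 1]) / (2.0 * dx)
              for i in range(len(h))]
    return math.sqrt(sum(s * s for s in slopes) / len(slopes))

random.seed(0)
L = 2.0 * math.pi
N = 512
xs = [L * i / N for i in range(N)]

# Made-up multimode perturbation: (wavenumber, amplitude) pairs, random phases
modes = [(4, 0.02), (7, 0.015), (11, 0.01)]
phases = [random.uniform(0, 2 * math.pi) for _ in modes]
h = [sum(a * math.sin(k * x + p) for (k, a), p in zip(modes, phases))
     for x in xs]

print(rms_slope(h, xs))
```

For independent sinusoidal modes the analytic value is sqrt(sum((a_k * k)^2) / 2), which the discrete estimate reproduces to a fraction of a percent on this grid.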

The project will consist of:

1. Running high-resolution numerical simulations chosen to produce flows within each of the two regimes,
2. Running low-resolution numerical simulations of the same configurations simulated in step 1, but with a turbulence parameterization (BHR). This step will be carried out (2.1) using conventional BHR, which is initialized manually, and (2.2) using a new modal model that initializes BHR, and
3. Diagnosing the mixing rates in each flow.

Two students will share the work for task 1, and tasks 2.1 and 2.2 will each be assigned to one student. The tools needed for task 3 will be developed jointly by the students, and each student will use them to diagnose the flows they run. The students will jointly analyze and present the results.

The students will model shock tube experiments that have investigated initial conditions. The modal model has been validated against the shock tube for one-dimensional simulations. The students will run two-dimensional (and possibly three-dimensional) simulations with the same initial conditions and gas properties. Sample shock tube input files will be provided so that the students can begin running the code immediately.

FleCSALE is a C++ library for studying multi-phase continuum dynamics problems in different runtime environments. It is specifically developed for existing and emerging large distributed-memory system architectures. FleCSALE uses the Flexible Computational Science Infrastructure (FleCSI) project for mesh and data structure support. The goal of FleCSALE is to support multi-phase fluids and tabular equations of state (EOS).

Students will develop a project to explore ways to better target the enormous compute capacity of these emerging machines.

The essential point of verification is to measure the numerical error in a particular code or simulation. This is typically done by measuring the rate at which a simulation converges to the correct answer as the grid is refined, whether the 'correct' answer is actually known analytically (code verification) or is merely estimated (solution verification). The method of manufactured solutions (MMS) is a technique for code verification that uses user-defined source terms to enable complex exact solutions to be defined easily, for a wide variety of physics models.
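A minimal MMS demonstration, using 1D linear advection rather than the full hydro equations: choose a manufactured solution, derive its source term by hand, feed both to a first-order upwind scheme, and check that the observed convergence rate matches the scheme's design order. The particular manufactured solution is chosen for illustration.

```python
import math

# Manufactured solution for u_t + u_x = S(x, t) on a periodic domain
def u_exact(x, t):
    return math.sin(2 * x) * math.cos(t)

def source(x, t):
    # S = d(u_m)/dt + d(u_m)/dx, computed analytically from u_m above
    return -math.sin(2 * x) * math.sin(t) + 2 * math.cos(2 * x) * math.cos(t)

def l2_error(N, T=1.0):
    """L2 error of first-order upwind + forward Euler with MMS source."""
    dx = 2 * math.pi / N
    steps = int(round(T / (0.5 * dx)))  # CFL ~ 0.5
    dt = T / steps
    u = [u_exact(i * dx, 0.0) for i in range(N)]
    t = 0.0
    for _ in range(steps):
        # u[i - 1] wraps at i = 0, giving periodic boundaries
        u = [u[i] - dt / dx * (u[i] - u[i - 1]) + dt * source(i * dx, t)
             for i in range(N)]
        t += dt
    return math.sqrt(dx * sum((u[i] - u_exact(i * dx, T)) ** 2
                              for i in range(N)))

e1, e2 = l2_error(64), l2_error(128)
order = math.log(e1 / e2, 2)
print(order)  # observed convergence order, near 1 for first-order upwind
```

The same workflow scales to real hydro codes: the symbolic differentiation of the manufactured solution is automated, and the observed order on grid refinement is compared against the advertised order of the scheme.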

The Laboratory is implementing code to automatically generate source terms based on manufactured solutions for the base hydro equations in the Eulerian hydrocode xRage, as well as incorporating those source terms into xRage to run verification studies. Depending on the interests and capabilities of the particular students selected, there are two main ideas for the Workshop:

The first is to dig into the behavior of the xRage hydro itself. The students will devise a variety of manufactured solutions to test different aspects of the code, and then run simulations with a variety of options in xRage in order to characterize the behavior of, e.g., the different hydro algorithms available. Analyzing these results will require the students to learn something about numerical methods, and presenting their results will be an exercise in visualizing higher-dimensional data, which is always an interesting experience.

The second idea is to expand the coverage of the MMS code beyond simple hydrodynamics. The code is easily extensible to handle many different physics models; essentially the only requirement is that the model have a well-defined system of equations that can be written down. We have floated several ideas for this over the last few months, including reacting flows, elastic solids, radiation diffusion, etc. This has the advantage of giving the students a chance to learn new physics models, as well as the exact solutions that are available for these models.

With either one of these topics, the students will be contributing to a larger project here at the Laboratory. This will provide them with opportunities to network with Laboratory staff, as well as a chance to learn the skills required for collaborative software development.

Analysis of climate datasets is often challenging because the intrinsic, natural variability of the climate system can skew interpretation of results. The climate system is chaotic, and the proverbial butterfly flapping its wings over Texas can be, at least in part, responsible for the initiation of hurricanes.

In this project we explore the impacts of this reality by using techniques to assess intrinsic climate variability via a Large Ensemble Analysis Downscaling (LEAD) workflow, which uses climate realizations from the CESM Large Ensemble as well as the CMIP5 and Localized Constructed Analogs datasets to downscale intrinsic variability via Empirical Quantile Mapping (eQM). Spatio-temporal analysis via wavelets will compress these downscaled results into the key scales and timing of climate changes, facilitating understanding and analysis of the localized impacts of climate change across the globe. Ideally, a key outcome of this project will be the publication of this open-source dataset.
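A stripped-down sketch of empirical quantile mapping, far simpler than the per-location, per-season machinery of the actual LEAD workflow: each model value is replaced by the observed value at the same empirical quantile, removing systematic bias while preserving rank order.

```python
import bisect

def quantile_map(value, model_sample, obs_sample):
    """Map 'value' from the model distribution to the observed distribution
    at the same empirical quantile (linear interpolation between quantiles)."""
    model_sorted = sorted(model_sample)
    obs_sorted = sorted(obs_sample)
    # empirical quantile of 'value' within the model distribution
    rank = bisect.bisect_left(model_sorted, value)
    q = rank / (len(model_sorted) - 1)
    # look up the same quantile in the observed distribution
    pos = q * (len(obs_sorted) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(obs_sorted) - 1)
    return obs_sorted[lo] + (pos - lo) * (obs_sorted[hi] - obs_sorted[lo])

# Toy example: the model runs 2 degrees too warm with the right shape,
# so eQM recovers the observed distribution exactly
obs = [10.0, 11.0, 12.0, 13.0, 14.0]
model = [t + 2.0 for t in obs]

corrected = [quantile_map(v, model, obs) for v in model]
print(corrected)
```

Production implementations operate on large gridded ensembles (e.g., with xarray in the Pangeo stack), but the quantile-to-quantile lookup is the core of the method.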

Analysis will be performed using the Python-based high-performance computing analysis techniques championed by the Pangeo open-source community. The intention is to leverage the best of open-source, interpreted-language analysis techniques to provide at-scale high-performance computing solutions for analyzing large climate datasets, in order to best understand the role of intrinsic climate variability in climate impacts.

The advent of the Atacama Large Millimeter Array (ALMA) in late 2011, with its unprecedented sensitivity and spatial resolution, is transforming the study of proto-planetary disks (PPD). ALMA has already produced several milestone results, including the discoveries of large-scale asymmetric features (e.g., IRS 48, Science 2013) in the dusty PPD that harbor young planets and some ring-type structures (e.g., HL TAU) that are possibly generated by massive planets.

Over the years, we have developed a state-of-the-art code, LA-COMPASS, to simulate PPDs. Students will acquire a wide range of skills, from running massively parallel codes on large computing platforms to post-processing, visualizing, and analyzing large datasets. They will also learn how to fit the simulation results to observational data and images by varying their modeling parameters, and to publish their findings in peer-reviewed journals.

Our project entails developing a 1D, frequency-dependent Monte Carlo code that solves the thermal radiative transfer equations. This solution method is an alternative to the implicit Monte Carlo method and has been demonstrated to reproduce solutions accurately for 0D, multi-group problems in previous work. The method is based on analog simulations of the underlying physics, sampling on an event-by-event basis. By directly simulating the non-linear physics, it overcomes some of the difficulties of IMC, but becomes inherently difficult to parallelize and can be computationally expensive with poor memory access patterns.

The main goals of the project are to develop the code (with modern software design principles), explore the cost of this method to determine if it could be comparable to IMC with more investment, and (time permitting) explore methods to accelerate the solution, e.g., using time-discretization approximations in the sampling or variance reduction techniques to try to break the inherent serial nature of the problem and simple domain copy parallelization.
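To make the "analog, event-by-event" idea concrete, here is a toy gray (frequency-independent) transport sketch for a 1D slab. The project's code is far richer (frequency dependence, emission, nonlinear material coupling), so this shows only the sampling skeleton; all parameter values are illustrative.

```python
import math
import random

def transmit_fraction(sigma_t, scatter_prob, L, n, seed=1):
    """Analog Monte Carlo: fraction of particles crossing a 1D slab of
    thickness L, sampling every event (flight, scatter, absorption)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        x, mu = 0.0, 1.0  # position and direction cosine
        alive = True
        while alive:
            # distance to next event from the exponential free-path law
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x >= L:
                transmitted += 1               # escaped the far face
                alive = False
            elif x < 0.0:
                alive = False                  # leaked out the near face
            elif rng.random() < scatter_prob:
                mu = rng.uniform(-1.0, 1.0)    # isotropic scatter
            else:
                alive = False                  # absorbed
    return transmitted / n

# Pure absorber: transmission should approach exp(-sigma_t * L)
frac = transmit_fraction(sigma_t=1.0, scatter_prob=0.0, L=2.0, n=20000)
print(frac, math.exp(-2.0))
```

The per-particle, per-event loop is what makes the analog approach both faithful to the physics and hard to vectorize or parallelize, which is the trade-off the project will quantify.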

Even if the method is too expensive to extend to problems in the IMC regime, there are astrophysics calculations where this method should be valuable.

The goal of this project is to demonstrate the feasibility of using higher fidelity data with more efficient throughput for evolving computer architectures. This involves developing new methods for storing high-resolution data, novel application of 2D interpolation methods and optimization of these methods for emerging computer architectures available at the Laboratory. In particular, the project will demonstrate the effectiveness of these methods on tabular equations of state (EOS).

The SESAME database provides tabular EOS data on a rectangular grid where the independent variables are density and temperature and the dependent variables are pressure, internal energy, and free energy. These tables are large and cover many decades of variation in density and temperature. Interpolation, extrapolation, and evaluation of thermodynamic derivatives use various types of stencils on the rectilinear grid. An alternative approach is to express the EOS data on triangular patches and to store the thermodynamic derivatives as well as the basic thermodynamic variables at the vertices of the patches. The use of triangular patches would allow great flexibility in how the data are laid out and accessed. The triangular patches can fill irregular regions and be made to align with phase boundaries. The inclusion of derivatives enables higher order interpolation on the patches. The use of a finite element connectivity to connect to the vertex data would also allow discontinuous derivatives at phase boundaries.

Basic approach:

- Modify the code Lineos2ses to generate data for an analytic EOS on a predetermined triangular grid.
- Develop a code to do Hermite interpolation on triangular patches.
- Optimize that code to run on one or more Laboratory supercomputers.
- Compare timing and accuracy against the EOSPAC library on the same analytic EOS.
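The Hermite-interpolation step above builds on barycentric coordinates for triangular patches. The sketch below shows that barycentric machinery with plain linear interpolation of vertex values; the project's scheme adds stored vertex derivatives for higher order. The linear "EOS" function is invented for illustration.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c),
    computed from signed sub-triangle areas."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    w1 = ((b[0] - p[0]) * (c[1] - p[1]) - (c[0] - p[0]) * (b[1] - p[1])) / det
    w2 = ((c[0] - p[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (c[1] - p[1])) / det
    return w1, w2, 1.0 - w1 - w2

def interpolate(p, tri, values):
    """Linear interpolation of vertex values at point p inside the patch."""
    w = barycentric(p, *tri)
    return sum(wi * vi for wi, vi in zip(w, values))

# Vertices in (density, temperature); vertex data from a linear "EOS"
# f(rho, T) = 2*rho + 3*T, which linear interpolation reproduces exactly
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [2 * r + 3 * t for r, t in tri]
print(interpolate((0.25, 0.25), tri, vals))  # exact: 2*0.25 + 3*0.25 = 1.25
```

Because the weights depend only on vertex positions, irregular patches that follow phase boundaries are handled with no change to the interpolation code, which is part of the appeal of the triangular layout.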