Putting Design into Turbulence
The surfaces of airplanes, cars, and jet engines have been carefully designed to create the right kind of turbulence in the air that rides over them. It’s turbulence that maximizes the performance of these modern transportation marvels. Los Alamos researchers are now extending that design approach to control the turbulent mixing of a heavy fluid with a light one. If successful, it could mean more-efficient production of fusion energy.
We recognize and experience turbulence in many forms. The swirling eddies and energetic froth of turbulence are seen in white water rapids, volcanic eruptions, and speedboat wakes. The unpredictable nature of turbulence is experienced by anyone who’s taken a plane ride through a storm and felt the sudden bumps, rolls, and shudders caused by turbulent air. But while turbulence can be seen, felt, and experienced, can it also be controlled?
In inertial confinement fusion, a spherically converging shock (arrows) compresses a millimeter-size metal capsule (blue) filled with deuterium-tritium (DT) gas (red). With sufficient compression, the pressure and temperature ignite the DT fuel, forming a self-sustaining fusion reaction (fusion burn). However, any small bumps or imperfections in the spherical implosion will cause the capsule material, melted by the shock, to grow fingerlike projections that mix with the fuel and rob it of its heat.
Of particular interest at Los Alamos is the turbulence that arises spontaneously as two fluids mix, say, when a layer of rising warm fluid (air or water) pushes through a layer of sinking cool fluid—called Rayleigh-Taylor, or buoyancy-driven, mixing—or when a high-pressure front in the atmosphere slides across a low-pressure one—called Kelvin-Helmholtz, or shear-driven, mixing. The interface between the two fluid layers may initially be smooth or laminar, but tiny variations in that smoothness will initiate the curling motions of turbulence, in which one fluid curls around and entrains the other (as in the image at left). Very quickly these circulating eddies merge and/or break up across a broad, cascading spectrum of length scales that differ by factors of thousands or millions.
Because it mixes two fluids on many length scales all the way down to the atomic scale, turbulence works efficiently to transport heat, mass, and momentum from one fluid layer to another, often to good effect. In home heating, for example, it causes hot air near a radiator to be rapidly transported to the rest of the room. In an internal combustion engine, it causes the split-second mixing of air with fuel to produce cleaner, more-efficient burning.
But turbulent mixing is a hindrance in one particular system, namely inertial confinement fusion (ICF). ICF is a laser-driven system for creating fusion energy. In ICF a strong shock wave (high-pressure pulse) implodes (collapses) a millimeter-size spherical metal capsule to about a thousandth of its original volume. Deuterium-tritium (DT) gas within the capsule gets so compressed and hot that its nuclei begin to fuse into helium nuclei, releasing large amounts of nuclear energy. That energy, in turn, provides the heat to sustain additional fusion reactions. But there’s a fly in the ointment. Any small bumps or imperfections that develop during the implosion will rapidly grow and cause metal to mix with fuel, damping the heating process and perhaps even quenching the fusion burn (see figure below).
Los Alamos’ Malcolm Andrews, E.O. Lawrence Award winner for his work on turbulent mixing, sees a way around this problem. “It may be possible,” says Andrews, “to control the turbulent mixing down to acceptable levels, not by removing all bumps—that’s unrealistic—but rather by creating very subtle, very long wavelength ripples at the interface between the metal and the fuel.”
Andrews and Los Alamos colleagues are pursuing this counterintuitive idea through the Turbulence by Design project, sponsored by the Laboratory Directed Research and Development program. This project aims to design turbulence not just in ICF but in all cases in which two fluids (gas or liquid) of different density are driven past each other by buoyancy forces and mix far from the influence of solid walls and boundaries.
Turbulence by Design
The inspiration for Andrews’ idea comes from technology’s brilliant successes over the last half-century in designing and controlling “wall” turbulence: turbulent flow around the solid boundaries of airplanes, cars, turbine blades, and even golf balls. By reshaping these boundaries, engineers purposely create turbulence—of the right kind and in the right place—to dramatically control the adjacent airflow, thereby reducing the drag and/or increasing the maneuverability and efficiency of these objects.
A case in point is turbulators, the small, fixed vanes of metal that poke up from the top surface of an airplane wing and are designed to help during slow flight. As seen in the illustration above, in normal flight (A), the thin layer of turbulent air flowing along the top surface of the wing has lower pressure than the layer flowing along the wing’s underside, and the vertical pressure difference between the two provides the lift that keeps the plane aloft. To slow the plane for landing, a pilot may raise the craft’s nose, which tilts the wings upward. Without turbulators (B), the wing tilt can cause the low-pressure layer (light blue) to separate from the aft section of the wing’s top surface, allowing high-pressure air from behind to move in under that layer and press down. The result would be a sudden loss in lift, causing the plane to stall and crash.
Enter the turbulators (C). Their effect is to produce turbulent eddies that entrain fast-moving air from above into the low-pressure layer, thereby increasing the top layer’s thickness and momentum. That extra momentum pushes back on invading high-pressure air and keeps the low-pressure layer flowing on the wing surface even when the wings tilt up. So the lift stays steady during the plane’s descent.
Such designed turbulence has typically been limited to wall turbulence, in which the flow past the fixed shape of a solid boundary continuously drives the formation of the desired eddies. In the turbulent mixing of ICF, there are no solid boundaries; instead, buoyancy drives bubbles of the light DT fuel to rise through heavy fingers of molten metal. The shear generated as the rising and falling fluids slide past one another causes curling eddies to mingle the two fluids. This turbulent mixing layer increases in width as the two materials interpenetrate and mingle, releasing potential energy as the heavy fluid falls. That energy drives a cascade of smaller and smaller eddies until the two fluids are combined at the atomic level.
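The widening of a buoyancy-driven mixing layer like the one described above is often summarized by a simple self-similar growth law. The sketch below is a minimal illustration, not taken from the article: it assumes the commonly used relation h(t) = α·A·g·t², where A is the Atwood number measuring the density contrast between the two fluids, and the growth coefficient α is an illustrative placeholder value.

```python
# A minimal sketch of the self-similar Rayleigh-Taylor growth law,
# h(t) = alpha * A * g * t^2. The coefficient alpha here (0.05) is an
# illustrative assumption, not a value reported in the article.

def atwood_number(rho_heavy, rho_light):
    """Dimensionless density contrast between the two fluids."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def mixing_layer_width(t, rho_heavy, rho_light, g=9.81, alpha=0.05):
    """Self-similar Rayleigh-Taylor mixing-layer width at time t (SI units)."""
    return alpha * atwood_number(rho_heavy, rho_light) * g * t**2

# Example: cold (heavy) water over warm (light) water, roughly as in the
# Texas A&M channel; the densities (kg/m^3) are illustrative guesses.
A = atwood_number(1000.0, 998.0)
print(f"Atwood number: {A:.1e}")
print(f"Width after 10 s: {mixing_layer_width(10.0, 1000.0, 998.0):.4f} m")
```

The key point for design is that the interface shape and density contrast enter this growth through the initial conditions, which is exactly where the Turbulence by Design team proposes to intervene.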
On the face of it, there seems little opportunity for external control of this unbounded mixing layer. However, just as turbulators impose designed large-scale turbulence and momentum entrainment on a wing, Andrews suggests that large-scale (long-wavelength) but very small amplitude ripples in the initial interface between a heavy and light fluid can control the overall growth of the mixing layer. If Andrews is right and long-wavelength ripples control mixing, then turbulence must somehow store a “memory” of the rippling interface where the turbulence originated.
Breaking with Tradition
Turbulence with memory is a somewhat heretical, almost contradictory, viewpoint. Most scientists assume that turbulent mixing involves rapid loss of memory, the random formation and breakup of eddies erasing information about the initial shape of an interface between two fluids just as effectively as random changes of direction would erase a hiker’s memory of the direction back to his or her starting point. Scientists have built mathematical descriptions of turbulence—turbulence models—that are based on the assumption of memory loss and therefore predict certain universal properties. In particular, in these models, energy flows equally through all scales (sizes of eddies), from the largest to the smallest, for a given problem, keeping the turbulence in balance (equilibrium) across the scales.
Counter to that orthodox view, Andrews emphasizes that equilibrium flows are hard to find in reality. “Fluid mixing in ICF, for example, is far from equilibrium, and it is this intrinsic out-of-balance flow that makes mathematical models describing the development of turbulence difficult to formulate and solve,” he explains. “But what if memory of the starting configuration simply persists and dominates at ‘late time,’ when the flow is seemingly turbulent? Then these same, already-complex mathematical models must also include knowledge of the starting conditions and must account for late-time effects.”
The schematic shows the Texas A&M water channel facility where cold, heavy water (top) and warm, light water (bottom), with milk added to show the interface, flow left to right past a splitter plate. The flapper at the end is moved up and down to create a wavy interface between the two fluids as they flow past. The experimental result in A shows that long-wavelength ripples in the interface (introduced by moving the flapper slowly) maintain their shape, while buoyancy forces drive the cold and hot fluids to mix. In contrast, the result in B shows that short-wavelength ripples (introduced by rapid flapper motion) lead to mixing on very-small scales and wash out any memory of the initial ripples.
Andrews’ team has already gathered experimental evidence showing that memory of long-wavelength ripples in the initial interface between two fluids does indeed persist during mixing (see figure above).
To turn that evidence into a design tool for ICF fusion capsules or other technological applications such as internal combustion, climate prediction, and free-flowing jets and plumes, the Turbulence by Design team must first understand how those initial ripples in density or velocity propagate in time. They need to learn from experiment (actual and numerical) how different interfacial shapes alter the development of the mixing layer. Then they must translate that behavior into a predictive foundation for use in turbulence models, which can be put on a high-performance computer to predict or design the outcome of many different initial conditions that occur in experiments for ICF and other fluid-based applications.
Discovering the Effects of Initial Conditions
Just 10 years ago it would have been impossible to achieve the level of knowledge required to carry out this program. Neither the experimental diagnostics nor the high-performance computing capabilities were up to the task. Today, those capabilities are in hand at the Laboratory, and researchers are gung-ho about pursuing this new adventure in turbulence research.
The game plan is to gather data from turbulent mixing experiments at Los Alamos (the shock-tube experiment) and at the water channel facility (represented above) at Texas A&M University. Both have a proven capability to control the shape of the initial interface between the fluids and to track how the perturbations from a smooth, uniform interface affect the mixing over time. The choice of initial interfaces will be guided by results from direct numerical simulations of turbulence performed on the highest-speed (petaflop) computers, including Roadrunner at Los Alamos. The measured effects of those initial interfaces will then guide how turbulence models are adapted to capture the realistic influence of initial conditions on turbulent mixing. State-of-the-art capabilities and close coupling of experiment, computation, and theory make Los Alamos a perfect location for the Turbulence by Design project.
The Shock-Tube Experiment. The Laboratory’s Kathy Prestridge and her team use a unique shock tube to study the mixing induced when a shock hits the interface between air and a higher-density gas. This setup mimics, in part, the shock-induced mixing between DT fuel and capsule material in ICF experiments. For the initial condition, the team can successfully create a very-stable and reproducible wavy curtain of high-density gas and can alter the initial shape of the curtain to study the effects of different initial interfaces. As a flat shock passes through the long-wavelength ripples of the interface between the air and the curtain, those ripples grow more pronounced, curl to form eddies, entrain the air around them, and eventually develop into a turbulent mixing layer that travels down the shock tube (see the figure below).
Prestridge describes the team’s goals. “We’re gathering enough data to simultaneously determine the mean velocity and mean density at every point in the flow, down to the 50-micron scale, as well as the fluctuations. With that we can measure the effects of initial conditions and provide the necessary statistical data for the development of a ‘memory’ turbulence model.” Prestridge will investigate initial interfaces that numerical experiments and analytical studies suggest might be promising for producing late-time effects.
Numerical Experiments. The modern toolkit for high-performance computers now includes programs to perform high-resolution direct-numerical simulation (DNS) of turbulent flows. The quality of the latest DNS results is on a par with, and in some cases even surpasses, that of experimental data. How can that be? First, DNS uses the fundamental equations of fluid flow to calculate pressures, densities, and velocities in turbulent flows. Second, the petaflop computing power of today allows those values to be calculated for a very-fine lattice of time and space points, fine enough to calculate the smallest eddies in the turbulent flow. Those two features make DNS results as real as experimental data.
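Why a "very-fine lattice" demands petaflop machines can be seen from a standard back-of-the-envelope scaling argument (an illustration assumed here, not a figure from the article): resolving eddies down to the smallest dissipative scale requires roughly Re^(3/4) grid points in each direction, hence about Re^(9/4) points in three dimensions.

```python
# A rough sketch of DNS resolution cost. The scaling L/eta ~ Re^(3/4)
# per direction (so ~Re^(9/4) total points in 3D) is a standard
# estimate; the Reynolds numbers below are illustrative, not from
# the article.

def dns_grid_points(reynolds):
    """Approximate 3D grid-point count needed to resolve all eddy scales."""
    per_direction = reynolds ** 0.75   # L / eta ~ Re^(3/4)
    return per_direction ** 3          # total points ~ Re^(9/4)

for re in (1e3, 1e4, 1e5):
    print(f"Re = {re:.0e}: ~{dns_grid_points(re):.1e} grid points")
```

A tenfold increase in Reynolds number multiplies the grid-point count by about 180, which is why each jump in supercomputing power opens a qualitatively new range of flows to DNS.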
Each set of sequential photographic images from the Los Alamos shock-tube experiment shows horizontal slices through a high-density gas curtain (white) that when shocked develops into a turbulent mixing layer as it moves down the tube. The surrounding air is black. The different initial shapes of the gas curtains give rise to quite-different mixing layers at late times. A high-intensity, very-short laser pulse serves to freeze the flow’s high-speed motion, allowing cameras to capture an image.
Daniel Livescu, leader of the fluid-mixing DNS effort at Los Alamos, explains why DNS data can be better than experimental data. “We have the freedom to set up the boundary and initial conditions of the flow as we want them or to focus on particular types of flow, such as those in the turbulent mixing experiments of Texas A&M or in the Prestridge shock-tube experiment. That freedom sets up a complementary discovery path between actual and numerical experiments: DNS points the way for actual experiments, and unexpected experimental results can be explored in detail by DNS.”
Livescu and collaborator Mark Petersen recently used DNS to perform the largest simulation to date of Rayleigh-Taylor mixing (heavy fluid falling through and mixing with a light fluid). By selecting the appropriate set of initial conditions, they became the first researchers to simulate mixing-layer growth rates comparable to those seen in experiments. Next, the Livescu team will vary the initial conditions to identify perturbations that have long-lasting influence on the growth rate.
Retooling Turbulence Models—The Central Idea of Design
So the team’s strategy is shaping up. The researchers have established that initial conditions affect the development of turbulence, and now the goal is to translate that behavior into mathematical turbulence models that can predict experimental outcomes.
Andrews explains, “Right now we don’t formally know how to set initial conditions in our turbulence models, and neither do we know the best conditions to choose for a given purpose. It’s like walking onto a new continent—we know how big it is, but we don’t know what we’re going to find. We may find some places where very-special things happen.”
The examples above show two Rayleigh-Taylor mix configurations, computed by Ray Ristorcelli, in which the heavy fluid (purple) is four times denser than the light fluid (red) beneath it. In both examples, the interface between the heavy and light fluids contains ripples made from the same two wavelengths, but in the second example, a deliberate misalignment (a 45-degree phase shift) exists between the two wavelengths. The late-time configurations are strikingly different. If this were not a computation, one might guess that the “lean” in the second configuration was caused by shear, but there was none. Says Andrews, “This is a surprising result for such a simple change. It shows that deliberate perturbations, designed either from experiment, DNS, or theory, can affect late-time turbulence, perhaps minimizing (or maximizing) mixing. The ability to reproduce these ‘designs,’ from initial conditions to late-time behavior, then becomes a test for setting the right initial conditions in our turbulence models, conditions that are necessary for accurate predictive capabilities for these models.”
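The kind of two-wavelength initial interface described above is easy to parameterize. The sketch below is an illustrative construction only: the amplitudes, wavelengths, and domain size are assumptions, chosen to show how a phase shift between two sine modes changes the starting shape while leaving the wavelengths themselves unchanged.

```python
import math

# Illustrative sketch of a two-wavelength initial interface, like the
# Ristorcelli computations described above: both cases use the same two
# wavelengths, and the second case offsets one mode by a phase shift.
# All numerical values here are assumptions for illustration.

def interface_height(x, domain=1.0, a1=0.01, a2=0.01, phase_deg=0.0):
    """Initial ripple height at position x: two sine modes, the second
    offset by phase_deg relative to the first."""
    k1 = 2 * math.pi * 4 / domain   # mode with 4 ripples across the domain
    k2 = 2 * math.pi * 8 / domain   # mode with 8 ripples across the domain
    phase = math.radians(phase_deg)
    return a1 * math.sin(k1 * x) + a2 * math.sin(k2 * x + phase)

# Aligned vs. misaligned (45-degree shift) versions of the same ripples:
aligned = [interface_height(i / 100) for i in range(100)]
shifted = [interface_height(i / 100, phase_deg=45.0) for i in range(100)]
```

The surprise reported in the article is that this one extra parameter, the relative phase, survives into the late-time turbulence and visibly tilts the mixing layer.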
Once the team can understand, control, and predict the flow from initial conditions to late-time turbulent mix, it will have a complete design route, which will then open the possibility of inverse design. In other words, one could start from a desired result and provide or predict an initial condition (the size and location of a turbulator or the initial ripple of the density interface) that would produce that result.
Reconciling Experiment and Theory
Turbulence research is often a humbling experience because it usually involves discovering how little the researcher knows or understands about turbulence. However, Andrews predicts that understanding initial conditions and their influence on turbulence will likely lead to efficient energy-production designs for ICF, high-speed trains, and more-efficient internal combustion engines. That understanding may also resolve a host of outstanding inconsistencies between experiment and theory, a possibility that intrigues turbulence researchers as they struggle to understand and control one of the great unsolved problems of physics.
Andrews sums up the vision for the Turbulence by Design project this way: “We can’t provide a complete theory of turbulence because turbulence tends to reflect its drivers, and these can be very disparate. However, just as the physicist Richard Feynman pointed to ‘room at the bottom’—room for exploration at the smallest scales—and thus stimulated the development of nanoscience, perhaps Los Alamos, through its variable-density turbulence problems, can point to ‘memory within turbulence’ as a new field waiting to be explored and developed.” —Necia Grant Cooper