January 11, 2024

AI breakthrough creates images from nothing

Innovative framework that generates images from nothing can enable new scientific applications

A new generative AI model can create images from a blank frame.

A new, potentially revolutionary artificial intelligence framework called “Blackout Diffusion” generates images from a completely empty picture, meaning that the machine-learning algorithm, unlike other generative diffusion models, does not require a “random seed” to get started. Blackout Diffusion, presented at the recent International Conference on Machine Learning, generates samples comparable to those of current diffusion models such as DALL-E or Midjourney, but requires fewer computational resources than these models.

“Generative modeling is bringing in the next industrial revolution with its capability to assist many tasks, such as generation of software code, legal documents and even art,” said Javier Santos, an AI researcher at Los Alamos National Laboratory and co-author of Blackout Diffusion. “Generative modeling could be leveraged for making scientific discoveries, and our team’s work laid down the foundation and practical algorithms for applying generative diffusion modeling to scientific problems that are not continuous in nature.”

Diffusion models create samples similar to the data they are trained on. They work by taking an image and repeatedly adding noise until it is unrecognizable; throughout this process, the model learns how to reverse the corruption and restore the image to its original state.

Current models require input noise, meaning they need some form of data to start producing images.
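The forward process described above, and the reason conventional models need a random seed, can be illustrated with a minimal sketch. The function name, step count, and noise schedule below are illustrative assumptions, not the actual training code of any of the models mentioned:

```python
import numpy as np

def forward_noising(image, steps=1000, beta=0.02, rng=None):
    """Toy forward process for a standard (continuous) diffusion model:
    repeatedly mix in Gaussian noise until the image is unrecognizable.
    """
    rng = rng or np.random.default_rng(0)
    x = image.astype(float)
    for _ in range(steps):
        # Shrink the signal slightly and add a small amount of fresh noise.
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x  # after many steps, x is essentially pure Gaussian noise
```

Because every image ends up as pure noise, sampling must run this process in reverse, starting from a freshly drawn random array (the “random seed” the article refers to), which the trained model then denoises step by step.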

“We showed that the quality of samples generated by Blackout Diffusion is comparable to current models using a smaller computational space,” said Yen Ting Lin, the Los Alamos physicist who led the Blackout Diffusion collaboration.

Another unique aspect of Blackout Diffusion is the space it works in. Existing generative diffusion models operate in continuous spaces, which are dense and infinite. However, working in continuous spaces limits their potential for scientific applications.

“In order to run existing generative diffusion models, mathematically speaking, diffusion has to be living on a continuous domain; it cannot be discrete,” Lin said.

The theoretical framework the team developed, on the other hand, works in discrete spaces (meaning each point in the space is isolated from the others by some distance), which opens up opportunities for a variety of uses, from text generation to scientific applications.
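A discrete-state forward process in this spirit can be sketched as a pure-death process in which every unit of pixel intensity independently vanishes over time, so all images converge to the same all-black frame. This is a hedged toy illustration, not the paper's actual algorithm; the rate, step count, and function name are assumptions:

```python
import numpy as np

def blackout_forward(image, steps=200, death_rate=0.15, rng=None):
    """Toy discrete-state forward process: pixel intensities are
    non-negative integers, and each unit of intensity independently
    'dies' with probability death_rate at every step (binomial
    thinning, a pure-death process).
    """
    rng = rng or np.random.default_rng(0)
    x = image.astype(np.int64)
    for _ in range(steps):
        # Each unit of intensity survives this step with prob. 1 - death_rate.
        x = rng.binomial(x, 1.0 - death_rate)
    return x  # after enough steps, every pixel has decayed to 0 (black)
```

Because the terminal state is deterministic (an all-black image), generation can start from zeros rather than from random noise, which matches the article's description of producing images "from nothing."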

The team tested Blackout Diffusion on a number of standardized datasets, including the Modified National Institute of Standards and Technology database; the CIFAR-10 dataset, which has images of objects in 10 different classes; and the CelebFaces Attributes Dataset, which consists of more than 200,000 images of human faces. In addition, the team used the discrete nature of Blackout Diffusion to clarify several widespread misconceptions about how diffusion models work internally, providing a critical understanding of generative diffusion models.

They also provide design principles for future scientific applications. “This demonstrates the first foundational study on discrete-state diffusion modeling and points the way toward future scientific applications with discrete data,” Lin said. The team explains that generative diffusion modeling could drastically reduce the time spent running many scientific simulations on supercomputers, which would both support scientific progress and reduce the carbon footprint of computational science. Some of the diverse examples they mention are subsurface reservoir dynamics, chemical models for drug discovery, and single-molecule and single-cell gene expression for understanding biochemical mechanisms in living organisms.

Paper: “Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces.” Proceedings of Machine Learning Research. DOI: 10.48550/arXiv.2305.11089.

Funding: This work was supported by the Laboratory Directed Research and Development (LDRD) program. The team has received further LDRD funding to continue designing generative diffusion models for physical systems critical to the Laboratory’s mission.

LA-UR-23-32592

Contact

Nick Njegomir | (505) 695-8111 | nickn@lanl.gov
