What if the thermal noise that hinders the efficiency of both classical and quantum computers could, instead, be used as a power source? What if computers could make use of the noise instead of suppressing or overcoming it? These are the goals of a relatively new branch of computing known as thermodynamic computing. A collaboration between researchers at the Molecular Foundry and the National Energy Research Scientific Computing Center (NERSC), both U.S. Department of Energy (DOE) user facilities located at Lawrence Berkeley National Laboratory (Berkeley Lab), is bringing them closer to reality. In a paper published in Nature Communications, the researchers have proposed a design and training framework for a type of thermodynamic computer that mimics a neural network, which could drastically reduce the energy requirements of machine learning. 

Modern computing requires energy: a single Google search, for example, consumes enough energy to power a six-watt LED for three minutes. This is partly because computers must contend with thermal noise — that is, the vibration of charge carriers, mostly electrons, within electronically conductive materials. In classical computers, even the smallest devices, such as transistors and gates, operate at energy scales thousands of times larger than that of this vibration. This difference in scale between signal and noise enables the consistent output that makes computation possible, but it comes at an energy cost: classical computers require large amounts of power to work reliably and operate far above the threshold of thermodynamic efficiency.

Both classical and quantum computing seek to eliminate or tamp down thermal noise. But thermodynamic computing, a branch of unconventional computing, inverts the paradigms of both and uses those same fluctuations as its power source. This drastically reduces the amount of external energy required to perform computations and allows for operation at room temperature, unlike many quantum computers. In this way, thermodynamic computing is an exciting example of Beyond-Moore’s-Law microelectronics and low-power, energy-aware computing.

“Thermodynamic computing is noise-powered,” said Molecular Foundry staff scientist Stephen Whitelam, an author on the paper. “The premise of thermodynamic computing is that if you take a physical device with an energy scale comparable to that of thermal energy and leave it alone, it will change state over time, driven by thermal fluctuations. The goal is to program it so that this time evolution does something useful. Classical and quantum computing fight noise; thermodynamic computing is powered by it.”
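The noise-driven evolution Whitelam describes can be illustrated with a standard physics model: overdamped Langevin dynamics, in which a system's state changes under a deterministic force plus thermal kicks. This is a generic sketch for illustration (the paper's actual device model may differ); the double-well potential and step sizes here are arbitrary choices.

```python
import numpy as np

def langevin_step(x, force, dt=1e-3, kT=1.0, rng=None):
    """One Euler-Maruyama step of overdamped Langevin dynamics:
    dx = force(x)*dt + sqrt(2*kT*dt)*noise.
    The noise term models thermal fluctuations at temperature kT."""
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(x.shape)
    return x + force(x) * dt + np.sqrt(2.0 * kT * dt) * noise

# Example: a particle in a double-well potential U(x) = (x^2 - 1)^2.
# Thermal noise alone drives hops between the two wells over time --
# the "change of state" that a thermodynamic computer would harness.
force = lambda x: -4.0 * x * (x**2 - 1.0)   # F = -dU/dx
x = np.zeros(1)
for _ in range(10_000):
    x = langevin_step(x, force)
```

Here no external drive pushes the particle between states; the thermal term alone does the work, which is the sense in which the computation is "noise-powered."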

Overcoming roadblocks

Thus far, two primary challenges have stood in the way of thermodynamic computing as a practical framework for computation. First, existing thermodynamic computers are designed to do computation at thermodynamic equilibrium, meaning researchers must wait for the computer to settle into its lowest-energy configuration before they can perform a calculation. Even if a system’s ground state is well-defined, the amount of time it takes to reach equilibrium is unpredictable – and it can be too long to be practical for day-to-day computational use.

Additionally, the range of computations that can be performed using thermodynamic computing has been limited to solving linear algebra problems. For thermodynamic computing to be useful for general-purpose computation, systems will also need to be able to solve nonlinear calculations.

In their paper, Whitelam and his colleague Corneel Casert of NERSC address these challenges, using digital simulations to demonstrate that nonlinear computations – like those performed by neural networks – are indeed possible using thermodynamic computers that are not working at equilibrium.


According to Whitelam and Casert, when the components of the computer are themselves nonlinear, it becomes possible to train a thermodynamic computer to perform nonlinear computations at specified times, regardless of its equilibrium status. This means the computer operates more like a classical computer, without the need to wait for equilibrium. It also expands the set of thermodynamic algorithms to the same types of complex, nonlinear problems a neural network can solve, meaning thermodynamic computing could be an appropriate tool for machine-learning workloads that have previously been outside its capabilities.

“A nonlinear thermodynamic circuit can behave like a neuron in a neural network,” said Whitelam. “Nonlinearity is what gives a neural network its expressive power. What we reasoned is that if you build these thermodynamic neurons into a connected structure, then that structure should have the expressive power to mimic a neural network and so be able to do machine learning.” 

Together, these solutions expand what thermodynamic computing can do.

Inverted training

The challenge, then, becomes training such a system. A thermodynamic computer is a stochastic system, meaning that no two runs on a thermodynamic computer look the same, and the methods used for training digital neural networks don’t apply. But Whitelam and Casert have offered a solution there as well.

To train Whitelam’s model of the thermodynamic computer, Casert engineered a large-scale computational framework. Using 96 GPUs in parallel on the Perlmutter supercomputer at NERSC, he built and ran massively parallel evolutionary simulations, evaluating billions of noisy dynamical trajectories per generation to discover the most effective network parameters.

In particular, he used a framework known as a genetic algorithm: beginning with a set of different thermodynamic neural networks, he evaluated the effectiveness of each, selected the best-performing networks, mutated them by adding random noise to their parameters, and evaluated the results again. Ultimately, Casert simulated more than a trillion runs of a thermodynamic computer on Perlmutter. This training framework is considerably more costly than the methods used to train digital networks, but it yields a computer that can operate using very little energy once it is built and trained.
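The select-and-mutate loop described above can be sketched in a few lines. This is a generic genetic algorithm for illustration only; the function names and hyperparameters are assumptions, and the toy fitness function stands in for the expensive simulated thermodynamic network used in the actual work.

```python
import numpy as np

def evolve(fitness, n_params, pop_size=50, n_keep=10,
           noise_scale=0.1, generations=100, seed=0):
    """Minimal genetic algorithm: score a population of parameter
    vectors, keep the best performers, and mutate them by adding
    Gaussian noise, repeating for a fixed number of generations."""
    rng = np.random.default_rng(seed)
    pop = rng.standard_normal((pop_size, n_params))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        survivors = pop[np.argsort(scores)[-n_keep:]]      # keep the fittest
        parents = survivors[rng.integers(n_keep, size=pop_size)]
        pop = parents + noise_scale * rng.standard_normal(parents.shape)
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmax(scores)]

# Toy stand-in for the simulated network's performance:
# fitness peaks when all parameters are near 2.
toy_fitness = lambda p: -np.sum((p - 2.0) ** 2)
best = evolve(toy_fitness, n_params=5)
```

Because selection only compares scores, this approach needs no gradients, which is why it suits a stochastic system where no two runs look the same.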

“It’s a very different way of optimizing a neural network. Training a thermodynamic neural network by simulating it digitally is expensive, but once trained and built as physical hardware, we can perform inference on that hardware for a very low energy cost,” said Casert.

The combination of design and training shows that a machine-learning computer that uses far less energy is possible.

More hardware, more algorithms

The field of thermodynamic computing is relatively young, so where does it go from here? According to Whitelam, it’s important to work out how to realize these designs in hardware. Currently, the team is looking for experimental partners to make both hardware and software a reality, another step in exploring what’s possible with thermodynamic computing.

Another step, he says, is more algorithms. Existing algorithms are meant for systems working at equilibrium; with that requirement no longer a roadblock, new ones will need to be developed. The field will also need new algorithms for nonlinear computations, mirroring the ones used for digital neural networks. 

“It’s an exciting field,” said Whitelam. “We’re looking for more efficient ways of computing, and thermodynamic computing is definitely one of them.”

###

Lawrence Berkeley National Laboratory (Berkeley Lab) is committed to groundbreaking research focused on discovery science and solutions for abundant and reliable energy supplies. The lab’s expertise spans materials, chemistry, physics, biology, earth and environmental science, mathematics, and computing. Researchers from around the world rely on the lab’s world-class scientific facilities for their own pioneering research. Founded in 1931 on the belief that the biggest problems are best addressed by teams, Berkeley Lab and its scientists have been recognized with 17 Nobel Prizes. Berkeley Lab is a multiprogram national laboratory managed by the University of California for the U.S. Department of Energy’s Office of Science. 

DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.
