Researchers report spintronics-based probabilistic computers compatible with current AI

Researchers at Tohoku University and the University of California, Santa Barbara, have demonstrated a proof-of-concept, energy-efficient computer compatible with current AI. It exploits the stochastic behavior of nanoscale spintronics devices and is particularly suited to probabilistic computation problems such as inference and sampling.

The team presented the results at the IEEE International Electron Devices Meeting (IEDM 2023) on December 12, 2023.

There is an increasing demand for domain-specific hardware. A probabilistic computer with naturally stochastic building blocks (probabilistic bits, or p-bits) is a representative example, owing to its potential to efficiently address various computationally hard tasks in machine learning (ML) and artificial intelligence (AI).
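For intuition, here is a minimal software sketch of a single p-bit, following the standard update rule m = sgn(tanh(I) − r) from the p-bit literature (in hardware, the random number r is supplied by device noise rather than a software generator):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_bit(I: float) -> int:
    """One p-bit sample: returns +1 or -1, biased by the input I.

    Implements m = sgn(tanh(I) - r) with r uniform on [-1, 1];
    in an sMTJ realization, r comes from thermal fluctuations.
    """
    r = rng.uniform(-1.0, 1.0)
    return 1 if np.tanh(I) > r else -1

# Averaging many samples recovers tanh(I), the p-bit's mean output.
samples = [p_bit(0.5) for _ in range(100_000)]
print(np.mean(samples))  # ~= tanh(0.5), about 0.46
```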

Just as quantum computers are a natural fit for inherently quantum problems, room-temperature probabilistic computers are suitable for intrinsically probabilistic algorithms, which are widely used in machine learning training and for computationally hard problems in optimization, sampling, and related areas.

In this work, the researchers have shown that robust and fully asynchronous (clockless) probabilistic computers can be efficiently realized at scale using a probabilistic spintronic device, the stochastic magnetic tunnel junction (sMTJ), interfaced with powerful field-programmable gate arrays (FPGAs).
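To make the "clockless" idea concrete, the sketch below emulates asynchronous operation on a toy network of three coupled p-bits (the weights and biases are illustrative, not from the paper): at each step a randomly chosen p-bit updates, much as sMTJ fluctuations arrive in no fixed order in hardware.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network: 3 coupled p-bits with symmetric weights J and biases h
# (Boltzmann-machine-style; values chosen for illustration only).
J = np.array([[ 0.0, 1.0, -0.5],
              [ 1.0, 0.0,  0.8],
              [-0.5, 0.8,  0.0]])
h = np.array([0.2, -0.1, 0.0])
m = rng.choice([-1, 1], size=3)

# Asynchronous emulation: one randomly selected p-bit fires per step,
# standing in for the uncoordinated fluctuations of real sMTJs.
for _ in range(10_000):
    i = rng.integers(3)
    I = h[i] + J[i] @ m          # synaptic input to p-bit i
    m[i] = 1 if np.tanh(I) > rng.uniform(-1.0, 1.0) else -1

print(m)  # one sample from the network's stationary distribution
```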

Until now, however, sMTJ-based probabilistic computers have only been capable of implementing recurrent neural networks; a scheme to implement feedforward neural networks has been awaited.

"As the feedforward neural networks underpin most modern AI applications, augmenting probabilistic computers toward this direction should be a pivotal step to hit the market and enhance the computational capabilities of AI," said Professor Kerem Camsari, the Principal Investigator at the University of California, Santa Barbara.

In their recent breakthrough, the researchers made two important advances. First, leveraging the Tohoku University team's earlier device-level work on stochastic magnetic tunnel junctions, they demonstrated the fastest p-bits at the circuit level by using in-plane sMTJs, fluctuating roughly every microsecond, about three orders of magnitude faster than previous reports.

Second, by enforcing an update order at the computing-hardware level and leveraging layer-by-layer parallelism, they demonstrated the basic operation of a Bayesian network as an example of a feedforward stochastic neural network.
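A minimal sketch of that layer-by-layer scheme (layer sizes and weights below are made up for illustration; the paper maps Bayesian-network conditional probabilities onto p-bit inputs): parent layers are sampled before child layers, while all p-bits within a layer update in parallel.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_layer(prev: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Sample a whole layer of p-bits at once, conditioned on the previous
    layer: the enforced update order ensures parents fire before children."""
    I = W @ prev + b
    r = rng.uniform(-1.0, 1.0, size=b.shape)
    return np.where(np.tanh(I) > r, 1, -1)

# Hypothetical 3-layer feedforward stochastic network (shapes illustrative).
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(3, 4)), rng.normal(size=3)

root = rng.choice([-1, 1], size=2)     # root nodes sampled first
hidden = sample_layer(root, W1, b1)    # then each layer in topological order,
output = sample_layer(hidden, W2, b2)  # with intra-layer updates in parallel
print(output)
```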

"Current demonstrations are small-scale, however, these designs can be scaled up by making use of CMOS-compatible Magnetic RAM (MRAM) technology, enabling significant advances in machine learning applications while also unlocking the potential for efficient hardware realization of deep/convolutional neural networks," said Professor Shunsuke Fukami, the principal investigator at Tohoku University.

Posted: Dec 14, 2023 by Roni Peleg