The exponential growth in computational power over the past five decades was driven by the scaling of complementary metal-oxide-semiconductor (CMOS) technology. Scaling has three major aspects: energy scaling ensures a constant computational power budget; size scaling fits more transistors into the same area, resulting in higher computational density; and complexity scaling delivers architectural improvements for higher computing efficiency. Unfortunately, the current "Dark Silicon" era is deprived of these quintessential scaling trends owing to fundamental limitations at the material, device, and architectural levels, and awaits innovation to restore the growth in computational power.

Current research in neuromorphic computing has great potential to further these computational capabilities, which led me to look into neural networks. The ability of the brain to process large amounts of information and seamlessly arrive at conclusions in problems like pattern classification, while consuming a minuscule amount of power (~20 W) within a relatively small area, is extremely fascinating. This is achieved through a highly complex architecture involving billions of neurons connected through trillions of synapses, which results in highly parallel computation. Additionally, the brain employs analog in-memory computation, which enhances its energy efficiency compared to deterministic digital computing based on the von Neumann architecture, where memory and computing are separated. Compared to current state-of-the-art supercomputers, the brain demonstrates extreme energy and area efficiency while trading off speed and information storage capacity. Being part of a device group, the decline in scaling prompted us to think of novel device ideas that could be implemented in neural networks to reinstate the different aspects of scaling.
This resulted in the conception of the "Gaussian synapse", built through heterogeneous integration of two-dimensional (2D) field-effect transistors (FETs) biased in their respective subthreshold regimes. The Gaussian synapse facilitates energy scaling, the 2D materials enable size scaling without losing electrostatic control, and finally, probabilistic neural networks (PNNs) enable complexity scaling through their ability to seamlessly capture non-linear decision boundaries using fewer components than typical artificial neural networks (ANNs). The Gaussian synapse is realized through the series connection of an n-type molybdenum disulfide (MoS2) FET and a p-type black phosphorus (BP) FET, resulting in a Gaussian transfer function. The FETs are then top-gated to introduce dynamic control over the amplitude, mean, and standard deviation of the Gaussian function, which enables its use in PNNs. The Gaussian synapses are subsequently used to classify EEG signals into the different brain waves characterized by their frequency: alpha, beta, gamma, delta, and theta waves. Looking ahead, ultra-low-power devices with in-memory computing will further drive the scaling required to achieve highly efficient Gaussian synapses.
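The idea described above can be illustrated with a minimal behavioral sketch in Python. This is not a device-physics model of the MoS2/BP stack; it simply treats each Gaussian synapse as a tunable Gaussian transfer function (amplitude, mean, standard deviation set by the top gates) and uses one such unit per brain-wave band in a toy PNN-style classifier. The band center frequencies and widths are illustrative assumptions, not values from the paper.

```python
import math

# Behavioral model of a Gaussian synapse: the output (drain current) as a
# function of input (gate voltage, here repurposed as frequency in Hz)
# follows a Gaussian. Amplitude A, mean mu, and standard deviation sigma
# stand in for the quantities tuned via the top gates.
def gaussian_synapse(x, A=1.0, mu=0.0, sigma=1.0):
    return A * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Toy PNN layer: one Gaussian unit per brain-wave band, centered on
# hypothetical band mid-frequencies in Hz (illustrative values only).
BANDS = {
    "delta": (2.0, 2.0),    # (mu, sigma)
    "theta": (6.0, 2.0),
    "alpha": (10.5, 2.5),
    "beta":  (21.0, 8.0),
    "gamma": (50.0, 20.0),
}

def classify(freq_hz):
    """Assign a dominant frequency to the band whose Gaussian unit responds most."""
    return max(BANDS, key=lambda b: gaussian_synapse(freq_hz,
                                                     mu=BANDS[b][0],
                                                     sigma=BANDS[b][1]))

print(classify(10.0))  # a ~10 Hz input lands in the alpha band
```

Because each class is represented by a single Gaussian unit, a non-linear decision boundary between bands emerges directly from the argmax over unit responses, without the hidden layers an equivalent ANN would need.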