Abstract
Multi-valued logic (MVL) is a promising approach to mitigating the high power consumption and area overhead imposed by the limitations of binary systems, and quaternary logic is among the MVL variants most compatible with binary logic. While traditional CMOS technology struggles to realize multiple threshold voltage levels efficiently, leading to greater complexity and energy inefficiency, carbon nanotube FETs (CNTFETs) offer tunable thresholds well suited to MVL applications. Neural networks can recover information from noisy or inaccurate data, detect trends, and extract patterns that traditional computing methods, or humans, find difficult to extract. In artificial neural networks (ANNs), the substantial memory required to store the numerous weights is a critical consideration. Leveraging emerging technologies, magnetic tunnel junctions (MTJs) for non-volatility and CNTFETs for multiple threshold voltage values, provides both non-volatile storage and a practical route to the hardware implementation of MVL systems, enabling a next generation of memory for ANN applications. This paper proposes an algorithm for quantizing neural networks using quaternary logic, and the circuits required to implement the resulting quaternary neural networks are designed accordingly. The simulation results demonstrate that the proposed quantization algorithm significantly reduces overall memory requirements compared to full-precision counterparts, with minimal accuracy degradation. Specifically, the accuracy drop is less than 1.67% on CIFAR-10 and 1.22% on CIFAR-100 for ResNet-18, while accuracy improvements are even observed for MNIST on MLP, CIFAR-10 on LeNet-5, and CIFAR-10 and CIFAR-100 on VGG-16.
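The paper's exact quantization algorithm is given in the body of the article; as a rough illustration of the underlying idea, the sketch below maps full-precision weights to four symmetric levels, i.e., 2 bits per weight. The per-tensor scaling and the level set {-3, -1, 1, 3} are assumptions for this example, not the authors' method.

```python
import numpy as np

def quantize_quaternary(w, levels=(-3.0, -1.0, 1.0, 3.0)):
    """Map each weight to the nearest of four quaternary levels.

    Returns the dequantized weights and the 2-bit level codes.
    A single per-tensor scale aligns the weight range with the levels.
    """
    levels = np.asarray(levels)
    scale = np.max(np.abs(w)) / np.max(np.abs(levels)) if w.size else 1.0
    if scale == 0:
        scale = 1.0
    # Index of the nearest level for each scaled weight (broadcasted).
    idx = np.abs(w[..., None] / scale - levels).argmin(axis=-1)
    return levels[idx] * scale, idx

w = np.array([0.9, -0.05, 0.4, -0.7])
wq, codes = quantize_quaternary(w)
```

Storing the 2-bit codes in place of 32-bit floating-point weights is what yields the memory reduction the abstract refers to (a 16x reduction per weight before any overhead).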
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.
References
Deng L, Jiao P, Pei J, Wu Z, Li G (2018) GXNOR-net: training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework. Neural Netw 100:49–58. https://doi.org/10.1016/j.neunet.2018.01.010
Mondal A, Srivastava A (2020) Energy-efficient design of MTJ-based neural networks with stochastic computing. ACM J Emerg Technol Comput Syst 16(1):1–27. https://doi.org/10.1145/3359622
Roy U, Pramanik T, Roy S, Chatterjee A, Register LF, Banerjee SK (2021) Machine learning for statistical modeling. ACM Trans Design Autom Electron Syst 26(3):1–17. https://doi.org/10.1145/3440014
Rastegari M, Ordonez V, Redmon J, Farhadi A (2016) XNOR-Net: ImageNet classification using binary convolutional neural networks. In: Computer Vision – ECCV 2016. Springer International Publishing, Cham, pp 525–542
Ananthakrishnan A, Allen MG (2021) All-passive hardware implementation of multilayer perceptron classifiers. IEEE Trans Neural Netw Learn Syst 32:4086–4095. https://doi.org/10.1109/TNNLS.2020.3016901
Schmuck M, Benini L, Rahimi A (2019) Hardware optimizations of dense binary hyperdimensional computing: rematerialization of hypervectors, binarized bundling, and combinational associative memory. ACM J Emerg Technol Comput Syst 15(4):1–25. https://doi.org/10.1145/3314326
Samiee A, Borulkar P, DeMara RF, Zhao P, Bai Y (2019) Low-energy acceleration of binarized convolutional neural networks using a spin hall effect based logic-in-memory architecture. IEEE Trans Emerg Topics Comput 9(2):928–940. https://doi.org/10.1109/tetc.2019.2915589
Ghasemi SA, Jahannia B, Farbeh H (2022) GraphA: an efficient ReRAM-based architecture to accelerate large scale graph processing. J Syst Archit 133:102755. https://doi.org/10.1016/j.sysarc.2022.102755
Wang C et al (2019) Cross-point resistive memory. ACM Trans Des Autom Electr Syst 24(4):1–37. https://doi.org/10.1145/3325067
Huang K et al (2023) Structured dynamic precision for deep neural networks quantization. ACM Trans Des Autom Electr Syst 28(1):1–24. https://doi.org/10.1145/3549535
Yang Z et al (2020) Searching for low-bit weights in quantized neural networks. Adv Neural Inf Process Syst 33:4091–4102
Natsui M, Chiba T, Hanyu T (2018) Design of MTJ-Based nonvolatile logic gates for quantized neural networks. Microelectron J 82:13–21. https://doi.org/10.1016/j.mejo.2018.10.005
Ben Jamaa MH et al (2008) Variability-aware design of multilevel logic decoders for nanoscale crossbar memories. IEEE Trans Comput-Aided Des Integr Circuits Syst 27(11):2053–2067. https://doi.org/10.1109/tcad.2008.2006076
Bakhtiary V, Amirany A, Moaiyeri MH, Jafari K (2023) An SEU-hardened ternary SRAM design based on efficient ternary C-elements using CNTFET technology. Microelectron Reliab 140:114881. https://doi.org/10.1016/j.microrel.2022.114881
Raghavan BS, Bhaaskaran VSK (2017) Design of novel multiple valued logic (MVL) circuits. In: 2017 International Conference on Nextgen Electronic Technologies: Silicon to Software (ICNETS2)
Levashenko V, Lukyanchuk I, Zaitseva E, Kvassay M, Rabcan J, Rusnak P (2020) Development of programmable logic array for multiple-valued logic functions. IEEE Trans Comput Aided Des Integr Circuits Syst 39(12):4854–4866
Amirany A, Jafari K, Moaiyeri MH (2020) BVA-NQSL: a bio-inspired variation aware nonvolatile quaternary spintronic latch. IEEE Magn Lett 11:1–5. https://doi.org/10.1109/lmag.2020.3036834
Cai Y, Tang T, Xia L, Li B, Wang Y, Yang H (2020) Low bit-width convolutional neural network on RRAM. IEEE Trans Comput Aided Des Integr Circuits Syst 39(7):1414–1427. https://doi.org/10.1109/tcad.2019.2917852
Li J et al (2013) Low-energy volatile STT-RAM cache design using cache-coherence-enabled adaptive refresh. ACM Trans Des Autom Electron Syst 19(1):1–23. https://doi.org/10.1145/2534393
Yang N, Wang X, Lin X, Zhao W (2021) Exploiting carbon nanotube FET and magnetic tunneling junction for near-memory-computing paradigm. IEEE Trans Electron Devices 68(4):1975–1979. https://doi.org/10.1109/ted.2021.3059817
Lee CS, Pop E, Franklin AD, Haensch W, Wong HSP (2015) A compact virtual-source model for carbon nanotube FETs in the Sub-10-nm Regime—Part I: intrinsic elements. IEEE Trans Electron Devices 62(9):3061–3069. https://doi.org/10.1109/ted.2015.2457453
Lee C-S, Pop E, Franklin AD, Haensch W, Wong H-SP (2015) A compact virtual-source model for carbon nanotube FETs in the Sub-10-nm Regime—Part II: extrinsic elements, performance assessment, and design optimization. IEEE Trans Electron Devices 62(9):3070–3078. https://doi.org/10.1109/ted.2015.2457424
Bishop MD, Hills G, Srimani T, Lau C, Murphy D, Fuller S, Humes J, Ratkovich A, Nelson M, Shulaker MM (2020) Fabrication of carbon nanotube field-effect transistors in commercial silicon manufacturing facilities. Nature Electron 3(8):492–501. https://doi.org/10.1038/s41928-020-0419-7
Ho R, Lau C, Hills G, Shulaker MM (2019) Carbon nanotube CMOS analog circuitry. IEEE Trans Nanotechnol 18:845–848. https://doi.org/10.1109/tnano.2019.2902739
Hills G et al (2019) Modern microprocessor built from complementary carbon nanotube transistors. Nature 572(7771):595–602. https://doi.org/10.1038/s41586-019-1493-8
Jooq MKQ, Moaiyeri MH, Al-Shidaifat A, Song H (2022) Ultra-efficient and robust auto-nonvolatile schmitt trigger-based latch design using ferroelectric CNTFET Technology. IEEE Trans Ultrason Ferroelectr Freq Control 69(5):1829–1840. https://doi.org/10.1109/TUFFC.2022.3158822
Shulaker MM et al (2017) Three-dimensional integration of nanotechnologies for computing and data storage on a single chip. Nature 547(7661):74–78. https://doi.org/10.1038/nature22994
Pajouhi Z (2020) Ultralow power nonvolatile logic based on spin-orbit and exchange coupled nanowires. IEEE Trans Comput-Aided Des Integr Circuits Syst 39(9):1866–1874. https://doi.org/10.1109/TCAD.2019.2925373
Wang Z, Zhao W, Deng E, Klein J-O, Chappert C (2015) Perpendicular-anisotropy magnetic tunnel junction switched by spin-Hall-assisted spin-transfer torque. J Phys D: Appl Phys 48(6):065001. https://doi.org/10.1088/0022-3727/48/6/065001
Ikeda S et al (2008) Tunnel magnetoresistance of 604% at 300 K by suppression of Ta diffusion in CoFeB/MgO/CoFeB pseudo-spin-valves annealed at high temperature. Appl Phys Lett 93(8):082508. https://doi.org/10.1063/1.2976435
Slonczewski JC (1989) Conductance and exchange coupling of two ferromagnets separated by a tunneling barrier. Phys Rev B Condens Matter 39(10):6995–7002. https://doi.org/10.1103/physrevb.39.6995
Amirany A, Epperson G, Patooghy A, Rajaei R (2021) Accuracy-adaptive spintronic adder for image processing applications. IEEE Trans Magn 57(6):1–10. https://doi.org/10.1109/TMAG.2021.3069161
Amirany A, Jafari K, Moaiyeri MH (2022) DDR-MRAM: double data rate magnetic RAM for efficient artificial intelligence and cache applications. IEEE Trans Magn 58(6):1–9. https://doi.org/10.1109/TMAG.2022.3162030
Jamshidi V, Fazeli M (2018) Design of ultra low power current mode logic gates using magnetic cells. AEU-Int J Electron C 83:270–279. https://doi.org/10.1016/j.aeue.2017.09.009
Amirany A, Jafari K, Moaiyeri MH (2020) High-performance and soft error immune spintronic retention latch for highly reliable processors. In: 2020 Iranian Conference on Electrical Engineering (ICEE)
Smith KC (1981) The prospects for multivalued logic: a technology and applications view. IEEE Trans Comput 30(09):619–634
BahmanAbadi M, Amirany A, Jafari K, Moaiyeri MH (2022) Efficient and highly reliable spintronic non-volatile quaternary memory based on carbon nanotube FETs and Multi-TMR MTJs. ECS J Solid State Sci Technol 11(6):061007. https://doi.org/10.1149/2162-8777/ac77bb
Haykin S (1994) Neural networks: a comprehensive foundation. Prentice Hall PTR
Gurney K (1997) An introduction to neural networks. Taylor & Francis
Lecun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324. https://doi.org/10.1109/5.726791
Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556
He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Wang E, Davis JJ, Cheung PY, Constantinides GA (2020) LUTNet: learning FPGA configurations for highly efficient neural network inference. IEEE Trans Comput 69(12):1795–1808
Huang S, Jiang H, Yu S (2023) Hardware-aware quantization/mapping strategies for compute-in-memory accelerators. ACM Trans Des Autom Electron Syst 28(3):1–23
Moaiyeri MH, Navi K, Hashemipour O (2012) Design and evaluation of CNFET-based quaternary circuits. Circuits Syst Signal Process 31(5):1631–1652
Fakhari S, Hajizadeh Bastani N, Moaiyeri MH (2018) A low-power and area-efficient quaternary adder based on CNTFET switching logic. Analog Integr Circuits Signal Process 98(1):221–232. https://doi.org/10.1007/s10470-018-1367-2
Amirany A, Moaiyeri MH, Jafari K (2021) MTMR-SNQM: multi-tunnel magnetoresistance spintronic non-volatile quaternary memory. In: 2021 IEEE 51st International Symposium on Multiple-Valued Logic (ISMVL), pp 172–177
Lin S, Kim Y-B, Lombardi F (2011) CNTFET-based design of ternary logic gates and arithmetic circuits. IEEE Trans Nanotechnol 10(2):217–225. https://doi.org/10.1109/tnano.2009.2036845
Datla SRR, Thornton MA (2010) Quaternary voltage-mode logic cells and fixed-point multiplication circuits. In: 2010 40th IEEE International Symposium on Multiple-Valued Logic, pp 128–133
Rahmati S, Farshidi E, Ganji J (2021) Low energy and area efficient quaternary multiplier with carbon nanotube field effect transistors. ETRI J 43(4):717–727
Doostaregan A, Abrishamifar A (2020) On the design of robust, low power with high noise immunity quaternary circuits. Microelectron J 102:104774
Sharifi F, Moaiyeri MH, Navi K, Bagherzadeh N (2015) Quaternary full adder cells based on carbon nanotube FETs. J Comput Electron 14(3):762–772. https://doi.org/10.1007/s10825-015-0714-0
Moaiyeri MH, Sedighiani S, Sharifi F, Navi K (2016) Design and analysis of carbon nanotube FET based quaternary full adders. Front Inf Technol Electron Eng 17(10):1056–1066. https://doi.org/10.1631/fitee.1500214
Ebrahimi SA, Reshadinezhad MR, Bohlooli A, Shahsavari M (2016) Efficient CNTFET-based design of quaternary logic gates and arithmetic circuits. Microelectron J 53:156–166
Daraei A, Hosseini SA (2019) Novel energy-efficient and high-noise margin quaternary circuits in nanoelectronics. AEU-Int J Electron C 105:145–162. https://doi.org/10.1016/j.aeue.2019.04.012
Yin P, Zhang S, Lyu J, Osher S, Qi Y, Xin J (2018) BinaryRelax: a relaxation approach for training deep neural networks with quantized weights. SIAM J Imag Sci 11(4):2205–2223. https://doi.org/10.1137/18m1166134
Long Y, Lee E, Kim D, Mukhopadhyay S (2020) Q-PIM: a genetic algorithm based flexible DNN quantization method and application to processing-in-memory platform. In: 2020 57th ACM/IEEE Design Automation Conference (DAC)
Kang B (2022) Energy-aware DNN quantization for processing-in-memory architecture
Sun S, Bai J, Shi Z, Zhao W, Kang W (2024) CIM2PQ: an arraywise and hardware-friendly mixed precision quantization method for analog computing-in-memory. IEEE Trans Comput Aided Des Integr Circuits Syst 43(7):2084–2097. https://doi.org/10.1109/tcad.2024.3358609
Funding
The authors received no financial support for the research, authorship, or publication of this article.
Author information
Authors and Affiliations
Contributions
Motahareh BahmanAbadi contributed conceptualization, methodology, investigation, software, and writing—original draft. Abdolah Amirany contributed methodology, investigation, software, and writing—original draft. Mohammad Hossein Moaiyeri contributed methodology, validation, software, writing—reviewing and editing, and supervision. Kian Jafari contributed methodology, validation, software, and writing—reviewing and editing.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that no conflict of interest exists in the submission of this article, and all authors approved the article.
Ethical approval
This article contains no studies with human participants or animals performed by any authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
BahmanAbadi, M., Amirany, A., Moaiyeri, M.H. et al. Synergizing spintronics and quaternary logic: a hardware accelerator for neural networks with optimized quantization algorithm. J Supercomput 81, 669 (2025). https://doi.org/10.1007/s11227-025-07176-z
Accepted:
Published:
DOI: https://doi.org/10.1007/s11227-025-07176-z