- research-article, September 2020
Energy-Efficient Hardware for Language Guided Reinforcement Learning
- Aidin Shiri,
- Arnab Neelim Mazumder,
- Bharat Prakash,
- Nitheesh Kumar Manjunath,
- Houman Homayoun,
- Avesta Sasan,
- Nicholas R. Waytowich,
- Tinoosh Mohsenin
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 131–136, https://doi.org/10.1145/3386263.3407652
Reinforcement learning (RL) has shown great performance in solving sequential decision-making problems. While much work has been done on processing state information such as images, there has been some effort towards integrating natural language ...
- research-article, September 2020
A Review of In-Memory Computing Architectures for Machine Learning Applications
- Sathwika Bavikadi,
- Purab Ranjan Sutradhar,
- Khaled N. Khasawneh,
- Amlan Ganguly,
- Sai Manoj Pudukotai Dinakarrao
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 89–94, https://doi.org/10.1145/3386263.3407649
... to meet the extensive computational load presented by the rapidly growing Machine Learning (ML) and Artificial Intelligence (AI) algorithms such as Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs). In order to obtain hardware ...
- research-article, September 2020
MNSIM 2.0: A Behavior-Level Modeling Tool for Memristor-based Neuromorphic Computing Systems
- Zhenhua Zhu,
- Hanbo Sun,
- Kaizhong Qiu,
- Lixue Xia,
- Gokul Krishnan,
- Guohao Dai,
- Dimin Niu,
- Xiaoming Chen,
- X. Sharon Hu,
- Yu Cao,
- Yuan Xie,
- Yu Wang,
- Huazhong Yang
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 83–88, https://doi.org/10.1145/3386263.3407647
Memristor-based neuromorphic computing systems offer alternative solutions for boosting the computing energy efficiency of Neural Network (NN) algorithms. Because of the large-scale applications and the large architecture design space, many factors will ...
- abstract, September 2020
Deep Neural Network accelerator with Spintronic Memory
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Page 51, https://doi.org/10.1145/3386263.3407646
Utilizing emerging nonvolatile memories to accelerate deep neural networks (DNNs) has been considered a promising approach to solving the bottleneck of data transfer during multiplication and accumulation (MAC). Among them, spintronic ...
- research-article, September 2020
Exploring DNA Alignment-in-Memory Leveraging Emerging SOT-MRAM
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 277–282, https://doi.org/10.1145/3386263.3407590
In this work, we review two alternative Processing-in-Memory (PIM) accelerators based on Spin-Orbit-Torque Magnetic Random Access Memory (SOT-MRAM) to execute DNA short-read alignment based on an optimized and hardware-friendly alignment algorithm. We ...
- research-article, September 2020
A Background Noise Self-adaptive VAD Using SNR Prediction Based Precision Dynamic Reconfigurable Approximate Computing
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 271–275, https://doi.org/10.1145/3386263.3407589
This paper proposes a background-noise self-adaptive voice activity detection (VAD) accelerator using SNR-prediction-based precision dynamic reconfigurable approximate computing. To improve energy efficiency while maintaining high recognition ...
- short-paper, September 2020
In-Memory Computing: The Next-Generation AI Computing Paradigm
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 265–270, https://doi.org/10.1145/3386263.3407588
To overcome the memory bottleneck of the von Neumann architecture, various memory-centric computing techniques are emerging to reduce the latency and energy consumption caused by data communication. The great success of artificial intelligence (AI) ...
- research-article, September 2020
An In-memory Highly Reconfigurable Logic Circuit Based on Diode-assisted Enhanced Magnetoresistance Device
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 259–264, https://doi.org/10.1145/3386263.3407587
In the post-Moore era, the in-memory-processing (IMP) technique has attracted great attention as a way to overcome the von Neumann bottleneck and the memory wall caused by the separation of memory and processor. Novel non-volatile memory (NVM) based on ...
- short-paper, September 2020
Energy-Efficient Machine Learning Accelerator for Binary Neural Networks
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 77–82, https://doi.org/10.1145/3386263.3407582
The binary neural network (BNN) has shown great potential for power-efficient, high-throughput implementation. Compared with its counterpart, the convolutional neural network (CNN), a BNN is trained with binary-constrained weights and activations, ...
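As the abstract above notes, a BNN constrains weights and activations to two values, which lets each multiply-accumulate collapse to an XNOR-and-popcount step. A minimal numerical sketch of that idea (function names and values here are illustrative, not taken from the paper):

```python
# Illustrative BNN arithmetic: with operands constrained to {-1, +1},
# counting sign agreements (the XNOR/popcount step in hardware) gives
# the dot product as dot = 2 * n_agree - n.
def binarize(xs):
    """Map real values to {-1, +1} by sign (zero maps to +1)."""
    return [1 if x >= 0 else -1 for x in xs]

def binary_mac(a, w):
    """Dot product of two {-1, +1} vectors via agreement counting."""
    n_agree = sum(1 for x, y in zip(a, w) if x == y)
    return 2 * n_agree - len(a)

a = binarize([0.3, -1.2, 0.7, -0.1])   # -> [1, -1, 1, -1]
w = binarize([0.5, 0.4, -0.2, -0.9])   # -> [1, 1, -1, -1]
assert binary_mac(a, w) == sum(x * y for x, y in zip(a, w))  # both are 0
```

Because the popcount replaces full-precision multipliers, this is where BNN accelerators recover their power and throughput advantage over CNN hardware.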
- research-article, September 2020
IMC-Sort: In-Memory Parallel Sorting Architecture using Hybrid Memory Cube
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 45–50, https://doi.org/10.1145/3386263.3407581
Processing-in-memory (PIM) architectures have gained significant importance as an alternative paradigm to von Neumann architectures to alleviate the memory wall and technology scaling problems. PIM architectures have achieved significant latency and ...
- short-paper, September 2020
Effective Algorithm-Accelerator Co-design for AI Solutions on Edge Devices
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 283–290, https://doi.org/10.1145/3386263.3406956
High-quality AI solutions require joint optimization of AI algorithms, such as deep neural networks (DNNs), and their hardware accelerators. To improve the overall solution quality as well as to boost design productivity, efficient algorithm and ...
- research-article, September 2020
Architecture-Accuracy Co-optimization of ReRAM-based Low-cost Neural Network Processor
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 427–432, https://doi.org/10.1145/3386263.3406954
Resistive RAM (ReRAM) is a promising technology with such advantages as small device size and in-memory-computing capability. However, designing optimal AI processors based on ReRAMs is challenging due to the limited precision and the complex interplay ...
- research-article, September 2020
Accelerating RRT Motion Planning Using TCAM
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 481–486, https://doi.org/10.1145/3386263.3406948
Real-time motion planning is important for robot movement. In motion planning, path search and collision detection are two performance bottlenecks. In this paper, we adopt a range-based matching scheme with ternary content-addressable memories (TCAMs) ...
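The range-based matching the abstract mentions rests on the defining TCAM feature: each stored entry is a ternary pattern with don't-care bits, so one lookup can cover a whole interval of keys. A small software model of that behavior (the prefix encoding below is a generic illustration, not necessarily the paper's exact scheme):

```python
# Software model of a TCAM entry: a pattern over {'0', '1', 'x'},
# where 'x' matches either bit value (don't-care).
def tcam_match(key_bits, pattern):
    """True if the bit string matches the ternary pattern."""
    return all(p in ('x', k) for k, p in zip(key_bits, pattern))

# A contiguous range can be covered by a few prefix patterns; e.g. for
# 4-bit keys, the range [8, 11] is exactly the single pattern '10xx'.
assert tcam_match(format(9, '04b'), '10xx')        # 9 = 0b1001 is in [8, 11]
assert not tcam_match(format(12, '04b'), '10xx')   # 12 = 0b1100 is not
```

In hardware, every entry is compared in parallel in one cycle, which is what makes TCAM attractive for the path-search and collision-detection bottlenecks named above.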
- research-article, September 2020
Defect-Tolerant Mapping of CMOL Circuits with Delay Optimization
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 451–456, https://doi.org/10.1145/3386263.3406944
A CMOS/nanowire/molecular hybrid (CMOL) circuit contains a considerable number of defective nanodevices after manufacturing; therefore, defect-tolerant cell mapping is critical for logic implementation in the CMOL architecture. To the best of our ...
- research-article, September 2020
Analog Circuit Implementation of Neurons with Multiply-Accumulate and ReLU Functions
GLSVLSI '20: Proceedings of the 2020 on Great Lakes Symposium on VLSI, Pages 493–498, https://doi.org/10.1145/3386263.3406941
Although Artificial Neural Networks (ANNs) are inspired by biological neural systems, most ANNs today are implemented with digital circuitry and use binary values in computation. In recent years, analog neuromorphic systems have gained a lot of ...
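The neuron function this paper targets in analog circuitry is the standard multiply-accumulate followed by ReLU; a minimal numerical reference for what the analog blocks compute (values below are illustrative, not from the paper):

```python
# Digital reference for the analog neuron: weighted sum plus bias
# (the multiply-accumulate), then ReLU clipping negatives to zero.
def neuron(xs, ws, bias):
    mac = sum(x * w for x, w in zip(xs, ws)) + bias
    return max(0.0, mac)

y = neuron([1.0, -2.0, 0.5], [0.2, 0.4, -1.0], 0.1)
# weighted sum = 0.2 - 0.8 - 0.5 = -1.1; plus bias -> -1.0; ReLU -> 0.0
assert y == 0.0
```

An analog implementation realizes the same function with currents or charges instead of digital multipliers, which is the efficiency argument such neuromorphic designs make.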