
IEICE Electronics Express
Online ISSN : 1349-2543
ISSN-L : 1349-2543
LETTER
A proposal for enhancing training speed in deep learning models based on memory activity survey
Dang Tuan Kiet, Binh Kieu-Do-Nguyen, Trong-Thuc Hoang, Khai-Duy Nguyen, Xuan-Tu Tran, Cong-Kha Pham

2021 Volume 18 Issue 15 Pages 20210252

Abstract

The Deep Learning (DL) training process involves intensive computations that require a large number of memory accesses. Many studies have surveyed memory behavior during DL training, using well-known profiling tools or improving existing tools to monitor the training process. This paper presents a new profiling approach based on a cooperative software-hardware solution. The idea is to use Field-Programmable Gate Array (FPGA) memory as the main memory for DL training on a computer, so that memory behavior can be monitored and evaluated from both the software and hardware points of view. The most common DL models are selected for the tests, including ResNet, VGG, AlexNet, and GoogLeNet, with CIFAR-10 as the training dataset. The experimental results show that the ratio of read to write transactions is roughly 3 to 1. The requested allocations vary from 2 B to 64 MB, with the most frequently requested sizes falling between approximately 16 KB and 64 KB. Based on these statistics, we suggest improving the training speed with an L4 cache for the Double-Data-Rate (DDR) memory. Our recommended L4 cache configuration improves DDR performance by about 15% to 18%.
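The abstract names the workload being profiled (common CNN models trained on CIFAR-10) without specifying a framework. The sketch below shows what such a workload could look like in PyTorch; the framework choice, ResNet-18 variant, and all hyperparameters are illustrative assumptions, not the authors' setup.

```python
# A minimal sketch of the kind of training workload profiled in the paper:
# a ResNet trained on CIFAR-10. PyTorch, ResNet-18, and the hyperparameters
# are assumptions for illustration only; the paper does not specify them.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms


def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # CIFAR-10: 32x32 RGB images, 10 classes (the dataset used in the paper).
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465),
                             (0.2470, 0.2435, 0.2616)),
    ])
    train_set = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True, transform=transform)
    train_loader = torch.utils.data.DataLoader(
        train_set, batch_size=128, shuffle=True, num_workers=2)

    # ResNet-18 stands in for the "ResNet" family named in the abstract.
    model = torchvision.models.resnet18(num_classes=10).to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    model.train()
    for epoch in range(2):  # short run; the paper's epoch count is not given
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()  # the backward pass dominates memory traffic
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")


if __name__ == "__main__":
    main()
```

The reported 15% to 18% DDR improvement follows the usual average-memory-access-time reasoning for adding a cache level. As a back-of-the-envelope illustration only, with hypothetical latencies and hit rate that are not taken from the paper:

```python
# Effective access time with an L4 cache in front of DDR:
#   t_eff = h * t_l4 + (1 - h) * t_ddr
# All values below are hypothetical placeholders chosen only to show the
# shape of the calculation, not measurements from the paper.
t_ddr = 100.0    # ns, assumed raw DDR access latency
t_l4 = 40.0      # ns, assumed L4 cache access latency
hit_rate = 0.28  # assumed fraction of accesses served by the L4 cache

t_eff = hit_rate * t_l4 + (1 - hit_rate) * t_ddr
improvement = (t_ddr - t_eff) / t_ddr
print(f"effective latency: {t_eff:.1f} ns, improvement: {improvement:.0%}")
```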

© 2021 by The Institute of Electronics, Information and Communication Engineers