Wooyoung Jo
2024
- [j6] Sangjin Kim, Zhiyong Li, Soyeon Um, Wooyoung Jo, Sangwoo Ha, Juhyoung Lee, Sangyeob Kim, Donghyeon Han, Hoi-Jun Yoo: DynaPlasia: An eDRAM In-Memory Computing-Based Reconfigurable Spatial Accelerator With Triple-Mode Cell. IEEE J. Solid State Circuits 59(1): 102-115 (2024)
- [j5] Sangjin Kim, Soyeon Um, Wooyoung Jo, Jingu Lee, Sangwoo Ha, Zhiyong Li, Hoi-Jun Yoo: Scaling-CIM: eDRAM In-Memory-Computing Accelerator With Dynamic-Scaling ADC and Adaptive Analog Operation. IEEE J. Solid State Circuits 59(8): 2694-2705 (2024)
- [j4] Beomseok Kwon, Zhiyong Li, Sangjin Kim, Wooyoung Jo, Hoi-Jun Yoo: A 92 fps and 2.56 mJ/Frame Computing-In-Memory-Based Human Pose Estimation Accelerator With Resource-Efficient Macro for Mobile Devices. IEEE Trans. Circuits Syst. II Express Briefs 71(6): 2921-2925 (2024)
- [c21] Sangjin Kim, Zhiyong Li, Soyeon Um, Wooyoung Jo, Sangwoo Ha, Sangyeob Kim, Hoi-Jun Yoo: NoPIM: Functional Network-on-Chip Architecture for Scalable High-Density Processing-in-Memory-based Accelerator. COOL CHIPS 2024: 1-3
- [c20] Sangyeob Kim, Sangjin Kim, Wooyoung Jo, Soyeon Kim, Seongyon Hong, Nayeong Lee, Hoi-Jun Yoo: A Low-Power Large-Language-Model Processor with Big-Little Network and Implicit-Weight-Generation for On-Device AI. HCS 2024: 1
- [c19] Jiwon Choi, Wooyoung Jo, Seongyon Hong, Beomseok Kwon, Wonhoon Park, Hoi-Jun Yoo: A 28.6 mJ/iter Stable Diffusion Processor for Text-to-Image Generation with Patch Similarity-based Sparsity Augmentation and Text-based Mixed-Precision. ISCAS 2024: 1-5
- [c18] Sangyeob Kim, Sangjin Kim, Wooyoung Jo, Soyeon Kim, Seongyon Hong, Hoi-Jun Yoo: 20.5 C-Transformer: A 2.6-18.1μJ/Token Homogeneous DNN-Transformer/Spiking-Transformer Processor with Big-Little Network and Implicit Weight Generation for Large Language Models. ISSCC 2024: 368-370
- [c17] Seongyon Hong, Wooyoung Jo, Sangjin Kim, Sangyeob Kim, Kyomin Sohn, Hoi-Jun Yoo: Dyamond: A 1T1C DRAM In-memory Computing Accelerator with Compact MAC-SIMD and Adaptive Column Addition Dataflow. VLSI Technology and Circuits 2024: 1-2
- [i2] Jiwon Choi, Wooyoung Jo, Seongyon Hong, Beomseok Kwon, Wonhoon Park, Hoi-Jun Yoo: A 28.6 mJ/iter Stable Diffusion Processor for Text-to-Image Generation with Patch Similarity-based Sparsity Augmentation and Text-based Mixed-Precision. CoRR abs/2403.04982 (2024)

2023
- [c16] Jiwon Choi, Sangyeob Kim, Wonhoon Park, Wooyoung Jo, Hoi-Jun Yoo: A Resource-Efficient Super-Resolution FPGA Processor with Heterogeneous CNN and SNN Core Architecture. A-SSCC 2023: 1-3
- [c15] Jingu Lee, Sangjin Kim, Wooyoung Jo, Hoi-Jun Yoo: An Energy-Efficient Heterogeneous Fourier Transform-Based Transformer Accelerator with Frequency-Wise Dynamic Bit-Precision. A-SSCC 2023: 1-3
- [c14] Seongyon Hong, Soyeon Um, Sangjin Kim, Sangyeob Kim, Wooyoung Jo, Hoi-Jun Yoo: A 332 TOPS/W Input/Weight-Parallel Computing-in-Memory Processor with Voltage-Capacitance-Ratio Cell and Time-Based ADC. ISCAS 2023: 1-5
- [c13] Seryeong Kim, Soyeon Kim, Soyeon Um, Sangjin Kim, Zhiyong Li, Sangyeob Kim, Wooyoung Jo, Hoi-Jun Yoo: A Reconfigurable 1T1C eDRAM-based Spiking Neural Network Computing-In-Memory Processor for High System-Level Efficiency. ISCAS 2023: 1-5
- [c12] Hankyul Kwon, Gwangtae Park, Junha Ryu, Wooyoung Jo, Hoi-Jun Yoo: A 15.9 mW 96.5 fps Memory-Efficient 3D Reconstruction Processor with Dilation-based TSDF Fusion and Block-Projection Cache System. ISCAS 2023: 1-5
- [c11] Wonhoon Park, Junha Ryu, Sangjin Kim, Soyeon Um, Wooyoung Jo, Sangyoeb Kim, Hoi-Jun Yoo: A 5.99 TFLOPS/W Heterogeneous CIM-NPU Architecture for an Energy Efficient Floating-Point DNN Acceleration. ISCAS 2023: 1-4
- [c10] Sangjin Kim, Zhiyong Li, Soyeon Um, Wooyoung Jo, Sangwoo Ha, Juhyoung Lee, Sangyeob Kim, Donghyeon Han, Hoi-Jun Yoo: DynaPlasia: An eDRAM In-Memory-Computing-Based Reconfigurable Spatial Accelerator with Triple-Mode Cell for Dynamic Resource Switching. ISSCC 2023: 256-257
- [c9] Wooyoung Jo, Sangjin Kim, Juhyoung Lee, Donghyeon Han, Sangyeob Kim, Seungyoon Choi, Hoi-Jun Yoo: NeRPIM: A 4.2 mJ/frame Neural Rendering Processing-in-memory Processor with Space Encoding Block-wise Mapping for Mobile Devices. VLSI Technology and Circuits 2023: 1-2
- [c8] Sangjin Kim, Soyeon Um, Wooyoung Jo, Jingu Lee, Sangwoo Ha, Zhiyong Li, Hoi-Jun Yoo: Scaling-CIM: An eDRAM-based In-Memory-Computing Accelerator with Dynamic-Scaling ADC for SQNR-Boosting and Layer-wise Adaptive Bit-Truncation. VLSI Technology and Circuits 2023: 1-2

2022
- [j3] Juhyoung Lee, Sangyeob Kim, Sangjin Kim, Wooyoung Jo, Ji-Hoon Kim, Donghyeon Han, Hoi-Jun Yoo: OmniDRL: An Energy-Efficient Deep Reinforcement Learning Processor With Dual-Mode Weight Compression and Sparse Weight Transposer. IEEE J. Solid State Circuits 57(4): 999-1012 (2022)
- [j2] Juhyoung Lee, Jihoon Kim, Wooyoung Jo, Sangyeob Kim, Sangjin Kim, Hoi-Jun Yoo: ECIM: Exponent Computing in Memory for an Energy-Efficient Heterogeneous Floating-Point DNN Training Processor. IEEE Micro 42(1): 99-107 (2022)
- [j1] Sangyeob Kim, Juhyoung Lee, Sanghoon Kang, Donghyeon Han, Wooyoung Jo, Hoi-Jun Yoo: TSUNAMI: Triple Sparsity-Aware Ultra Energy-Efficient Neural Network Training Accelerator With Multi-Modal Iterative Pruning. IEEE Trans. Circuits Syst. I Regul. Pap. 69(4): 1494-1506 (2022)
- [c7] Juhyoung Lee, Wooyoung Jo, Seong-Wook Park, Hoi-Jun Yoo: Low-power Autonomous Adaptation System with Deep Reinforcement Learning. AICAS 2022: 300-303
- [c6] Wooyoung Jo, Sangjin Kim, Juhyeong Lee, Soyeon Um, Zhiyong Li, Hoi-Jun Yoo: A 161.6 TOPS/W Mixed-mode Computing-in-Memory Processor for Energy-Efficient Mixed-Precision Deep Neural Networks. ISCAS 2022: 365-369

2021
- [c5] Wooyoung Jo, Juhyoung Lee, Seunghyun Park, Hoi-Jun Yoo: An Energy-Efficient Deep Reinforcement Learning FPGA Accelerator for Online Fast Adaptation with Selective Mixed-precision Re-training. A-SSCC 2021: 1-3
- [c4] Juhyoung Lee, Jihoon Kim, Wooyoung Jo, Sangyeob Kim, Sangjin Kim, Donghyeon Han, Jinsu Lee, Hoi-Jun Yoo: An Energy-efficient Floating-Point DNN Processor using Heterogeneous Computing Architecture with Exponent-Computing-in-Memory. HCS 2021: 1-20
- [c3] Juhyoung Lee, Sangyeob Kim, Ji-Hoon Kim, Sangjin Kim, Wooyoung Jo, Donghyeon Han, Hoi-Jun Yoo: OmniDRL: An Energy-Efficient Mobile Deep Reinforcement Learning Accelerators with Dual-mode Weight Compression and Direct Processing of Compressed Data. HCS 2021: 1-21
- [c2] Juhyoung Lee, Jihoon Kim, Wooyoung Jo, Sangyeob Kim, Sangjin Kim, Jinsu Lee, Hoi-Jun Yoo: A 13.7 TFLOPS/W Floating-point DNN Processor using Heterogeneous Computing Architecture with Exponent-Computing-in-Memory. VLSI Circuits 2021: 1-2
- [c1] Juhyoung Lee, Sangyeob Kim, Sangjin Kim, Wooyoung Jo, Donghyeon Han, Jinsu Lee, Hoi-Jun Yoo: OmniDRL: A 29.3 TFLOPS/W Deep Reinforcement Learning Processor with Dualmode Weight Compression and On-chip Sparse Weight Transposer. VLSI Circuits 2021: 1-2
- [i1] Juhyoung Lee, Sangyeob Kim, Sangjin Kim, Wooyoung Jo, Hoi-Jun Yoo: GST: Group-Sparse Training for Accelerating Deep Reinforcement Learning. CoRR abs/2101.09650 (2021)