41st ICCD 2023: Washington, DC, USA
- 41st IEEE International Conference on Computer Design, ICCD 2023, Washington, DC, USA, November 6-8, 2023. IEEE 2023, ISBN 979-8-3503-4291-8
- Sutej Kulkarni, Ryan Tsang, Asmita, Houman Homayoun, Soheil Salehi: Leveraging Firmware Reverse Engineering for Stealthy Sensor Attacks via Binary Modification. 1-8
- Styliani Tompazi, Georgios Karakonstantis: A Compressed and Accurate Sparse Deep Learning-based Workload-Aware Timing Error Model. 9-12
- Wei Kong: Transcend Adversarial Examples: Diversified Adversarial Attacks to Test Deep Learning Model. 13-20
- Yuxiao Chen, Yisong Chang, Ke Zhang, Mingyu Chen, Yungang Bao: REMU: Enabling Cost-Effective Checkpointing and Deterministic Replay in FPGA-based Emulation. 21-29
- Zimin Li, Yongjian Li, Kaifan Wang, Kun Ma, Shizhen Yu: Model Checking TileLink Cache Coherence Protocols By Murphi. 30-37
- Gogireddy Ravi Kiran Reddy, Sanampudi Gopala Krishna Reddy, D. R. Vasanthi, Madhav Rao: MNHOKA - PPA Efficient M-Term Non-Homogeneous Hybrid Overlap-free Karatsuba Multiplier for GF(2^n) Polynomial Multiplier. 38-45
- Bindu G. Gowda, S. N. Raghava, Prashanth H. C., Pratyush Nandi, Madhav Rao: ApproxCNN: Evaluation Of CNN With Approximated Layers Using In-Exact Multipliers. 46-53
- Shuya Ji, Weidong Yang, Jianfei Jiang, Naifeng Jing, Weiguang Sheng, Ang Li, Qin Wang: ACET: An Adaptive Clock Scheme Exploiting Comprehensive Timing Slack for Reconfigurable Processors. 54-61
- Jing Zhang, Hongbing Tan, Libo Huang: SFDoP: A Scalable Fused BFloat16 Dot-Product Architecture for DNN. 62-65
- Shangshang Yao, Li Shen: ImprLM: An Improved Logarithmic Multiplier Design Approach via Iterative Linear-Compensation and Modified Dynamic Segment. 66-69
- Guy Eichler, Biruk B. Seyoum, Kuan-Lin Chiu, Luca P. Carloni: MindCrypt: The Brain as a Random Number Generator for SoC-Based Brain-Computer Interfaces. 70-77
- Maarten J. Molendijk, Floran A. M. de Putter, Manil Dev Gomony, Pekka Jääskeläinen, Henk Corporaal: BrainTTA: A 28.6 TOPS/W Compiler Programmable Transport-Triggered NN SoC. 78-85
- Xiaorang Guo, Kun Qin, Martin Schulz: HiSEP-Q: A Highly Scalable and Efficient Quantum Control Processor for Superconducting Qubits. 86-93
- Peiyi Li, Ji Liu, Hrushikesh Pramod Patil, Paul D. Hovland, Huiyang Zhou: Enhancing Virtual Distillation with Circuit Cutting for Quantum Error Mitigation. 94-101
- Jinpeng Liu, Wei Tong, Bing Wu, Huan Cheng, Chengning Wang: ICON: An IR Drop Compensation Method at OU Granularity with Low Overhead for eNVM-based Accelerators. 102-109
- Dhandeep Challagundla, Ignatius Bezzam, Biprangshu Saha, Riadul Islam: Resonant Compute-In-Memory (rCIM) 10T SRAM Macro for Boolean Logic. 110-117
- Madhava Sarma Vemuri, Umamaheswara Rao Tida: Small Footprint 6T-SRAM Design with MIV-Transistor Utilization in M3D-IC Technology. 118-125
- Yibin Gu, Hua Wang, Man Luo, Jingyu Tang, Ke Zhou: Offline and Online Algorithms for Cache Allocation with Monte Carlo Tree Search and a Learned Model. 126-133
- Xu Zhang, Tianyue Lu, Yisong Chang, Ke Zhang, Mingyu Chen: Morpheus: An Adaptive DRAM Cache with Online Granularity Adjustment for Disaggregated Memory. 134-141
- Hai Zhou, Yuchong Hu, Dan Feng, Wei Wang, Huadong Huang: Locality-aware Speculative Cache for Fast Partial Updates in Erasure-Coded Cloud Clusters. 142-149
- Menglei Chen, Yu Hua, Rong Bai, Jianming Huang: A Cost-Efficient Failure-Tolerant Scheme for Distributed DNN Training. 150-157
- Xuhang Wang, Zhuoran Song, Xiaoyao Liang: RealArch: A Real-Time Scheduler for Mapping Multi-Tenant DNNs on Multi-Core Accelerators. 158-165
- Lingxiang Yin, Amir Ghazizadeh, Shilin Tian, Ahmed Louri, Hao Zheng: Polyform: A Versatile Architecture for Multi-DNN Execution via Spatial and Temporal Acceleration. 166-169
- Jiawen Wang, Quan Chen, Deze Zeng, Zhuo Song, Chen Chen, Minyi Guo: STAG: Enabling Low Latency and Low Staleness of GNN-based Services with Dynamic Graphs. 170-173
- Renzhi Xiao, Hong Jiang, Dan Feng, Yuchong Hu, Wei Tong, Kang Liu, Yucheng Zhang, Xueliang Wei, Zhengtao Li: Accelerating Persistent Hash Indexes via Reducing Negative Searches. 174-181
- Xiangyu Xiang, Yu Hua, Hao Xu: PMA: A Persistent Memory Allocator with High Efficiency and Crash Consistency Guarantee. 182-189
- Taejoon Song, JuneHyung Kim, Myeongseon Kim, Youngjin Kim: Prediction-Guided Metadata Backup for Improving Lifetime on Flash-based Swap. 190-193
- Jinlei Hu, Zijie Wei, Jianxi Chen, Dan Feng: RWORT: A Read and Write Optimized Radix Tree for Persistent Memory. 194-197
- Tingyu Fan, Xiulong Liu, Baochao Chen, Wenyu Qu: An Effective and Balanced Storage Extension Approach for Sharding Blockchain Systems. 198-205
- Wenjie Qi, Zhipeng Tan, Ziyue Zhang, Jing Zhang, Chao Yu, Ying Yuan, Shikai Tan: BlzFS: Crash Consistent Log-structured File System Based on Byte-loggable Zone for ZNS SSD. 206-213
- Hao Wen, Zhichao Cao, Bingzhe Li, David H. C. Du, Ayman Abouelwafa, Doug Voigt, Shiyong Liu, Jim Diehl, Fenggang Wu: K8sES: Optimizing Kubernetes with Enhanced Storage Service-Level Objectives. 214-222
- Hao Liu, Mengting Lu, Fang Wang, Wenpeng He: CostFM: A High Cost-Performance Fingerprint Management Mechanism for Shared SSDs. 223-230
- Chuang Gan, Yuchong Hu, Leyan Zhao, Xin Zhao, Pengyu Gong, Wenhao Zhang, Lin Wang, Dan Feng: Enabling Encrypted Delta Compression for Outsourced Storage Systems via Preserving Similarity. 231-238
- Changlong Li, Chao Wang, Xuehai Zhou, Edwin H.-M. Sha: FlashDAM: Flexible I/O Throttling for the User Experience of Mobile Systems. 239-242
- Weihong Xu, Viji Swaminathan, Sumukh Pinge, Sean Fuhrman, Tajana Rosing: HyperMetric: Robust Hyperdimensional Computing on Error-prone Memories using Metric Learning. 243-246
- Zejia Lin, Zewei Mo, Xuanteng Huang, Xianwei Zhang, Yutong Lu: KeSCo: Compiler-based Kernel Scheduling for Multi-task GPU Applications. 247-254
- Manuel Renz, Sohan Lal: Beyond Compression Ratio: A Throughput Analysis of Memory Compression Techniques for GPUs. 255-262
- Yibo Du, Ying Wang, Shengwen Liang, Huawei Li, Xiaowei Li, Yinhe Han: PANG: A Pattern-Aware GCN Accelerator for Universal Graphs. 263-266
- Jintong Zhang, Jianxi Chen, Kezheng Liu, Yongkang Zhuo, Panfei Yuan: HyF2FS: A Filesystem to Fully Exploit the Parallelism of Hybrid Storage. 267-274
- Zhichao Cao, Hao Wen, Fenggang Wu, David H. C. Du: SMRTS: A Performance and Cost-Effectiveness Optimized SSD-SMR Tiered File System with Data Deduplication. 275-282
- Chao Dong, Fang Wang, Yuxin Yang, Mengya Lei, Jianshun Zhang, Dan Feng: Low-Latency and Scalable Full-path Indexing Metadata Service for Distributed File Systems. 283-290
- Yu Wang, You Zhou, Zhonghai Lu, Xiaoyi Zhang, Kun Wang, Feng Zhu, Shu Li, Changsheng Xie, Fei Wu: FlexZNS: Building High-Performance ZNS SSDs with Size-Flexible and Parity-Protected Zones. 291-299
- Biyong Liu, Yuan Xia, Xueliang Wei, Wei Tong: LifetimeKV: Narrowing the Lifetime Gap of SSTs in LSMT-based KV Stores for ZNS SSDs. 300-307
- Devashish R. Purandare, Sam Schmidt, Ethan L. Miller: Persimmon: an append-only ZNS-first filesystem. 308-315
- Weilin Zhu, Wei Tong: Turn Waste Into Wealth: Alleviating Read/Write Interference in ZNS SSDs. 316-319
- Chenyang Lv, Ziling Wei, Weikang Qian, Junjie Ye, Chang Feng, Zhezhi He: GPT-LS: Generative Pre-Trained Transformer with Offline Reinforcement Learning for Logic Synthesis. 320-326
- Linyu Zhu, Xinfei Guo: Delay-Driven Physically-Aware Logic Synthesis with Informed Search. 327-335
- Liwei Ni, Zonglin Yang, Jiaxi Zhang, Junfeng Liu, Huawei Li, Biwei Xie, Xinquan Li: Adaptive Reconvergence-driven AIG Rewriting via Strategy Learning. 336-343
- Junfeng Liu, Liwei Ni, Xingquan Li, Min Zhou, Lei Chen, Xing Li, Qinghua Zhao, Shuai Ma: AiMap: Learning to Improve Technology Mapping for ASICs via Delay Prediction. 344-347
- Yue Dai, Xulong Tang, Youtao Zhang: FlexGM: An Adaptive Runtime System to Accelerate Graph Matching Networks on GPUs. 348-356
- Zhiwei Wang, Peinan Li, Rui Hou, Dan Meng: NTTFusion: Efficient Number Theoretic Transform Acceleration on GPUs. 357-365
- Jiazhi Jiang, Rui Tian, Jiangsu Du, Dan Huang, Yutong Lu: MixRec: Orchestrating Concurrent Recommendation Model Training on CPU-GPU platform. 366-374
- Xuan Zhang, Zhuoran Song, Xing Li, Zhezhi He, Li Jiang, Naifeng Jing, Xiaoyao Liang: HyAcc: A Hybrid CAM-MAC RRAM-based Accelerator for Recommendation Model. 375-382
- Chunyu Qi, Zilong Li, Zhuoran Song, Xiaoyao Liang: ViTframe: Vision Transformer Acceleration via Informative Frame Selection for Video Recognition. 383-390
- Jun Yin, Linyan Mei, Andre Guntoro, Marian Verhelst: ACCO: Automated Causal CNN Scheduling Optimizer for Real-Time Edge Accelerators. 391-398
- Yuling Zhang, Ao Ren, Xianzhang Chen, Qiu Lin, Yujuan Tan, Duo Liu: Re-compact: Structured Pruning and SpMM Kernel Co-design for Accelerating DNNs on GPUs. 399-406
- Jun-Shen Wu, Ren-Shuo Liu: FM-P2L: An Algorithm Hardware Co-design of Fixed-Point MSBs with Power-of-2 LSBs in CNN Accelerators. 407-414
- Rui Tang, Xiaoyu Zhang, Rui Liu, Zhejian Luo, Xiaoming Chen, Yinhe Han: Hardware-Software Co-Design for Content-Based Sparse Attention. 415-418
- Donghui Lee, Yongtae Kim: Towards Quantized Stochastic Computing by Leveraging Reduced Precision Binary Numbers through Bit Truncation. 419-422
- Seongwook Kim, Gwangeun Byeon, Sihyung Kim, Hyungjin Kim, Seokin Hong: Conveyor: Towards Asynchronous Dataflow in Systolic Array to Exploit Unstructured Sparsity. 423-431
- Tianyu Liu, Wenming Li, Zhihua Fan: DFGC: DFG-aware NoC Control based on Time Stamp Prediction for Dataflow Architecture. 432-439
- Haibin Wu, Wenming Li, Zhihua Fan, Zhen Wang, Tianyu Liu, Junying Huang, Shengzhong Tang, Yanhuan Liu, Kunming Zhang, Xiaochun Ye, Dongrui Fan: Alleviating Transfer Latency in DataFlow Accelerator for DSP Applications. 440-443
- Sofiane Bouaziz, Hadjer Benmeziane, Youcef Imine, Leila Hamdad, Smaïl Niar, Hamza Ouarnoughi: FLASH-RL: Federated Learning Addressing System and Static Heterogeneity using Reinforcement Learning. 444-447
- Gelin Fu, Tian Xia, Shaoru Qu, Zhongpei Luo, Shuyu Li, Pengyu Cheng, Runfan Guo, Yitong Ding, Pengju Ren: PrSpMV: An Efficient Predictable Kernel for SpMV. 448-456
- Zeyu Xue, Mei Wen, Zhaoyun Chen, Yang Shi, Minjin Tang, Jianchao Yang, Zhongdi Luo: Releasing the Potential of Tensor Core for Unstructured SpMM using Tiled-CSR Format. 457-464
- Yongseung Yu, Donghyun Son, Younghyun Lee, Sunghyun Park, Giha Ryu, Myeongjin Cho, Jiwon Seo, Yongjun Park: Tailoring CUTLASS GEMM using Supervised Learning. 465-474
- Jongseok Kim, Chanu Yu, Euiseong Seo: Revitalizing Buffered I/O: Optimizing Page Reclaim and I/O Throttling. 475-482
- Keni Qiu, Chuting Xu, Kunyu Zhou, Dehui Qiu: ResCheck: Resilient Checkpointing for Energy Harvesting Systems. 483-486
- Liangxu Nie, Shengan Zheng, Bowen Zhang, Jinyan Xu, Linpeng Huang: Heart: a Scalable, High-performance ART for Persistent Memory. 487-490
- Hyeonsu Bang, Kang Eun Jeon, Johnny Rhe, Jong Hwan Ko: DCR: Decomposition-Aware Column Re-Mapping for Stuck-At-Fault Tolerance in ReRAM Arrays. 491-494
- Suyash Mahar, Mingyao Shen, Terence Kelly, Steven Swanson: Snapshot: Fast, Userspace Crash Consistency for CXL and PM Using msync. 495-498
- Chen Nie, Guoyang Chen, Weifeng Zhang, Zhezhi He: GIM: Versatile GNN Acceleration with Reconfigurable Processing-in-Memory. 499-506
- Fangxin Liu, Ning Yang, Li Jiang: PSQ: An Automatic Search Framework for Data-Free Quantization on PIM-based Architecture. 507-514
- Chia-Chun Wang, Yun-Chen Lo, Jun-Shen Wu, Yu-Chih Tsai, Chia-Cheng Chang, Tsen-Wei Hsu, Min-Wei Chu, Chuan-Yao Lai, Ren-Shuo Liu: Exploiting and Enhancing Computation Latency Variability for High-Performance Time-Domain Computing-in-Memory Neural Network Accelerators. 515-522
- Suraj Singireddy, Muhammad Rashedul Haq Rashed, Sven Thijssen, Rickard Ewetz, Sumit Kumar Jha: Input-Aware Flow-Based In-Memory Computing. 523-530
- Yun-Chen Lo, Chia-Chun Wang, Ren-Shuo Liu: BICEP: Exploiting Bitline Inversion for Efficient Operation-Unit-Based Compute-in-Memory Architecture: No Retraining Needed! 531-534
- Tianyang Niu, Min Lyu, Wei Wang, Qiliang Li, Yinlong Xu: Cerasure: Fast Acceleration Strategies For XOR-Based Erasure Codes. 535-542
- Giovanni Brignone, Mihai T. Lazarescu, Luciano Lavagno: A DSP shared is a DSP earned: HLS Task-Level Multi-Pumping for High-Performance Low-Resource Designs. 551-557
- Niko Zurstraßen, Nils Bosbach, Jan Moritz Joseph, Lukas Jünger, Jan Henrik Weinstock, Rainer Leupers: Efficient RISC-V-on-x64 Floating Point Simulation. 558-565
- Yifan Zhang, Qiang Cao, Shaohua Wang, Jie Yao, Hong Jiang: HF-LDPC: HLS-friendly QC-LDPC FPGA Decoder with High Throughput and Flexibility. 566-573
- Chenfeng Zhao, Zehao Dong, Yixin Chen, Xuan Zhang, Roger D. Chamberlain: GNNHLS: Evaluating Graph Neural Network Inference via High-Level Synthesis. 574-577
- Franz A. Fuchs, Jonathan Woodruff, Peter Rugg, Marno van der Maas, Alexandre Joannou, Alexander Richardson, Jessica Clarke, Nathaniel Wesley Filardo, Brooks Davis, John Baldwin, Peter G. Neumann, Simon W. Moore, Robert N. M. Watson: Architectural Contracts for Safe Speculation. 578-586
- Xiaoni Meng, Qiusong Yang, Yiwei Ci, Pei Zhao, Shan Zhao, Mingshu Li: Execute on Clear (EoC): Enhancing Security for Unsafe Speculative Instructions by Precise Identification and Safe Execution. 587-595
- Eleonora Vacca, Giorgio Ajmone, Luca Sterpone: RunSAFER: A Novel Runtime Fault Detection Approach for Systolic Array Accelerators. 596-604
- Donglei Wu, Weihao Yang, Cai Deng, Xiangyu Zou, Shiyi Li, Wen Xia: BIRD: A Lightweight and Adaptive Compressor for Communication-Efficient Distributed Learning Using Tensor-wise Bi-Random Sampling. 605-613
- Chia-Wei Chang, Jing-Jia Liou, Chih-Tsun Huang, Wei-Chung Hsu, Juin-Ming Lu: MultiFuse: Efficient Cross Layer Fusion for DNN Accelerators with Multi-level Memory Hierarchy. 614-622
- Xuhang Wang, Zhuoran Song, Qiyue Huang, Xiaoyao Liang: DEQ: Dynamic Element-wise Quantization for Efficient Attention Architecture. 623-630
- Yu-Chih Tsai, Chung-Yueh Liu, Chia-Chun Wang, Tsen-Wei Hsu, Ren-Shuo Liu: CNN Inference Accelerators with Adjustable Feature Map Compression Ratios. 631-634