Minjia Zhang
2020 – today
2025
- [j5] Syed Zawad, Xiaolong Ma, Jun Yi, Cheng Li, Minjia Zhang, Lei Yang, Feng Yan, Yuxiong He: FedCust: Offloading hyperparameter customization for federated learning. Perform. Evaluation 167: 102450 (2025)

2024
- [j4] Yongye Su, Yinqi Sun, Minjia Zhang, Jianguo Wang: Vexless: A Serverless Vector Data Management System Using Cloud Functions. Proc. ACM Manag. Data 2(3): 187 (2024)
- [c50] Conglong Li, Zhewei Yao, Xiaoxia Wu, Minjia Zhang, Connor Holmes, Cheng Li, Yuxiong He: DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing. AAAI 2024: 18490-18498
- [c49] Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. ICLR 2024
- [c48] Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Reza Yazdani Aminabadi, Shuaiwen Leon Song, Samyam Rajbhandari, Yuxiong He: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models. IPDPS (Workshops) 2024: 1206-1208
- [c47] Jiangfei Duan, Ziang Song, Xupeng Miao, Xiaoli Xi, Dahua Lin, Harry Xu, Minjia Zhang, Zhihao Jia: Parcae: Proactive, Liveput-Optimized DNN Training on Preemptible Instances. NSDI 2024
- [c46] Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Reza Yazdani Aminabadi, Shuaiwen Leon Song, Samyam Rajbhandari, Yuxiong He: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models. PODC 2024: 121-130
- [i37] Yao Lu, Song Bian, Lequn Chen, Yongjun He, Yulong Hui, Matthew Lentz, Beibin Li, Fei Liu, Jialin Li, Qi Liu, Rui Liu, Xiaoxuan Liu, Lin Ma, Kexin Rong, Jianguo Wang, Yingjun Wu, Yongji Wu, Huanchen Zhang, Minjia Zhang, Qizhen Zhang, Tianyi Zhou, Danyang Zhuo: Computing in the Era of Large Generative Models: From Cloud-Native to AI-Native. CoRR abs/2401.12230 (2024)
- [i36] Jiangfei Duan, Ziang Song, Xupeng Miao, Xiaoli Xi, Dahua Lin, Harry Xu, Minjia Zhang, Zhihao Jia: Parcae: Proactive, Liveput-Optimized DNN Training on Preemptible Instances. CoRR abs/2403.14097 (2024)
- [i35] Xinyu Lian, Sam Ade Jacobs, Lev Kurilenko, Masahiro Tanaka, Stas Bekman, Olatunji Ruwase, Minjia Zhang: Universal Checkpointing: Efficient and Flexible Checkpointing for Large Scale Distributed Training. CoRR abs/2406.18820 (2024)
- [i34] Haozhe Zhao, Xiaojian Ma, Liang Chen, Shuzheng Si, Rujie Wu, Kaikai An, Peiyu Yu, Minjia Zhang, Qing Li, Baobao Chang: UltraEdit: Instruction-based Fine-Grained Image Editing at Scale. CoRR abs/2407.05282 (2024)
- [i33] Zheng Wang, Boxiao Jin, Zhongzhi Yu, Minjia Zhang: Model Tells You Where to Merge: Adaptive KV Cache Merging for LLMs on Long-Context Tasks. CoRR abs/2407.08454 (2024)
- [i32] Guangzhi Xiong, Qiao Jin, Xiao Wang, Minjia Zhang, Zhiyong Lu, Aidong Zhang: Improving Retrieval-Augmented Generation in Medicine with Iterative Follow-up Questions. CoRR abs/2408.00727 (2024)

2023
- [j3] Minjia Zhang, Jie Ren, Zhen Peng, Ruoming Jin, Dong Li, Bin Ren: iQAN: Fast and Accurate Vector Search with Efficient Intra-Query Parallelism on Multi-Core Architectures. IEEE Data Eng. Bull. 46(3): 22-38 (2023)
- [j2] Reza Yazdani Aminabadi, Olatunji Ruwase, Minjia Zhang, Yuxiong He, José-María Arnau, Antonio González: SHARP: An Adaptable, Energy-Efficient Accelerator for Recurrent Neural Networks. ACM Trans. Embed. Comput. Syst. 22(2): 30:1-30:23 (2023)
- [c45] Shuangyan Yang, Minjia Zhang, Wenqian Dong, Dong Li: Betty: Enabling Large-Scale GNN Training with Batch-Level Graph Partitioning. ASPLOS (2) 2023: 103-117
- [c44] Minjia Zhang, Uma-Naresh Niranjan, Yuxiong He: Revisiting the Efficiency-Accuracy Tradeoff in Adapting Transformer Models via Adversarial Fine-Tuning. ECAI 2023: 3026-3033
- [c43] Yucheng Lu, Conglong Li, Minjia Zhang, Christopher De Sa, Yuxiong He: Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam. ICLR 2023
- [c42] Xinyue Ma, Suyeon Jeong, Minjia Zhang, Di Wang, Jonghyun Choi, Myeongjae Jeon: Cost-effective On-device Continual Learning over Memory Hierarchy with Miro. MobiCom 2023: 83:1-83:15
- [c41] John Thorpe, Pengzhan Zhao, Jonathan Eyolfson, Yifan Qiao, Zhihao Jia, Minjia Zhang, Ravi Netravali, Guoqing Harry Xu: Bamboo: Making Preemptible Instances Resilient for Affordable Training of Large DNNs. NSDI 2023: 497-513
- [c40] Zhen Peng, Minjia Zhang, Kai Li, Ruoming Jin, Bin Ren: iQAN: Fast and Accurate Vector Search with Efficient Intra-Query Parallelism on Multi-Core Architectures. PPoPP 2023: 313-328
- [i31] Min Zhang, Fuxun Yu, Yongbo Yu, Minjia Zhang, Ang Li, Xiang Chen: FedHC: A Scalable Federated Learning Framework for Heterogeneous and Resource-Constrained Clients. CoRR abs/2305.15668 (2023)
- [i30] Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He: DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales. CoRR abs/2308.01320 (2023)
- [i29] Xinyue Ma, Suyeon Jeong, Minjia Zhang, Di Wang, Jonghyun Choi, Myeongjae Jeon: Cost-effective On-device Continual Learning over Memory Hierarchy with Miro. CoRR abs/2308.06053 (2023)
- [i28] Fengxiang Bie, Yibo Yang, Zhongzhu Zhou, Adam Ghanem, Minjia Zhang, Zhewei Yao, Xiaoxia Wu, Connor Holmes, Pareesa Ameneh Golnari, David A. Clifton, Yuxiong He, Dacheng Tao, Shuaiwen Leon Song: RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model. CoRR abs/2309.00810 (2023)
- [i27] Zhewei Yao, Xiaoxia Wu, Conglong Li, Minjia Zhang, Heyang Qin, Olatunji Ruwase, Ammar Ahmad Awan, Samyam Rajbhandari, Yuxiong He: DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention. CoRR abs/2309.14327 (2023)
- [i26] Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Shuaiwen Leon Song, Samyam Rajbhandari, Yuxiong He: DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models. CoRR abs/2309.14509 (2023)
- [i25] Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, Jianfeng Gao: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. CoRR abs/2310.01801 (2023)
- [i24] Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Jeff Rasley, Ammar Ahmad Awan, Connor Holmes, Martin Cai, Adam Ghanem, Zhongzhu Zhou, Yuxiong He, Pete Luferenko, Divya Kumar, Jonathan A. Weyn, Ruixiong Zhang, Sylwester Klocek, Volodymyr Vragov, Mohammed AlQuraishi, Gustaf Ahdritz, Christina Floristean, Cristina Negri, Rao Kotamarthi, Venkatram Vishwanath, Arvind Ramanathan, Sam Foreman, Kyle Hippe, Troy Arcomano, Romit Maulik, Maxim Zvyagin, Alexander Brace, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, Bharat Kale, Danilo Perez-Rivera, Heng Ma, Carla M. Mann, Michael W. Irvin, J. Gregory Pauloski, Logan T. Ward, Valérie Hayot-Sasson, Murali Emani, Zhen Xie, Diangen Lin, Maulik Shukla, Ian T. Foster, James J. Davis, Michael E. Papka, Thomas S. Brettin, Prasanna Balaprakash, Gina Tourassi, John Gounley, Heidi A. Hanson, Thomas E. Potok, Massimiliano Lupo Pasini, Kate Evans, Dan Lu, Dalton D. Lunga, Junqi Yin, Sajal Dash, Feiyi Wang, Mallikarjun Shankar, Isaac Lyngaas, Xiao Wang, Guojing Cong, Pei Zhang, Ming Fan, Siyan Liu, Adolfy Hoisie, Shinjae Yoo, Yihui Ren, William Tang, Kyle Felker, Alexey Svyatkovskiy, Hang Liu, Ashwin M. Aji, Angela Dalton, Michael J. Schulte, Karl Schulz, Yuntian Deng, Weili Nie, Josh Romero, Christian Dallago, Arash Vahdat, Chaowei Xiao, Thomas Gibbs, Anima Anandkumar, Rick Stevens: DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies. CoRR abs/2310.04610 (2023)

2022
- [c39] Minjia Zhang, Uma-Naresh Niranjan, Yuxiong He: Adversarial Data Augmentation for Task-Specific Knowledge Distillation of Pre-trained Transformers. AAAI 2022: 11685-11693
- [c38] Soobee Lee, Minindu Weerakoon, Jonghyun Choi, Minjia Zhang, Di Wang, Myeongjae Jeon: CarM: hierarchical episodic memory for continual learning. DAC 2022: 1147-1152
- [c37] Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, Yuxiong He: DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale. ICML 2022: 18332-18346
- [c36] Conglong Li, Minjia Zhang, Yuxiong He: The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models. NeurIPS 2022
- [c35] Xiaoxia Wu, Zhewei Yao, Minjia Zhang, Conglong Li, Yuxiong He: XTC: Extreme Compression for Pre-trained Transformers Made Simple and Efficient. NeurIPS 2022
- [c34] Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He: ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers. NeurIPS 2022
- [c33] Reza Yazdani Aminabadi, Samyam Rajbhandari, Ammar Ahmad Awan, Cheng Li, Du Li, Elton Zheng, Olatunji Ruwase, Shaden Smith, Minjia Zhang, Jeff Rasley, Yuxiong He: DeepSpeed-Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale. SC 2022: 46:1-46:15
- [c32] Minjia Zhang, Wenhan Wang, Yuxiong He: GraSP: Optimizing Graph-based Nearest Neighbor Search with Subgraph Sampling and Pruning. WSDM 2022: 1395-1405
- [c31] Yongbo Yu, Fuxun Yu, Zirui Xu, Di Wang, Minjia Zhang, Ang Li, Shawn Bray, Chenchen Liu, Xiang Chen: Powering Multi-Task Federated Learning with Competitive GPU Resource Sharing. WWW (Companion Volume) 2022: 567-571
- [i23] Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, Yuxiong He: DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale. CoRR abs/2201.05596 (2022)
- [i22] Minjia Zhang, Uma-Naresh Niranjan, Yuxiong He: ScaLA: Accelerating Adaptation of Pre-Trained Transformer-Based Language Models via Efficient Large-Batch Adversarial Noise. CoRR abs/2201.12469 (2022)
- [i21] Zhen Peng, Minjia Zhang, Kai Li, Ruoming Jin, Bin Ren: Speed-ANN: Low-Latency and High-Accuracy Nearest Neighbor Search via Intra-Query Parallelism. CoRR abs/2201.13007 (2022)
- [i20] Yucheng Lu, Conglong Li, Minjia Zhang, Christopher De Sa, Yuxiong He: Maximizing Communication Efficiency for Large-scale Training via 0/1 Adam. CoRR abs/2202.06009 (2022)
- [i19] Fuxun Yu, Di Wang, Longfei Shangguan, Minjia Zhang, Chenchen Liu, Xiang Chen: A Survey of Multi-Tenant Deep Learning Inference on GPU. CoRR abs/2203.09040 (2022)
- [i18] John Thorpe, Pengzhan Zhao, Jonathan Eyolfson, Yifan Qiao, Zhihao Jia, Minjia Zhang, Ravi Netravali, Guoqing Harry Xu: Bamboo: Making Preemptible Instances Resilient for Affordable Training of Large DNNs. CoRR abs/2204.12013 (2022)
- [i17] Xiaoxia Wu, Zhewei Yao, Minjia Zhang, Conglong Li, Yuxiong He: Extreme Compression for Pre-trained Transformers Made Simple and Efficient. CoRR abs/2206.01859 (2022)
- [i16] Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He: ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers. CoRR abs/2206.01861 (2022)
- [i15] Connor Holmes, Minjia Zhang, Yuxiong He, Bo Wu: Compressing Pre-trained Transformers via Low-Bit NxM Sparsity for Natural Language Understanding. CoRR abs/2206.15014 (2022)
- [i14] Reza Yazdani Aminabadi, Samyam Rajbhandari, Minjia Zhang, Ammar Ahmad Awan, Cheng Li, Du Li, Elton Zheng, Jeff Rasley, Shaden Smith, Olatunji Ruwase, Yuxiong He: DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale. CoRR abs/2207.00032 (2022)
- [i13] Zhewei Yao, Xiaoxia Wu, Conglong Li, Connor Holmes, Minjia Zhang, Cheng Li, Yuxiong He: Random-LTD: Random and Layerwise Token Dropping Brings Efficient Training for Large-scale Transformers. CoRR abs/2211.11586 (2022)
- [i12] Conglong Li, Zhewei Yao, Xiaoxia Wu, Minjia Zhang, Yuxiong He: DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing. CoRR abs/2212.03597 (2022)

2021
- [c30] Jie Ren, Jiaolin Luo, Kai Wu, Minjia Zhang, Hyeran Jeon, Dong Li: Sentinel: Efficient Tensor Migration and Allocation on Heterogeneous Memory Systems for Deep Learning. HPCA 2021: 598-611
- [c29] Minjia Zhang, Menghao Li, Chi Wang, Mingqin Li: DynaTune: Dynamic Tensor Program Optimization in Deep Neural Network Compilation. ICLR 2021
- [c28] Junfeng Zhao, Minjia Zhang, Hongji Yang: Vertical Scaling of Resource for OpenMP Application. ICSOC 2021: 839-849
- [c27] Minjia Zhang, Zehua Hu, Mingqin Li: DUET: A Compiler-Runtime Subgraph Scheduling Approach for Tensor Programs on a Coupled CPU-GPU Architecture. IPDPS 2021: 151-161
- [c26] Connor Holmes, Minjia Zhang, Yuxiong He, Bo Wu: NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM. NeurIPS 2021: 1818-1830
- [c25] Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, Yuxiong He: ZeRO-Offload: Democratizing Billion-Scale Model Training. USENIX ATC 2021: 551-564
- [c24] Minjia Zhang: DL Inference and Training Optimization Towards Speed and Scale. WWW (Companion Volume) 2021: 192
- [i11] Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, Yuxiong He: ZeRO-Offload: Democratizing Billion-Scale Model Training. CoRR abs/2101.06840 (2021)
- [i10] Dantong Zhu, Minjia Zhang: Understanding and Generalizing Monotonic Proximity Graphs for Approximate Nearest Neighbor Search. CoRR abs/2107.13052 (2021)
- [i9] Conglong Li, Minjia Zhang, Yuxiong He: Curriculum Learning: A Regularization Method for Efficient and Stable Billion-Scale GPT Model Pre-Training. CoRR abs/2108.06084 (2021)
- [i8] Soobee Lee, Minindu Weerakoon, Jonghyun Choi, Minjia Zhang, Di Wang, Myeongjae Jeon: Carousel Memory: Rethinking the Design of Episodic Memory for Continual Learning. CoRR abs/2110.07276 (2021)
- [i7] Connor Holmes, Minjia Zhang, Yuxiong He, Bo Wu: NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM. CoRR abs/2110.15766 (2021)
- [i6] Fuxun Yu, Di Wang, Longfei Shangguan, Minjia Zhang, Xulong Tang, Chenchen Liu, Xiang Chen: A Survey of Large-Scale Deep Learning Serving System Optimization: Challenges and Opportunities. CoRR abs/2111.14247 (2021)

2020
- [c23] Jie Ren, Minjia Zhang, Dong Li: HM-ANN: Efficient Billion-Point Nearest Neighbor Search on Heterogeneous Memory. NeurIPS 2020
- [c22] Menghao Li, Minjia Zhang, Chi Wang, Mingqin Li: AdaTune: Adaptive Tensor Program Compilation Made Efficient. NeurIPS 2020
- [c21] Minjia Zhang, Yuxiong He: Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping. NeurIPS 2020
- [c20] Conglong Li, Minjia Zhang, David G. Andersen, Yuxiong He: Improving Approximate Nearest Neighbor Search through Learned Adaptive Early Termination. SIGMOD Conference 2020: 2539-2554
- [i5] Minjia Zhang, Yuxiong He: Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping. CoRR abs/2010.13369 (2020)
2010 – 2019
2019
- [c19] Minjia Zhang, Yuxiong He: GRIP: Multi-Store Capacity-Optimized High-Performance Nearest Neighbor Search for Vector Search Engine. CIKM 2019: 1673-1682
- [c18] Minjia Zhang, Samyam Rajbhandari, Wenhan Wang, Elton Zheng, Olatunji Ruwase, Jeff Rasley, Jason Li, Junhua Wang, Yuxiong He: Accelerating Large Scale Deep Learning Inference through DeepCPU at Microsoft. OpML 2019: 5-7
- [c17] Junfeng Zhao, Minjia Zhang, Hongji Yang: Code Refactoring from OpenMP to MapReduce Model for Big Data Processing. SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI 2019: 930-935
- [i4] Jie Ren, Jiaolin Luo, Kai Wu, Minjia Zhang, Dong Li: Sentinel: Runtime Data Management on Heterogeneous Main Memory Systems for Deep Learning. CoRR abs/1909.05182 (2019)
- [i3] Reza Yazdani, Olatunji Ruwase, Minjia Zhang, Yuxiong He, José-María Arnau, Antonio González: LSTM-Sharp: An Adaptable, Energy-Efficient Hardware Accelerator for Long Short-Term Memory. CoRR abs/1911.01258 (2019)

2018
- [c16] Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, Hai Li: Learning Intrinsic Sparse Structures within Long Short-Term Memory. ICLR (Poster) 2018
- [c15] Junfeng Zhao, Minjia Zhang: Refactoring OpenMP Code Based on MapReduce Model. ISPA/IUCC/BDCloud/SocialCom/SustainCom 2018: 1040-1041
- [c14] Minjia Zhang, Wenhan Wang, Xiaodong Liu, Jianfeng Gao, Yuxiong He: Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models. NeurIPS 2018: 6311-6322
- [c13] Minjia Zhang, Samyam Rajbhandari, Wenhan Wang, Yuxiong He: DeepCPU: Serving RNN-based Deep Learning Models 10x Faster. USENIX ATC 2018: 951-965
- [i2] Minjia Zhang, Xiaodong Liu, Wenhan Wang, Jianfeng Gao, Yuxiong He: Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models. CoRR abs/1806.04189 (2018)
- [i1] Minjia Zhang, Yuxiong He: Zoom: SSD-based Vector Search for Optimizing Accuracy, Latency and Memory. CoRR abs/1809.04067 (2018)

2017
- [j1] Man Cao, Minjia Zhang, Aritra Sengupta, Swarnendu Biswas, Michael D. Bond: Hybridizing and Relaxing Dependence Tracking for Efficient Parallel Runtime Support. ACM Trans. Parallel Comput. 4(2): 9:1-9:42 (2017)
- [c12] Swarnendu Biswas, Man Cao, Minjia Zhang, Michael D. Bond, Benjamin P. Wood: Lightweight data race detection for production runs. CC 2017: 11-21
- [c11] Minjia Zhang, Swarnendu Biswas, Michael D. Bond: Avoiding consistency exceptions under strong memory models. ISMM 2017: 115-127
- [c10] Minjia Zhang, Swarnendu Biswas, Michael D. Bond: POSTER: On the Problem of Consistency Exceptions in the Context of Strong Memory Models. PPoPP 2017: 459-460

2016
- [c9] Minjia Zhang, Swarnendu Biswas, Michael D. Bond: Relaxed dependence tracking for parallel runtime support. CC 2016: 45-55
- [c8] Man Cao, Minjia Zhang, Aritra Sengupta, Michael D. Bond: Drinking from both glasses: combining pessimistic and optimistic tracking of cross-thread dependences. PPoPP 2016: 20:1-20:13

2015
- [c7] Aritra Sengupta, Swarnendu Biswas, Minjia Zhang, Michael D. Bond, Milind Kulkarni: Hybrid Static-Dynamic Analysis for Statically Bounded Region Serializability. ASPLOS 2015: 561-575
- [c6] Minjia Zhang: SIRe: an efficient snapshot isolation-based memory model for detecting and tolerating region conflicts. SPLASH (Companion Volume) 2015: 87-88
- [c5] Swarnendu Biswas, Minjia Zhang, Michael D. Bond, Brandon Lucia: Valor: efficient, software-only region conflict exceptions. OOPSLA 2015: 241-259
- [c4] Minjia Zhang, Jipeng Huang, Man Cao, Michael D. Bond: Low-overhead software transactional memory with progress guarantees and strong semantics. PPoPP 2015: 97-108

2013
- [c3] Michael D. Bond, Milind Kulkarni, Man Cao, Minjia Zhang, Meisam Fathi Salmi, Swarnendu Biswas, Aritra Sengupta, Jipeng Huang: OCTET: capturing and controlling cross-thread dependences efficiently. OOPSLA 2013: 693-712

2011
- [c2] Jithin Jose, Hari Subramoni, Miao Luo, Minjia Zhang, Jian Huang, Md. Wasi-ur-Rahman, Nusrat S. Islam, Xiangyong Ouyang, Hao Wang, Sayantan Sur, Dhabaleswar K. Panda: Memcached Design on High Performance RDMA Capable Interconnects. ICPP 2011: 743-752

2010
- [c1] Minjia Zhang, Hai Jin, Xuanhua Shi, Song Wu: VirtCFT: A Transparent VM-Level Fault-Tolerant System for Virtual Clusters. ICPADS 2010: 147-154
last updated on 2024-12-02 22:35 CET by the dblp team
all metadata released as open data under CC0 1.0 license