Wangchunshu Zhou
2020 – today

2024
- [j2] Yan Zeng, Xinsong Zhang, Hang Li, Jiawei Wang, Jipeng Zhang, Wangchunshu Zhou: X$^{2}$-VLM: All-in-One Pre-Trained Model for Vision-Language Tasks. IEEE Trans. Pattern Anal. Mach. Intell. 46(5): 3156-3168 (2024)
- [c43] Shuofei Qiao, Ningyu Zhang, Runnan Fang, Yujie Luo, Wangchunshu Zhou, Yuchen Eleanor Jiang, Chengfei Lv, Huajun Chen: AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning. ACL (1) 2024: 3003-3021
- [c42] Ziyu Zhao, Leilei Gan, Guoyin Wang, Wangchunshu Zhou, Hongxia Yang, Kun Kuang, Fei Wu: LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed Tasks in the Wild. ACL (Findings) 2024: 4447-4462
- [c41] Yizhi Li, Ge Zhang, Xingwei Qu, Jiali Li, Zhaoqun Li, Noah Wang, Hao Li, Ruibin Yuan, Yinghao Ma, Kai Zhang, Wangchunshu Zhou, Yiming Liang, Lei Zhang, Lei Ma, Jiajun Zhang, Zuowen Li, Wenhao Huang, Chenghua Lin, Jie Fu: CIF-Bench: A Chinese Instruction-Following Benchmark for Evaluating the Generalizability of Large Language Models. ACL (Findings) 2024: 12431-12446
- [c40] Noah Wang, Z. y. Peng, Haoran Que, Jiaheng Liu, Wangchunshu Zhou, Yuhan Wu, Hongcheng Guo, Ruitong Gan, Zehao Ni, Jian Yang, Man Zhang, Zhaoxiang Zhang, Wanli Ouyang, Ke Xu, Wenhao Huang, Jie Fu, Junran Peng: RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models. ACL (Findings) 2024: 14743-14777
- [c39] Zekun Wang, Jingchang Chen, Wangchunshu Zhou, Haichao Zhu, Jiafeng Liang, Liping Shan, Ming Liu, Dongliang Xu, Qing Yang, Bing Qin: SmartTrim: Adaptive Tokens and Attention Pruning for Efficient Vision-Language Models. LREC/COLING 2024: 14937-14953
- [c38] Fuzhao Xue, Zian Zheng, Yao Fu, Jinjie Ni, Zangwei Zheng, Wangchunshu Zhou, Yang You: OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models. ICML 2024
- [c37] Xiangru Tang, Yiming Zong, Jason Phang, Yilun Zhao, Wangchunshu Zhou, Arman Cohan, Mark Gerstein: Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? NAACL (Short Papers) 2024: 12-34
- [i53] Shuofei Qiao, Ningyu Zhang, Runnan Fang, Yujie Luo, Wangchunshu Zhou, Yuchen Eleanor Jiang, Chengfei Lv, Huajun Chen: AUTOACT: Automatic Agent Learning from Scratch via Self-Planning. CoRR abs/2401.05268 (2024)
- [i52] Tiannan Wang, Jiamin Chen, Qingrui Jia, Shuai Wang, Ruoyu Fang, Huilin Wang, Zhaowei Gao, Chunzhao Xie, Chuou Xu, Jihong Dai, Yibin Liu, Jialong Wu, Shengwei Ding, Long Li, Zhiwei Huang, Xinle Deng, Teng Yu, Gangan Ma, Han Xiao, Zixin Chen, Danjun Xiang, Yunxia Wang, Yuanyuan Zhu, Yi Xiao, Jing Wang, Yiru Wang, Siran Ding, Jiayang Huang, Jiayi Xu, Yilihamu Tayier, Zhenyu Hu, Yuan Gao, Chengfeng Zheng, Yueshu Ye, Yihang Li, Lei Wan, Xinyue Jiang, Yujie Wang, Siyu Cheng, Zhule Song, Xiangru Tang, Xiaohua Xu, Ningyu Zhang, Huajun Chen, Yuchen Eleanor Jiang, Wangchunshu Zhou: Weaver: Foundation Models for Creative Writing. CoRR abs/2401.17268 (2024)
- [i51] Fuzhao Xue, Zian Zheng, Yao Fu, Jinjie Ni, Zangwei Zheng, Wangchunshu Zhou, Yang You: OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models. CoRR abs/2402.01739 (2024)
- [i50] Xiangru Tang, Qiao Jin, Kunlun Zhu, Tongxin Yuan, Yichi Zhang, Wangchunshu Zhou, Meng Qu, Yilun Zhao, Jian Tang, Zhuosheng Zhang, Arman Cohan, Zhiyong Lu, Mark Gerstein: Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science. CoRR abs/2402.04247 (2024)
- [i49] Ziyu Zhao, Leilei Gan, Guoyin Wang, Wangchunshu Zhou, Hongxia Yang, Kun Kuang, Fei Wu: LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed Tasks in the Wild. CoRR abs/2402.09997 (2024)
- [i48] Yizhi Li, Ge Zhang, Xingwei Qu, Jiali Li, Zhaoqun Li, Zekun Wang, Hao Li, Ruibin Yuan, Yinghao Ma, Kai Zhang, Wangchunshu Zhou, Yiming Liang, Lei Zhang, Lei Ma, Jiajun Zhang, Zuowen Li, Stephen W. Huang, Chenghua Lin, Wenhu Chen, Jie Fu: CIF-Bench: A Chinese Instruction-Following Benchmark for Evaluating the Generalizability of Large Language Models. CoRR abs/2402.13109 (2024)
- [i47] Chunyuan Deng, Xiangru Tang, Yilun Zhao, Hanming Wang, Haoran Wang, Wangchunshu Zhou, Arman Cohan, Mark Gerstein: MIMIR: A Streamlined Platform for Personalized Agent Tuning in Domain Expertise. CoRR abs/2404.04285 (2024)
- [i46] Ge Zhang, Scott Qu, Jiaheng Liu, Chenchen Zhang, Chenghua Lin, Chou Leuang Yu, Danny Pan, Esther Cheng, Jie Liu, Qunshu Lin, Raven Yuan, Tuney Zheng, Wei Pang, Xinrun Du, Yiming Liang, Yinghao Ma, Yizhi Li, Ziyang Ma, Bill Y. Lin, Emmanouil Benetos, Huan Yang, Junting Zhou, Kaijing Ma, Minghao Liu, Morry Niu, Noah Wang, Quehry Que, Ruibo Liu, Sine Liu, Shawn Guo, Soren Gao, Wangchunshu Zhou, Xinyue Zhang, Yizhi Zhou, Yubo Wang, Yuelin Bai, Yuhan Zhang, Yuxiang Zhang, Zenith Wang, Zhenzhu Yang, Zijian Zhao, Jiajun Zhang, Wanli Ouyang, Wenhao Huang, Wenhu Chen: MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series. CoRR abs/2405.19327 (2024)
- [i45] Wangchunshu Zhou, Yixin Ou, Shengwei Ding, Long Li, Jialong Wu, Tiannan Wang, Jiamin Chen, Shuai Wang, Xiaohua Xu, Ningyu Zhang, Huajun Chen, Yuchen Eleanor Jiang: Symbolic Learning Enables Self-Evolving Agents. CoRR abs/2406.18532 (2024)
- [i44] Yu Wang, Chi Han, Tongtong Wu, Xiaoxin He, Wangchunshu Zhou, Nafis Sadeq, Xiusi Chen, Zexue He, Wei Wang, Gholamreza Haffari, Heng Ji, Julian J. McAuley: Towards LifeSpan Cognitive Systems. CoRR abs/2409.13265 (2024)
- [i43] Haoran Que, Feiyu Duan, Liqun He, Yutao Mou, Wangchunshu Zhou, Jiaheng Liu, Wenge Rong, Zekun Moore Wang, Jian Yang, Ge Zhang, Junran Peng, Zhaoxiang Zhang, Songyang Zhang, Kai Chen: HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models. CoRR abs/2409.16191 (2024)

2023
- [j1] Siyu Lu, Zheng Liu, Tianlin Liu, Wangchunshu Zhou: Scaling-up medical vision-and-language representation learning with federated learning. Eng. Appl. Artif. Intell. 126(Part D): 107037 (2023)
- [c36] Wangchunshu Zhou, Qifei Li, Chenle Li: Learning to Predict Persona Information for Dialogue Personalization without Explicit Persona Description. ACL (Findings) 2023: 2979-2991
- [c35] Yan Zeng, Wangchunshu Zhou, Ao Luo, Ziming Cheng, Xinsong Zhang: Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training. ACL (1) 2023: 5731-5746
- [c34] Wangchunshu Zhou, Ronan Le Bras, Yejin Choi: Commonsense Knowledge Transfer for Pre-trained Language Models. ACL (Findings) 2023: 5946-5960
- [c33] Wangchunshu Zhou, Ronan Le Bras, Yejin Choi: Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference. ACL (Findings) 2023: 10452-10465
- [c32] Tiannan Wang, Wangchunshu Zhou, Yan Zeng, Xinsong Zhang: EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-adaptive Pruning. ACL (Findings) 2023: 13899-13913
- [c31] Ying Jiao, Kumar Shridhar, Peng Cui, Wangchunshu Zhou, Mrinmaya Sachan: Automatic Educational Question Generation with Difficulty Level Controls. AIED 2023: 476-488
- [c30] Vilém Zouhar, Shehzaad Dhuliawala, Wangchunshu Zhou, Nico Daheim, Tom Kocmi, Yuchen Eleanor Jiang, Mrinmaya Sachan: Poor Man's Quality Estimation: Predicting Reference-Based MT Metrics Without the Reference. EACL 2023: 1303-1317
- [c29] Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, Xuezhe Ma: Evaluating Large Language Models on Controlled Generation Tasks. EMNLP 2023: 3155-3168
- [c28] Yifan Hou, Jiaoda Li, Yu Fei, Alessandro Stolfo, Wangchunshu Zhou, Guangtao Zeng, Antoine Bosselut, Mrinmaya Sachan: Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models. EMNLP 2023: 4902-4919
- [c27] Ruida Wang, Wangchunshu Zhou, Mrinmaya Sachan: Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models. EMNLP (Findings) 2023: 11817-11831
- [c26] Shizhe Diao, Yongyu Lei, Liangming Pan, Tianqing Fang, Wangchunshu Zhou, Sedrick Scott Keh, Min-Yen Kan, Tong Zhang: Doolittle: Benchmarks and Corpora for Academic Writing Formalization. EMNLP 2023: 13093-13111
- [c25] Shizhe Diao, Wangchunshu Zhou, Xinsong Zhang, Jiawei Wang: Write and Paint: Generative Vision-Language Models are Unified Modal Learners. ICLR 2023
- [c24] Wangchunshu Zhou, Yuchen Eleanor Jiang, Ethan Wilcox, Ryan Cotterell, Mrinmaya Sachan: Controlled Text Generation with Natural Language Instructions. ICML 2023: 42602-42613
- [c23] Fuzhao Xue, Yao Fu, Wangchunshu Zhou, Zangwei Zheng, Yang You: To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis. NeurIPS 2023
- [c22] Kirill Semenov, Vilém Zouhar, Tom Kocmi, Dongdong Zhang, Wangchunshu Zhou, Yuchen Eleanor Jiang: Findings of the WMT 2023 Shared Task on Machine Translation with Terminologies. WMT 2023: 663-671
- [i42] Vilém Zouhar, Shehzaad Dhuliawala, Wangchunshu Zhou, Nico Daheim, Tom Kocmi, Yuchen Eleanor Jiang, Mrinmaya Sachan: Poor Man's Quality Estimation: Predicting Reference-Based MT Metrics Without the Reference. CoRR abs/2301.09008 (2023)
- [i41] Wangchunshu Zhou, Yuchen Eleanor Jiang, Ethan Wilcox, Ryan Cotterell, Mrinmaya Sachan: Controlled Text Generation with Natural Language Instructions. CoRR abs/2304.14293 (2023)
- [i40] Wangchunshu Zhou, Yuchen Eleanor Jiang, Ryan Cotterell, Mrinmaya Sachan: Efficient Prompting via Dynamic In-Context Learning. CoRR abs/2305.11170 (2023)
- [i39] Fuzhao Xue, Yao Fu, Wangchunshu Zhou, Zangwei Zheng, Yang You: To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis. CoRR abs/2305.13230 (2023)
- [i38] Zekun Wang, Ge Zhang, Kexin Yang, Ning Shi, Wangchunshu Zhou, Shaochun Hao, Guangzheng Xiong, Yizhi Li, Mong Yuan Sim, Xiuying Chen, Qingqing Zhu, Zhenzhu Yang, Adam Nik, Qi Liu, Chenghua Lin, Shi Wang, Ruibo Liu, Wenhu Chen, Ke Xu, Dayiheng Liu, Yike Guo, Jie Fu: Interactive Natural Language Processing. CoRR abs/2305.13246 (2023)
- [i37] Wangchunshu Zhou, Yuchen Eleanor Jiang, Peng Cui, Tiannan Wang, Zhenxin Xiao, Yifan Hou, Ryan Cotterell, Mrinmaya Sachan: RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text. CoRR abs/2305.13304 (2023)
- [i36] Zekun Wang, Jingchang Chen, Wangchunshu Zhou, Ming Liu, Bing Qin: SmartTrim: Adaptive Tokens and Parameters Pruning for Efficient Vision-Language Models. CoRR abs/2305.15033 (2023)
- [i35] Wangchunshu Zhou, Ronan Le Bras, Yejin Choi: Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference. CoRR abs/2306.02379 (2023)
- [i34] Wangchunshu Zhou, Ronan Le Bras, Yejin Choi: Commonsense Knowledge Transfer for Pre-trained Language Models. CoRR abs/2306.02388 (2023)
- [i33] Wangchunshu Zhou, Yuchen Eleanor Jiang, Long Li, Jialong Wu, Tiannan Wang, Shi Qiu, Jintian Zhang, Jing Chen, Ruipu Wu, Shuai Wang, Shiding Zhu, Jiyu Chen, Wentao Zhang, Ningyu Zhang, Huajun Chen, Peng Cui, Mrinmaya Sachan: Agents: An Open-source Framework for Autonomous Language Agents. CoRR abs/2309.07870 (2023)
- [i32] Xiangru Tang, Yiming Zong, Jason Phang, Yilun Zhao, Wangchunshu Zhou, Arman Cohan, Mark Gerstein: Struc-Bench: Are Large Language Models Really Good at Generating Complex Structured Data? CoRR abs/2309.08963 (2023)
- [i31] Yilei Wu, Zijian Dong, Chongyao Chen, Wangchunshu Zhou, Juan Helen Zhou: Mixup Your Own Pairs. CoRR abs/2309.16633 (2023)
- [i30] Zekun Moore Wang, Zhongyuan Peng, Haoran Que, Jiaheng Liu, Wangchunshu Zhou, Yuhan Wu, Hongcheng Guo, Ruitong Gan, Zehao Ni, Man Zhang, Zhaoxiang Zhang, Wanli Ouyang, Ke Xu, Wenhu Chen, Jie Fu, Junran Peng: RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models. CoRR abs/2310.00746 (2023)
- [i29] Ruida Wang, Wangchunshu Zhou, Mrinmaya Sachan: Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models. CoRR abs/2310.13671 (2023)
- [i28] Yifan Hou, Jiaoda Li, Yu Fei, Alessandro Stolfo, Wangchunshu Zhou, Guangtao Zeng, Antoine Bosselut, Mrinmaya Sachan: Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models. CoRR abs/2310.14491 (2023)
- [i27] Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, Xuezhe Ma: Evaluating Large Language Models on Controlled Generation Tasks. CoRR abs/2310.14542 (2023)
- [i26] Yuliang Liu, Xiangru Tang, Zefan Cai, Junjie Lu, Yichi Zhang, Yanjun Shao, Zexuan Deng, Helan Hu, Zengxian Yang, Kaikai An, Ruijun Huang, Shuzheng Si, Sheng Chen, Haozhe Zhao, Zhengliang Li, Liang Chen, Yiming Zong, Yan Wang, Tianyu Liu, Zhiwei Jiang, Baobao Chang, Yujia Qin, Wangchunshu Zhou, Yilun Zhao, Arman Cohan, Mark Gerstein: ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks. CoRR abs/2311.09835 (2023)
- [i25] Haoqin Tu, Chenhang Cui, Zijun Wang, Yiyang Zhou, Bingchen Zhao, Junlin Han, Wangchunshu Zhou, Huaxiu Yao, Cihang Xie: How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs. CoRR abs/2311.16101 (2023)

2022
- [c21] Zhiyi Fu, Wangchunshu Zhou, Jingjing Xu, Hao Zhou, Lei Li: Contextual Representation Learning beyond Masked Language Modeling. ACL (1) 2022: 2701-2714
- [c20] Wangchunshu Zhou, Canwen Xu, Julian J. McAuley: BERT Learns to Teach: Knowledge Distillation with Meta Learning. ACL (1) 2022: 7037-7049
- [c19] Wangchunshu Zhou, Canwen Xu, Julian J. McAuley: Efficiently Tuned Parameters Are Task Embeddings. EMNLP 2022: 5007-5014
- [c18] Wangchunshu Zhou, Yan Zeng, Shizhe Diao, Xinsong Zhang: VLUE: A Multi-Task Multi-Dimension Benchmark for Evaluating Vision-Language Pre-training. ICML 2022: 27395-27411
- [i24] Zhiyi Fu, Wangchunshu Zhou, Jingjing Xu, Hao Zhou, Lei Li: Contextual Representation Learning beyond Masked Language Modeling. CoRR abs/2204.04163 (2022)
- [i23] Wangchunshu Zhou, Yan Zeng, Shizhe Diao, Xinsong Zhang: VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models. CoRR abs/2205.15237 (2022)
- [i22] Yan Zeng, Wangchunshu Zhou, Ao Luo, Xinsong Zhang: Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training. CoRR abs/2206.00621 (2022)
- [i21] Shizhe Diao, Wangchunshu Zhou, Xinsong Zhang, Jiawei Wang: Prefix Language Models are Unified Modal Learners. CoRR abs/2206.07699 (2022)
- [i20] Tiannan Wang, Wangchunshu Zhou, Yan Zeng, Xinsong Zhang: EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-adaptive Pruning. CoRR abs/2210.07795 (2022)
- [i19] Wangchunshu Zhou, Canwen Xu, Julian J. McAuley: Efficiently Tuned Parameters are Task Embeddings. CoRR abs/2210.11705 (2022)
- [i18] Yan Zeng, Xinsong Zhang, Hang Li, Jiawei Wang, Jipeng Zhang, Wangchunshu Zhou: X2-VLM: All-In-One Pre-trained Model For Vision-Language Tasks. CoRR abs/2211.12402 (2022)

2021
- [c17] Wangchunshu Zhou, Qifei Li, Chenle Li: Learning from Perturbations: Diverse and Informative Dialogue Generation with Inverse Adversarial Training. ACL/IJCNLP (1) 2021: 694-703
- [c16] Wangchunshu Zhou, Tao Ge, Canwen Xu, Ke Xu, Furu Wei: Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting. EMNLP (1) 2021: 571-582
- [c15] Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian J. McAuley, Furu Wei: Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression. EMNLP (1) 2021: 10653-10659
- [c14] Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Xiang Ren: Pre-training Text-to-Text Transformers for Concept-centric Common Sense. ICLR 2021
- [c13] Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian J. McAuley, Furu Wei: Blow the Dog Whistle: A Chinese Dataset for Cant Understanding with Common Sense and World Knowledge. NAACL-HLT 2021: 2139-2145
- [i17] Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei: Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting. CoRR abs/2101.00416 (2021)
- [i16] Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian J. McAuley, Furu Wei: Blow the Dog Whistle: A Chinese Dataset for Cant Understanding with Common Sense and World Knowledge. CoRR abs/2104.02704 (2021)
- [i15] Wangchunshu Zhou, Qifei Li, Chenle Li: Learning from Perturbations: Diverse and Informative Dialogue Generation with Inverse Adversarial Training. CoRR abs/2105.15171 (2021)
- [i14] Wangchunshu Zhou, Canwen Xu, Julian J. McAuley: Meta Learning for Knowledge Distillation. CoRR abs/2106.04570 (2021)
- [i13] Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian J. McAuley, Furu Wei: Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression. CoRR abs/2109.03228 (2021)
- [i12] Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei Li: A Survey on Green Deep Learning. CoRR abs/2111.05193 (2021)
- [i11] Wangchunshu Zhou, Qifei Li, Chenle Li: Learning to Predict Persona Information for Dialogue Personalization without Explicit Persona Description. CoRR abs/2111.15093 (2021)

2020
- [c12] Wangchunshu Zhou, Ke Xu: Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models. AAAI 2020: 9717-9724
- [c11] Bill Yuchen Lin, Ming Shen, Wangchunshu Zhou, Pei Zhou, Chandra Bhagavatula, Yejin Choi, Xiang Ren: CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning. AKBC 2020
- [c10] Qifei Li, Wangchunshu Zhou: Connecting the Dots Between Fact Verification and Fake News Detection. COLING 2020: 1820-1825
- [c9] Wangchunshu Zhou, Tao Ge, Chang Mu, Ke Xu, Furu Wei, Ming Zhou: Improving Grammatical Error Correction with Machine Translation Pairs. EMNLP (Findings) 2020: 318-328
- [c8] Wangchunshu Zhou, Tao Ge, Ke Xu: Pseudo-Bidirectional Decoding for Local Sequence Transduction. EMNLP (Findings) 2020: 1506-1511
- [c7] Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, Xiang Ren: CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning. EMNLP (Findings) 2020: 1823-1840
- [c6] Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou, Ke Xu: Scheduled DropHead: A Regularization Method for Transformer Models. EMNLP (Findings) 2020: 1971-1980
- [c5] Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou: BERT-of-Theseus: Compressing BERT by Progressive Module Replacing. EMNLP (1) 2020: 7859-7869
- [c4] Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou: Self-Adversarial Learning with Comparative Discrimination for Text Generation. ICLR 2020
- [c3] Wangchunshu Zhou, Jinyi Hu, Hanlin Zhang, Xiaodan Liang, Maosong Sun, Chenyan Xiong, Jian Tang: Towards Interpretable Natural Language Understanding with Explanations as Latent Variables. NeurIPS 2020
- [c2] Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian J. McAuley, Ke Xu, Furu Wei: BERT Loses Patience: Fast and Robust Inference with Early Exit. NeurIPS 2020
- [i10] Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou: Self-Adversarial Learning with Comparative Discrimination for Text Generation. CoRR abs/2001.11691 (2020)
- [i9] Wangchunshu Zhou, Tao Ge, Ke Xu: Pseudo-Bidirectional Decoding for Local Sequence Transduction. CoRR abs/2001.11694 (2020)
- [i8] Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou: BERT-of-Theseus: Compressing BERT by Progressive Module Replacing. CoRR abs/2002.02925 (2020)
- [i7] Wangchunshu Zhou, Ke Xu: Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models. CoRR abs/2002.05058 (2020)
- [i6] Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou: Scheduled DropHead: A Regularization Method for Transformer Models. CoRR abs/2004.13342 (2020)
- [i5] Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian J. McAuley, Ke Xu, Furu Wei: BERT Loses Patience: Fast and Robust Inference with Early Exit. CoRR abs/2006.04152 (2020)
- [i4] Qifei Li, Wangchunshu Zhou: Connecting the Dots Between Fact Verification and Fake News Detection. CoRR abs/2010.05202 (2020)
- [i3] Wangchunshu Zhou, Jinyi Hu, Hanlin Zhang, Xiaodan Liang, Maosong Sun, Chenyan Xiong, Jian Tang: Towards Interpretable Natural Language Understanding with Explanations as Latent Variables. CoRR abs/2011.05268 (2020)
- [i2] Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Bill Yuchen Lin, Xiang Ren: Pre-training Text-to-Text Transformers for Concept-centric Common Sense. CoRR abs/2011.07956 (2020)