
Qiming Bao

Ph.D. Candidate in the Strong AI Lab, NAO Institute, University of Auckland

Homepage    LinkedIn    GitHub    Gmail    Google Scholar    DBLP    Twitter    CV    CV (Chinese)

Personal Details

Education

  • Ph.D. in Computer Science, University of Auckland (2020 - 2024)
  • B.Sc. (Honours) in Computer Science (First Class), University of Auckland (2018 - 2019)
  • B.Sc. in Computer Science, China Jiliang University (2014 - 2018)

Research Interests

    AI/DL, NLP, LLMs, Neural-Symbolic AI, Reasoning, Multimodal Document AI, Intelligent Document Processing

Publications

  • Qiming Bao, Alex Peng, Zhenyun Deng, Wanjun Zhong, Gaël Gendron, Neşet Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock, Jiamou Liu. Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning, Findings of ACL-24 [#1 on the ReClor Leaderboard] [Paper link] [Source code]
  • Qiming Bao, Juho Leinonen, Alex Yuxuan Peng, Wanjun Zhong, Tim Pistotti, Alice Huang, Paul Denny, Michael Witbrock, Jiamou Liu. Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models, AGI@ICLR 2024 [Paper link] [Source code]
  • Qiming Bao, Gaël Gendron, Alex Peng, Neset Tan, Michael Witbrock, Jiamou Liu. Assessing and Enhancing the Robustness of Large Language Models with Task Structure Variations for Logical Reasoning, ICONIP-24 [Paper link] [Source code]
  • Qiming Bao, Gaël Gendron, Alex Peng, Neset Tan, Michael Witbrock, Jiamou Liu. A Systematic Evaluation of Large Language Models on Out-of-Distribution Logical Reasoning Tasks, LLM@IJCAI'23 [Paper link] [Source code]
  • Qiming Bao, Alex Peng, Zhenyun Deng, Wanjun Zhong, Gaël Gendron, Neşet Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock, Jiamou Liu. Enhancing Logical Reasoning of Large Language Models through Logic-Driven Data Augmentation, LLM@IJCAI'23 [#1 on the ReClor Leaderboard] [Paper link] [Source code]
  • Qiming Bao, Alex Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu. Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation, IJCLR-NeSy-22 [Paper link] [Source code and dataset] [Presentation recording]
  • Nathan Young, Qiming Bao, Joshua Ljudo Bensemann, Michael J. Witbrock. AbductionRules: Training Transformers to Explain Unexpected Inputs, Findings of ACL-22 [Paper link] [Source code]
  • Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie. Large Language Models Are Not Strong Abstract Reasoners, IJCAI 2024 [Paper link] [Source code and evaluation platform]
  • Lin Ni, Qiming Bao, Xiaoxuan Li, Qianqian Qi, Paul Denny, Jim Warren, Michael Witbrock, Jiamou Liu. DeepQR: Neural-based Quality Ratings for Learnersourced Multiple-Choice Questions, AAAI/EAAI-22 [Paper link]
  • Qianqian Qi, Qiming Bao*, Alex Yuxuan Peng, Jiamou Liu, Michael Witbrock. A Dynamic Prompt-tuning Method for Data Augmentation with Associated Knowledge, ICLR-23 TinyPapers [Paper link]
  • Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie. Large Language Models Are Not Strong Abstract Reasoners Yet, AGI@ICLR 2024 [Paper link] [Source code and evaluation platform]
  • Qiming Bao, Lin Ni, Jiamou Liu. HHH: An Online Medical Chatbot System based on Knowledge Graph and Hierarchical Bi-Directional Attention, ACSW-20 [Paper link] [Source code] [Presentation slide] [Recording]
  • Zhongsheng Wang, Jiamou Liu, Qiming Bao, Hongfei Rong, Jingfeng Zhang. ChatLogic: Integrating Logic Programming with Large Language Models for Multi-step Reasoning, NucLeaR@AAAI 2024 [Paper link] [Source code]
  • Neset Tan, Trung Nguyen, Josh Bensemann, Alex Peng, Qiming Bao, Yang Chen, Mark Gahegan, Michael Witbrock. Multi2Claim: Generating Scientific Claims from Multi-Choice Questions for Scientific Fact-Checking, EACL-23 [Paper link]
  • Neset Tan, Alex Peng, Joshua Bensemann, Qiming Bao, Tim Hartill, Mark Gahegan, Michael Witbrock. Input-length-shortening and text generation via attention values, AAAI-EMC^2-23 [Paper link]

Work & Project Experience

    Enhancing Max Sequence Length in Large Multimodal Models
    Xtracta, Auckland, New Zealand
    Artificial Intelligence Researcher/Engineer, 07/22 – present
  • Investigated and implemented alternative attention mechanisms to extend the effective sequence length in multi-modal document processing models such as LayoutLMv3 and ERNIE-LayoutX.
  • Applied the sliding-window technique and a global attention mask from Longformer to extend the maximum sequence length from 512 to 4096 tokens, enabling LayoutLMv3 and ERNIE-LayoutX to achieve higher F1 scores on XFUND, FUNSD and other internal company datasets without significantly increasing GPU memory usage (a minimal sketch of this attention pattern follows this list).
  • Replicated the multi-task, multimodal pre-training code for LayoutLMv3, which Microsoft did not open source, including masked language modeling, masked image modeling, and word-patch alignment.
  • Integrated DeepSpeed and adapters into ERNIE-LayoutX and LayoutLMv3, reducing training cost, shrinking model size, and simplifying deployment to production.
  • Successfully applied for Research & Development Tax Incentive (RDTI) grants from Callaghan Innovation (New Zealand's Innovation Agency) for both 2022 and 2023, each providing a tax credit equal to 15% of eligible R&D expenditure that offsets the company's income tax.
  • Integrated FlashAttention-2 into the self-attention layers of ERNIE-LayoutX, reducing peak training GPU memory usage by up to 50% under FP16.
  • Applied affine transformations as data augmentation during training, improving robustness to line-alignment issues in document extraction.
  • Fine-tuned Qwen2, the language submodel of the InternVL2 multimodal model, with PEFT adapters combined with continued training, making it possible to train the 1-billion-parameter InternVL2 model on a single A6000 GPU (see the LoRA/FlashAttention-2 sketch after this list).
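
The Longformer attention pattern referenced above can be illustrated with the public HuggingFace checkpoint. A minimal sketch, assuming the torch and transformers packages are installed; the actual port into LayoutLMv3/ERNIE-LayoutX is proprietary and not shown:

    # Minimal sketch of Longformer-style sliding-window (local) + global attention.
    import torch
    from transformers import AutoTokenizer, LongformerModel

    tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
    model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

    # A document far longer than the usual 512-token limit.
    text = " ".join(["word"] * 3000)
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)

    # Every token gets sliding-window attention by default; positions set to 1
    # in global_attention_mask additionally attend to, and are attended by, all tokens.
    global_attention_mask = torch.zeros_like(inputs["input_ids"])
    global_attention_mask[:, 0] = 1  # give the leading <s> token global attention

    with torch.no_grad():
        outputs = model(**inputs, global_attention_mask=global_attention_mask)
    print(outputs.last_hidden_state.shape)  # (1, seq_len, 768), well past 512 tokens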
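
Likewise, the LoRA-adapter and FlashAttention-2 bullets can be sketched with public tooling. A hedged illustration, assuming torch, transformers, peft and flash-attn are installed on an Ampere-or-newer GPU, and using "Qwen/Qwen2-0.5B" as a stand-in for the Qwen2 submodel of InternVL2; the internal training pipeline is not reproduced:

    # Minimal sketch: LoRA adapters with FlashAttention-2 kernels enabled.
    import torch
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen2-0.5B",
        torch_dtype=torch.float16,                # FP16, as in the memory figures above
        attn_implementation="flash_attention_2",  # fused attention kernels
    )

    # Wrap only the attention projections with low-rank adapters; freezing the
    # base weights is what makes single-GPU training of a small LLM feasible.
    lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only a small fraction is trainable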

    Large Language Models and Logical Reasoning (Ph.D. Main Topic)
    University of Auckland, Auckland, New Zealand
    Research & Development Project Leader/Developer, 02/20 – 03/24
  • We developed an LLM-based iterative enhancement framework for generating explanations, in which an explanation generation module and an explanation evaluation module interact iteratively to improve explanation quality (a schematic follows this list). Our paper was accepted at AGI@ICLR 2024: paper and source code.
  • Our method AMR-LDA (GPT-4 + AMR-LDA prompt augmentation) achieved #1 on the ReClor leaderboard, and we were the first group worldwide to score above 90% on the hidden test set. Our paper was accepted at Findings of ACL-24 and LLM@IJCAI'23: paper, source code and model weights.
  • We evaluated generative and discriminative large language models on out-of-distribution logical reasoning tasks: while they excel on standard tasks, minor task variations lead to notable performance drops, indicating insufficient reasoning capabilities. Our paper was accepted at LLM@IJCAI'23: paper and source code.
  • To address depth imbalance in multi-step reasoning datasets and enhance model performance, we created IMA-GloVe-GA, a model combining DeepLogic with gate attention, and developed a larger dataset, PARARULE-Plus, for deep multi-step reasoning over natural language. We published the paper, code and data, and a presentation recording at IJCLR-NeSy-22.
  • We built a dataset called AbductionRules to improve Transformer performance on tasks requiring abductive reasoning. We published the paper, code and data at Findings of ACL-22.
  • The PARARULE-Plus (multi-step deductive reasoning) and AbductionRules (abductive reasoning) datasets have been collected and merged into LogiTorch.ai, ReasoningNLP, Prompt4ReasoningPapers and OpenAI/Evals.
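
The generation-evaluation loop in the first bullet can be shown schematically. A minimal sketch in which llm() is a hypothetical completion helper and the stopping rule is a placeholder; this is an illustration, not the published implementation:

    # Schematic of the iterative enhancement framework: generate an explanation,
    # critique it, then regenerate conditioned on the critique.
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in any chat-model API here")

    def iterative_enhancement(question: str, answer: str, rounds: int = 3) -> str:
        explanation = llm(f"Explain why '{answer}' is the answer to: {question}")
        for _ in range(rounds):
            critique = llm(f"Critique this explanation; end with PASS or FAIL:\n{explanation}")
            if critique.strip().endswith("PASS"):  # placeholder stopping rule
                break
            explanation = llm(
                "Rewrite the explanation to address the critique.\n"
                f"Explanation: {explanation}\nCritique: {critique}"
            )
        return explanation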

    Abstract Extraction and Multi-Turn Dialogue System
    Advanced Institute of Information Technology, Peking University, Hangzhou, China
    Research and Development Engineer, 11/19 – 02/20
  • We researched and developed a robot-based system covering automatic abstract extraction, text segmentation, theme prediction, and multi-turn question answering.
  • Investigated robot-related technologies and produced standards documentation for them.
  • We built a well-encapsulated API for processing meeting-record documents based on abstract extraction, text segmentation, and theme prediction.

    HHH: An Online Medical Chatbot System
    Precision Driven Health & Orion Health, Auckland, New Zealand
    Research Project Leader and Developer, 11/18 – 04/19
  • We developed a medical text similarity algorithm called HBAM using a pre-trained language model and a knowledge graph.
  • HBAM achieves higher test accuracy than both the BERT and MaLSTM baselines (a sketch of the MaLSTM-style baseline follows this list); see the code (85+ stars), news, recording and the published paper (60+ citations) at ACSW-20.
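
The MaLSTM baseline mentioned above has a compact, well-known form: a shared (siamese) LSTM encodes both sentences, and similarity is the exponential of the negative Manhattan distance between the encodings. A minimal PyTorch sketch with toy dimensions, not the HBAM model itself:

    # MaLSTM-style similarity: sim(a, b) = exp(-||h_a - h_b||_1), in (0, 1].
    import torch
    import torch.nn as nn

    class MaLSTM(nn.Module):
        def __init__(self, vocab: int = 10000, emb: int = 100, hidden: int = 50):
            super().__init__()
            self.embed = nn.Embedding(vocab, emb)
            self.lstm = nn.LSTM(emb, hidden, batch_first=True)

        def encode(self, ids: torch.Tensor) -> torch.Tensor:
            _, (h, _) = self.lstm(self.embed(ids))
            return h[-1]  # final hidden state of each sentence

        def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
            return torch.exp(-(self.encode(a) - self.encode(b)).abs().sum(dim=1))

    model = MaLSTM()
    s1 = torch.randint(0, 10000, (2, 12))  # two toy token-id sequences
    s2 = torch.randint(0, 10000, (2, 12))
    print(model(s1, s2))  # similarity scores in (0, 1]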

Invited Speaker/Visiting Scholar

  • Microsoft Research Asia Invited Talk 2022 (Invitation Letter) (Presentation Slide) (Recording)
  • Samsung AI Center Cambridge UK Invited Talk 2022 (Invitation Letter) (Presentation Slide) (Recording)
  • IEEE Vehicular Technology Society (VTS) New Zealand North Chapter and IEEE New Zealand North Section SIGHT Group 2022 (Invitation Letter) (Presentation Slide) (Recording)
  • ZJU-NLP Group, Zhejiang University 2023
  • Shanghai AI Lab 2023
  • NLP Group, The University of Melbourne Invited Talk 2023 (Invitation Letter) (Presentation Slide)
  • Institute of Automation, Chinese Academy of Sciences Invited Talk 2023 (Invitation Poster) (Presentation Slide)
  • Shenzhen MSU-BIT University Invited Talk 2024 (Invitation Letter) (Presentation Slide)
  • University of Massachusetts - Amherst Invited Talk 2024 (Invitation Letter) (Presentation Slide)
  • Penn State University & University of Auckland Online Workshop 2024, Day 1 Session 2: Children's Future, Intercultural Learning (Invitation Letter) (Presentation Slide) (Recording)

Conference Reviewer

  • NAACL 2024 (Reviewer) (Core Rank: A, CCF Rank: B)
  • ICONIP 2024, Auckland, New Zealand (Program Committee) (Core Rank: B, CCF Rank: C)
  • IJCLR 2024, Nanjing, China (Program Committee) (CCF Rank: C)
  • NuCLeaR@AAAI 2024, Vancouver, Canada (Program Committee)
  • ECAI 2023, Kraków, Poland (Program Committee) (Core Rank: A, CCF Rank: B)
  • ACL 2022 (Reviewer) (Core Rank: A*, CCF Rank: A)
  • NLPCC 2021/2022/2023 (Program Committee) (CCF Rank: C)

Journal Reviewer

  • Knowledge-based Systems 2024 (SCI, IF:8.8, JCR Q1)
  • International Journal of Artificial Intelligence in Education 2024 (SCI, IF:4.9, JCR Q1)
  • IEEE/ACM Transactions on Computational Biology and Bioinformatics 2022 (SCI, IF:3.71, CCF Rank B, JCR Q1)

Magazine Guest Editor

  • CRACCUM, The University of Auckland student magazine (my article on whether schools should allow students to use ChatGPT/GPT-4)

Teaching/Grant Experience and Other Achievements

  • Vice President of Australia and New Zealand Alumni Association of China Jiliang University
  • Outstanding PhD student of the Faculty of Science, University of Auckland
  • PhD Extension Award, University of Auckland
  • The Computer Science Graduate Student Travel (CSGST) Award 2023 & 2024, University of Auckland
  • PhD Mentor (Outstanding PhD Mentor, School of Computer Science, University of Auckland)
  • The Research & Development Tax Incentive (RDTI) Grants, Callaghan Innovation (New Zealand's Innovation Agency)
  • PhD Research Project Scholarship, University of Auckland
  • First-Class Honours, University of Auckland
  • Precision Driven Health & Orion Health Summer Scholarship
  • Outstanding Graduate Student, Zhejiang Province (Top 1%)
  • The Honourable Mention of 2018 Interdisciplinary Contest In Modeling (Top 10%)
  • Outstanding Graduates of Hangzhou No.11 High School (杭十一中优秀毕业校友)
  • Outstanding Graduates of China Jiliang University (中国计量大学优秀毕业校友)

The University of Auckland

  • COMPSCI 110 Introduction to Computer Systems (Course Marker)
  • COMPSCI 220 Algorithms and Data Structures (Tutor)
  • COMPSCI 235 Software Development Methodologies (Tutor for students from both University of Auckland and Southwest University)
  • SOFTENG 325 Software Architecture (Tutor)
  • COMPSCI 367 Artificial Intelligence (Course Marker)
  • COMPSCI 399 Capstone: Computer Science (Tutor/Project Supervisor)
  • COMPSCI 703 Generalising Artificial Intelligence (Tutor)
  • COMPSCI 778 Master of Information Technology Internship Mentor
  • Lab Demonstrator

Monash University & Southeast University Joint Graduate School (Monash-SEU JGS)

  • FIT5046 Mobile and Distributed Computing Systems (Teaching Assistant for Master's Programs)