Li Qian
2024
SparkRA: A Retrieval-Augmented Knowledge Service System Based on Spark Large Language Model
Dayong Wu | Jiaqi Li | Baoxin Wang | Honghong Zhao | Siyuan Xue | Yanjie Yang | Zhijun Chang | Rui Zhang | Li Qian | Bo Wang | Shijin Wang | Zhixiong Zhang | Guoping Hu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Large language models (LLMs) have shown remarkable achievements across various language tasks. To enhance the performance of LLMs in scientific literature services, we developed the scientific literature LLM (SciLit-LLM) through pre-training and supervised fine-tuning on scientific literature, building upon the iFLYTEK Spark LLM. Furthermore, we present Spark Research Assistant (SparkRA), a knowledge service system based on our SciLit-LLM. SparkRA is accessible online and provides three primary functions: literature investigation, paper reading, and academic writing. As of July 30, 2024, SparkRA has garnered over 50,000 registered users, with a total usage count exceeding 1.3 million.
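To illustrate the retrieval-augmented pattern the abstract describes (grounding an LLM's answer in retrieved literature), here is a minimal sketch. All names in it (Paper, search_papers, build_prompt, the toy term-overlap retriever) are hypothetical placeholders for illustration, not the SparkRA or SciLit-LLM API.

```python
# Minimal sketch of retrieval-augmented literature QA, assuming a toy
# lexical retriever. Not the SparkRA implementation.
from dataclasses import dataclass


@dataclass
class Paper:
    title: str
    abstract: str


def search_papers(query: str, corpus: list[Paper], k: int = 3) -> list[Paper]:
    # Toy retriever: rank papers by how many query terms their abstracts share.
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(terms & set(p.abstract.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, papers: list[Paper]) -> str:
    # Pack retrieved evidence into the prompt so the LLM answers grounded
    # in the literature rather than from parametric memory alone.
    context = "\n\n".join(f"[{p.title}]\n{p.abstract}" for p in papers)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The resulting prompt would then be passed to the underlying LLM; the retrieval step is what distinguishes this from plain generation.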
2023
Distinguishability Calibration to In-Context Learning
Hongjing Li | Hanqi Yan | Yanran Li | Li Qian | Yulan He | Lin Gui
Findings of the Association for Computational Linguistics: EACL 2023
Recent years have witnessed increasing interest in prompt-based learning, in which models can be trained on only a few annotated instances, making them suitable for low-resource settings. This is especially challenging in fine-grained classification, as pre-trained language models tend to generate similar output embeddings, which makes it difficult for the prompt-based classifier to discriminate between classes. In this work, we alleviate this information diffusion issue by proposing a calibration method based on a transformation that rotates the embedding features into a new metric space, where we adapt the ratio of each dimension to a uniform distribution to guarantee the distinguishability of the learned embeddings. Furthermore, we take advantage of hyperbolic embeddings to capture the relations between dimensions via a coarse-to-fine metric learning strategy, enhancing interpretability. Extensive experiments on three datasets under various settings demonstrate the effectiveness of our approach.
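The core idea, rotating embeddings and then spreading each dimension toward a uniform distribution, can be sketched in a few lines. The version below uses a fixed random orthogonal rotation and an empirical-CDF (rank-based) uniformization; the paper learns its transformation, so treat this as a toy reconstruction of the calibration idea, not the authors' method.

```python
# Toy sketch of distinguishability calibration, assuming a random
# orthogonal rotation in place of the paper's learned transformation.
import numpy as np


def calibrate(embeddings: np.ndarray, seed: int = 0) -> np.ndarray:
    """Rotate (n, d) embeddings, then map each dimension to Uniform(0, 1)."""
    n, d = embeddings.shape
    rng = np.random.default_rng(seed)
    # Random orthogonal rotation via QR decomposition (stand-in only).
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    rotated = embeddings @ q
    # Empirical-CDF mapping: replace each value with its within-column rank,
    # so every dimension spreads evenly over (0, 1) and near-duplicate
    # embeddings become distinguishable.
    ranks = rotated.argsort(axis=0).argsort(axis=0) + 1
    return (ranks - 0.5) / n


if __name__ == "__main__":
    # Nearly collinear inputs come out well separated along every axis.
    x = 1.0 + 0.01 * np.random.default_rng(1).standard_normal((8, 4))
    print(calibrate(x))
```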