InternLM2 Technical Report

Z Cai, M Cao, H Chen, K Chen, K Chen, X Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
… This paper introduces InternLM2, an open-source LLM that … The pre-training process of
InternLM2 is meticulously detailed, … In this paper, we introduce InternLM2, a new Large …

InternLM2.5-StepProver: Advancing Automated Theorem Proving via Expert Iteration on Large-Scale Lean Problems

Z Wu, S Huang, Z Zhou, H Ying, J Wang, D Lin… - arXiv preprint arXiv …, 2024 - arxiv.org
… In this paper, we introduce InternLM2.5-StepProver which improves its automated
theorem-proving ability via large-scale expert iteration and achieves state-of-the-art on multiple …

InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-Term Streaming Video and Audio Interactions

P Zhang, X Dong, Y Cao, Y Zang, R Qian, X Wei… - arXiv preprint arXiv …, 2024 - arxiv.org
Creating AI systems that can interact with environments over long periods, similar to human
cognition, has been a longstanding research goal. Recent advancements in multimodal …

InternLM-XComposer2: Mastering Free-Form Text-Image Composition and Comprehension in Vision-Language Large Model

X Dong, P Zhang, Y Zang, Y Cao, B Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
… Our model based on InternLM2-7B [77] not only significantly outperforms existing multimodal
models but also matches or even surpasses GPT-4V [58] and Gemini Pro [76] in certain …

InternLM-Law: An Open Source Chinese Legal Large Language Model

Z Fei, S Zhang, X Shen, D Zhu, X Wang, M Cao… - arXiv preprint arXiv …, 2024 - arxiv.org
… We employ InternLM2-Chat as our foundation model and perform a two-stage supervised
fine-tuning (SFT) to specialize it for the legal domain. The training pipeline is presented in Figure 2 …

InternLM2.5-StepProver: Advancing Automated Theorem Proving via Critic-Guided Search

Z Wu, S Huang, Z Zhou, H Ying, Z Yuan… - 2nd AI for Math … - openreview.net
… A comprehensive analysis of InternLM2.5-StepProver is conducted on several standard
formal benchmarks, in comparison with our previous model InternLM2-StepProver, as well as a …

PsyLite Technical Report

F Ding, R Zhang, X Feng, C Xie, Z Zhang… - arXiv preprint arXiv …, 2025 - arxiv.org
… This project is based on the base model InternLM2.5-7B-Chat and has developed a
lightweight psychological counseling large language model application with low hardware …

Xmodel-2 Technical Report

W Qun, L Yang, L Qingquan, Q Zhijiu, J Ling - arXiv preprint arXiv …, 2024 - arxiv.org
… This paper introduced Xmodel-2, a 1.2-billion-parameter model optimized for reasoning
tasks. By leveraging the maximal update parametrization (µP), Warmup-Stable-Decay (WSD) …

XL2Bench: A Benchmark for Extremely Long Context Understanding with Long-range Dependencies

X Ni, H Cai, X Wei, S Wang, D Yin, P Li - arXiv preprint arXiv:2404.05446, 2024 - arxiv.org
… In this subsection, we assess the performance of InternLM2-Chat-20B-200K, which utilizes
three distinct retrievers, on Law Reading scenarios. The results, illustrated in Table 5, indicate a …

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output

P Zhang, X Dong, Y Zang, Y Cao, R Qian… - arXiv preprint arXiv …, 2024 - arxiv.org
… 16 out of 28 benchmarks based on the InternLM2-7B [143] backend. As shown in Figure 1, the
performance of IXC2.5 matches or even surpasses proprietary APIs, e.g., GPT-4V [112] and …