In this paper, we conduct a systematic study of lightweight LMMs from the aspects of model architecture, training strategy, and training data. The resulting Imp models steadily outperform existing lightweight LMMs of similar size on most benchmarks; in particular, Imp-3B even surpasses state-of-the-art LMMs at the 13B scale. This repository contains the official training/evaluation code of the Imp project, which aims to provide a family of highly capable yet efficient large multimodal models.
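For orientation, here is a hedged sketch of how an Imp checkpoint is commonly loaded for inference through the transformers remote-code path. The checkpoint id `MILVLG/imp-v1-3b`, the `image_preprocess` helper, and the prompt template are assumptions based on typical usage and may differ from the current model card; consult the repository for the authoritative example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

# Load an Imp checkpoint via remote code (checkpoint id and helper names are assumptions).
model = AutoModelForCausalLM.from_pretrained(
    "MILVLG/imp-v1-3b",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("MILVLG/imp-v1-3b", trust_remote_code=True)

# LLaVA-style prompt with an <image> placeholder for the visual tokens.
text = ("A chat between a curious user and an artificial intelligence assistant. "
        "USER: <image>\nWhat are the colors of the bus in the image? ASSISTANT:")
image = Image.open("bus.jpg")

input_ids = tokenizer(text, return_tensors="pt").input_ids
image_tensor = model.image_preprocess(image)  # assumed helper exposed by the remote code

output_ids = model.generate(
    input_ids, images=image_tensor, max_new_tokens=100, use_cache=True)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```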
Large multimodal models (LMMs) such as LLaVA have shown strong performance in visual-linguistic reasoning. These models first embed images into a sequence of visual tokens, which are then fed to the underlying LLM together with the text tokens.
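To make the visual-token idea concrete, below is a minimal PyTorch sketch of a LLaVA-style projector that maps frozen vision-encoder features into the LLM embedding space. The layer sizes (a 1024-dim vision encoder, a 2560-dim LLM, 196 image patches) are illustrative assumptions, not the exact Imp configuration.

```python
import torch
import torch.nn as nn

class VisualProjector(nn.Module):
    """Projects frozen vision-encoder patch features into the LLM embedding
    space so that an image becomes a short sequence of 'visual tokens'."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 2560):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim)
        return self.proj(patch_features)  # (batch, num_patches, llm_dim)

# Toy usage: 196 image patches become 196 visual tokens prepended to the text.
projector = VisualProjector()
image_feats = torch.randn(1, 196, 1024)   # output of a frozen vision encoder
text_embeds = torch.randn(1, 32, 2560)    # output of the LLM's token embedding layer
llm_inputs = torch.cat([projector(image_feats), text_embeds], dim=1)
print(llm_inputs.shape)  # torch.Size([1, 228, 2560])
```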
Imp: Highly Capable Large Multimodal Models for Mobile Devices. By harnessing the capabilities of large language models (LLMs), recent large multimodal models (LMMs) have shown remarkable versatility in open-world multimodal understanding. To fit the MLC framework for mobile devices, we further perform 4-bit quantization on Imp-v1.5-3B-196 to obtain Imp-v1.5-3B-196-q4f16_1-MLC.
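For intuition, here is a minimal sketch of group-wise 4-bit weight quantization with fp16 scales, which is conceptually what a q4f16_1-style scheme does. The group size and the layout used here are illustrative assumptions and do not reproduce MLC's actual weight packing or kernels.

```python
import torch

def quantize_q4_groupwise(w: torch.Tensor, group_size: int = 32):
    """Group-wise signed 4-bit quantization with one fp16 scale per group.
    Illustrative only; real q4f16_1 weights are bit-packed for fast kernels."""
    out_features, in_features = w.shape
    w = w.reshape(out_features, in_features // group_size, group_size)
    scale = w.abs().amax(dim=-1, keepdim=True) / 7.0          # map to the 4-bit range [-8, 7]
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return q, scale.to(torch.float16)

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return (q.float() * scale.float()).reshape(q.shape[0], -1)

w = torch.randn(64, 128)                # a toy fp32 weight matrix
q, scale = quantize_q4_groupwise(w)
w_hat = dequantize(q, scale)
print((w - w_hat).abs().max())          # worst-case quantization error
```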