This paper develops a Versatile and Honest vision language Model (VHM) for remote sensing image analysis. VHM is built on a large-scale remote sensing image-text dataset with rich-content captions (VersaD), and an honest instruction dataset comprising both factual and deceptive questions (HnstD). Unlike prevailing remote sensing image-text datasets, whose captions focus on a few prominent objects and their relationships, VersaD captions provide detailed information about image properties, object attributes, and the overall scene. This comprehensive captioning enables VHM to thoroughly understand remote sensing images and perform diverse remote sensing tasks. Moreover, unlike existing remote sensing instruction datasets that include only factual questions, HnstD contains additional deceptive questions stemming from the non-existence of objects. This feature prevents VHM from producing affirmative answers to nonsense queries, thereby ensuring its honesty. In our experiments, VHM significantly outperforms various vision language models on common tasks of scene classification, visual question answering, and visual grounding. Additionally, VHM achieves competent performance on several unexplored tasks, such as building vectorization, multi-label classification, and honest question answering.
We used the gemini-1.0-pro-vision API to generate rich-content descriptions for images from multiple public remote sensing (RS) datasets, thereby obtaining a dataset of image-text pairs (VersaD) that serves as the pre-training data for RS vision language models (RSVLMs).
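The caption-generation step can be sketched as follows. This is a minimal illustration assuming the google-generativeai Python SDK; the prompt text and the `caption_image` helper are placeholders, not the authors' exact implementation.

```python
# Illustrative sketch of rich-content caption generation with Gemini.
# The prompt and file paths below are hypothetical placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.0-pro-vision")

PROMPT = (
    "Describe this remote sensing image in detail, covering image properties, "
    "object attributes (color, shape, quantity, position), and the overall scene."
)

def caption_image(image_path: str) -> str:
    """Generate one rich-content caption for a single RS image."""
    image = Image.open(image_path).convert("RGB")
    response = model.generate_content([PROMPT, image])
    return response.text
```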
The training instructions used during the Supervised Fine-Tuning (SFT) stage comprise four parts, each with a specified quantity: the VersaD-Instruct dataset (30k), the HnstD dataset (44k), the RS-Specialized-Instruct dataset (29.8k), and the RS-ClsQaGrd-Instruct dataset (78k), totaling approximately 180k instructions.
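A simple way to assemble this mixture is sketched below, assuming each subset is stored as a LLaVA-style JSON list of conversations; the file names are hypothetical placeholders.

```python
# Hypothetical sketch: concatenate and shuffle the four SFT instruction subsets.
import json
import random

SFT_SUBSETS = [
    "versad_instruct_30k.json",            # VersaD-Instruct (30k)
    "hnstd_44k.json",                      # HnstD (44k)
    "rs_specialized_instruct_29p8k.json",  # RS-Specialized-Instruct (29.8k)
    "rs_clsqagrd_instruct_78k.json",       # RS-ClsQaGrd-Instruct (78k)
]

def build_sft_mixture(paths=SFT_SUBSETS, seed=42):
    """Merge the instruction subsets into one shuffled SFT training list (~180k samples)."""
    samples = []
    for path in paths:
        with open(path, "r") as f:
            samples.extend(json.load(f))
    random.Random(seed).shuffle(samples)
    return samples

if __name__ == "__main__":
    print(f"Total SFT samples: {len(build_sft_mixture())}")
```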
We adopted the LLaVA model and continued training it to obtain VHM. The model consists of three main components: (1) a pretrained vision encoder based on CLIP-Large, with an input resolution of 336 × 336 and a patch size of 14, which converts each input image into 576 visual tokens; (2) an LLM based on the open-source Vicuna-v1.5, derived from LLaMA-2 (we use the 7B version throughout this paper); and (3) a projector, a two-layer multilayer perceptron that connects the vision encoder to the LLM.
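The following sketch shows how these three components fit together, using Hugging Face transformers and PyTorch. The model identifiers and the `llm_dim` value follow the public LLaVA-1.5 recipe and are assumptions, not the authors' released checkpoints.

```python
# Minimal LLaVA-style architecture sketch: CLIP-L/336 encoder -> 2-layer MLP projector -> LLM.
import torch
import torch.nn as nn
from transformers import CLIPVisionModel, CLIPImageProcessor

vision_tower = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

# 336 / 14 = 24 patches per side -> 24 * 24 = 576 visual tokens per image.
vision_dim = vision_tower.config.hidden_size  # 1024 for CLIP ViT-L/14
llm_dim = 4096                                # assumed hidden size of Vicuna-7B-v1.5

# Two-layer MLP projector connecting the vision encoder to the LLM embedding space.
projector = nn.Sequential(
    nn.Linear(vision_dim, llm_dim),
    nn.GELU(),
    nn.Linear(llm_dim, llm_dim),
)

@torch.no_grad()
def encode_image(pil_image):
    """Return the 576 projected visual tokens for one image, shape (1, 576, llm_dim)."""
    pixels = processor(images=pil_image, return_tensors="pt").pixel_values
    feats = vision_tower(pixels).last_hidden_state[:, 1:, :]  # drop the CLS token
    return projector(feats)
```

The projected visual tokens are then prepended to the text token embeddings of the LLM, following the standard LLaVA design.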
@misc{pang2024vhmversatilehonestvision,
title={VHM: Versatile and Honest Vision Language Model for Remote Sensing Image Analysis},
author={Chao Pang and Xingxing Weng and Jiang Wu and Jiayu Li and Yi Liu and Jiaxing Sun and Weijia Li and Shuai Wang and Litong Feng and Gui-Song Xia and Conghui He},
year={2024},
eprint={2403.20213},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2403.20213},
}