Authors: Chappuis, Christel; Zermatten, Valérie; Lobry, Sylvain; Le Saux, Bertrand; Tuia, Devis
Deposited: 2023-01-30
Published: 2022-06
DOI: 10.1109/CVPRW56347.2022.00143
URL: https://infoscience.epfl.ch/handle/20.500.14299/194539

Abstract: Remote sensing visual question answering (RSVQA) was recently proposed to interface natural language and vision, easing access to the information contained in Earth Observation data for a wide audience through simple questions in natural language. The traditional vision/language interface is an embedding obtained by fusing features from two deep models, one processing the image and the other the question. Despite the success of early VQA models, it remains difficult to control the adequacy of the visual information extracted by the visual deep model, which should act as a context regularizing the work of the language model. We propose to extract this context information with a visual model, convert it to text, and inject it, i.e. prompt it, into a language model. The language model is therefore responsible for processing the question together with the visual context and for extracting the features used to find the answer. We study the effect of prompting with respect to a black-box visual extractor and discuss the importance of training a visual model that produces an accurate context.

Title: Prompt-RSVQA: Prompting visual context to a language model for Remote Sensing Visual Question Answering
Type: text::conference output::conference proceedings::conference paper
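The pipeline summarized in the abstract (visual model → textual context → language model) can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the label set and the helper names are hypothetical, and the language model itself is left out.

```python
# Sketch of the prompting step described in the abstract: a visual model's
# predictions are converted to text and prepended to the question before the
# combined prompt is handed to a language model. All names here are
# hypothetical; the actual Prompt-RSVQA architecture is defined in the paper.

def visual_context_to_text(predicted_labels):
    """Convert the visual model's predicted labels into a textual context."""
    return "This image contains: " + ", ".join(predicted_labels) + "."

def build_prompt(question, predicted_labels):
    """Concatenate the textual visual context with the question."""
    return visual_context_to_text(predicted_labels) + " " + question

# Example: suppose the visual model predicted these land-cover classes.
labels = ["buildings", "roads", "trees"]
prompt = build_prompt("Are there any roads in this image?", labels)
print(prompt)
```

The resulting string is what would be fed to the language model, which then processes the question with the visual context to predict an answer.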