Title: Composite Relationship Fields with Transformers for Scene Graph Generation
Authors: Adaimi, George; Mizrahi, David; Alahi, Alexandre
Date: 2022-10-31
Publication year: 2023
DOI: 10.1109/WACV56688.2023.00014
Handle: https://infoscience.epfl.ch/handle/20.500.14299/191701
Type: text::conference output::conference proceedings::conference paper

Abstract: Scene graph generation (SGG) methods extract relationships between objects. While most methods focus on improving top-down approaches, which build a scene graph based on detected objects from an off-the-shelf object detector, there is a limited amount of work on bottom-up approaches, which jointly detect objects and their relationships in a single stage. In this work, we present a novel bottom-up SGG approach by representing relationships using Composite Relationship Fields (CoRF). CoRF turns relationship detection into a dense regression and classification task, where each cell of the output feature map identifies surrounding objects and their relationships. Furthermore, we propose a refinement head that leverages Transformers for global scene reasoning, resulting in more meaningful relationship predictions. By combining both contributions, our method outperforms previous bottom-up methods on the Visual Genome dataset by 26% while preserving real-time performance.

Keywords: Scene Graph Generation; Scene Understanding; Visual Relationship Detection; Object Detection; Computer Vision
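The record does not include code. As a rough illustration of the kind of architecture the abstract describes, a dense per-cell relationship head combined with a Transformer refinement stage for global scene reasoning, here is a minimal PyTorch sketch. All module names, tensor shapes, and hyperparameters (CoRFHeadSketch, num_predicates, d_model, the offset parameterization) are assumptions for illustration only, not the authors' implementation of CoRF.

```python
# Minimal sketch (not the paper's code): each cell of a feature map predicts a
# predicate class plus regressed offsets toward the subject and object it relates,
# after a Transformer encoder refines the cells with global context.
import torch
import torch.nn as nn


class CoRFHeadSketch(nn.Module):
    """Hypothetical per-cell relationship-field head; names and shapes are assumptions."""

    def __init__(self, in_channels=256, num_predicates=50, d_model=256, num_layers=2):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, d_model, kernel_size=1)
        # Transformer encoder over all cells: global scene reasoning before prediction.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, dim_feedforward=4 * d_model, batch_first=True
        )
        self.refine = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Per-cell predicate classification (+1 for a background / "no relation" class).
        self.cls_head = nn.Conv2d(d_model, num_predicates + 1, kernel_size=1)
        # Per-cell regression: (dx, dy) offsets to the subject and object centers.
        self.reg_head = nn.Conv2d(d_model, 4, kernel_size=1)

    def forward(self, feats):
        # feats: (B, C, H, W) feature map from a backbone / FPN level.
        b, _, h, w = feats.shape
        x = self.proj(feats)                       # (B, d_model, H, W)
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, d_model): one token per cell
        tokens = self.refine(tokens)               # refine cells with global context
        x = tokens.transpose(1, 2).reshape(b, -1, h, w)
        cls_logits = self.cls_head(x)              # (B, P+1, H, W) predicate scores per cell
        reg_offsets = self.reg_head(x)             # (B, 4, H, W) subject/object center offsets
        return cls_logits, reg_offsets


if __name__ == "__main__":
    head = CoRFHeadSketch()
    dummy = torch.randn(2, 256, 32, 32)            # stand-in for backbone features
    cls_logits, reg_offsets = head(dummy)
    print(cls_logits.shape, reg_offsets.shape)     # (2, 51, 32, 32) and (2, 4, 32, 32)
```

A full single-stage pipeline would additionally decode these dense outputs, grouping each cell's predicate score with the object detections its regressed offsets point to, into (subject, predicate, object) triplets; that decoding step is omitted here.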