Title: Nature vs. Nurture: Feature vs. Structure for Graph Neural Networks
Authors: Duong Chi Thang; Hoang Thanh Dat; Nguyen Thanh Tam; Jo, Jun; Nguyen Quoc Viet Hung; Aberer, Karl
Dates: 2022-08-15; 2022-07-01
DOI: 10.1016/j.patrec.2022.04.036
Handle: https://infoscience.epfl.ch/handle/20.500.14299/190068
Web of Science: WOS:000830083900007
Subjects: Computer Science, Artificial Intelligence; Computer Science
Keywords: graph neural networks; transferability
Type: research article (journal article)

Abstract: Graph neural networks (GNNs) take node features and graph structure as input to build representations for nodes and graphs. While much attention has been devoted to GNN models themselves, the impact of node features and graph structure on GNN performance has received less study. In this paper, we propose an explanation for the connection between features and structure: graphs can be constructed by connecting node features according to a latent function. While this hypothesis may seem trivial, it has several important implications. First, it allows us to define graph families, which we use to explain the transferability of GNN models. Second, it enables the application of GNNs to featureless graphs by reconstructing node features from graph structure. Third, it predicts the existence of a latent function that can create graphs which, when used with the original features in a GNN, outperform the original graphs for a specific task. We propose a graph generative model to learn such a function. Finally, our experiments confirm the hypothesis and these implications. (C) 2022 Elsevier B.V. All rights reserved.
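
The abstract's starting point, that a GNN consumes both node features and graph structure, can be illustrated with a minimal message-passing layer. This is a sketch in NumPy of a generic graph-convolution-style update, not the paper's model; the names (`gcn_layer`, `adj`, `weight`) are illustrative assumptions:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution-style layer: each node aggregates its
    neighbours' features (plus its own, via self-loops), normalised
    by degree, then applies a linear map and a ReLU."""
    a_hat = adj + np.eye(adj.shape[0])        # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric normalisation
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ features @ weight, 0.0)

# Toy 3-node path graph (structure) with 2-d node features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
w = np.eye(2)  # identity weights, so only aggregation changes x
h = gcn_layer(adj, x, w)
print(h.shape)  # (3, 2): one representation per node
```

Changing either input, the feature matrix `x` or the adjacency `adj`, changes the output representations, which is exactly the feature-versus-structure interplay the paper studies.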