Title: Monotonicity Reasoning in the Age of Neural Foundation Models
Authors: Chen, Zeming; Gao, Qiyue
Published: 2023-11-15
Available: 2024-02-19
DOI: 10.1007/s10849-023-09411-3
URL: https://infoscience.epfl.ch/handle/20.500.14299/204214
Web of Science ID: WOS:001101203000002
Type: text :: journal :: journal article :: research article
Keywords: Technology; Monotonicity; Natural Language Inference; Neural Language Model; Neural Symbolic Inference

Abstract: Recent advances in large language models (LLMs) demonstrate that these large-scale foundation models achieve remarkable capabilities across a wide range of language tasks and domains. The success of the statistical learning approach challenges our understanding of traditional symbolic and logical reasoning. The first part of this paper surveys several works on the progress of monotonicity reasoning with neural networks and deep learning. We present different methods for solving the monotonicity reasoning task using neural and symbolic approaches, and discuss their advantages and limitations. The second part of this paper analyzes the capability of large-scale general-purpose language models to reason with monotonicity.
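
The monotonicity reasoning task mentioned in the abstract can be illustrated with a minimal sketch. This is not code from the paper: the hypernym table, function names, and polarity labels below are toy assumptions chosen to show the core idea — that in an upward-monotone context (e.g., under "some") a word may be replaced by a more general one while preserving entailment, whereas in a downward-monotone context (e.g., under "no") it may be replaced by a more specific one.

```python
# Illustrative sketch of monotonicity-based entailment via lexical
# substitution under polarity marking. The hypernym table and polarity
# labels are toy assumptions, not part of the paper's method.

# Toy hypernym chain: child -> parent (child is subsumed by parent)
HYPERNYMS = {"poodle": "dog", "dog": "animal"}

def is_hyponym(a: str, b: str) -> bool:
    """True if a is b, or a is below b in the toy hypernym chain."""
    while True:
        if a == b:
            return True
        if a not in HYPERNYMS:
            return False
        a = HYPERNYMS[a]

def substitution_entails(polarity: str, old: str, new: str) -> bool:
    """Does replacing `old` with `new` preserve truth, given the
    monotonicity polarity of the position being edited?"""
    if polarity == "up":      # upward monotone: generalizing is safe
        return is_hyponym(old, new)
    if polarity == "down":    # downward monotone: specializing is safe
        return is_hyponym(new, old)
    return old == new         # non-monotone: only identity is safe

# "Some dogs bark" entails "Some animals bark" ('some' is upward monotone)
print(substitution_entails("up", "dog", "animal"))    # True
# "No animals bark" entails "No dogs bark" ('no' is downward monotone)
print(substitution_entails("down", "animal", "dog"))  # True
# "Some animals bark" does NOT entail "Some dogs bark"
print(substitution_entails("up", "animal", "dog"))    # False
```

A symbolic system can apply such substitution rules directly, while the neural approaches surveyed in the paper must learn the polarity-sensitive behavior from data — which is precisely what makes monotonicity a useful probe of LLM reasoning.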