CRoW: Benchmarking Commonsense Reasoning in Real-World Tasks
Recent efforts in natural language processing (NLP) commonsense reasoning research have yielded a considerable number of new datasets and benchmarks. However, most of these datasets formulate commonsense reasoning challenges in artificial scenarios that are not reflective of the tasks that real-world NLP systems are designed to solve. In this work, we present CRoW, a manually curated, multi-task benchmark that evaluates the ability of models to apply commonsense reasoning in the context of six real-world NLP tasks. CRoW is constructed using a multi-stage data collection pipeline that rewrites examples from existing datasets using commonsense-violating perturbations. We use CRoW to study how NLP systems perform across dimensions of commonsense knowledge, such as physical, temporal, and social reasoning. We find a significant performance gap when NLP systems are evaluated on CRoW compared to humans, showing that commonsense reasoning is far from solved in real-world task settings. We make our dataset and leaderboard available to the research community.
Affiliations: EPFL (École Polytechnique Fédérale de Lausanne); DeepMind (United Kingdom)
Year: 2023
Place: Singapore
ISBN: 979-8-89176-060-8
Pages: 9785–9821
| Event name | Event acronym | Event place | Event date |
| Conference on Empirical Methods in Natural Language Processing | EMNLP 2023 | Singapore | 2023-12-06 – 2023-12-10 |
Funders: Swiss National Science Foundation; Innosuisse – Swiss Innovation Agency; EPFL Science Seed Fund