From Forest to Zoo: Great Ape Behavior Recognition with Chimpbehave
This paper addresses the significant challenge of recognizing behaviors in non-human primates, focusing specifically on chimpanzees. Automated behavior recognition is crucial for both conservation efforts and the advancement of behavioral research, yet it is often hindered by the labor-intensive process of manual video annotation. Despite the availability of large-scale animal behavior datasets, applying machine learning models effectively across varied environmental settings remains a critical challenge due to the variability in data collection contexts and the specificity of annotations. In this paper, we introduce ChimpBehave, a novel dataset comprising over 2 hours and 20 minutes of video (approximately 215,000 frames) of zoo-housed chimpanzees, annotated with bounding boxes and fine-grained locomotive behavior labels. Uniquely, ChimpBehave aligns its behavior classes with those of PanAf, an existing dataset collected in distinct visual environments, enabling the study of cross-dataset generalization, in which models are trained on one dataset and tested on another with a differing data distribution. We benchmark ChimpBehave using state-of-the-art video-based and skeleton-based action recognition models, establishing performance baselines for both within-dataset and cross-dataset evaluations. Our results highlight the strengths and limitations of different model architectures, providing insights into the application of automated behavior recognition across diverse visual settings. The dataset, models, and code can be accessed at: https://github.com/MitchFuchs/ChimpBehave
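The within-dataset versus cross-dataset evaluation protocol described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's method: the nearest-centroid classifier, the synthetic 2-D features, and the `shift` parameter standing in for the domain gap between PanAf-style and ChimpBehave-style footage are all assumptions made for illustration.

```python
import random

random.seed(0)

# Hypothetical behavior classes, loosely echoing locomotive labels.
CLASSES = ["walking", "climbing", "sitting"]

def make_dataset(shift, n_per_class=50):
    """Synthetic 2-D features per class; `shift` mimics a domain gap
    between data collected in different visual environments."""
    data = []
    for ci, cls in enumerate(CLASSES):
        for _ in range(n_per_class):
            x = random.gauss(ci * 3.0 + shift, 0.5)
            y = random.gauss(ci * 3.0 + shift, 0.5)
            data.append(((x, y), cls))
    return data

def fit_centroids(data):
    """Toy 'model': mean feature vector per class (nearest centroid)."""
    sums, counts = {}, {}
    for (x, y), cls in data:
        sx, sy = sums.get(cls, (0.0, 0.0))
        sums[cls] = (sx + x, sy + y)
        counts[cls] = counts.get(cls, 0) + 1
    return {c: (sx / counts[c], sy / counts[c]) for c, (sx, sy) in sums.items()}

def accuracy(model, data):
    correct = 0
    for (x, y), cls in data:
        pred = min(model, key=lambda c: (x - model[c][0]) ** 2 + (y - model[c][1]) ** 2)
        correct += pred == cls
    return correct / len(data)

train = make_dataset(shift=0.0)       # stand-in for the source dataset
test_within = make_dataset(shift=0.0) # same distribution as training
test_cross = make_dataset(shift=2.0)  # shifted distribution (domain gap)

model = fit_centroids(train)
within = accuracy(model, test_within)
cross = accuracy(model, test_cross)
print(f"within-dataset accuracy: {within:.2f}")
print(f"cross-dataset accuracy:  {cross:.2f}")
```

Because the test distribution is shifted away from the training centroids, cross-dataset accuracy drops well below within-dataset accuracy, which is the generalization gap the benchmark in the paper is designed to measure (with real video/skeleton models rather than this toy).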
DOI: 10.1007/s11263-025-02484-6 (open access, CC BY)