Who's Doing What: Joint Modeling of Names and Verbs for Simultaneous Face and Pose Annotation

Given a corpus of news items consisting of images accompanied by text captions, we want to find out “who’s doing what”, i.e. associate names and action verbs in the captions to the face and body pose of the persons in the images. We present a joint model for simultaneously solving the image-caption correspondences and learning visual appearance models for the face and pose classes occurring in the corpus. These models can then be used to recognize people and actions in novel images without captions. We demonstrate experimentally that our joint ‘face and pose’ model solves the correspondence problem better than earlier models covering only the face, and that it can perform recognition of new uncaptioned images.

Presented at:
NIPS Foundation - Advances in Neural Information Processing Systems 22 (NIPS09), Vancouver, B.C., Canada
MIT Press

Record created 2010-02-11, last modified 2018-01-28
