This contribution presents a new database that addresses current challenges in face recognition. It contains face video sequences of 75 individuals, acquired either with a laptop webcam or in a setup mimicking the front-facing camera of a smartphone. Sequences were recorded with a device that captures visual, near-infrared, and depth data simultaneously. Recordings were made across three sessions with different, challenging illumination conditions and variations in pose. Together with the database, several experimental protocols are provided; they correspond to real-world scenarios in which a mismatch in conditions between enrollment and probe images occurs. A comprehensive set of experiments using publicly available baseline algorithms shows that extreme illumination conditions and pose variations remain open issues. However, using different data domains, and fusing them, helps mitigate such variations. Finally, experiments on heterogeneous face recognition are also presented using a state-of-the-art model based on deep neural networks, which achieves better performance; when applied to other tasks, this model also surpasses all existing baselines. The data, as well as the code to reproduce all experiments, are made publicly available to help foster research in selfie biometrics using the latest imaging devices.