Abstract

The COVID-19 pandemic has dramatically worsened the already difficult living conditions of blind and visually impaired individuals. Social distancing is the most effective strategy for limiting the spread of the virus, but it is extremely difficult for blind people to put into practice. Here we propose a deep-learning algorithm that recognizes people and other categories of objects in RGB-D images and locates them in space. The algorithm, based on Mask R-CNN, performs instance segmentation on the RGB images and uses the corresponding depth maps to extract the relative distance of the segmented instances. It was evaluated on the Salient Person dataset and the RGB-D Scenes Dataset v.2, and proved effective in segmenting and locating instances. This preliminary work could be a valuable starting point for developing assistive technology that helps visually impaired people maintain social distancing.
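To illustrate the second step described above, the sketch below shows one way the per-instance masks produced by a segmentation network could be combined with an aligned depth map to rank detected instances from nearest to farthest. It is a minimal example under stated assumptions, not the paper's implementation: the function name `rank_instances_by_distance`, the `min_valid_pixels` threshold, and the use of the median as the distance estimate are illustrative choices, and the masks are assumed to come from a Mask R-CNN model and to be registered with the depth frame.

```python
import numpy as np

def rank_instances_by_distance(masks, depth_map, min_valid_pixels=50):
    """Estimate the relative distance of each segmented instance.

    masks      : (N, H, W) boolean array of per-instance masks,
                 e.g. produced by a Mask R-CNN model (assumed input).
    depth_map  : (H, W) float array of depth values in metres,
                 aligned with the RGB frame (0 = missing reading).
    Returns a list of (instance_index, median_depth) sorted nearest first.
    """
    distances = []
    for i, mask in enumerate(masks):
        # Keep only depth readings that fall inside the mask and are valid.
        values = depth_map[mask]
        values = values[values > 0]
        if values.size < min_valid_pixels:
            continue  # too few readings to trust an estimate
        # The median is robust to noisy depth values near instance boundaries.
        distances.append((i, float(np.median(values))))
    return sorted(distances, key=lambda item: item[1])


if __name__ == "__main__":
    # Toy example: two synthetic "person" masks at different depths.
    depth = np.full((120, 160), 4.0)          # background at 4 m
    depth[40:100, 20:60] = 1.2                # nearer instance
    depth[30:90, 100:140] = 2.8               # farther instance
    masks = np.zeros((2, 120, 160), dtype=bool)
    masks[0, 40:100, 20:60] = True
    masks[1, 30:90, 100:140] = True
    print(rank_instances_by_distance(masks, depth))
    # -> [(0, 1.2), (1, 2.8)]: instance 0 is the closest
```

In an assistive setting, a ranking like this could be compared against a distance threshold (for example, the recommended social-distancing radius) to decide when to alert the user; the threshold and alerting logic are assumptions beyond what the abstract states.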

Details