

Mobile robots are gradually appearing in our daily environments. To navigate autonomously in real-world environments and interact with objects and humans, robots face several major technological challenges. Among the key competences required of such robots is the ability to perceive the environment and reason about it in order to plan appropriate actions. However, sensory information perceived in real-world situations is error-prone and incomplete, and thus often leads to ambiguous interpretations. This work proposes a new approach to object recognition that combines visual and range information with the spatial arrangement between objects (context information). It is based on Bayesian networks, which fuse the available information and infer from it probabilistically. The proposed framework first extracts potential objects from the scene image using simple features such as color or the height-to-width ratio. This basic information is easy to extract, but often leads to ambiguity between similar objects. To disambiguate the detected objects, the relative spatial arrangement (context information) of the objects is used in a second step. Consider, for example, a Coke can and a red trash-can: both are cylindrical, have similar height-to-width ratios, and are very similar in color. Depending on their distance from the robot, they can therefore often hardly be distinguished. However, if we further consider their spatial arrangement with respect to other objects, e.g. a table, they become clearly distinguishable: a Coke can typically stands on a table, whereas a trash-can stands on the floor. Such contextual information is therefore a very efficient way to drastically increase the reliability of object recognition and scene interpretation. Moreover, range information from a laser scanner and speech recognition provide complementary information that further improves reliability.
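The Coke-can/trash-can example can be sketched as a simple probabilistic fusion. The following is a minimal illustration, not the thesis implementation: all probabilities are invented for the example, appearance evidence and the supporting-surface relation ("on table" vs. "on floor") are treated as conditionally independent, and the two-object hypothesis space stands in for a full Bayesian network.

```python
# Illustrative sketch of context-based disambiguation. Appearance alone
# (a red cylinder) cannot separate a Coke can from a red trash-can; the
# spatial relation to a table resolves the ambiguity.

# Assumed (illustrative) probabilities, not values from the thesis.
prior = {"coke_can": 0.5, "trash_can": 0.5}

# P(appearance = red cylinder | object): nearly identical -> ambiguous.
p_appearance = {"coke_can": 0.90, "trash_can": 0.85}

# P(standing on a table | object): the contextual cue that separates them.
p_on_table = {"coke_can": 0.80, "trash_can": 0.05}

def posterior(evidence):
    """Fuse independent evidence terms per object class (naive Bayes)."""
    scores = {}
    for obj in prior:
        p = prior[obj]
        for likelihood in evidence:
            p *= likelihood[obj]
        scores[obj] = p
    z = sum(scores.values())  # normalize over the hypothesis space
    return {obj: p / z for obj, p in scores.items()}

# Appearance only: the posterior stays close to 50/50.
print(posterior([p_appearance]))
# Appearance plus spatial context: the Coke can clearly dominates.
print(posterior([p_appearance, p_on_table]))
```

With these numbers, appearance alone yields roughly a 51/49 split, while adding the on-table relation pushes the Coke-can posterior above 0.9, mirroring the disambiguation argument above.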
In addition, an approach that recognizes places (corridors, crossings, rooms, doors, etc.) from laser range data using Bayesian programming is developed, serving both topological navigation in a typical indoor environment and object recognition. The proposed approach is validated through several real-world experiments. The characteristics and typical spatial arrangements of the objects in various test scenarios are first used to train a Bayesian network on a series of example images (learning) and then verified on the test images. By fusing the object probabilities extracted from images and range data with the spatial relations among the objects, ambiguities are reliably resolved and the reliability of detection is drastically increased. The results demonstrate the validity and performance of the proposed Bayesian approach, which combines context information with simple object recognition.
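Place recognition from laser range data can likewise be sketched as Bayesian classification over simple scan features. The sketch below is a hypothetical stand-in for the Bayesian-programming approach: the feature set (mean range and range variance of one scan), the Gaussian likelihood models, and all parameter values are invented for illustration, as if they had been estimated during a learning phase.

```python
# Illustrative Bayesian place recognition from laser range features.
# Each place class is modelled by Gaussian likelihoods over two simple
# scan statistics; parameters are hypothetical "learned" values.
import math

# (mu, sigma) per feature and place class -- assumed training output.
models = {
    "corridor": {"mean_range": (2.0, 0.5), "range_var": (4.0, 1.0)},
    "room":     {"mean_range": (3.5, 0.8), "range_var": (1.0, 0.5)},
    "doorway":  {"mean_range": (1.0, 0.3), "range_var": (6.0, 1.5)},
}

def gaussian(x, mu, sigma):
    """Gaussian probability density N(x; mu, sigma)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(features):
    """Posterior over places for one scan, assuming a uniform prior and
    conditionally independent features."""
    scores = {}
    for place, params in models.items():
        p = 1.0
        for name, value in features.items():
            mu, sigma = params[name]
            p *= gaussian(value, mu, sigma)
        scores[place] = p
    z = sum(scores.values())  # normalize
    return {place: p / z for place, p in scores.items()}

# A scan with moderate mean range but high range variance (long free
# space ahead, walls close by on the sides) should favor "corridor".
print(classify({"mean_range": 2.1, "range_var": 3.8}))
```

The same posterior could then serve both purposes named above: as a place label for topological navigation, and as a further context variable (e.g. "this object was seen in a corridor") feeding the object-recognition network.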