Abstract

Autonomous mobile robots equipped with arms have the potential to be used for the automated construction of structures of various sizes and shapes, such as houses or other infrastructure. Existing construction processes, like many other additive manufacturing processes, are mostly based on precise positioning, achieved by machines that have a fixed mechanical link with the construction and therefore rely on absolute positioning. Mobile robots, by nature, do not have a fixed reference point, and their positioning systems are not as accurate as those of fixed-base systems. Mobile robots must therefore employ new technologies and/or methods to implement precise construction processes. In contrast to most prior work on autonomous construction, which has relied either on external tracking systems (e.g., GPS) or exclusively on short-range relative localization (e.g., stigmergy), this paper explores localization methods that combine long-range self-positioning with short-range relative localization, enabling robots to construct precise, separated artifacts in situations where external support is not an option, such as outer space or indoor environments. Achieving both precision and autonomy in construction tasks requires understanding the environment and physically interacting with it. Consequently, we evaluate the robot's key capabilities of navigation and manipulation and analyze their impact on a predefined construction. In this paper, we focus on the precision of the autonomous construction of separated artifacts. This domain motivates us to combine two methods for the construction: 1) a self-positioning system and 2) short-range relative localization. We evaluate our approach on a miniature mobile robot that autonomously maps an environment using a simultaneous localization and mapping (SLAM) algorithm; the robot's objective is then to manipulate blocks to build desired artifacts based on a plan given by a human. Our results illuminate practical issues for future applications that must integrate complex tasks under the constraints of mobile robots.
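
The abstract describes combining long-range self-positioning (the SLAM estimate) with short-range relative localization of the target. The sketch below is a minimal illustration of that two-regime idea, not the paper's implementation: it assumes a planar (SE(2)) pose, a hypothetical short-range sensor that reports the target block's position in the robot frame, and an illustrative range threshold.

```python
# Minimal sketch (assumed names and parameters): approach the build site with
# the SLAM-derived pose, then switch to a short-range relative observation of
# the target block once it is within sensor range.

import math
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Pose2D:
    """Planar robot pose (x, y, heading) in the world frame."""
    x: float
    y: float
    theta: float

    def compose(self, dx: float, dy: float) -> Tuple[float, float]:
        """Transform a point from this pose's local frame into the world frame."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        return (self.x + c * dx - s * dy, self.y + s * dx + c * dy)


def locate_target(
    slam_pose: Pose2D,
    map_target_xy: Tuple[float, float],
    relative_obs: Optional[Tuple[float, float]],
    short_range_limit: float = 0.15,  # assumed sensor range in metres
) -> Tuple[float, float]:
    """Return the target position in the world frame.

    Far from the target, only the map position (tied to the long-range SLAM
    estimate) is available. Once the short-range sensor observes the target
    directly, its relative measurement (in the robot frame) is composed with
    the SLAM pose and preferred, since it is locally more precise.
    """
    if relative_obs is not None:
        rng = math.hypot(*relative_obs)
        if rng <= short_range_limit:
            return slam_pose.compose(*relative_obs)
    return map_target_xy


# Example: the robot believes it is at (1.0, 0.5) facing +x; its short-range
# sensor sees the block 0.10 m ahead and 0.02 m to the left.
robot = Pose2D(x=1.0, y=0.5, theta=0.0)
print(locate_target(robot, map_target_xy=(1.12, 0.51), relative_obs=(0.10, 0.02)))
```

A real system would fuse rather than hard-switch between the two estimates (e.g., weighting by measurement covariance), but the switch makes the division of roles between the two localization methods explicit.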
