Date of Award
Master of Science (MS)
Electrical and Computer Engineering
Yaz, Edwin E.
Data fusion has become an active research topic in recent years. Growing computational performance has allowed the use of redundant sensors to measure a single phenomenon. While Bayesian fusion approaches are common in general applications, the computer vision community has largely set this approach aside. Most object-following algorithms have moved toward pure machine-learning fusion techniques that tend to lack flexibility. Consequently, a more general data fusion scheme is needed. The motivation for this work is to propose methods that allow for the development of simple, cost-effective, yet robust visual following robots capable of tracking a general object with limited restrictions on target characteristics. With that purpose in mind, this work proposes a hierarchical adaptive Bayesian fusion approach that outperforms individual trackers by exploiting redundant measurements. The adaptive framework is achieved by relying on each measurement's local statistics and a global softened majority voting. Several approaches for robots that can follow targets have been proposed in recent years. However, many require several expensive sensors, and the majority of the image processing and other calculations are often performed independently. In the proposed approach, objects are detected by several state-of-the-art vision-based tracking algorithms, whose outputs are then filtered and fused within a Bayesian framework to generate the robot control commands. Target scale variations and, on one of the platforms, a time-of-flight (ToF) depth camera are used to determine the relative distance between the target and the robotic platforms. The algorithms execute in real time (approximately 30 fps). The proposed approaches were validated in a simulated application and on several robotic platforms: a stationary pan-tilt system, a small unmanned air vehicle, and a ground robot with a Jetson TK1 embedded computer.
Experiments were conducted with different target objects to validate the system in scenarios including occlusions and varying illumination conditions, and to show how the data fusion improves the overall robustness of the system.
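As a rough illustration of the kind of fusion the abstract describes (not the thesis's actual algorithm), redundant tracker estimates can be combined by inverse-variance weighting, with a softened majority vote that down-weights measurements far from the consensus. The function name `fuse_measurements` and the `vote_scale` parameter below are hypothetical, introduced only for this sketch:

```python
import statistics

def fuse_measurements(values, variances, vote_scale=10.0):
    """Fuse redundant 1-D tracker measurements (illustrative sketch).

    Each measurement is weighted by the inverse of its local variance
    (Bayesian fusion of independent Gaussian estimates), then softened
    by a majority vote: measurements far from the median consensus are
    exponentially down-weighted instead of hard-rejected.
    """
    median = statistics.median(values)
    # Robust spread estimate; fall back to 1.0 if all values agree.
    spread = statistics.median(abs(v - median) for v in values) or 1.0
    weights = []
    for v, var in zip(values, variances):
        bayes_w = 1.0 / var                                   # inverse-variance weight
        vote_w = 2.0 ** (-abs(v - median) / (vote_scale * spread))  # soft vote
        weights.append(bayes_w * vote_w)
    total = sum(weights)
    fused = sum(w * v for w, v in zip(weights, values)) / total
    # Variance of the fused Gaussian estimate (ignoring the vote term).
    fused_var = 1.0 / sum(1.0 / var for var in variances)
    return fused, fused_var

# Example: two agreeing trackers and one outlier; the outlier's
# influence is suppressed by the soft vote.
fused, fused_var = fuse_measurements([10.0, 10.2, 30.0], [1.0, 1.0, 1.0])
```

In a full system, per-tracker variances would come from each algorithm's local statistics and the fused estimate would feed a Bayesian filter rather than being used directly.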