Ground-Air Robot Cooperative Tracking of an Adversarial Agent
From nursing homes to war zones, losing track of a person's location can have serious consequences; keeping a close watch on someone's whereabouts is invaluable, and sometimes necessary. Security cameras or even ground robots would be ideal, but a lack of control over the environment in which people must be located often rules out fixed cameras. Further, when the environment is unknown or unpredictable, no prior knowledge of the layout is available, making it extremely challenging for a single ground robot to simultaneously navigate and track an agent.

To address these constraints, I created a multi-robot system composed of a ground robot capable of image-based object recognition and an agile aerial robot with rudimentary navigation and sensing capabilities. The ground robot uses its object recognition capabilities to locate and begin tracking an agent, in our case a human, and then spatially tasks the aerial robot to move within close proximity of the human and take over tracking.

In implementing this solution, I addressed multiple challenges in identifying and tracking a human from both the ground robot's and the aerial robot's perspectives. So far I have addressed the ground robot identifying and tracking a human and spatially tasking the aerial robot toward the human; the remaining challenge is for the aerial robot to track the human using only its rudimentary navigation and sensing capabilities. Broader implications include fusing the ground and aerial robots' sensor measurements to obtain a more accurate estimate of the human's true position, and handling scenarios in which the human moves to a location the ground robot cannot feasibly reach while the aerial robot maintains tracking.
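The spatial-tasking step described above can be sketched as follows. This is a minimal illustrative example, not the system's actual implementation: it assumes the ground robot's detector yields a bearing and range to the human in the robot's own frame, and that the aerial robot accepts a world-frame goal position. All function names, parameters, and the standoff/altitude values are hypothetical.

```python
import math

def task_aerial_robot(ground_pose, bearing, distance, standoff=1.5, altitude=2.0):
    """Convert the ground robot's detection of the human into a world-frame
    goal for the aerial robot.

    ground_pose: (x, y, heading) of the ground robot in the world frame (m, m, rad).
    bearing:     angle to the human relative to the ground robot's heading (rad).
    distance:    estimated range from the ground robot to the human (m).
    standoff:    how close the aerial robot should hover to the human (m).
    altitude:    commanded flight altitude (m).
    Returns (goal_x, goal_y, goal_z) for the aerial robot.
    """
    gx, gy, heading = ground_pose
    # Project the detection into world coordinates.
    hx = gx + distance * math.cos(heading + bearing)
    hy = gy + distance * math.sin(heading + bearing)
    # Command a hover point `standoff` meters short of the human,
    # along the line of approach from the ground robot.
    approach = math.atan2(hy - gy, hx - gx)
    goal_x = hx - standoff * math.cos(approach)
    goal_y = hy - standoff * math.sin(approach)
    return (goal_x, goal_y, altitude)

# Example: ground robot at the origin facing +x detects the human
# dead ahead at 10 m; the aerial robot is sent to hover 1.5 m short.
goal = task_aerial_robot((0.0, 0.0, 0.0), bearing=0.0, distance=10.0)
```

In this sketch the goal comes out to roughly (8.5, 0.0, 2.0). A real system would additionally account for detection uncertainty and replan the goal as the human moves.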