On the right is the simulation environment (Gazebo) running our robot, which uses lidar for mapping and localization. My first task was to make the robot cover the entire map in under 5 minutes. Using frontier exploration, it would navigate toward the largest unexplored (grey) region in its map, with each candidate's score weighted by its distance to the robot so it wasn't crossing the whole map for small gains.
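As a rough illustration of that weighting, here is a minimal sketch of frontier scoring on an occupancy grid. The grid encoding (-1 unknown, 0 free, 100 occupied), the window size, and `distance_weight` are assumptions for the example, not values from the project.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 100  # assumed occupancy-grid encoding

def find_frontier_cells(grid):
    """Free cells adjacent to at least one unknown cell."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neighbors = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbors == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

def pick_goal(grid, robot_rc, distance_weight=0.5):
    """Score each frontier by nearby unknown area minus a distance
    penalty, so large-but-close regions win over distant ones."""
    best, best_score = None, -np.inf
    for (r, c) in find_frontier_cells(grid):
        window = grid[max(r - 5, 0):r + 6, max(c - 5, 0):c + 6]
        unknown_area = (window == UNKNOWN).sum()  # proxy for region size
        dist = np.hypot(r - robot_rc[0], c - robot_rc[1])
        score = unknown_area - distance_weight * dist
        if score > best_score:
            best, best_score = (r, c), score
    return best  # None when the map has no frontiers left
```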
Stage 2: Navigate
Once the map was complete, the robot was tasked with waiting at the starting point until a goal position was placed on the map. At that moment, it would plan a path using A* (and later RRT*) to the new location, then follow the generated path plan.
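For reference, a compact A* sketch on a 2D grid. This is the textbook algorithm with a Manhattan heuristic and 4-connected moves, not the project's exact planner, and the 0 = free / 1 = blocked encoding is assumed for the example.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D grid; Manhattan distance is admissible for
    4-connected, unit-cost moves."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start)]  # (f, g, cell)
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]  # walk parents back to the start
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue  # stale queue entry, a cheaper route was found
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0):
                ng = cost + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable
```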
Stage 3: Avoid
Moving trash cans were then added to the environment, and the robot had to avoid contact while navigating to goal points that crossed their trajectories. My solution relied primarily on computer vision: identifying the cans through color masks and using the size of the resulting blob to estimate a can's position relative to the robot. If a can was too close, the robot would back up. If a detected can was on a direct collision course, the robot would divert left or right depending on its alignment with the walls, continue off course until the trash can had passed, then return to its normal path plan.
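A sketch of that detection step using OpenCV (version 4 API assumed): the HSV bounds and the pixel-area threshold standing in for "too close" are hypothetical placeholders here, not the tuned values from the project.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for the cans' color; in practice these
# would be tuned against the simulated textures.
LOWER = np.array([100, 120, 50])
UPPER = np.array([130, 255, 255])
TOO_CLOSE_AREA = 8000  # pixel-area threshold, tuned empirically

def detect_can(bgr_frame):
    """Return (action, blob_area) from the largest color blob.
    A bigger blob fills more of the image, so the can is closer."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "clear", 0
    blob = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(blob)
    if area > TOO_CLOSE_AREA:
        return "back_up", area
    # The blob centroid tells us which side of the frame the can
    # occupies, so the robot diverts the opposite way.
    m = cv2.moments(blob)
    cx = m["m10"] / m["m00"] if m["m00"] else bgr_frame.shape[1] / 2
    action = "divert_right" if cx < bgr_frame.shape[1] / 2 else "divert_left"
    return action, area
```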