Vision Recognition Using Shape Context for Autonomous Underwater Sampling

Title: Vision Recognition Using Shape Context for Autonomous Underwater Sampling
Publication Type: Conference Paper
Year of Publication: 2012
Authors: McBryan, K. M., and D. L. Akin
Conference Name: IEEE AUV2012 Conference
Publisher: IEEE
Conference Location: Southampton, UK
Abstract

The ocean floor is one of the few remaining unexplored places on the planet. Underwater vehicles, both teleoperated and autonomous, have been built to image the ocean floor. The depth a teleoperated vehicle can reach is limited by its tether, whereas autonomous vehicles can study the deepest parts of the ocean without a complex tether system. While these vehicles excel at mapping the ocean floor, they cannot yet retrieve samples autonomously. To retrieve a sample, the vehicle must know what the target object looks like, correctly identify new instances of it, estimate its pose so the manipulator can grasp it, and determine its coordinates in 3D space. Color filtering, shape context, and stereo vision have been used to autonomously locate, identify, and estimate the pose of objects. Color filtering removes pixels whose color differs from the target's, so that only objects of similar color remain and extraneous information can be disregarded.
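
A minimal sketch of such a color-filtering step is shown below, assuming an OpenCV/Python implementation; the HSV bounds are placeholder values rather than the thresholds used in the paper.

```python
# Hypothetical color-filtering sketch (OpenCV); the HSV bounds are placeholders.
import cv2
import numpy as np

def color_filter(image_bgr, lower_hsv=(20, 80, 80), upper_hsv=(35, 255, 255)):
    """Keep only pixels whose color is close to the target's, discarding the rest."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Morphological opening suppresses speckle so only coherent color blobs remain.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
```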


Shape context matches the shape of each potential target, as defined by its edge pixels, to a known object. A cost function determines whether the potential target matches the known object, accounting for the 'bending energy' required to deform the potential target's shape onto that of the known object. This yields a metric of match quality between the potential target and the known object, and the matching is performed in both the left and right camera images. Once the object has been identified in each image, the stereo calibration parameters can be used to recover its 3D position, allowing a manipulator on an underwater vehicle to autonomously sample targets.
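
A minimal sketch of a shape-context matching cost follows, assuming edge pixels have already been extracted (for example with a Canny detector); the bin counts and the Hungarian assignment solver are illustrative choices, and the thin-plate-spline bending-energy term described above is omitted for brevity.

```python
# Illustrative shape-context descriptor and chi-squared match cost (not the
# paper's implementation); edge points are assumed to be Nx2 arrays.
import numpy as np
from scipy.optimize import linear_sum_assignment

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar histogram of relative point positions for each edge point."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diff, axis=2)
    theta = np.arctan2(diff[..., 1], diff[..., 0])
    d = d / d[d > 0].mean()                           # normalize for scale invariance
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r)
    hists = np.zeros((len(points), n_r, n_theta))
    for i in range(len(points)):
        for j in range(len(points)):
            if i == j:
                continue
            r_idx = np.searchsorted(r_edges, d[i, j])
            if r_idx >= n_r:
                continue                              # too far away to count
            t_idx = int((theta[i, j] + np.pi) / (2 * np.pi) * n_theta) % n_theta
            hists[i, r_idx, t_idx] += 1
    return hists.reshape(len(points), -1)

def match_cost(points_a, points_b):
    """Chi-squared cost between descriptors, averaged over the best 1-to-1 matching."""
    ha, hb = shape_context(points_a), shape_context(points_b)
    num = (ha[:, None, :] - hb[None, :, :]) ** 2
    den = ha[:, None, :] + hb[None, :, :] + 1e-9
    cost = 0.5 * (num / den).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)          # Hungarian assignment
    return cost[rows, cols].mean()
```

A low match_cost between a detected contour and the stored model would indicate a likely instance of the target object.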

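The final step, recovering the target's 3D position from the matched left/right detections, might look like the sketch below, assuming the stereo pair has already been calibrated so that projection matrices P_left and P_right are available (e.g. from cv2.stereoCalibrate); the function and variable names are hypothetical.

```python
# Hypothetical triangulation sketch: matched image points -> 3D camera-frame position.
import cv2
import numpy as np

def locate_target(P_left, P_right, pt_left, pt_right):
    """Triangulate one matched left/right image point pair into 3D coordinates."""
    pts_l = np.asarray(pt_left, dtype=float).reshape(2, 1)
    pts_r = np.asarray(pt_right, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()                           # (x, y, z) for the manipulator
```
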
Citation Key: 42