I am currently a research scientist at the Georgia Tech Research Institute. Before that I was at SRI International Sarnoff in Princeton, N.J. My research spans the areas of robotics, machine learning, distributed sensing, and computer vision. Specifically, I am interested in perception and learning for robotics, focusing on two main research directions:
Distributed Perception: I am interested in the creation of coherent semantic maps using a large collection of sensors (EO/IR, LIDAR, etc.) in complex outdoor environments. Towards that end, we are developing algorithms that use deep learning to learn multi-layered feature representations and applying them to a variety of tasks on air and ground robots. Part of this research is related to my thesis, which made progress in showing how robots can teach each other, allowing one robot's experiences to speed up learning on another robot (transfer learning). I used information-theoretic metrics to allow robots that differ perceptually to build models of their similarities and differences, facilitating such knowledge sharing.
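As a rough illustration of the information-theoretic idea, one simple metric two perceptually different robots could compare is a symmetrized KL divergence between their feature distributions over a shared set of bins. This is a minimal sketch, not the method from the thesis; the function names and the histogram data are hypothetical.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions.
    A small epsilon avoids division by zero for empty bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def perceptual_similarity(hist_a, hist_b):
    """Symmetrized KL divergence as a dissimilarity score between two
    robots' feature histograms. Lower values mean the robots' perceptual
    models agree more closely, suggesting transferred knowledge is more
    likely to be useful."""
    return 0.5 * (kl_divergence(hist_a, hist_b) + kl_divergence(hist_b, hist_a))

# Hypothetical histograms: three robots observing the same scene with
# different sensors, binned into four shared feature bins.
robot_a = [10, 30, 40, 20]
robot_b = [12, 28, 38, 22]  # similar sensing to robot_a
robot_c = [40, 10, 10, 40]  # very different sensing

print(perceptual_similarity(robot_a, robot_b))  # small score
print(perceptual_similarity(robot_a, robot_c))  # much larger score
```

In a real system, the histograms would come from each robot's learned feature responses over shared observations, and the score could gate how aggressively models are transferred between platforms.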
Human-Robot Interaction: As robots become increasingly common, they will have to interact with humans in everyday environments. How can robots perceive the outcome of interactions with humans? I am interested in perception algorithms capable of accurately detecting various characteristics of the person the robot is interacting with. This includes the use of 3D sensors such as the Kinect to allow the robot to detect gestures and other natural forms of interaction. The flip side of this is how people can naturally command many robots at once, especially when the multi-robot system is a distributed swarm.