  • Robotics and Computer Vision 

  • Simultaneous Localization & Mapping (SLAM)

  • Trajectory Optimization 

  • Non-convex Optimization 

  • Stochastic Optimization 

  • Online Learning

  • Reinforcement Learning 

  • Imitation Learning: We develop a novel formulation that casts learning from demonstrations as a problem solvable with stochastic gradient descent (SGD) and its variants, which we believe enables accurate learning from demonstrations. We have also tested our algorithm on OpenAI Gym environments simulated with MuJoCo. (A minimal behavioral-cloning sketch appears as the first example after this list.)

  • Stochastic Non-convex Optimization: We develop an accelerated algorithm that minimizes an expected non-convex objective function under non-convex functional constraints. We demonstrate our algorithm in the context of learning a sparse classifier. (A simple stochastic-gradient baseline for this setting appears as the second example after this list.)

  • Distributed and Asynchronous Non-convex ADMM: We develop a distributed and asynchronous framework that combines ideas from the alternating direction method of multipliers (ADMM) and successive convex approximation (SCA) to solve non-convex problems whose objective is a sum of smooth functions local to each agent/robot. Specifically, we analyze both fusion-centric and fully decentralized architectures. More importantly, our method can handle non-convex constraints as well as delays in the network. We are evaluating the method experimentally on distributed motion planning and distributed pose-graph SLAM problems. (A basic consensus-ADMM sketch appears as the third example after this list.)
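
As an illustration of the SGD-based imitation-learning item above, here is a minimal behavioral-cloning sketch in PyTorch. The dataset, observation/action dimensions, network, and hyperparameters are illustrative assumptions (loosely modeled on a MuJoCo locomotion task), not the formulation we develop.

```python
import torch
import torch.nn as nn

# Hypothetical expert demonstrations: (state, action) pairs. In practice
# these would be collected from a MuJoCo task in OpenAI Gym.
states = torch.randn(1024, 17)   # e.g. a 17-dim locomotion observation
actions = torch.randn(1024, 6)   # e.g. a 6-dim continuous action

policy = nn.Sequential(
    nn.Linear(17, 64), nn.Tanh(),
    nn.Linear(64, 6),
)
opt = torch.optim.SGD(policy.parameters(), lr=1e-2, momentum=0.9)

for epoch in range(100):
    perm = torch.randperm(states.size(0))
    for i in range(0, states.size(0), 64):        # mini-batch SGD
        idx = perm[i:i + 64]
        # Mean-squared error between policy output and expert action.
        loss = ((policy(states[idx]) - actions[idx]) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Plain mini-batch SGD on the squared action error is the simplest instance of learning from demonstrations with SGD; it serves only as a baseline for the formulation described above.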
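For the stochastic non-convex item above, the following sketch is a plain stochastic-gradient baseline for learning a sparse classifier: an expected logistic loss plus a smooth non-convex sparsity term, handled here as a penalty rather than as a functional constraint. The data, penalty, and step sizes are illustrative assumptions; this is not the accelerated constrained algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (illustrative assumption).
n, d = 2000, 50
w_true = np.zeros(d); w_true[:5] = 1.0            # sparse ground truth
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))

lam, batch = 0.1, 32
w = np.zeros(d)

def grad(w, Xb, yb):
    # Stochastic gradient of the logistic loss plus a smooth
    # non-convex sparsity penalty  lam * w^2 / (1 + w^2).
    z = -yb * (Xb @ w)
    g_loss = Xb.T @ (-yb / (1.0 + np.exp(-z))) / len(yb)
    g_pen = lam * 2.0 * w / (1.0 + w ** 2) ** 2
    return g_loss + g_pen

for t in range(1, 5001):
    idx = rng.integers(0, n, batch)
    w -= (0.5 / np.sqrt(t)) * grad(w, X[idx], y[idx])  # diminishing step

print("nonzeros (|w| > 0.1):", int(np.sum(np.abs(w) > 0.1)))
```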
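For the distributed ADMM item above, here is a minimal synchronous consensus-ADMM sketch for the convex quadratic case, corresponding to the fusion-centric architecture. The local costs are illustrative assumptions, and the asynchronous updates, SCA subproblems, and non-convex constraint handling described above are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each of N agents holds a local smooth cost f_i(x) = 0.5*||A_i x - b_i||^2
# (quadratic so the x-update has a closed form; illustrative assumption).
N, d, m = 5, 10, 20
A = [rng.standard_normal((m, d)) for _ in range(N)]
b = [rng.standard_normal(m) for _ in range(N)]

rho = 1.0
x = [np.zeros(d) for _ in range(N)]
u = [np.zeros(d) for _ in range(N)]   # scaled dual variables
z = np.zeros(d)                       # consensus variable (fusion center)

for k in range(100):
    # Local x-updates: each agent solves its own subproblem in parallel.
    for i in range(N):
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                               A[i].T @ b[i] + rho * (z - u[i]))
    # Fusion-center z-update: average of the local estimates.
    z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
    # Dual updates push the local copies toward consensus x_i = z.
    for i in range(N):
        u[i] += x[i] - z

print("consensus residual:",
      max(np.linalg.norm(x[i] - z) for i in range(N)))
```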


Energy-efficient trajectory optimization in time-varying environments

  • Given start and end locations, the goal is to plan optimal trajectories for a robot/agent operating in a time-varying environment. Specifically, we have developed and analyzed an online inexact gradient descent framework with guaranteed sub-linear regret rates. We have demonstrated the efficacy of our algorithm by formulating the energy-optimal trajectory design problem for unmanned surface vehicles operating under strong ocean disturbances. (A minimal online-gradient sketch follows below.)
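The following sketch illustrates the online setting on a toy time-varying quadratic cost with an inexact (noisy) gradient oracle. The drifting cost stands in, purely as an assumption, for the disturbance-dependent energy objective; the regret analysis above is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Time-varying cost f_t(x) = 0.5*||x - c_t||^2, where c_t drifts slowly
# (a stand-in for an ocean-disturbance-dependent energy cost).
T, d = 1000, 2
x = np.zeros(d)
c = np.zeros(d)
regret = 0.0

for t in range(1, T + 1):
    c += 0.01 * rng.standard_normal(d)         # slowly drifting environment
    g = x - c                                  # exact gradient of f_t at x
    g_hat = g + 0.05 * rng.standard_normal(d)  # inexact gradient oracle
    regret += 0.5 * np.dot(x - c, x - c)       # loss vs. per-step optimum c_t
    x -= (1.0 / np.sqrt(t)) * g_hat            # OGD step, eta_t = 1/sqrt(t)

print("average regret:", regret / T)
```

With a diminishing step size proportional to 1/sqrt(t), online gradient descent attains O(sqrt(T)) regret for convex losses, consistent with the sub-linear rates referenced above.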
