I'm Rob DiPietro, a PhD student in the Department of Computer Science at Johns Hopkins, where I'm advised by Gregory D. Hager. I'm part of the Computational Interaction and Robotics Laboratory and the Malone Center for Engineering in Healthcare. Previously I was an associate research-staff member at MIT Lincoln Laboratory and a BS/MS student at Northeastern University, where I studied applied physics and electrical engineering.
Our paper on surgical activity recognition has been accepted as an oral presentation at MICCAI 2016:
R. DiPietro, C. Lea, A. Malpani, N. Ahmidi, S. Vedula, G.I. Lee, M.R. Lee, G.D. Hager: Recognizing Surgical Activities with Recurrent Neural Networks. Medical Image Computing and Computer Assisted Intervention (2016).
My research focuses on machine-learning methods for modeling sequential data, and I'm especially interested in recurrent neural networks. Is it possible to improve over LSTMs and GRUs when it comes to capturing long-term dependencies? If so, can these approaches be carried over to very deep feed-forward networks? Is it possible to reduce RNN training times in a principled way?