Amir-massoud Farahmand
Vector Institute, University of Toronto
Verified email at vectorinstitute.ai
Title / Cited by / Year
Regularized Policy Iteration
A Farahmand, M Ghavamzadeh, S Mannor, C Szepesvári
Advances in Neural Information Processing Systems 21 (NeurIPS 2008), 441-448, 2009
Cited by 151 (2009)
Error propagation for approximate policy and value iteration
A Farahmand, C Szepesvári, R Munos
Advances in Neural Information Processing Systems (NeurIPS), 568-576, 2010
Cited by 147 (2010)
Manifold-adaptive dimension estimation
A Farahmand, C Szepesvári, JY Audibert
Proceedings of the 24th International Conference on Machine Learning (ICML …, 2007
Cited by 87 (2007)
Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
American Control Conference (ACC), 725-730, 2009
Cited by 80* (2009)
Learning from Limited Demonstrations
B Kim, A Farahmand, J Pineau, D Precup
Advances in Neural Information Processing Systems (NeurIPS), 2859-2867, 2013
Cited by 74 (2013)
Robust Jacobian estimation for uncalibrated visual servoing
A Shademan, A Farahmand, M Jägersand
IEEE International Conference on Robotics and Automation (ICRA), 5564-5569, 2010
Cited by 61 (2010)
Regularized policy iteration with nonparametric function spaces
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Journal of Machine Learning Research (JMLR) 17 (1), 4809-4874, 2016
Cited by 50 (2016)
Global visual-motor estimation for uncalibrated visual servoing
A Farahmand, A Shademan, M Jagersand
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS …, 2007
Cited by 47* (2007)
Model Selection in Reinforcement Learning
AM Farahmand, C Szepesvári
Machine Learning 85 (3), 299-332, 2011
Cited by 43 (2011)
Action-Gap Phenomenon in Reinforcement Learning
AM Farahmand
Neural Information Processing Systems (NeurIPS), 2011
Cited by 31 (2011)
Regularization in Reinforcement Learning
AM Farahmand
Department of Computing Science, University of Alberta, 2011
Cited by 29 (2011)
Approximate MaxEnt Inverse Optimal Control and its Application for Mental Simulation of Human Interactions
DA Huang, AM Farahmand, KM Kitani, JA Bagnell
AAAI Conference on Artificial Intelligence (AAAI), 2015
Cited by 26 (2015)
Regularized Fitted Q-iteration: Application to Bounded Resource Planning
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Cited by 25* (2008)
Regularized fitted Q-iteration: Application to planning
AM Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Recent Advances in Reinforcement Learning, 55-68, 2008
Cited by 25* (2008)
Model-based and model-free reinforcement learning for visual servoing
A Farahmand, A Shademan, M Jagersand, C Szepesvári
IEEE International Conference on Robotics and Automation (ICRA), 2917-2924, 2009
Cited by 23* (2009)
Deep reinforcement learning for partial differential equation control
A Farahmand, S Nabi, DN Nikovski
American Control Conference (ACC), 3120-3127, 2017
Cited by 22 (2017)
Value-aware loss function for model-based reinforcement learning
A Farahmand, A Barreto, D Nikovski
Artificial Intelligence and Statistics (AISTATS), 1486-1494, 2017
Cited by 22 (2017)
Interaction of Culture-based Learning and Cooperative Co-evolution and its Application to Automatic Behavior-based System Design
AM Farahmand, MN Ahmadabadi, C Lucas, BN Araabi
IEEE Transactions on Evolutionary Computation 14 (1), 23-57, 2010
Cited by 21 (2010)
Attentional network for visual object detection
K Hara, MY Liu, O Tuzel, A Farahmand
arXiv preprint arXiv:1702.01478, 2017
Cited by 20 (2017)
Value Pursuit Iteration
A Farahmand, D Precup
Advances in Neural Information Processing Systems 25 (NeurIPS), 1349-1357, 2012
Cited by 19 (2012)