Bei Peng
Title · Cited by · Year
Interactive learning from policy-dependent human feedback
J MacGlashan, MK Ho, R Loftin, B Peng, G Wang, DL Roberts, ME Taylor, ...
34th International Conference on Machine Learning (ICML 2017), 2285-2294, 2017
Cited by 94 · 2017
Learning behaviors via human-delivered discrete feedback: modeling implicit feedback strategies to speed up learning
R Loftin, B Peng, J MacGlashan, ML Littman, ME Taylor, J Huang, ...
Autonomous Agents and Multi-Agent Systems (JAAMAS 2016) 30 (1), 30-59, 2016
Cited by 67 · 2016
A strategy-aware technique for learning behaviors from discrete human feedback
RT Loftin, J MacGlashan, B Peng, ME Taylor, ML Littman, J Huang, ...
Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2014), 2014
Cited by 59 · 2014
A need for speed: Adapting agent action speed to improve task learning from non-expert humans
B Peng, J MacGlashan, R Loftin, ML Littman, DL Roberts, ME Taylor
Autonomous Agents and Multiagent Systems (AAMAS 2016), 2016
Cited by 38 · 2016
Learning something from nothing: Leveraging implicit human feedback strategies
R Loftin, B Peng, J MacGlashan, ML Littman, ME Taylor, J Huang, ...
The 23rd IEEE International Symposium on Robot and Human Interactive …, 2014
Cited by 20 · 2014
Training an agent to ground commands with reward and punishment
J MacGlashan, ML Littman, R Loftin, B Peng, DL Roberts, ME Taylor
Proceedings of the AAAI Machine Learning for Interactive Systems Workshop, 6-12, 2014
Cited by 20 · 2014
An empirical study of non-expert curriculum design for machine learners
B Peng, J MacGlashan, R Loftin, ML Littman, DL Roberts, ME Taylor
Proceedings of the IJCAI Interactive Machine Learning Workshop, 2016
Cited by 11 · 2016
Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey
S Narvekar, B Peng, M Leonetti, J Sinapov, ME Taylor, P Stone
Journal of Machine Learning Research (JMLR 2020) 21, 1-50, 2020
Cited by 7 · 2020
Towards integrating real-time crowd advice with reinforcement learning
GV de la Cruz, B Peng, WS Lasecki, ME Taylor
Proceedings of the 20th International Conference on Intelligent User …, 2015
Cited by 7 · 2015
Curriculum Design for Machine Learners in Sequential Decision Tasks
B Peng, J MacGlashan, R Loftin, ML Littman, DL Roberts, ME Taylor
IEEE Transactions on Emerging Topics in Computational Intelligence 2 (4 …, 2018
Cited by 5 · 2018
Convergent Actor Critic by Humans
J MacGlashan, ML Littman, DL Roberts, R Loftin, B Peng, ME Taylor
International Conference on Intelligent Robots and Systems (IROS 2016), 2016
Cited by 5 · 2016
Language and policy learning from human-delivered feedback
B Peng, R Loftin, J MacGlashan, ML Littman, ME Taylor, DL Roberts
Proceedings of the Machine Learning for Social Robotics Workshop, 2015
Cited by 4 · 2015
Generating real-time crowd advice to improve reinforcement learning agents
GV de la Cruz, B Peng, WS Lasecki, ME Taylor
AAAI Workshop: Learning for General Competency in Video Games, 2015
Cited by 4* · 2015
Optimistic Exploration even with a Pessimistic Initialisation
T Rashid, B Peng, W Böhmer, S Whiteson
International Conference on Learning Representations (ICLR 2020), 2020
Cited by 3 · 2020
Weighted QMIX: Expanding Monotonic Value Function Factorisation
T Rashid, G Farquhar, B Peng, S Whiteson
Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2020), 2020
Cited by 2 · 2020
Deep Multi-Agent Reinforcement Learning for Decentralized Continuous Cooperative Control
CS de Witt, B Peng (equal contribution), PA Kamienny, P Torr, W Böhmer, ...
arXiv preprint arXiv:2003.06709, 2020
Cited by 1 · 2020
How do humans teach: On curriculum design for machine learners
B Peng
Proceedings of the 16th Conference on Autonomous Agents and MultiAgent …, 2017
Cited by 1 · 2017
UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning
T Gupta, A Mahajan, B Peng, W Böhmer, S Whiteson
arXiv preprint arXiv:2010.02974, 2020
2020
RODE: Learning Roles to Decompose Multi-Agent Tasks
T Wang, T Gupta, A Mahajan, B Peng, S Whiteson, C Zhang
arXiv preprint arXiv:2010.01523, 2020
2020
AI-QMIX: Attention and Imagination for Dynamic Multi-Agent Reinforcement Learning
S Iqbal, CAS de Witt, B Peng, W Böhmer, S Whiteson, F Sha
arXiv preprint arXiv:2006.04222, 2020
2020