Ming Yin
Verified email at princeton.edu - Homepage
Title
Cited by
Year
Near-optimal provable uniform convergence in offline policy evaluation for reinforcement learning
M Yin, Y Bai, YX Wang
(AISTATS oral) International Conference on Artificial Intelligence and …, 2021
78* 2021
Asymptotically efficient off-policy evaluation for tabular reinforcement learning
M Yin, YX Wang
(AISTATS) International Conference on Artificial Intelligence and Statistics …, 2020
65 2020
Near-optimal offline reinforcement learning via double variance reduction
M Yin, Y Bai, YX Wang
(NeurIPS) Advances in neural information processing systems 34, 7677-7688, 2021
63 2021
Towards instance-optimal offline reinforcement learning with pessimism
M Yin, YX Wang
(NeurIPS) Advances in neural information processing systems 34, 4065-4078, 2021
61 2021
Near-optimal offline reinforcement learning with linear representation: Leveraging variance information with pessimism
M Yin, Y Duan, M Wang, YX Wang
(ICLR) International Conference on Learning Representations, 2022
58 2022
Optimal uniform ope and model-based offline reinforcement learning in time-homogeneous, reward-free and task-agnostic settings
M Yin, YX Wang
(NeurIPS) Advances in Neural Information Processing Systems, 2021
22 2021
Sample-efficient reinforcement learning with loglog(T) switching cost
D Qiao, M Yin, M Min, YX Wang
(ICML) International Conference on Machine Learning, 18031-18061, 2022
18 2022
TheoremQA: A Theorem-driven Question Answering dataset
W Chen, M Yin, M Ku, E Wan, X Ma, J Xu, T Xia, X Wang, P Lu
(EMNLP) Conference on Empirical Methods in Natural Language Processing, 2023
12 2023
Offline reinforcement learning with differentiable function approximation is provably efficient
M Yin, M Wang, YX Wang
(ICLR) International Conference on Learning Representations, 2023
11 2023
On Instance-Dependent Bounds for Offline Reinforcement Learning with Linear Function Approximation
T Nguyen-Tang, M Yin, S Gupta, S Venkatesh, R Arora
(AAAI) AAAI Conference on Artificial Intelligence, 2023
10 2023
Offline Stochastic Shortest Path: Learning, Evaluation and Towards Optimality
M Yin, W Chen, M Wang, YX Wang
(UAI) The 38th Conference on Uncertainty in Artificial Intelligence, 2022
4 2022
Logarithmic Switching Cost in Reinforcement Learning beyond Linear MDPs
D Qiao, M Yin, YX Wang
arXiv preprint arXiv:2302.12456, 2023
3 2023
No-Regret Linear Bandits beyond Realizability
C Liu, M Yin, YX Wang
(UAI) The 39th Conference on Uncertainty in Artificial Intelligence, 2023
2 2023
Non-stationary Reinforcement Learning under General Function Approximation
S Feng, M Yin, R Huang, YX Wang, J Yang, Y Liang
(ICML) International Conference on Machine Learning, 2023
1 2023
Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks
K Zhang, M Yin, YX Wang
arXiv preprint arXiv:2206.05916, 2022
1 2022
MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI
X Yue, Y Ni, K Zhang, T Zheng, R Liu, G Zhang, S Stevens, D Jiang, ...
arXiv preprint arXiv:2311.16502, 2023
2023
Posterior Sampling with Delayed Feedback for Reinforcement Learning with Linear Function Approximation
M Yin*, NL Kuang*, M Wang, YX Wang, YA Ma
(NeurIPS) Advances in Neural Information Processing Systems, 2023
2023
Model-Free Algorithm with Improved Sample Efficiency for Zero-Sum Markov Games
S Feng, M Yin, YX Wang, J Yang, Y Liang
arXiv preprint arXiv:2308.08858, 2023
2023
Offline Reinforcement Learning with Closed-form Policy Improvement Operators
J Li, E Zhang, M Yin, Q Bai, YX Wang, WY Wang
(ICML) International Conference on Machine Learning, 2023
2023
Offline Policy Evaluation for Reinforcement Learning with Adaptively Collected Data
S Madhow, D Xiao, M Yin, YX Wang
3rd Offline RL Workshop: Offline RL as a "Launchpad", 2022
2022