Hongyuan Mei
Toyota Technological Institute at Chicago, Johns Hopkins University, The University of Chicago
Verified email at ttic.edu - Homepage
Title · Cited by · Year
The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process
H Mei, J Eisner
arXiv, 2016
Cited by 597 · 2016
What to talk about and how? Selective Generation using LSTMs with Coarse-to-Fine Alignment
H Mei, M Bansal, MR Walter
NAACL, 2016
Cited by 316 · 2016
Listen, attend, and walk: Neural mapping of navigational instructions to action sequences
H Mei, M Bansal, MR Walter
AAAI, 2016
Cited by 265 · 2016
Coherent Dialogue with Attention-based Language Models
H Mei, M Bansal, MR Walter
AAAI, 2017
Cited by 114 · 2017
Imputing missing events in continuous-time event streams
H Mei, G Qin, J Eisner
International Conference on Machine Learning, 4475-4485, 2019
Cited by 37 · 2019
Advances in Neural Information Processing Systems
H Mei, JM Eisner
2017
Cited by 22 · 2017
Neural Datalog through time: Informed temporal modeling via logical specification
H Mei, G Qin, M Xu, J Eisner
International Conference on Machine Learning, 6808-6819, 2020
Cited by 19 · 2020
Noise-contrastive estimation for multivariate point processes
H Mei, T Wan, J Eisner
Advances in Neural Information Processing Systems 33, 5204-5214, 2020
Cited by 19 · 2020
Transformer embeddings of irregularly spaced events and their participants
C Yang, H Mei, J Eisner
arXiv preprint arXiv:2201.00044, 2021
Cited by 15 · 2021
Personalized dynamic treatment regimes in continuous time: a Bayesian approach for optimizing clinical decisions with timing
W Hua, H Mei, S Zohar, M Giral, Y Xu
Bayesian Analysis 17 (3), 849-878, 2022
Cited by 14 · 2022
Accurate Vision-based Vehicle Localization using Satellite Imagery
H Chu, H Mei, M Bansal, MR Walter
NIPS 2015 Transfer and Multi-Task Learning workshop, 2015
Cited by 13 · 2015
HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences
S Xue, X Shi, J Zhang, H Mei
Advances in Neural Information Processing Systems 35, 34641-34650, 2022
Cited by 12 · 2022
Hidden state variability of pretrained language models can guide computation reduction for transfer learning
S Xie, J Qiu, A Pasad, L Du, Q Qu, H Mei
arXiv preprint arXiv:2210.10041, 2022
Cited by 11 · 2022
Bellman meets Hawkes: Model-based reinforcement learning via temporal point processes
C Qu, X Tan, S Xue, X Shi, J Zhang, H Mei
Proceedings of the AAAI Conference on Artificial Intelligence 37 (8), 9543-9551, 2023
Cited by 7 · 2023
Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning
X Shi, S Xue, K Wang, F Zhou, JY Zhang, J Zhou, C Tan, H Mei
arXiv preprint arXiv:2305.16646, 2023
Cited by 7 · 2023
Statler: State-maintaining language models for embodied reasoning
T Yoneda, J Fang, P Li, H Zhang, T Jiang, S Lin, B Picker, D Yunis, H Mei, ...
arXiv preprint arXiv:2306.17840, 2023
Cited by 6 · 2023
Explicit Planning Helps Language Models in Logical Reasoning
H Zhao, K Wang, M Yu, H Mei
arXiv preprint arXiv:2303.15714, 2023
Cited by 6 · 2023
EasyTPP: Towards open benchmarking the temporal point processes
S Xue, X Shi, Z Chu, Y Wang, F Zhou, H Hao, C Jiang, C Pan, Y Xu, ...
arXiv preprint arXiv:2307.08097, 2023
Cited by 5 · 2023
Tiny-attention adapter: Contexts are more important than the number of parameters
H Zhao, H Tan, H Mei
arXiv preprint arXiv:2211.01979, 2022
Cited by 5 · 2022
Transformer embeddings of irregularly spaced events and their participants
H Mei, C Yang, J Eisner
International Conference on Learning Representations, 2021
Cited by 4 · 2021
Articles 1–20