Mohammad Emtiyaz Khan
Center for Advanced Intelligence Project (AIP), RIKEN, Tokyo
Verified email address at postman.riken.jp - Homepage
Cited by
Fast and scalable Bayesian deep learning by weight-perturbation in Adam
M Khan, D Nielsen, V Tangkaratt, W Lin, Y Gal, A Srivastava
International Conference on Machine Learning, 2611-2620, 2018
Practical Deep Learning with Bayesian Principles
K Osawa, S Swaroop, A Jain, R Eschenhagen, RE Turner, R Yokota, ...
arXiv preprint arXiv:1906.02506, 2019
AI for social good: unlocking the opportunity for positive impact
N Tomašev, J Cornebise, F Hutter, S Mohamed, A Picciariello, B Connelly, ...
Nature Communications 11 (1), 1-6, 2020
Smarper: Context-aware and automatic runtime-permissions for mobile devices
K Olejnik, I Dacosta, JS Machado, K Huguenin, ME Khan, JP Hubaux
2017 IEEE Symposium on Security and Privacy (SP), 1058-1076, 2017
Conjugate-computation variational inference: Converting variational inference in non-conjugate models to inference in conjugate models
M Khan, W Lin
Artificial Intelligence and Statistics, 878-887, 2017
Variational bounds for mixed-data factor analysis
MEE Khan, G Bouchard, KP Murphy, BM Marlin
Advances in Neural Information Processing Systems 23, 1108-1116, 2010
Approximate Inference Turns Deep Networks into Gaussian Processes
MEE Khan, A Immer, E Abedi, M Korzepa
Advances in Neural Information Processing Systems, 3088-3098, 2019
Continual Deep Learning by Functional Regularisation of Memorable Past
P Pan, S Swaroop, A Immer, R Eschenhagen, RE Turner, ME Khan
arXiv preprint arXiv:2004.14070, 2020
An expectation-maximization algorithm based Kalman smoother approach for event-related desynchronization (ERD) estimation from EEG
ME Khan, DN Dutt
IEEE Transactions on Biomedical Engineering 54 (7), 1191-1198, 2007
A Stick-Breaking Likelihood for Categorical Data Analysis with Latent Gaussian Models
ME Khan, S Mohamed, BM Marlin, KP Murphy
AISTATS, 610-618, 2012
Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning
A Immer, M Bauer, V Fortuin, G Rätsch, ME Khan
arXiv preprint arXiv:2104.04975, 2021
SLANG: Fast structured covariance approximations for Bayesian deep learning with natural gradient
A Mishkin, F Kunstner, D Nielsen, M Schmidt, ME Khan
Advances in Neural Information Processing Systems, 6248-6258, 2018
Fast yet simple natural-gradient descent for variational inference in complex models
ME Khan, D Nielsen
2018 International Symposium on Information Theory and Its Applications …, 2018
Variational Message Passing with Structured Inference Networks
W Lin, N Hubacher, ME Khan
arXiv preprint arXiv:1803.05589, 2018
Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models
ME Khan, A Aravkin, M Friedlander, M Seeger
International Conference on Machine Learning, 2013
Piecewise bounds for estimating Bernoulli-logistic latent Gaussian models
BM Marlin, ME Khan, KP Murphy
ICML, 2011
Kullback-Leibler Proximal Variational Inference
ME Khan, P Baqué, F Fleuret, P Fua
Advances in Neural Information Processing Systems, 2015
Faster Stochastic Variational Inference using Proximal-Gradient Methods with General Divergence Functions
ME Khan, R Babanezhad, W Lin, M Schmidt, M Sugiyama
Uncertainty in Artificial Intelligence (UAI), 2016
The Bayesian Learning Rule
ME Khan, H Rue
arXiv preprint arXiv:2107.04562, 2021
Fast and simple natural-gradient variational inference with mixture of exponential-family approximations
W Lin, ME Khan, M Schmidt
International Conference on Machine Learning, 3992-4002, 2019
Articles 1–20