Ryo Yonetani
Principal Investigator, OMRON SINIC X
Verified email at sinicx.com - Homepage
Title
Cited by
Year
Client selection for federated learning with heterogeneous resources in mobile edge
T Nishio, R Yonetani
ICC 2019-2019 IEEE International Conference on Communications (ICC), 1-7, 2019
161 (2019)
Future person localization in first-person videos
T Yagi, K Mangalam, R Yonetani, Y Sato
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018
70 (2018)
Percutaneous intracerebral navigation by duty-cycled spinning of flexible bevel-tipped needles
JA Engh, DS Minhas, D Kondziolka, CN Riviere
Neurosurgery 67 (4), 1117-1123, 2010
68 (2010)
Can eye help you? Effects of visualizing eye fixations on remote collaboration scenarios for physical tasks
K Higuchi, R Yonetani, Y Sato
Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems …, 2016
62 (2016)
Recognizing micro-actions and reactions from paired egocentric videos
R Yonetani, KM Kitani, Y Sato
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2016
58 (2016)
Computational models of human visual attention and their implementations: A survey
A Kimura, R Yonetani, T Hirayama
IEICE TRANSACTIONS on Information and Systems 96 (3), 562-578, 2013
57 (2013)
Ego-surfing first-person videos
R Yonetani, KM Kitani, Y Sato
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2015
42 (2015)
Gaze target determination device and gaze target determination method
K Sakata, S Maeda, R Yonetani, H Kawashima, T Hirayama, ...
US Patent 8,678,589, 2014
34 (2014)
Egoscanning: Quickly scanning first-person videos with egocentric elastic timelines
K Higuchi, R Yonetani, Y Sato
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems …, 2017
32 (2017)
Degree of interest estimating device and degree of interest estimating method
K Sakata, S Maeda, R Yonetani, H Kawashima, T Hirayama, ...
US Patent 9,538,219, 2017
28 (2017)
Privacy-preserving visual learning using doubly permuted homomorphic encryption
R Yonetani, V Naresh Boddeti, KM Kitani, Y Sato
Proceedings of the IEEE International Conference on Computer Vision, 2040-2050, 2017
27 (2017)
Multi-mode saliency dynamics model for analyzing gaze and attention
R Yonetani, H Kawashima, T Matsuyama
Proceedings of the Symposium on Eye Tracking Research and Applications, 115-122, 2012
24 (2012)
Hybrid-FL: Cooperative learning mechanism using non-IID data in wireless networks
N Yoshida, T Nishio, M Morikura, K Yamamoto, R Yonetani
arXiv preprint arXiv:1905.07210, 2019
19* (2019)
Mental focus analysis using the spatio-temporal correlation between visual saliency and eye movements
R Yonetani, H Kawashima, T Hirayama, T Matsuyama
Journal of Information Processing 20 (1), 267-276, 2012
19 (2012)
Gaze probing: Event-based estimation of objects being focused on
R Yonetani, H Kawashima, T Hirayama, T Matsuyama
2010 20th International Conference on Pattern Recognition, 101-104, 2010
15 (2010)
Discovering objects of joint attention via first-person sensing
H Kera, R Yonetani, K Higuchi, Y Sato
Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2016
11 (2016)
Temporal localization and spatial segmentation of joint attention in multiple first-person videos
Y Huang, M Cai, H Kera, R Yonetani, K Higuchi, Y Sato
Proceedings of the IEEE International Conference on Computer Vision …, 2017
10 (2017)
Semantic interpretation of eye movements using designed structures of displayed contents
E Ishikawa, R Yonetani, H Kawashima, T Hirayama, T Matsuyama
Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine …, 2012
9 (2012)
Learning aspects of interest from gaze
K Shimonishi, H Kawashima, R Yonetani, E Ishikawa, T Matsuyama
Proceedings of the 6th workshop on Eye gaze in intelligent human machine …, 2013
8 (2013)
MGpi: A Computational Model of Multiagent Group Perception and Interaction
N Sanghvi, R Yonetani, K Kitani
arXiv preprint arXiv:1903.01537, 2019
6* (2019)
Articles 1–20