Daquan Zhou
Verified email at u.nus.edu
Title · Cited by · Year
Coordinate attention for efficient mobile network design
Q Hou, D Zhou, J Feng
CVPR 2021
Cited by 2669 · 2021
PANet: Few-shot image semantic segmentation with prototype alignment
K Wang, JH Liew, Y Zou, D Zhou, J Feng
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019
Cited by 1004 · 2019
DeepViT: Towards deeper vision transformer
D Zhou, B Kang, X Jin, L Yang, X Lian, Z Jiang, Q Hou, J Feng
arXiv preprint arXiv:2103.11886, 2021
Cited by 479 · 2021
Rethinking bottleneck structure for efficient mobile network design
D Zhou, Q Hou, Y Chen, J Feng, S Yan
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
Cited by 204 · 2020
All tokens matter: Token labeling for training better vision transformers
ZH Jiang, Q Hou, L Yuan, D Zhou, Y Shi, X Jin, A Wang, J Feng
Advances in neural information processing systems 34, 18590-18602, 2021
Cited by 161 · 2021
MagicVideo: Efficient video generation with latent diffusion models
D Zhou, W Wang, H Yan, W Lv, Y Zhu, J Feng
arXiv preprint arXiv:2211.11018, 2022
Cited by 160 · 2022
ConvBERT: Improving BERT with span-based dynamic convolution
Z Jiang, W Yu, D Zhou, Y Chen, J Feng, S Yan
NeurIPS 2020
Cited by 158 · 2020
Shunted self-attention via multi-scale token aggregation
S Ren, D Zhou, S He, J Feng, X Wang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 153 · 2022
Understanding The Robustness in Vision Transformers
D Zhou, Z Yu, E Xie, C Xiao, A Anandkumar, J Feng, JM Alvarez
ICML 2022
Cited by 126 · 2022
M²BEV: Multi-Camera Joint 3D Detection and Segmentation with Unified Birds-Eye View Representation
E Xie, Z Yu, D Zhou, J Philion, A Anandkumar, S Fidler, P Luo, JM Alvarez
arXiv preprint arXiv:2204.05088, 2022
Cited by 117 · 2022
Deep Model Reassembly
X Yang, D Zhou, S Liu, J Ye, X Wang
NeurIPS 2022
Cited by 104 · 2022
Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
D Lian, D Zhou, J Feng, X Wang
NeurIPS 2022
Cited by 99 · 2022
Progressive tandem learning for pattern recognition with deep spiking neural networks
J Wu, C Xu, X Han, D Zhou, M Zhang, H Li, KC Tan
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (11), 7824 …, 2021
Cited by 97 · 2021
Sharpness-aware training for free
J Du, D Zhou, J Feng, V Tan, JT Zhou
Advances in Neural Information Processing Systems 35, 23439-23451, 2022
Cited by 58 · 2022
Refiner: Refining self-attention for vision transformers
D Zhou, Y Shi, B Kang, W Yu, Z Jiang, Y Li, X Jin, Q Hou, J Feng
arXiv preprint arXiv:2106.03714, 2021
Cited by 58 · 2021
Token labeling: Training a 85.5% top-1 accuracy vision transformer with 56M parameters on ImageNet
Z Jiang, Q Hou, L Yuan, D Zhou, X Jin, A Wang, J Feng
arXiv preprint arXiv:2104.10858, 2021
Cited by 51 · 2021
Diffusion probabilistic model made slim
X Yang, D Zhou, J Feng, X Wang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 44 · 2023
Coordinate attention for efficient mobile network design. arXiv 2021
Q Hou, D Zhou, J Feng
arXiv preprint arXiv:2103.02907, 2021
Cited by 44 · 2021
BuboGPT: Enabling visual grounding in multi-modal LLMs
Y Zhao, Z Lin, D Zhou, Z Huang, J Feng, B Kang
arXiv preprint arXiv:2307.08581, 2023
Cited by 37 · 2023
MagicMix: Semantic mixing with diffusion models
JH Liew, H Yan, D Zhou, J Feng
arXiv preprint arXiv:2210.16056, 2022
Cited by 31 · 2022
Articles 1–20