Chao Weng
Tencent AI Lab
Verified email at tencent.com - Homepage
Title · Cited by · Year
Recurrent deep neural networks for robust speech recognition
C Weng, D Yu, S Watanabe, BHF Juang
2014 IEEE International Conference on Acoustics, Speech and Signal …, 2014
Cited by 145 · 2014
Deep neural networks for single-channel multi-talker speech recognition
C Weng, D Yu, ML Seltzer, J Droppo
IEEE/ACM Transactions on Audio, Speech, and Language Processing 23 (10 …, 2015
Cited by 86 · 2015
DurIAN: Duration informed attention network for multimodal synthesis
C Yu, H Lu, N Hu, M Yu, C Weng, K Xu, P Liu, D Tuo, S Kang, G Lei, D Su, ...
arXiv preprint arXiv:1909.01700, 2019
Cited by 74 · 2019
Improving Attention Based Sequence-to-Sequence Models for End-to-End English Conversational Speech Recognition.
C Weng, J Cui, G Wang, J Wang, C Yu, D Su, D Yu
Interspeech, 761-765, 2018
Cited by 53 · 2018
Component fusion: Learning replaceable language model component for end-to-end speech recognition system
C Shan, C Weng, G Wang, D Su, M Luo, D Yu, L Xie
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 51 · 2019
Investigating end-to-end speech recognition for Mandarin-English code-switching
C Shan, C Weng, G Wang, D Su, M Luo, D Yu, L Xie
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 50 · 2019
Past review, current progress, and challenges ahead on the cocktail party problem
Y Qian, C Weng, X Chang, S Wang, D Yu
Frontiers of Information Technology & Electronic Engineering 19 (1), 40-63, 2018
Cited by 45 · 2018
Mixed speech recognition
D Yu, C Weng, ML Seltzer, J Droppo
US Patent 9,390,712, 2016
Cited by 42 · 2016
Single-channel mixed speech recognition using deep neural networks
C Weng, D Yu, ML Seltzer, J Droppo
2014 IEEE International Conference on Acoustics, Speech and Signal …, 2014
Cited by 42 · 2014
DurIAN: Duration Informed Attention Network for Speech Synthesis.
C Yu, H Lu, N Hu, M Yu, C Weng, K Xu, P Liu, D Tuo, S Kang, G Lei, D Su, ...
INTERSPEECH, 2027-2031, 2020
Cited by 41 · 2020
Replay and synthetic speech detection with Res2Net architecture
X Li, N Li, C Weng, X Liu, D Su, D Yu, H Meng
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 34 · 2021
Joint training of complex ratio mask based beamformer and acoustic model for noise robust ASR
Y Xu, C Weng, L Hui, J Liu, M Yu, D Su, D Yu
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
Cited by 28 · 2019
Minimum Bayes risk training of RNN-transducer for end-to-end speech recognition
C Weng, C Yu, J Cui, C Zhang, D Yu
arXiv preprint arXiv:1911.12487, 2019
Cited by 27 · 2019
Feature space maximum a posteriori linear regression for adaptation of deep neural networks
Z Huang, J Li, SM Siniscalchi, IF Chen, C Weng, CH Lee
Fifteenth Annual Conference of the International Speech Communication …, 2014
Cited by 27 · 2014
A Multistage Training Framework for Acoustic-to-Word Model.
C Yu, C Zhang, C Weng, J Cui, D Yu
Interspeech, 786-790, 2018
Cited by 25 · 2018
Beyond cross-entropy: Towards better frame-level objective functions for deep neural network training in automatic speech recognition
Z Huang, J Li, C Weng, CH Lee
Fifteenth Annual Conference of the International Speech Communication …, 2014
Cited by 23 · 2014
DurIAN-SC: Duration informed attention network based singing voice conversion system
L Zhang, C Yu, H Lu, C Weng, C Zhang, Y Wu, X Xie, Z Li, D Yu
arXiv preprint arXiv:2008.03009, 2020
Cited by 21 · 2020
PitchNet: Unsupervised singing voice conversion with pitch adversarial network
C Deng, C Yu, H Lu, C Weng, D Yu
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 21 · 2020
GigaSpeech: An evolving, multi-domain ASR corpus with 10,000 hours of transcribed audio
G Chen, S Chai, G Wang, J Du, WQ Zhang, C Weng, D Su, D Povey, ...
arXiv preprint arXiv:2106.06909, 2021
Cited by 19 · 2021
Non-autoregressive transformer ASR with CTC-enhanced decoder input
X Song, Z Wu, Y Huang, C Weng, D Su, H Meng
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 19 · 2021
Articles 1–20