Chengshi Zheng, Male, Doctoral Supervisor, Institute of Acoustics, Chinese Academy of Sciences
Email: cszheng@mail.ioa.ac.cn
Mailing Address: No. 21 North 4th Ring Road West, Beijing
Postal Code: 100190
Research Areas
Statistical-Model-Based Speech Processing;
Microphone Array Signal Processing;
Machine Learning for Speech and Audio Processing.
Admissions Information
Admission Majors
Admission Research Directions
Latest News
Recruitment of recommended (exam-exempt) master's students and direct-entry PhD students for 2025 enrollment is under way; interested outstanding undergraduates are welcome to inquire by email or phone.
Postdoctoral positions are open year-round with flexible start dates; interested outstanding PhD graduates are welcome to inquire by email or phone.
Contact: cszheng@mail.ioa.ac.cn
Office phone: 010-82547945
Education
Work Experience
Employment History
Professional Service
2023-12-01 to present, Beijing Hearing Association, Council Member
2023-10-21 to present, Acta Acustica (《声学学报》), Member of the First Young Editorial Board
2023-02-28 to present, Shandong Provincial Key Laboratory of Low-Altitude Surveillance Network Technology, Member of the First Academic Committee
2023-01-01 to present, Acoustical Society of China, Council Member
2022-08-01 to present, Sound & Vibration, Editorial Board Member
2021-12-27 to present, Chinese Association for Artificial Intelligence, Member of the Second Intelligent Media Technical Committee
2021-12-01 to present, Shenzhen Audio Industry Association, Expert Committee Member
2021-09-01 to present, Journal of Communication University of China (Natural Science Edition), Young Editorial Board Member
2021-08-01 to present, Frontiers in Signal Processing, Review Editor
2019-11-01 to present, Committee on Artificial Intelligence Technology and Applications, China Institute of Communications, Member
2019-01-01 to present, Intelligent Information Processing Industrialization Branch, China High-Tech Industrialization Research Association, Council Member
2015-11-20 to present, IEEE, Senior Member
2010-01-01 to present, EURASIP, Member
Courses Taught
Patents and Awards
Awards
Patents
Publications
SCI&SSCI
Book Chapter
[1]. C. Zheng, Y. Ke, X. Luo, and X. Li. Convolutional neural network-based models for speech denoising and dereverberation: algorithms and applications. in M. Naved, V. A. Devi, L. Gaur, and A. A. Elngar (eds.). IoT-enabled convolutional neural networks: techniques and applications. River Publishers, Denmark, 2023.
Submitted and Under Review
[1]. W. Meng, X. Li, X. Luo, X. Li, and C. Zheng*. Deep Kronecker Product Beamforming for Large-scale Microphone Arrays. IEEE-ACM Transactions on Audio, Speech, and Language Processing, Under Second-Round Review.
[2]. H. Zhang, B. C. J. Moore, F. Jiang, M. Diao, X. Li, and C. Zheng*. Neural-WDRC: A deep-learning wide dynamic range compression method combined with controllable noise reduction for hearing aids. Trends in Hearing, in Revision.
[3]. C. Xu, B. C. J. Moore, X. Li, and C. Zheng*. Predicting the intelligibility of Mandarin Chinese with manipulated tonal information. J. Acoust. Soc. Am., in Revision.
[4]. Y. Liang, F. Liu, A. Li, X. Li, and C. Zheng*. NaturalL2S: End-to-End High-quality Multispeaker Lip-to-Speech Synthesis with Differential Digital Signal Processing. IEEE-ACM Transactions on Audio, Speech, and Language Processing, Under Review.
Publication
International Journal Papers (SCI Index or SSCI Index)
[1]. F. Hao, X. Li, and C. Zheng*. X-TF-GridNet: A Time-Frequency Domain Target Speaker Extraction Network with Adaptive Speaker Embedding Fusion. Information Fusion, 112(2024)102550.
[2]. A. Li, G. Yu, Z. Xu, C. Fan, X. Li, and C. Zheng*. TaBE: decoupling spatial and spectral processing with Taylor’s unfolding method for multi-channel speech enhancement. Information Fusion, 101 (2024)101976.
[3]. Y. Ge, W. Meng, X. Li, and C. Zheng*. Geometry Calibration for Deformable Linear Microphone Arrays with Bézier Curve Fitting. IEEE Signal Processing Letters, vol. 31, pp. 1620-1624, June 2024.
[4]. X. Luo, Y. Ke*, X. Li, and C. Zheng. On phase recovery and preserving early reflections for deep-learning speech dereverberation. J. Acoust. Soc. Am., vol. 155, pp. 436-451, 2024.
[5]. X. Luo, Y. Ke, X. Li, and C. Zheng*. Deep Informed Spatio-Spectral Filtering for Multi-channel Speech Extraction against Steering Vector Uncertainties. Applied Acoustics, Accepted.
[6]. W. Meng, J. Li, Y. Ge, X. Li, and C. Zheng*. Frame-wise speech extraction with recursive expectation maximization for partially deformable microphone arrays. Digital Signal Processing, 151(2024) 104530.
[7]. Y. Zhang, J. Sang, C. Zheng*, and X. Li*. A denoising-aided multi-task learning method for blind estimation of reverberation time. Measurement, 231(2024)114568.
[8]. C. Fan, J. Xue, J. Tao, J. Yi, C. Wang, C. Zheng, and Z. Lv. Spatial reconstructed local attention Res2Net with F0 subband for fake speech detection. Neural Networks, 175 (2024)106320.
[9]. J. Xu, J. Li, W. Meng, X. Li*, and C. Zheng. Low-complexity frequency-invariant beampattern synthesis using accurate response control for speech extraction. Applied Acoustics, 224(2024)110129.
[10]. C. Zheng, H. Zhang, W. Liu, X. Luo, A. Li, X. Li, and B. C. J. Moore. Sixty years of frequency-domain monaural speech enhancement: from traditional to deep learning algorithms. Trends in Hearing, 2023;27. doi:10.1177/23312165231209913.
Paper: https://journals.sagepub.com/doi/full/10.1177/23312165231209913
Source Codes: https://github.com/cszheng-ioa/Sixty-years-of-frequency-domain-monaural-speech-enhancement
[11]. C. Zheng, C. Xu, M. Wang, X. Li, and B. C. J. Moore. Evaluation of deep marginal feedback cancellation for hearing aids using speech and music. Trends in Hearing, 2023;27. doi:10.1177/23312165231192290.
[12]. A. Li, G. Yu, C. Zheng*, W. Liu, X. Li. A General Unfolding Speech Enhancement Method Motivated by Taylor's Theorem. IEEE-ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 3629-3646, 2023.
[13]. F. Hao, X. Li, and C. Zheng*. End-to-end neural speaker diarization with an iterative attractor estimation. Neural Networks, 166 (2023): 566-578.
[14]. G. Yu, A. Li, H. Wang, W. Liu, Y. Zhang, Y. Wang, and C. Zheng*. FSI-Net: a dual-stage Full- and Sub-band Integration Network for full-band speech enhancement. Applied Acoustics, 211(2023)109539.
[15]. W. Meng, M. Yuan, C. Zheng*, and X. Li. A Comparison of Robust Capon Beamformers using a Large-scale Microphone Array for Speech Extraction. Applied Acoustics, 202(2023)109123.
[16]. R. Zhang, R. Meng, J. Sang, Y. Hu, X. Li, and C. Zheng*. Modeling individual HRTF based on anthropometric parameters and generic HRTF amplitudes. CAAI Transactions on Intelligence Technology, vol. 8, no. 2, pp. 364-378, 2023.
[17]. J. Wang, Y. Chen, S. Stenfelt, J. Sang*, X. Li, and C. Zheng*. Analysis of cross-talk cancellation of bilateral bone conduction stimulation. Hearing Research 434 (2023): 108781.
[18]. Z. Han, Y. Ke, X. Li, and C. Zheng*. Parallel processing of distributed beamforming and multichannel linear prediction for speech denoising and dereverberation in wireless acoustic sensor networks. EURASIP Journal on Audio, Speech, and Music Processing, 25(2023).
[19]. Z. Jiang, J. Sang*, C. Zheng, A. Li, and X. Li. Modeling individual HRTFs from Sparse Measurements based on U-net. J. Acoust. Soc. Am., vol. 153, pp. 248-259, 2023.
[20]. Y. Nie, J. Sang*, C. Zheng, et al. A calibration method for bone conduction transducers using electrical input impedance. Applied Acoustics, 213(2023)109631.
[21]. C. Fan, H. Zhang, A. Li, X. Wang, C. Zheng, L. Zhao, and X. Wu. CompNet: Complementary network for single-channel speech enhancement. Neural Networks, vol. 168, pp. 508-517, 2023.
[22]. G. Li, C. Zheng, Y. Ke*, and X. Li. Deep learning-based acoustic echo cancellation for surround sound systems. Applied Sciences, 2023, 13, 1266.
[23]. C. Zheng*, M. Wang, X. Li, and B. C. J. Moore. A deep learning solution to the marginal stability problems of acoustic feedback systems for hearing aids. J. Acoust. Soc. Am., vol. 152, no. 6, pp. 3616-3634, 2022.
[24]. C. Zheng*, W. Liu, A. Li, Y. Ke, and X. Li. Low-latency monaural speech enhancement with deep filter-bank equalizer. J. Acoust. Soc. Am., vol. 151, no. 5, pp. 3291-3304, 2022.
[25]. A. Li, C. Zheng*, G. Yu, J. Cai, and X. Li. Filtering and Refining: A Collaborative-Style Framework for Single-Channel Speech Enhancement. IEEE-ACM Transactions on Audio, Speech, and Language Processing, vol.30, pp. 2156-2172, 2022.
[26]. G. Yu, A. Li, H. Wang, Y. Wang, Y. Ke, and C. Zheng*. DBT-Net: Dual-branch federative magnitude and phase estimation with attention-in-attention transformer for monaural speech enhancement. IEEE-ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 2629-2644, 2022.
[27]. W. Liu, A. Li, C. Zheng*, and X. Li. A Separation and Interaction Framework for Causal Multi-channel Speech Enhancement. Digital Signal Processing, 126(2022)103519.
[28]. A. Li, C. Zheng*, L. Zhang, and X. Li. Glance and Gaze: A Collaborative Learning Framework for Single-channel Speech Enhancement. Applied Acoustics, 187(2022)108499.
[29]. F. Liu, H. Wang, Y. Ke, and C. Zheng*. One-shot voice conversion using a combination of U2-Net and vector quantization. Applied Acoustics, 199(2022)109014.
[30]. F. Zhang, J. Li, W. Meng, X. Li, and C. Zheng*. A Vehicle Whistle Database for Evaluation of Outdoor Acoustic Source Localization and Tracking using an Intermediate-Sized Microphone Array. Applied Acoustics, 201(2022)109113.
[31]. X. Luo, C. Zheng, A. Li, Y. Ke*, and X. Li. Analysis of trade-offs between magnitude and phase estimation in loss functions for speech denoising and dereverberation. Speech Communication, 145(2022)71-87.
[32]. K. Zheng, C. Zheng, J. Sang*, Y. Zhang, and X. Li. Noise-robust blind reverberation time estimation using noise-aware time-frequency masking. Measurement, 192(2022)110901.
[33]. Y. Nie, J. Wang*, C. Zheng, J. Xu, X. Li, Y. Wang, B. Zhong, J. Cai, and J. Sang. Measurement and modeling of the mechanical impedance of human mastoid and condyle. J. Acoust. Soc. Am., vol. 151, pp. 1434-1448, 2022.
[34]. J. Wang, X. Lu, J. Sang*, J. Cai, and C. Zheng. Effects of stimulation position and frequency band on auditory spatial perception with bilateral bone conduction. Trends in Hearing, vol. 26, pp. 1-17, 2022.
[35]. J. Wang, S. Stenfelt, S. Wu, Z. Yan, J. Sang*, C. Zheng, and X. Li. The Effect of Stimulation Position and Ear Canal Occlusion on Perception of Bone Conducted Sound. Trends in Hearing, vol. 26, pp. 1-15, 2022.
[36]. W. Liu, A. Li, X. Wang*, M. Yuan, Y. Chen, C. Zheng, and X. Li. A Neural Beamspace-Domain Filter for Real-Time Multi-Channel Speech Enhancement. Symmetry, 2022, 14(6), 1081.
[37]. K. Zheng, R. Meng, C. Zheng, X. Li, J. Sang*, J. Cai, J. Wang, and X. Wang. EmotionBox: A music-element-driven emotional music generation system based on music psychology. Frontiers in Psychology, 13(2022)841926.
[38]. Y. Nie, J. Sang*, C. Zheng, J. Xu, F. Zhang, and X. Li. An objective bone conduction verification tool using a piezoelectric thin-film force transducer. Front. Neurosci., 16(2022)1068682.
[39]. A. Li, W. Liu, C. Zheng*, C. Fan, and X. Li. Two Heads Are Better Than One: A Two-Stage Complex Spectral Mapping Approach for Monaural Speech Enhancement. IEEE-ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 1829-1843, 2021.
[40]. W. Meng, Y. Ke, J. Li, C. Zheng*, and X. Li. Finite Data Performance Analysis of One-Bit MVDR and Phase-Only MVDR. Signal Processing, 183(2021)108018.
[41]. G. Yu, Y. Wang, H. Wang, Q. Zhang, and C. Zheng*. A two-stage complex network using cycle-consistent generative adversarial networks for speech enhancement. Speech Communication, vol. 134, pp. 42-54, Nov. 2021.
[42]. L. Cheng, R. Peng, A. Li, C. Zheng*, and X. Li. Deep Learning-based Stereophonic Acoustic Echo Suppression without Decorrelation. J. Acoust. Soc. Am., vol. 150, pp. 816-829, 2021.
[43]. X. Guo, M. Yuan, Y. Ke, C. Zheng*, and X. Li. Distributed Node-Specific Block-Diagonal LCMV Beamforming in Wireless Acoustic Sensor Networks. Signal Processing, 185(2021)108085.
[44]. J. Wang, J. Zhang, J. Xu, C. Zheng*, and X. Li. An optimization framework for designing robust cascade biquad feedback controllers on active noise cancellation headphones. Applied Acoustics, 179(2021)108081.
[45]. J. Zhang, C. Zheng*, F. Zhang, and X. Li. A Low-complexity Volterra Filtered-Error LMS Algorithm with a Kronecker Product Decomposition. Applied Sciences, 2021, 11, 9637.
[46]. J. Wang, Y. Guan, C. Zheng, R. Peng*, and X. Li. A temporal-spectral generative adversarial network based end-to-end packet loss concealment for wideband speech transmission. J. Acoust. Soc. Am., vol. 150, pp. 2577-2588, 2021.
[47]. Y. Ke, A. Li, C. Zheng, R. Peng*, and X. Li. Low-complexity artificial noise suppression methods for deep learning-based speech enhancement algorithms. EURASIP Journal on Audio, Speech, and Music Processing, 2021, 17 (2021).
[48]. F. Liu, H. Wang, R. Peng*, C. Zheng, and X. Li. U2-VC: one-shot voice conversion using two-level nested U-structure. EURASIP Journal on Audio, Speech, and Music Processing, 2021, 40 (2021).
[49]. A. Li, C. Zheng, R. Peng*, and X. Li. On the importance of power compression and phase estimation in monaural speech dereverberation. JASA Express Letters, 1, 014802(2021).
[50]. R. Meng, J. Xiang, J. Sang*, C. Zheng, X. Li, S. Bleeck, J. Cai, and J. Wang. Investigation of an MAA Test with Virtual Sound Synthesis. Frontiers in Psychology, 12(2021)656052.
[51]. J. Ding, Y. Ke, L. Cheng, C. Zheng*, and X. Li. Joint estimation of binaural distance and azimuth by exploiting deep neural networks. J. Acoust. Soc. Am., vol. 147, pp. 2625-2635, 2020.
[52]. J. Ding, J. Li, C. Zheng*, and X. Li. Wideband sparse Bayesian learning for off-grid binaural sound source localization. Signal Processing, 166(2020)107250.
[53]. A. Li, M. Yuan, C. Zheng*, and X. Li. Speech enhancement using progressive learning-based convolutional recurrent neural network. Applied Acoustics, 166(2020)107347.
[54]. A. Li, R. Peng, C. Zheng*, and X. Li. A Supervised Speech Enhancement Approach with Residual Noise Control for Voice Communication. Applied Sciences, 2020, 10, 2894.
[55]. Z. Jiang, J. Sang*, C. Zheng, and X. Li. The effect of pinna filtering in binaural transfer functions on externalization in a reverberant environment. Applied Acoustics, 164(2020) 107257.
[56]. G. Li, C. Zheng*, X. Li, T. Yu, S. Bleeck, and J. Sang, Evaluation of headphone phase equalization on sound reproduction. Applied Acoustics, vol. 156, pp. 208-216, 2019.
[57]. C. Zheng, A. Deleforge, X. Li, and W. Kellermann. Statistical analysis of the multichannel Wiener filter using a bivariate normal distribution for sample covariance matrices. IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 26, no. 5, pp. 951-966, 2018.
[58]. C. Zheng, Z. Tan, R. Peng, and X. Li. Guided spectrogram filtering for speech dereverberation. Applied Acoustics, vol. 134, pp. 154-159, 2018.
[59]. R. Peng, Z. Tan, X. Li, and C. Zheng*. A perceptually motivated LP residual estimator in noisy and reverberant environments. Speech Communication, vol. 96, pp. 129-141, 2018.
[60]. Y. Ke, C. Zheng*, R. Peng, and X. Li. Robust Adaptive Beamforming using Noise Reduction Preprocessing-based Fully Automatic Diagonal and Steering Vector Estimation. IEEE Access, vol.5, pp. 12974-12987, 2017.
[61]. H. Yang, J. Wang, C. Zheng*, and X. Li. Stereophonic Channel Decorrelation Using a Binaural Masking Model. Applied Acoustics, vol.110, no. 9, pp. 128-136, Sept. 2016.
[62]. C. Zheng, C. Hofmann, X. Li, and W. Kellermann. Analysis of additional stable gain by frequency shifting for acoustic feedback suppression using statistical room acoustics. IEEE Signal Processing Letters, vol.23, no. 1, pp. 159-163, Jan. 2016.
[63]. C. Lei, J. Xu, C. Zheng*, and X. Li. Active headrest with robust performance against head movement. Journal of Low Frequency Noise, Vibration and Active Control, vol. 34, no. 3, pp. 233-250, 2015.
[64]. X. Li, Z. Cai, C. Zheng*, and X. Li. Equalization of loudspeaker response using balanced model truncation. J. Acoust. Soc. Am., vol. 137, no. 4, pp. EL241-EL247, 2015.
[65]. J. Sang, H. Hu, C. Zheng, G. Li, M. E. Lutman, and S. Bleeck. Speech quality evaluation of a sparse coding shrinkage noise reduction algorithm with normal hearing and hearing impaired listeners. Hearing Research, vol. 327, pp. 175-185, 2015.
[66]. C. Zheng, R. Peng, J. Li, and X. Li. A constrained MMSE LP residual estimator for speech dereverberation in noisy environments. IEEE Signal Processing Letters, vol. 21, no. 12, pp. 1462-1466, Dec. 2014.
[67]. C. Zheng, H. Yang, and X. Li. On generalized auto-spectral coherence function and its applications to signal detection. IEEE Signal Processing Letters, vol. 21, no. 5, pp. 559-563, May 2014.
[68]. S. Wang, C. Zheng*, R. Peng, and X. Li. A statistical analysis of power-level-difference-based dual-channel post-filter estimator. Applied Acoustics, vol. 83, pp. 40-46, 2014.
[69]. J. Sang, H. Hu, C. Zheng, G. Li, M. E. Lutman, and S. Bleeck. Evaluation of the sparse coding shrinkage noise reduction algorithm in normal hearing and hearing impaired listeners. Hearing Research, vol. 310, no. 4, pp. 36-47, 2014.
[70]. R. Peng, C. Zheng*, and X. Li. Two-stage optimization algorithm for adaptive IIR notch filter. Electronics Letters, vol. 50, no. 14, pp. 985-987, 2014.
[71]. C. Zheng, H. Liu, R. Peng, and X. Li. A Statistical Analysis of Two-Channel Post-Filter Estimators in Isotropic Noise Fields. IEEE Trans. on Audio, Speech, and Lang. Process., vol. 21, no. 2, pp. 336-342, 2013.
[72]. H. Hu, S. Wang, C. Zheng*, and X. Li. A cepstrum-based preprocessing and postprocessing for speech enhancement in adverse environments. Applied Acoustics, vol. 74, no. 12, pp. 1458-1462, 2013.
[73]. J. Wang, H. Liu, C. Zheng*, and X. Li. Spectral subtraction based on two-stage spectral estimation and modified cepstrum thresholding. Applied Acoustics, vol. 74, no. 3, pp. 450-458, 2013.
[74]. C. Zheng. On second-order statistics of log-periodogram and cepstral coefficients for processes with mixed spectra. Signal Processing, vol. 92, pp. 2560-2565, 2012.
[75]. C. Zheng, and X. Li. Detection of multiple sinusoids in unknown colored noise using truncated cepstrum thresholding and local signal-to-noise-ratio. Applied Acoustics, vol.73, pp. 809-816, 2012.
[76]. C. Zheng, Y. Zhou, and X. Li. Generalized framework for the nonparametric coherence function estimation. Electronics Letters, vol. 46, no. 6, pp.450-452, 2010.
[77]. M. Bao, C. Zheng, X. Li, J. Yang, and J. Tian. Acoustical vehicle detection based on bispectral entropy. IEEE Signal Processing Letters, vol. 16, no. 5, pp. 378-381, May 2009.
[78]. C. Zheng, M. Zhou, and X. Li. On the relationship of non-parametric methods for coherence function estimation. Signal Processing, vol. 88, pp.2863-2867, 2008.
Peer-Reviewed International Conference Proceedings
[1]. W. Meng, X. Li, A. Li, J. Li, X. Li, and C. Zheng. All neural Kronecker product beamforming for speech extraction with large-scale microphone arrays. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Seoul, Korea, April 14-19, 2024.
[2]. G. Yu, X. Zheng, N. Li, R. Han, C. Zheng, C. Zhang, C. Zhou, Q. Huang, and B. Yu. BAE-Net: A low complexity and high fidelity bandwidth-adaptive neural network for speech super-resolution. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Seoul, Korea, April 14-19, 2024.
[3]. F. Hao, H. Zhang, L. Dai, X. Luo, X. Li, and C. Zheng. RENET: A time-frequency domain general speech restoration network for ICASSP 2024 speech improvement challenge. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Seoul, Korea, April 14-19, 2024.
[4]. L. Dai, Y. Ke, H. Zhang, F. Hao, X. Luo, X. Li, and C. Zheng. A time-frequency band-split neural network for real-time full-band packet loss concealment. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Seoul, Korea, April 14-19, 2024.
[5]. A. Li, W. Meng, G. Yu, W. Liu, X. Li, and C. Zheng. TaylorBeamixer: Learning Taylor-Inspired All-Neural Multi-Channel Speech Enhancement from Beam-Space Dictionary Perspective. INTERSPEECH 2023, Dublin, Ireland, August 20-24, 2023.
[6]. J. Xu, J. Li, W. Meng, X. Li, and C. Zheng. Low-complexity Broadband Beampattern Synthesis using Array Response Control. INTERSPEECH 2023, Dublin, Ireland, August 20-24, 2023.
[7]. J. Chen, Y. Shi, W. Liu, W. Rao, S. He, A. Li, Y. Wang, Z. Wu, S. Shang, and C. Zheng. Gesper: A Unified Framework for General Speech Restoration. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Greece, June 4-10, 2023.
[8]. A. Li, S. You, G. Yu, C. Zheng*, and X. Li. Taylor, can you hear me now? A Taylor-unfolding framework for monaural speech enhancement. IJCAI-ECAI 2022.
[9]. G. Yu, A. Li, C. Zheng, Y. Guo, Y. Wang, and H. Wang. Dual-branch Attention-In-Attention Transformer for single-channel speech enhancement. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Singapore, May 22-27, 2022.
[10]. G. Yu, A. Li, Y. Wang, Y. Guo, H. Wang, and C. Zheng. Joint magnitude estimation and phase recovery using Cycle-in-Cycle GAN for non-parallel speech enhancement. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Singapore, May 22-27, 2022.
[11]. A. Li, W. Liu, C. Zheng, and X. Li. Embedding and Beamforming: All-neural Causal Beamformer for Multichannel Speech Enhancement. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Singapore, May 22-27, 2022.
[12]. A. Li, G. Yu, C. Zheng*, and X. Li. TaylorBeamformer: Learning All-Neural Beamformer for Multi-Channel Speech Enhancement from Taylor’s Approximation Theory, in INTERSPEECH 2022, Incheon, Korea, Sept. 18-22, 2022.
[13]. W. Meng, C. Zheng*, and X. Li. Fully Automatic Balance between Directivity Factor and White Noise Gain for Large-scale Microphone Arrays in Diffuse Noise Fields, in INTERSPEECH 2022, Incheon, Korea, Sept. 18-22, 2022.
[14]. Y. Guan, G. Yu, A. Li, C. Zheng*, and J. Wang. TMGAN-PLC: Audio Packet Loss Concealment using Temporal Memory Generative Adversarial Network, in INTERSPEECH 2022, Incheon, Korea, Sept. 18-22, 2022.
[15]. L. Cheng, C. Zheng*, A. Li, Y. Wu, R. Peng, and X. Li. A deep complex multi-frame filtering network for stereophonic acoustic echo cancellation, in INTERSPEECH 2022, Incheon, Korea, Sept. 18-22, 2022.
[16]. X. Luo, C. Zheng, A. Li, Y. Ke, and X. Li. Bifurcation and Reunion: A Loss-Guided Two-Stage Approach for Monaural Speech Dereverberation, in INTERSPEECH 2022, Incheon, Korea, Sept. 18-22, 2022.
[17]. A. Li, W. Liu, X. Luo, C. Zheng, and X. Li. ICASSP 2021 Deep Noise Suppression Challenge: Decoupling Magnitude and Phase Optimization with a Two-Stage Deep Network. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Toronto, Ontario, Canada, June 6-11, 2021.
[18]. R. Peng, L. Cheng, C. Zheng, and X. Li. ICASSP 2021 Acoustic Echo Cancellation Challenge: Integrated Adaptive Echo Cancellation with Time Alignment and Deep Learning-based Residual Echo plus Noise Suppression. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Toronto, Ontario, Canada, June 6-11, 2021.
[19]. A. Li, W. Liu, X. Luo, G. Yu, C. Zheng, and X. Li. A simultaneous denoising and dereverberation framework with target decoupling. in INTERSPEECH 2021, Brno, Czech Republic, Aug. 30-Sept. 3, 2021.
[20]. W. Liu, A. Li, Y. Ke, C. Zheng, and X. Li. Know Your Enemy, Know Yourself: A Unified Two-Stage Framework for Speech Enhancement. in INTERSPEECH 2021, Brno, Czech Republic, Aug. 30-Sept. 3, 2021.
[21]. R. Peng, L. Cheng, C. Zheng, and X. Li. Acoustic Echo Cancellation using Deep Complex Neural Network with Nonlinear Magnitude Compression and Phase Information. in INTERSPEECH 2021, Brno, Czech Republic, Aug. 30-Sept. 3, 2021.
[22]. A. Li, C. Zheng, L. Zhang, and X. Li. Learning to inference with early exit in the progressive speech enhancement. in the 2021 European Signal Processing Conference (EUSIPCO-2021), Virtual Conference, Aug. 23-27, 2021.
[23]. A. Li, C. Zheng, C. Fang, R. Peng, and X. Li. A Recursive Network with Dynamic Attention for Monaural Speech Enhancement. in INTERSPEECH 2020, Shanghai, China, Oct. 25-29, 2020.
[24]. A. Li, C. Zheng, L. Cheng, R. Peng, and X. Li. A time-domain monaural speech enhancement with recursive learning. in 2020 Asia-Pacific Signal and Information Processing Association (APSIPA), Virtual Conference, Dec. 7-10, 2020.
[25]. L. Cheng, C. Zheng, R. Peng, and X. Li. Improvement of DNN-based speech enhancement with non-normalized features by using an automatic gain control. in the 147th AES Convention, New York, Oct. 16-19, 2019.
[26]. G. Li, R. Peng, C. Zheng, and X. Li. A non-intrusive speech quality assessment model based on DNN. in Proc. of the 26th International Congress on Sound and Vibration, Prague, July 7-11, 2019.
[27]. Y. Leng, C. Zheng, F. Zhang, and X. Li. Fast independent vector analysis using non-overlapping frequency subbands partition and power ratio correlation. in Proc. of the 26th International Congress on Sound and Vibration, Prague, July 7-11, 2019.
[28]. Y. Nie, J. Sang, C. Zheng, and X. Li. Modelling of a chip scale package on the acoustic behavior of a MEMS microphone. in the 147th AES Convention, New York, Oct. 16-19, 2019.
[29]. J. Wang, D. Wang, Y. Chen, X. Lu, and C. Zheng. Noise robustness automatic speech recognition with convolutional neural network and time delay neural network. in the 147th AES Convention, New York, Oct. 16-19, 2019.
[30]. T. Wei, J. Sang, C. Zheng, and X. Li. Near-Field Compensated Higher-Order Ambisonics Using a Virtual Source Panning Method. in the 145th AES Convention, New York, Oct. 16-19, 2018.
[31]. Z. Li, P. Luo, C. Zheng, and X. Li. Vibrational contrast control for local sound source rendering on flat panel loudspeakers. in the 145th AES Convention, New York, Oct. 16-19, 2018.
[32]. P. Luo, Z. Li, C. Zheng, and X. Li. Theoretical analysis of the far-field directional active noise control. in the 145th AES Convention, New York, Oct. 16-19, 2018.
[33]. Y. Ke, Y. Hu, J. Li, C. Zheng, and X. Li. A Generalized Subspace Approach for Multichannel Speech Enhancement Using Machine Learning-Based Speech Presence Probability Estimation. in the 146th AES Convention, Dublin, Mar. 20-23, 2019.
[34]. R. Peng, B. Xu, G. Li, C. Zheng, and X. Li. Long-range Speech Acquirement and Enhancement with Dual-point Laser Doppler Vibrometers. in 23rd Inter. Conf. on Digital Signal Process., Shanghai, Nov. 19-21, 2018.
[35]. J. Li, J. Ding, C. Zheng, and X. Li. An efficient and robust speech dereverberation method using spherical microphone array. in 23rd Inter. Conf. on Digital Signal Process., Shanghai, Nov. 19-21, 2018.
[36]. Z. Jiang, J. Sang, J. Wang, C. Zheng, F. Zhang, and X. Li. An audio loudness compression and compensation method for miniature loudspeaker playback. in the 143rd AES Convention, New York, Oct. 18-20, 2017.
[37]. G. Li, Z. Jiang, J. Sang, C. Zheng, R. Peng, and X. Li. Auditory-based smoothing for equalization of headphone-to-eardrum transfer function. in the 143rd AES Convention, New York, Oct. 18-20, 2017.
[38]. J. Ding, J. Wang, C. Zheng, R. Peng, and X. Li. Analysis of Binaural Features for Supervised Localization in Reverberant Environments. in the 141st AES Convention, Los Angeles, Sept. 29-Oct. 2, 2016.
[39]. Y. Cui, J. Wang, C. Zheng, and X. Li. Acoustic echo cancellation for asynchronous systems based on resampling adaptive filter coefficients. in the 141st AES Convention, Los Angeles, Sept. 29-Oct. 2, 2016.
[40]. C. Zheng, X. Li, A. Schwarz, and W. Kellermann. Statistical analysis and improvement of coherent-to-diffuse power ratio estimators for dereverberation. in the 15th International Workshop on Acoustic Echo and Noise Control (IWAENC), Xi'an, China, Sept. 13-16, 2016.
[41]. C. Zheng, A. Schwarz, W. Kellermann, and X. Li. Binaural coherent-to-diffuse-ratio estimation for dereverberation using an ITD model. in the 2015 European Signal Processing Conference (EUSIPCO-2015), Nice, France, Aug. 31-Sept. 4, 2015.
[42]. R. Peng, C. Zheng, and X. Li. Bandwidth extension for speech acquired by laser Doppler vibrometer with an auxiliary microphone. in the 10th Inter. Conf. on Information, Communications and Signal Processing (ICICS), Singapore, Dec. 2-4, 2015.
[43]. C. Zheng, Y. Ke, R. Peng, X. Li, and Y. Zhou. Statistical analysis of temporal coherence function and its application in howling detection. in the 19th Inter. Conf. on Digital Signal Processing, Hong Kong, China, Aug. 20-23, 2014.
[44]. C. Zheng, S. Wang, R. Peng, and X. Li. Delayless method to suppress transient noise using speech properties and spectral coherence. in the 135th AES Convention, New York, Oct. 17-20, 2013.
[45]. R. Peng, J. Li, X. Chen, X. Li, and C. Zheng. Cepstrum-based preprocessing for howling detection in speech applications. in the 135th AES Convention, New York, Oct. 17-20, 2013.
[46]. J. Wang, C. Zheng, C. Zhang, and Y. Sun. The structure of noise power spectral density-driven adaptive post-filtering algorithm. in the 135th AES Convention, New York, Oct. 17-20, 2013.
[47]. C. Zheng, H. Liu, R. Peng, and X. Li. Temporal Coherence-Based Howling Detection for Speech Applications. in the AES 133rd Convention, San Francisco, 2012.
[48]. C. Zheng, H. Liu, and X. Li. Combining Capon and Bartlett Spectral Estimators for Detection of Multiple Sinusoids in Colored Noise Environments. J. Acoust. Soc. Am., Vol. 131, pp.3444-3444, 2012.
[49]. J. Sang, H. Hu, C. Zheng, G. Li, M. E. Lutman, and S. Bleeck. Evaluation of a sparse coding shrinkage algorithm in normal hearing and hearing impaired listeners. in the 20th European Signal Processing Conference (EUSIPCO 2012), Bucharest, Romania, Aug. 27-31, 2012.
[50]. X. Hu, S. Wang, Y. Zhou, X. Li, and C. Zheng. Robustness analysis of time-domain and frequency-domain adaptive null-forming schemes. in the 8th IEEE International Conference on Information, Communications, and Signal Processing (ICICS), Singapore, pp. 1-4, 2011.
[51]. C. Zheng, Y. Zhou, X. Hu, and X. Li. Two-channel post-filtering based on adaptive smoothing and noise properties. in Inter. Conf. Acoustics, Speech, and Signal Processing (ICASSP), Prague, Czech Republic, pp. 1745-1748, May 22-27, 2011.
[52]. C. Zheng, Y. Zhou, X. Hu, and X. Li. Speech enhancement based on the structure of noise power spectral density. in the 2010 European Signal Processing Conference (EUSIPCO-2010), Aalborg, Denmark, Aug. 23-27, 2010.
[53]. C. Zheng, Y. Zhou, X. Hu, J. Tian, and X. Li. Speech enhancement based on estimating expected values of speech cepstra. in Proc. of 20th Inter. Congress on Acoustics, ICA 2010, Aug. 23-27, Sydney, Australia.
Chinese Journal Papers
Accepted Papers
1. M. Wang+, H. Zhang+, C. Xu, X. Li, and C. Zheng+*. End-to-end joint acoustic feedback suppression, denoising, and dereverberation for hearing aids. Acta Acustica, accepted (in Chinese).
2. J. Xu, J. Li*, X. Li, and C. Zheng. Time-domain broadband beampattern synthesis based on accurate array response control. Acta Acustica, accepted (in Chinese).
Published Papers
1. Y. Ke, J. Li, R. Peng, C. Zheng*, and X. Li. Estimation of spherical-harmonic-domain masking functions for adaptive beamforming-based speech enhancement. Acta Acustica, 2021, 46(1): 67-80 (in Chinese).
2. J. Wang, Y. Chen, X. Lu, Q. Yang, Z. Yan, J. Sang*, and C. Zheng. Experiments and analysis of bone conduction effects on real human heads. Acta Acustica, 2021, 46(4): 687-698 (in Chinese).
3. L. Cheng, R. Peng, C. Zheng*, and X. Li. Two-stage complex spectral convolutional recurrent network for stereophonic acoustic echo cancellation. Acta Acustica, accepted (in Chinese).
4. J. Li, Y. Ke, C. Zheng*, and X. Li. Regularization-like broadband superdirective beamforming algorithm in the spherical harmonic domain. Acta Acustica, 2020, 45(2): 145-160 (in Chinese).
5. J. Li, R. Peng, C. Zheng*, and X. Li. Adaptive reverberation cancellation and sound source localization algorithm in the spherical harmonic domain. Acta Acustica, 2019, 44(5): 874-886 (in Chinese).
6. J. Ding, J. Li, R. Peng, C. Zheng*, and X. Li. Two-step supervised learning for indoor binaural sound source distance estimation. Acta Acustica, 2019, 44(4): 405-416 (in Chinese).
7. H. Yang, C. Zheng, and X. Li. Stereophonic acoustic echo cancellation based on a combination of spectral dominance and nonlinear transformation. Journal of Electronics & Information Technology, 2015, 37(2): 373-379 (in Chinese).
8. C. Zheng, X. Hu, Y. Zhou, and X. Li. Spectral subtraction based on the structure of the noise spectrum. Acta Acustica, 2010, 35(2): 215-222 (in Chinese).
9. Y. Zhou, C. Zheng, and X. Li. A novel robust gradient lattice-ladder adaptive filtering algorithm for stereophonic acoustic echo cancellation. Acta Acustica, 2010, 35(2): 223-229 (in Chinese).
10. C. Zheng, M. Zhou, and X. Li. A priori SNR estimation based on joint speech presence probability. Journal of Electronics & Information Technology, 2008, 30(7): 1680-1683 (in Chinese).
11. C. Zheng, X. Li, J. Chen, and J. Tian. Speech enhancement using adaptively smoothed periodograms. Acta Acustica, 2007, 32(5): 461-467 (in Chinese).