General

Dongbin Zhao, Ph.D., Professor, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Email: dongbin.zhao@ia.ac.cn
Telephone: (+86) 10 8254 4764 
Mobile phone:
Address: Rm 1005, Smart Building, No. 95 Zhongguancun East Road, Beijing, China
Postcode: 100190

Research Areas

Deep Reinforcement Learning, Computational Intelligence, Intelligent Driving, Game AI, Intelligent Transportation Systems, Robotics, Process Control

Education

2000.5-2002.1, Postdoctoral Fellow, Tsinghua University, Beijing, China.
1996.9-2000.4, Ph.D., Harbin Institute of Technology, Harbin, China.
1994.9-1996.8, M.S., Harbin Institute of Technology, Harbin, China.
1990.9-1994.8, B.S., Harbin Institute of Technology, Harbin, China.

Experience

   
Work Experience
2007.8-2008.8, Visiting Scholar, University of Arizona, Tucson, U.S.A.
2002.4-present, Associate Professor, then Professor, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
2000.5-2002.1, Postdoctoral Fellow, Department of Mechanical Engineering, Tsinghua University, Beijing, China
Teaching Experience
Computational Intelligence

Honors & Distinctions

[1] 2015, Third Award for Science and Technology Progress of Beijing, China. 

[2] 2014, Zhu Li Yue Hua Excellent Teaching Award, China.

[3] 2011, Sustentation Fund for International Conference, K. C. Wong Education Foundation, Hong Kong.
[4] 2010, Third Award for Science and Technology Progress of Beijing, China.
[5] 2010, First Award for Science and Technology Progress, China Petroleum and Chemical Automation Implementation Association.
[6] 2010, Third Prize, International Scilab Contest, INRIA & CASIA.
[7] 2010, Excellent Reviewer, Chinese Journal of Mechanical Engineering.
[8] 2009, Third Award for Science and Technology Progress, China Petroleum and Chemical Industry Association.
[9] 2009, Best Paper Award, The Second International Symposium on Intelligent Informatics, Qinhuangdao, China, September 13-15, 2009.
[10] 2009, Major Contribution Award, Key Lab of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences.
[11] 2009, Excellent Reviewer, Chinese Journal of Mechanical Engineering.
[12] 2008, listed in Marquis Who’s Who in the World.
[13] 2007, Excellent Reviewer, Chinese Journal of Mechanical Engineering.
[14] 2006, Excellent Reviewer, Chinese Journal of Mechanical Engineering.
[15] 2006, Scientific Award (Cooperation with Gary G. Yen), K. C. Wong Education Foundation, Hong Kong.
[16] 2004, Dean Award, Key Lab of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences.
[17] 2003, Sustentation Fund for International Conference, K. C. Wong Education Foundation, Hong Kong.
[18] 2001, First Award for Scientific Progress of Chinese Universities, Ministry of Education of China.
[19] 2001, Excellent Young Researcher Award, China Welding Society.
[20] 1999, Second Award for Scientific Progress of National Defense, Commission of Science, Technology and Industry for National Defense of China.

Publications

Call for Papers

IEEE Computational Intelligence Magazine

Special Issue on

Deep Reinforcement Learning and Games

 

Aims and Scope

Recently, there has been tremendous progress in artificial intelligence (AI), computational intelligence (CI), and games. In 2015, Google DeepMind published the paper "Human-level control through deep reinforcement learning" in Nature, showing the power of AI&CI in learning to play Atari video games directly from screen capture. In 2016, it published the Nature cover paper "Mastering the game of Go with deep neural networks and tree search," introducing the computer Go program AlphaGo. In March 2016, AlphaGo beat the world's top Go player, Lee Sedol, by 4:1. In early 2017, Master, a variant of AlphaGo, won 60 matches against top Go players. In late 2017, AlphaGo Zero learned only from self-play and beat the original AlphaGo without a single loss (Nature, 2017). This marks a new milestone in the history of AI&CI, at the core of which is the algorithm of deep reinforcement learning (DRL). Moreover, the achievements of DRL in games are manifest. In 2017, AI programs beat expert players in Texas Hold'em poker (Science, 2017). OpenAI developed an AI that outperformed the champion in the 1v1 Dota 2 game. Facebook released a huge database of StarCraft I replays. Blizzard and DeepMind turned StarCraft II into an AI research environment with a more open interface. In all these games, DRL plays an important role.

 

Needless to say, the great achievements of DRL were first obtained in the domain of games, and it is timely to report the major advances in a special issue of IEEE Computational Intelligence Magazine. IEEE Transactions on Neural Networks and Learning Systems and IEEE Transactions on Computational Intelligence and AI in Games organized similar special issues in 2017.

 

DRL is able to output control signals directly from input images, integrating the perception capacity of deep learning (DL) with the decision-making capability of reinforcement learning (RL). This mechanism has many similarities to human modes of thinking. However, much work remains to be done. The theoretical analysis of DRL, e.g., its convergence, stability, and optimality, is still in its early days. Learning efficiency needs to be improved by proposing new algorithms or by combining DRL with other methods. DRL algorithms also still need to be demonstrated in more diverse practical settings. Therefore, the aim of this special issue is to publish the most advanced research and state-of-the-art contributions in the field of DRL and its applications in games. We expect this special issue to provide a platform for international researchers to exchange ideas and to present their latest research on relevant topics. Specific topics of interest include, but are not limited to:

 

·       Survey on DRL and games;

·       New AI&CI algorithms in games;

·       Learning forward models from experience;

·       New algorithms of DL, RL and DRL;

·       Theoretical foundation of DL, RL and DRL;

·       DRL combined with search algorithms or other learning methods;

·       Challenges of AI&CI in games, such as limitations in strategy learning;

·       Applications of DRL or AI&CI games to realistic and complex systems.
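As background for prospective contributors, the temporal-difference (TD) update at the core of DRL algorithms such as DQN can be sketched in a few lines. The sketch below is illustrative only: the 5-state chain environment, the hyperparameters, and all function names are assumptions, not part of this call. A real DQN replaces the Q-table with a deep network trained to minimize the same TD error, with experience replay and a target network.

```python
import numpy as np

# Illustrative sketch (not part of the call): tabular Q-learning on a
# hypothetical 5-state chain. DQN uses the same TD target, but with a deep
# network approximating Q instead of a table.

N_STATES, N_ACTIONS = 5, 2   # states 0..4; actions: 0 = left, 1 = right
GOAL = N_STATES - 1          # reaching the right end yields reward 1

def step(state, action):
    """Deterministic chain dynamics: move left or right; reward 1 at the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s = int(rng.integers(N_STATES - 1))  # random non-goal start state
        for _ in range(100):                 # cap episode length
            # epsilon-greedy exploration
            a = int(rng.integers(N_ACTIONS)) if rng.random() < eps \
                else int(np.argmax(q[s]))
            s2, r, done = step(s, a)
            # TD target: r + gamma * max_a' Q(s', a'); DQN minimizes this error
            target = r + (0.0 if done else gamma * np.max(q[s2]))
            q[s, a] += alpha * (target - q[s, a])
            s = s2
            if done:
                break
    return q

if __name__ == "__main__":
    q = train()
    # Learned greedy action per non-terminal state (1 = move right)
    print([int(np.argmax(q[s])) for s in range(N_STATES - 1)])
```

The "perception" half of DRL enters when the state index is replaced by raw pixels and the table by a convolutional network, which is exactly the step taken in the Atari work cited above.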

Important Dates

Submission Deadline: October 1st, 2018

Notification of Review Results: December 10th, 2018

Submission of Revised Manuscripts: January 31st, 2019

Submission of Final Manuscript: March 15th, 2019

Special Issue Publication: August 2019 Issue

 

Guest Editors

D. Zhao, Institute of Automation, Chinese Academy of Sciences, China, Dongbin.zhao@ia.ac.cn

 

Dr. Zhao is a professor at the Institute of Automation, Chinese Academy of Sciences, and also a professor with the University of Chinese Academy of Sciences, China. His current research interests are in the areas of deep reinforcement learning, computational intelligence, adaptive dynamic programming, games, and robotics. Dr. Zhao is an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems and IEEE Computational Intelligence Magazine, among others. He is the Chair of the Beijing Chapter and the past Chair of the Adaptive Dynamic Programming and Reinforcement Learning Technical Committee of the IEEE Computational Intelligence Society (CIS). He has served as guest editor for several renowned international journals, including as leading guest editor of the IEEE Transactions on Neural Networks and Learning Systems special issue on Deep Reinforcement Learning and Adaptive Dynamic Programming.

 

S. Lucas, Queen Mary University of London, UK, simon.lucas@qmul.ac.uk

 

Dr. Lucas was a full professor of computer science, in the School of Computer Science and Electronic Engineering at the University of Essex until July 31, 2017, and now is the Professor and Head of School of Electronic Engineering and Computer Science at Queen Mary University of London. He was the Founding Editor-in-Chief of the IEEE Transactions on Computational Intelligence and AI in Games, and also co-founded the IEEE Conference on Computational Intelligence and Games, first held at the University of Essex in 2005.  He is the Vice President for Education of the IEEE Computational Intelligence Society. His research has gravitated toward Game AI: games provide an ideal arena for AI research, and also make an excellent application area.

 

J. Togelius, New York University, USA, julian.togelius@nyu.edu.

 

Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering, New York University, USA. He works on all aspects of computational intelligence and games and on selected topics in evolutionary computation and evolutionary reinforcement learning. His current main research directions involve search-based procedural content generation in games, general video game playing, player modeling, and fair and relevant benchmarking of AI through game-based competitions. He is the Editor-in-Chief of IEEE Transactions on Computational Intelligence and AI in Games, and a past chair of the IEEE CIS Technical Committee on Games.

 

Submission Instructions

1.     The IEEE CIM requires all prospective authors to submit their manuscripts in electronic format, as a PDF file. The maximum length for papers is typically 20 double-spaced typed pages with 12-point font, including figures and references. Submitted manuscripts must be written in English in single-column format. Authors should specify up to 5 keywords on the first page of their submitted manuscript. Additional information about submission guidelines and information for authors is provided at the IEEE CIM website. Submissions will be made via https://easychair.org/conferences/?conf=ieeecimcitbb2018.

2.     Also send an email to guest editor D. Zhao (dongbin.zhao@ia.ac.cn) with the subject "IEEE CIM special issue submission" to notify us of your submission.

3.      Early submissions are welcome. We will start the review process as soon as we receive your contribution.


Papers

International journal papers in the recent five years:

[1]       Yaran Chen, Dongbin Zhao, Le Lv, Qichao Zhang, “Multi-task learning for dangerous object detection in autonomous driving”, Information Sciences, DOI: 10.1016/j.ins.2017.08.035, 2017.

[2]       Li Bu, Dongbin Zhao, Cesare Alippi, “An incremental change detection test based on density difference estimation,” IEEE Transactions on Systems, Man and Cybernetics: Systems, SMCA-16-06-0569, 10.1109/TSMC.2017.2682502, 2017.

[3]       Yuanheng Zhu, Dongbin Zhao, “Comprehensive comparison of online ADP algorithms for continuous-time optimal control,” Artificial Intelligence Review, DOI: 10.1007/s10462-017-9548-4, 2017.

[4]       Qichao Zhang, Dongbin Zhao*, Yuanheng Zhu, “Data-driven adaptive dynamic programming for continuous-time fully cooperative games with partially constrained inputs,” Neurocomputing, 10.1016/j.neucom.2017.01.076, 2017.

[5]       Yuanheng Zhu, Dongbin Zhao*, Xiong Yang, Qichao Zhang, “Policy iteration for H∞ optimal control of polynomial nonlinear systems via sum of squares programming,” IEEE Transactions on Cybernetics, DOI 10.1109/TCYB.2016.2643687, 2016.

[6]       Li Bu, Cesare Alippi, Dongbin Zhao*, “A distribution-free change detection test based on density difference estimation,” IEEE Transactions on Neural Networks and Learning Systems, DOI 10.1109/TNNLS.2016.2619909, 2016.

[7]       Dongbin Zhao*, Yaran Chen, Le Lv, “Deep reinforcement learning with visual attention for vehicle classification,” IEEE Transactions on Cognitive and Developmental Systems, DOI 10.1109/TCDS.2016.2614675, 2016. (Top 5 popular articles)

[8]       Qichao Zhang, Dongbin Zhao*, Ding Wang, “Event-based robust control for uncertain nonlinear systems using adaptive dynamic programming,” IEEE Transactions on Neural Networks and Learning Systems, DOI 10.1109/TNNLS.2016.2614002, 2016.

[9]       Yuanheng Zhu, Dongbin Zhao*, Haibo He, Junhong Ji, "Event-triggered optimal control for nonlinear constrained-input systems with partially unknown dynamics via adaptive dynamic programming," IEEE Transactions on Industrial Electronics, DOI 10.1109/TIE.2016.2597763.

[10]    Qichao Zhang, Dongbin Zhao*, Yuanheng Zhu, “Event-triggered H∞ control for continuous-time nonlinear system via concurrent learning”, IEEE Transactions on Systems, Man and Cybernetics: Systems, vol. 47, no. 7, pp. 1071–1081, 2017, DOI 10.1109/TSMC.2016.2531680.

[11]    Zhen Zhang, Dongbin Zhao*, Junwei Gao, Dongqing Wang, Yujie Dai, “FMRQ-A multiagent reinforcement learning algorithm for fully cooperative tasks”, IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1367–1379, 2017. DOI 10.1109/TCYB.2016.2544866.

[12]    Dongbin Zhao, Zhongpu Xia, Qichao Zhang, “Model-free optimal control based intelligent cruise control with hardware-in-the-loop demonstration,” IEEE Computational Intelligence Magazine, vol. 12, no. 2, pp. 56–69, 2017. 10.1109/MCI.2017.2670380.

[13]    Le Lv, Dongbin Zhao*, Qingqiong Deng, “A semi-supervised predictive sparse decomposition based on the task-driven dictionary learning,” Cognitive Computation, vol. 9, pp.115–124, 2017. DOI 10.1007/s12559-016-9438-0, 2017.

[14]    Yuanheng Zhu, Dongbin Zhao*, Xiangjun Li, “Iterative adaptive dynamic programming solving unknown nonlinear zero-sum game based on online measurement”, IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 3, pp. 714 – 725, 2017, DOI 10.1109/TNNLS.2016.2561300.

[15]    Ding Wang, Derong Liu, Qichao Zhang, Dongbin Zhao, “Data-based adaptive critic design for nonlinear robust optimal control with uncertain dynamics,” IEEE Transactions on Systems, Man and Cybernetics: Systems, vol. 46, no. 11, pp. 1544-1555, 2016, DOI 10.1109/TSMC.2015.2492941.

[16]    Yuanheng Zhu, Dongbin Zhao*, Xiangjun Li, “Using reinforcement learning techniques to solve continuous-time nonlinear optimal tracking problem without system dynamics”, IET Control Theory and Applications, vol. 10, no. 12, pp. 1339-1347, 2016, DOI 10.1049/iet-cta.2015.0769.

[17]    Zhongpu Xia, Dongbin Zhao*, “Online reinforcement learning control by Bayesian inference,” IET Control Theory & Applications, vol. 10, no. 12, pp. 1331-1338, 2016, DOI 10.1049/iet-cta.2015.0669.

[18]    Yufei Tang, Haibo He, Zhen Ni, Xiangnan Zhong, Dongbin Zhao, and Xin Xu, Fuzzy-based goal representation adaptive dynamic programming, IEEE Transactions on Fuzzy Systems, vol. 24, no. 5, pp. 1159-1175, 2016, DOI 10.1109/TFUZZ.2015.2505327.

[19]    Dongbin Zhao*, Qichao Zhang, Ding Wang, Yuanheng Zhu, “Experience replay for optimal control of nonzero-sum game systems with unknown dynamics”, IEEE Transactions on Cybernetics, vol.46, no.3, pp. 854-865, 2016.

[20]    Dongbin Zhao*, Yuanheng Zhu, “MEC—a near-optimal online reinforcement learning algorithm for continuous deterministic systems,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 2, pp. 346–356, 2015.

[21]    Yuanheng Zhu, Dongbin Zhao*, Haibo He, Junhong Ji, “Convergence proof of approximate policy iteration for undiscounted optimal control of discrete-time systems,” Cognitive Computation, vol. 7, no. 6, pp. 763-771, 2015. DOI 10.1007/s12559-015-9350-z.

[22]    Zhen Ni, Haibo He*, Dongbin Zhao, Xin Xu, and Danil Prokhorov, “GrDHP: a general utility function representation for dual heuristic dynamic programming,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 3, pp. 614–627, 2015.

[23]    Yuanheng Zhu, Dongbin Zhao*, Derong Liu, “Convergence analysis and application of fuzzy-HDP for nonlinear discrete-time HJB systems,” Neurocomputing, Vol. 149, pp. 124–131, 2015.

[24]    Zhen Zhang, Dongbin Zhao*. Clique-based cooperative multiagent reinforcement learning using factor graphs, IEEE/CAA Journal of Automatica Sinica, vol. 3, no. 1, pp. 248–256, 2015.

[25]    Dongbin Zhao*, Zhongpu Xia, Ding Wang, “Model-free optimal control for affine nonlinear systems based on action dependent heuristic dynamic programming with convergence analysis,” IEEE Transactions on Automation Science and Engineering, vol. 12, no. 4, pp. 1461–1468, 2015, DOI 10.1109/TASE.2014.2348991.

[26]    Yuanheng Zhu, Dongbin Zhao*, “A data-based online reinforcement learning algorithm satisfying probably approximately correct principle,” Neural Computing and Applications, vol. 26, no. 4, pp. 775-787, 2015,  DOI.10.1007/s00521-014-1738-2.

[27]    Dongbin Zhao, Zhaohui Hu, Zhongpu Xia*, Cesare Alippi, Ding Wang, “Full range adaptive cruise control based on supervised adaptive dynamic programming,” Neurocomputing, vol.125, pp. 57-67, 2014.

[28]    Cesare Alippi, Derong Liu, Dongbin Zhao*, Li Bu, “Detecting and reacting to changes in sensing units: the active classifier case,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 44, no. 3, pp. 353-362, 2014.

[29]    Bin Wang, Dongbin Zhao*, Cesare Alippi, Derong Liu, “Dual heuristic dynamic programming for nonlinear discrete-time uncertain systems with state delay,” Neurocomputing, vol. 134, pp. 222-229, 2014.

[30]    Dongbin Zhao*, Bin Wang, Derong Liu, “A supervised actor-critic approach for adaptive cruise control,” Soft Computing, 2013, Vol. 17, No. 11, pp 2089-2099.

[31]    Ding Wang, Derong Liu*, Dongbin Zhao, Yuzhu Huang, and Dehua Zhang, “A neural-network-based iterative GDHP approach for solving a class of nonlinear optimal control problems with control constraints,” Neural Computing and Applications, vol. 22, no. 2, pp. 219-227, Feb. 2013.

[32]    Dongbin Zhao*, Yujie Dai, Zhen Zhang, “Computational intelligence in urban traffic signal control: a survey,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 42, no. 4, pp. 485-494, 2012.

[33]    Dongbin Zhao, Zhen Zhang*, Yujie Dai, “Self-teaching adaptive dynamic programming for Go-Moku,” Neurocomputing, vol. 78, no. 1, pp. 23-29, 2012.

[34]    Derong Liu*, Ding Wang, Dongbin Zhao, Qinglai Wei, Ning Jin, “Neural-network-based optimal control for a class of unknown discrete-time nonlinear systems using globalized dual heuristic programming,” IEEE Transactions on Automation Science and Engineering, vol. 9, no. 3, pp.628 – 634, 2012.

[35]    Ding Wang, Derong Liu*, Qinglai Wei, Dongbin Zhao, Ning Jin, “Optimal control of unknown nonaffine nonlinear discrete-time systems based on adaptive dynamic programming,” Automatica, vol. 48, no. 8, pp.1825–1832, 2012.

[36]    Yongduan Song, Frank L. Lewis, Marios Polycarpou, Danil Prokhorov, Dongbin Zhao, Editorial: new developments in neural network structures for signal processing, autonomous decision, and adaptive control, IEEE Transactions on Neural Networks and Learning Systems, Vol. 28, No. 3, pp. 494 – 499, 2017.

[37]    Amir Hussain, Dacheng Tao, Jonathan Wu, Dongbin Zhao, “Editorial: computational intelligence for changing environments,” IEEE Computational Intelligence Magazine, vol. 10, no. 4, pp. 10-11, 2015. DOI: 10.1109/MCI.2015.2472119.

[38]    Stefano Squartini, Derong Liu, Francesco Piazza, Dongbin Zhao, Haibo He, “Editorial: computational energy management in smart grids”, Neurocomputing, Vol. 170: 267-269, 2015.

[39]    Xin Xu, Haibo He, Dongbin Zhao, Shiliang Sun, Lucian Busoniu, Simon X. Yang, “Editorial: machine learning with applications to autonomous systems”, Mathematical Problems in Engineering, 2015.

[40]    Dongbin Zhao*, Cesare Alippi, Derong Liu, Huaguang Zhang, “Editorial: Intelligent control and information processing,” Soft Computing, Vol. 17, No. 11, pp. 1967-1969, 2013.

[41]    Dongbin Zhao*, Yi Shen, Zhanshan Wang, Xiaolin Hu, “Data-based control, optimization, modeling and applications,” Neural Computing and Applications, vol.23, no. 7-8, pp. 1839–1842, 2013.

[42]    Huaguang Zhang*, Cesare Alippi, Dongbin Zhao, “Data-driven optimal algorithms and their applications to pattern recognition,” Neurocomputing, vol. 78, no. 1, 2012, pp. 1-2.



Patents

More than 30 Chinese invention patents have been granted.


Conferences

Major Recent Conference Activities

[1]           The 24th International Conference on Neural Information Processing (ICONIP 2017), November 14-18, 2017, Guangzhou, China, Program Chair.

[2]           IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2017), Honolulu, Hawaii, Nov. 27- Dec. 1, 2017, Program Chair.

[3]           IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2016), Athens, Greece, Dec. 6-9, 2016, Program Chair.

[4]           IEEE World Congress on Computational Intelligence (WCCI 2016), Vancouver, Canada, July 25-29, 2016, Publicity Chair.

[5]           The 13th World Congress on Intelligent Control and Automation (WCICA 2016), Guilin, China, June 12–15, 2016, Program Co-Chair.

[6]           The 5th International Conference on Information Science and Technology (ICIST 2015), Changsha, China, April 24-26, 2015, Program Chair.

[7]           IEEE Symposiums Series on Computational Intelligence (SSCI 2014), Atlanta, US, Dec. 9-12, 2014, Poster Chair.

[8]           IEEE CIS Summer School on Automated Computational Intelligence, Beijing, July 5-11, 2014, Chair.

[9]           IEEE World Congress on Computational Intelligence (WCCI 2014: IJCNN 2014, FUZZ-IEEE 2014 and CEC 2014), July 6-11, 2014, Beijing, Finance Chair.

[10]        4th International Conference on Intelligent Control and Information Processing, June 9-11, 2013, Beijing, China, Program Chair.


Collaboration

Professional Membership

[1]           2017.1-2018.12, IEEE Computational Intelligence Society (CIS) Beijing Chapter, Chair.

[2]           2015.1-2016.12, IEEE CIS Adaptive Dynamic Programming and Reinforcement Learning Technical Committee, Chair.

[3]           2015.1-2016.12, IEEE CIS Multimedia Committee, Chair.

[4]           2015.1-2015.12, IEEE CIS Travel Grant Subcommittee, Chair.

[5]           2013.1-2014.12, IEEE CIS Newsletter Subcommittee, Chair.

[6]           2013-, Secretary General, Computer Application Committee, Chinese Association of Automation.

[7]           2013-, Senior Member, Chinese Association of Automation.

[8]           2010.10-, Senior member, IEEE.

Editorial Board

[1]           2014-, Associate Editor, IEEE Computational Intelligence Magazine.

[2]           2014-, Associate Editor, International Journal of Computational Intelligence and Pattern Recognition.

[3]           2012-, Associate Editor, IEEE Transactions on Neural Networks and Learning Systems.

[4]           2011-, Associate Editor, Cognitive Computation.

Selected Guest Editors

[1]           2017, D. Zhao, D. Liu, F. L. Lewis, J. Principe, S. Squartini, “Deep Reinforcement Learning and Adaptive Dynamic Programming”, Special Issue, IEEE Transactions on Neural Networks and Learning Systems.

[2]           2017, Yongduan Song, Frank L. Lewis, Marios Polycarpou, Danil Prokhorov, and Dongbin Zhao, “New Developments in Neural Network Structures for Signal Processing, Autonomous Decision, and Adaptive Control” Special issue, IEEE Transactions on Neural Networks and Learning Systems.

[3]           2015, Amir Hussain, Dacheng Tao, Jonathan Wu and Dongbin Zhao, “Concept Drift in Biologically-Inspired Learning” Special Issue, IEEE Computational Intelligence Magazine.

[4]           2011, Tianyou Chai*, Zhongsheng Hou, Frank L. Lewis, Amir Hussain, Dongbin Zhao, “Data-Based Optimization, Control and Modeling” Special Issue, IEEE Transactions on Neural Networks.

Students

Graduated Students

Yi Tian (田艺)

Zhaohui Hu (胡朝辉)

Yongsheng Su (苏永生)

Yujie Dai (戴钰桀)

Zhen Zhang (张震)

Bin Wang (王滨)

Yuanheng Zhu (朱圆恒)

Haitao Wang (王海涛)

Zhongpu Xia (夏中谱)

Current Students

Qichao Zhang (张启超)

Le Lv (吕乐)

Li Bu (卜丽)

Haoran Li (李浩然)

Yaran Chen (陈亚冉)

Zhentao Tang (唐振韬)

Kun Shao (邵坤)

Dong Li (李栋)