Hi, I am a Ph.D. candidate in Computer Science at UCLA, advised by Cho-Jui Hsieh. Prior to coming to UCLA, I was a Ph.D. student at UC Davis from 2015 to 2018. I received my bachelor's degree in Computer Science from the University of Electronic Science and Technology of China, advised by Yu Xiang.
My research interests include large-scale optimization, distributed and parallel machine learning, recommender systems, and the robustness of machine learning models. You can reach me at email@example.com.
I am actively seeking positions in both industry and academia.
Adversarial Attack on Interactive Dialog System
Although we have shown that sequence-to-sequence models are not robust against adversarial attacks, the robustness of dialog systems built on deep neural networks remains an open question. We develop three algorithms that successfully attack goal-oriented dialog systems, with or without knowledge of the underlying network's structure, and we improve the model's robustness through adversarial training.
Query Efficient Hard-label Black-box Attack
It has been shown that DNN models are vulnerable to very small, human-imperceptible perturbations. However, attacking a model remains challenging when we can only observe its hard-label predictions rather than probability outputs. We develop a query-efficient algorithm that applies to industrial-strength image classifiers.
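The core idea behind hard-label attacks of this kind can be illustrated with a small sketch: using only label queries, estimate the distance from an input to the decision boundary along a direction by binary search, then search over directions for the smallest such distance. This is a minimal toy illustration on a linear classifier, not the actual algorithm from the paper; the names `hard_label_classifier` and `boundary_distance` are hypothetical.

```python
import numpy as np

def hard_label_classifier(x):
    # Toy hard-label black box: returns only a class label, no probabilities.
    # Its decision boundary is the hyperplane x[0] = 1.0.
    return int(x[0] > 1.0)

def boundary_distance(x0, direction, query_fn, tol=1e-4, max_radius=10.0):
    """Binary-search the distance from x0 to the decision boundary along
    `direction`, using only hard-label queries."""
    direction = direction / np.linalg.norm(direction)
    label0 = query_fn(x0)
    lo, hi = 0.0, max_radius
    if query_fn(x0 + hi * direction) == label0:
        return np.inf  # no boundary crossing within max_radius
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if query_fn(x0 + mid * direction) == label0:
            lo = mid
        else:
            hi = mid
    return hi

# Search random directions and keep the smallest boundary distance; this
# approximates the norm of the minimal adversarial perturbation.
rng = np.random.default_rng(0)
x0 = np.zeros(2)
best = min(
    boundary_distance(x0, rng.standard_normal(2), hard_label_classifier)
    for _ in range(200)
)
print(round(best, 2))  # close to 1.0, the true distance to the boundary
```

Each distance estimate costs only a handful of label queries, which is why reformulating the attack as a search over directions can be far more query-efficient than probing the boundary point by point.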
Adversarial Example for Sequence to Sequence Model
Recent research on DNNs has shown growing concern about robustness to adversarial examples. We are designing algorithms to generate adversarial examples for sequence-to-sequence models, which are widely used in machine translation and text summarization.
Low-rank Approximation of rankSVM in Recommendation System
A new low-rank approximation method for recommender systems. Prior work trains on the data with rankSVM and its variants; we instead use a low-rank method to achieve better speed and accuracy on the pairwise data in rankSVM.
Line Search Method for Distributed Primal-Dual Optimization
A new line search method for stochastic algorithms in large-scale distributed machine learning, designed to overcome the large primal-dual gap during training and thereby achieve faster convergence and better accuracy in the distributed setting.
University of California Los Angeles
Sept 2018 - present
Ph.D. candidate in Computer Science
University of California Davis
Sept 2015 - Sept 2018
Ph.D. student in Computer Science
University of Electronic Science and Technology of China
Sept 2011 - June 2015
Bachelor of Science in Computer Science
Rethinking Architecture Selection in Differentiable NAS.
Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh. To appear in International Conference on Learning Representations (ICLR), 2021. (Outstanding Paper Award)
DrNAS: Dirichlet Neural Architecture Search.
Xiangning Chen*, Ruochen Wang*, Minhao Cheng*, Xiaocheng Tang, Cho-Jui Hsieh (* equal contribution). To appear in International Conference on Learning Representations (ICLR), 2021.
Self-Progressing Robust Training.
Minhao Cheng, Pin-Yu Chen, Sijia Liu, Shiyu Chang, Cho-Jui Hsieh, Payel Das. To appear in AAAI Conference on Artificial Intelligence (AAAI), 2021.