Zhouxing Shi

UCLA

About me

I am a final-year Ph.D. candidate in the UCLA Computer Science Department, advised by Prof. Cho-Jui Hsieh. My primary research focus is trustworthy machine learning. Before that, I received my B.Eng. degree from the Department of Computer Science and Technology at Tsinghua University, where I worked with Prof. Minlie Huang on natural language processing.

I am on the 2024-2025 job market.

Research

  • Formal Verification for ML: General and scalable approaches for formally verifying neural networks across diverse architectures and verification specifications, enabling automatic verification with high-quality verified guarantees in a broad range of ML applications.

  • Training Verifiably Robust and Safe ML Models: Verification-aware training methods that efficiently produce neural networks that are easier to verify and achieve stronger verified guarantees in practical applications.

  • Provably Safe NN-based Control Systems: Building provably safe neural-network-based control systems through verification and verification-aware training.

  • Empirical Robustness Evaluation and Defense for ML Foundation Models: Empirically evaluating and enhancing the robustness of large-scale ML foundation models.

Preprints (* equal contribution)

Neural Network Verification with Branch-and-Bound for General Nonlinearities

Publications (* equal contribution)

Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation
Defending LLMs against Jailbreaking Attacks via Backtranslation
Red Teaming Language Model Detectors with Language Models
Formal Verification for Neural Networks with General Nonlinearities via Branch-and-Bound
Effective Robustness against Natural Distribution Shifts for Models with Different Training Data
Towards Robustness Certification Against Universal Perturbations
Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation
On the Adversarial Robustness of Vision Transformers
On the Sensitivity and Stability of Model Interpretations in NLP
On the Convergence of Certified Robust Training with Interval Bound Propagation
Robust Text CAPTCHAs Using Adversarial Examples
Fast Certified Robust Training with Short Warmup
Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond
Robustness Verification for Transformers

Awards

  • UCLA Dissertation Year Award, 2024-2025
  • Amazon Fellowship (Amazon & UCLA Science Hub Fellowship), 2022-2023
  • Four-time first-place winner at the International Verification of Neural Networks Competition (VNN-COMP), 2021-2024
  • Outstanding Bachelor’s Thesis Award, Tsinghua University, 2020

Teaching

Teaching Assistant at UCLA:

Service