About me

I am a final-year Ph.D. candidate in the Computer Science Department at UCLA, advised by Prof. Cho-Jui Hsieh. My research focuses on trustworthy machine learning: I study formal methods for verifying and training ML models with verifiable safety guarantees for mission-critical applications, as well as empirical methods for evaluating and improving the robustness and safety of large-scale ML foundation models. I received my B.Eng. degree from the Department of Computer Science and Technology (CST) at Tsinghua University.

I am currently on the job market.

Research

  • Formal Verification for ML: General and scalable approaches for formally verifying neural networks (NNs) with diverse model architectures and verification specifications, enabling automatic verification across a broad range of ML applications.

  • Training Verifiably Robust and Safe ML Models: Verification-aware NN training methods to efficiently train NNs with stronger verifiable safety guarantees.

  • Verifiably Safe NN-based Control Systems: Applications of both formal verification and verification-aware NN training to the safety of NN-based control systems.

  • Empirical Robustness Evaluation and Defense for ML Foundation Models: Empirical methods for evaluating and enhancing the robustness and safety of large-scale ML foundation models such as LLMs.

Selected Publications

(* Equal contribution. For the full list, see Publications or Google Scholar.)
Neural Network Verification with Branch-and-Bound for General Nonlinearities
Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation
Defending LLMs against Jailbreaking Attacks via Backtranslation
Red Teaming Language Model Detectors with Language Models
Effective Robustness against Natural Distribution Shifts for Models with Different Training Data
Fast Certified Robust Training with Short Warmup
Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

Preprints (* Equal contribution)

Testing Neural Network Verifiers: A Soundness Benchmark with Hidden Counterexamples

Software

  • auto_LiRPA: A library for automatically computing differentiable verified output bounds for general computational graphs. Originally proposed in our NeurIPS 2020 paper, it is the core bound computation engine in the award-winning alpha-beta-CROWN.
  • alpha-beta-CROWN: A comprehensive neural network verification toolbox that builds multiple complete verification algorithms (such as GenBaB) on top of auto_LiRPA. It is the first-place winner at the annual VNN-COMP from 2021 to 2024.
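To give a flavor of what "verified output bounds" means, here is a minimal sketch of the simplest such technique, interval bound propagation, for a single linear layer in plain Python. This is an illustration of the underlying idea only, not auto_LiRPA's actual API; auto_LiRPA computes much tighter bounds (e.g. CROWN) for general computational graphs.

```python
# Interval bound propagation (IBP) sketch for one linear layer y = W x + b.
# Given elementwise input bounds lower <= x <= upper, compute guaranteed
# elementwise bounds on every output of the layer.
def linear_interval_bounds(W, b, lower, upper):
    lb, ub = [], []
    for i in range(len(W)):
        lo, hi = b[i], b[i]
        for j, w in enumerate(W[i]):
            if w >= 0:
                # Positive weight: lower input gives lower output.
                lo += w * lower[j]
                hi += w * upper[j]
            else:
                # Negative weight: upper input gives lower output.
                lo += w * upper[j]
                hi += w * lower[j]
        lb.append(lo)
        ub.append(hi)
    return lb, ub

# Example: y = 1*x1 - 2*x2 + 0.5, with x1, x2 each in [0, 1].
lb, ub = linear_interval_bounds([[1.0, -2.0]], [0.5], [0.0, 0.0], [1.0, 1.0])
# lb = [-1.5], ub = [1.5]: no input in the box can drive y outside [-1.5, 1.5].
```

Propagating such bounds layer by layer (with relaxations for nonlinearities like ReLU) yields a sound, if loose, certificate on the network's outputs; the libraries above replace this naive propagation with tighter, differentiable bounds.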

Awards

  • UCLA Dissertation Year Award (fellowship), 2024-2025
  • Amazon Fellowship (Amazon & UCLA Science Hub fellowship), 2022-2023
  • Four-time first-place winner at the International Verification of Neural Networks Competition (VNN-COMP), 2021-2024
  • Outstanding Bachelor’s Thesis Award, Tsinghua University, 2020

Teaching

Teaching Assistant at UCLA:

Work Experience

  • Research Scientist Intern at Meta, Sunnyvale, 2023
  • Student Researcher at Google Research, New York, 2022
  • Research Intern at JD AI Research, Shanghai, 2020-2021
  • Research Intern at ByteDance AI Lab, Shanghai, 2020