About Me
I am a pre-doctoral graduate student at UCLA.
I am currently a member of TurningPointAI, an AIGC research collaboration spanning several labs that pursues targeted topics in Multimodal Language Agents, advised by Dr. Ruochen Wang and Prof. Cho-Jui Hsieh. My research focuses on Trustworthy AI, especially the controllability and interpretability of foundation models (LLMs/VLMs). Before the era of language models, I worked on object detection and visual interpretability. Beyond research, I am also interested in entrepreneurial opportunities.
I obtained my B.S. degree in Electrical and Computer Engineering from the Technical University of Munich, where I wrote my thesis on the interpretability of Transformer-based object detection at fortiss under the supervision of Dr. Shen and Dr. Qiu.
Research Interests
- Language Agents: Multimodal Post-Training, Controllability and Interpretability of Foundation Models (LLMs/VLMs)
News
- [Feb. 2025] We release our report witnessing the "aha moment" on 2B models!
- [Jan. 2025] Our paper MOSSBench, on the oversensitivity of VLMs, has been accepted to ICLR 2025!
- [Nov. 2024] I am actively looking for a PhD student position starting in Fall 2025!
- [Oct. 2024] Our paper DrAttack, on attacks against LLMs, has been accepted to EMNLP 2024.
- [Jul. 2024] Our paper on the oversensitivity of multimodal LLMs is now available as a preprint on arXiv.
- [Feb. 2024] Our paper on attacks against LLMs is now available as a preprint on arXiv.
Publications
- The "aha moment" on 2B models
  Hengguang Zhou*, Xirui Li*, Ruochen Wang, Minhao Cheng, Tianyi Zhou, Cho-Jui Hsieh
  Tech Report, 2025.
- MOSSBench
  Xirui Li*, Hengguang Zhou*, Ruochen Wang, Tianyi Zhou, Minhao Cheng, Cho-Jui Hsieh
  ICLR, 2025.
- DrAttack
  Xirui Li, Ruochen Wang, Minhao Cheng, Tianyi Zhou, Cho-Jui Hsieh
  EMNLP, 2024.