Bingbing Wen

PhD Student

University of Washington

👋 About Me

I am a PhD student at the University of Washington, where I am fortunate to be advised by Prof. Bill Howe and Prof. Lucy Lu Wang. I also work closely with Prof. Yulia Tsvetkov. I am a member of the UW RAISE Center and The AI Clinic.

My research focuses on three key areas: Data Efficiency through Curation and Optimization, optimizing data mixtures and designing fine-grained preference signals that go beyond correctness; Model Efficiency via Modular and Adaptive Architectures, exploring mixture-of-LoRA experts, routing mechanisms, and reinforcement learning approaches that enhance collaboration among multiple specialized models; and Evaluation for Efficient Reliability, designing abstention and confidence-based evaluation frameworks that help models decide when not to expend compute on unnecessary outputs.

During my PhD, I had the opportunity to conduct research internships at Apple, Microsoft Cloud AI, and OPPO Research, where I explored challenges in building large-scale AI systems. I also collaborate closely with the Allen Institute for AI.

I actively mentor undergraduate and master's students in developing and carrying out research projects. Feel free to reach out if you are interested in my research or in applying for a PhD.

Education

PhD in Information Science (Natural Language Processing)

University of Washington

MS in Computational Science & Engineering (Artificial Intelligence)

University of Hong Kong

BS in Control Science & Engineering (Robotics)

Zhejiang University

Research Interests

Developing data- and compute-efficient methods that enable foundation models to learn, adapt, and allocate resources optimally across tasks and data sources, from training through inference.

Featured Publications

MARVEL: Modular Abstention for Reliable and Versatile Expert LLMs

A modular abstention framework for reliable expert LLMs that enables selective abstention from uncertain questions.

Bingbing Wen

AutoScale: Automatic Prediction of Compute-optimal Data Composition for Training LLMs

Automatic prediction of compute-optimal data composition for efficient LLM training.

Feiyang Kang

Do Language Models Mirror Human Confidence? Exploring Psychological Insights to Address Overconfidence in LLMs

Exploring psychological insights to address overconfidence in LLMs by comparing with human confidence patterns.

Chenjun Xu*

Know Your Limits: A Survey of Abstention in Large Language Models

A comprehensive survey of abstention mechanisms in large language models, covering theory, implementation, and evaluation.

Bingbing Wen

Recent Publications
(2025). MARVEL: Modular Abstention for Reliable and Versatile Expert LLMs. ICML 2025.
(2025). AutoScale: Automatic Prediction of Compute-optimal Data Composition for Training LLMs. COLM 2025.
(2025). Do Language Models Mirror Human Confidence? Exploring Psychological Insights to Address Overconfidence in LLMs. ACL 2025.
(2025). Know Your Limits: A Survey of Abstention in Large Language Models. TACL 2025.
(2024). Characterizing LLM Abstention Behavior in Science QA with Context Perturbations. EMNLP 2024.
📰 News

9/2025 Our paper on spurious correlations in MLLMs has been accepted by NeurIPS 2025!

7/2025 I presented our survey on abstention in LLMs (oral) and our work on confidence calibration (poster) at ACL 2025!

7/2025 Our paper on modular abstention has been accepted by ICML 2025!

6/2025 I will start my summer research internship at Apple!

5/2025 Our paper on optimal data mixing for pretraining has been accepted by COLM 2025!

5/2025 Our paper on confidence calibration has been accepted by ACL 2025!

2/2025 Our survey on abstention in LLMs has been accepted by TACL 2025!