Luxi (Lucy) He


Hi! I’m Luxi He (feel free to call me Lucy). I’m a second-year CS Ph.D. student at Princeton University, where I’m fortunate to be co-advised by Prof. Danqi Chen and Prof. Peter Henderson. My current research focuses on understanding language models and improving their alignment and safety. I’m particularly interested in the impact of data throughout the language model life cycle, as well as the safe deployment of models. Recently, I’ve also been exploring multimodal topics. Motivated by real-world impact and the hope of bridging the gap between technology and policy, I aim to bring insights from both the technical and policy/law sides into my research.

Before Princeton, I graduated from Harvard in 2023 with Highest Honors in Computer Science & Mathematics and a concurrent Master’s in Applied Math.

Outside of research, I’m a singer, dancer, photographer, and amateur food blogger.

Email: luxihe at princeton.edu

news

2024-07 Gave a spotlight presentation remotely at ICML 2024 GenLaw Workshop on our Fantastic Copyrighted Beasts paper.
2024-05 Gave an oral presentation at ICLR 2024 Data Problems in Foundation Models Workshop on our Benign Data Safety paper.
2023-10 Received the Social Impact Fellowship from Princeton.
2023-08 Started my Ph.D.! I’m fortunate to be supported by the Gordon Wu Fellowship.
2023-05 Graduated from Harvard with both my Bachelor’s and Master’s degrees.

selected publications

  1. What is in Your Safe Data? Identifying Benign Data that Breaks Safety
     Luxi He*, Mengzhou Xia*, and Peter Henderson
     Conference on Language Modeling (COLM); ICLR Data Problems in Foundation Models Workshop (Best Paper), 2024
  2. Fantastic Copyrighted Beasts and How (Not) to Generate Them
     Luxi He*, Yangsibo Huang*, Weijia Shi*, Tinghao Xie, Haotian Liu, and 5 more authors
     ICML GenLaw Workshop (Spotlight), 2024
  3. CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
     Zirui Wang, Mengzhou Xia, Luxi He, Howard Chen, Yitao Liu, and 8 more authors
     Preprint, 2024
  4. Aleatoric and Epistemic Discrimination: Fundamental Limits of Fairness Interventions
     Hao Wang, Luxi He, Rui Gao, and Flavio Calmon
     In Advances in Neural Information Processing Systems (Spotlight), 2023