Welcome!
Hi, I’m Qianli — a Research Scientist at Alibaba Group’s Tongyi Lab. I split my time between research on large language models (LLMs) and building open-source AI infrastructure.
I received my Ph.D. from the School of Computing at the National University of Singapore, advised by Prof. Kenji Kawaguchi. Before that, I earned a B.S. in Computer Science from Peking University, where I worked with Prof. Zhanxing Zhu. I’ve also spent time at Baichuan Inc., Sea AI Lab, and as a visiting researcher at Georgia Tech.
Outside of work, I enjoy 🐱🏀🤿🎿🧑‍🍳🎮🃏 …
Email: shenqianlilili[at]gmail.com
[CV] [Google Scholar] [GitHub] [WeChat]
Selected Publications
Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization [arxiv][code]
The Stronger the Diffusion Model, the Easier the Backdoor: Data Poisoning to Induce Copyright Breaches Without Adjusting Finetuning Pipeline [arxiv][code]
VA3: Virtually Assured Amplification Attack on Probabilistic Copyright Protection for Text-to-Image Generative Models [arxiv][code]
PICProp: Physics-Informed Confidence Propagation for Uncertainty Quantification [arxiv][code]
Deep Reinforcement Learning with Robust and Smooth Policy [arxiv]
Software