Dr. Yiling Lou (娄一翎)

                    Incoming Assistant Professor
                    Siebel School of Computing and Data Science
                    Grainger College of Engineering
                    University of Illinois Urbana-Champaign
                    4104 Siebel Center
                    201 N. Goodwin Ave.
                    Urbana, IL 61801, USA

I will be joining the Siebel School of Computing and Data Science at UIUC as an Assistant Professor in Spring 2026. Before joining UIUC, I was a Postdoctoral Fellow at Purdue University working with Prof. Lin Tan, and a Pre-tenure Associate Professor at Fudan University. I received my Ph.D. and B.S. degrees in Computer Science from Peking University, under the supervision of Prof. Lu Zhang and Prof. Dan Hao. My research focuses on Software Engineering and its synergy with Artificial Intelligence and Programming Languages, including LLM4Code, Agent&SE, Vulnerability Detection, Software Testing, and Debugging.


Prospective Students: I am looking for Fall'26 PhD/MS students to join my research group at UIUC.
I am especially interested in working with self-motivated students who have strong backgrounds in Code Agents, Code LLMs, AI4SE, SE4AI, or AI&Security. If you are interested in working with me, please send me an email (yilingl@illinois.edu) with your CV.

News

  • [Pinned] I am co-chairing the third International Workshop on Large Language Models for Code (LLM4Code 2026) co-located with ICSE 2026 in Rio de Janeiro, Brazil. Looking forward to your submissions by 20 October!
  • [Pinned] I am co-chairing AIware 2025 (co-located with ASE 2025 in Seoul, South Korea). Looking forward to your submissions!
  • I will join the Siebel School of Computing and Data Science at UIUC as an Assistant Professor in Spring 2026!
  • Our work "Can Agents Fix Agent Issues?" is accepted to NeurIPS 2025. Agents are emerging as a new software paradigm and automatically maintaining agent systems is challenging. Check out AgentIssue-Bench, the first reproducible benchmark of agent issue resolution tasks. We find that existing Software Engineering agents perform poorly in resolving agent issues.
  • Check out our survey on LLM-based agents for software engineering! [Preprint] [GitHub]
  • Our work on agents for vulnerability detection is accepted to ACL 2025.
  • Our work on human-in-the-loop patch correctness checking is accepted to OOPSLA 2025.
  • Our work on measuring fault diagnosis capabilities of tests is accepted to TOSEM.
  • Our work (INFERROI) on enhancing traditional static analysis with LLMs for resource leak detection is accepted to ICSE 2025. INFERROI extends the knowledge boundary of traditional static analysis with API specifications inferred by LLMs and has detected previously unknown resource leaks in open-source projects.
  • We are organizing the second International Workshop on Large Language Models for Code (LLM4Code 2025) co-located with ICSE 2025. Looking forward to your high-quality submissions by Nov 18!
  • I attended the Dagstuhl Seminar on Automated Programming and Program Repair and gave a talk on LLM-based agents for software engineering.
  • Our work on patch validation efficiency is accepted to TOSEM.
  • ChatTester (LLM-based Unit Test Generation) is accepted to FSE 2024. Congratulations to Zhiqiang!
  • I attended Shonan Meeting No. 176 on "Foundation Models and Software Engineering: Challenges and Opportunities".
  • Check out ClassEval Leaderboard for evaluating LLMs on class-level code generation.
  • ClassEval is accepted to ICSE 2024. We're currently working on evaluating more recent code models on ClassEval. [Benchmark GitHub] [Hugging Face]
  • We are launching the first International Workshop on Large Language Models for Code (LLM4Code 2024) co-located with ICSE 2024. Looking forward to your submissions!
  • Our work won ACM SIGSOFT Distinguished Paper Award at ESEC/FSE 2023.
  • Check out our manually-crafted benchmark ClassEval for evaluating LLMs on class-level code generation. [Benchmark GitHub] [Hugging Face] [Preprint]
  • Check out our preprint for evaluating instruction-tuned LLMs on code comprehension tasks.
  • Two papers accepted to ESEC/FSE 2023 after major revision.
  • Three papers accepted to ASE 2023.
  • Check out our preprint on LLM-based unit test generation.
  • Two papers accepted to ESEC/FSE 2023.
  • Two papers accepted to ICSE 2023.